[
  {
    "path": ".gitignore",
    "content": "# 忽略操作系统生成的文件\n.DS_Store\nThumbs.db\n\n# 忽略编译生成的文件\n*.class\n*.exe\n*.o\n*.so\n.eggs/\n*.egg-info/\n\n\n# 忽略包管理工具生成的文件\nnode_modules/\nvendor/\n\n# 忽略 Python 缓存目录\n__pycache__/\n\n# 忽略日志文件\n*.log\n\n# 忽略环境配置文件\n.env\n\n# 忽略IDE/编辑器配置文件\n.idea/\n.vscode/\n\n# test folder\ntest*/\n\n# ckpts\n\nckpts/\n"
  },
  {
    "path": "README.md",
    "content": "<h1 align='center'>DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration</h1>\n\n<div align='center'>\n    <a href='' target='_blank'>Yan Chen</a><sup>1*</sup>&emsp;\n    <a href='' target='_blank'>Hanlin Shang</a><sup>1*</sup>&emsp;\n    <a href='' target='_blank'>Ce Liu</a><sup>1</sup>&emsp;\n    <a href='' target='_blank'>Yuxuan Chen</a><sup>1</sup>&emsp;\n    <a href='' target='_blank'>Hui Li</a><sup>1</sup>&emsp;\n    <a href='' target='_blank'>Weihao Yuan</a><sup>2</sup>&emsp;\n</div>\n<div align='center'>\n    <a href='' target='_blank'>Hao Zhu</a><sup>3</sup>&emsp;\n    <a href='' target='_blank'>Zilong Dong</a><sup>2</sup>&emsp;\n    <a href='https://sites.google.com/site/zhusiyucs/home' target='_blank'>Siyu Zhu</a><sup>1✉️</sup>&emsp;\n</div>\n\n<div align='center'>\n    <sup>1</sup>Fudan University&emsp; \n    <sup>2</sup>Alibaba Group&emsp;\n    <sup>3</sup>Nanjing University&emsp;\n</div>\n\n<div align='Center'>\n<i><strong><a href='https://iccv.thecvf.com/Conferences/2025' target='_blank'>ICCV 2025 Highlight</a></strong></i>\n</div>\n\n<br>\n<div align='center'>\n    <a href='https://github.com/fudan-generative-vision/DicFace'><img src='https://img.shields.io/github/stars/fudan-generative-vision/DicFace'></a>\n    <!-- <a href='https://github.com/fudan-generative-vision/DicFace/#/'><img src='https://img.shields.io/badge/Project-HomePage-Green'></a> -->\n    <a href='https://arxiv.org/abs/2506.13355'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>\n    <!-- <a href=''><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a> -->\n    <!-- <a href='assets/wechat.jpeg'><img src='https://badges.aleen42.com/src/wechat.svg'></a> -->\n</div>\n\n<br>\n\n<table align=\"center\" border=\"0\" style=\"width: 100%; margin-top: 80px;\">\n  <tr>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/274ecc2b-3d89-4d31-bb0a-a5f3611fae8a\" \n             muted autoplay loop style=\"display: block; margin: 0 auto;\"></video>\n    </td>\n  </tr>\n</table>\n\n## 🖼️ Showcase\n\n### Blind Face Restoration\n<table align=\"center\" width=\"100%\" border=\"0\" cellpadding=\"10\">\n  <tr>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/eb61d793-b860-476e-bae5-f6fcade1e11f\" muted autoplay loop width=\"480\"></video>\n    </td>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/eb9be43a-8fb9-4fbd-ac92-a686ab0c188b\" muted autoplay loop width=\"480\"></video>\n    </td>\n  </tr>\n</table>\n\n\n### Face Inpainting\n<table align=\"center\" width=\"100%\" border=\"0\" cellpadding=\"10\">\n  <tr>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/1cd12d53-2ead-4cf3-b56c-1a6316484e93\" muted autoplay loop width=\"480\"></video>\n    </td>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/a16b7021-a401-41cb-9a39-37a788f6a001\" muted autoplay loop width=\"480\"></video>\n    </td>\n  </tr>\n</table>\n\n### Face Colorization\n<table align=\"center\" width=\"100%\" border=\"0\" cellpadding=\"10\">\n  <tr>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/cb038911-8b26-472d-8fb9-a6cdda127084\" muted autoplay loop width=\"480\"></video>\n    </td>\n    <td style=\"text-align: center;\">\n      
<video src=\"https://github.com/user-attachments/assets/ffc85ef7-4987-42af-b892-79544ea29f87\" muted autoplay loop width=\"480\"></video>\n    </td>\n  </tr>\n</table>\n\n### 🐾 Wild Data Examples\n\n<div align=\"center\">\n\n<video src=\"https://github.com/user-attachments/assets/90fe03dd-b0cc-446b-bb6a-169e98c875df\" muted autoplay loop width=\"3240\"></video>\n<video src=\"https://github.com/user-attachments/assets/c165fca5-652b-4586-a928-2ba5bda6ae03\" muted autoplay loop width=\"3240\"></video>\n<br>\n<video src=\"https://github.com/user-attachments/assets/f911165d-2259-4378-828c-a4468e5fa4dc\" muted autoplay loop width=\"3240\"></video>\n<br>\n<table align=\"center\" width=\"100%\" border=\"0\" cellpadding=\"10\">\n  <tr>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/34eea191-f972-4b6f-9529-cc39b9831875\" muted autoplay loop width=\"480\"></video>\n    </td>\n    <td style=\"text-align: center;\">\n      <video src=\"https://github.com/user-attachments/assets/b7f0466b-321d-42b5-ae70-65b4a7347698\" muted autoplay loop width=\"480\"></video>\n    </td>\n  </tr>\n</table>\n</div>\n\n## 📰 News\n\n- **`2025/07/25`**: 🎉🎉🎉 Our paper has been accepted to [ICCV 2025](https://iccv.thecvf.com/Conferences/2025)and selected as a highlight.\n- **`2025/06/26`**: 🎉🎉🎉 Our paper has been accepted to [ICCV 2025](https://iccv.thecvf.com/Conferences/2025).\n- **`2025/06/25`**: Release our test data on huggingface [repo](https://huggingface.co/datasets/fudan-generative-ai/DicFace-test_dataset).\n- **`2025/06/23`**: Release our pretrained model on huggingface [repo](https://huggingface.co/fudan-generative-ai/DicFace).\n- **`2025/06/17`**: Paper submitted on Arixiv. [paper](https://arxiv.org/abs/2506.13355)\n- **`2025/06/16`**: 🎉🎉🎉 Release inference scripts\n\n\n\n## 📅️ Roadmap\n\n| Status | Milestone                                                                                              |    ETA     |\n| :----: | :----------------------------------------------------------------------------------------------------- | :--------: |\n|   ✅   | **[Inference Code release](https://github.com/fudan-generative-vision/DicFace)**                       |  2025-6-16 |\n|   ✅   | **[Model Weight release， baidu-link](https://pan.baidu.com/s/1VTNbdtZDvgY0163a1T8ITw?pwd=dicf)**       |2025-6-16   |\n|   ✅   | **[Paper submitted on Arixiv](https://arxiv.org/abs/2506.13355)**                                       |  2025-6-17 |\n|   ✅   | **[Test data release](https://huggingface.co/datasets/fudan-generative-ai/DicFace-test_dataset)**       |  2025-6-25 |\n|   ✅   | **[Training Code release]()**                                                                           |  2025-6-26 |\n\n\n\n## ⚙️ Installation\n\n- System requirement: PyTorch version >=2.4.1, python == 3.10\n- Tested on GPUs: A800, python version == 3.10, PyTorch version == 2.4.1, cuda version == 12.1\n\nDownload the codes:\n\n```bash\n  git clone https://github.com/fudan-generative-vision/DicFace\n  cd DicFace\n```\n\nCreate conda environment:\n\n```bash\n  conda create -n DicFace python=3.10\n  conda activate DicFace\n```\n\nInstall PyTorch\n\n```bash\n  conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia\n```\n\nInstall packages with `pip`\n\n```bash\n  pip install -r requirements.txt\n  python basicsr/setup.py develop\n  conda install -c conda-forge dlib\n```\n\n### 📥 Download Pretrained Models\n\nThe pre-trained weights have been uploaded to 
Baidu Netdisk. Please download them from the [link](https://pan.baidu.com/s/1VTNbdtZDvgY0163a1T8ITw?pwd=dicf)\n\nNow you can easily get all pretrained models required by inference from our HuggingFace [repo](https://huggingface.co/fudan-generative-ai/DicFace_model).\n\n**File Structure of Pretrained Models**\nThe downloaded .ckpts directory contains the following pre-trained models:\n\n```\n.ckpts\n|-- CodeFormer                  # CodeFormer-related models\n|   |-- bfr_100k.pth            # Blind Face Restoration model \n|   |-- color_100k.pth          # Color Restoration model \n|   |-- codeformer.pth          # codeformer model\n|   |-- vqgan_discriminator.pth # vqgan_discriminator model\n|   `-- inpainting_100k.pth     # Image Inpainting model\n|-- dlib                        # dlib face-related models\n|   |-- mmod_human_face_detector.dat  # Human face detector\n|   `-- shape_predictor_5_face_landmarks.dat  # 5-point face landmark predictor\n|-- facelib                     # Face processing library models\n|   |-- detection_Resnet50_Final.pth  # ResNet50 face detector \n|   |-- detection_mobilenet0.25_Final.pth  # MobileNet0.25 face detector \n|   |-- parsing_parsenet.pth    # Face parsing model\n|   |-- yolov5l-face.pth        # YOLOv5l face detection model\n|   `-- yolov5n-face.pth        # YOLOv5n face detection model\n|-- realesrgan                  # Real-ESRGAN super-resolution model\n|   `-- RealESRGAN_x2plus.pth   # 2x super-resolution enhancement model\n`-- vgg                         # VGG feature extraction model\n    `-- vgg.pth                 # VGG network pre-trained weights\n```\n\n### 🎮 Run Inference\n\n#### for blind face restoration\n\n```bash\npython scripts/inference.py \\\n\t\t-i /path/to/video \\\n\t\t-o /path/to/output_folder \\\n\t\t--max_length 10 \\\n\t\t--save_video_fps 24 \\\n\t\t--ckpt_path /bfr/bfr_weight.pth \\\n\t\t--bg_upsampler realesrgan \\\n\t\t--save_video \n\n# or your videos has been aligned\npython scripts/inference.py \\\n\t\t-i /path/to/video \\\n\t\t-o /path/to/output_folder \\\n\t\t--max_length 10 \\\n\t\t--save_video_fps 24 \\\n\t\t--ckpt_path /bfr/bfr_weight.pth \\\n\t\t--save_video \\\n\t\t--has_aligned\n```\n\n#### for colorization & inpainting task\n\n\n**The current colorization & inpainting tasks only supports input of aligned faces. 
If a non-aligned face is input, it may lead to unsatisfactory final results.**\n\n``` bash \n# for colorization task\npython scripts/inference_color_and_inpainting.py \\\n\t\t-i /path/to/video_warped \\\n\t\t-o /path/to/output_folder \\\n\t\t--max_length 10 \\\n\t\t--save_video_fps 24 \\\n\t\t--ckpt_path /colorization/colorization_weight.pth \\\n\t\t--bg_upsampler realesrgan \\\n\t\t--save_video \\\n\t\t--has_aligned\n\n# for inpainting task\npython scripts/inference_color_and_inpainting.py \\\n\t\t-i /path/to/video_warped \\\n\t\t-o /path/to/output_folder \\\n\t\t--max_length 10 \\\n\t\t--save_video_fps 24 \\\n\t\t--ckpt_path /inpainting/inpainting_weight.pth \\\n\t\t--bg_upsampler realesrgan \\\n\t\t--save_video \\\n\t\t--has_aligned\n```\n\n## Test Data  \n\nOur test data can be accessed via the following links:  \n- Baidu Netdisk: [https://pan.baidu.com/s/1zMp3fnf6LvlRT9CAoL1OUw](https://pan.baidu.com/s/1zMp3fnf6LvlRT9CAoL1OUw) (Password: `drhh`)  \n- Hugging Face Dataset: [https://huggingface.co/datasets/fudan-generative-ai/DicFace-test_dataset](https://huggingface.co/datasets/fudan-generative-ai/DicFace-test_dataset)  \n\n\n### Directory Structure  \nThe downloaded `test_data_set` directory contains the following folders:  \n```\n./test_data\n├── LR_Blind                  # Blind face restoration test image folders\n│   ├── Clip+_HebIzK_LP4+P2+C1+F16589-16715\n│   ├── ...                   # Additional test image folders\n│   └── Clip+y5OFsRIRkwc+P0+C0+F9797-9938\n│\n├── TEST_DATA                 # Ground-truth (GT) image folders\n│   ├── Clip+_HebIzK_LP4+P2+C1+F16589-16715\n│   ├── ...\n│   └── Clip+y5OFsRIRkwc+P0+C0+F9797-9938\n│\n├── vfhq_test_color_input     # Colorization test image folders\n│   ├── Clip+_HebIzK_LP4+P2+C1+F16589-16715\n│   ├── ...\n│   └── Clip+y5OFsRIRkwc+P0+C0+F9797-9938\n│\n├── vfhq_test_inpaint_input_512  # Inpainting test image folders (512x512)\n│   ├── Clip+_HebIzK_LP4+P2+C1+F16589-16715\n│   ├── ...\n│   └── Clip+y5OFsRIRkwc+P0+C0+F9797-9938\n│\n└── vfhq_test_landmarks       # Facial landmark files for warping operations\n```\n\n\n### Usage  \nTo process the test data, use the `warp_images.py` script:  \n```shell\npython scripts/warp_images.py \\\n    -i input_test_data_folder \\\n    -o vfhq_test_inpaint_input_512_warped \\\n    -l /path/to/test_data_folder/vfhq_test_landmarks\n```  \n\nAfter warping the test data, you can use the inference scripts to generate results for the test dataset.\n\n\n### Training\n\n#### Training Data\nWe utilize the VFHQ dataset for both training and testing. The test data is specifically sourced from VFHQ-Test. For more details, please refer to the official project page: [VFHQ](https://liangbinxie.github.io/projects/vfhq/).\n\n### Prerequisites for Training\nBefore initiating the training process, ensure that you have completed the following steps:\n\n1. **Image Size Requirement**:\n   - All input images must be resized to 512 x 512 pixels.\n\n2. **Download Necessary Files**:\n   - Obtain the metadata files and facial landmark information from our Hugging Face repository. [TBD(not ready)]\n\n3. 
**Configure YAML Files**:\n   - Edit the configuration file located at `options/xxx.yaml` to specify your training parameters and dataset paths.\n\n### Initiate Training\nOnce the prerequisites are met, start the training process by executing the following command:\n```bash\nbash train.sh\n```\n\nThis script will initiate the training procedure using the settings defined in your YAML configuration file.\n\n\n## 🤗 Acknowledgements\n\nThis project is open sourced under NTU S-Lab License 1.0. Redistribution and use should follow this license. The code framework is mainly modified from [CodeFormer](https://github.com/sczhou/CodeFormer). Please refer to the original repo for more usage and documents.\n\n## 📝 Citation\n\nIf you find our work useful for your research, please consider citing the paper:\n\n```\n@misc{chen2025dicfacedirichletconstrainedvariationalcodebook,\n      title={DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration}, \n      author={Yan Chen and Hanlin Shang and Ce Liu and Yuxuan Chen and Hui Li and Weihao Yuan and Hao Zhu and Zilong Dong and Siyu Zhu},\n      year={2025},\n      eprint={2506.13355},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV},\n      url={https://arxiv.org/abs/2506.13355}, \n}\n\n```\n\n"
  },
  {
    "path": "basicsr/VERSION",
    "content": "1.3.2\n"
  },
  {
    "path": "basicsr/__init__.py",
    "content": "# https://github.com/xinntao/BasicSR\n# flake8: noqa\nfrom .archs import *\nfrom .data import *\nfrom .losses import *\nfrom .metrics import *\nfrom .models import *\nfrom .ops import *\nfrom .train import *\nfrom .utils import *\nfrom .version import __gitsha__, __version__\n"
  },
  {
    "path": "basicsr/archs/__init__.py",
    "content": "import importlib\nfrom copy import deepcopy\nfrom os import path as osp\n\nfrom basicsr.utils import get_root_logger, scandir\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n__all__ = ['build_network']\n\n# automatically scan and import arch modules for registry\n# scan all the files under the 'archs' folder and collect files ending with\n# '_arch.py'\narch_folder = osp.dirname(osp.abspath(__file__))\narch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]\n# import all the arch modules\n_arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames]\n\n\ndef build_network(opt):\n    opt = deepcopy(opt)\n    network_type = opt.pop('type')\n    net = ARCH_REGISTRY.get(network_type)(**opt)\n    logger = get_root_logger()\n    logger.info(f'Network [{net.__class__.__name__}] is created.')\n    return net\n"
  },
  {
    "path": "basicsr/archs/arcface_arch.py",
    "content": "import torch.nn as nn\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\ndef conv3x3(inplanes, outplanes, stride=1):\n    \"\"\"A simple wrapper for 3x3 convolution with padding.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        outplanes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n    \"\"\"\n    return nn.Conv2d(inplanes, outplanes, kernel_size=3, stride=stride, padding=1, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    \"\"\"Basic residual block used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n        downsample (nn.Module): The downsample module. Default: None.\n    \"\"\"\n    expansion = 1  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass IRBlock(nn.Module):\n    \"\"\"Improved residual block (IR Block) used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n        downsample (nn.Module): The downsample module. Default: None.\n        use_se (bool): Whether use the SEBlock (squeeze and excitation block). Default: True.\n    \"\"\"\n    expansion = 1  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):\n        super(IRBlock, self).__init__()\n        self.bn0 = nn.BatchNorm2d(inplanes)\n        self.conv1 = conv3x3(inplanes, inplanes)\n        self.bn1 = nn.BatchNorm2d(inplanes)\n        self.prelu = nn.PReLU()\n        self.conv2 = conv3x3(inplanes, planes, stride)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.downsample = downsample\n        self.stride = stride\n        self.use_se = use_se\n        if self.use_se:\n            self.se = SEBlock(planes)\n\n    def forward(self, x):\n        residual = x\n        out = self.bn0(x)\n        out = self.conv1(out)\n        out = self.bn1(out)\n        out = self.prelu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        if self.use_se:\n            out = self.se(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.prelu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    \"\"\"Bottleneck block used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n        downsample (nn.Module): The downsample module. 
Default: None.\n    \"\"\"\n    expansion = 4  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass SEBlock(nn.Module):\n    \"\"\"The squeeze-and-excitation block (SEBlock) used in the IRBlock.\n\n    Args:\n        channel (int): Channel number of inputs.\n        reduction (int): Channel reduction ration. Default: 16.\n    \"\"\"\n\n    def __init__(self, channel, reduction=16):\n        super(SEBlock, self).__init__()\n        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # pool to 1x1 without spatial information\n        self.fc = nn.Sequential(\n            nn.Linear(channel, channel // reduction), nn.PReLU(), nn.Linear(channel // reduction, channel),\n            nn.Sigmoid())\n\n    def forward(self, x):\n        b, c, _, _ = x.size()\n        y = self.avg_pool(x).view(b, c)\n        y = self.fc(y).view(b, c, 1, 1)\n        return x * y\n\n\n@ARCH_REGISTRY.register()\nclass ResNetArcFace(nn.Module):\n    \"\"\"ArcFace with ResNet architectures.\n\n    Ref: ArcFace: Additive Angular Margin Loss for Deep Face Recognition.\n\n    Args:\n        block (str): Block used in the ArcFace architecture.\n        layers (tuple(int)): Block numbers in each layer.\n        use_se (bool): Whether use the SEBlock (squeeze and excitation block). 
Default: True.\n    \"\"\"\n\n    def __init__(self, block, layers, use_se=True):\n        if block == 'IRBlock':\n            block = IRBlock\n        self.inplanes = 64\n        self.use_se = use_se\n        super(ResNetArcFace, self).__init__()\n\n        self.conv1 = nn.Conv2d(1, 64, kernel_size=3, padding=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.prelu = nn.PReLU()\n        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n        self.bn4 = nn.BatchNorm2d(512)\n        self.dropout = nn.Dropout()\n        self.fc5 = nn.Linear(512 * 8 * 8, 512)\n        self.bn5 = nn.BatchNorm1d(512)\n\n        # initialization\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.xavier_normal_(m.weight)\n            elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n            elif isinstance(m, nn.Linear):\n                nn.init.xavier_normal_(m.weight)\n                nn.init.constant_(m.bias, 0)\n\n    def _make_layer(self, block, planes, num_blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))\n        self.inplanes = planes\n        for _ in range(1, num_blocks):\n            layers.append(block(self.inplanes, planes, use_se=self.use_se))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.prelu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n        x = self.bn4(x)\n        x = self.dropout(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc5(x)\n        x = self.bn5(x)\n\n        return x"
  },
  {
    "path": "basicsr/archs/arch_util.py",
    "content": "import collections.abc\nimport math\nimport torch\nimport torchvision\nimport warnings\nfrom distutils.version import LooseVersion\nfrom itertools import repeat\nfrom torch import nn as nn\nfrom torch.nn import functional as F\nfrom torch.nn import init as init\nfrom torch.nn.modules.batchnorm import _BatchNorm\n\nfrom basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv\nfrom basicsr.utils import get_root_logger\n\n\n@torch.no_grad()\ndef default_init_weights(module_list, scale=1, bias_fill=0, **kwargs):\n    \"\"\"Initialize network weights.\n\n    Args:\n        module_list (list[nn.Module] | nn.Module): Modules to be initialized.\n        scale (float): Scale initialized weights, especially for residual\n            blocks. Default: 1.\n        bias_fill (float): The value to fill bias. Default: 0\n        kwargs (dict): Other arguments for initialization function.\n    \"\"\"\n    if not isinstance(module_list, list):\n        module_list = [module_list]\n    for module in module_list:\n        for m in module.modules():\n            if isinstance(m, nn.Conv2d):\n                init.kaiming_normal_(m.weight, **kwargs)\n                m.weight.data *= scale\n                if m.bias is not None:\n                    m.bias.data.fill_(bias_fill)\n            elif isinstance(m, nn.Linear):\n                init.kaiming_normal_(m.weight, **kwargs)\n                m.weight.data *= scale\n                if m.bias is not None:\n                    m.bias.data.fill_(bias_fill)\n            elif isinstance(m, _BatchNorm):\n                init.constant_(m.weight, 1)\n                if m.bias is not None:\n                    m.bias.data.fill_(bias_fill)\n\n\ndef make_layer(basic_block, num_basic_block, **kwarg):\n    \"\"\"Make layers by stacking the same blocks.\n\n    Args:\n        basic_block (nn.module): nn.module class for basic block.\n        num_basic_block (int): number of blocks.\n\n    Returns:\n        nn.Sequential: Stacked blocks in nn.Sequential.\n    \"\"\"\n    layers = []\n    for _ in range(num_basic_block):\n        layers.append(basic_block(**kwarg))\n    return nn.Sequential(*layers)\n\n\nclass ResidualBlockNoBN(nn.Module):\n    \"\"\"Residual block without BN.\n\n    It has a style of:\n        ---Conv-ReLU-Conv-+-\n         |________________|\n\n    Args:\n        num_feat (int): Channel number of intermediate features.\n            Default: 64.\n        res_scale (float): Residual scale. Default: 1.\n        pytorch_init (bool): If set to True, use pytorch default init,\n            otherwise, use default_init_weights. Default: False.\n    \"\"\"\n\n    def __init__(self, num_feat=64, res_scale=1, pytorch_init=False):\n        super(ResidualBlockNoBN, self).__init__()\n        self.res_scale = res_scale\n        self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)\n        self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)\n        self.relu = nn.ReLU(inplace=True)\n\n        if not pytorch_init:\n            default_init_weights([self.conv1, self.conv2], 0.1)\n\n    def forward(self, x):\n        identity = x\n        out = self.conv2(self.relu(self.conv1(x)))\n        return identity + out * self.res_scale\n\n\nclass Upsample(nn.Sequential):\n    \"\"\"Upsample module.\n\n    Args:\n        scale (int): Scale factor. 
Supported scales: 2^n and 3.\n        num_feat (int): Channel number of intermediate features.\n    \"\"\"\n\n    def __init__(self, scale, num_feat):\n        m = []\n        if (scale & (scale - 1)) == 0:  # scale = 2^n\n            for _ in range(int(math.log(scale, 2))):\n                m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))\n                m.append(nn.PixelShuffle(2))\n        elif scale == 3:\n            m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))\n            m.append(nn.PixelShuffle(3))\n        else:\n            raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.')\n        super(Upsample, self).__init__(*m)\n\n\ndef flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True):\n    \"\"\"Warp an image or feature map with optical flow.\n\n    Args:\n        x (Tensor): Tensor with size (n, c, h, w).\n        flow (Tensor): Tensor with size (n, h, w, 2), normal value.\n        interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'.\n        padding_mode (str): 'zeros' or 'border' or 'reflection'.\n            Default: 'zeros'.\n        align_corners (bool): Before pytorch 1.3, the default value is\n            align_corners=True. After pytorch 1.3, the default value is\n            align_corners=False. Here, we use the True as default.\n\n    Returns:\n        Tensor: Warped image or feature map.\n    \"\"\"\n    assert x.size()[-2:] == flow.size()[1:3]\n    _, _, h, w = x.size()\n    # create mesh grid\n    grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x))\n    grid = torch.stack((grid_x, grid_y), 2).float()  # W(x), H(y), 2\n    grid.requires_grad = False\n\n    vgrid = grid + flow\n    # scale grid to [-1,1]\n    vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0\n    vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0\n    vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3)\n    output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners)\n\n    # TODO, what if align_corners=False\n    return output\n\n\ndef resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False):\n    \"\"\"Resize a flow according to ratio or shape.\n\n    Args:\n        flow (Tensor): Precomputed flow. shape [N, 2, H, W].\n        size_type (str): 'ratio' or 'shape'.\n        sizes (list[int | float]): the ratio for resizing or the final output\n            shape.\n            1) The order of ratio should be [ratio_h, ratio_w]. For\n            downsampling, the ratio should be smaller than 1.0 (i.e., ratio\n            < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e.,\n            ratio > 1.0).\n            2) The order of output_size should be [out_h, out_w].\n        interp_mode (str): The mode of interpolation for resizing.\n            Default: 'bilinear'.\n        align_corners (bool): Whether align corners. 
Default: False.\n\n    Returns:\n        Tensor: Resized flow.\n    \"\"\"\n    _, _, flow_h, flow_w = flow.size()\n    if size_type == 'ratio':\n        output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1])\n    elif size_type == 'shape':\n        output_h, output_w = sizes[0], sizes[1]\n    else:\n        raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.')\n\n    input_flow = flow.clone()\n    ratio_h = output_h / flow_h\n    ratio_w = output_w / flow_w\n    input_flow[:, 0, :, :] *= ratio_w\n    input_flow[:, 1, :, :] *= ratio_h\n    resized_flow = F.interpolate(\n        input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners)\n    return resized_flow\n\n\n# TODO: may write a cpp file\ndef pixel_unshuffle(x, scale):\n    \"\"\" Pixel unshuffle.\n\n    Args:\n        x (Tensor): Input feature with shape (b, c, hh, hw).\n        scale (int): Downsample ratio.\n\n    Returns:\n        Tensor: the pixel unshuffled feature.\n    \"\"\"\n    b, c, hh, hw = x.size()\n    out_channel = c * (scale**2)\n    assert hh % scale == 0 and hw % scale == 0\n    h = hh // scale\n    w = hw // scale\n    x_view = x.view(b, c, h, scale, w, scale)\n    return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)\n\n\nclass DCNv2Pack(ModulatedDeformConvPack):\n    \"\"\"Modulated deformable conv for deformable alignment.\n\n    Different from the official DCNv2Pack, which generates offsets and masks\n    from the preceding features, this DCNv2Pack takes another different\n    features to generate offsets and masks.\n\n    Ref:\n        Delving Deep into Deformable Alignment in Video Super-Resolution.\n    \"\"\"\n\n    def forward(self, x, feat):\n        out = self.conv_offset(feat)\n        o1, o2, mask = torch.chunk(out, 3, dim=1)\n        offset = torch.cat((o1, o2), dim=1)\n        mask = torch.sigmoid(mask)\n\n        offset_absmean = torch.mean(torch.abs(offset))\n        if offset_absmean > 50:\n            logger = get_root_logger()\n            logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.')\n\n        if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'):\n            return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding,\n                                                 self.dilation, mask)\n        else:\n            return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding,\n                                         self.dilation, self.groups, self.deformable_groups)\n\n\ndef _no_grad_trunc_normal_(tensor, mean, std, a, b):\n    # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py\n    # Cut & paste from PyTorch official master until it's in a few official releases - RW\n    # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n    def norm_cdf(x):\n        # Computes standard normal cumulative distribution function\n        return (1. + math.erf(x / math.sqrt(2.))) / 2.\n\n    if (mean < a - 2 * std) or (mean > b + 2 * std):\n        warnings.warn(\n            'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. 
'\n            'The distribution of values may be incorrect.',\n            stacklevel=2)\n\n    with torch.no_grad():\n        # Values are generated by using a truncated uniform distribution and\n        # then using the inverse CDF for the normal distribution.\n        # Get upper and lower cdf values\n        low = norm_cdf((a - mean) / std)\n        up = norm_cdf((b - mean) / std)\n\n        # Uniformly fill tensor with values from [low, up], then translate to\n        # [2l-1, 2u-1].\n        tensor.uniform_(2 * low - 1, 2 * up - 1)\n\n        # Use inverse cdf transform for normal distribution to get truncated\n        # standard normal\n        tensor.erfinv_()\n\n        # Transform to proper mean, std\n        tensor.mul_(std * math.sqrt(2.))\n        tensor.add_(mean)\n\n        # Clamp to ensure it's in the proper range\n        tensor.clamp_(min=a, max=b)\n        return tensor\n\n\ndef trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):\n    r\"\"\"Fills the input Tensor with values drawn from a truncated\n    normal distribution.\n\n    From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py\n\n    The values are effectively drawn from the\n    normal distribution :math:`\\mathcal{N}(\\text{mean}, \\text{std}^2)`\n    with values outside :math:`[a, b]` redrawn until they are within\n    the bounds. The method used for generating the random values works\n    best when :math:`a \\leq \\text{mean} \\leq b`.\n\n    Args:\n        tensor: an n-dimensional `torch.Tensor`\n        mean: the mean of the normal distribution\n        std: the standard deviation of the normal distribution\n        a: the minimum cutoff value\n        b: the maximum cutoff value\n\n    Examples:\n        >>> w = torch.empty(3, 5)\n        >>> nn.init.trunc_normal_(w)\n    \"\"\"\n    return _no_grad_trunc_normal_(tensor, mean, std, a, b)\n\n\n# From PyTorch\ndef _ntuple(n):\n\n    def parse(x):\n        if isinstance(x, collections.abc.Iterable):\n            return x\n        return tuple(repeat(x, n))\n\n    return parse\n\n\nto_1tuple = _ntuple(1)\nto_2tuple = _ntuple(2)\nto_3tuple = _ntuple(3)\nto_4tuple = _ntuple(4)\nto_ntuple = _ntuple"
  },
  {
    "path": "basicsr/archs/dir_dist_codeformer_multiscale_arch.py",
    "content": "import math\nimport numpy as np\nimport torch\nfrom torch import nn, Tensor\nimport torch.nn.functional as F\nfrom typing import Optional, List\n\nfrom basicsr.archs.vqgan_arch import *\nfrom basicsr.utils import get_root_logger\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\nimport torch.distributions as dist\n\nfrom einops import rearrange\n\ndef calc_mean_std(feat, eps=1e-5):\n    \"\"\"Calculate mean and std for adaptive_instance_normalization.\n\n    Args:\n        feat (Tensor): 4D tensor.\n        eps (float): A small value added to the variance to avoid\n            divide-by-zero. Default: 1e-5.\n    \"\"\"\n    size = feat.size()\n    assert len(size) == 4, 'The input feature should be 4D tensor.'\n    b, c = size[:2]\n    feat_var = feat.view(b, c, -1).var(dim=2) + eps\n    feat_std = feat_var.sqrt().view(b, c, 1, 1)\n    feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1)\n    return feat_mean, feat_std\n\n\ndef adaptive_instance_normalization(content_feat, style_feat):\n    \"\"\"Adaptive instance normalization.\n\n    Adjust the reference features to have the similar color and illuminations\n    as those in the degradate features.\n\n    Args:\n        content_feat (Tensor): The reference feature.\n        style_feat (Tensor): The degradate features.\n    \"\"\"\n    size = content_feat.size()\n    style_mean, style_std = calc_mean_std(style_feat)\n    content_mean, content_std = calc_mean_std(content_feat)\n    normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size)\n    return normalized_feat * style_std.expand(size) + style_mean.expand(size)\n\n\nclass PositionEmbeddingSine(nn.Module):\n    \"\"\"\n    This is a more standard version of the position embedding, very similar to the one\n    used by the Attention is all you need paper, generalized to work on images.\n    \"\"\"\n\n    def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):\n        super().__init__()\n        self.num_pos_feats = num_pos_feats\n        self.temperature = temperature\n        self.normalize = normalize\n        if scale is not None and normalize is False:\n            raise ValueError(\"normalize should be True if scale is passed\")\n        if scale is None:\n            scale = 2 * math.pi\n        self.scale = scale\n\n    def forward(self, x, mask=None):\n        if mask is None:\n            mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)\n        not_mask = ~mask\n        y_embed = not_mask.cumsum(1, dtype=torch.float32)\n        x_embed = not_mask.cumsum(2, dtype=torch.float32)\n        if self.normalize:\n            eps = 1e-6\n            y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale\n            x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale\n\n        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)\n        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)\n\n        pos_x = x_embed[:, :, :, None] / dim_t\n        pos_y = y_embed[:, :, :, None] / dim_t\n        pos_x = torch.stack(\n            (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4\n        ).flatten(3)\n        pos_y = torch.stack(\n            (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4\n        ).flatten(3)\n        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)\n        return pos\n\n\ndef _get_activation_fn(activation):\n    \"\"\"Return an activation function 
given a string\"\"\"\n    if activation == \"relu\":\n        return F.relu\n    if activation == \"gelu\":\n        return F.gelu\n    if activation == \"glu\":\n        return F.glu\n    raise RuntimeError(f\"activation should be relu/gelu, not {activation}.\")\n\n\nclass TransformerSALayer(nn.Module):\n    def __init__(self, embed_dim, nhead=8, dim_mlp=2048, dropout=0.0, activation=\"gelu\"):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(embed_dim, nhead, dropout=dropout)\n        # Implementation of Feedforward model - MLP\n        self.linear1 = nn.Linear(embed_dim, dim_mlp)\n        self.dropout = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(dim_mlp, embed_dim)\n\n        self.norm1 = nn.LayerNorm(embed_dim)\n        self.norm2 = nn.LayerNorm(embed_dim)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n\n        self.activation = _get_activation_fn(activation)\n\n    def with_pos_embed(self, tensor, pos: Optional[Tensor]):\n        return tensor if pos is None else tensor + pos\n\n    def forward(\n        self,\n        tgt,\n        tgt_mask: Optional[Tensor] = None,\n        tgt_key_padding_mask: Optional[Tensor] = None,\n        query_pos: Optional[Tensor] = None,\n    ):\n\n        tgt2 = self.norm1(tgt)\n        q = k = self.with_pos_embed(tgt2, query_pos)\n        tgt2 = self.self_attn(q,\n                              k,\n                              value=tgt2,\n                              attn_mask=tgt_mask,\n                              key_padding_mask=tgt_key_padding_mask)[0]\n        tgt = tgt + self.dropout1(tgt2)\n\n        # ffn\n        tgt2 = self.norm2(tgt)\n        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))\n        tgt = tgt + self.dropout2(tgt2)\n        return tgt\n \n \nclass TransformerSALayerTemporal(nn.Module):\n    def __init__(self, embed_dim, nhead=8, dim_mlp=2048, dropout=0.0, activation=\"gelu\"):\n        super().__init__()\n\n        self.self_attn = nn.MultiheadAttention(embed_dim, nhead, dropout=dropout)\n        # Implementation of Feedforward model - MLP\n        self.linear1 = nn.Linear(embed_dim, dim_mlp)\n        self.dropout = nn.Dropout(dropout)\n        self.linear2 = nn.Linear(dim_mlp, embed_dim)\n\n        self.norm1 = nn.LayerNorm(embed_dim)\n        self.norm2 = nn.LayerNorm(embed_dim)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n\n        self.activation = _get_activation_fn(activation)\n\n    def with_pos_embed(self, tensor, pos: Optional[Tensor]):\n        return tensor if pos is None else tensor + pos\n\n    def forward(self,\n                tgt,\n                frame_length=10,\n                batch_size=1,\n                tgt_mask: Optional[Tensor] = None,\n                tgt_key_padding_mask: Optional[Tensor] = None,\n                query_pos: Optional[Tensor] = None):\n\n        tgt = rearrange(tgt, \"d (b t) c -> t (b d) c\", t=frame_length)\n\n        tgt2 = self.norm1(tgt)\n        q = k = self.with_pos_embed(tgt2, query_pos)\n        tgt2 = self.self_attn(q,\n                              k,\n                              value=tgt2,\n                              attn_mask=tgt_mask,\n                              key_padding_mask=tgt_key_padding_mask)[0]\n        tgt = tgt + self.dropout1(tgt2)\n\n        # ffn\n        tgt2 = self.norm2(tgt)\n        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))\n        tgt = tgt + 
self.dropout2(tgt2)\n        # reshape\n        tgt = rearrange(tgt, \"t (b d) c -> d (b t) c\", b=batch_size)\n\n        return tgt\n\n\nclass Fuse_sft_block(nn.Module):\n    def __init__(self, in_ch, out_ch):\n        super().__init__()\n        self.encode_enc = ResBlock(2*in_ch, out_ch)\n\n        self.scale = nn.Sequential(\n                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),\n                    nn.LeakyReLU(0.2, True),\n                    nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1))\n\n        self.shift = nn.Sequential(\n                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),\n                    nn.LeakyReLU(0.2, True),\n                    nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1))\n\n    def forward(self, enc_feat, dec_feat, w=1):\n        enc_feat = self.encode_enc(torch.cat([enc_feat, dec_feat], dim=1))\n        scale = self.scale(enc_feat)\n        shift = self.shift(enc_feat)\n        residual = w * (dec_feat * scale + shift)\n        out = dec_feat + residual\n        return out\n\nclass ExpModule(nn.Module):\n    def forward(self, x):\n        return torch.exp(x)\n\n\nclass MultiScaleFuse(nn.Module):\n    def __init__(self):\n        super(MultiScaleFuse, self).__init__()\n        self.s64_conv = nn.Conv2d(in_channels=256*16, out_channels=256, kernel_size=1)\n        self.s32_conv = nn.Conv2d(in_channels=256*4, out_channels=256, kernel_size=1)\n        self.s16_conv = nn.Conv2d(in_channels=256*1, out_channels=256, kernel_size=1)\n        self.out = nn.Conv2d(in_channels=256*3, out_channels=256, kernel_size=3, stride=1, padding=1)\n\n    def forward(self, s64, s32, s16):\n\n        feat_64 = rearrange(s64, \"bt c (h h1) (w w1) -> bt (c h1 w1) h w\", h1=4, w1=4) \n        feat_64 = self.s64_conv(feat_64)\n        feat_32 = rearrange(s32, \"bt c (h h1) (w w1) -> bt (c h1 w1) h w\", h1=2, w1=2) \n        feat_32 = self.s32_conv(feat_32)\n        feat_16 = self.s16_conv(s16)\n\n        out = self.out(torch.concat([feat_64, feat_32, feat_16], dim=1))\n        return out\n\n@ARCH_REGISTRY.register()\nclass TemporalCodeFormerDirDistMultiScale(VQAutoEncoder):\n    def __init__(self,\n                 dim_embed=512,\n                 n_head=8,\n                 n_layers=9, \n                 codebook_size=1024,\n                 latent_size=256,\n                 connect_list=['32', '64', '128', '256'],\n                 fix_modules=['quantize','generator'],\n                 vqgan_path=None,\n                 frame_length=10,\n                 new_codebook_size=None):\n        super(TemporalCodeFormerDirDistMultiScale, self).__init__(512, 64, [1, 2, 2, 4, 4, 8], 'nearest', 2, [16], codebook_size)\n\n        if vqgan_path is not None:\n            self.load_state_dict(\n                torch.load(vqgan_path, map_location='cpu')['params_ema'])\n\n        self.frame_length = frame_length\n\n        self.connect_list = connect_list\n        self.n_layers = n_layers\n        self.dim_embed = dim_embed\n        self.dim_mlp = dim_embed * 2\n\n        self.position_emb = nn.Parameter(torch.zeros(latent_size, self.dim_embed))\n        self.position_emb_temporal = nn.Parameter(torch.zeros(self.frame_length, self.dim_embed))\n        self.feat_emb = nn.Linear(256, self.dim_embed)\n\n        self.codebook_size = codebook_size\n        self.new_codebook_size = None\n        if new_codebook_size is not None:\n            self.new_codebook_size = new_codebook_size\n            self.codebook_size += new_codebook_size\n            
self.new_codebook = nn.Parameter(torch.normal(mean=0, std=0.75, size=(new_codebook_size, 256)))\n            self.new_codebook.requires_grad = True\n\n        self.multiscale = MultiScaleFuse()\n\n        # transformer in Space\n        self.ft_layers = nn.Sequential(*[TransformerSALayer(embed_dim=dim_embed,\n                                                            nhead=n_head,\n                                                            dim_mlp=self.dim_mlp,\n                                                            dropout=0.1)\n                                    for _ in range(self.n_layers)])\n        # transformer in Temporal\n        self.dir_dist_layers = nn.Sequential(*[TransformerSALayerTemporal(embed_dim=dim_embed,\n                                                                          nhead=n_head,\n                                                                          dim_mlp=self.dim_mlp,\n                                                                          dropout=0.1)\n                                    for _ in range(self.n_layers)])\n\n        # logits_predict head\n        self.idx_pred_layer = nn.Sequential(\n            nn.LayerNorm(dim_embed),\n            nn.Linear(dim_embed, self.codebook_size, bias=False),\n        )\n\n        self.channels = {\n            '16': 512,\n            '32': 256,\n            '64': 256,\n            '128': 128,\n            '256': 128,\n            '512': 64,\n        }\n\n\n        self.fuse_encoder_block = {'512':2, '256':5, '128':8, '64':11, '32':14, '16':18}\n        self.fuse_generator_block = {'16':6, '32': 9, '64':12, '128':15, '256':18, '512':21}\n\n        # fuse_convs_dict\n        self.fuse_convs_dict = nn.ModuleDict()\n        for f_size in self.connect_list:\n            in_ch = self.channels[f_size]\n            self.fuse_convs_dict[f_size] = Fuse_sft_block(in_ch, in_ch)\n\n        self.softplus_layer = nn.Softplus()\n        self.position_emb.requires_grad = False\n        print(\"Module: position_emb_spatial Frozen!\")\n\n        if fix_modules is not None:\n            print(fix_modules, \"frozen!\")\n            for module in fix_modules:\n                for param_name, param in getattr(self, module).named_parameters():\n                    if \"conv3d\" in param_name:\n                        param.requires_grad = True\n                    else:\n                        # print(f\"Module: {module}, Parameter name: {param_name} Frozen!\")\n                        param.requires_grad = False\n\n    def _init_weights(self, module):\n        if isinstance(module, (nn.Linear, nn.Embedding)):\n            module.weight.data.normal_(mean=0.0, std=0.02)\n            if isinstance(module, nn.Linear) and module.bias is not None:\n                module.bias.data.zero_()\n        elif isinstance(module, nn.LayerNorm):\n            module.bias.data.zero_()\n            module.weight.data.fill_(1.0)\n\n    def forward(self, x, w=0, detach_16=True, code_only=False, adain=False):\n        # ################### Encoder #####################\n        enc_feat_dict = {}\n        out_list = [self.fuse_encoder_block[f_size] for f_size in self.connect_list]\n        for i, block in enumerate(self.encoder.blocks):\n            x = block(x) \n            if i in out_list:\n                enc_feat_dict[str(x.shape[-1])] = x.clone()\n\n        lq_feat = self.multiscale(enc_feat_dict['64'], enc_feat_dict['32'], x)\n\n        bt, c, h, width = lq_feat.shape\n        b = bt // self.frame_length\n        t = 
self.frame_length\n        # ################# Spatial & Temporal Transformers ###################\n        spatial_pos_emb = self.position_emb.unsqueeze(1).repeat(1, bt, 1)\n        temporal_pos_emb = self.position_emb_temporal.unsqueeze(1).repeat(1, b*h*width, 1)\n        feat_emb = self.feat_emb(lq_feat.flatten(2).permute(2, 0, 1))\n        query_emb = feat_emb\n\n        for layer_space, layer_temporal in zip(self.ft_layers, self.dir_dist_layers):\n            query_emb = layer_space(query_emb, query_pos=spatial_pos_emb)\n            query_emb = layer_temporal(query_emb, query_pos=temporal_pos_emb, frame_length=t, batch_size=b)\n\n        alpha = self.idx_pred_layer(query_emb)\n        alpha = alpha.permute(1, 0, 2)\n        alpha = self.softplus_layer(alpha) + 1e-2\n\n        dirichlet_dist = dist.Dirichlet(alpha)\n        parameters = dirichlet_dist.rsample()\n\n        parameters_reshaped = parameters.reshape(-1, self.codebook_size)\n\n        if self.new_codebook_size is not None:\n            quant_feat = torch.matmul(parameters_reshaped[:, :-self.new_codebook_size], self.quantize.embedding.weight) + \\\n                 torch.matmul(parameters_reshaped[:, -self.new_codebook_size:], self.new_codebook)\n        else:\n            quant_feat = torch.matmul(parameters_reshaped, self.quantize.embedding.weight) \n\n        quant_feat = rearrange(quant_feat, \"(b t h w) c -> (b t) c h w\", b=b, t=t, h=h, w=width)\n\n\n        if adain:\n            quant_feat = adaptive_instance_normalization(quant_feat, lq_feat)\n\n        # ################## Generator ####################\n        x = quant_feat\n        fuse_list = [self.fuse_generator_block[f_size] for f_size in self.connect_list]\n\n        for i, block in enumerate(self.generator.blocks):\n            x = block(x) \n            if i in fuse_list:\n                f_size = str(x.shape[-1])\n                if w > 0:\n                    x = self.fuse_convs_dict[f_size](enc_feat_dict[f_size].detach(), x, w)\n        out = x\n        return out, lq_feat, alpha + 1e-6\n"
  },
  {
    "path": "basicsr/archs/rrdbnet_arch.py",
    "content": "import torch\nfrom torch import nn as nn\nfrom torch.nn import functional as F\n\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom .arch_util import default_init_weights, make_layer, pixel_unshuffle\n\n\nclass ResidualDenseBlock(nn.Module):\n    \"\"\"Residual Dense Block.\n\n    Used in RRDB block in ESRGAN.\n\n    Args:\n        num_feat (int): Channel number of intermediate features.\n        num_grow_ch (int): Channels for each growth.\n    \"\"\"\n\n    def __init__(self, num_feat=64, num_grow_ch=32):\n        super(ResidualDenseBlock, self).__init__()\n        self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1)\n        self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1)\n        self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1)\n        self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1)\n        self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1)\n\n        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n        # initialization\n        default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)\n\n    def forward(self, x):\n        x1 = self.lrelu(self.conv1(x))\n        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))\n        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))\n        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))\n        x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))\n        # Emperically, we use 0.2 to scale the residual for better performance\n        return x5 * 0.2 + x\n\n\nclass RRDB(nn.Module):\n    \"\"\"Residual in Residual Dense Block.\n\n    Used in RRDB-Net in ESRGAN.\n\n    Args:\n        num_feat (int): Channel number of intermediate features.\n        num_grow_ch (int): Channels for each growth.\n    \"\"\"\n\n    def __init__(self, num_feat, num_grow_ch=32):\n        super(RRDB, self).__init__()\n        self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch)\n        self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch)\n        self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch)\n\n    def forward(self, x):\n        out = self.rdb1(x)\n        out = self.rdb2(out)\n        out = self.rdb3(out)\n        # Emperically, we use 0.2 to scale the residual for better performance\n        return out * 0.2 + x\n\n\n@ARCH_REGISTRY.register()\nclass RRDBNet(nn.Module):\n    \"\"\"Networks consisting of Residual in Residual Dense Block, which is used\n    in ESRGAN.\n\n    ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks.\n\n    We extend ESRGAN for scale x2 and scale x1.\n    Note: This is one option for scale 1, scale 2 in RRDBNet.\n    We first employ the pixel-unshuffle (an inverse operation of pixelshuffle to reduce the spatial size\n    and enlarge the channel size before feeding inputs into the main ESRGAN architecture.\n\n    Args:\n        num_in_ch (int): Channel number of inputs.\n        num_out_ch (int): Channel number of outputs.\n        num_feat (int): Channel number of intermediate features.\n            Default: 64\n        num_block (int): Block number in the trunk network. Defaults: 23\n        num_grow_ch (int): Channels for each growth. 
Default: 32.\n    \"\"\"\n\n    def __init__(self, num_in_ch, num_out_ch, scale=4, num_feat=64, num_block=23, num_grow_ch=32):\n        super(RRDBNet, self).__init__()\n        self.scale = scale\n        if scale == 2:\n            num_in_ch = num_in_ch * 4\n        elif scale == 1:\n            num_in_ch = num_in_ch * 16\n        self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1)\n        self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch)\n        self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1)\n        # upsample\n        self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)\n        self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)\n        self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)\n        self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)\n\n        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n    def forward(self, x):\n        if self.scale == 2:\n            feat = pixel_unshuffle(x, scale=2)\n        elif self.scale == 1:\n            feat = pixel_unshuffle(x, scale=4)\n        else:\n            feat = x\n        feat = self.conv_first(feat)\n        body_feat = self.conv_body(self.body(feat))\n        feat = feat + body_feat\n        # upsample\n        feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest')))\n        feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest')))\n        out = self.conv_last(self.lrelu(self.conv_hr(feat)))\n        return out"
  },
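The `scale` handling in `RRDBNet.forward` above relies on pixel-unshuffle trading spatial resolution for channels, which is why `__init__` multiplies `num_in_ch` by 4 (scale 2) or 16 (scale 1). A minimal sketch of that shape arithmetic, using `torch.nn.functional.pixel_unshuffle` under the assumption that it matches the repo's `arch_util.pixel_unshuffle` helper:

```python
# Sketch of the pixel-unshuffle trick behind RRDBNet's scale 1/2/4 handling.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 64, 64)          # (b, c, h, w)
print(F.pixel_unshuffle(x, 2).shape)   # torch.Size([1, 12, 32, 32]) -> c * 4
print(F.pixel_unshuffle(x, 4).shape)   # torch.Size([1, 48, 16, 16]) -> c * 16
# The trunk then always applies two fixed 2x nearest-neighbour upsamples, so
# the net scale is 4 / 2 / 1 for no-unshuffle / unshuffle-2 / unshuffle-4.
```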
  {
    "path": "basicsr/archs/vgg_arch.py",
    "content": "import os\nimport torch\nfrom collections import OrderedDict\nfrom torch import nn as nn\nfrom torchvision.models import vgg as vgg\n\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\nVGG_PRETRAIN_PATH = './ckpts/vgg/vgg16-397923af.pth'\nNAMES = {\n    'vgg11': [\n        'conv1_1', 'relu1_1', 'pool1', 'conv2_1', 'relu2_1', 'pool2', 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2',\n        'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2',\n        'pool5'\n    ],\n    'vgg13': [\n        'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',\n        'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4',\n        'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'pool5'\n    ],\n    'vgg16': [\n        'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',\n        'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2',\n        'relu4_2', 'conv4_3', 'relu4_3', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3',\n        'pool5'\n    ],\n    'vgg19': [\n        'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',\n        'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'conv3_4', 'relu3_4', 'pool3', 'conv4_1',\n        'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3', 'relu4_3', 'conv4_4', 'relu4_4', 'pool4', 'conv5_1', 'relu5_1',\n        'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', 'conv5_4', 'relu5_4', 'pool5'\n    ]\n}\n\n\ndef insert_bn(names):\n    \"\"\"Insert bn layer after each conv.\n\n    Args:\n        names (list): The list of layer names.\n\n    Returns:\n        list: The list of layer names with bn layers.\n    \"\"\"\n    names_bn = []\n    for name in names:\n        names_bn.append(name)\n        if 'conv' in name:\n            position = name.replace('conv', '')\n            names_bn.append('bn' + position)\n    return names_bn\n\n\n@ARCH_REGISTRY.register()\nclass VGGFeatureExtractor(nn.Module):\n    \"\"\"VGG network for feature extraction.\n\n    In this implementation, we allow users to choose whether use normalization\n    in the input feature and the type of vgg network. Note that the pretrained\n    path must fit the vgg type.\n\n    Args:\n        layer_name_list (list[str]): Forward function returns the corresponding\n            features according to the layer_name_list.\n            Example: {'relu1_1', 'relu2_1', 'relu3_1'}.\n        vgg_type (str): Set the type of vgg network. Default: 'vgg19'.\n        use_input_norm (bool): If True, normalize the input image. Importantly,\n            the input feature must in the range [0, 1]. Default: True.\n        range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].\n            Default: False.\n        requires_grad (bool): If true, the parameters of VGG network will be\n            optimized. Default: False.\n        remove_pooling (bool): If true, the max pooling operations in VGG net\n            will be removed. Default: False.\n        pooling_stride (int): The stride of max pooling operation. 
Default: 2.\n    \"\"\"\n\n    def __init__(self,\n                 layer_name_list,\n                 vgg_type='vgg19',\n                 use_input_norm=True,\n                 range_norm=False,\n                 requires_grad=False,\n                 remove_pooling=False,\n                 pooling_stride=2):\n        super(VGGFeatureExtractor, self).__init__()\n\n        self.layer_name_list = layer_name_list\n        self.use_input_norm = use_input_norm\n        self.range_norm = range_norm\n\n        self.names = NAMES[vgg_type.replace('_bn', '')]\n        if 'bn' in vgg_type:\n            self.names = insert_bn(self.names)\n\n        # only borrow layers that will be used to avoid unused params\n        max_idx = 0\n        for v in layer_name_list:\n            idx = self.names.index(v)\n            if idx > max_idx:\n                max_idx = idx\n\n        if os.path.exists(VGG_PRETRAIN_PATH):\n            vgg_net = getattr(vgg, vgg_type)(pretrained=False)\n            state_dict = torch.load(VGG_PRETRAIN_PATH, map_location=lambda storage, loc: storage)\n            vgg_net.load_state_dict(state_dict)\n        else:\n            vgg_net = getattr(vgg, vgg_type)(pretrained=True)\n        \n        features = vgg_net.features[:max_idx + 1]\n\n        modified_net = OrderedDict()\n        for k, v in zip(self.names, features):\n            if 'pool' in k:\n                # if remove_pooling is true, pooling operation will be removed\n                if remove_pooling:\n                    continue\n                else:\n                    # in some cases, we may want to change the default stride\n                    modified_net[k] = nn.MaxPool2d(kernel_size=2, stride=pooling_stride)\n            else:\n                modified_net[k] = v\n\n        self.vgg_net = nn.Sequential(modified_net)\n\n        if not requires_grad:\n            self.vgg_net.eval()\n            for param in self.parameters():\n                param.requires_grad = False\n        else:\n            self.vgg_net.train()\n            for param in self.parameters():\n                param.requires_grad = True\n\n        if self.use_input_norm:\n            # the mean is for image with range [0, 1]\n            self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))\n            # the std is for image with range [0, 1]\n            self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))\n\n    def forward(self, x):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Input tensor with shape (n, c, h, w).\n\n        Returns:\n            Tensor: Forward results.\n        \"\"\"\n        if self.range_norm:\n            x = (x + 1) / 2\n        if self.use_input_norm:\n            x = (x - self.mean) / self.std\n        output = {}\n\n        for key, layer in self.vgg_net._modules.items():\n            x = layer(x)\n            if key in self.layer_name_list:\n                output[key] = x.clone()\n\n        return output\n"
  },
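A minimal usage sketch for `VGGFeatureExtractor` as defined above; it assumes `basicsr` is importable and that torchvision can supply the VGG-19 weights (or that the local `VGG_PRETRAIN_PATH` checkpoint exists):

```python
# Extract intermediate VGG features, e.g. as inputs to a perceptual loss.
import torch
from basicsr.archs.vgg_arch import VGGFeatureExtractor

extractor = VGGFeatureExtractor(
    layer_name_list=['relu1_1', 'relu2_1', 'relu3_1'],
    vgg_type='vgg19',
    use_input_norm=True,   # expects inputs in [0, 1]
    range_norm=False)

x = torch.rand(1, 3, 224, 224)   # fake image batch in [0, 1]
feats = extractor(x)             # dict: layer name -> feature map
for name, f in feats.items():
    print(name, tuple(f.shape))
```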
  {
    "path": "basicsr/archs/vqgan_arch.py",
    "content": "'''\nVQGAN code, adapted from the original created by the Unleashing Transformers authors:\nhttps://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py\n\n'''\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport copy\nfrom basicsr.utils import get_root_logger\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\ndef normalize(in_channels):\n    return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)\n    \n\n@torch.jit.script\ndef swish(x):\n    return x*torch.sigmoid(x)\n\n\n#  Define VQVAE classes\nclass VectorQuantizer(nn.Module):\n    def __init__(self, codebook_size, emb_dim, beta):\n        super(VectorQuantizer, self).__init__()\n        self.codebook_size = codebook_size  # number of embeddings\n        self.emb_dim = emb_dim  # dimension of embedding\n        self.beta = beta  # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2\n        self.embedding = nn.Embedding(self.codebook_size, self.emb_dim)\n        self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size)\n\n    def forward(self, z):\n        # reshape z -> (batch, height, width, channel) and flatten\n        z = z.permute(0, 2, 3, 1).contiguous()\n        z_flattened = z.view(-1, self.emb_dim)\n\n        # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z\n        d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \\\n            2 * torch.matmul(z_flattened, self.embedding.weight.t())\n\n        mean_distance = torch.mean(d)\n        # find closest encodings\n        min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1)\n        # min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False)\n        # [0-1], higher score, higher confidence\n        # min_encoding_scores = torch.exp(-min_encoding_scores/10)\n\n        min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z)\n        min_encodings.scatter_(1, min_encoding_indices, 1)\n\n        # get quantized latent vectors\n        z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)\n        # compute loss for embedding\n        loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2)\n        # preserve gradients\n        z_q = z + (z_q - z).detach()\n\n        # perplexity\n        e_mean = torch.mean(min_encodings, dim=0)\n        perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))\n        # reshape back to match original input shape\n        z_q = z_q.permute(0, 3, 1, 2).contiguous()\n\n        return z_q, loss, {\n            \"perplexity\": perplexity,\n            \"min_encodings\": min_encodings,\n            \"min_encoding_indices\": min_encoding_indices,\n            \"mean_distance\": mean_distance\n            }\n\n    def get_codebook_feat(self, indices, shape):\n        # input indices: batch*token_num -> (batch*token_num)*1\n        # shape: batch, height, width, channel\n        indices = indices.view(-1,1)\n        min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices)\n        min_encodings.scatter_(1, indices, 1)\n        # get quantized latent vectors\n        z_q = torch.matmul(min_encodings.float(), self.embedding.weight)\n\n        if shape is not None:  # reshape back to match original input shape\n            z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous()\n\n        return z_q\n\n\nclass 
GumbelQuantizer(nn.Module):\n    def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0):\n        super().__init__()\n        self.codebook_size = codebook_size  # number of embeddings\n        self.emb_dim = emb_dim  # dimension of embedding\n        self.straight_through = straight_through\n        self.temperature = temp_init\n        self.kl_weight = kl_weight\n        self.proj = nn.Conv2d(num_hiddens, codebook_size, 1)  # projects last encoder layer to quantized logits\n        self.embed = nn.Embedding(codebook_size, emb_dim)\n\n    def forward(self, z):\n        hard = self.straight_through if self.training else True\n\n        logits = self.proj(z)\n\n        soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard)\n\n        z_q = torch.einsum(\"b n h w, n d -> b d h w\", soft_one_hot, self.embed.weight)\n\n        # + kl divergence to the prior loss\n        qy = F.softmax(logits, dim=1)\n        diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean()\n        min_encoding_indices = soft_one_hot.argmax(dim=1)\n\n        return z_q, diff, {\n            \"min_encoding_indices\": min_encoding_indices\n        }\n\n\nclass Downsample(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0)\n\n    def forward(self, x):\n        pad = (0, 1, 0, 1)\n        x = torch.nn.functional.pad(x, pad, mode=\"constant\", value=0)\n        x = self.conv(x)\n        return x\n\n\nclass Upsample(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1)\n\n    def forward(self, x):\n        x = F.interpolate(x, scale_factor=2.0, mode=\"nearest\")\n        x = self.conv(x)\n\n        return x\n\n\nclass ResBlock(nn.Module):\n    def __init__(self, in_channels, out_channels=None):\n        super(ResBlock, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = in_channels if out_channels is None else out_channels\n        self.norm1 = normalize(in_channels)\n        # use the resolved self.out_channels so the out_channels=None default works\n        self.conv1 = nn.Conv2d(in_channels, self.out_channels, kernel_size=3, stride=1, padding=1)\n        self.norm2 = normalize(self.out_channels)\n        self.conv2 = nn.Conv2d(self.out_channels, self.out_channels, kernel_size=3, stride=1, padding=1)\n        if self.in_channels != self.out_channels:\n            self.conv_out = nn.Conv2d(in_channels, self.out_channels, kernel_size=1, stride=1, padding=0)\n\n    def forward(self, x_in):\n        x = x_in\n        x = self.norm1(x)\n        x = swish(x)\n        x = self.conv1(x)\n        x = self.norm2(x)\n        x = swish(x)\n        x = self.conv2(x)\n        if self.in_channels != self.out_channels:\n            x_in = self.conv_out(x_in)\n\n        return x + x_in\n\n\nclass AttnBlock(nn.Module):\n    def __init__(self, in_channels):\n        super().__init__()\n        self.in_channels = in_channels\n\n        self.norm = normalize(in_channels)\n        self.q = torch.nn.Conv2d(\n            in_channels,\n            in_channels,\n            kernel_size=1,\n            stride=1,\n            padding=0\n        )\n        self.k = torch.nn.Conv2d(\n            in_channels,\n            in_channels,\n            kernel_size=1,\n            stride=1,\n            padding=0\n        )\n        self.v = torch.nn.Conv2d(\n            
in_channels,\n            in_channels,\n            kernel_size=1,\n            stride=1,\n            padding=0\n        )\n        self.proj_out = torch.nn.Conv2d(\n            in_channels,\n            in_channels,\n            kernel_size=1,\n            stride=1,\n            padding=0\n        )\n\n    def forward(self, x):\n        h_ = x\n        h_ = self.norm(h_)\n        q = self.q(h_)\n        k = self.k(h_)\n        v = self.v(h_)\n\n        # compute attention\n        b, c, h, w = q.shape\n        q = q.reshape(b, c, h*w)\n        q = q.permute(0, 2, 1)\n        k = k.reshape(b, c, h*w)\n        w_ = torch.bmm(q, k)\n        w_ = w_ * (int(c)**(-0.5))\n        w_ = F.softmax(w_, dim=2)\n\n        # attend to values\n        v = v.reshape(b, c, h*w)\n        w_ = w_.permute(0, 2, 1)\n        h_ = torch.bmm(v, w_)\n        h_ = h_.reshape(b, c, h, w)\n\n        h_ = self.proj_out(h_)\n\n        return x+h_\n\n\nclass Encoder(nn.Module):\n    def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions):\n        super().__init__()\n        self.nf = nf\n        self.num_resolutions = len(ch_mult)\n        self.num_res_blocks = num_res_blocks\n        self.resolution = resolution\n        self.attn_resolutions = attn_resolutions\n\n        curr_res = self.resolution\n        in_ch_mult = (1,)+tuple(ch_mult)\n\n        blocks = []\n        # initial convolution\n        blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1))\n\n        # residual and downsampling blocks, with attention on smaller res (16x16)\n        for i in range(self.num_resolutions):\n            block_in_ch = nf * in_ch_mult[i]\n            block_out_ch = nf * ch_mult[i]\n            for _ in range(self.num_res_blocks):\n                blocks.append(ResBlock(block_in_ch, block_out_ch))\n                block_in_ch = block_out_ch\n                if curr_res in attn_resolutions:\n                    blocks.append(AttnBlock(block_in_ch))\n\n            if i != self.num_resolutions - 1:\n                blocks.append(Downsample(block_in_ch))\n                curr_res = curr_res // 2\n\n        # non-local attention block\n        blocks.append(ResBlock(block_in_ch, block_in_ch))\n        blocks.append(AttnBlock(block_in_ch))\n        blocks.append(ResBlock(block_in_ch, block_in_ch))\n\n        # normalise and convert to latent size\n        blocks.append(normalize(block_in_ch))\n        blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1))\n        self.blocks = nn.ModuleList(blocks)\n\n    def forward(self, x):\n        for block in self.blocks:\n            x = block(x)\n\n        return x\n\n\nclass Generator(nn.Module):\n    def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions):\n        super().__init__()\n        self.nf = nf\n        self.ch_mult = ch_mult\n        self.num_resolutions = len(self.ch_mult)\n        self.num_res_blocks = res_blocks\n        self.resolution = img_size\n        self.attn_resolutions = attn_resolutions\n        self.in_channels = emb_dim\n        self.out_channels = 3\n        block_in_ch = self.nf * self.ch_mult[-1]\n        curr_res = self.resolution // 2 ** (self.num_resolutions-1)\n\n        blocks = []\n        # initial conv\n        blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1))\n\n        # non-local attention block\n        blocks.append(ResBlock(block_in_ch, block_in_ch))\n        
blocks.append(AttnBlock(block_in_ch))\n        blocks.append(ResBlock(block_in_ch, block_in_ch))\n\n        for i in reversed(range(self.num_resolutions)):\n            block_out_ch = self.nf * self.ch_mult[i]\n\n            for _ in range(self.num_res_blocks):\n                blocks.append(ResBlock(block_in_ch, block_out_ch))\n                block_in_ch = block_out_ch\n\n                if curr_res in self.attn_resolutions:\n                    blocks.append(AttnBlock(block_in_ch))\n\n            if i != 0:\n                blocks.append(Upsample(block_in_ch))\n                curr_res = curr_res * 2\n\n        blocks.append(normalize(block_in_ch))\n        blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1))\n\n        self.blocks = nn.ModuleList(blocks)\n\n    def forward(self, x):\n        for block in self.blocks:\n            x = block(x)\n\n        return x\n\n\n@ARCH_REGISTRY.register()\nclass VQAutoEncoder(nn.Module):\n    def __init__(self, img_size, nf, ch_mult, quantizer=\"nearest\", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256,\n                beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None):\n        super().__init__()\n        logger = get_root_logger()\n        self.in_channels = 3\n        self.nf = nf\n        self.n_blocks = res_blocks\n        self.codebook_size = codebook_size\n        self.embed_dim = emb_dim\n        self.ch_mult = ch_mult\n        self.resolution = img_size\n        self.attn_resolutions = attn_resolutions\n        self.quantizer_type = quantizer\n        self.encoder = Encoder(\n            self.in_channels,\n            self.nf,\n            self.embed_dim,\n            self.ch_mult,\n            self.n_blocks,\n            self.resolution,\n            self.attn_resolutions\n        )\n        if self.quantizer_type == \"nearest\":\n            self.beta = beta  # 0.25\n            self.quantize = VectorQuantizer(self.codebook_size, self.embed_dim, self.beta)\n        elif self.quantizer_type == \"gumbel\":\n            self.gumbel_num_hiddens = emb_dim\n            self.straight_through = gumbel_straight_through\n            self.kl_weight = gumbel_kl_weight\n            self.quantize = GumbelQuantizer(\n                self.codebook_size,\n                self.embed_dim,\n                self.gumbel_num_hiddens,\n                self.straight_through,\n                self.kl_weight\n            )\n        self.generator = Generator(\n            self.nf,\n            self.embed_dim,\n            self.ch_mult,\n            self.n_blocks,\n            self.resolution,\n            self.attn_resolutions\n        )\n\n        if model_path is not None:\n            # load the checkpoint once and reuse it for both key variants\n            chkpt = torch.load(model_path, map_location='cpu')\n            if 'params_ema' in chkpt:\n                self.load_state_dict(chkpt['params_ema'])\n                logger.info(f'vqgan is loaded from: {model_path} [params_ema]')\n            elif 'params' in chkpt:\n                self.load_state_dict(chkpt['params'])\n                logger.info(f'vqgan is loaded from: {model_path} [params]')\n            else:\n                raise ValueError('Wrong params!')\n\n\n    def forward(self, x):\n        x = self.encoder(x)\n        quant, codebook_loss, quant_stats = self.quantize(x)\n        x = self.generator(quant)\n        return x, codebook_loss, quant_stats\n\n\n\n# patch based 
discriminator\n@ARCH_REGISTRY.register()\nclass VQGANDiscriminator(nn.Module):\n    def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None):\n        super().__init__()\n\n        layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)]\n        ndf_mult = 1\n        ndf_mult_prev = 1\n        for n in range(1, n_layers):  # gradually increase the number of filters\n            ndf_mult_prev = ndf_mult\n            ndf_mult = min(2 ** n, 8)\n            layers += [\n                nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False),\n                nn.BatchNorm2d(ndf * ndf_mult),\n                nn.LeakyReLU(0.2, True)\n            ]\n\n        ndf_mult_prev = ndf_mult\n        ndf_mult = min(2 ** n_layers, 8)\n\n        layers += [\n            nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False),\n            nn.BatchNorm2d(ndf * ndf_mult),\n            nn.LeakyReLU(0.2, True)\n        ]\n\n        layers += [\n            nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)]  # output 1 channel prediction map\n        self.main = nn.Sequential(*layers)\n\n        if model_path is not None:\n            # load the checkpoint once and reuse it for both key variants\n            chkpt = torch.load(model_path, map_location='cpu')\n            if 'params_d' in chkpt:\n                self.load_state_dict(chkpt['params_d'])\n            elif 'params' in chkpt:\n                self.load_state_dict(chkpt['params'])\n            else:\n                raise ValueError('Wrong params!')\n\n    def forward(self, x):\n        return self.main(x)"
  },
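The core of `VectorQuantizer.forward` above is a nearest-neighbour codebook lookup combined with a straight-through gradient estimator. A self-contained sketch of just that mechanism, with toy shapes rather than the repo's API:

```python
# Nearest-neighbour vector quantization with a straight-through gradient.
import torch

codebook = torch.randn(1024, 256)                # (codebook_size, emb_dim)
z = torch.randn(4096, 256, requires_grad=True)   # flattened encoder output

# ||z - e||^2 = ||z||^2 + ||e||^2 - 2 z.e, as in the forward() comment
d = (z ** 2).sum(1, keepdim=True) + (codebook ** 2).sum(1) - 2 * z @ codebook.t()
idx = d.argmin(dim=1)        # closest code per latent vector
z_q = codebook[idx]          # quantized latents, (4096, 256)

# straight-through: forward pass uses z_q, backward passes gradients to z
z_q = z + (z_q - z).detach()
z_q.sum().backward()
print(z.grad.shape)          # torch.Size([4096, 256])
```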
  {
    "path": "basicsr/data/__init__.py",
    "content": "import importlib\nimport numpy as np\nimport random\nimport torch\nimport torch.utils.data\nfrom copy import deepcopy\nfrom functools import partial\nfrom os import path as osp\n\nfrom basicsr.data.prefetch_dataloader import PrefetchDataLoader\nfrom basicsr.utils import get_root_logger, scandir\nfrom basicsr.utils.dist_util import get_dist_info\nfrom basicsr.utils.registry import DATASET_REGISTRY\n\n__all__ = ['build_dataset', 'build_dataloader']\n\n# automatically scan and import dataset modules for registry\n# scan all the files under the data folder with '_dataset' in file names\ndata_folder = osp.dirname(osp.abspath(__file__))\ndataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')]\n# import all the dataset modules\n_dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]\n\n\ndef build_dataset(dataset_opt):\n    \"\"\"Build dataset from options.\n\n    Args:\n        dataset_opt (dict): Configuration for dataset. It must constain:\n            name (str): Dataset name.\n            type (str): Dataset type.\n    \"\"\"\n    dataset_opt = deepcopy(dataset_opt)\n    dataset = DATASET_REGISTRY.get(dataset_opt['type'])(dataset_opt)\n    logger = get_root_logger()\n    logger.info(f'Dataset [{dataset.__class__.__name__}] - {dataset_opt[\"name\"]} ' 'is built.')\n    return dataset\n\n\ndef build_dataloader(dataset, dataset_opt, num_gpu=1, dist=False, sampler=None, seed=None):\n    \"\"\"Build dataloader.\n\n    Args:\n        dataset (torch.utils.data.Dataset): Dataset.\n        dataset_opt (dict): Dataset options. It contains the following keys:\n            phase (str): 'train' or 'val'.\n            num_worker_per_gpu (int): Number of workers for each GPU.\n            batch_size_per_gpu (int): Training batch size for each GPU.\n        num_gpu (int): Number of GPUs. Used only in the train phase.\n            Default: 1.\n        dist (bool): Whether in distributed training. Used only in the train\n            phase. Default: False.\n        sampler (torch.utils.data.sampler): Data sampler. Default: None.\n        seed (int | None): Seed. Default: None\n    \"\"\"\n    phase = dataset_opt['phase']\n    rank, _ = get_dist_info()\n    if phase == 'train':\n        if dist:  # distributed training\n            batch_size = dataset_opt['batch_size_per_gpu']\n            num_workers = dataset_opt['num_worker_per_gpu']\n        else:  # non-distributed training\n            multiplier = 1 if num_gpu == 0 else num_gpu\n            batch_size = dataset_opt['batch_size_per_gpu'] * multiplier\n            num_workers = dataset_opt['num_worker_per_gpu'] * multiplier\n        dataloader_args = dict(\n            dataset=dataset,\n            batch_size=batch_size,\n            shuffle=False,\n            num_workers=num_workers,\n            sampler=sampler,\n            drop_last=True)\n        if sampler is None:\n            dataloader_args['shuffle'] = True\n        dataloader_args['worker_init_fn'] = partial(\n            worker_init_fn, num_workers=num_workers, rank=rank, seed=seed) if seed is not None else None\n    elif phase in ['val', 'test']:  # validation\n        dataloader_args = dict(dataset=dataset, batch_size=1, shuffle=False, num_workers=0)\n    else:\n        raise ValueError(f'Wrong dataset phase: {phase}. 
' \"Supported ones are 'train', 'val' and 'test'.\")\n\n    dataloader_args['pin_memory'] = dataset_opt.get('pin_memory', False)\n\n    prefetch_mode = dataset_opt.get('prefetch_mode')\n    if prefetch_mode == 'cpu':  # CPUPrefetcher\n        num_prefetch_queue = dataset_opt.get('num_prefetch_queue', 1)\n        logger = get_root_logger()\n        logger.info(f'Use {prefetch_mode} prefetch dataloader: ' f'num_prefetch_queue = {num_prefetch_queue}')\n        return PrefetchDataLoader(num_prefetch_queue=num_prefetch_queue, **dataloader_args)\n    else:\n        # prefetch_mode=None: Normal dataloader\n        # prefetch_mode='cuda': dataloader for CUDAPrefetcher\n        return torch.utils.data.DataLoader(**dataloader_args)\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n    # Set the worker seed to num_workers * rank + worker_id + seed\n    worker_seed = num_workers * rank + worker_id + seed\n    np.random.seed(worker_seed)\n    random.seed(worker_seed)\n"
  },
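The `worker_init_fn` at the end of this file derives one seed per (rank, worker) pair so shuffling and augmentations stay reproducible without colliding across processes. A quick sketch of the arithmetic, with made-up numbers:

```python
# Per-worker seeding scheme: seed = num_workers * rank + worker_id + seed.
num_workers, seed = 4, 10
for rank in range(2):
    for worker_id in range(num_workers):
        print(rank, worker_id, num_workers * rank + worker_id + seed)
# rank 0 -> seeds 10..13, rank 1 -> seeds 14..17: distinct for every worker
```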
  {
    "path": "basicsr/data/color_dataset.py",
    "content": "import os\nimport random\nfrom pathlib import Path\n\nfrom PIL import Image\nimport cv2\nimport ffmpeg\nimport io\nimport av\nimport numpy as np\nimport torch\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.data.degradations import (random_add_gaussian_noise,\n                                       random_mixed_kernels)\nfrom basicsr.data.data_util import paths_from_folder, brush_stroke_mask, brush_stroke_mask_video, random_ff_mask\nfrom basicsr.data.transforms import augment\nfrom basicsr.utils import FileClient, get_root_logger, img2tensor, imfrombytes, scandir\nfrom basicsr.utils.registry import DATASET_REGISTRY\nfrom facelib.utils.face_restoration_helper import FaceAligner\nfrom torch.utils import data as data\n\n@DATASET_REGISTRY.register()\nclass ColorizationDataset(data.Dataset):\n    def __init__(self, opt):\n        super(ColorizationDataset, self).__init__()\n        self.opt = opt\n        self.gt_root = Path(opt['dataroot_gt'])\n\n        self.num_frame = opt['video_length'] # 5\n        self.scale = opt['scale'] # [1, 4]\n        self.need_align = opt.get('need_align', False) # False\n        self.normalize = opt.get('normalize', False) # True\n\n        self.keys = []\n        with open(opt['global_meta_info_file'], 'r') as fin:\n            for line in fin:\n                real_clip_path = '/'.join(line.split('/')[:-1])\n                clip_length = int(line.split('/')[-1])\n                self.keys.extend([f'{real_clip_path}/{clip_length:08d}/{0:08d}'])\n\n        # file client (io backend)\n        self.file_client = None\n        self.io_backend_opt = opt['io_backend']\n        self.is_lmdb = False\n        if self.io_backend_opt['type'] == 'lmdb':\n            self.is_lmdb = True\n            self.io_backend_opt['db_paths'] = [self.gt_root]\n            self.io_backend_opt['client_keys'] = ['gt']\n\n        # temporal augmentation configs\n        self.interval_list = opt['interval_list'] # [1]\n        self.random_reverse = opt['random_reverse']\n        interval_str = ','.join(str(x) for x in opt['interval_list']) # '1'\n        logger = get_root_logger()\n        logger.info(f'Temporal augmentation interval list: [{interval_str}]; '\n                    f'random reverse is {self.random_reverse}.')\n\n        # degradations\n        # blur\n        self.blur_kernel_size = opt['blur_kernel_size'] # 21\n        self.kernel_list = opt['kernel_list']           # ['iso', 'aniso']\n        self.kernel_prob = opt['kernel_prob']           # [0.5, 0.5]  \n        self.blur_x_sigma = opt['blur_x_sigma']         # [0.2, 3]\n        self.blur_y_sigma = opt['blur_y_sigma']         # [0.2, 3]\n        # noise\n        self.noise_range = opt['noise_range']           # [0, 25] \n        # resize\n        self.resize_prob = opt['resize_prob']           # [0.25, 0.25, 0.5]\n        # crf\n        self.crf_range = opt['crf_range']               # [10, 30]\n        # codec\n        self.vcodec = opt['vcodec']                     # ['libx264']\n        self.vcodec_prob = opt['vcodec_prob']           # [1]\n\n        logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, '\n                    f'x_sigma: [{\", \".join(map(str, self.blur_x_sigma))}], '\n                    f'y_sigma: [{\", \".join(map(str, self.blur_y_sigma))}], ')\n        logger.info(f'Noise: [{\", \".join(map(str, self.noise_range))}]')\n        logger.info(f'CRF compression: [{\", \".join(map(str, self.crf_range))}]')\n        logger.info(f'Codec: [{\", \".join(map(str, 
self.vcodec))}]')\n\n        if self.need_align:\n            self.dataroot_meta_info = opt['dataroot_meta_info']\n            self.face_aligner = FaceAligner(\n                upscale_factor=1,\n                face_size=512,\n                crop_ratio=(1, 1),\n                det_model='retinaface_resnet50',\n                save_ext='png',\n                use_parse=True)\n\n    def __getitem__(self, index):\n        if self.file_client is None:\n            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)\n\n        key = self.keys[index]\n        real_clip_path = '/'.join(key.split('/')[:-2])\n        clip_length = int(key.split('/')[-2])\n        frame_idx = int(key.split('/')[-1])\n        clip_name = real_clip_path.split('/')[-1]\n\n        if os.path.exists(os.path.join(self.gt_root, \"train\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"train\", clip_name))))\n        elif os.path.exists(os.path.join(self.gt_root, \"test\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"test\", clip_name))))\n        else:\n            paths = sorted(list(scandir(os.path.join(self.gt_root, clip_name))))\n\n        # determine the neighboring frames\n        interval = random.choice(self.interval_list)\n\n        # exceed the length, re-select a new clip\n        while (clip_length - self.num_frame * interval) < 0:\n            interval = random.choice(self.interval_list)\n\n        # ensure not exceeding the borders\n        start_frame_idx = frame_idx - self.num_frame // 2 * interval\n        end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n\n        while (start_frame_idx < 0) or (end_frame_idx > clip_length):\n            frame_idx = random.randint(self.num_frame // 2 * interval,\n                                       clip_length - self.num_frame // 2 * interval)\n            start_frame_idx = frame_idx - self.num_frame // 2 * interval\n            end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n        neighbor_list = list(range(start_frame_idx, end_frame_idx, interval))\n\n        # random reverse\n        if self.random_reverse and random.random() < 0.5:\n            neighbor_list.reverse()\n\n        assert len(neighbor_list) == self.num_frame, (\n            f'Wrong length of neighbor list: {len(neighbor_list)}')\n\n        # get the neighboring GT frames\n        img_gts = []\n\n        need_align = False\n        if self.need_align:\n            clip_info_path = os.path.join(self.dataroot_meta_info, f'{clip_name}.txt')\n            if os.path.exists(clip_info_path):\n                need_align = True\n                clip_info = []\n                with open(clip_info_path, 'r', encoding='utf-8') as fin:\n                    for line in fin:\n                        line = line.strip()\n                        clip_info.append(line)\n\n        for neighbor in neighbor_list:\n            img_gt_path = os.path.join(self.gt_root, clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"train\", clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"test\", clip_name, paths[neighbor])\n\n            img_gt = np.asarray(Image.open(img_gt_path))[:, :, ::-1] / 255.0\n            img_gts.append(img_gt)\n            \n        # augmentation - flip, rotate\n        img_gts = augment(img_gts, 
self.opt['use_flip'], self.opt['use_rot']) # False, False\n\n        # ------------- generate grayscale frames --------------#\n        img_lqs = img_gts\n        img_lqs = [cv2.cvtColor((_ * 255).astype('uint8'), cv2.COLOR_BGR2GRAY) for _ in  img_lqs]\n        img_lqs = [np.repeat(_[..., None], repeats=3, axis=2) / 255. for _ in img_lqs]\n\n        # -------------- Align ---------------#\n        if need_align:\n            align_lqs, align_gts = [], []\n            for frame_idx, (img_lq, img_gt) in enumerate(zip(img_lqs, img_gts)):\n                landmarks_str = clip_info[start_frame_idx + frame_idx].split(' ')\n                landmarks = np.array([float(x) for x in landmarks_str]).reshape(5, 2)\n                self.face_aligner.clean_all()\n\n                # align and warp each face\n                img_lq, img_gt = self.face_aligner.align_pair_face(img_lq, img_gt, landmarks)\n                align_lqs.append(img_lq)\n                align_gts.append(img_gt)\n            img_lqs, img_gts = align_lqs, align_gts\n\n        img_gts = img2tensor(img_gts)\n        img_lqs = img2tensor(img_lqs)\n        img_gts = torch.stack(img_gts, dim=0)\n        img_lqs = torch.stack(img_lqs, dim=0)\n\n        if self.normalize:\n            normalize(img_lqs, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n            normalize(img_gts, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n\n        return {'in': img_lqs, 'gt': img_gts, 'key': key}\n\n    def __len__(self):\n        return len(self.keys)\n"
  },
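The grayscale degradation in `ColorizationDataset.__getitem__` keeps a 3-channel layout so the LQ tensor matches the GT tensor shape. A standalone sketch of that step, using a placeholder random frame in place of a real one:

```python
# Collapse a BGR frame to grayscale, then replicate back to 3 channels.
import cv2
import numpy as np

frame = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)  # fake BGR frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                # (H, W)
lq = np.repeat(gray[..., None], repeats=3, axis=2) / 255.0    # (H, W, 3) in [0, 1]
print(lq.shape, lq.min() >= 0.0, lq.max() <= 1.0)
```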
  {
    "path": "basicsr/data/data_sampler.py",
    "content": "import math\nimport torch\nfrom torch.utils.data.sampler import Sampler\n\n\nclass EnlargedSampler(Sampler):\n    \"\"\"Sampler that restricts data loading to a subset of the dataset.\n\n    Modified from torch.utils.data.distributed.DistributedSampler\n    Support enlarging the dataset for iteration-based training, for saving\n    time when restart the dataloader after each epoch\n\n    Args:\n        dataset (torch.utils.data.Dataset): Dataset used for sampling.\n        num_replicas (int | None): Number of processes participating in\n            the training. It is usually the world_size.\n        rank (int | None): Rank of the current process within num_replicas.\n        ratio (int): Enlarging ratio. Default: 1.\n    \"\"\"\n\n    def __init__(self, dataset, num_replicas, rank, ratio=1):\n        self.dataset = dataset\n        self.num_replicas = num_replicas\n        self.rank = rank\n        self.epoch = 0\n        self.num_samples = math.ceil(len(self.dataset) * ratio / self.num_replicas)\n        self.total_size = self.num_samples * self.num_replicas\n\n    def __iter__(self):\n        # deterministically shuffle based on epoch\n        g = torch.Generator()\n        g.manual_seed(self.epoch)\n        indices = torch.randperm(self.total_size, generator=g).tolist()\n\n        dataset_size = len(self.dataset)\n        indices = [v % dataset_size for v in indices]\n\n        # subsample\n        indices = indices[self.rank:self.total_size:self.num_replicas]\n        assert len(indices) == self.num_samples\n\n        return iter(indices)\n\n    def __len__(self):\n        return self.num_samples\n\n    def set_epoch(self, epoch):\n        self.epoch = epoch\n"
  },
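How `EnlargedSampler` splits an enlarged, epoch-seeded permutation across ranks, sketched with toy numbers rather than the class itself:

```python
# EnlargedSampler's index math: enlarge by `ratio`, wrap with modulo,
# then give each rank a strided slice of the shuffled list.
import math
import torch

dataset_size, num_replicas, ratio = 10, 2, 2
num_samples = math.ceil(dataset_size * ratio / num_replicas)  # 10 per rank
total_size = num_samples * num_replicas                       # 20

g = torch.Generator()
g.manual_seed(0)  # set_epoch(0) makes the shuffle deterministic per epoch
indices = [v % dataset_size for v in torch.randperm(total_size, generator=g).tolist()]
for rank in range(num_replicas):
    print(rank, indices[rank:total_size:num_replicas])        # strided split
```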
  {
    "path": "basicsr/data/data_util.py",
    "content": "import cv2\nimport math\nimport numpy as np\nimport torch\nfrom os import path as osp\nfrom PIL import Image, ImageDraw\nfrom torch.nn import functional as F\n\nfrom basicsr.data.transforms import mod_crop\nfrom basicsr.utils import img2tensor, scandir\n\n\ndef read_img_seq(path, require_mod_crop=False, scale=1):\n    \"\"\"Read a sequence of images from a given folder path.\n\n    Args:\n        path (list[str] | str): List of image paths or image folder path.\n        require_mod_crop (bool): Require mod crop for each image.\n            Default: False.\n        scale (int): Scale factor for mod_crop. Default: 1.\n\n    Returns:\n        Tensor: size (t, c, h, w), RGB, [0, 1].\n    \"\"\"\n    if isinstance(path, list):\n        img_paths = path\n    else:\n        img_paths = sorted(list(scandir(path, full_path=True)))\n    imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths]\n    if require_mod_crop:\n        imgs = [mod_crop(img, scale) for img in imgs]\n    imgs = img2tensor(imgs, bgr2rgb=True, float32=True)\n    imgs = torch.stack(imgs, dim=0)\n    return imgs\n\n\ndef generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'):\n    \"\"\"Generate an index list for reading `num_frames` frames from a sequence\n    of images.\n\n    Args:\n        crt_idx (int): Current center index.\n        max_frame_num (int): Max number of the sequence of images (from 1).\n        num_frames (int): Reading num_frames frames.\n        padding (str): Padding mode, one of\n            'replicate' | 'reflection' | 'reflection_circle' | 'circle'\n            Examples: current_idx = 0, num_frames = 5\n            The generated frame indices under different padding mode:\n            replicate: [0, 0, 0, 1, 2]\n            reflection: [2, 1, 0, 1, 2]\n            reflection_circle: [4, 3, 0, 1, 2]\n            circle: [3, 4, 0, 1, 2]\n\n    Returns:\n        list[int]: A list of indices.\n    \"\"\"\n    assert num_frames % 2 == 1, 'num_frames should be an odd number.'\n    assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.'\n\n    max_frame_num = max_frame_num - 1  # start from 0\n    num_pad = num_frames // 2\n\n    indices = []\n    for i in range(crt_idx - num_pad, crt_idx + num_pad + 1):\n        if i < 0:\n            if padding == 'replicate':\n                pad_idx = 0\n            elif padding == 'reflection':\n                pad_idx = -i\n            elif padding == 'reflection_circle':\n                pad_idx = crt_idx + num_pad - i\n            else:\n                pad_idx = num_frames + i\n        elif i > max_frame_num:\n            if padding == 'replicate':\n                pad_idx = max_frame_num\n            elif padding == 'reflection':\n                pad_idx = max_frame_num * 2 - i\n            elif padding == 'reflection_circle':\n                pad_idx = (crt_idx - num_pad) - (i - max_frame_num)\n            else:\n                pad_idx = i - num_frames\n        else:\n            pad_idx = i\n        indices.append(pad_idx)\n    return indices\n\n\ndef paired_paths_from_lmdb(folders, keys):\n    \"\"\"Generate paired paths from lmdb files.\n\n    Contents of lmdb. 
Taking the `lq.lmdb` for example, the file structure is:\n\n    lq.lmdb\n    ├── data.mdb\n    ├── lock.mdb\n    ├── meta_info.txt\n\n    The data.mdb and lock.mdb are standard lmdb files and you can refer to\n    https://lmdb.readthedocs.io/en/release/ for more details.\n\n    The meta_info.txt is a specified txt file to record the meta information\n    of our datasets. It will be automatically created when preparing\n    datasets by our provided dataset tools.\n    Each line in the txt file records\n    1) image name (with extension),\n    2) image shape,\n    3) compression level, separated by a white space.\n    Example: `baboon.png (120,125,3) 1`\n\n    We use the image name without extension as the lmdb key.\n    Note that we use the same key for the corresponding lq and gt images.\n\n    Args:\n        folders (list[str]): A list of folder path. The order of list should\n            be [input_folder, gt_folder].\n        keys (list[str]): A list of keys identifying folders. The order should\n            be consistent with folders, e.g., ['lq', 'gt'].\n            Note that this key is different from lmdb keys.\n\n    Returns:\n        list[str]: Returned path list.\n    \"\"\"\n    assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '\n                               f'But got {len(folders)}')\n    assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}')\n    input_folder, gt_folder = folders\n    input_key, gt_key = keys\n\n    if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')):\n        raise ValueError(f'{input_key} folder and {gt_key} folder should both be in lmdb '\n                         f'formats. But received {input_key}: {input_folder}; '\n                         f'{gt_key}: {gt_folder}')\n    # ensure that the two meta_info files are the same\n    with open(osp.join(input_folder, 'meta_info.txt')) as fin:\n        input_lmdb_keys = [line.split('.')[0] for line in fin]\n    with open(osp.join(gt_folder, 'meta_info.txt')) as fin:\n        gt_lmdb_keys = [line.split('.')[0] for line in fin]\n    if set(input_lmdb_keys) != set(gt_lmdb_keys):\n        raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.')\n    else:\n        paths = []\n        for lmdb_key in sorted(input_lmdb_keys):\n            paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)]))\n        return paths\n\n\ndef paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl):\n    \"\"\"Generate paired paths from a meta information file.\n\n    Each line in the meta information file contains the image names and\n    image shape (usually for gt), separated by a white space.\n\n    Example of a meta information file:\n    ```\n    0001_s001.png (480,480,3)\n    0001_s002.png (480,480,3)\n    ```\n\n    Args:\n        folders (list[str]): A list of folder path. The order of list should\n            be [input_folder, gt_folder].\n        keys (list[str]): A list of keys identifying folders. The order should\n            be consistent with folders, e.g., ['lq', 'gt'].\n        meta_info_file (str): Path to the meta information file.\n        filename_tmpl (str): Template for each filename. Note that the\n            template excludes the file extension. 
Usually the filename_tmpl is\n            for files in the input folder.\n\n    Returns:\n        list[str]: Returned path list.\n    \"\"\"\n    assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '\n                               f'But got {len(folders)}')\n    assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}')\n    input_folder, gt_folder = folders\n    input_key, gt_key = keys\n\n    with open(meta_info_file, 'r') as fin:\n        gt_names = [line.split(' ')[0] for line in fin]\n\n    paths = []\n    for gt_name in gt_names:\n        basename, ext = osp.splitext(osp.basename(gt_name))\n        input_name = f'{filename_tmpl.format(basename)}{ext}'\n        input_path = osp.join(input_folder, input_name)\n        gt_path = osp.join(gt_folder, gt_name)\n        paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))\n    return paths\n\n\ndef paired_paths_from_folder(folders, keys, filename_tmpl):\n    \"\"\"Generate paired paths from folders.\n\n    Args:\n        folders (list[str]): A list of folder path. The order of list should\n            be [input_folder, gt_folder].\n        keys (list[str]): A list of keys identifying folders. The order should\n            be consistent with folders, e.g., ['lq', 'gt'].\n        filename_tmpl (str): Template for each filename. Note that the\n            template excludes the file extension. Usually the filename_tmpl is\n            for files in the input folder.\n\n    Returns:\n        list[str]: Returned path list.\n    \"\"\"\n    assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '\n                               f'But got {len(folders)}')\n    assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. 
' f'But got {len(keys)}')\n    input_folder, gt_folder = folders\n    input_key, gt_key = keys\n\n    input_paths = list(scandir(input_folder))\n    gt_paths = list(scandir(gt_folder))\n    assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: '\n                                               f'{len(input_paths)}, {len(gt_paths)}.')\n    paths = []\n    for gt_path in gt_paths:\n        basename, ext = osp.splitext(osp.basename(gt_path))\n        input_name = f'{filename_tmpl.format(basename)}{ext}'\n        input_path = osp.join(input_folder, input_name)\n        assert input_name in input_paths, (f'{input_name} is not in ' f'{input_key}_paths.')\n        gt_path = osp.join(gt_folder, gt_path)\n        paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))\n    return paths\n\n\ndef paths_from_folder(folder):\n    \"\"\"Generate paths from folder.\n\n    Args:\n        folder (str): Folder path.\n\n    Returns:\n        list[str]: Returned path list.\n    \"\"\"\n\n    paths = list(scandir(folder))\n    paths = [osp.join(folder, path) for path in paths]\n    return paths\n\n\ndef paths_from_lmdb(folder):\n    \"\"\"Generate paths from lmdb.\n\n    Args:\n        folder (str): Folder path.\n\n    Returns:\n        list[str]: Returned path list.\n    \"\"\"\n    if not folder.endswith('.lmdb'):\n        raise ValueError(f'Folder {folder} should be in lmdb format.')\n    with open(osp.join(folder, 'meta_info.txt')) as fin:\n        paths = [line.split('.')[0] for line in fin]\n    return paths\n\n\ndef generate_gaussian_kernel(kernel_size=13, sigma=1.6):\n    \"\"\"Generate Gaussian kernel used in `duf_downsample`.\n\n    Args:\n        kernel_size (int): Kernel size. Default: 13.\n        sigma (float): Sigma of the Gaussian kernel. Default: 1.6.\n\n    Returns:\n        np.array: The Gaussian kernel.\n    \"\"\"\n    from scipy.ndimage import filters as filters\n    kernel = np.zeros((kernel_size, kernel_size))\n    # set element at the middle to one, a dirac delta\n    kernel[kernel_size // 2, kernel_size // 2] = 1\n    # gaussian-smooth the dirac, resulting in a gaussian filter\n    return filters.gaussian_filter(kernel, sigma)\n\n\ndef duf_downsample(x, kernel_size=13, scale=4):\n    \"\"\"Downsampling with Gaussian kernel used in the DUF official code.\n\n    Args:\n        x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w).\n        kernel_size (int): Kernel size. Default: 13.\n        scale (int): Downsampling factor. 
Supported scale: (2, 3, 4).\n            Default: 4.\n\n    Returns:\n        Tensor: DUF downsampled frames.\n    \"\"\"\n    assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.'\n\n    squeeze_flag = False\n    if x.ndim == 4:\n        squeeze_flag = True\n        x = x.unsqueeze(0)\n    b, t, c, h, w = x.size()\n    x = x.view(-1, 1, h, w)\n    pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2\n    x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect')\n\n    gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale)\n    gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0)\n    x = F.conv2d(x, gaussian_filter, stride=scale)\n    x = x[:, :, 2:-2, 2:-2]\n    x = x.view(b, t, c, x.size(2), x.size(3))\n    if squeeze_flag:\n        x = x.squeeze(0)\n    return x\n\n\ndef brush_stroke_mask(img, color=(255,255,255)):\n    min_num_vertex = 8\n    max_num_vertex = 28\n    mean_angle = 2*math.pi / 5\n    angle_range = 2*math.pi / 12\n    # training large mask ratio (training setting)\n    min_width = 30\n    max_width = 70\n    # very large mask ratio (test setting and refine after 200k)\n    # min_width = 80\n    # max_width = 120\n    def generate_mask(H, W, img=None):\n        average_radius = math.sqrt(H*H+W*W) / 8\n        mask = Image.new('RGB', (W, H), 0)\n        if img is not None:\n            mask = img # Image.fromarray(img)\n\n        for _ in range(np.random.randint(1, 4)):\n            num_vertex = np.random.randint(min_num_vertex, max_num_vertex)\n            angle_min = mean_angle - np.random.uniform(0, angle_range)\n            angle_max = mean_angle + np.random.uniform(0, angle_range)\n            angles = []\n            vertex = []\n            for i in range(num_vertex):\n                if i % 2 == 0:\n                    angles.append(2*math.pi - np.random.uniform(angle_min, angle_max))\n                else:\n                    angles.append(np.random.uniform(angle_min, angle_max))\n\n            h, w = mask.size\n            vertex.append((int(np.random.randint(0, w)), int(np.random.randint(0, h))))\n            for i in range(num_vertex):\n                r = np.clip(\n                    np.random.normal(loc=average_radius, scale=average_radius//2),\n                    0, 2*average_radius)\n                new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w)\n                new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h)\n                vertex.append((int(new_x), int(new_y)))\n\n            draw = ImageDraw.Draw(mask)\n            width = int(np.random.uniform(min_width, max_width))\n            draw.line(vertex, fill=color, width=width)\n            for v in vertex:\n                draw.ellipse((v[0] - width//2,\n                              v[1] - width//2,\n                              v[0] + width//2,\n                              v[1] + width//2),\n                             fill=color)\n\n        return mask\n\n    width, height = img.size\n    mask = generate_mask(height, width, img)\n    return mask\n\n\n\ndef brush_stroke_mask_video(imgs, color=(255,255,255)):\n    min_num_vertex = 8\n    max_num_vertex = 28\n    mean_angle = 2 * math.pi / 5\n    angle_range = 2 * math.pi / 12\n    # training large mask ratio (training setting)\n    min_width = 30\n    max_width = 70\n    # very large mask ratio (test setting and refine after 200k)\n    # min_width = 80\n    # max_width = 120\n    def generate_mask(H, W, imgs=None):\n        
average_radius = math.sqrt(H*H+W*W) / 8\n        # mask = Image.new('RGB', (W, H), 0)\n        # if img is not None:\n        #     mask = img # Image.fromarray(img)\n\n        for _ in range(np.random.randint(1, 4)):\n            num_vertex = np.random.randint(min_num_vertex, max_num_vertex)\n            angle_min = mean_angle - np.random.uniform(0, angle_range)\n            angle_max = mean_angle + np.random.uniform(0, angle_range)\n            angles = []\n            vertex = []\n            for i in range(num_vertex):\n                if i % 2 == 0:\n                    angles.append(2*math.pi - np.random.uniform(angle_min, angle_max))\n                else:\n                    angles.append(np.random.uniform(angle_min, angle_max))\n\n            h, w = imgs[0].size\n            vertex.append((int(np.random.randint(0, w)), int(np.random.randint(0, h))))\n            for i in range(num_vertex):\n                r = np.clip(\n                    np.random.normal(loc=average_radius, scale=average_radius//2),\n                    0, 2*average_radius)\n                new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w)\n                new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h)\n                vertex.append((int(new_x), int(new_y)))\n\n            width_ = int(np.random.uniform(min_width, max_width))\n            for img in imgs:\n                draw = ImageDraw.Draw(img)\n                draw.line(vertex, fill=color, width=width_)\n                for v in vertex:\n                    draw.ellipse((v[0] - width_//2,\n                                 v[1] - width_//2,\n                                 v[0] + width_//2,\n                                 v[1] + width_//2),\n                                 fill=color)\n\n        return imgs\n\n    width, height = imgs[0].size\n    mask = generate_mask(height, width, imgs)\n    return mask\n\n\n\n\ndef random_ff_mask(shape, max_angle = 10, max_len = 100, max_width = 70, times = 10):\n    \"\"\"Generate a random free-form mask.\n    Args:\n        shape (tuple): (height, width) of the mask to generate.\n        max_angle (int): Max stroke angle. Default: 10.\n        max_len (int): Max stroke length. Default: 100.\n        max_width (int): Max brush width. Default: 70.\n        times (int): Upper bound on the number of strokes. Default: 10.\n    Returns:\n        np.ndarray: float32 mask of shape (height, width), with 1.0 on strokes.\n    Link:\n        https://github.com/csqiangwen/DeepFillv2_Pytorch/blob/master/train_dataset.py\n    \"\"\"\n    height = shape[0]\n    width = shape[1]\n    mask = np.zeros((height, width), np.float32)\n    times = np.random.randint(times-5, times)\n    for i in range(times):\n        start_x = np.random.randint(width)\n        start_y = np.random.randint(height)\n        for j in range(1 + np.random.randint(5)):\n            angle = 0.01 + np.random.randint(max_angle)\n            if i % 2 == 0:\n                angle = 2 * 3.1415926 - angle\n            length = 10 + np.random.randint(max_len-20, max_len)\n            brush_w = 5 + np.random.randint(max_width-30, max_width)\n            end_x = (start_x + length * np.sin(angle)).astype(np.int32)\n            end_y = (start_y + length * np.cos(angle)).astype(np.int32)\n            cv2.line(mask, (start_y, start_x), (end_y, end_x), 1.0, brush_w)\n            start_x, start_y = end_x, end_y\n    return mask.astype(np.float32)
  },
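A quick check of the four padding modes documented in `generate_frame_indices` (assumes `basicsr` is importable); the expected outputs below are the ones listed in the docstring:

```python
# Compare padding behaviour at the start of a sequence (crt_idx = 0).
from basicsr.data.data_util import generate_frame_indices

for padding in ('replicate', 'reflection', 'reflection_circle', 'circle'):
    print(padding, generate_frame_indices(0, max_frame_num=100, num_frames=5, padding=padding))
# replicate          [0, 0, 0, 1, 2]
# reflection         [2, 1, 0, 1, 2]
# reflection_circle  [4, 3, 0, 1, 2]
# circle             [3, 4, 0, 1, 2]
```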
  {
    "path": "basicsr/data/degradations.py",
    "content": "import cv2\nimport math\nimport numpy as np\nimport random\nimport torch\nfrom scipy import special\nfrom scipy.stats import multivariate_normal\nfrom torchvision.transforms.functional import rgb_to_grayscale\n\n# -------------------------------------------------------------------- #\n# --------------------------- blur kernels --------------------------- #\n# -------------------------------------------------------------------- #\n\n\n# --------------------------- util functions --------------------------- #\ndef sigma_matrix2(sig_x, sig_y, theta):\n    \"\"\"Calculate the rotated sigma matrix (two dimensional matrix).\n\n    Args:\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n\n    Returns:\n        ndarray: Rotated sigma matrix.\n    \"\"\"\n    d_matrix = np.array([[sig_x**2, 0], [0, sig_y**2]])\n    u_matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])\n    return np.dot(u_matrix, np.dot(d_matrix, u_matrix.T))\n\n\ndef mesh_grid(kernel_size):\n    \"\"\"Generate the mesh grid, centering at zero.\n\n    Args:\n        kernel_size (int):\n\n    Returns:\n        xy (ndarray): with the shape (kernel_size, kernel_size, 2)\n        xx (ndarray): with the shape (kernel_size, kernel_size)\n        yy (ndarray): with the shape (kernel_size, kernel_size)\n    \"\"\"\n    ax = np.arange(-kernel_size // 2 + 1., kernel_size // 2 + 1.)\n    xx, yy = np.meshgrid(ax, ax)\n    xy = np.hstack((xx.reshape((kernel_size * kernel_size, 1)), yy.reshape(kernel_size * kernel_size,\n                                                                           1))).reshape(kernel_size, kernel_size, 2)\n    return xy, xx, yy\n\n\ndef pdf2(sigma_matrix, grid):\n    \"\"\"Calculate PDF of the bivariate Gaussian distribution.\n\n    Args:\n        sigma_matrix (ndarray): with the shape (2, 2)\n        grid (ndarray): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size.\n\n    Returns:\n        kernel (ndarrray): un-normalized kernel.\n    \"\"\"\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.exp(-0.5 * np.sum(np.dot(grid, inverse_sigma) * grid, 2))\n    return kernel\n\n\ndef cdf2(d_matrix, grid):\n    \"\"\"Calculate the CDF of the standard bivariate Gaussian distribution.\n        Used in skewed Gaussian distribution.\n\n    Args:\n        d_matrix (ndarrasy): skew matrix.\n        grid (ndarray): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size.\n\n    Returns:\n        cdf (ndarray): skewed cdf.\n    \"\"\"\n    rv = multivariate_normal([0, 0], [[1, 0], [0, 1]])\n    grid = np.dot(grid, d_matrix)\n    cdf = rv.cdf(grid)\n    return cdf\n\n\ndef bivariate_Gaussian(kernel_size, sig_x, sig_y, theta, grid=None, isotropic=True):\n    \"\"\"Generate a bivariate isotropic or anisotropic Gaussian kernel.\n\n    In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` is ignored.\n\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. 
Default: None\n        isotropic (bool):\n\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    if isotropic:\n        sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])\n    else:\n        sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    kernel = pdf2(sigma_matrix, grid)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_generalized_Gaussian(kernel_size, sig_x, sig_y, theta, beta, grid=None, isotropic=True):\n    \"\"\"Generate a bivariate generalized Gaussian kernel.\n\n    ``Paper: Parameter Estimation For Multivariate Generalized Gaussian Distributions``\n\n    In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` are ignored.\n\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        beta (float): shape parameter, beta = 1 is the normal distribution.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    if isotropic:\n        sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])\n    else:\n        sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.exp(-0.5 * np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta))\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_plateau(kernel_size, sig_x, sig_y, theta, beta, grid=None, isotropic=True):\n    \"\"\"Generate a plateau-like anisotropic kernel.\n\n    1 / (1+x^(beta))\n\n    Reference: https://stats.stackexchange.com/questions/203629/is-there-a-plateau-shaped-distribution\n\n    In the isotropic mode, only `sig_x` is used. `sig_y` and `theta` are ignored.\n\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        beta (float): shape parameter of the plateau profile; a larger beta\n            gives a flatter top and a sharper falloff.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    if isotropic:\n        sigma_matrix = np.array([[sig_x**2, 0], [0, sig_x**2]])\n    else:\n        sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.reciprocal(np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta) + 1)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef random_bivariate_Gaussian(kernel_size,\n                              sigma_x_range,\n                              sigma_y_range,\n                              rotation_range,\n                              noise_range=None,\n                              isotropic=True):\n    \"\"\"Randomly generate bivariate isotropic or anisotropic Gaussian kernels.\n\n    In the isotropic mode, only `sigma_x_range` is used. 
`sigma_y_range` and `rotation_range` are ignored.\n\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        noise_range(tuple, optional): multiplicative kernel noise,\n            [0.75, 1.25]. Default: None\n\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    if isotropic is False:\n        assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n        assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n        sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n        rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n    else:\n        sigma_y = sigma_x\n        rotation = 0\n\n    kernel = bivariate_Gaussian(kernel_size, sigma_x, sigma_y, rotation, isotropic=isotropic)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef random_bivariate_generalized_Gaussian(kernel_size,\n                                          sigma_x_range,\n                                          sigma_y_range,\n                                          rotation_range,\n                                          beta_range,\n                                          noise_range=None,\n                                          isotropic=True):\n    \"\"\"Randomly generate bivariate generalized Gaussian kernels.\n\n    In the isotropic mode, only `sigma_x_range` is used. `sigma_y_range` and `rotation_range` are ignored.\n\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        beta_range (tuple): [0.5, 8]\n        noise_range(tuple, optional): multiplicative kernel noise,\n            [0.75, 1.25]. 
Default: None\n\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    if isotropic is False:\n        assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n        assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n        sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n        rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n    else:\n        sigma_y = sigma_x\n        rotation = 0\n\n    # assume beta_range[0] < 1 < beta_range[1]\n    if np.random.uniform() < 0.5:\n        beta = np.random.uniform(beta_range[0], 1)\n    else:\n        beta = np.random.uniform(1, beta_range[1])\n\n    kernel = bivariate_generalized_Gaussian(kernel_size, sigma_x, sigma_y, rotation, beta, isotropic=isotropic)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef random_bivariate_plateau(kernel_size,\n                             sigma_x_range,\n                             sigma_y_range,\n                             rotation_range,\n                             beta_range,\n                             noise_range=None,\n                             isotropic=True):\n    \"\"\"Randomly generate bivariate plateau kernels.\n\n    In the isotropic mode, only `sigma_x_range` is used. `sigma_y_range` and `rotation_range` are ignored.\n\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi/2, math.pi/2]\n        beta_range (tuple): [1, 4]\n        noise_range(tuple, optional): multiplicative kernel noise,\n            [0.75, 1.25]. 
Default: None\n\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    if isotropic is False:\n        assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n        assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n        sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n        rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n    else:\n        sigma_y = sigma_x\n        rotation = 0\n\n    # assume beta_range[0] < 1 < beta_range[1]\n    if np.random.uniform() < 0.5:\n        beta = np.random.uniform(beta_range[0], 1)\n    else:\n        beta = np.random.uniform(1, beta_range[1])\n\n    kernel = bivariate_plateau(kernel_size, sigma_x, sigma_y, rotation, beta, isotropic=isotropic)\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n\n    return kernel\n\n\ndef random_mixed_kernels(kernel_list,\n                         kernel_prob,\n                         kernel_size=21,\n                         sigma_x_range=(0.6, 5),\n                         sigma_y_range=(0.6, 5),\n                         rotation_range=(-math.pi, math.pi),\n                         betag_range=(0.5, 8),\n                         betap_range=(0.5, 8),\n                         noise_range=None):\n    \"\"\"Randomly generate mixed kernels.\n\n    Args:\n        kernel_list (tuple): a list of kernel types,\n            supporting ['iso', 'aniso', 'generalized_iso', 'generalized_aniso',\n            'plateau_iso', 'plateau_aniso']\n        kernel_prob (tuple): corresponding kernel probability for each\n            kernel type\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        betag_range (tuple): shape range for generalized Gaussian kernels, [0.5, 8]\n        betap_range (tuple): shape range for plateau kernels, [0.5, 8]\n        noise_range(tuple, optional): multiplicative kernel noise,\n            [0.75, 1.25]. 
Default: None\n\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    kernel_type = random.choices(kernel_list, kernel_prob)[0]\n    if kernel_type == 'iso':\n        kernel = random_bivariate_Gaussian(\n            kernel_size, sigma_x_range, sigma_y_range, rotation_range, noise_range=noise_range, isotropic=True)\n    elif kernel_type == 'aniso':\n        kernel = random_bivariate_Gaussian(\n            kernel_size, sigma_x_range, sigma_y_range, rotation_range, noise_range=noise_range, isotropic=False)\n    elif kernel_type == 'generalized_iso':\n        kernel = random_bivariate_generalized_Gaussian(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            betag_range,\n            noise_range=noise_range,\n            isotropic=True)\n    elif kernel_type == 'generalized_aniso':\n        kernel = random_bivariate_generalized_Gaussian(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            betag_range,\n            noise_range=noise_range,\n            isotropic=False)\n    elif kernel_type == 'plateau_iso':\n        kernel = random_bivariate_plateau(\n            kernel_size, sigma_x_range, sigma_y_range, rotation_range, betap_range, noise_range=None, isotropic=True)\n    elif kernel_type == 'plateau_aniso':\n        kernel = random_bivariate_plateau(\n            kernel_size, sigma_x_range, sigma_y_range, rotation_range, betap_range, noise_range=None, isotropic=False)\n    return kernel\n\n\nnp.seterr(divide='ignore', invalid='ignore')\n\n\ndef circular_lowpass_kernel(cutoff, kernel_size, pad_to=0):\n    \"\"\"2D sinc filter\n\n    Reference: https://dsp.stackexchange.com/questions/58301/2-d-circularly-symmetric-low-pass-filter\n\n    Args:\n        cutoff (float): cutoff frequency in radians (pi is max)\n        kernel_size (int): horizontal and vertical size, must be odd.\n        pad_to (int): pad kernel size to desired size, must be odd or zero.\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    kernel = np.fromfunction(\n        lambda x, y: cutoff * special.j1(cutoff * np.sqrt(\n            (x - (kernel_size - 1) / 2)**2 + (y - (kernel_size - 1) / 2)**2)) / (2 * np.pi * np.sqrt(\n                (x - (kernel_size - 1) / 2)**2 + (y - (kernel_size - 1) / 2)**2)), [kernel_size, kernel_size])\n    kernel[(kernel_size - 1) // 2, (kernel_size - 1) // 2] = cutoff**2 / (4 * np.pi)\n    kernel = kernel / np.sum(kernel)\n    if pad_to > kernel_size:\n        pad_size = (pad_to - kernel_size) // 2\n        kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))\n    return kernel\n\n\n# ------------------------------------------------------------- #\n# --------------------------- noise --------------------------- #\n# ------------------------------------------------------------- #\n\n# ----------------------- Gaussian Noise ----------------------- #\n\n\ndef generate_gaussian_noise(img, sigma=10, gray_noise=False):\n    \"\"\"Generate Gaussian noise.\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        sigma (float): Noise scale (measured in range 255). 
Default: 10.\n\n    Returns:\n        (Numpy array): Generated noise, shape (h, w, c), float32.\n    \"\"\"\n    if gray_noise:\n        noise = np.float32(np.random.randn(*(img.shape[0:2]))) * sigma / 255.\n        noise = np.expand_dims(noise, axis=2).repeat(3, axis=2)\n    else:\n        noise = np.float32(np.random.randn(*(img.shape))) * sigma / 255.\n    return noise\n\n\ndef add_gaussian_noise(img, sigma=10, clip=True, rounds=False, gray_noise=False):\n    \"\"\"Add Gaussian noise.\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        sigma (float): Noise scale (measured in range 255). Default: 10.\n\n    Returns:\n        (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],\n            float32.\n    \"\"\"\n    noise = generate_gaussian_noise(img, sigma, gray_noise)\n    out = img + noise\n    if clip and rounds:\n        out = np.clip((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = np.clip(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\ndef generate_gaussian_noise_pt(img, sigma=10, gray_noise=0):\n    \"\"\"Generate Gaussian noise (PyTorch version).\n\n    Args:\n        img (Tensor): Shape (b, c, h, w), range[0, 1], float32.\n        sigma (float | Tensor): Noise sigma (measured in range 255). Number or\n            Tensor with shape (b). Default: 10.\n        gray_noise (float | Tensor): 0-1 number or Tensor with shape (b).\n            0 for False, 1 for True. Default: 0.\n\n    Returns:\n        (Tensor): Generated noise, shape (b, c, h, w), float32.\n    \"\"\"\n    b, _, h, w = img.size()\n    if not isinstance(sigma, (float, int)):\n        sigma = sigma.view(img.size(0), 1, 1, 1)\n    if isinstance(gray_noise, (float, int)):\n        cal_gray_noise = gray_noise > 0\n    else:\n        gray_noise = gray_noise.view(b, 1, 1, 1)\n        cal_gray_noise = torch.sum(gray_noise) > 0\n\n    if cal_gray_noise:\n        # note: with a scalar sigma this view only works for b == 1; the\n        # random wrappers below pass a per-sample sigma tensor instead\n        noise_gray = torch.randn(*img.size()[2:4], dtype=img.dtype, device=img.device) * sigma / 255.\n        noise_gray = noise_gray.view(b, 1, h, w)\n\n    # always calculate color noise\n    noise = torch.randn(*img.size(), dtype=img.dtype, device=img.device) * sigma / 255.\n\n    if cal_gray_noise:\n        noise = noise * (1 - gray_noise) + noise_gray * gray_noise\n    return noise\n\n\ndef add_gaussian_noise_pt(img, sigma=10, gray_noise=0, clip=True, rounds=False):\n    \"\"\"Add Gaussian noise (PyTorch version).\n\n    Args:\n        img (Tensor): Shape (b, c, h, w), range[0, 1], float32.\n        sigma (float | Tensor): Noise sigma (measured in range 255). 
Default: 10.\n\n    Returns:\n        (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],\n            float32.\n    \"\"\"\n    noise = generate_gaussian_noise_pt(img, sigma, gray_noise)\n    out = img + noise\n    if clip and rounds:\n        out = torch.clamp((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = torch.clamp(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\n# ----------------------- Random Gaussian Noise ----------------------- #\ndef random_generate_gaussian_noise(img, sigma_range=(0, 10), gray_prob=0):\n    sigma = np.random.uniform(sigma_range[0], sigma_range[1])\n    if np.random.uniform() < gray_prob:\n        gray_noise = True\n    else:\n        gray_noise = False\n    return generate_gaussian_noise(img, sigma, gray_noise)\n\n\ndef random_add_gaussian_noise(img, sigma_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):\n    noise = random_generate_gaussian_noise(img, sigma_range, gray_prob)\n    out = img + noise\n    if clip and rounds:\n        out = np.clip((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = np.clip(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\ndef random_generate_gaussian_noise_pt(img, sigma_range=(0, 10), gray_prob=0):\n    sigma = torch.rand(\n        img.size(0), dtype=img.dtype, device=img.device) * (sigma_range[1] - sigma_range[0]) + sigma_range[0]\n    gray_noise = torch.rand(img.size(0), dtype=img.dtype, device=img.device)\n    gray_noise = (gray_noise < gray_prob).float()\n    return generate_gaussian_noise_pt(img, sigma, gray_noise)\n\n\ndef random_add_gaussian_noise_pt(img, sigma_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):\n    noise = random_generate_gaussian_noise_pt(img, sigma_range, gray_prob)\n    out = img + noise\n    if clip and rounds:\n        out = torch.clamp((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = torch.clamp(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\n# ----------------------- Poisson (Shot) Noise ----------------------- #\n\n\ndef generate_poisson_noise(img, scale=1.0, gray_noise=False):\n    \"\"\"Generate poisson noise.\n\n    Reference: https://github.com/scikit-image/scikit-image/blob/main/skimage/util/noise.py#L37-L219\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        scale (float): Noise scale. Default: 1.0.\n        gray_noise (bool): Whether to generate gray noise. Default: False.\n\n    Returns:\n        (Numpy array): Generated noise, shape (h, w, c), float32.\n    \"\"\"\n    if gray_noise:\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n    # round and clip image for counting vals correctly\n    img = np.clip((img * 255.0).round(), 0, 255) / 255.\n    vals = len(np.unique(img))\n    vals = 2**np.ceil(np.log2(vals))\n    out = np.float32(np.random.poisson(img * vals) / float(vals))\n    noise = out - img\n    if gray_noise:\n        noise = np.repeat(noise[:, :, np.newaxis], 3, axis=2)\n    return noise * scale\n\n\ndef add_poisson_noise(img, scale=1.0, clip=True, rounds=False, gray_noise=False):\n    \"\"\"Add poisson noise.\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        scale (float): Noise scale. Default: 1.0.\n        gray_noise (bool): Whether to generate gray noise. 
Default: False.\n\n    Returns:\n        (Numpy array): Returned noisy image, shape (h, w, c), range[0, 1],\n            float32.\n    \"\"\"\n    noise = generate_poisson_noise(img, scale, gray_noise)\n    out = img + noise\n    if clip and rounds:\n        out = np.clip((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = np.clip(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\ndef generate_poisson_noise_pt(img, scale=1.0, gray_noise=0):\n    \"\"\"Generate a batch of poisson noise (PyTorch version)\n\n    Args:\n        img (Tensor): Input image, shape (b, c, h, w), range [0, 1], float32.\n        scale (float | Tensor): Noise scale. Number or Tensor with shape (b).\n            Default: 1.0.\n        gray_noise (float | Tensor): 0-1 number or Tensor with shape (b).\n            0 for False, 1 for True. Default: 0.\n\n    Returns:\n        (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],\n            float32.\n    \"\"\"\n    b, _, h, w = img.size()\n    if isinstance(gray_noise, (float, int)):\n        cal_gray_noise = gray_noise > 0\n    else:\n        gray_noise = gray_noise.view(b, 1, 1, 1)\n        cal_gray_noise = torch.sum(gray_noise) > 0\n    if cal_gray_noise:\n        img_gray = rgb_to_grayscale(img, num_output_channels=1)\n        # round and clip image for counting vals correctly\n        img_gray = torch.clamp((img_gray * 255.0).round(), 0, 255) / 255.\n        # use for-loop to get the unique values for each sample\n        vals_list = [len(torch.unique(img_gray[i, :, :, :])) for i in range(b)]\n        vals_list = [2**np.ceil(np.log2(vals)) for vals in vals_list]\n        vals = img_gray.new_tensor(vals_list).view(b, 1, 1, 1)\n        out = torch.poisson(img_gray * vals) / vals\n        noise_gray = out - img_gray\n        noise_gray = noise_gray.expand(b, 3, h, w)\n\n    # always calculate color noise\n    # round and clip image for counting vals correctly\n    img = torch.clamp((img * 255.0).round(), 0, 255) / 255.\n    # use for-loop to get the unique values for each sample\n    vals_list = [len(torch.unique(img[i, :, :, :])) for i in range(b)]\n    vals_list = [2**np.ceil(np.log2(vals)) for vals in vals_list]\n    vals = img.new_tensor(vals_list).view(b, 1, 1, 1)\n    out = torch.poisson(img * vals) / vals\n    noise = out - img\n    if cal_gray_noise:\n        noise = noise * (1 - gray_noise) + noise_gray * gray_noise\n    if not isinstance(scale, (float, int)):\n        scale = scale.view(b, 1, 1, 1)\n    return noise * scale\n\n\ndef add_poisson_noise_pt(img, scale=1.0, clip=True, rounds=False, gray_noise=0):\n    \"\"\"Add poisson noise to a batch of images (PyTorch version).\n\n    Args:\n        img (Tensor): Input image, shape (b, c, h, w), range [0, 1], float32.\n        scale (float | Tensor): Noise scale. Number or Tensor with shape (b).\n            Default: 1.0.\n        gray_noise (float | Tensor): 0-1 number or Tensor with shape (b).\n            0 for False, 1 for True. 
Default: 0.\n\n    Returns:\n        (Tensor): Returned noisy image, shape (b, c, h, w), range[0, 1],\n            float32.\n    \"\"\"\n    noise = generate_poisson_noise_pt(img, scale, gray_noise)\n    out = img + noise\n    if clip and rounds:\n        out = torch.clamp((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = torch.clamp(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\n# ----------------------- Random Poisson (Shot) Noise ----------------------- #\n\n\ndef random_generate_poisson_noise(img, scale_range=(0, 1.0), gray_prob=0):\n    scale = np.random.uniform(scale_range[0], scale_range[1])\n    if np.random.uniform() < gray_prob:\n        gray_noise = True\n    else:\n        gray_noise = False\n    return generate_poisson_noise(img, scale, gray_noise)\n\n\ndef random_add_poisson_noise(img, scale_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):\n    noise = random_generate_poisson_noise(img, scale_range, gray_prob)\n    out = img + noise\n    if clip and rounds:\n        out = np.clip((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = np.clip(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\ndef random_generate_poisson_noise_pt(img, scale_range=(0, 1.0), gray_prob=0):\n    scale = torch.rand(\n        img.size(0), dtype=img.dtype, device=img.device) * (scale_range[1] - scale_range[0]) + scale_range[0]\n    gray_noise = torch.rand(img.size(0), dtype=img.dtype, device=img.device)\n    gray_noise = (gray_noise < gray_prob).float()\n    return generate_poisson_noise_pt(img, scale, gray_noise)\n\n\ndef random_add_poisson_noise_pt(img, scale_range=(0, 1.0), gray_prob=0, clip=True, rounds=False):\n    noise = random_generate_poisson_noise_pt(img, scale_range, gray_prob)\n    out = img + noise\n    if clip and rounds:\n        out = torch.clamp((out * 255.0).round(), 0, 255) / 255.\n    elif clip:\n        out = torch.clamp(out, 0, 1)\n    elif rounds:\n        out = (out * 255.0).round() / 255.\n    return out\n\n\n# ------------------------------------------------------------------------ #\n# --------------------------- JPEG compression --------------------------- #\n# ------------------------------------------------------------------------ #\n\n\ndef add_jpg_compression(img, quality=90):\n    \"\"\"Add JPG compression artifacts.\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        quality (float): JPG compression quality. 0 for lowest quality, 100 for\n            best quality. Default: 90.\n\n    Returns:\n        (Numpy array): Returned image after JPG, shape (h, w, c), range[0, 1],\n            float32.\n    \"\"\"\n    img = np.clip(img, 0, 1)\n    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), quality]\n    _, encimg = cv2.imencode('.jpg', img * 255., encode_param)\n    img = np.float32(cv2.imdecode(encimg, 1)) / 255.\n    return img\n\n\ndef random_add_jpg_compression(img, quality_range=(90, 100)):\n    \"\"\"Randomly add JPG compression artifacts.\n\n    Args:\n        img (Numpy array): Input image, shape (h, w, c), range [0, 1], float32.\n        quality_range (tuple[float] | list[float]): JPG compression quality\n            range. 
0 for lowest quality, 100 for best quality.\n            Default: (90, 100).\n\n    Returns:\n        (Numpy array): Returned image after JPG, shape (h, w, c), range[0, 1],\n            float32.\n    \"\"\"\n    quality = np.random.uniform(quality_range[0], quality_range[1])\n    return add_jpg_compression(img, quality)\n"
  },
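  {
    "path": "examples/demo_degradations.py",
    "content": "\"\"\"Hypothetical usage sketch (not part of the original release): chains blur,\nnoise and JPEG degradations from basicsr/data/degradations.py on a synthetic\nimage. The parameter ranges mirror values documented above; the image and the\nfixed JPEG quality are illustrative assumptions.\"\"\"\nimport math\n\nimport cv2\nimport numpy as np\n\nfrom basicsr.data.degradations import (add_jpg_compression,\n                                       random_add_gaussian_noise,\n                                       random_mixed_kernels)\n\nimg = np.random.rand(256, 256, 3).astype(np.float32)  # (h, w, c), range [0, 1]\n\n# Sample a 21x21 blur kernel (isotropic or anisotropic Gaussian, 50/50).\nkernel = random_mixed_kernels(['iso', 'aniso'], [0.5, 0.5], kernel_size=21,\n                              sigma_x_range=(0.2, 3), sigma_y_range=(0.2, 3),\n                              rotation_range=(-math.pi, math.pi))\nblurred = cv2.filter2D(img, -1, kernel)\n\n# Gaussian noise with sigma drawn from [0, 25] (sigma is on the 0-255 scale).\nnoisy = random_add_gaussian_noise(blurred, sigma_range=(0, 25))\n\n# JPEG round-trip at a fixed quality.\ndegraded = add_jpg_compression(noisy, quality=80)\nprint(degraded.shape, degraded.dtype)\n"
  },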
  {
    "path": "basicsr/data/gaussian_kernels.py",
    "content": "import math\nimport numpy as np\nimport random\nfrom scipy.ndimage.interpolation import shift\nfrom scipy.stats import multivariate_normal\n\n\ndef sigma_matrix2(sig_x, sig_y, theta):\n    \"\"\"Calculate the rotated sigma matrix (two dimensional matrix).\n    Args:\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n    Returns:\n        ndarray: Rotated sigma matrix.\n    \"\"\"\n    D = np.array([[sig_x**2, 0], [0, sig_y**2]])\n    U = np.array([[np.cos(theta), -np.sin(theta)],\n                  [np.sin(theta), np.cos(theta)]])\n    return np.dot(U, np.dot(D, U.T))\n\n\ndef mesh_grid(kernel_size):\n    \"\"\"Generate the mesh grid, centering at zero.\n    Args:\n        kernel_size (int):\n    Returns:\n        xy (ndarray): with the shape (kernel_size, kernel_size, 2)\n        xx (ndarray): with the shape (kernel_size, kernel_size)\n        yy (ndarray): with the shape (kernel_size, kernel_size)\n    \"\"\"\n    ax = np.arange(-kernel_size // 2 + 1., kernel_size // 2 + 1.)\n    xx, yy = np.meshgrid(ax, ax)\n    xy = np.hstack((xx.reshape((kernel_size * kernel_size, 1)),\n                    yy.reshape(kernel_size * kernel_size,\n                               1))).reshape(kernel_size, kernel_size, 2)\n    return xy, xx, yy\n\n\ndef pdf2(sigma_matrix, grid):\n    \"\"\"Calculate PDF of the bivariate Gaussian distribution.\n    Args:\n        sigma_matrix (ndarray): with the shape (2, 2)\n        grid (ndarray): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size.\n    Returns:\n        kernel (ndarrray): un-normalized kernel.\n    \"\"\"\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.exp(-0.5 * np.sum(np.dot(grid, inverse_sigma) * grid, 2))\n    return kernel\n\n\ndef cdf2(D, grid):\n    \"\"\"Calculate the CDF of the standard bivariate Gaussian distribution.\n        Used in skewed Gaussian distribution.\n    Args:\n        D (ndarrasy): skew matrix.\n        grid (ndarray): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size.\n    Returns:\n        cdf (ndarray): skewed cdf.\n    \"\"\"\n    rv = multivariate_normal([0, 0], [[1, 0], [0, 1]])\n    grid = np.dot(grid, D)\n    cdf = rv.cdf(grid)\n    return cdf\n\n\ndef bivariate_skew_Gaussian(kernel_size, sig_x, sig_y, theta, D, grid=None):\n    \"\"\"Generate a bivariate skew Gaussian kernel.\n        Described in `A multivariate skew normal distribution`_ by Shi et. al (2004).\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        D (ndarrasy): skew matrix.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    .. 
_A multivariate skew normal distribution:\n        https://www.sciencedirect.com/science/article/pii/S0047259X03001313\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    pdf = pdf2(sigma_matrix, grid)\n    cdf = cdf2(D, grid)\n    kernel = pdf * cdf\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef mass_center_shift(kernel_size, kernel):\n    \"\"\"Calculate the shift of the mass center of a kernel.\n    Args:\n        kernel_size (int):\n        kernel (ndarray): normalized kernel.\n    Returns:\n        delta_h (float):\n        delta_w (float):\n    \"\"\"\n    ax = np.arange(-kernel_size // 2 + 1., kernel_size // 2 + 1.)\n    col_sum, row_sum = np.sum(kernel, axis=0), np.sum(kernel, axis=1)\n    delta_h = np.dot(row_sum, ax)\n    delta_w = np.dot(col_sum, ax)\n    return delta_h, delta_w\n\n\ndef bivariate_skew_Gaussian_center(kernel_size,\n                                   sig_x,\n                                   sig_y,\n                                   theta,\n                                   D,\n                                   grid=None):\n    \"\"\"Generate a bivariate skew Gaussian kernel at center. Shift with nearest padding.\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        D (ndarray): skew matrix.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n    Returns:\n        kernel (ndarray): centered and normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    kernel = bivariate_skew_Gaussian(kernel_size, sig_x, sig_y, theta, D, grid)\n    delta_h, delta_w = mass_center_shift(kernel_size, kernel)\n    kernel = shift(kernel, [-delta_h, -delta_w], mode='nearest')\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_anisotropic_Gaussian(kernel_size,\n                                   sig_x,\n                                   sig_y,\n                                   theta,\n                                   grid=None):\n    \"\"\"Generate a bivariate anisotropic Gaussian kernel.\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    kernel = pdf2(sigma_matrix, grid)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_isotropic_Gaussian(kernel_size, sig, grid=None):\n    \"\"\"Generate a bivariate isotropic Gaussian kernel.\n    Args:\n        kernel_size (int):\n        sig (float):\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. 
Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = np.array([[sig**2, 0], [0, sig**2]])\n    kernel = pdf2(sigma_matrix, grid)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_generalized_Gaussian(kernel_size,\n                                   sig_x,\n                                   sig_y,\n                                   theta,\n                                   beta,\n                                   grid=None):\n    \"\"\"Generate a bivariate generalized Gaussian kernel.\n        Described in `Parameter Estimation For Multivariate Generalized Gaussian Distributions`_\n        by Pascal et al. (2013).\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        beta (float): shape parameter, beta = 1 is the normal distribution.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    .. _Parameter Estimation For Multivariate Generalized Gaussian Distributions:\n        https://arxiv.org/abs/1302.6498\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.exp(\n        -0.5 * np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta))\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_plateau_type1(kernel_size, sig_x, sig_y, theta, beta, grid=None):\n    \"\"\"Generate a plateau-like anisotropic kernel.\n    1 / (1+x^(beta))\n    Args:\n        kernel_size (int):\n        sig_x (float):\n        sig_y (float):\n        theta (float): Radian measurement.\n        beta (float): shape parameter of the plateau profile; a larger beta\n            gives a flatter top and a sharper falloff.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = sigma_matrix2(sig_x, sig_y, theta)\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.reciprocal(\n        np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta) + 1)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef bivariate_plateau_type1_iso(kernel_size, sig, beta, grid=None):\n    \"\"\"Generate a plateau-like isotropic kernel.\n    1 / (1+x^(beta))\n    Args:\n        kernel_size (int):\n        sig (float):\n        beta (float): shape parameter of the plateau profile; a larger beta\n            gives a flatter top and a sharper falloff.\n        grid (ndarray, optional): generated by :func:`mesh_grid`,\n            with the shape (K, K, 2), K is the kernel size. 
Default: None\n    Returns:\n        kernel (ndarray): normalized kernel.\n    \"\"\"\n    if grid is None:\n        grid, _, _ = mesh_grid(kernel_size)\n    sigma_matrix = np.array([[sig**2, 0], [0, sig**2]])\n    inverse_sigma = np.linalg.inv(sigma_matrix)\n    kernel = np.reciprocal(\n        np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta) + 1)\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef random_bivariate_skew_Gaussian_center(kernel_size,\n                                          sigma_x_range,\n                                          sigma_y_range,\n                                          rotation_range,\n                                          noise_range=None,\n                                          strict=False):\n    \"\"\"Randomly generate bivariate skew Gaussian kernels at center.\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n    assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n    if strict:\n        sigma_max = np.max([sigma_x, sigma_y])\n        sigma_min = np.min([sigma_x, sigma_y])\n        sigma_x, sigma_y = sigma_max, sigma_min\n    rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n\n    sigma_max = np.max([sigma_x, sigma_y])\n    thres = 3 / sigma_max\n    D = [[np.random.uniform(-thres, thres),\n          np.random.uniform(-thres, thres)],\n         [np.random.uniform(-thres, thres),\n          np.random.uniform(-thres, thres)]]\n\n    kernel = bivariate_skew_Gaussian_center(kernel_size, sigma_x, sigma_y,\n                                            rotation, D)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma_x, sigma_y, rotation, D\n    else:\n        return kernel\n\n\ndef random_bivariate_anisotropic_Gaussian(kernel_size,\n                                          sigma_x_range,\n                                          sigma_y_range,\n                                          rotation_range,\n                                          noise_range=None,\n                                          strict=False):\n    \"\"\"Randomly generate bivariate anisotropic Gaussian kernels.\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. 
Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n    assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n    if strict:\n        sigma_max = np.max([sigma_x, sigma_y])\n        sigma_min = np.min([sigma_x, sigma_y])\n        sigma_x, sigma_y = sigma_max, sigma_min\n    rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n\n    kernel = bivariate_anisotropic_Gaussian(kernel_size, sigma_x, sigma_y,\n                                            rotation)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma_x, sigma_y, rotation\n    else:\n        return kernel\n\n\ndef random_bivariate_isotropic_Gaussian(kernel_size,\n                                        sigma_range,\n                                        noise_range=None,\n                                        strict=False):\n    \"\"\"Randomly generate bivariate isotropic Gaussian kernels.\n    Args:\n        kernel_size (int):\n        sigma_range (tuple): [0.6, 5]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_range[0] < sigma_range[1], 'Wrong sigma_range.'\n    sigma = np.random.uniform(sigma_range[0], sigma_range[1])\n\n    kernel = bivariate_isotropic_Gaussian(kernel_size, sigma)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma\n    else:\n        return kernel\n\n\ndef random_bivariate_generalized_Gaussian(kernel_size,\n                                          sigma_x_range,\n                                          sigma_y_range,\n                                          rotation_range,\n                                          beta_range,\n                                          noise_range=None,\n                                          strict=False):\n    \"\"\"Randomly generate bivariate generalized Gaussian kernels.\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        beta_range (tuple): [0.5, 8]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. 
Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n    assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n    if strict:\n        sigma_max = np.max([sigma_x, sigma_y])\n        sigma_min = np.min([sigma_x, sigma_y])\n        sigma_x, sigma_y = sigma_max, sigma_min\n    rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n    if np.random.uniform() < 0.5:\n        beta = np.random.uniform(beta_range[0], 1)\n    else:\n        beta = np.random.uniform(1, beta_range[1])\n\n    kernel = bivariate_generalized_Gaussian(kernel_size, sigma_x, sigma_y,\n                                            rotation, beta)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma_x, sigma_y, rotation, beta\n    else:\n        return kernel\n\n\ndef random_bivariate_plateau_type1(kernel_size,\n                                   sigma_x_range,\n                                   sigma_y_range,\n                                   rotation_range,\n                                   beta_range,\n                                   noise_range=None,\n                                   strict=False):\n    \"\"\"Randomly generate bivariate plateau type1 kernels.\n    Args:\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi/2, math.pi/2]\n        beta_range (tuple): [1, 4]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. 
Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_x_range[0] < sigma_x_range[1], 'Wrong sigma_x_range.'\n    assert sigma_y_range[0] < sigma_y_range[1], 'Wrong sigma_y_range.'\n    assert rotation_range[0] < rotation_range[1], 'Wrong rotation_range.'\n    sigma_x = np.random.uniform(sigma_x_range[0], sigma_x_range[1])\n    sigma_y = np.random.uniform(sigma_y_range[0], sigma_y_range[1])\n    if strict:\n        sigma_max = np.max([sigma_x, sigma_y])\n        sigma_min = np.min([sigma_x, sigma_y])\n        sigma_x, sigma_y = sigma_max, sigma_min\n    rotation = np.random.uniform(rotation_range[0], rotation_range[1])\n    if np.random.uniform() < 0.5:\n        beta = np.random.uniform(beta_range[0], 1)\n    else:\n        beta = np.random.uniform(1, beta_range[1])\n\n    kernel = bivariate_plateau_type1(kernel_size, sigma_x, sigma_y, rotation,\n                                     beta)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma_x, sigma_y, rotation, beta\n    else:\n        return kernel\n\n\ndef random_bivariate_plateau_type1_iso(kernel_size,\n                                       sigma_range,\n                                       beta_range,\n                                       noise_range=None,\n                                       strict=False):\n    \"\"\"Randomly generate bivariate plateau type1 kernels (iso).\n    Args:\n        kernel_size (int):\n        sigma_range (tuple): [0.6, 5]\n        beta_range (tuple): [1, 4]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. 
Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'\n    assert sigma_range[0] < sigma_range[1], 'Wrong sigma_range.'\n    sigma = np.random.uniform(sigma_range[0], sigma_range[1])\n    beta = np.random.uniform(beta_range[0], beta_range[1])\n\n    kernel = bivariate_plateau_type1_iso(kernel_size, sigma, beta)\n\n    # add multiplicative noise\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    if strict:\n        return kernel, sigma, beta\n    else:\n        return kernel\n\n\ndef random_mixed_kernels(kernel_list,\n                         kernel_prob,\n                         kernel_size=21,\n                         sigma_x_range=[0.6, 5],\n                         sigma_y_range=[0.6, 5],\n                         rotation_range=[-math.pi, math.pi],\n                         beta_range=[0.5, 8],\n                         noise_range=None):\n    \"\"\"Randomly generate mixed kernels.\n    Args:\n        kernel_list (tuple): a list of kernel types,\n            supporting ['iso', 'aniso', 'skew', 'generalized', 'plateau_iso', 'plateau_aniso']\n        kernel_prob (tuple): corresponding kernel probability for each kernel type\n        kernel_size (int):\n        sigma_x_range (tuple): [0.6, 5]\n        sigma_y_range (tuple): [0.6, 5]\n        rotation_range (tuple): [-math.pi, math.pi]\n        beta_range (tuple): [0.5, 8]\n        noise_range(tuple, optional): multiplicative kernel noise, [0.75, 1.25]. Default: None\n    Returns:\n        kernel (ndarray):\n    \"\"\"\n    kernel_type = random.choices(kernel_list, kernel_prob)[0]\n    if kernel_type == 'iso':\n        kernel = random_bivariate_isotropic_Gaussian(\n            kernel_size, sigma_x_range, noise_range=noise_range)\n    elif kernel_type == 'aniso':\n        kernel = random_bivariate_anisotropic_Gaussian(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            noise_range=noise_range)\n    elif kernel_type == 'skew':\n        kernel = random_bivariate_skew_Gaussian_center(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            noise_range=noise_range)\n    elif kernel_type == 'generalized':\n        kernel = random_bivariate_generalized_Gaussian(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            beta_range,\n            noise_range=noise_range)\n    elif kernel_type == 'plateau_iso':\n        kernel = random_bivariate_plateau_type1_iso(\n            kernel_size, sigma_x_range, beta_range, noise_range=noise_range)\n    elif kernel_type == 'plateau_aniso':\n        kernel = random_bivariate_plateau_type1(\n            kernel_size,\n            sigma_x_range,\n            sigma_y_range,\n            rotation_range,\n            beta_range,\n            noise_range=noise_range)\n    # add multiplicative noise (note: the random_* samplers above have already\n    # applied noise_range once, so it is applied a second time here)\n    if noise_range is not None:\n        assert noise_range[0] < noise_range[1], 'Wrong noise range.'\n        noise = np.random.uniform(\n            noise_range[0], noise_range[1], size=kernel.shape)\n        kernel = kernel * noise\n    kernel = kernel / np.sum(kernel)\n    return kernel\n\n\ndef 
show_one_kernel():\n    import matplotlib.pyplot as plt\n    kernel_size = 21\n\n    # each call below overwrites `kernel`; keep the one you want to visualize\n    # bivariate skew Gaussian\n    D = [[0, 0], [0, 0]]\n    D = [[3 / 4, 0], [0, 0.5]]\n    kernel = bivariate_skew_Gaussian_center(kernel_size, 2, 4, -math.pi / 4, D)\n    # bivariate anisotropic Gaussian\n    kernel = bivariate_anisotropic_Gaussian(kernel_size, 2, 4, -math.pi / 4)\n    # bivariate isotropic Gaussian\n    kernel = bivariate_isotropic_Gaussian(kernel_size, 1)\n    # bivariate generalized Gaussian\n    kernel = bivariate_generalized_Gaussian(\n        kernel_size, 2, 4, -math.pi / 4, beta=4)\n\n    delta_h, delta_w = mass_center_shift(kernel_size, kernel)\n    print(delta_h, delta_w)\n\n    fig, axs = plt.subplots(nrows=2, ncols=2)\n    # axs.set_axis_off()\n    ax = axs[0][0]\n    im = ax.matshow(kernel, cmap='jet', origin='upper')\n    fig.colorbar(im, ax=ax)\n\n    # image\n    ax = axs[0][1]\n    kernel_vis = kernel - np.min(kernel)\n    kernel_vis = kernel_vis / np.max(kernel_vis) * 255.\n    ax.imshow(kernel_vis, interpolation='nearest')\n\n    _, xx, yy = mesh_grid(kernel_size)\n    # contour\n    ax = axs[1][0]\n    CS = ax.contour(xx, yy, kernel, origin='upper')\n    ax.clabel(CS, inline=1, fontsize=3)\n\n    # contourf\n    ax = axs[1][1]\n    kernel = kernel / np.max(kernel)\n    p = ax.contourf(\n        xx, yy, kernel, origin='upper', levels=np.linspace(-0.05, 1.05, 10))\n    fig.colorbar(p)\n\n    plt.show()\n\n\ndef show_plateau_kernel():\n    import matplotlib.pyplot as plt\n    kernel_size = 21\n\n    kernel = bivariate_plateau_type1(kernel_size, 2, 4, -math.pi / 8, 2, grid=None)\n    kernel_norm = bivariate_isotropic_Gaussian(kernel_size, 5)\n    kernel_gau = bivariate_generalized_Gaussian(\n        kernel_size, 2, 4, -math.pi / 8, 2, grid=None)\n    delta_h, delta_w = mass_center_shift(kernel_size, kernel)\n    print(delta_h, delta_w)\n\n    # kernel_slice = kernel[10, :]\n    # kernel_gau_slice = kernel_gau[10, :]\n    # kernel_norm_slice = kernel_norm[10, :]\n    # fig, ax = plt.subplots()\n    # t = list(range(1, 22))\n\n    # ax.plot(t, kernel_gau_slice)\n    # ax.plot(t, kernel_slice)\n    # ax.plot(t, kernel_norm_slice)\n\n    # t = np.arange(0, 10, 0.1)\n    # y = np.exp(-0.5 * t)\n    # y2 = np.reciprocal(1 + t)\n    # print(t.shape)\n    # print(y.shape)\n    # ax.plot(t, y)\n    # ax.plot(t, y2)\n    # plt.show()\n\n    fig, axs = plt.subplots(nrows=2, ncols=2)\n    # axs.set_axis_off()\n    ax = axs[0][0]\n    im = ax.matshow(kernel, cmap='jet', origin='upper')\n    fig.colorbar(im, ax=ax)\n\n    # image\n    ax = axs[0][1]\n    kernel_vis = kernel - np.min(kernel)\n    kernel_vis = kernel_vis / np.max(kernel_vis) * 255.\n    ax.imshow(kernel_vis, interpolation='nearest')\n\n    _, xx, yy = mesh_grid(kernel_size)\n    # contour\n    ax = axs[1][0]\n    CS = ax.contour(xx, yy, kernel, origin='upper')\n    ax.clabel(CS, inline=1, fontsize=3)\n\n    # contourf\n    ax = axs[1][1]\n    kernel = kernel / np.max(kernel)\n    p = ax.contourf(\n        xx, yy, kernel, origin='upper', levels=np.linspace(-0.05, 1.05, 10))\n    fig.colorbar(p)\n\n    plt.show()\n"
  },
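  {
    "path": "examples/demo_gaussian_kernels.py",
    "content": "\"\"\"Hypothetical usage sketch (not part of the original release): samples\nkernels from basicsr/data/gaussian_kernels.py and checks that they are valid\nconvolution kernels (non-negative, summing to one). The ranges below follow\nthe docstrings above; everything else is an illustrative assumption.\"\"\"\nimport math\n\nfrom basicsr.data.gaussian_kernels import (bivariate_isotropic_Gaussian,\n                                           random_mixed_kernels)\n\n# A fixed isotropic Gaussian: normalized, so it sums to 1.\nkernel = bivariate_isotropic_Gaussian(kernel_size=21, sig=2.0)\nassert kernel.min() >= 0 and abs(kernel.sum() - 1.0) < 1e-6\n\n# A randomly sampled kernel; the type is drawn from the given probabilities.\nkernel = random_mixed_kernels(['iso', 'aniso', 'generalized'], [0.4, 0.4, 0.2],\n                              kernel_size=21,\n                              sigma_x_range=[0.6, 5],\n                              sigma_y_range=[0.6, 5],\n                              rotation_range=[-math.pi, math.pi],\n                              beta_range=[0.5, 8])\nprint(kernel.shape, kernel.sum())\n"
  },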
  {
    "path": "basicsr/data/inpainting_dataset.py",
    "content": "import os\nimport random\nfrom pathlib import Path\n\nfrom PIL import Image\nimport cv2\nimport ffmpeg\nimport io\nimport av\nimport numpy as np\nimport torch\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.data.degradations import (random_add_gaussian_noise,\n                                       random_mixed_kernels)\nfrom basicsr.data.data_util import paths_from_folder, brush_stroke_mask, brush_stroke_mask_video, random_ff_mask\nfrom basicsr.data.transforms import augment\nfrom basicsr.utils import FileClient, get_root_logger, img2tensor, imfrombytes, scandir\nfrom basicsr.utils.registry import DATASET_REGISTRY\nfrom facelib.utils.face_restoration_helper import FaceAligner\nfrom torch.utils import data as data\n\n@DATASET_REGISTRY.register()\nclass InpaintingDataset(data.Dataset):\n    def __init__(self, opt):\n        super(InpaintingDataset, self).__init__()\n        self.opt = opt\n        self.gt_root = Path(opt['dataroot_gt'])\n\n        self.num_frame = opt['video_length'] # 5\n        self.scale = opt['scale'] # [1, 4]\n        self.need_align = opt.get('need_align', False) # False\n        self.normalize = opt.get('normalize', False) # True\n\n        self.keys = []\n        with open(opt['global_meta_info_file'], 'r') as fin:\n            for line in fin:\n                real_clip_path = '/'.join(line.split('/')[:-1])\n                clip_length = int(line.split('/')[-1])\n                self.keys.extend([f'{real_clip_path}/{clip_length:08d}/{0:08d}'])\n\n        # file client (io backend)\n        self.file_client = None\n        self.io_backend_opt = opt['io_backend']\n        self.is_lmdb = False\n        if self.io_backend_opt['type'] == 'lmdb':\n            self.is_lmdb = True\n            self.io_backend_opt['db_paths'] = [self.gt_root]\n            self.io_backend_opt['client_keys'] = ['gt']\n\n        # temporal augmentation configs\n        self.interval_list = opt['interval_list'] # [1]\n        self.random_reverse = opt['random_reverse']\n        interval_str = ','.join(str(x) for x in opt['interval_list']) # '1'\n        logger = get_root_logger()\n        logger.info(f'Temporal augmentation interval list: [{interval_str}]; '\n                    f'random reverse is {self.random_reverse}.')\n\n        # degradations\n        # blur\n        self.blur_kernel_size = opt['blur_kernel_size'] # 21\n        self.kernel_list = opt['kernel_list']           # ['iso', 'aniso']\n        self.kernel_prob = opt['kernel_prob']           # [0.5, 0.5]  \n        self.blur_x_sigma = opt['blur_x_sigma']         # [0.2, 3]\n        self.blur_y_sigma = opt['blur_y_sigma']         # [0.2, 3]\n        # noise\n        self.noise_range = opt['noise_range']           # [0, 25] \n        # resize\n        self.resize_prob = opt['resize_prob']           # [0.25, 0.25, 0.5]\n        # crf\n        self.crf_range = opt['crf_range']               # [10, 30]\n        # codec\n        self.vcodec = opt['vcodec']                     # ['libx264']\n        self.vcodec_prob = opt['vcodec_prob']           # [1]\n\n        logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, '\n                    f'x_sigma: [{\", \".join(map(str, self.blur_x_sigma))}], '\n                    f'y_sigma: [{\", \".join(map(str, self.blur_y_sigma))}], ')\n        logger.info(f'Noise: [{\", \".join(map(str, self.noise_range))}]')\n        logger.info(f'CRF compression: [{\", \".join(map(str, self.crf_range))}]')\n        logger.info(f'Codec: [{\", \".join(map(str, 
self.vcodec))}]')\n\n        if self.need_align:\n            self.dataroot_meta_info = opt['dataroot_meta_info']\n            self.face_aligner = FaceAligner(\n                upscale_factor=1,\n                face_size=512,\n                crop_ratio=(1, 1),\n                det_model='retinaface_resnet50',\n                save_ext='png',\n                use_parse=True)\n\n    def __getitem__(self, index):\n        if self.file_client is None:\n            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)\n\n        key = self.keys[index]\n        real_clip_path = '/'.join(key.split('/')[:-2])\n        clip_length = int(key.split('/')[-2])\n        frame_idx = int(key.split('/')[-1])\n        clip_name = real_clip_path.split('/')[-1]\n\n        if os.path.exists(os.path.join(self.gt_root, \"train\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"train\", clip_name))))\n        elif os.path.exists(os.path.join(self.gt_root, \"test\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"test\", clip_name))))\n        else:\n            paths = sorted(list(scandir(os.path.join(self.gt_root, clip_name))))\n\n        # determine the neighboring frames\n        interval = random.choice(self.interval_list)\n\n        # if the sampled window would exceed the clip length, re-sample the interval\n        while (clip_length - self.num_frame * interval) < 0:\n            interval = random.choice(self.interval_list)\n\n        # ensure not exceeding the borders\n        start_frame_idx = frame_idx - self.num_frame // 2 * interval\n        end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n\n        while (start_frame_idx < 0) or (end_frame_idx > clip_length):\n            frame_idx = random.randint(self.num_frame // 2 * interval,\n                                       clip_length - self.num_frame // 2 * interval)\n            start_frame_idx = frame_idx - self.num_frame // 2 * interval\n            end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n        neighbor_list = list(range(start_frame_idx, end_frame_idx, interval))\n\n        # random reverse\n        if self.random_reverse and random.random() < 0.5:\n            neighbor_list.reverse()\n\n        assert len(neighbor_list) == self.num_frame, (\n            f'Wrong length of neighbor list: {len(neighbor_list)}')\n\n        # get the neighboring GT frames\n        img_gts = []\n\n        need_align = False\n        if self.need_align:\n            clip_info_path = os.path.join(self.dataroot_meta_info, f'{clip_name}.txt')\n            if os.path.exists(clip_info_path):\n                need_align = True\n                clip_info = []\n                with open(clip_info_path, 'r', encoding='utf-8') as fin:\n                    for line in fin:\n                        line = line.strip()\n                        clip_info.append(line)\n\n        for neighbor in neighbor_list:\n            img_gt_path = os.path.join(self.gt_root, clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"train\", clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"test\", clip_name, paths[neighbor])\n\n            # PIL loads RGB; reverse the channels to BGR (BasicSR convention)\n            img_gt = np.asarray(Image.open(img_gt_path))[:, :, ::-1] / 255.0\n            img_gts.append(img_gt)\n\n        # augmentation - flip, rotate\n        img_gts = augment(img_gts, 
self.opt['use_flip'], self.opt['use_rot']) # False, False\n\n        # ------------- generate inpaint frames --------------#\n        img_lqs = img_gts\n        img_lqs = [Image.fromarray((_ * 255).astype('uint8')) for _ in img_lqs]\n        img_lqs = brush_stroke_mask_video(img_lqs)\n        img_lqs = [np.array(_) / 255. for _ in img_lqs]\n\n        # ------------ Align -------------#\n        if need_align:\n            align_lqs, align_gts = [], []\n            for frame_idx, (img_lq, img_gt) in enumerate(zip(img_lqs, img_gts)):\n                landmarks_str = clip_info[start_frame_idx + frame_idx].split(' ')\n                landmarks = np.array([float(x) for x in landmarks_str]).reshape(5, 2)\n                self.face_aligner.clean_all()\n\n                # align and warp each face\n                img_lq, img_gt = self.face_aligner.align_pair_face(img_lq, img_gt, landmarks)\n                align_lqs.append(img_lq)\n                align_gts.append(img_gt)\n            img_lqs, img_gts = align_lqs, align_gts\n\n        img_gts = img2tensor(img_gts)\n        img_lqs = img2tensor(img_lqs)\n        img_gts = torch.stack(img_gts, dim=0)\n        img_lqs = torch.stack(img_lqs, dim=0)\n\n        if self.normalize:\n            normalize(img_lqs, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n            normalize(img_gts, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n\n        return {'in': img_lqs, 'gt': img_gts, 'key': key}\n\n    def __len__(self):\n        return len(self.keys)\n"
  },
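  {
    "path": "docs/examples/key_sampling_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\nparses the key format shared by the video datasets ('clip-path/clip-length/frame-idx')\nand reproduces the centered temporal-window sampling from __getitem__. The key value\nis the example quoted in the VFHQ dataset docstring; everything else is made up.\"\"\"\nimport random\n\nkey = 'id00020#t0bbIRgKKzM#00381.txt#000.mp4/00000152/00000000'\nreal_clip_path = '/'.join(key.split('/')[:-2])\nclip_length = int(key.split('/')[-2])  # 152 frames in this clip\nframe_idx = int(key.split('/')[-1])    # anchor frame (0 here)\n\nnum_frame, interval = 5, 1\n\n# re-sample the anchor until a window of num_frame frames fits inside the clip\nstart = frame_idx - num_frame // 2 * interval\nend = frame_idx + (num_frame + 1) // 2 * interval\nwhile start < 0 or end > clip_length:\n    frame_idx = random.randint(num_frame // 2 * interval,\n                               clip_length - num_frame // 2 * interval)\n    start = frame_idx - num_frame // 2 * interval\n    end = frame_idx + (num_frame + 1) // 2 * interval\n\nneighbor_list = list(range(start, end, interval))\nassert len(neighbor_list) == num_frame\nprint(real_clip_path, neighbor_list)\n"
  },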
  {
    "path": "basicsr/data/paired_image_dataset.py",
    "content": "from torch.utils import data as data\nfrom torchvision.transforms.functional import normalize\n\nfrom basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb, paired_paths_from_meta_info_file\nfrom basicsr.data.transforms import augment, paired_random_crop\nfrom basicsr.utils import FileClient, imfrombytes, img2tensor\nfrom basicsr.utils.registry import DATASET_REGISTRY\n\n\n@DATASET_REGISTRY.register()\nclass PairedImageDataset(data.Dataset):\n    \"\"\"Paired image dataset for image restoration.\n\n    Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and\n    GT image pairs.\n\n    There are three modes:\n    1. 'lmdb': Use lmdb files.\n        If opt['io_backend'] == lmdb.\n    2. 'meta_info_file': Use meta information file to generate paths.\n        If opt['io_backend'] != lmdb and opt['meta_info_file'] is not None.\n    3. 'folder': Scan folders to generate paths.\n        The rest.\n\n    Args:\n        opt (dict): Config for train datasets. It contains the following keys:\n            dataroot_gt (str): Data root path for gt.\n            dataroot_lq (str): Data root path for lq.\n            meta_info_file (str): Path for meta information file.\n            io_backend (dict): IO backend type and other kwarg.\n            filename_tmpl (str): Template for each filename. Note that the\n                template excludes the file extension. Default: '{}'.\n            gt_size (int): Cropped patched size for gt patches.\n            use_flip (bool): Use horizontal flips.\n            use_rot (bool): Use rotation (use vertical flip and transposing h\n                and w for implementation).\n\n            scale (bool): Scale, which will be added automatically.\n            phase (str): 'train' or 'val'.\n    \"\"\"\n\n    def __init__(self, opt):\n        super(PairedImageDataset, self).__init__()\n        self.opt = opt\n        # file client (io backend)\n        self.file_client = None\n        self.io_backend_opt = opt['io_backend']\n        self.mean = opt['mean'] if 'mean' in opt else None\n        self.std = opt['std'] if 'std' in opt else None\n\n        self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq']\n        if 'filename_tmpl' in opt:\n            self.filename_tmpl = opt['filename_tmpl']\n        else:\n            self.filename_tmpl = '{}'\n\n        if self.io_backend_opt['type'] == 'lmdb':\n            self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder]\n            self.io_backend_opt['client_keys'] = ['lq', 'gt']\n            self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt'])\n        elif 'meta_info_file' in self.opt and self.opt['meta_info_file'] is not None:\n            self.paths = paired_paths_from_meta_info_file([self.lq_folder, self.gt_folder], ['lq', 'gt'],\n                                                          self.opt['meta_info_file'], self.filename_tmpl)\n        else:\n            self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl)\n\n    def __getitem__(self, index):\n        if self.file_client is None:\n            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)\n\n        scale = self.opt['scale']\n\n        # Load gt and lq images. 
Dimension order: HWC; channel order: BGR;\n        # image range: [0, 1], float32.\n        gt_path = self.paths[index]['gt_path']\n        img_bytes = self.file_client.get(gt_path, 'gt')\n        img_gt = imfrombytes(img_bytes, float32=True)\n        lq_path = self.paths[index]['lq_path']\n        img_bytes = self.file_client.get(lq_path, 'lq')\n        img_lq = imfrombytes(img_bytes, float32=True)\n\n        # augmentation for training\n        if self.opt['phase'] == 'train':\n            gt_size = self.opt['gt_size']\n            # random crop\n            img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path)\n            # flip, rotation\n            img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_flip'], self.opt['use_rot'])\n\n        # TODO: color space transform\n        # BGR to RGB, HWC to CHW, numpy to tensor\n        img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)\n        # normalize\n        if self.mean is not None or self.std is not None:\n            normalize(img_lq, self.mean, self.std, inplace=True)\n            normalize(img_gt, self.mean, self.std, inplace=True)\n\n        return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path}\n\n    def __len__(self):\n        return len(self.paths)\n"
  },
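  {
    "path": "docs/examples/paired_image_dataset_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\nbuilds PairedImageDataset in 'folder' mode. The ./data/gt and ./data/lq folders\n(matching filenames in both) and all option values are placeholders.\"\"\"\nimport torch\nfrom basicsr.data.paired_image_dataset import PairedImageDataset\n\nopt = {\n    'phase': 'train',\n    'scale': 1,\n    'gt_size': 256,\n    'use_flip': True,\n    'use_rot': False,\n    'io_backend': {'type': 'disk'},\n    'dataroot_gt': './data/gt',\n    'dataroot_lq': './data/lq',\n    'mean': [0.5, 0.5, 0.5],\n    'std': [0.5, 0.5, 0.5],\n}\ndataset = PairedImageDataset(opt)\nloader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)\nbatch = next(iter(loader))\nprint(batch['lq'].shape, batch['gt'].shape)  # (4, 3, 256, 256) each at scale 1\n"
  },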
  {
    "path": "basicsr/data/prefetch_dataloader.py",
    "content": "import queue as Queue\nimport threading\nimport torch\nfrom torch.utils.data import DataLoader\n\n\nclass PrefetchGenerator(threading.Thread):\n    \"\"\"A general prefetch generator.\n\n    Ref:\n    https://stackoverflow.com/questions/7323664/python-generator-pre-fetch\n\n    Args:\n        generator: Python generator.\n        num_prefetch_queue (int): Number of prefetch queue.\n    \"\"\"\n\n    def __init__(self, generator, num_prefetch_queue):\n        threading.Thread.__init__(self)\n        self.queue = Queue.Queue(num_prefetch_queue)\n        self.generator = generator\n        self.daemon = True\n        self.start()\n\n    def run(self):\n        for item in self.generator:\n            self.queue.put(item)\n        self.queue.put(None)\n\n    def __next__(self):\n        next_item = self.queue.get()\n        if next_item is None:\n            raise StopIteration\n        return next_item\n\n    def __iter__(self):\n        return self\n\n\nclass PrefetchDataLoader(DataLoader):\n    \"\"\"Prefetch version of dataloader.\n\n    Ref:\n    https://github.com/IgorSusmelj/pytorch-styleguide/issues/5#\n\n    TODO:\n    Need to test on single gpu and ddp (multi-gpu). There is a known issue in\n    ddp.\n\n    Args:\n        num_prefetch_queue (int): Number of prefetch queue.\n        kwargs (dict): Other arguments for dataloader.\n    \"\"\"\n\n    def __init__(self, num_prefetch_queue, **kwargs):\n        self.num_prefetch_queue = num_prefetch_queue\n        super(PrefetchDataLoader, self).__init__(**kwargs)\n\n    def __iter__(self):\n        return PrefetchGenerator(super().__iter__(), self.num_prefetch_queue)\n\n\nclass CPUPrefetcher():\n    \"\"\"CPU prefetcher.\n\n    Args:\n        loader: Dataloader.\n    \"\"\"\n\n    def __init__(self, loader):\n        self.ori_loader = loader\n        self.loader = iter(loader)\n\n    def next(self):\n        try:\n            return next(self.loader)\n        except StopIteration:\n            return None\n\n    def reset(self):\n        self.loader = iter(self.ori_loader)\n\n\nclass CUDAPrefetcher():\n    \"\"\"CUDA prefetcher.\n\n    Ref:\n    https://github.com/NVIDIA/apex/issues/304#\n\n    It may consums more GPU memory.\n\n    Args:\n        loader: Dataloader.\n        opt (dict): Options.\n    \"\"\"\n\n    def __init__(self, loader, opt):\n        self.ori_loader = loader\n        self.loader = iter(loader)\n        self.opt = opt\n        self.stream = torch.cuda.Stream()\n        self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')\n        self.preload()\n\n    def preload(self):\n        try:\n            self.batch = next(self.loader)  # self.batch is a dict\n        except StopIteration:\n            self.batch = None\n            return None\n        # put tensors to gpu\n        with torch.cuda.stream(self.stream):\n            for k, v in self.batch.items():\n                if torch.is_tensor(v):\n                    self.batch[k] = self.batch[k].to(device=self.device, non_blocking=True)\n\n    def next(self):\n        torch.cuda.current_stream().wait_stream(self.stream)\n        batch = self.batch\n        self.preload()\n        return batch\n\n    def reset(self):\n        self.loader = iter(self.ori_loader)\n        self.preload()\n"
  },
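  {
    "path": "docs/examples/prefetcher_loop_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\nthe next()/reset() loop pattern that CPUPrefetcher (and CUDAPrefetcher) expect,\nshown on a dummy in-memory dataset.\"\"\"\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom basicsr.data.prefetch_dataloader import CPUPrefetcher\n\nloader = DataLoader(TensorDataset(torch.randn(8, 3, 4, 4)), batch_size=2)\nprefetcher = CPUPrefetcher(loader)\n\nfor epoch in range(2):\n    prefetcher.reset()          # restart the underlying iterator\n    batch = prefetcher.next()   # returns None once the epoch is exhausted\n    while batch is not None:\n        (x,) = batch\n        # ... training step on x would go here ...\n        batch = prefetcher.next()\n"
  },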
  {
    "path": "basicsr/data/transforms.py",
    "content": "import cv2\nimport random\n\n\ndef mod_crop(img, scale):\n    \"\"\"Mod crop images, used during testing.\n\n    Args:\n        img (ndarray): Input image.\n        scale (int): Scale factor.\n\n    Returns:\n        ndarray: Result image.\n    \"\"\"\n    img = img.copy()\n    if img.ndim in (2, 3):\n        h, w = img.shape[0], img.shape[1]\n        h_remainder, w_remainder = h % scale, w % scale\n        img = img[:h - h_remainder, :w - w_remainder, ...]\n    else:\n        raise ValueError(f'Wrong img ndim: {img.ndim}.')\n    return img\n\n\ndef paired_random_crop(img_gts, img_lqs, gt_patch_size, scale, gt_path):\n    \"\"\"Paired random crop.\n\n    It crops lists of lq and gt images with corresponding locations.\n\n    Args:\n        img_gts (list[ndarray] | ndarray): GT images. Note that all images\n            should have the same shape. If the input is an ndarray, it will\n            be transformed to a list containing itself.\n        img_lqs (list[ndarray] | ndarray): LQ images. Note that all images\n            should have the same shape. If the input is an ndarray, it will\n            be transformed to a list containing itself.\n        gt_patch_size (int): GT patch size.\n        scale (int): Scale factor.\n        gt_path (str): Path to ground-truth.\n\n    Returns:\n        list[ndarray] | ndarray: GT images and LQ images. If returned results\n            only have one element, just return ndarray.\n    \"\"\"\n\n    if not isinstance(img_gts, list):\n        img_gts = [img_gts]\n    if not isinstance(img_lqs, list):\n        img_lqs = [img_lqs]\n\n    h_lq, w_lq, _ = img_lqs[0].shape\n    h_gt, w_gt, _ = img_gts[0].shape\n    lq_patch_size = gt_patch_size // scale\n\n    if h_gt != h_lq * scale or w_gt != w_lq * scale:\n        raise ValueError(f'Scale mismatches. GT ({h_gt}, {w_gt}) is not {scale}x ',\n                         f'multiplication of LQ ({h_lq}, {w_lq}).')\n    if h_lq < lq_patch_size or w_lq < lq_patch_size:\n        raise ValueError(f'LQ ({h_lq}, {w_lq}) is smaller than patch size '\n                         f'({lq_patch_size}, {lq_patch_size}). '\n                         f'Please remove {gt_path}.')\n\n    # randomly choose top and left coordinates for lq patch\n    top = random.randint(0, h_lq - lq_patch_size)\n    left = random.randint(0, w_lq - lq_patch_size)\n\n    # crop lq patch\n    img_lqs = [v[top:top + lq_patch_size, left:left + lq_patch_size, ...] for v in img_lqs]\n\n    # crop corresponding gt patch\n    top_gt, left_gt = int(top * scale), int(left * scale)\n    img_gts = [v[top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size, ...] for v in img_gts]\n    if len(img_gts) == 1:\n        img_gts = img_gts[0]\n    if len(img_lqs) == 1:\n        img_lqs = img_lqs[0]\n    return img_gts, img_lqs\n\n\ndef augment(imgs, hflip=True, rotation=True, flows=None, return_status=False):\n    \"\"\"Augment: horizontal flips OR rotate (0, 90, 180, 270 degrees).\n\n    We use vertical flip and transpose for rotation implementation.\n    All the images in the list use the same augmentation.\n\n    Args:\n        imgs (list[ndarray] | ndarray): Images to be augmented. If the input\n            is an ndarray, it will be transformed to a list.\n        hflip (bool): Horizontal flip. Default: True.\n        rotation (bool): Ratotation. Default: True.\n        flows (list[ndarray]: Flows to be augmented. If the input is an\n            ndarray, it will be transformed to a list.\n            Dimension is (h, w, 2). 
Default: None.\n        return_status (bool): Return the status of flip and rotation.\n            Default: False.\n\n    Returns:\n        list[ndarray] | ndarray: Augmented images and flows. If returned\n            results only have one element, just return ndarray.\n\n    \"\"\"\n    hflip = hflip and random.random() < 0.5\n    vflip = rotation and random.random() < 0.5\n    rot90 = rotation and random.random() < 0.5\n\n    def _augment(img):\n        if hflip:  # horizontal\n            cv2.flip(img, 1, img)\n        if vflip:  # vertical\n            cv2.flip(img, 0, img)\n        if rot90:\n            img = img.transpose(1, 0, 2)\n        return img\n\n    def _augment_flow(flow):\n        if hflip:  # horizontal\n            cv2.flip(flow, 1, flow)\n            flow[:, :, 0] *= -1\n        if vflip:  # vertical\n            cv2.flip(flow, 0, flow)\n            flow[:, :, 1] *= -1\n        if rot90:\n            flow = flow.transpose(1, 0, 2)\n            flow = flow[:, :, [1, 0]]\n        return flow\n\n    if not isinstance(imgs, list):\n        imgs = [imgs]\n    imgs = [_augment(img) for img in imgs]\n    if len(imgs) == 1:\n        imgs = imgs[0]\n\n    if flows is not None:\n        if not isinstance(flows, list):\n            flows = [flows]\n        flows = [_augment_flow(flow) for flow in flows]\n        if len(flows) == 1:\n            flows = flows[0]\n        return imgs, flows\n    else:\n        if return_status:\n            return imgs, (hflip, vflip, rot90)\n        else:\n            return imgs\n\n\ndef img_rotate(img, angle, center=None, scale=1.0):\n    \"\"\"Rotate image.\n\n    Args:\n        img (ndarray): Image to be rotated.\n        angle (float): Rotation angle in degrees. Positive values mean\n            counter-clockwise rotation.\n        center (tuple[int]): Rotation center. If the center is None,\n            initialize it as the center of the image. Default: None.\n        scale (float): Isotropic scale factor. Default: 1.0.\n    \"\"\"\n    (h, w) = img.shape[:2]\n\n    if center is None:\n        center = (w // 2, h // 2)\n\n    matrix = cv2.getRotationMatrix2D(center, angle, scale)\n    rotated_img = cv2.warpAffine(img, matrix, (w, h))\n    return rotated_img\n"
  },
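  {
    "path": "docs/examples/transforms_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\nexercises paired_random_crop and augment on random arrays to show the shape\ncontract between LQ and GT at a given scale.\"\"\"\nimport numpy as np\nfrom basicsr.data.transforms import augment, paired_random_crop\n\nscale, gt_patch = 4, 128\nimg_gt = np.random.rand(256, 256, 3).astype(np.float32)\nimg_lq = np.random.rand(64, 64, 3).astype(np.float32)  # GT size / scale\n\n# crops are taken at corresponding locations; LQ gets gt_patch // scale pixels\nimg_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_patch, scale, 'demo')\nassert img_gt.shape == (128, 128, 3) and img_lq.shape == (32, 32, 3)\n\n# the same flip/rotation decision is applied to every image in the list\n(img_gt, img_lq), status = augment([img_gt, img_lq], hflip=True, rotation=True, return_status=True)\nprint(status)  # (hflip, vflip, rot90) booleans\n"
  },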
  {
    "path": "basicsr/data/vfhq_dataset.py",
    "content": "import os\nimport random\nfrom pathlib import Path\n\nfrom PIL import Image\nimport cv2\nimport ffmpeg\nimport io\nimport av\nimport numpy as np\nimport torch\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.data.degradations import (random_add_gaussian_noise,\n                                       random_mixed_kernels)\nfrom basicsr.data.transforms import augment\nfrom basicsr.utils import FileClient, get_root_logger, img2tensor, imfrombytes, scandir\nfrom basicsr.utils.registry import DATASET_REGISTRY\nfrom facelib.utils.face_restoration_helper import FaceAligner\nfrom torch.utils import data as data\n\n\n@DATASET_REGISTRY.register()\nclass VFHQRealDegradationDatasetNew(data.Dataset):\n    \"\"\"Support for blind setting adopted in paper. We excludes the random scale compared to GFPGAN.\n\n    This dataset is adopted in BasicVSR.\n\n    The degradation order is blur+downsample+noise\n\n    Directly read image by cv2. Generate LR images online.\n    NOTE: The specific degradation order is blur-noise-downsample-crf-upsample\n\n    The keys are generated from a meta info txt file.\n\n    Key format: subfolder-name/clip-length/frame-name\n    Key examples: \"id00020#t0bbIRgKKzM#00381.txt#000.mp4/00000152/00000000\"\n    GT (gt): Ground-Truth;\n    LQ (lq): Low-Quality, e.g., low-resolution/blurry/noisy/compressed frames.\n    Args:\n        opt (dict): Config for train dataset. It contains the following keys:\n            dataroot_gt (str): Data root path for gt.\n            dataroot_clip_meta_info (srt): Data root path for meta info of each gt clip.\n            global_meta_info_file (str): Path for global meta information file.\n            io_backend (dict): IO backend type and other kwarg.\n            num_frame (int): Window size for input frames.\n            interval_list (list): Interval list for temporal augmentation.\n            random_reverse (bool): Random reverse input frames.\n            use_flip (bool): Use horizontal flips.\n            use_rot (bool): Use rotation (use vertical flip and transposing h\n                and w for implementation).\n    \"\"\"\n\n    def __init__(self, opt):\n        super(VFHQRealDegradationDatasetNew, self).__init__()\n        self.opt = opt\n        self.gt_root = Path(opt['dataroot_gt'])\n\n        self.num_frame = opt['video_length'] # 5\n        self.scale = opt['scale'] # [1, 4]\n        self.need_align = opt.get('need_align', False) # False\n        self.normalize = opt.get('normalize', False) # True\n\n        self.keys = []\n        with open(opt['global_meta_info_file'], 'r') as fin:\n            for line in fin:\n                real_clip_path = '/'.join(line.split('/')[:-1])\n                clip_length = int(line.split('/')[-1])\n                self.keys.extend([f'{real_clip_path}/{clip_length:08d}/{0:08d}'])\n\n        # file client (io backend)\n        self.file_client = None\n        self.io_backend_opt = opt['io_backend']\n        self.is_lmdb = False\n        if self.io_backend_opt['type'] == 'lmdb':\n            self.is_lmdb = True\n            self.io_backend_opt['db_paths'] = [self.gt_root]\n            self.io_backend_opt['client_keys'] = ['gt']\n\n        # temporal augmentation configs\n        self.interval_list = opt['interval_list'] # [1]\n        self.random_reverse = opt['random_reverse']\n        interval_str = ','.join(str(x) for x in opt['interval_list']) # '1'\n        logger = get_root_logger()\n        logger.info(f'Temporal augmentation interval list: 
[{interval_str}]; '\n                    f'random reverse is {self.random_reverse}.')\n\n        # degradations\n        # blur\n        self.blur_kernel_size = opt['blur_kernel_size'] # 21\n        self.kernel_list = opt['kernel_list']           # ['iso', 'aniso']\n        self.kernel_prob = opt['kernel_prob']           # [0.5, 0.5]  \n        self.blur_x_sigma = opt['blur_x_sigma']         # [0.2, 3]\n        self.blur_y_sigma = opt['blur_y_sigma']         # [0.2, 3]\n        # noise\n        self.noise_range = opt['noise_range']           # [0, 25] \n        # resize\n        self.resize_prob = opt['resize_prob']           # [0.25, 0.25, 0.5]\n        # crf\n        self.crf_range = opt['crf_range']               # [10, 30]\n        # codec\n        self.vcodec = opt['vcodec']                     # ['libx264']\n        self.vcodec_prob = opt['vcodec_prob']           # [1]\n\n        logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, '\n                    f'x_sigma: [{\", \".join(map(str, self.blur_x_sigma))}], '\n                    f'y_sigma: [{\", \".join(map(str, self.blur_y_sigma))}], ')\n        logger.info(f'Noise: [{\", \".join(map(str, self.noise_range))}]')\n        logger.info(f'CRF compression: [{\", \".join(map(str, self.crf_range))}]')\n        logger.info(f'Codec: [{\", \".join(map(str, self.vcodec))}]')\n\n        if self.need_align:\n            self.dataroot_meta_info = opt['dataroot_meta_info']\n            self.face_aligner = FaceAligner(\n                upscale_factor=1,\n                face_size=512,\n                crop_ratio=(1, 1),\n                det_model='retinaface_resnet50',\n                save_ext='png',\n                use_parse=True)\n\n    def __getitem__(self, index):\n        if self.file_client is None:\n            self.file_client = FileClient(\n                self.io_backend_opt.pop('type'), **self.io_backend_opt)\n\n        key = self.keys[index]\n        real_clip_path = '/'.join(key.split('/')[:-2])\n        clip_length = int(key.split('/')[-2])\n        frame_idx = int(key.split('/')[-1])\n        clip_name = real_clip_path.split('/')[-1]\n\n        if os.path.exists(os.path.join(self.gt_root, \"train\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"train\", clip_name))))\n        elif os.path.exists(os.path.join(self.gt_root, \"test\", clip_name)):\n            paths = sorted(list(scandir(os.path.join(self.gt_root, \"test\", clip_name))))\n        else:\n            paths = sorted(list(scandir(os.path.join(self.gt_root, clip_name))))\n\n        # determine the neighboring frames\n        interval = random.choice(self.interval_list)\n\n        # exceed the length, re-select a new clip\n        while (clip_length - self.num_frame * interval) < 0:\n            interval = random.choice(self.interval_list)\n\n        # ensure not exceeding the borders\n        start_frame_idx = frame_idx - self.num_frame // 2 * interval\n        end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n\n        while (start_frame_idx < 0) or (end_frame_idx > clip_length):\n            frame_idx = random.randint(self.num_frame // 2 * interval,\n                                       clip_length - self.num_frame // 2 * interval)\n            start_frame_idx = frame_idx - self.num_frame // 2 * interval\n            end_frame_idx = frame_idx + (self.num_frame + 1) // 2 * interval\n        neighbor_list = list(range(start_frame_idx, end_frame_idx, interval))\n\n        # random reverse\n        if 
self.random_reverse and random.random() < 0.5:\n            neighbor_list.reverse()\n\n        assert len(neighbor_list) == self.num_frame, (\n            f'Wrong length of neighbor list: {len(neighbor_list)}')\n\n        # get the neighboring GT frames\n        img_gts = []\n\n        need_align = False\n        if self.need_align:\n            clip_info_path = os.path.join(self.dataroot_meta_info, f'{clip_name}.txt')\n            if os.path.exists(clip_info_path):\n                need_align = True\n                clip_info = []\n                with open(clip_info_path, 'r', encoding='utf-8') as fin:\n                    for line in fin:\n                        line = line.strip()\n                        clip_info.append(line)\n\n        for neighbor in neighbor_list:\n            img_gt_path = os.path.join(self.gt_root, clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"train\", clip_name, paths[neighbor])\n            if not os.path.exists(img_gt_path):\n                img_gt_path = os.path.join(self.gt_root, \"test\", clip_name, paths[neighbor])\n\n            img_gt = np.asarray(Image.open(img_gt_path))[:, :, ::-1] / 255.0\n            img_gts.append(img_gt)\n\n        # augmentation - flip, rotate\n        img_gts = augment(img_gts, self.opt['use_flip'], self.opt['use_rot']) # False, False\n\n        # ------------- generate LQ frames --------------#\n        # add blur\n        kernel = random_mixed_kernels(self.kernel_list,\n                                      self.kernel_prob,      # [0.7, 0.3]\n                                      self.blur_kernel_size, # 21\n                                      self.blur_x_sigma,     # [0.1, 10]\n                                      self.blur_y_sigma)     # [0.1, 10]\n        img_lqs = [cv2.filter2D(v, -1, kernel) for v in img_gts]\n\n        # downsample\n        ori_height, ori_width = img_gts[0].shape[0:2]\n        resize_type = random.choices([cv2.INTER_AREA,\n                                      cv2.INTER_LINEAR,\n                                      cv2.INTER_CUBIC], self.resize_prob)[0]\n\n        # ensure the resized_height and resized_width are even numbers\n        # scale = np.random.uniform(self.scale)\n        resized_height = int(ori_height // self.scale) // 2 * 2\n        resized_width = int(ori_width // self.scale) // 2 * 2\n        img_lqs = [cv2.resize(v, (resized_width, resized_height),\n                              interpolation=resize_type) for v in img_lqs]\n\n        # add noise\n        img_lqs = [random_add_gaussian_noise(v,\n                                             self.noise_range, # [0, 10]\n                                             gray_prob=0.5,\n                                             clip=True,\n                                             rounds=False) for v in img_lqs] # noise_range: [0, 25]\n\n        # ffmpeg\n        crf = np.random.randint(self.crf_range[0], self.crf_range[1]) # [18, 25]\n        codec = random.choices(self.vcodec, self.vcodec_prob)[0] # 'libx264'\n\n        buf = io.BytesIO()\n        with av.open(buf, 'w', 'mp4') as container:\n            stream = container.add_stream(codec, rate=1)\n            stream.height = resized_height\n            stream.width = resized_width\n            stream.pix_fmt = 'yuv420p'\n            stream.options = {'crf': str(crf)}\n\n            for img_lq in img_lqs:\n                img_lq = np.clip(img_lq * 255, 0, 255).astype(np.uint8)\n                
frame = av.VideoFrame.from_ndarray(img_lq, format='rgb24')\n                frame.pict_type = av.video.frame.PictureType.NONE\n                for packet in stream.encode(frame):\n                    container.mux(packet)\n\n            # Flush stream\n            for packet in stream.encode():\n                container.mux(packet)\n\n        img_lqs = []\n        with av.open(buf, 'r', 'mp4') as container:\n            if container.streams.video:\n                for frame in container.decode(**{'video': 0}):\n                    frame = frame.to_rgb().to_ndarray()\n                    frame = cv2.resize(frame, (ori_width, ori_height), interpolation=resize_type) # upsample\n                    img_lqs.append(frame / 255.)\n\n        assert len(img_lqs) == len(img_gts), 'Wrong length'\n        # ------------ Align -------------#\n        if need_align:\n            align_lqs, align_gts = [], []\n            for frame_idx, (img_lq, img_gt) in enumerate(zip(img_lqs, img_gts)):\n                landmarks_str = clip_info[start_frame_idx + frame_idx].split(' ')\n                landmarks = np.array([float(x) for x in landmarks_str]).reshape(5, 2)\n                self.face_aligner.clean_all()\n                # align and warp each face\n                img_lq, img_gt = self.face_aligner.align_pair_face(\n                    img_lq, img_gt, landmarks)\n                align_lqs.append(img_lq)\n                align_gts.append(img_gt)\n            img_lqs, img_gts = align_lqs, align_gts\n\n        img_gts = img2tensor(img_gts)\n        img_lqs = img2tensor(img_lqs)\n        img_gts = torch.stack(img_gts, dim=0)\n        img_lqs = torch.stack(img_lqs, dim=0)\n\n        if self.normalize:\n            normalize(img_lqs, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n            normalize(img_gts, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], inplace=True)\n\n        return {'in': img_lqs, 'gt': img_gts, 'key': key}\n\n    def __len__(self):\n        return len(self.keys)\n"
  },
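  {
    "path": "docs/examples/av_compression_sketch.py",
    "content": "\"\"\"Illustrative sketch (an assumption, not part of the original sources): the\nin-memory video-compression degradation used by VFHQRealDegradationDatasetNew,\nreduced to a standalone PyAV encode/decode round trip. Codec, CRF and frame\nsizes are placeholder values.\"\"\"\nimport io\n\nimport av\nimport numpy as np\n\nframes = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]\n\nbuf = io.BytesIO()\nwith av.open(buf, 'w', 'mp4') as container:\n    stream = container.add_stream('libx264', rate=25)\n    stream.height, stream.width = 64, 64\n    stream.pix_fmt = 'yuv420p'\n    stream.options = {'crf': '28'}  # higher CRF -> stronger compression\n    for arr in frames:\n        frame = av.VideoFrame.from_ndarray(arr, format='rgb24')\n        for packet in stream.encode(frame):\n            container.mux(packet)\n    for packet in stream.encode():  # flush the encoder\n        container.mux(packet)\n\nbuf.seek(0)\ndecoded = []\nwith av.open(buf, 'r', 'mp4') as container:\n    for frame in container.decode(video=0):\n        decoded.append(frame.to_rgb().to_ndarray())\n\nassert len(decoded) == len(frames)\n"
  },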
  {
    "path": "basicsr/losses/__init__.py",
    "content": "from copy import deepcopy\n\nfrom basicsr.utils import get_root_logger\nfrom basicsr.utils.registry import LOSS_REGISTRY\nfrom .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize,\n                     gradient_penalty_loss, r1_penalty)\n\n__all__ = [\n    'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss',\n    'r1_penalty', 'g_path_regularize'\n]\n\n\ndef build_loss(opt):\n    \"\"\"Build loss from options.\n\n    Args:\n        opt (dict): Configuration. It must constain:\n            type (str): Model type.\n    \"\"\"\n    opt = deepcopy(opt)\n    loss_type = opt.pop('type')\n    loss = LOSS_REGISTRY.get(loss_type)(**opt)\n    logger = get_root_logger()\n    logger.info(f'Loss [{loss.__class__.__name__}] is created.')\n    return loss\n"
  },
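  {
    "path": "docs/examples/build_loss_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\nbuild_loss resolves the 'type' key against LOSS_REGISTRY and forwards the\nremaining options as constructor kwargs.\"\"\"\nimport torch\nfrom basicsr.losses import build_loss\n\ncri_pix = build_loss({'type': 'L1Loss', 'loss_weight': 1.0, 'reduction': 'mean'})\npred, target = torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8)\nprint(cri_pix(pred, target))  # scalar tensor: loss_weight * mean |pred - target|\n"
  },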
  {
    "path": "basicsr/losses/loss_util.py",
    "content": "import functools\nfrom torch.nn import functional as F\n\n\ndef reduce_loss(loss, reduction):\n    \"\"\"Reduce loss as specified.\n\n    Args:\n        loss (Tensor): Elementwise loss tensor.\n        reduction (str): Options are 'none', 'mean' and 'sum'.\n\n    Returns:\n        Tensor: Reduced loss tensor.\n    \"\"\"\n    reduction_enum = F._Reduction.get_enum(reduction)\n    # none: 0, elementwise_mean:1, sum: 2\n    if reduction_enum == 0:\n        return loss\n    elif reduction_enum == 1:\n        return loss.mean()\n    else:\n        return loss.sum()\n\n\ndef weight_reduce_loss(loss, weight=None, reduction='mean'):\n    \"\"\"Apply element-wise weight and reduce loss.\n\n    Args:\n        loss (Tensor): Element-wise loss.\n        weight (Tensor): Element-wise weights. Default: None.\n        reduction (str): Same as built-in losses of PyTorch. Options are\n            'none', 'mean' and 'sum'. Default: 'mean'.\n\n    Returns:\n        Tensor: Loss values.\n    \"\"\"\n    # if weight is specified, apply element-wise weight\n    if weight is not None:\n        assert weight.dim() == loss.dim()\n        assert weight.size(1) == 1 or weight.size(1) == loss.size(1)\n        loss = loss * weight\n\n    # if weight is not specified or reduction is sum, just reduce the loss\n    if weight is None or reduction == 'sum':\n        loss = reduce_loss(loss, reduction)\n    # if reduction is mean, then compute mean over weight region\n    elif reduction == 'mean':\n        if weight.size(1) > 1:\n            weight = weight.sum()\n        else:\n            weight = weight.sum() * loss.size(1)\n        loss = loss.sum() / weight\n\n    return loss\n\n\ndef weighted_loss(loss_func):\n    \"\"\"Create a weighted version of a given loss function.\n\n    To use this decorator, the loss function must have the signature like\n    `loss_func(pred, target, **kwargs)`. The function only needs to compute\n    element-wise loss without any reduction. This decorator will add weight\n    and reduction arguments to the function. The decorated function will have\n    the signature like `loss_func(pred, target, weight=None, reduction='mean',\n    **kwargs)`.\n\n    :Example:\n\n    >>> import torch\n    >>> @weighted_loss\n    >>> def l1_loss(pred, target):\n    >>>     return (pred - target).abs()\n\n    >>> pred = torch.Tensor([0, 2, 3])\n    >>> target = torch.Tensor([1, 1, 1])\n    >>> weight = torch.Tensor([1, 0, 1])\n\n    >>> l1_loss(pred, target)\n    tensor(1.3333)\n    >>> l1_loss(pred, target, weight)\n    tensor(1.5000)\n    >>> l1_loss(pred, target, reduction='none')\n    tensor([1., 1., 2.])\n    >>> l1_loss(pred, target, weight, reduction='sum')\n    tensor(3.)\n    \"\"\"\n\n    @functools.wraps(loss_func)\n    def wrapper(pred, target, weight=None, reduction='mean', **kwargs):\n        # get element-wise loss\n        loss = loss_func(pred, target, **kwargs)\n        loss = weight_reduce_loss(loss, weight, reduction)\n        return loss\n\n    return wrapper\n"
  },
  {
    "path": "basicsr/losses/losses.py",
    "content": "import math\nimport lpips\nimport torch\nfrom torch import autograd as autograd\nfrom torch import nn as nn\nfrom torch.nn import functional as F\n\nfrom basicsr.archs.vgg_arch import VGGFeatureExtractor\nfrom basicsr.utils.registry import LOSS_REGISTRY\nfrom .loss_util import weighted_loss\n# from basicsr.losses.loss_util import weighted_loss\n\n_reduction_modes = ['none', 'mean', 'sum']\n\n\n@weighted_loss\ndef l1_loss(pred, target):\n    return F.l1_loss(pred, target, reduction='none')\n\n\n@weighted_loss\ndef mse_loss(pred, target):\n    return F.mse_loss(pred, target, reduction='none')\n\n\n@weighted_loss\ndef charbonnier_loss(pred, target, eps=1e-12):\n    return torch.sqrt((pred - target)**2 + eps)\n\n\n@LOSS_REGISTRY.register()\nclass L1Loss(nn.Module):\n    \"\"\"L1 (mean absolute error, MAE) loss.\n\n    Args:\n        loss_weight (float): Loss weight for L1 loss. Default: 1.0.\n        reduction (str): Specifies the reduction to apply to the output.\n            Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'.\n    \"\"\"\n\n    def __init__(self, loss_weight=1.0, reduction='mean'):\n        super(L1Loss, self).__init__()\n        if reduction not in ['none', 'mean', 'sum']:\n            raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')\n\n        self.loss_weight = loss_weight\n        self.reduction = reduction\n\n    def forward(self, pred, target, weight=None, **kwargs):\n        \"\"\"\n        Args:\n            pred (Tensor): of shape (N, C, H, W). Predicted tensor.\n            target (Tensor): of shape (N, C, H, W). Ground truth tensor.\n            weight (Tensor, optional): of shape (N, C, H, W). Element-wise\n                weights. Default: None.\n        \"\"\"\n        return self.loss_weight * l1_loss(pred, target, weight, reduction=self.reduction)\n\n\n@LOSS_REGISTRY.register()\nclass MSELoss(nn.Module):\n    \"\"\"MSE (L2) loss.\n\n    Args:\n        loss_weight (float): Loss weight for MSE loss. Default: 1.0.\n        reduction (str): Specifies the reduction to apply to the output.\n            Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'.\n    \"\"\"\n\n    def __init__(self, loss_weight=1.0, reduction='mean'):\n        super(MSELoss, self).__init__()\n        if reduction not in ['none', 'mean', 'sum']:\n            raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')\n\n        self.loss_weight = loss_weight\n        self.reduction = reduction\n\n    def forward(self, pred, target, weight=None, **kwargs):\n        \"\"\"\n        Args:\n            pred (Tensor): of shape (N, C, H, W). Predicted tensor.\n            target (Tensor): of shape (N, C, H, W). Ground truth tensor.\n            weight (Tensor, optional): of shape (N, C, H, W). Element-wise\n                weights. Default: None.\n        \"\"\"\n        return self.loss_weight * mse_loss(pred, target, weight, reduction=self.reduction)\n\n\n@LOSS_REGISTRY.register()\nclass CharbonnierLoss(nn.Module):\n    \"\"\"Charbonnier loss (one variant of Robust L1Loss, a differentiable\n    variant of L1Loss).\n\n    Described in \"Deep Laplacian Pyramid Networks for Fast and Accurate\n        Super-Resolution\".\n\n    Args:\n        loss_weight (float): Loss weight for L1 loss. Default: 1.0.\n        reduction (str): Specifies the reduction to apply to the output.\n            Supported choices are 'none' | 'mean' | 'sum'. 
Default: 'mean'.\n        eps (float): A value used to control the curvature near zero.\n            Default: 1e-12.\n    \"\"\"\n\n    def __init__(self, loss_weight=1.0, reduction='mean', eps=1e-12):\n        super(CharbonnierLoss, self).__init__()\n        if reduction not in ['none', 'mean', 'sum']:\n            raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')\n\n        self.loss_weight = loss_weight\n        self.reduction = reduction\n        self.eps = eps\n\n    def forward(self, pred, target, weight=None, **kwargs):\n        \"\"\"\n        Args:\n            pred (Tensor): of shape (N, C, H, W). Predicted tensor.\n            target (Tensor): of shape (N, C, H, W). Ground truth tensor.\n            weight (Tensor, optional): of shape (N, C, H, W). Element-wise\n                weights. Default: None.\n        \"\"\"\n        return self.loss_weight * charbonnier_loss(pred, target, weight, eps=self.eps, reduction=self.reduction)\n\n\n@LOSS_REGISTRY.register()\nclass WeightedTVLoss(L1Loss):\n    \"\"\"Weighted TV loss.\n\n        Args:\n            loss_weight (float): Loss weight. Default: 1.0.\n    \"\"\"\n\n    def __init__(self, loss_weight=1.0):\n        super(WeightedTVLoss, self).__init__(loss_weight=loss_weight)\n\n    def forward(self, pred, weight=None):\n        # guard against weight=None so the loss also works unweighted\n        if weight is None:\n            y_weight = None\n            x_weight = None\n        else:\n            y_weight = weight[:, :, :-1, :]\n            x_weight = weight[:, :, :, :-1]\n        y_diff = super(WeightedTVLoss, self).forward(pred[:, :, :-1, :], pred[:, :, 1:, :], weight=y_weight)\n        x_diff = super(WeightedTVLoss, self).forward(pred[:, :, :, :-1], pred[:, :, :, 1:], weight=x_weight)\n\n        loss = x_diff + y_diff\n\n        return loss\n\n\n@LOSS_REGISTRY.register()\nclass PerceptualLoss(nn.Module):\n    \"\"\"Perceptual loss with commonly used style loss.\n\n    Args:\n        layer_weights (dict): The weight for each layer of vgg feature.\n            Here is an example: {'conv5_4': 1.}, which means the conv5_4\n            feature layer (before relu5_4) will be extracted with weight\n            1.0 in calculating losses.\n        vgg_type (str): The type of vgg network used as feature extractor.\n            Default: 'vgg19'.\n        use_input_norm (bool):  If True, normalize the input image in vgg.\n            Default: True.\n        range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].\n            Default: False.\n        perceptual_weight (float): If `perceptual_weight > 0`, the perceptual\n            loss will be calculated and the loss will be multiplied by the\n            weight. Default: 1.0.\n        style_weight (float): If `style_weight > 0`, the style loss will be\n            calculated and the loss will be multiplied by the weight.\n            Default: 0.\n        criterion (str): Criterion used for perceptual loss. 
Default: 'l1'.\n    \"\"\"\n\n    def __init__(self,\n                 layer_weights,\n                 vgg_type='vgg19',\n                 use_input_norm=True,\n                 range_norm=False,\n                 perceptual_weight=1.0,\n                 style_weight=0.,\n                 criterion='l1'):\n        super(PerceptualLoss, self).__init__()\n        self.perceptual_weight = perceptual_weight\n        self.style_weight = style_weight\n        self.layer_weights = layer_weights\n        self.vgg = VGGFeatureExtractor(\n            layer_name_list=list(layer_weights.keys()),\n            vgg_type=vgg_type,\n            use_input_norm=use_input_norm,\n            range_norm=range_norm)\n\n        self.criterion_type = criterion\n        if self.criterion_type == 'l1':\n            self.criterion = torch.nn.L1Loss()\n        elif self.criterion_type == 'l2':\n            self.criterion = torch.nn.MSELoss()  # torch has no L2Loss class; MSE is the L2 criterion\n        elif self.criterion_type == 'mse':\n            self.criterion = torch.nn.MSELoss(reduction='mean')\n        elif self.criterion_type == 'fro':\n            self.criterion = None\n        else:\n            raise NotImplementedError(f'{criterion} criterion has not been supported.')\n\n    def forward(self, x, gt):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Input tensor with shape (n, c, h, w).\n            gt (Tensor): Ground-truth tensor with shape (n, c, h, w).\n\n        Returns:\n            Tensor: Forward results.\n        \"\"\"\n        # extract vgg features\n        x_features = self.vgg(x)\n        gt_features = self.vgg(gt.detach())\n\n        # calculate perceptual loss\n        if self.perceptual_weight > 0:\n            percep_loss = 0\n            for k in x_features.keys():\n                if self.criterion_type == 'fro':\n                    percep_loss += torch.norm(x_features[k] - gt_features[k], p='fro') * self.layer_weights[k]\n                else:\n                    percep_loss += self.criterion(x_features[k], gt_features[k]) * self.layer_weights[k]\n            percep_loss *= self.perceptual_weight\n        else:\n            percep_loss = None\n\n        # calculate style loss\n        if self.style_weight > 0:\n            style_loss = 0\n            for k in x_features.keys():\n                if self.criterion_type == 'fro':\n                    style_loss += torch.norm(\n                        self._gram_mat(x_features[k]) - self._gram_mat(gt_features[k]), p='fro') * self.layer_weights[k]\n                else:\n                    style_loss += self.criterion(self._gram_mat(x_features[k]), self._gram_mat(\n                        gt_features[k])) * self.layer_weights[k]\n            style_loss *= self.style_weight\n        else:\n            style_loss = None\n\n        return percep_loss, style_loss\n\n    def _gram_mat(self, x):\n        \"\"\"Calculate Gram matrix.\n\n        Args:\n            x (torch.Tensor): Tensor with shape of (n, c, h, w).\n\n        Returns:\n            torch.Tensor: Gram matrix.\n        \"\"\"\n        n, c, h, w = x.size()\n        features = x.view(n, c, w * h)\n        features_t = features.transpose(1, 2)\n        gram = features.bmm(features_t) / (c * h * w)\n        return gram\n\n\n@LOSS_REGISTRY.register()\nclass LPIPSLoss(nn.Module):\n    def __init__(self, \n            loss_weight=1.0, \n            use_input_norm=True,\n            range_norm=False,):\n        super(LPIPSLoss, self).__init__()\n        self.perceptual = lpips.LPIPS(net=\"vgg\", 
spatial=False).eval()\n        self.loss_weight = loss_weight\n        self.use_input_norm = use_input_norm\n        self.range_norm = range_norm\n\n        if self.use_input_norm:\n            # the mean is for image with range [0, 1]\n            self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))\n            # the std is for image with range [0, 1]\n            self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))\n\n    def forward(self, pred, target):\n        if self.range_norm:\n            pred   = (pred + 1) / 2\n            target = (target + 1) / 2\n        if self.use_input_norm:\n            pred   = (pred - self.mean) / self.std\n            target = (target - self.mean) / self.std\n        lpips_loss = self.perceptual(target.contiguous(), pred.contiguous())\n        return self.loss_weight * lpips_loss.mean()\n\n\n@LOSS_REGISTRY.register()\nclass GANLoss(nn.Module):\n    \"\"\"Define GAN loss.\n\n    Args:\n        gan_type (str): Support 'vanilla', 'lsgan', 'wgan', 'hinge'.\n        real_label_val (float): The value for real label. Default: 1.0.\n        fake_label_val (float): The value for fake label. Default: 0.0.\n        loss_weight (float): Loss weight. Default: 1.0.\n            Note that loss_weight is only for generators; and it is always 1.0\n            for discriminators.\n    \"\"\"\n\n    def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0):\n        super(GANLoss, self).__init__()\n        self.gan_type = gan_type\n        self.loss_weight = loss_weight\n        self.real_label_val = real_label_val\n        self.fake_label_val = fake_label_val\n\n        if self.gan_type == 'vanilla':\n            self.loss = nn.BCEWithLogitsLoss()\n        elif self.gan_type == 'lsgan':\n            self.loss = nn.MSELoss()\n        elif self.gan_type == 'wgan':\n            self.loss = self._wgan_loss\n        elif self.gan_type == 'wgan_softplus':\n            self.loss = self._wgan_softplus_loss\n        elif self.gan_type == 'hinge':\n            self.loss = nn.ReLU()\n        else:\n            raise NotImplementedError(f'GAN type {self.gan_type} is not implemented.')\n\n    def _wgan_loss(self, input, target):\n        \"\"\"wgan loss.\n\n        Args:\n            input (Tensor): Input tensor.\n            target (bool): Target label.\n\n        Returns:\n            Tensor: wgan loss.\n        \"\"\"\n        return -input.mean() if target else input.mean()\n\n    def _wgan_softplus_loss(self, input, target):\n        \"\"\"wgan loss with soft plus. softplus is a smooth approximation to the\n        ReLU function.\n\n        In StyleGAN2, it is called:\n            Logistic loss for discriminator;\n            Non-saturating loss for generator.\n\n        Args:\n            input (Tensor): Input tensor.\n            target (bool): Target label.\n\n        Returns:\n            Tensor: wgan loss.\n        \"\"\"\n        return F.softplus(-input).mean() if target else F.softplus(input).mean()\n\n    def get_target_label(self, input, target_is_real):\n        \"\"\"Get target label.\n\n        Args:\n            input (Tensor): Input tensor.\n            target_is_real (bool): Whether the target is real or fake.\n\n        Returns:\n            (bool | Tensor): Target tensor. 
Return bool for wgan, otherwise,\n                return Tensor.\n        \"\"\"\n\n        if self.gan_type in ['wgan', 'wgan_softplus']:\n            return target_is_real\n        target_val = (self.real_label_val if target_is_real else self.fake_label_val)\n        return input.new_ones(input.size()) * target_val\n\n    def forward(self, input, target_is_real, is_disc=False):\n        \"\"\"\n        Args:\n            input (Tensor): The input for the loss module, i.e., the network\n                prediction.\n            target_is_real (bool): Whether the target is real or fake.\n            is_disc (bool): Whether the loss is for discriminators or not.\n                Default: False.\n\n        Returns:\n            Tensor: GAN loss value.\n        \"\"\"\n        if self.gan_type == 'hinge':\n            if is_disc:  # for discriminators in hinge-gan\n                input = -input if target_is_real else input\n                loss = self.loss(1 + input).mean()\n            else:  # for generators in hinge-gan\n                loss = -input.mean()\n        else:  # other gan types\n            target_label = self.get_target_label(input, target_is_real)\n            loss = self.loss(input, target_label)\n\n        # loss_weight is always 1.0 for discriminators\n        return loss if is_disc else loss * self.loss_weight\n\n\ndef r1_penalty(real_pred, real_img):\n    \"\"\"R1 regularization for discriminator. The core idea is to\n        penalize the gradient on real data alone: when the\n        generator distribution produces the true data distribution\n        and the discriminator is equal to 0 on the data manifold, the\n        gradient penalty ensures that the discriminator cannot create\n        a non-zero gradient orthogonal to the data manifold without\n        suffering a loss in the GAN game.\n\n        Ref:\n        Eq. 9 in Which training methods for GANs do actually converge.\n        \"\"\"\n    grad_real = autograd.grad(outputs=real_pred.sum(), inputs=real_img, create_graph=True)[0]\n    grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean()\n    return grad_penalty\n\n\ndef g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):\n    noise = torch.randn_like(fake_img) / math.sqrt(fake_img.shape[2] * fake_img.shape[3])\n    grad = autograd.grad(outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True)[0]\n    path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))\n\n    path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)\n\n    path_penalty = (path_lengths - path_mean).pow(2).mean()\n\n    return path_penalty, path_lengths.detach().mean(), path_mean.detach()\n\n\ndef gradient_penalty_loss(discriminator, real_data, fake_data, weight=None):\n    \"\"\"Calculate gradient penalty for wgan-gp.\n\n    Args:\n        discriminator (nn.Module): Network for the discriminator.\n        real_data (Tensor): Real input data.\n        fake_data (Tensor): Fake input data.\n        weight (Tensor): Weight tensor. Default: None.\n\n    Returns:\n        Tensor: A tensor for gradient penalty.\n    \"\"\"\n\n    batch_size = real_data.size(0)\n    alpha = real_data.new_tensor(torch.rand(batch_size, 1, 1, 1))\n\n    # interpolate between real_data and fake_data\n    interpolates = alpha * real_data + (1. 
- alpha) * fake_data\n    interpolates = autograd.Variable(interpolates, requires_grad=True)\n\n    disc_interpolates = discriminator(interpolates)\n    gradients = autograd.grad(\n        outputs=disc_interpolates,\n        inputs=interpolates,\n        grad_outputs=torch.ones_like(disc_interpolates),\n        create_graph=True,\n        retain_graph=True,\n        only_inputs=True)[0]\n\n    if weight is not None:\n        gradients = gradients * weight\n\n    gradients_penalty = ((gradients.norm(2, dim=1) - 1)**2).mean()\n    if weight is not None:\n        gradients_penalty /= torch.mean(weight)\n\n    return gradients_penalty\n\n\n@LOSS_REGISTRY.register()\nclass DirichletKLLoss(nn.Module):\n    \"\"\"Dirichlet-distribution KL loss.\n\n        Computes KL(Dir(alpha) || Dir(beta)) for a uniform target\n        beta = kl_coef * 1, dropping the terms that depend only on the\n        constant beta.\n\n        Args:\n            loss_weight (float): Loss weight. Default: 1.0.\n            kl_coef (float): Concentration of the uniform target Dirichlet.\n                Default: 1.1.\n    \"\"\"\n\n    def __init__(self, loss_weight=1.0, kl_coef=1.1):\n        super(DirichletKLLoss, self).__init__()\n        self.loss_weight = loss_weight\n        self.kl_coef = kl_coef\n\n    def forward(self, alpha):\n        beta = self.kl_coef * torch.ones_like(alpha)\n        # log normalizer of Dir(alpha): lgamma(sum(alpha)) - sum(lgamma(alpha))\n        l1 = torch.lgamma(alpha.sum(dim=-1, keepdim=True))\n        l2 = torch.lgamma(alpha).sum(dim=-1, keepdim=True)\n        # cross term: (alpha - beta) * (digamma(alpha) - digamma(sum(alpha)))\n        l3 = (alpha - beta) * (torch.digamma(alpha) - torch.digamma(alpha.sum(dim=-1, keepdim=True)))\n        loss = l1 - l2 + l3.sum(dim=-1, keepdim=True)\n        loss = loss.mean()\n        return self.loss_weight * loss\n\n\nif __name__ == '__main__':\n    LpipsLoss = LPIPSLoss()"
  },
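  {
    "path": "docs/examples/dirichlet_kl_check.py",
    "content": "\"\"\"Illustrative numerical check (an assumption, not part of the original\nsources): DirichletKLLoss should match KL(Dir(alpha) || Dir(kl_coef * 1)) up to\nan additive constant that depends only on the fixed target beta.\"\"\"\nimport torch\nimport torch.distributions as dist\nfrom basicsr.losses.losses import DirichletKLLoss\n\nalpha = torch.rand(4, 16) + 0.5  # concentration parameters must be positive\nkl_coef = 1.1\nbeta = torch.full_like(alpha, kl_coef)\n\nloss = DirichletKLLoss(loss_weight=1.0, kl_coef=kl_coef)(alpha)\nfull_kl = dist.kl_divergence(dist.Dirichlet(alpha), dist.Dirichlet(beta)).mean()\n# the dropped beta-only terms of the KL: sum(lgamma(beta)) - lgamma(sum(beta))\nconst = (torch.lgamma(beta).sum(-1) - torch.lgamma(beta.sum(-1))).mean()\nassert torch.allclose(loss, full_kl - const, atol=1e-4)\nprint(loss.item())\n"
  },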
  {
    "path": "basicsr/metrics/__init__.py",
    "content": "from copy import deepcopy\n\nfrom basicsr.utils.registry import METRIC_REGISTRY\nfrom .psnr_ssim import calculate_psnr, calculate_ssim\n\n__all__ = ['calculate_psnr', 'calculate_ssim']\n\n\ndef calculate_metric(data, opt):\n    \"\"\"Calculate metric from data and options.\n\n    Args:\n        opt (dict): Configuration. It must constain:\n            type (str): Model type.\n    \"\"\"\n    opt = deepcopy(opt)\n    metric_type = opt.pop('type')\n    metric = METRIC_REGISTRY.get(metric_type)(**data, **opt)\n    return metric\n"
  },
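  {
    "path": "docs/examples/calculate_metric_sketch.py",
    "content": "\"\"\"Illustrative usage sketch (an assumption, not part of the original sources):\ncalculate_metric merges the data dict and the option dict (minus 'type') into\nthe registered metric function's arguments.\"\"\"\nimport numpy as np\nfrom basicsr.metrics import calculate_metric\n\nimg1 = np.random.randint(0, 256, (32, 32, 3)).astype(np.float64)\nimg2 = np.clip(img1 + np.random.randn(32, 32, 3), 0, 255)  # perturbed copy\n\nopt = {'type': 'calculate_psnr', 'crop_border': 0, 'test_y_channel': False}\nprint(calculate_metric({'img1': img1, 'img2': img2}, opt))\n"
  },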
  {
    "path": "basicsr/metrics/metric_util.py",
    "content": "import numpy as np\n\nfrom basicsr.utils.matlab_functions import bgr2ycbcr\n\n\ndef reorder_image(img, input_order='HWC'):\n    \"\"\"Reorder images to 'HWC' order.\n\n    If the input_order is (h, w), return (h, w, 1);\n    If the input_order is (c, h, w), return (h, w, c);\n    If the input_order is (h, w, c), return as it is.\n\n    Args:\n        img (ndarray): Input image.\n        input_order (str): Whether the input order is 'HWC' or 'CHW'.\n            If the input image shape is (h, w), input_order will not have\n            effects. Default: 'HWC'.\n\n    Returns:\n        ndarray: reordered image.\n    \"\"\"\n\n    if input_order not in ['HWC', 'CHW']:\n        raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' \"'HWC' and 'CHW'\")\n    if len(img.shape) == 2:\n        img = img[..., None]\n    if input_order == 'CHW':\n        img = img.transpose(1, 2, 0)\n    return img\n\n\ndef to_y_channel(img):\n    \"\"\"Change to Y channel of YCbCr.\n\n    Args:\n        img (ndarray): Images with range [0, 255].\n\n    Returns:\n        (ndarray): Images with range [0, 255] (float type) without round.\n    \"\"\"\n    img = img.astype(np.float32) / 255.\n    if img.ndim == 3 and img.shape[2] == 3:\n        img = bgr2ycbcr(img, y_only=True)\n        img = img[..., None]\n    return img * 255.\n"
  },
  {
    "path": "basicsr/metrics/psnr_ssim.py",
    "content": "import cv2\nimport numpy as np\n\nfrom basicsr.metrics.metric_util import reorder_image, to_y_channel\nfrom basicsr.utils.registry import METRIC_REGISTRY\n\n\n@METRIC_REGISTRY.register()\ndef calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False):\n    \"\"\"Calculate PSNR (Peak Signal-to-Noise Ratio).\n\n    Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio\n\n    Args:\n        img1 (ndarray): Images with range [0, 255].\n        img2 (ndarray): Images with range [0, 255].\n        crop_border (int): Cropped pixels in each edge of an image. These\n            pixels are not involved in the PSNR calculation.\n        input_order (str): Whether the input order is 'HWC' or 'CHW'.\n            Default: 'HWC'.\n        test_y_channel (bool): Test on Y channel of YCbCr. Default: False.\n\n    Returns:\n        float: psnr result.\n    \"\"\"\n\n    assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.')\n    if input_order not in ['HWC', 'CHW']:\n        raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '\"HWC\" and \"CHW\"')\n    img1 = reorder_image(img1, input_order=input_order)\n    img2 = reorder_image(img2, input_order=input_order)\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n\n    if crop_border != 0:\n        img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]\n        img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]\n\n    if test_y_channel:\n        img1 = to_y_channel(img1)\n        img2 = to_y_channel(img2)\n\n    mse = np.mean((img1 - img2)**2)\n    if mse == 0:\n        return float('inf')\n    return 20. * np.log10(255. / np.sqrt(mse))\n\n\ndef _ssim(img1, img2):\n    \"\"\"Calculate SSIM (structural similarity) for one channel images.\n\n    It is called by func:`calculate_ssim`.\n\n    Args:\n        img1 (ndarray): Images with range [0, 255] with order 'HWC'.\n        img2 (ndarray): Images with range [0, 255] with order 'HWC'.\n\n    Returns:\n        float: ssim result.\n    \"\"\"\n\n    C1 = (0.01 * 255)**2\n    C2 = (0.03 * 255)**2\n\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n    kernel = cv2.getGaussianKernel(11, 1.5)\n    window = np.outer(kernel, kernel.transpose())\n\n    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]\n    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]\n    mu1_sq = mu1**2\n    mu2_sq = mu2**2\n    mu1_mu2 = mu1 * mu2\n    sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq\n    sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq\n    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2\n\n    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))\n    return ssim_map.mean()\n\n\n@METRIC_REGISTRY.register()\ndef calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False):\n    \"\"\"Calculate SSIM (structural similarity).\n\n    Ref:\n    Image quality assessment: From error visibility to structural similarity\n\n    The results are the same as that of the official released MATLAB code in\n    https://ece.uwaterloo.ca/~z70wang/research/ssim/.\n\n    For three-channel images, SSIM is calculated for each channel and then\n    averaged.\n\n    Args:\n        img1 (ndarray): Images with range [0, 255].\n        img2 (ndarray): Images with range [0, 255].\n        crop_border (int): Cropped pixels in each edge of an 
image. These\n            pixels are not involved in the SSIM calculation.\n        input_order (str): Whether the input order is 'HWC' or 'CHW'.\n            Default: 'HWC'.\n        test_y_channel (bool): Test on Y channel of YCbCr. Default: False.\n\n    Returns:\n        float: ssim result.\n    \"\"\"\n\n    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')\n    if input_order not in ['HWC', 'CHW']:\n        raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '\"HWC\" and \"CHW\"')\n    img1 = reorder_image(img1, input_order=input_order)\n    img2 = reorder_image(img2, input_order=input_order)\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n\n    if crop_border != 0:\n        img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]\n        img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]\n\n    if test_y_channel:\n        img1 = to_y_channel(img1)\n        img2 = to_y_channel(img2)\n\n    ssims = []\n    for i in range(img1.shape[2]):\n        ssims.append(_ssim(img1[..., i], img2[..., i]))\n    return np.array(ssims).mean()\n"
  },
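  {
    "path": "docs/examples/psnr_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): a minimal,\n# self-contained check of the PSNR formula used by `calculate_psnr` above,\n# PSNR = 20 * log10(255 / sqrt(MSE)) for images in the [0, 255] range.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nimg1 = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)\nimg2 = np.clip(img1 + rng.normal(0, 5, size=img1.shape), 0, 255)\n\nmse = np.mean((img1 - img2) ** 2)\npsnr = float('inf') if mse == 0 else 20. * np.log10(255. / np.sqrt(mse))\nprint(f'PSNR: {psnr:.2f} dB')  # roughly 34 dB for sigma=5 noise\n"
  },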
  {
    "path": "basicsr/models/__init__.py",
    "content": "import importlib\nfrom copy import deepcopy\nfrom os import path as osp\n\nfrom basicsr.utils import get_root_logger, scandir\nfrom basicsr.utils.registry import MODEL_REGISTRY\n\n__all__ = ['build_model']\n\n# automatically scan and import model modules for registry\n# scan all the files under the 'models' folder and collect files ending with\n# '_model.py'\nmodel_folder = osp.dirname(osp.abspath(__file__))\nmodel_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]\n# import all the model modules\n_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames]\n\n\ndef build_model(opt):\n    \"\"\"Build model from options.\n\n    Args:\n        opt (dict): Configuration. It must constain:\n            model_type (str): Model type.\n    \"\"\"\n    opt = deepcopy(opt)\n    model = MODEL_REGISTRY.get(opt['model_type'])(opt)\n    logger = get_root_logger()\n    logger.info(f'Model [{model.__class__.__name__}] is created.')\n    return model\n"
  },
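  {
    "path": "docs/examples/registry_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the registry\n# pattern used by `build_model` above, reduced to plain Python. A decorator\n# records each class under its name; `build` looks the name up from an opt\n# dict, mirroring `MODEL_REGISTRY.get(opt['model_type'])(opt)`.\n_REGISTRY = {}\n\n\ndef register(cls):\n    _REGISTRY[cls.__name__] = cls\n    return cls\n\n\n@register\nclass SRModel:\n    def __init__(self, opt):\n        self.opt = opt\n\n\ndef build(opt):\n    # look up the class by its registered name and instantiate it\n    return _REGISTRY[opt['model_type']](opt)\n\n\nmodel = build({'model_type': 'SRModel'})\nprint(type(model).__name__)  # SRModel\n"
  },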
  {
    "path": "basicsr/models/base_model.py",
    "content": "import logging\nimport os\nimport torch\nfrom collections import OrderedDict\nfrom copy import deepcopy\nfrom torch.nn.parallel import DataParallel, DistributedDataParallel\n\nfrom basicsr.models import lr_scheduler as lr_scheduler\nfrom basicsr.utils.dist_util import master_only\n\nlogger = logging.getLogger('basicsr')\n\n\nclass BaseModel():\n    \"\"\"Base model.\"\"\"\n\n    def __init__(self, opt):\n        self.opt = opt\n        self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')\n        self.is_train = opt['is_train']\n        self.schedulers = []\n        self.optimizers = []\n\n    def feed_data(self, data):\n        pass\n\n    def optimize_parameters(self):\n        pass\n\n    def get_current_visuals(self):\n        pass\n\n    def save(self, epoch, current_iter):\n        \"\"\"Save networks and training state.\"\"\"\n        pass\n\n    def validation(self, dataloader, current_iter, tb_logger, save_img=False):\n        \"\"\"Validation function.\n\n        Args:\n            dataloader (torch.utils.data.DataLoader): Validation dataloader.\n            current_iter (int): Current iteration.\n            tb_logger (tensorboard logger): Tensorboard logger.\n            save_img (bool): Whether to save images. Default: False.\n        \"\"\"\n        if self.opt['dist']:\n            self.dist_validation(dataloader, current_iter, tb_logger, save_img)\n        else:\n            self.nondist_validation(dataloader, current_iter, tb_logger, save_img)\n\n    def model_ema(self, decay=0.999):\n        net_g = self.get_bare_model(self.net_g)\n\n        net_g_params = dict(net_g.named_parameters())\n        net_g_ema_params = dict(self.net_g_ema.named_parameters())\n\n        for k in net_g_ema_params.keys():\n            net_g_ema_params[k].data.mul_(decay).add_(net_g_params[k].data, alpha=1 - decay)\n\n    def get_current_log(self):\n        return self.log_dict\n\n    def model_to_device(self, net):\n        \"\"\"Model to device. 
It also wraps models with DistributedDataParallel\n        or DataParallel.\n\n        Args:\n            net (nn.Module)\n        \"\"\"\n        net = net.to(self.device)\n        if self.opt['dist']:\n            find_unused_parameters = self.opt.get('find_unused_parameters', False)\n            net = DistributedDataParallel(\n                net, device_ids=[torch.cuda.current_device()], find_unused_parameters=find_unused_parameters)\n        elif self.opt['num_gpu'] > 1:\n            net = DataParallel(net)\n        return net\n\n    def get_optimizer(self, optim_type, params, lr, **kwargs):\n        if optim_type == 'Adam':\n            optimizer = torch.optim.Adam(params, lr, **kwargs)\n        else:\n            raise NotImplementedError(f'optimizer {optim_type} is not supported yet.')\n        return optimizer\n\n    def setup_schedulers(self):\n        \"\"\"Set up schedulers.\"\"\"\n        train_opt = self.opt['train']\n        scheduler_type = train_opt['scheduler'].pop('type')\n        if scheduler_type in ['MultiStepLR', 'MultiStepRestartLR']:\n            for optimizer in self.optimizers:\n                self.schedulers.append(lr_scheduler.MultiStepRestartLR(optimizer, **train_opt['scheduler']))\n        elif scheduler_type == 'CosineAnnealingRestartLR':\n            for optimizer in self.optimizers:\n                self.schedulers.append(lr_scheduler.CosineAnnealingRestartLR(optimizer, **train_opt['scheduler']))\n        else:\n            raise NotImplementedError(f'Scheduler {scheduler_type} is not implemented yet.')\n\n    def get_bare_model(self, net):\n        \"\"\"Get bare model, especially under wrapping with\n        DistributedDataParallel or DataParallel.\n        \"\"\"\n        if isinstance(net, (DataParallel, DistributedDataParallel)):\n            net = net.module\n        return net\n\n    @master_only\n    def print_network(self, net):\n        \"\"\"Print the str and parameter number of a network.\n\n        Args:\n            net (nn.Module)\n        \"\"\"\n        if isinstance(net, (DataParallel, DistributedDataParallel)):\n            net_cls_str = (f'{net.__class__.__name__} - ' f'{net.module.__class__.__name__}')\n        else:\n            net_cls_str = f'{net.__class__.__name__}'\n\n        net = self.get_bare_model(net)\n        net_str = str(net)\n        net_params = sum(map(lambda x: x.numel(), net.parameters()))\n\n        logger.info(f'Network: {net_cls_str}, with parameters: {net_params:,d}')\n        logger.info(net_str)\n\n    def _set_lr(self, lr_groups_l):\n        \"\"\"Set learning rate for warmup.\n\n        Args:\n            lr_groups_l (list): List for lr_groups, each for an optimizer.\n        \"\"\"\n        for optimizer, lr_groups in zip(self.optimizers, lr_groups_l):\n            for param_group, lr in zip(optimizer.param_groups, lr_groups):\n                param_group['lr'] = lr\n\n    def _get_init_lr(self):\n        \"\"\"Get the initial lr, which is set by the scheduler.\n        \"\"\"\n        init_lr_groups_l = []\n        for optimizer in self.optimizers:\n            init_lr_groups_l.append([v['initial_lr'] for v in optimizer.param_groups])\n        return init_lr_groups_l\n\n    def update_learning_rate(self, current_iter, warmup_iter=-1):\n        \"\"\"Update learning rate.\n\n        Args:\n            current_iter (int): Current iteration.\n            warmup_iter (int): Warmup iter numbers. -1 for no warmup.\n                Default: -1.\n        \"\"\"\n        if current_iter > 1:\n            for scheduler in self.schedulers:\n                scheduler.step()\n        # set up warm-up learning rate\n        if current_iter < warmup_iter:\n            # get initial lr for each group\n            init_lr_g_l = self._get_init_lr()\n            # modify warming-up learning rates\n            # currently only linear warmup is supported\n            warm_up_lr_l = []\n            for init_lr_g in init_lr_g_l:\n                warm_up_lr_l.append([v / warmup_iter * current_iter for v in init_lr_g])\n            # set learning rate\n            self._set_lr(warm_up_lr_l)\n\n    def get_current_learning_rate(self):\n        return [param_group['lr'] for param_group in self.optimizers[0].param_groups]\n\n    @master_only\n    def save_network(self, net, net_label, current_iter, param_key='params'):\n        \"\"\"Save networks.\n\n        Args:\n            net (nn.Module | list[nn.Module]): Network(s) to be saved.\n            net_label (str): Network label.\n            current_iter (int): Current iter number.\n            param_key (str | list[str]): The parameter key(s) to save network.\n                Default: 'params'.\n        \"\"\"\n        if current_iter == -1:\n            current_iter = 'latest'\n        save_filename = f'{net_label}_{current_iter}.pth'\n        save_path = os.path.join(self.opt['path']['models'], save_filename)\n\n        net = net if isinstance(net, list) else [net]\n        param_key = param_key if isinstance(param_key, list) else [param_key]\n        assert len(net) == len(param_key), 'The lengths of net and param_key should be the same.'\n\n        save_dict = {}\n        for net_, param_key_ in zip(net, param_key):\n            net_ = self.get_bare_model(net_)\n            state_dict = net_.state_dict()\n            for key, param in state_dict.items():\n                if key.startswith('module.'):  # remove unnecessary 'module.'\n                    key = key[7:]\n                state_dict[key] = param.cpu()\n            save_dict[param_key_] = state_dict\n\n        torch.save(save_dict, save_path)\n\n    def _print_different_keys_loading(self, crt_net, load_net, strict=True):\n        \"\"\"Print keys with different name or different size when loading models.\n\n        1. Print keys with different names.\n        2. If strict=False, print the same key but with different tensor size.\n            It also ignores keys with different sizes (they are not loaded).\n\n        Args:\n            crt_net (torch model): Current network.\n            load_net (dict): Loaded network.\n            strict (bool): Whether strictly loaded. Default: True.\n        \"\"\"\n        crt_net = self.get_bare_model(crt_net)\n        crt_net = crt_net.state_dict()\n        crt_net_keys = set(crt_net.keys())\n        load_net_keys = set(load_net.keys())\n\n        if crt_net_keys != load_net_keys:\n            logger.warning('Current net - loaded net:')\n            for v in sorted(list(crt_net_keys - load_net_keys)):\n                logger.warning(f'  {v}')\n            logger.warning('Loaded net - current net:')\n            for v in sorted(list(load_net_keys - crt_net_keys)):\n                logger.warning(f'  {v}')\n\n        # check the size for the same keys\n        if not strict:\n            common_keys = crt_net_keys & load_net_keys\n            for k in common_keys:\n                if crt_net[k].size() != load_net[k].size():\n                    logger.warning(f'Size different, ignore [{k}]: crt_net: '\n                                   f'{crt_net[k].shape}; load_net: {load_net[k].shape}')\n                    load_net[k + '.ignore'] = load_net.pop(k)\n\n    def load_network(self, net, load_path, strict=True, param_key='params'):\n        \"\"\"Load network.\n\n        Args:\n            load_path (str): The path of networks to be loaded.\n            net (nn.Module): Network.\n            strict (bool): Whether strictly loaded.\n            param_key (str): The parameter key of loaded network. If set to\n                None, the loaded dict is used directly.\n                Default: 'params'.\n        \"\"\"\n        net = self.get_bare_model(net)\n        logger.info(f'Loading {net.__class__.__name__} model from {load_path}.')\n        load_net = torch.load(load_path, map_location=lambda storage, loc: storage)\n        if param_key is not None:\n            if param_key not in load_net and 'params' in load_net:\n                logger.info(f'Loading: {param_key} does not exist, use params.')\n                param_key = 'params'\n            load_net = load_net[param_key]\n        # remove unnecessary 'module.'\n        for k, v in deepcopy(load_net).items():\n            if k.startswith('module.'):\n                load_net[k[7:]] = v\n                load_net.pop(k)\n        self._print_different_keys_loading(net, load_net, strict)\n        net.load_state_dict(load_net, strict=strict)\n\n    @master_only\n    def save_training_state(self, epoch, current_iter):\n        \"\"\"Save training states during training, which will be used for\n        resuming.\n\n        Args:\n            epoch (int): Current epoch.\n            current_iter (int): Current iteration.\n        \"\"\"\n        if current_iter != -1:\n            state = {'epoch': epoch, 'iter': current_iter, 'optimizers': [], 'schedulers': []}\n            for o in self.optimizers:\n                state['optimizers'].append(o.state_dict())\n            for s in self.schedulers:\n                state['schedulers'].append(s.state_dict())\n            save_filename = f'{current_iter}.state'\n            save_path = os.path.join(self.opt['path']['training_states'], save_filename)\n            torch.save(state, save_path)\n\n    def resume_training(self, resume_state):\n        \"\"\"Reload the optimizers and schedulers for resumed training.\n\n        Args:\n            resume_state (dict): Resume state.\n        \"\"\"\n        resume_optimizers = resume_state['optimizers']\n        resume_schedulers = resume_state['schedulers']\n        assert len(resume_optimizers) == len(self.optimizers), 'Wrong lengths of optimizers'\n        assert len(resume_schedulers) == len(self.schedulers), 'Wrong lengths of schedulers'\n        for i, o in enumerate(resume_optimizers):\n            self.optimizers[i].load_state_dict(o)\n        for i, s in enumerate(resume_schedulers):\n            self.schedulers[i].load_state_dict(s)\n\n    def reduce_loss_dict(self, loss_dict):\n        \"\"\"Reduce loss dict.\n\n        In distributed training, it averages the losses among different GPUs.\n\n        Args:\n            loss_dict (OrderedDict): Loss dict.\n        \"\"\"\n        with torch.no_grad():\n            if self.opt['dist']:\n                keys = []\n                losses = []\n                for name, value in loss_dict.items():\n                    keys.append(name)\n                    losses.append(value)\n                losses = torch.stack(losses, 0)\n                torch.distributed.reduce(losses, dst=0)\n                if self.opt['rank'] == 0:\n                    losses /= self.opt['world_size']\n                loss_dict = {key: loss for key, loss in zip(keys, losses)}\n\n            log_dict = OrderedDict()\n            for name, value in loss_dict.items():\n                log_dict[name] = value.mean().item()\n\n            return log_dict\n"
  },
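  {
    "path": "docs/examples/ema_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the exponential\n# moving average update performed by `BaseModel.model_ema` above. Each EMA\n# parameter is updated as ema = decay * ema + (1 - decay) * param, which is\n# exactly the in-place `mul_(decay).add_(param, alpha=1 - decay)` call.\nimport torch\n\ndecay = 0.999\nparam = torch.tensor([1.0, 2.0])  # current net_g parameter\nema = torch.tensor([0.0, 0.0])    # shadow (EMA) parameter\n\nema.mul_(decay).add_(param, alpha=1 - decay)\nprint(ema)  # tensor([0.0010, 0.0020])\n"
  },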
  {
    "path": "basicsr/models/codeformer_dirichlet_video_model.py",
    "content": "import torch\nfrom collections import OrderedDict\nfrom os import path as osp\nfrom tqdm import tqdm\nfrom einops import rearrange\n\nfrom basicsr.archs import build_network\nfrom basicsr.losses import build_loss\nfrom basicsr.metrics import calculate_metric\nfrom basicsr.utils import get_root_logger, imwrite, tensor2img, tensor2imgs, images_to_gif\nfrom basicsr.utils.registry import MODEL_REGISTRY\nimport torch.nn.functional as F\nfrom .sr_model import SRModel\n\n@MODEL_REGISTRY.register()\nclass CodeFormerDirichletVideoModel(SRModel):\n    def feed_data(self, data):\n        self.gt = data['gt'].to(self.device) # b t c h w\n        self.input = data['in'].to(self.device)\n        self.lq = data['in'].to(self.device)  \n        self.input_large_de = data['in'].to(self.device)\n        self.b, self.t = data['gt'].shape[:2]\n        # self.input_large_de = data['in_large_de'].to(self.device)\n\n        # merge b t\n        self.gt = rearrange(self.gt, \"b t ... -> (b t) ...\")\n        self.input = rearrange(self.input, \"b t ... -> (b t) ...\")\n        self.input_large_de = rearrange(self.input_large_de, \"b t ... -> (b t) ...\")\n\n        if 'latent_gt' in data:\n            self.idx_gt = data['latent_gt'].to(self.device)\n            # self.idx_gt = self.idx_gt.view(self.b, -1)\n            self.idx_gt = rearrange(self.idx_gt, \"b t ... -> (b t) ...\")\n        else:\n            self.idx_gt = None\n\n    def init_training_settings(self):\n        logger = get_root_logger()\n        train_opt = self.opt['train']\n\n        self.ema_decay = train_opt.get('ema_decay', 0)\n        if self.ema_decay > 0:\n            logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}')\n            self.net_g_ema = build_network(self.opt['network_g']).to(self.device)\n            # load pretrained model\n            load_path = self.opt['path'].get('pretrain_network_g', None)\n            if load_path is not None:\n                self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')\n            else:\n                self.model_ema(0)  # copy net_g weight\n            self.net_g_ema.eval()\n\n        self.scale_adaptive_gan_weight = train_opt.get('scale_adaptive_gan_weight', 0.8)\n\n        # define network net_d\n        self.net_d = build_network(self.opt['network_d'])\n        self.net_d = self.model_to_device(self.net_d)\n        self.print_network(self.net_d)\n\n        # load pretrained models\n        load_path = self.opt['path'].get('pretrain_network_d', None)\n        if load_path is not None:\n            self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True))\n\n        self.net_g.train()\n        self.net_d.train()\n\n        # define losses\n        self.cri_pix = None\n        if train_opt.get('pixel_opt'):\n            self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)\n        \n        self.cri_perceptual = None\n        if train_opt.get('perceptual_opt'):\n            self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)\n\n        # add the dir dist KL loss\n        self.cri_dirichletKL = None\n        if train_opt.get('dirichletKL_opt'):\n            self.cri_dirichletKL = build_loss(train_opt['dirichletKL_opt']).to(self.device)\n\n        if train_opt.get('gan_opt'):\n            self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)\n\n        self.fix_generator = train_opt.get('fix_generator', True)\n        
logger.info(f'fix_generator: {self.fix_generator}')\n\n        self.net_g_start_iter = train_opt.get('net_g_start_iter', 0)\n        self.net_d_iters = train_opt.get('net_d_iters', 1)\n        self.net_d_start_iter = train_opt.get('net_d_start_iter', 0)\n\n        # set up optimizers and schedulers\n        self.setup_optimizers()\n        self.setup_schedulers()\n\n    def calculate_adaptive_weight(self, recon_loss, g_loss, last_layer, disc_weight_max):\n        recon_grads = torch.autograd.grad(recon_loss, last_layer, retain_graph=True)[0]\n        g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]\n\n        d_weight = torch.norm(recon_grads) / (torch.norm(g_grads) + 1e-4)\n        d_weight = torch.clamp(d_weight, 0.0, disc_weight_max).detach()\n        return d_weight\n\n    def setup_optimizers(self):\n        train_opt = self.opt['train']\n        # optimizer g\n        optim_params_g = []\n        trainable_modules = []\n        notrainable_modules = []\n        for k, v in self.net_g.named_parameters():\n            module_ = '.'.join(k.split('.')[:2])\n            if v.requires_grad:\n                optim_params_g.append(v)\n                if module_ not in trainable_modules:\n                    trainable_modules.append(module_)\n            else:\n                if module_ not in notrainable_modules:\n                    notrainable_modules.append(module_)\n\n        logger = get_root_logger()\n        for module_name in trainable_modules:\n            logger.warning(f'{module_name} will be optimized.')\n        for module_name in notrainable_modules:\n            logger.warning(f'{module_name} will not be optimized.')\n\n        optim_type = train_opt['optim_g'].pop('type')\n        self.optimizer_g = self.get_optimizer(optim_type, optim_params_g, **train_opt['optim_g'])\n        self.optimizers.append(self.optimizer_g)\n        # optimizer d\n        optim_type = train_opt['optim_d'].pop('type')\n        self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d'])\n        self.optimizers.append(self.optimizer_d)\n\n    def gray_resize_for_identity(self, out, size=128):\n        out_gray = (0.2989 * out[:, 0, :, :] + 0.5870 * out[:, 1, :, :] + 0.1140 * out[:, 2, :, :])\n        out_gray = out_gray.unsqueeze(1)\n        out_gray = F.interpolate(out_gray, (size, size), mode='bilinear', align_corners=False)\n        return out_gray\n\n    def optimize_parameters(self, current_iter):\n        # optimize net_g\n        for p in self.net_d.parameters():\n            p.requires_grad = False\n\n        self.optimizer_g.zero_grad()\n\n        large_de = False\n        self.output, lq_feat, dirichletDistParam = self.net_g(self.input, w=1.0, detach_16=True)\n\n        l_g_total = 0\n        loss_dict = OrderedDict()\n        if current_iter % self.net_d_iters == 0 and current_iter > self.net_g_start_iter:\n            if not large_de:  # image-level losses are skipped under large degradation\n                # pixel loss\n                if self.cri_pix:\n                    l_g_pix = self.cri_pix(self.output, self.gt)\n                    l_g_total += l_g_pix\n                    loss_dict['l_g_pix'] = l_g_pix\n\n                # perceptual loss\n                if self.cri_perceptual:\n                    l_g_percep = self.cri_perceptual(self.output, self.gt)\n                    l_g_total += l_g_percep\n                    loss_dict['l_g_percep'] = l_g_percep\n\n                if self.cri_dirichletKL:\n                    l_g_dirKL = self.cri_dirichletKL(dirichletDistParam)\n                    l_g_total += l_g_dirKL\n                    loss_dict['l_g_dirichletKL'] = l_g_dirKL\n\n                # gan loss\n                if current_iter > self.net_d_start_iter:\n                    fake_g_pred = self.net_d(self.output)\n                    l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)\n                    # pixel + perceptual terms are already accumulated in\n                    # l_g_total above; track them here only for logging\n                    recon_loss = l_g_pix + l_g_percep\n                    loss_dict['recon_loss'] = recon_loss\n                    loss_dict['l_g_gan'] = l_g_gan\n\n                    l_g_total += l_g_gan\n\n            l_g_total.backward()\n            self.optimizer_g.step()\n\n        if self.ema_decay > 0:\n            self.model_ema(decay=self.ema_decay)\n\n        # optimize net_d\n        if not large_de:\n            if current_iter > self.net_d_start_iter:\n                for p in self.net_d.parameters():\n                    p.requires_grad = True\n\n                self.optimizer_d.zero_grad()\n                # real\n                real_d_pred = self.net_d(self.gt)\n                l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)\n                loss_dict['l_d_real'] = l_d_real\n                loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())\n                l_d_real.backward()\n                # fake\n                fake_d_pred = self.net_d(self.output.detach())\n                l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)\n                loss_dict['l_d_fake'] = l_d_fake\n                loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())\n                l_d_fake.backward()\n\n                self.optimizer_d.step()\n\n        self.log_dict = self.reduce_loss_dict(loss_dict)\n\n    def test(self):\n        with torch.no_grad():\n            if hasattr(self, 'net_g_ema'):\n                self.net_g_ema.eval()\n                self.output, _, _ = self.net_g_ema(self.input, w=1)\n            else:\n                logger = get_root_logger()\n                logger.warning('Do not have self.net_g_ema, use self.net_g.')\n                self.net_g.eval()\n                self.output, _, _ = self.net_g(self.input, w=1)\n                self.net_g.train()\n\n    def dist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        if self.opt['rank'] == 0:\n            self.nondist_validation(dataloader, current_iter, tb_logger, save_img)\n\n    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        dataset_name = dataloader.dataset.opt['name']\n        with_metrics = self.opt['val'].get('metrics') is not None\n        if with_metrics:\n            self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()}\n        pbar = tqdm(total=len(dataloader), unit='image')\n\n        for idx, val_data in enumerate(dataloader):\n            img_name = val_data[\"key\"][0].split('/')[-3]\n            self.feed_data(val_data)\n            self.test()\n\n            visuals = self.get_current_visuals()\n            sr_img = tensor2img([visuals['result']], min_max=(-1, 1))\n            sr_imgs = tensor2imgs(visuals['result'], min_max=(-1, 1))\n            if 'gt' in visuals:\n                gt_img = tensor2img([visuals['gt']], min_max=(-1, 1))\n                gt_imgs = tensor2imgs(visuals['gt'], 
min_max=(-1, 1))\n                del self.gt\n\n            # tentative for out of GPU memory\n            del self.lq\n            del self.output\n            torch.cuda.empty_cache()\n\n            if save_img:\n                if self.opt['is_train']:\n                    save_img_path = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}.png')\n                    save_img_gif_path = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}.gif')\n                    save_img_path_ori = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}_gt.png')\n                    save_img_gif_path_ori = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}_gt.gif')\n                else:\n                    if self.opt['val']['suffix']:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}.png')\n                        save_img_gif_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}.gif')\n                        save_img_path_ori = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}_ori.png')\n                        save_img_gif_path_ori = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}_ori.gif')\n                    else:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}.png')\n                        save_img_gif_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}.gif')\n                        save_img_path_ori = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}_ori.png')\n                        save_img_gif_path_ori = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}_ori.gif')\n\n                imwrite(sr_img, save_img_path)\n                imwrite(gt_img, save_img_path_ori)\n\n                images_to_gif(sr_imgs, save_img_gif_path, duration = 50, loop=4)\n                images_to_gif(gt_imgs, save_img_gif_path_ori, duration = 50, loop=4)\n            if with_metrics:\n                # calculate metrics\n                for name, opt_ in self.opt['val']['metrics'].items():\n                    metric_data = dict(img1=sr_img, img2=gt_img)\n                    self.metric_results[name] += calculate_metric(metric_data, opt_)\n            pbar.update(1)\n            pbar.set_description(f'Test {img_name}')\n        pbar.close()\n\n        if with_metrics:\n            for metric in self.metric_results.keys():\n                self.metric_results[metric] /= (idx + 1)\n\n            self._log_validation_metric_values(current_iter, dataset_name, 
tb_logger)\n\n    def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger):\n        log_str = f'Validation {dataset_name}\\n'\n        for metric, value in self.metric_results.items():\n            log_str += f'\\t # {metric}: {value:.4f}\\n'\n        logger = get_root_logger()\n        logger.info(log_str)\n        if tb_logger:\n            for metric, value in self.metric_results.items():\n                tb_logger.add_scalar(f'metrics/{metric}', value, current_iter)\n\n    def get_current_visuals(self):\n        out_dict = OrderedDict()\n        out_dict['gt'] = self.gt.detach().cpu()\n        out_dict['result'] = self.output.detach().cpu()\n        return out_dict\n\n    def save(self, epoch, current_iter):\n        if self.ema_decay > 0:\n            self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])\n        else:\n            self.save_network(self.net_g, 'net_g', current_iter)\n        self.save_network(self.net_d, 'net_d', current_iter)\n        self.save_training_state(epoch, current_iter)\n"
  },
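  {
    "path": "docs/examples/adaptive_gan_weight_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the adaptive\n# discriminator weight computed by `calculate_adaptive_weight` above. The GAN\n# loss is rescaled by the ratio of gradient norms at a chosen last layer,\n# d_weight = ||grad(recon)|| / (||grad(gan)|| + 1e-4), then clamped.\nimport torch\n\nlast_layer = torch.nn.Parameter(torch.randn(4, 4))\nout = (last_layer * 2.0).sum()\nrecon_loss = out ** 2  # stand-in for pixel + perceptual loss\ng_loss = 0.5 * out     # stand-in for the generator GAN loss\n\nrecon_grads = torch.autograd.grad(recon_loss, last_layer, retain_graph=True)[0]\ng_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]\n\nd_weight = torch.norm(recon_grads) / (torch.norm(g_grads) + 1e-4)\nd_weight = torch.clamp(d_weight, 0.0, 1.0).detach()\nprint(f'd_weight: {d_weight.item():.3f}')\n"
  },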
  {
    "path": "basicsr/models/lr_scheduler.py",
    "content": "import math\nfrom collections import Counter\nfrom torch.optim.lr_scheduler import _LRScheduler\n\n\nclass MultiStepRestartLR(_LRScheduler):\n    \"\"\" MultiStep with restarts learning rate scheme.\n\n    Args:\n        optimizer (torch.nn.optimizer): Torch optimizer.\n        milestones (list): Iterations that will decrease learning rate.\n        gamma (float): Decrease ratio. Default: 0.1.\n        restarts (list): Restart iterations. Default: [0].\n        restart_weights (list): Restart weights at each restart iteration.\n            Default: [1].\n        last_epoch (int): Used in _LRScheduler. Default: -1.\n    \"\"\"\n\n    def __init__(self, optimizer, milestones, gamma=0.1, restarts=(0, ), restart_weights=(1, ), last_epoch=-1):\n        self.milestones = Counter(milestones)\n        self.gamma = gamma\n        self.restarts = restarts\n        self.restart_weights = restart_weights\n        assert len(self.restarts) == len(self.restart_weights), 'restarts and their weights do not match.'\n        super(MultiStepRestartLR, self).__init__(optimizer, last_epoch)\n\n    def get_lr(self):\n        if self.last_epoch in self.restarts:\n            weight = self.restart_weights[self.restarts.index(self.last_epoch)]\n            return [group['initial_lr'] * weight for group in self.optimizer.param_groups]\n        if self.last_epoch not in self.milestones:\n            return [group['lr'] for group in self.optimizer.param_groups]\n        return [group['lr'] * self.gamma**self.milestones[self.last_epoch] for group in self.optimizer.param_groups]\n\n\ndef get_position_from_periods(iteration, cumulative_period):\n    \"\"\"Get the position from a period list.\n\n    It will return the index of the right-closest number in the period list.\n    For example, the cumulative_period = [100, 200, 300, 400],\n    if iteration == 50, return 0;\n    if iteration == 210, return 2;\n    if iteration == 300, return 2.\n\n    Args:\n        iteration (int): Current iteration.\n        cumulative_period (list[int]): Cumulative period list.\n\n    Returns:\n        int: The position of the right-closest number in the period list.\n    \"\"\"\n    for i, period in enumerate(cumulative_period):\n        if iteration <= period:\n            return i\n\n\nclass CosineAnnealingRestartLR(_LRScheduler):\n    \"\"\" Cosine annealing with restarts learning rate scheme.\n\n    An example of config:\n    periods = [10, 10, 10, 10]\n    restart_weights = [1, 0.5, 0.5, 0.5]\n    eta_min=1e-7\n\n    It has four cycles, each has 10 iterations. At 10th, 20th, 30th, the\n    scheduler will restart with the weights in restart_weights.\n\n    Args:\n        optimizer (torch.nn.optimizer): Torch optimizer.\n        periods (list): Period for each cosine anneling cycle.\n        restart_weights (list): Restart weights at each restart iteration.\n            Default: [1].\n        eta_min (float): The mimimum lr. Default: 0.\n        last_epoch (int): Used in _LRScheduler. 
Default: -1.\n    \"\"\"\n\n    def __init__(self, optimizer, periods, restart_weights=(1, ), eta_min=0, last_epoch=-1):\n        self.periods = periods\n        self.restart_weights = restart_weights\n        self.eta_min = eta_min\n        assert (len(self.periods) == len(\n            self.restart_weights)), 'periods and restart_weights should have the same length.'\n        self.cumulative_period = [sum(self.periods[0:i + 1]) for i in range(0, len(self.periods))]\n        super(CosineAnnealingRestartLR, self).__init__(optimizer, last_epoch)\n\n    def get_lr(self):\n        idx = get_position_from_periods(self.last_epoch, self.cumulative_period)\n        current_weight = self.restart_weights[idx]\n        nearest_restart = 0 if idx == 0 else self.cumulative_period[idx - 1]\n        current_period = self.periods[idx]\n\n        return [\n            self.eta_min + current_weight * 0.5 * (base_lr - self.eta_min) *\n            (1 + math.cos(math.pi * ((self.last_epoch - nearest_restart) / current_period)))\n            for base_lr in self.base_lrs\n        ]\n"
  },
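  {
    "path": "docs/examples/cosine_restart_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the learning\n# rate curve produced by the CosineAnnealingRestartLR formula above, computed\n# in plain Python for periods=[10, 10], restart_weights=[1, 0.5], base_lr=1.\nimport math\n\nperiods = [10, 10]\nrestart_weights = [1, 0.5]\neta_min = 0.0\nbase_lr = 1.0\ncumulative = [sum(periods[:i + 1]) for i in range(len(periods))]  # [10, 20]\n\nfor step in range(20):\n    idx = next(i for i, p in enumerate(cumulative) if step <= p)\n    nearest_restart = 0 if idx == 0 else cumulative[idx - 1]\n    weight = restart_weights[idx]\n    lr = eta_min + weight * 0.5 * (base_lr - eta_min) * (\n        1 + math.cos(math.pi * (step - nearest_restart) / periods[idx]))\n    print(step, round(lr, 4))  # the lr restarts at half weight after step 10\n"
  },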
  {
    "path": "basicsr/models/sr_model.py",
    "content": "import torch\nfrom collections import OrderedDict\nfrom os import path as osp\nfrom tqdm import tqdm\n\nfrom basicsr.archs import build_network\nfrom basicsr.losses import build_loss\nfrom basicsr.metrics import calculate_metric\nfrom basicsr.utils import get_root_logger, imwrite, tensor2img\nfrom basicsr.utils.registry import MODEL_REGISTRY\nfrom .base_model import BaseModel\n\n@MODEL_REGISTRY.register()\nclass SRModel(BaseModel):\n    \"\"\"Base SR model for single image super-resolution.\"\"\"\n\n    def __init__(self, opt):\n        super(SRModel, self).__init__(opt)\n\n        # define network\n        self.net_g = build_network(opt['network_g'])\n        self.net_g = self.model_to_device(self.net_g)\n        self.print_network(self.net_g)\n\n        # load pretrained models\n        load_path = self.opt['path'].get('pretrain_network_g', None)\n        if load_path is not None:\n            param_key = self.opt['path'].get('param_key_g', 'params')\n            self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key)\n\n        if self.is_train:\n            self.init_training_settings()\n\n    def init_training_settings(self):\n        self.net_g.train()\n        train_opt = self.opt['train']\n\n        self.ema_decay = train_opt.get('ema_decay', 0)\n        if self.ema_decay > 0:\n            logger = get_root_logger()\n            logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}')\n            # define network net_g with Exponential Moving Average (EMA)\n            # net_g_ema is used only for testing on one GPU and saving\n            # There is no need to wrap with DistributedDataParallel\n            self.net_g_ema = build_network(self.opt['network_g']).to(self.device)\n            # load pretrained model\n            load_path = self.opt['path'].get('pretrain_network_g', None)\n            if load_path is not None:\n                self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')\n            else:\n                self.model_ema(0)  # copy net_g weight\n            self.net_g_ema.eval()\n\n        # define losses\n        if train_opt.get('pixel_opt'):\n            self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)\n        else:\n            self.cri_pix = None\n\n        if train_opt.get('perceptual_opt'):\n            self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)\n        else:\n            self.cri_perceptual = None\n\n        if self.cri_pix is None and self.cri_perceptual is None:\n            raise ValueError('Both pixel and perceptual losses are None.')\n\n        # set up optimizers and schedulers\n        self.setup_optimizers()\n        self.setup_schedulers()\n\n    def setup_optimizers(self):\n        train_opt = self.opt['train']\n        optim_params = []\n        for k, v in self.net_g.named_parameters():\n            if v.requires_grad:\n                optim_params.append(v)\n            else:\n                logger = get_root_logger()\n                logger.warning(f'Params {k} will not be optimized.')\n\n        optim_type = train_opt['optim_g'].pop('type')\n        self.optimizer_g = self.get_optimizer(optim_type, optim_params, **train_opt['optim_g'])\n        self.optimizers.append(self.optimizer_g)\n\n    def feed_data(self, data):\n        self.lq = data['lq'].to(self.device)\n        if 'gt' in data:\n            self.gt = data['gt'].to(self.device)\n\n    def 
optimize_parameters(self, current_iter):\n        self.optimizer_g.zero_grad()\n        self.output = self.net_g(self.lq)\n\n        l_total = 0\n        loss_dict = OrderedDict()\n        # pixel loss\n        if self.cri_pix:\n            l_pix = self.cri_pix(self.output, self.gt)\n            l_total += l_pix\n            loss_dict['l_pix'] = l_pix\n        # perceptual loss\n        if self.cri_perceptual:\n            l_percep, l_style = self.cri_perceptual(self.output, self.gt)\n            if l_percep is not None:\n                l_total += l_percep\n                loss_dict['l_percep'] = l_percep\n            if l_style is not None:\n                l_total += l_style\n                loss_dict['l_style'] = l_style\n\n        l_total.backward()\n        self.optimizer_g.step()\n\n        self.log_dict = self.reduce_loss_dict(loss_dict)\n\n        if self.ema_decay > 0:\n            self.model_ema(decay=self.ema_decay)\n\n    def test(self):\n        # test with the EMA copy only when it exists (ema_decay may be 0)\n        if hasattr(self, 'net_g_ema'):\n            self.net_g_ema.eval()\n            with torch.no_grad():\n                self.output = self.net_g_ema(self.lq)\n        else:\n            self.net_g.eval()\n            with torch.no_grad():\n                self.output = self.net_g(self.lq)\n            self.net_g.train()\n\n    def dist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        if self.opt['rank'] == 0:\n            self.nondist_validation(dataloader, current_iter, tb_logger, save_img)\n\n    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        dataset_name = dataloader.dataset.opt['name']\n        with_metrics = self.opt['val'].get('metrics') is not None\n        if with_metrics:\n            self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()}\n        pbar = tqdm(total=len(dataloader), unit='image')\n\n        for idx, val_data in enumerate(dataloader):\n            img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0]\n            self.feed_data(val_data)\n            self.test()\n\n            visuals = self.get_current_visuals()\n            sr_img = tensor2img([visuals['result']])\n            if 'gt' in visuals:\n                gt_img = tensor2img([visuals['gt']])\n                del self.gt\n\n            # tentative for out of GPU memory\n            del self.lq\n            del self.output\n            torch.cuda.empty_cache()\n\n            if save_img:\n                if self.opt['is_train']:\n                    save_img_path = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}.png')\n                else:\n                    if self.opt['val']['suffix']:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}.png')\n                    else:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}.png')\n                imwrite(sr_img, save_img_path)\n\n            if with_metrics:\n                # calculate metrics\n                for name, opt_ in self.opt['val']['metrics'].items():\n                    metric_data = dict(img1=sr_img, img2=gt_img)\n                    self.metric_results[name] += calculate_metric(metric_data, opt_)\n            pbar.update(1)\n            pbar.set_description(f'Test {img_name}')\n        pbar.close()\n\n        if with_metrics:\n            for metric in self.metric_results.keys():\n                self.metric_results[metric] /= (idx + 1)\n\n            self._log_validation_metric_values(current_iter, dataset_name, tb_logger)\n\n    def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger):\n        log_str = f'Validation {dataset_name}\\n'\n        for metric, value in self.metric_results.items():\n            log_str += f'\\t # {metric}: {value:.4f}\\n'\n        logger = get_root_logger()\n        logger.info(log_str)\n        if tb_logger:\n            for metric, value in self.metric_results.items():\n                tb_logger.add_scalar(f'metrics/{metric}', value, current_iter)\n\n    def get_current_visuals(self):\n        out_dict = OrderedDict()\n        out_dict['lq'] = self.lq.detach().cpu()\n        out_dict['result'] = self.output.detach().cpu()\n        if hasattr(self, 'gt'):\n            out_dict['gt'] = self.gt.detach().cpu()\n        return out_dict\n\n    def save(self, epoch, current_iter):\n        # net_g_ema only exists when ema_decay > 0\n        if self.ema_decay > 0:\n            self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])\n        else:\n            self.save_network(self.net_g, 'net_g', current_iter)\n        self.save_training_state(epoch, current_iter)\n"
  },
  {
    "path": "basicsr/models/vqgan_model.py",
    "content": "import torch\nfrom collections import OrderedDict\nfrom os import path as osp\nfrom tqdm import tqdm\n\nfrom basicsr.archs import build_network\nfrom basicsr.losses import build_loss\nfrom basicsr.metrics import calculate_metric\nfrom basicsr.utils import get_root_logger, imwrite, tensor2img\nfrom basicsr.utils.registry import MODEL_REGISTRY\nimport torch.nn.functional as F\nfrom .sr_model import SRModel\n\n\n@MODEL_REGISTRY.register()\nclass VQGANModel(SRModel):\n    def feed_data(self, data):\n        self.gt = data['gt'].to(self.device)\n        self.b = self.gt.shape[0]\n\n\n    def init_training_settings(self):\n        logger = get_root_logger()\n        train_opt = self.opt['train']\n\n        self.ema_decay = train_opt.get('ema_decay', 0)\n        if self.ema_decay > 0:\n            logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}')\n            # define network net_g with Exponential Moving Average (EMA)\n            # net_g_ema is used only for testing on one GPU and saving\n            # There is no need to wrap with DistributedDataParallel\n            self.net_g_ema = build_network(self.opt['network_g']).to(self.device)\n            # load pretrained model\n            load_path = self.opt['path'].get('pretrain_network_g', None)\n            if load_path is not None:\n                self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')\n            else:\n                self.model_ema(0)  # copy net_g weight\n            self.net_g_ema.eval()\n\n        # define network net_d\n        self.net_d = build_network(self.opt['network_d'])\n        self.net_d = self.model_to_device(self.net_d)\n        self.print_network(self.net_d)\n\n        # load pretrained models\n        load_path = self.opt['path'].get('pretrain_network_d', None)\n        if load_path is not None:\n            self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True))\n\n        self.net_g.train()\n        self.net_d.train()\n\n        # define losses\n        if train_opt.get('pixel_opt'):\n            self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)\n        else:\n            self.cri_pix = None\n\n        if train_opt.get('perceptual_opt'):\n            self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)\n        else:\n            self.cri_perceptual = None\n\n        if train_opt.get('gan_opt'):\n            self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)\n\n        if train_opt.get('codebook_opt'):\n            self.l_weight_codebook = train_opt['codebook_opt'].get('loss_weight', 1.0)\n        else:\n            self.l_weight_codebook = 1.0\n        \n        self.vqgan_quantizer = self.opt['network_g']['quantizer']\n        logger.info(f'vqgan_quantizer: {self.vqgan_quantizer}')\n\n        self.net_g_start_iter = train_opt.get('net_g_start_iter', 0)\n        self.net_d_iters = train_opt.get('net_d_iters', 1)\n        self.net_d_start_iter = train_opt.get('net_d_start_iter', 0)\n        self.disc_weight = train_opt.get('disc_weight', 0.8)\n\n        # set up optimizers and schedulers\n        self.setup_optimizers()\n        self.setup_schedulers()\n\n    def calculate_adaptive_weight(self, recon_loss, g_loss, last_layer, disc_weight_max):\n        recon_grads = torch.autograd.grad(recon_loss, last_layer, retain_graph=True)[0]\n        g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]\n\n        d_weight 
= torch.norm(recon_grads) / (torch.norm(g_grads) + 1e-4)\n        d_weight = torch.clamp(d_weight, 0.0, disc_weight_max).detach()\n        return d_weight\n\n    def adopt_weight(self, weight, global_step, threshold=0, value=0.):\n        if global_step < threshold:\n            weight = value\n        return weight\n\n    def setup_optimizers(self):\n        train_opt = self.opt['train']\n        # optimizer g\n        optim_params_g = []\n        for k, v in self.net_g.named_parameters():\n            if v.requires_grad:\n                optim_params_g.append(v)\n            else:\n                logger = get_root_logger()\n                logger.warning(f'Params {k} will not be optimized.')\n        optim_type = train_opt['optim_g'].pop('type')\n        self.optimizer_g = self.get_optimizer(optim_type, optim_params_g, **train_opt['optim_g'])\n        self.optimizers.append(self.optimizer_g)\n        # optimizer d\n        optim_type = train_opt['optim_d'].pop('type')\n        self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d'])\n        self.optimizers.append(self.optimizer_d)\n\n\n    def optimize_parameters(self, current_iter):\n        logger = get_root_logger()\n        loss_dict = OrderedDict()\n        if self.opt['network_g']['quantizer'] == 'gumbel':\n            self.net_g.module.quantize.temperature = max(1/16, ((-1/160000) * current_iter) + 1)\n            if current_iter % 1000 == 0:\n                logger.info(f'temperature: {self.net_g.module.quantize.temperature}')\n\n        # optimize net_g\n        for p in self.net_d.parameters():\n            p.requires_grad = False\n\n        self.optimizer_g.zero_grad()\n        self.output, l_codebook, quant_stats = self.net_g(self.gt)\n\n        l_codebook = l_codebook * self.l_weight_codebook\n\n        l_g_total = 0\n        if current_iter % self.net_d_iters == 0 and current_iter > self.net_g_start_iter:\n            # pixel loss\n            if self.cri_pix:\n                l_g_pix = self.cri_pix(self.output, self.gt)\n                l_g_total += l_g_pix\n                loss_dict['l_g_pix'] = l_g_pix\n            # perceptual loss\n            if self.cri_perceptual:\n                l_g_percep = self.cri_perceptual(self.output, self.gt)\n                l_g_total += l_g_percep\n                loss_dict['l_g_percep'] = l_g_percep\n\n            # gan loss\n            if current_iter > self.net_d_start_iter:\n                # fake_g_pred = self.net_d(self.output_1024)\n                fake_g_pred = self.net_d(self.output)\n                l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)\n                recon_loss = l_g_total\n                last_layer = self.net_g.module.generator.blocks[-1].weight\n                d_weight = self.calculate_adaptive_weight(recon_loss, l_g_gan, last_layer, disc_weight_max=1.0)\n                d_weight *= self.adopt_weight(1, current_iter, self.net_d_start_iter)\n                d_weight *= self.disc_weight  # taming setting 0.8\n                l_g_total += d_weight * l_g_gan\n                loss_dict['l_g_gan'] = d_weight * l_g_gan\n\n            l_g_total += l_codebook\n            loss_dict['l_codebook'] = l_codebook\n\n            l_g_total.backward()\n            self.optimizer_g.step()\n\n        # optimize net_d\n        if current_iter > self.net_d_start_iter:\n            for p in self.net_d.parameters():\n                p.requires_grad = True\n\n            self.optimizer_d.zero_grad()\n            # real\n            real_d_pred = self.net_d(self.gt)\n            l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)\n            loss_dict['l_d_real'] = l_d_real\n            loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())\n            l_d_real.backward()\n            # fake\n            fake_d_pred = self.net_d(self.output.detach())\n            l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)\n            loss_dict['l_d_fake'] = l_d_fake\n            loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())\n            l_d_fake.backward()\n            self.optimizer_d.step()\n\n        self.log_dict = self.reduce_loss_dict(loss_dict)\n\n        if self.ema_decay > 0:\n            self.model_ema(decay=self.ema_decay)\n\n\n    def test(self):\n        with torch.no_grad():\n            if hasattr(self, 'net_g_ema'):\n                self.net_g_ema.eval()\n                self.output, _, _ = self.net_g_ema(self.gt)\n            else:\n                logger = get_root_logger()\n                logger.warning('Do not have self.net_g_ema, use self.net_g.')\n                self.net_g.eval()\n                self.output, _, _ = self.net_g(self.gt)\n                self.net_g.train()\n\n\n    def dist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        if self.opt['rank'] == 0:\n            self.nondist_validation(dataloader, current_iter, tb_logger, save_img)\n\n\n    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        dataset_name = dataloader.dataset.opt['name']\n        with_metrics = self.opt['val'].get('metrics') is not None\n        if with_metrics:\n            self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()}\n        pbar = tqdm(total=len(dataloader), unit='image')\n\n        for idx, val_data in enumerate(dataloader):\n            img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0]\n            self.feed_data(val_data)\n            self.test()\n\n            visuals = self.get_current_visuals()\n            sr_img = tensor2img([visuals['result']])\n            if 'gt' in visuals:\n                gt_img = tensor2img([visuals['gt']])\n                del self.gt\n\n            # tentative for out of GPU memory (this model sets self.gt, not self.lq)\n            del self.output\n            torch.cuda.empty_cache()\n\n            if save_img:\n                if self.opt['is_train']:\n                    save_img_path = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}.png')\n                else:\n                    if self.opt['val']['suffix']:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}.png')\n                    else:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}.png')\n                imwrite(sr_img, save_img_path)\n\n            if with_metrics:\n                # calculate metrics\n                for name, opt_ in self.opt['val']['metrics'].items():\n                    metric_data = dict(img1=sr_img, img2=gt_img)\n                    self.metric_results[name] += calculate_metric(metric_data, opt_)\n            pbar.update(1)\n            pbar.set_description(f'Test {img_name}')\n        pbar.close()\n\n        if with_metrics:\n            for metric in self.metric_results.keys():\n                self.metric_results[metric] /= (idx + 1)\n\n            self._log_validation_metric_values(current_iter, dataset_name, tb_logger)\n\n\n    def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger):\n        log_str = f'Validation {dataset_name}\\n'\n        for metric, value in self.metric_results.items():\n            log_str += f'\\t # {metric}: {value:.4f}\\n'\n        logger = get_root_logger()\n        logger.info(log_str)\n        if tb_logger:\n            for metric, value in self.metric_results.items():\n                tb_logger.add_scalar(f'metrics/{metric}', value, current_iter)\n\n\n    def get_current_visuals(self):\n        out_dict = OrderedDict()\n        out_dict['gt'] = self.gt.detach().cpu()\n        out_dict['result'] = self.output.detach().cpu()\n        return out_dict\n\n    def save(self, epoch, current_iter):\n        if self.ema_decay > 0:\n            self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])\n        else:\n            self.save_network(self.net_g, 'net_g', current_iter)\n        self.save_network(self.net_d, 'net_d', current_iter)\n        self.save_training_state(epoch, current_iter)\n"
  },
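  {
    "path": "docs/examples/gumbel_temperature_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the Gumbel\n# quantizer temperature schedule used in `VQGANModel.optimize_parameters`\n# above, temperature = max(1/16, 1 - current_iter / 160000): a linear decay\n# from 1.0 that reaches the 1/16 floor at iteration 150000.\ndef gumbel_temperature(current_iter):\n    return max(1 / 16, ((-1 / 160000) * current_iter) + 1)\n\n\nfor it in (0, 40000, 80000, 150000, 200000):\n    print(it, gumbel_temperature(it))\n# 0 1.0, 40000 0.75, 80000 0.5, 150000 0.0625, 200000 0.0625\n"
  },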
  {
    "path": "basicsr/ops/__init__.py",
    "content": ""
  },
  {
    "path": "basicsr/ops/dcn/__init__.py",
    "content": "from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack, deform_conv,\n                          modulated_deform_conv)\n\n__all__ = [\n    'DeformConv', 'DeformConvPack', 'ModulatedDeformConv', 'ModulatedDeformConvPack', 'deform_conv',\n    'modulated_deform_conv'\n]\n"
  },
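  {
    "path": "docs/examples/deform_conv_shape_sketch.py",
    "content": "# Illustrative sketch (not part of the original repository): the spatial\n# output-size arithmetic used by `DeformConvFunction._output_size` and\n# `ModulatedDeformConvFunction._infer_shape` in deform_conv.py below:\n# out = (in + 2 * pad - (dilation * (kernel - 1) + 1)) // stride + 1\ndef conv_out_size(in_size, kernel, stride=1, pad=0, dilation=1):\n    return (in_size + 2 * pad - (dilation * (kernel - 1) + 1)) // stride + 1\n\n\n# a 3x3 deformable conv with padding 1 and stride 1 keeps the spatial size\nprint(conv_out_size(64, kernel=3, stride=1, pad=1))  # 64\nprint(conv_out_size(64, kernel=3, stride=2, pad=1))  # 32\n"
  },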
  {
    "path": "basicsr/ops/dcn/deform_conv.py",
    "content": "import math\nimport torch\nfrom torch import nn as nn\nfrom torch.autograd import Function\nfrom torch.autograd.function import once_differentiable\nfrom torch.nn import functional as F\nfrom torch.nn.modules.utils import _pair, _single\n\ntry:\n    from . import deform_conv_ext\nexcept ImportError:\n    import os\n    BASICSR_JIT = os.getenv('BASICSR_JIT')\n    if BASICSR_JIT == 'True':\n        from torch.utils.cpp_extension import load\n        module_path = os.path.dirname(__file__)\n        deform_conv_ext = load(\n            'deform_conv',\n            sources=[\n                os.path.join(module_path, 'src', 'deform_conv_ext.cpp'),\n                os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'),\n                os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'),\n            ],\n        )\n\n\nclass DeformConvFunction(Function):\n\n    @staticmethod\n    def forward(ctx,\n                input,\n                offset,\n                weight,\n                stride=1,\n                padding=0,\n                dilation=1,\n                groups=1,\n                deformable_groups=1,\n                im2col_step=64):\n        if input is not None and input.dim() != 4:\n            raise ValueError(f'Expected 4D tensor as input, got {input.dim()}' 'D tensor instead.')\n        ctx.stride = _pair(stride)\n        ctx.padding = _pair(padding)\n        ctx.dilation = _pair(dilation)\n        ctx.groups = groups\n        ctx.deformable_groups = deformable_groups\n        ctx.im2col_step = im2col_step\n\n        ctx.save_for_backward(input, offset, weight)\n\n        output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride))\n\n        ctx.bufs_ = [input.new_empty(0), input.new_empty(0)]  # columns, ones\n\n        if not input.is_cuda:\n            raise NotImplementedError\n        else:\n            cur_im2col_step = min(ctx.im2col_step, input.shape[0])\n            assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'\n            deform_conv_ext.deform_conv_forward(input, weight,\n                                                offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3),\n                                                weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],\n                                                ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,\n                                                ctx.deformable_groups, cur_im2col_step)\n        return output\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, grad_output):\n        input, offset, weight = ctx.saved_tensors\n\n        grad_input = grad_offset = grad_weight = None\n\n        if not grad_output.is_cuda:\n            raise NotImplementedError\n        else:\n            cur_im2col_step = min(ctx.im2col_step, input.shape[0])\n            assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'\n\n            if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:\n                grad_input = torch.zeros_like(input)\n                grad_offset = torch.zeros_like(offset)\n                deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input,\n                                                           grad_offset, weight, ctx.bufs_[0], weight.size(3),\n                                                           weight.size(2), ctx.stride[1], ctx.stride[0], 
ctx.padding[1],\n                                                           ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,\n                                                           ctx.deformable_groups, cur_im2col_step)\n\n            if ctx.needs_input_grad[2]:\n                grad_weight = torch.zeros_like(weight)\n                deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight,\n                                                                ctx.bufs_[0], ctx.bufs_[1], weight.size(3),\n                                                                weight.size(2), ctx.stride[1], ctx.stride[0],\n                                                                ctx.padding[1], ctx.padding[0], ctx.dilation[1],\n                                                                ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1,\n                                                                cur_im2col_step)\n\n        return (grad_input, grad_offset, grad_weight, None, None, None, None, None)\n\n    @staticmethod\n    def _output_size(input, weight, padding, dilation, stride):\n        channels = weight.size(0)\n        output_size = (input.size(0), channels)\n        for d in range(input.dim() - 2):\n            in_size = input.size(d + 2)\n            pad = padding[d]\n            kernel = dilation[d] * (weight.size(d + 2) - 1) + 1\n            stride_ = stride[d]\n            output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )\n        if not all(map(lambda s: s > 0, output_size)):\n            raise ValueError('convolution input is too small (output would be ' f'{\"x\".join(map(str, output_size))})')\n        return output_size\n\n\nclass ModulatedDeformConvFunction(Function):\n\n    @staticmethod\n    def forward(ctx,\n                input,\n                offset,\n                mask,\n                weight,\n                bias=None,\n                stride=1,\n                padding=0,\n                dilation=1,\n                groups=1,\n                deformable_groups=1):\n        ctx.stride = stride\n        ctx.padding = padding\n        ctx.dilation = dilation\n        ctx.groups = groups\n        ctx.deformable_groups = deformable_groups\n        ctx.with_bias = bias is not None\n        if not ctx.with_bias:\n            bias = input.new_empty(1)  # fake tensor\n        if not input.is_cuda:\n            raise NotImplementedError\n        if weight.requires_grad or mask.requires_grad or offset.requires_grad \\\n                or input.requires_grad:\n            ctx.save_for_backward(input, offset, mask, weight, bias)\n        output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight))\n        ctx._bufs = [input.new_empty(0), input.new_empty(0)]\n        deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output,\n                                                      ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride,\n                                                      ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,\n                                                      ctx.groups, ctx.deformable_groups, ctx.with_bias)\n        return output\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, grad_output):\n        if not grad_output.is_cuda:\n            raise NotImplementedError\n        input, offset, mask, weight, bias = ctx.saved_tensors\n        grad_input = 
torch.zeros_like(input)\n        grad_offset = torch.zeros_like(offset)\n        grad_mask = torch.zeros_like(mask)\n        grad_weight = torch.zeros_like(weight)\n        grad_bias = torch.zeros_like(bias)\n        deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1],\n                                                       grad_input, grad_weight, grad_bias, grad_offset, grad_mask,\n                                                       grad_output, weight.shape[2], weight.shape[3], ctx.stride,\n                                                       ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,\n                                                       ctx.groups, ctx.deformable_groups, ctx.with_bias)\n        if not ctx.with_bias:\n            grad_bias = None\n\n        return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None)\n\n    @staticmethod\n    def _infer_shape(ctx, input, weight):\n        n = input.size(0)\n        channels_out = weight.size(0)\n        height, width = input.shape[2:4]\n        kernel_h, kernel_w = weight.shape[2:4]\n        height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1\n        width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1\n        return n, channels_out, height_out, width_out\n\n\ndeform_conv = DeformConvFunction.apply\nmodulated_deform_conv = ModulatedDeformConvFunction.apply\n\n\nclass DeformConv(nn.Module):\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 stride=1,\n                 padding=0,\n                 dilation=1,\n                 groups=1,\n                 deformable_groups=1,\n                 bias=False):\n        super(DeformConv, self).__init__()\n\n        assert not bias\n        assert in_channels % groups == 0, \\\n            f'in_channels {in_channels} is not divisible by groups {groups}'\n        assert out_channels % groups == 0, \\\n            f'out_channels {out_channels} is not divisible ' \\\n            f'by groups {groups}'\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = _pair(kernel_size)\n        self.stride = _pair(stride)\n        self.padding = _pair(padding)\n        self.dilation = _pair(dilation)\n        self.groups = groups\n        self.deformable_groups = deformable_groups\n        # enable compatibility with nn.Conv2d\n        self.transposed = False\n        self.output_padding = _single(0)\n\n        self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size))\n\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        n = self.in_channels\n        for k in self.kernel_size:\n            n *= k\n        stdv = 1. 
/ math.sqrt(n)\n        self.weight.data.uniform_(-stdv, stdv)\n\n    def forward(self, x, offset):\n        # To fix an assert error in deform_conv_cuda.cpp:128\n        # input image is smaller than kernel\n        input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1])\n        if input_pad:\n            pad_h = max(self.kernel_size[0] - x.size(2), 0)\n            pad_w = max(self.kernel_size[1] - x.size(3), 0)\n            x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()\n            offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()\n        out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,\n                          self.deformable_groups)\n        if input_pad:\n            out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous()\n        return out\n\n\nclass DeformConvPack(DeformConv):\n    \"\"\"A Deformable Conv Encapsulation that acts as normal Conv layers.\n\n    Args:\n        in_channels (int): Same as nn.Conv2d.\n        out_channels (int): Same as nn.Conv2d.\n        kernel_size (int or tuple[int]): Same as nn.Conv2d.\n        stride (int or tuple[int]): Same as nn.Conv2d.\n        padding (int or tuple[int]): Same as nn.Conv2d.\n        dilation (int or tuple[int]): Same as nn.Conv2d.\n        groups (int): Same as nn.Conv2d.\n        bias (bool or str): If specified as `auto`, it will be decided by the\n            norm_cfg. Bias will be set as True if norm_cfg is None, otherwise\n            False.\n    \"\"\"\n\n    _version = 2\n\n    def __init__(self, *args, **kwargs):\n        super(DeformConvPack, self).__init__(*args, **kwargs)\n\n        self.conv_offset = nn.Conv2d(\n            self.in_channels,\n            self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1],\n            kernel_size=self.kernel_size,\n            stride=_pair(self.stride),\n            padding=_pair(self.padding),\n            dilation=_pair(self.dilation),\n            bias=True)\n        self.init_offset()\n\n    def init_offset(self):\n        self.conv_offset.weight.data.zero_()\n        self.conv_offset.bias.data.zero_()\n\n    def forward(self, x):\n        offset = self.conv_offset(x)\n        return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,\n                           self.deformable_groups)\n\n\nclass ModulatedDeformConv(nn.Module):\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 stride=1,\n                 padding=0,\n                 dilation=1,\n                 groups=1,\n                 deformable_groups=1,\n                 bias=True):\n        super(ModulatedDeformConv, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = _pair(kernel_size)\n        self.stride = stride\n        self.padding = padding\n        self.dilation = dilation\n        self.groups = groups\n        self.deformable_groups = deformable_groups\n        self.with_bias = bias\n        # enable compatibility with nn.Conv2d\n        self.transposed = False\n        self.output_padding = _single(0)\n\n        self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size))\n        if bias:\n            self.bias = nn.Parameter(torch.Tensor(out_channels))\n        else:\n            self.register_parameter('bias', None)\n       
 self.init_weights()\n\n    def init_weights(self):\n        n = self.in_channels\n        for k in self.kernel_size:\n            n *= k\n        stdv = 1. / math.sqrt(n)\n        self.weight.data.uniform_(-stdv, stdv)\n        if self.bias is not None:\n            self.bias.data.zero_()\n\n    def forward(self, x, offset, mask):\n        return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,\n                                     self.groups, self.deformable_groups)\n\n\nclass ModulatedDeformConvPack(ModulatedDeformConv):\n    \"\"\"A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers.\n\n    Args:\n        in_channels (int): Same as nn.Conv2d.\n        out_channels (int): Same as nn.Conv2d.\n        kernel_size (int or tuple[int]): Same as nn.Conv2d.\n        stride (int or tuple[int]): Same as nn.Conv2d.\n        padding (int or tuple[int]): Same as nn.Conv2d.\n        dilation (int or tuple[int]): Same as nn.Conv2d.\n        groups (int): Same as nn.Conv2d.\n        bias (bool or str): If specified as `auto`, it will be decided by the\n            norm_cfg. Bias will be set as True if norm_cfg is None, otherwise\n            False.\n    \"\"\"\n\n    _version = 2\n\n    def __init__(self, *args, **kwargs):\n        super(ModulatedDeformConvPack, self).__init__(*args, **kwargs)\n\n        self.conv_offset = nn.Conv2d(\n            self.in_channels,\n            self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1],\n            kernel_size=self.kernel_size,\n            stride=_pair(self.stride),\n            padding=_pair(self.padding),\n            dilation=_pair(self.dilation),\n            bias=True)\n        self.init_weights()\n\n    def init_weights(self):\n        super(ModulatedDeformConvPack, self).init_weights()\n        if hasattr(self, 'conv_offset'):\n            self.conv_offset.weight.data.zero_()\n            self.conv_offset.bias.data.zero_()\n\n    def forward(self, x):\n        out = self.conv_offset(x)\n        o1, o2, mask = torch.chunk(out, 3, dim=1)\n        offset = torch.cat((o1, o2), dim=1)\n        mask = torch.sigmoid(mask)\n        return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,\n                                     self.groups, self.deformable_groups)\n"
  },
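  {
    "path": "basicsr/ops/dcn/usage_example.py",
    "content": "# Hypothetical usage sketch (added for documentation; not part of the original\n# source tree). It shows the two \"Pack\" wrappers from deform_conv.py acting as\n# drop-in replacements for nn.Conv2d. Assumes a CUDA device and that the\n# deform_conv_ext extension is available, either prebuilt or JIT-compiled via\n# the BASICSR_JIT environment variable checked in deform_conv.py.\nimport os\n\nos.environ.setdefault('BASICSR_JIT', 'True')  # allow JIT build if no prebuilt ext\n\nimport torch\n\nfrom basicsr.ops.dcn.deform_conv import DeformConvPack, ModulatedDeformConvPack\n\nif __name__ == '__main__':\n    x = torch.randn(2, 64, 32, 32, device='cuda')\n\n    # DCNv1: an internal nn.Conv2d predicts 2*kH*kW offset channels from x.\n    dcn_v1 = DeformConvPack(64, 64, kernel_size=3, padding=1).cuda()\n    print(dcn_v1(x).shape)  # torch.Size([2, 64, 32, 32])\n\n    # DCNv2: additionally predicts a sigmoid-gated modulation mask per location.\n    dcn_v2 = ModulatedDeformConvPack(64, 64, kernel_size=3, padding=1).cuda()\n    print(dcn_v2(x).shape)  # torch.Size([2, 64, 32, 32])\n"
  },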
  {
    "path": "basicsr/ops/dcn/src/deform_conv_cuda.cpp",
    "content": "// modify from\n// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c\n\n#include <torch/extension.h>\n#include <ATen/DeviceGuard.h>\n\n#include <cmath>\n#include <vector>\n\nvoid deformable_im2col(const at::Tensor data_im, const at::Tensor data_offset,\n                       const int channels, const int height, const int width,\n                       const int ksize_h, const int ksize_w, const int pad_h,\n                       const int pad_w, const int stride_h, const int stride_w,\n                       const int dilation_h, const int dilation_w,\n                       const int parallel_imgs, const int deformable_group,\n                       at::Tensor data_col);\n\nvoid deformable_col2im(const at::Tensor data_col, const at::Tensor data_offset,\n                       const int channels, const int height, const int width,\n                       const int ksize_h, const int ksize_w, const int pad_h,\n                       const int pad_w, const int stride_h, const int stride_w,\n                       const int dilation_h, const int dilation_w,\n                       const int parallel_imgs, const int deformable_group,\n                       at::Tensor grad_im);\n\nvoid deformable_col2im_coord(\n    const at::Tensor data_col, const at::Tensor data_im,\n    const at::Tensor data_offset, const int channels, const int height,\n    const int width, const int ksize_h, const int ksize_w, const int pad_h,\n    const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w, const int parallel_imgs,\n    const int deformable_group, at::Tensor grad_offset);\n\nvoid modulated_deformable_im2col_cuda(\n    const at::Tensor data_im, const at::Tensor data_offset,\n    const at::Tensor data_mask, const int batch_size, const int channels,\n    const int height_im, const int width_im, const int height_col,\n    const int width_col, const int kernel_h, const int kenerl_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w, const int deformable_group,\n    at::Tensor data_col);\n\nvoid modulated_deformable_col2im_cuda(\n    const at::Tensor data_col, const at::Tensor data_offset,\n    const at::Tensor data_mask, const int batch_size, const int channels,\n    const int height_im, const int width_im, const int height_col,\n    const int width_col, const int kernel_h, const int kenerl_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w, const int deformable_group,\n    at::Tensor grad_im);\n\nvoid modulated_deformable_col2im_coord_cuda(\n    const at::Tensor data_col, const at::Tensor data_im,\n    const at::Tensor data_offset, const at::Tensor data_mask,\n    const int batch_size, const int channels, const int height_im,\n    const int width_im, const int height_col, const int width_col,\n    const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w, const int dilation_h,\n    const int dilation_w, const int deformable_group, at::Tensor grad_offset,\n    at::Tensor grad_mask);\n\nvoid shape_check(at::Tensor input, at::Tensor offset, at::Tensor *gradOutput,\n                 at::Tensor weight, int kH, int kW, int dH, int dW, int padH,\n                 int padW, int dilationH, int dilationW, int group,\n                 int deformable_group) {\n  
TORCH_CHECK(weight.ndimension() == 4,\n           \"4D weight tensor (nOutputPlane,nInputPlane,kH,kW) expected, \"\n           \"but got: %s\",\n           weight.ndimension());\n\n  TORCH_CHECK(weight.is_contiguous(), \"weight tensor has to be contiguous\");\n\n  TORCH_CHECK(kW > 0 && kH > 0,\n           \"kernel size should be greater than zero, but got kH: %d kW: %d\", kH,\n           kW);\n\n  TORCH_CHECK((weight.size(2) == kH && weight.size(3) == kW),\n           \"kernel size should be consistent with weight, \",\n           \"but got kH: %d kW: %d weight.size(2): %d, weight.size(3): %d\", kH,\n           kW, weight.size(2), weight.size(3));\n\n  TORCH_CHECK(dW > 0 && dH > 0,\n           \"stride should be greater than zero, but got dH: %d dW: %d\", dH, dW);\n\n  TORCH_CHECK(\n      dilationW > 0 && dilationH > 0,\n      \"dilation should be greater than 0, but got dilationH: %d dilationW: %d\",\n      dilationH, dilationW);\n\n  int ndim = input.ndimension();\n  int dimf = 0;\n  int dimh = 1;\n  int dimw = 2;\n\n  if (ndim == 4) {\n    dimf++;\n    dimh++;\n    dimw++;\n  }\n\n  TORCH_CHECK(ndim == 3 || ndim == 4, \"3D or 4D input tensor expected but got: %s\",\n           ndim);\n\n  long nInputPlane = weight.size(1) * group;\n  long inputHeight = input.size(dimh);\n  long inputWidth = input.size(dimw);\n  long nOutputPlane = weight.size(0);\n  long outputHeight =\n      (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1;\n  long outputWidth =\n      (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1;\n\n  TORCH_CHECK(nInputPlane % deformable_group == 0,\n           \"input channels must divide deformable group size\");\n\n  if (outputWidth < 1 || outputHeight < 1)\n    AT_ERROR(\n        \"Given input size: (%ld x %ld x %ld). \"\n        \"Calculated output size: (%ld x %ld x %ld). 
Output size is too small\",\n        nInputPlane, inputHeight, inputWidth, nOutputPlane, outputHeight,\n        outputWidth);\n\n  TORCH_CHECK(input.size(1) == nInputPlane,\n           \"invalid number of input planes, expected: %d, but got: %d\",\n           nInputPlane, input.size(1));\n\n  TORCH_CHECK((inputHeight >= kH && inputWidth >= kW),\n           \"input image is smaller than kernel\");\n\n  TORCH_CHECK((offset.size(2) == outputHeight && offset.size(3) == outputWidth),\n           \"invalid spatial size of offset, expected height: %d width: %d, but \"\n           \"got height: %d width: %d\",\n           outputHeight, outputWidth, offset.size(2), offset.size(3));\n\n  TORCH_CHECK((offset.size(1) == deformable_group * 2 * kH * kW),\n           \"invalid number of channels of offset\");\n\n  if (gradOutput != NULL) {\n    TORCH_CHECK(gradOutput->size(dimf) == nOutputPlane,\n             \"invalid number of gradOutput planes, expected: %d, but got: %d\",\n             nOutputPlane, gradOutput->size(dimf));\n\n    TORCH_CHECK((gradOutput->size(dimh) == outputHeight &&\n              gradOutput->size(dimw) == outputWidth),\n             \"invalid size of gradOutput, expected height: %d width: %d , but \"\n             \"got height: %d width: %d\",\n             outputHeight, outputWidth, gradOutput->size(dimh),\n             gradOutput->size(dimw));\n  }\n}\n\nint deform_conv_forward_cuda(at::Tensor input, at::Tensor weight,\n                             at::Tensor offset, at::Tensor output,\n                             at::Tensor columns, at::Tensor ones, int kW,\n                             int kH, int dW, int dH, int padW, int padH,\n                             int dilationW, int dilationH, int group,\n                             int deformable_group, int im2col_step) {\n  // todo: resize columns to include im2col: done\n  // todo: add im2col_step as input\n  // todo: add new output buffer and transpose it to output (or directly\n  // transpose output) todo: possibly change data indexing because of\n  // parallel_imgs\n\n  shape_check(input, offset, NULL, weight, kH, kW, dH, dW, padH, padW,\n              dilationH, dilationW, group, deformable_group);\n  at::DeviceGuard guard(input.device());\n\n  input = input.contiguous();\n  offset = offset.contiguous();\n  weight = weight.contiguous();\n\n  int batch = 1;\n  if (input.ndimension() == 3) {\n    // Force batch\n    batch = 0;\n    input.unsqueeze_(0);\n    offset.unsqueeze_(0);\n  }\n\n  // todo: assert batchsize dividable by im2col_step\n\n  long batchSize = input.size(0);\n  long nInputPlane = input.size(1);\n  long inputHeight = input.size(2);\n  long inputWidth = input.size(3);\n\n  long nOutputPlane = weight.size(0);\n\n  long outputWidth =\n      (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1;\n  long outputHeight =\n      (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1;\n\n  TORCH_CHECK((offset.size(0) == batchSize), \"invalid batch size of offset\");\n\n  output = output.view({batchSize / im2col_step, im2col_step, nOutputPlane,\n                        outputHeight, outputWidth});\n  columns = at::zeros(\n      {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth},\n      input.options());\n\n  if (ones.ndimension() != 2 ||\n      ones.size(0) * ones.size(1) < outputHeight * outputWidth) {\n    ones = at::ones({outputHeight, outputWidth}, input.options());\n  }\n\n  input = input.view({batchSize / im2col_step, im2col_step, nInputPlane,\n                      inputHeight, 
inputWidth});\n  offset =\n      offset.view({batchSize / im2col_step, im2col_step,\n                   deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  at::Tensor output_buffer =\n      at::zeros({batchSize / im2col_step, nOutputPlane,\n                 im2col_step * outputHeight, outputWidth},\n                output.options());\n\n  output_buffer = output_buffer.view(\n      {output_buffer.size(0), group, output_buffer.size(1) / group,\n       output_buffer.size(2), output_buffer.size(3)});\n\n  for (int elt = 0; elt < batchSize / im2col_step; elt++) {\n    deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight,\n                      inputWidth, kH, kW, padH, padW, dH, dW, dilationH,\n                      dilationW, im2col_step, deformable_group, columns);\n\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n    weight = weight.view({group, weight.size(0) / group, weight.size(1),\n                          weight.size(2), weight.size(3)});\n\n    for (int g = 0; g < group; g++) {\n      output_buffer[elt][g] = output_buffer[elt][g]\n                                  .flatten(1)\n                                  .addmm_(weight[g].flatten(1), columns[g])\n                                  .view_as(output_buffer[elt][g]);\n    }\n  }\n\n  output_buffer = output_buffer.view(\n      {output_buffer.size(0), output_buffer.size(1) * output_buffer.size(2),\n       output_buffer.size(3), output_buffer.size(4)});\n\n  output_buffer = output_buffer.view({batchSize / im2col_step, nOutputPlane,\n                                      im2col_step, outputHeight, outputWidth});\n  output_buffer.transpose_(1, 2);\n  output.copy_(output_buffer);\n  output = output.view({batchSize, nOutputPlane, outputHeight, outputWidth});\n\n  input = input.view({batchSize, nInputPlane, inputHeight, inputWidth});\n  offset = offset.view(\n      {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  if (batch == 0) {\n    output = output.view({nOutputPlane, outputHeight, outputWidth});\n    input = input.view({nInputPlane, inputHeight, inputWidth});\n    offset = offset.view({offset.size(1), offset.size(2), offset.size(3)});\n  }\n\n  return 1;\n}\n\nint deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset,\n                                    at::Tensor gradOutput, at::Tensor gradInput,\n                                    at::Tensor gradOffset, at::Tensor weight,\n                                    at::Tensor columns, int kW, int kH, int dW,\n                                    int dH, int padW, int padH, int dilationW,\n                                    int dilationH, int group,\n                                    int deformable_group, int im2col_step) {\n  shape_check(input, offset, &gradOutput, weight, kH, kW, dH, dW, padH, padW,\n              dilationH, dilationW, group, deformable_group);\n  at::DeviceGuard guard(input.device());\n\n  input = input.contiguous();\n  offset = offset.contiguous();\n  gradOutput = gradOutput.contiguous();\n  weight = weight.contiguous();\n\n  int batch = 1;\n\n  if (input.ndimension() == 3) {\n    // Force batch\n    batch = 0;\n    input = input.view({1, input.size(0), input.size(1), input.size(2)});\n    offset = offset.view({1, offset.size(0), offset.size(1), offset.size(2)});\n    gradOutput = gradOutput.view(\n        {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)});\n  }\n\n  long batchSize = input.size(0);\n  long nInputPlane = input.size(1);\n  long 
inputHeight = input.size(2);\n  long inputWidth = input.size(3);\n\n  long nOutputPlane = weight.size(0);\n\n  long outputWidth =\n      (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1;\n  long outputHeight =\n      (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1;\n\n  TORCH_CHECK((offset.size(0) == batchSize), 3, \"invalid batch size of offset\");\n  gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth});\n  columns = at::zeros(\n      {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth},\n      input.options());\n\n  // change order of grad output\n  gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step,\n                                nOutputPlane, outputHeight, outputWidth});\n  gradOutput.transpose_(1, 2);\n\n  gradInput = gradInput.view({batchSize / im2col_step, im2col_step, nInputPlane,\n                              inputHeight, inputWidth});\n  input = input.view({batchSize / im2col_step, im2col_step, nInputPlane,\n                      inputHeight, inputWidth});\n  gradOffset = gradOffset.view({batchSize / im2col_step, im2col_step,\n                                deformable_group * 2 * kH * kW, outputHeight,\n                                outputWidth});\n  offset =\n      offset.view({batchSize / im2col_step, im2col_step,\n                   deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  for (int elt = 0; elt < batchSize / im2col_step; elt++) {\n    // divide into groups\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n    weight = weight.view({group, weight.size(0) / group, weight.size(1),\n                          weight.size(2), weight.size(3)});\n    gradOutput = gradOutput.view(\n        {gradOutput.size(0), group, gradOutput.size(1) / group,\n         gradOutput.size(2), gradOutput.size(3), gradOutput.size(4)});\n\n    for (int g = 0; g < group; g++) {\n      columns[g] = columns[g].addmm_(weight[g].flatten(1).transpose(0, 1),\n                                     gradOutput[elt][g].flatten(1), 0.0f, 1.0f);\n    }\n\n    columns =\n        columns.view({columns.size(0) * columns.size(1), columns.size(2)});\n    gradOutput = gradOutput.view(\n        {gradOutput.size(0), gradOutput.size(1) * gradOutput.size(2),\n         gradOutput.size(3), gradOutput.size(4), gradOutput.size(5)});\n\n    deformable_col2im_coord(columns, input[elt], offset[elt], nInputPlane,\n                            inputHeight, inputWidth, kH, kW, padH, padW, dH, dW,\n                            dilationH, dilationW, im2col_step, deformable_group,\n                            gradOffset[elt]);\n\n    deformable_col2im(columns, offset[elt], nInputPlane, inputHeight,\n                      inputWidth, kH, kW, padH, padW, dH, dW, dilationH,\n                      dilationW, im2col_step, deformable_group, gradInput[elt]);\n  }\n\n  gradOutput.transpose_(1, 2);\n  gradOutput =\n      gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth});\n\n  gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth});\n  input = input.view({batchSize, nInputPlane, inputHeight, inputWidth});\n  gradOffset = gradOffset.view(\n      {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n  offset = offset.view(\n      {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  if (batch == 0) {\n    gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth});\n    input = input.view({nInputPlane, inputHeight, 
inputWidth});\n    gradInput = gradInput.view({nInputPlane, inputHeight, inputWidth});\n    offset = offset.view({offset.size(1), offset.size(2), offset.size(3)});\n    gradOffset =\n        gradOffset.view({offset.size(1), offset.size(2), offset.size(3)});\n  }\n\n  return 1;\n}\n\nint deform_conv_backward_parameters_cuda(\n    at::Tensor input, at::Tensor offset, at::Tensor gradOutput,\n    at::Tensor gradWeight,  // at::Tensor gradBias,\n    at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH,\n    int padW, int padH, int dilationW, int dilationH, int group,\n    int deformable_group, float scale, int im2col_step) {\n  // todo: transpose and reshape outGrad\n  // todo: reshape columns\n  // todo: add im2col_step as input\n\n  shape_check(input, offset, &gradOutput, gradWeight, kH, kW, dH, dW, padH,\n              padW, dilationH, dilationW, group, deformable_group);\n  at::DeviceGuard guard(input.device());\n\n  input = input.contiguous();\n  offset = offset.contiguous();\n  gradOutput = gradOutput.contiguous();\n\n  int batch = 1;\n\n  if (input.ndimension() == 3) {\n    // Force batch\n    batch = 0;\n    input = input.view(\n        at::IntList({1, input.size(0), input.size(1), input.size(2)}));\n    gradOutput = gradOutput.view(\n        {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)});\n  }\n\n  long batchSize = input.size(0);\n  long nInputPlane = input.size(1);\n  long inputHeight = input.size(2);\n  long inputWidth = input.size(3);\n\n  long nOutputPlane = gradWeight.size(0);\n\n  long outputWidth =\n      (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1;\n  long outputHeight =\n      (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1;\n\n  TORCH_CHECK((offset.size(0) == batchSize), \"invalid batch size of offset\");\n\n  columns = at::zeros(\n      {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth},\n      input.options());\n\n  gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step,\n                                nOutputPlane, outputHeight, outputWidth});\n  gradOutput.transpose_(1, 2);\n\n  at::Tensor gradOutputBuffer = at::zeros_like(gradOutput);\n  gradOutputBuffer =\n      gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, im2col_step,\n                             outputHeight, outputWidth});\n  gradOutputBuffer.copy_(gradOutput);\n  gradOutputBuffer =\n      gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane,\n                             im2col_step * outputHeight, outputWidth});\n\n  gradOutput.transpose_(1, 2);\n  gradOutput =\n      gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth});\n\n  input = input.view({batchSize / im2col_step, im2col_step, nInputPlane,\n                      inputHeight, inputWidth});\n  offset =\n      offset.view({batchSize / im2col_step, im2col_step,\n                   deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  for (int elt = 0; elt < batchSize / im2col_step; elt++) {\n    deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight,\n                      inputWidth, kH, kW, padH, padW, dH, dW, dilationH,\n                      dilationW, im2col_step, deformable_group, columns);\n\n    // divide into group\n    gradOutputBuffer = gradOutputBuffer.view(\n        {gradOutputBuffer.size(0), group, gradOutputBuffer.size(1) / group,\n         gradOutputBuffer.size(2), gradOutputBuffer.size(3)});\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n    gradWeight 
=\n        gradWeight.view({group, gradWeight.size(0) / group, gradWeight.size(1),\n                         gradWeight.size(2), gradWeight.size(3)});\n\n    for (int g = 0; g < group; g++) {\n      gradWeight[g] = gradWeight[g]\n                          .flatten(1)\n                          .addmm_(gradOutputBuffer[elt][g].flatten(1),\n                                  columns[g].transpose(1, 0), 1.0, scale)\n                          .view_as(gradWeight[g]);\n    }\n    gradOutputBuffer = gradOutputBuffer.view(\n        {gradOutputBuffer.size(0),\n         gradOutputBuffer.size(1) * gradOutputBuffer.size(2),\n         gradOutputBuffer.size(3), gradOutputBuffer.size(4)});\n    columns =\n        columns.view({columns.size(0) * columns.size(1), columns.size(2)});\n    gradWeight = gradWeight.view({gradWeight.size(0) * gradWeight.size(1),\n                                  gradWeight.size(2), gradWeight.size(3),\n                                  gradWeight.size(4)});\n  }\n\n  input = input.view({batchSize, nInputPlane, inputHeight, inputWidth});\n  offset = offset.view(\n      {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth});\n\n  if (batch == 0) {\n    gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth});\n    input = input.view({nInputPlane, inputHeight, inputWidth});\n  }\n\n  return 1;\n}\n\nvoid modulated_deform_conv_cuda_forward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns,\n    int kernel_h, int kernel_w, const int stride_h, const int stride_w,\n    const int pad_h, const int pad_w, const int dilation_h,\n    const int dilation_w, const int group, const int deformable_group,\n    const bool with_bias) {\n  TORCH_CHECK(input.is_contiguous(), \"input tensor has to be contiguous\");\n  TORCH_CHECK(weight.is_contiguous(), \"weight tensor has to be contiguous\");\n  at::DeviceGuard guard(input.device());\n\n  const int batch = input.size(0);\n  const int channels = input.size(1);\n  const int height = input.size(2);\n  const int width = input.size(3);\n\n  const int channels_out = weight.size(0);\n  const int channels_kernel = weight.size(1);\n  const int kernel_h_ = weight.size(2);\n  const int kernel_w_ = weight.size(3);\n\n  if (kernel_h_ != kernel_h || kernel_w_ != kernel_w)\n    AT_ERROR(\"Input shape and kernel shape wont match: (%d x %d vs %d x %d).\",\n             kernel_h_, kernel_w, kernel_h_, kernel_w_);\n  if (channels != channels_kernel * group)\n    AT_ERROR(\"Input shape and kernel channels wont match: (%d vs %d).\",\n             channels, channels_kernel * group);\n\n  const int height_out =\n      (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  const int width_out =\n      (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n\n  if (ones.ndimension() != 2 ||\n      ones.size(0) * ones.size(1) < height_out * width_out) {\n    // Resize plane and fill with ones...\n    ones = at::ones({height_out, width_out}, input.options());\n  }\n\n  // resize output\n  output = output.view({batch, channels_out, height_out, width_out}).zero_();\n  // resize temporary columns\n  columns =\n      at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out},\n                input.options());\n\n  output = output.view({output.size(0), group, output.size(1) / group,\n                        output.size(2), output.size(3)});\n\n  for (int b = 0; b < batch; b++) {\n    
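// Per sample: deformable im2col gathers mask-modulated, bilinearly sampled\n    // patches into 'columns'; the grouped addmm_ below is then a plain GEMM of\n    // the flattened weights against those columns, written into the output view.\n    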
modulated_deformable_im2col_cuda(\n        input[b], offset[b], mask[b], 1, channels, height, width, height_out,\n        width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,\n        dilation_h, dilation_w, deformable_group, columns);\n\n    // divide into group\n    weight = weight.view({group, weight.size(0) / group, weight.size(1),\n                          weight.size(2), weight.size(3)});\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n\n    for (int g = 0; g < group; g++) {\n      output[b][g] = output[b][g]\n                         .flatten(1)\n                         .addmm_(weight[g].flatten(1), columns[g])\n                         .view_as(output[b][g]);\n    }\n\n    weight = weight.view({weight.size(0) * weight.size(1), weight.size(2),\n                          weight.size(3), weight.size(4)});\n    columns =\n        columns.view({columns.size(0) * columns.size(1), columns.size(2)});\n  }\n\n  output = output.view({output.size(0), output.size(1) * output.size(2),\n                        output.size(3), output.size(4)});\n\n  if (with_bias) {\n    output += bias.view({1, bias.size(0), 1, 1});\n  }\n}\n\nvoid modulated_deform_conv_cuda_backward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor columns,\n    at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias,\n    at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output,\n    int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h,\n    int pad_w, int dilation_h, int dilation_w, int group, int deformable_group,\n    const bool with_bias) {\n  TORCH_CHECK(input.is_contiguous(), \"input tensor has to be contiguous\");\n  TORCH_CHECK(weight.is_contiguous(), \"weight tensor has to be contiguous\");\n  at::DeviceGuard guard(input.device());\n\n  const int batch = input.size(0);\n  const int channels = input.size(1);\n  const int height = input.size(2);\n  const int width = input.size(3);\n\n  const int channels_kernel = weight.size(1);\n  const int kernel_h_ = weight.size(2);\n  const int kernel_w_ = weight.size(3);\n  if (kernel_h_ != kernel_h || kernel_w_ != kernel_w)\n    AT_ERROR(\"Input shape and kernel shape wont match: (%d x %d vs %d x %d).\",\n             kernel_h_, kernel_w, kernel_h_, kernel_w_);\n  if (channels != channels_kernel * group)\n    AT_ERROR(\"Input shape and kernel channels wont match: (%d vs %d).\",\n             channels, channels_kernel * group);\n\n  const int height_out =\n      (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;\n  const int width_out =\n      (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;\n\n  if (ones.ndimension() != 2 ||\n      ones.size(0) * ones.size(1) < height_out * width_out) {\n    // Resize plane and fill with ones...\n    ones = at::ones({height_out, width_out}, input.options());\n  }\n\n  grad_input = grad_input.view({batch, channels, height, width});\n  columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out},\n                      input.options());\n\n  grad_output =\n      grad_output.view({grad_output.size(0), group, grad_output.size(1) / group,\n                        grad_output.size(2), grad_output.size(3)});\n\n  for (int b = 0; b < batch; b++) {\n    // divide int group\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n    weight = weight.view({group, weight.size(0) / group, weight.size(1),\n             
             weight.size(2), weight.size(3)});\n\n    for (int g = 0; g < group; g++) {\n      columns[g].addmm_(weight[g].flatten(1).transpose(0, 1),\n                        grad_output[b][g].flatten(1), 0.0f, 1.0f);\n    }\n\n    columns =\n        columns.view({columns.size(0) * columns.size(1), columns.size(2)});\n    weight = weight.view({weight.size(0) * weight.size(1), weight.size(2),\n                          weight.size(3), weight.size(4)});\n\n    // gradient w.r.t. input coordinate data\n    modulated_deformable_col2im_coord_cuda(\n        columns, input[b], offset[b], mask[b], 1, channels, height, width,\n        height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h,\n        stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b],\n        grad_mask[b]);\n    // gradient w.r.t. input data\n    modulated_deformable_col2im_cuda(\n        columns, offset[b], mask[b], 1, channels, height, width, height_out,\n        width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,\n        dilation_h, dilation_w, deformable_group, grad_input[b]);\n\n    // gradient w.r.t. weight, dWeight should accumulate across the batch and\n    // group\n    modulated_deformable_im2col_cuda(\n        input[b], offset[b], mask[b], 1, channels, height, width, height_out,\n        width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,\n        dilation_h, dilation_w, deformable_group, columns);\n\n    columns = columns.view({group, columns.size(0) / group, columns.size(1)});\n    grad_weight = grad_weight.view({group, grad_weight.size(0) / group,\n                                    grad_weight.size(1), grad_weight.size(2),\n                                    grad_weight.size(3)});\n    if (with_bias)\n      grad_bias = grad_bias.view({group, grad_bias.size(0) / group});\n\n    for (int g = 0; g < group; g++) {\n      grad_weight[g] =\n          grad_weight[g]\n              .flatten(1)\n              .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1))\n              .view_as(grad_weight[g]);\n      if (with_bias) {\n        grad_bias[g] =\n            grad_bias[g]\n                .view({-1, 1})\n                .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1}))\n                .view(-1);\n      }\n    }\n\n    columns =\n        columns.view({columns.size(0) * columns.size(1), columns.size(2)});\n    grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1),\n                                    grad_weight.size(2), grad_weight.size(3),\n                                    grad_weight.size(4)});\n    if (with_bias)\n      grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)});\n  }\n  grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1),\n                                  grad_output.size(2), grad_output.size(3),\n                                  grad_output.size(4)});\n}\n"
  },
  {
    "path": "basicsr/ops/dcn/src/deform_conv_cuda_kernel.cu",
    "content": "/*!\n ******************* BEGIN Caffe Copyright Notice and Disclaimer ****************\n *\n * COPYRIGHT\n *\n * All contributions by the University of California:\n * Copyright (c) 2014-2017 The Regents of the University of California (Regents)\n * All rights reserved.\n *\n * All other contributions:\n * Copyright (c) 2014-2017, the respective contributors\n * All rights reserved.\n *\n * Caffe uses a shared copyright model: each contributor holds copyright over\n * their contributions to Caffe. The project versioning records all such\n * contribution and copyright details. If a contributor wants to further mark\n * their specific copyright on a particular contribution, they should indicate\n * their copyright solely in the commit message of the change when it is\n * committed.\n *\n * LICENSE\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n * 1. Redistributions of source code must retain the above copyright notice, this\n * list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright notice,\n * this list of conditions and the following disclaimer in the documentation\n * and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\n * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * CONTRIBUTION AGREEMENT\n *\n * By contributing to the BVLC/caffe repository through pull-request, comment,\n * or otherwise, the contributor releases their content to the\n * license and copyright terms herein.\n *\n ***************** END Caffe Copyright Notice and Disclaimer ********************\n *\n * Copyright (c) 2018 Microsoft\n * Licensed under The MIT License [see LICENSE for details]\n * \\file modulated_deformable_im2col.cuh\n * \\brief Function definitions of converting an image to\n * column matrix based on kernel, padding, dilation, and offset.\n * These functions are mainly used in deformable convolution operators.\n * \\ref: https://arxiv.org/abs/1703.06211\n * \\author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu, Dazhi Cheng\n */\n\n// modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda_kernel.cu\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <THC/THCAtomics.cuh>\n#include <stdio.h>\n#include <math.h>\n#include <float.h>\n\nusing namespace at;\n\n#define CUDA_KERNEL_LOOP(i, n)                                 \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < (n); \\\n       i += blockDim.x * gridDim.x)\n\nconst int CUDA_NUM_THREADS = 1024;\nconst int kMaxGridNum = 65535;\n\ninline int GET_BLOCKS(const int N)\n{\n  return std::min(kMaxGridNum, (N + 
CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS);\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t deformable_im2col_bilinear(const scalar_t *bottom_data, const int data_width,\n                                               const int height, const int width, scalar_t h, scalar_t w)\n{\n\n  int h_low = floor(h);\n  int w_low = floor(w);\n  int h_high = h_low + 1;\n  int w_high = w_low + 1;\n\n  scalar_t lh = h - h_low;\n  scalar_t lw = w - w_low;\n  scalar_t hh = 1 - lh, hw = 1 - lw;\n\n  scalar_t v1 = 0;\n  if (h_low >= 0 && w_low >= 0)\n    v1 = bottom_data[h_low * data_width + w_low];\n  scalar_t v2 = 0;\n  if (h_low >= 0 && w_high <= width - 1)\n    v2 = bottom_data[h_low * data_width + w_high];\n  scalar_t v3 = 0;\n  if (h_high <= height - 1 && w_low >= 0)\n    v3 = bottom_data[h_high * data_width + w_low];\n  scalar_t v4 = 0;\n  if (h_high <= height - 1 && w_high <= width - 1)\n    v4 = bottom_data[h_high * data_width + w_high];\n\n  scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n\n  scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  return val;\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t get_gradient_weight(scalar_t argmax_h, scalar_t argmax_w,\n                                        const int h, const int w, const int height, const int width)\n{\n\n  if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)\n  {\n    //empty\n    return 0;\n  }\n\n  int argmax_h_low = floor(argmax_h);\n  int argmax_w_low = floor(argmax_w);\n  int argmax_h_high = argmax_h_low + 1;\n  int argmax_w_high = argmax_w_low + 1;\n\n  scalar_t weight = 0;\n  if (h == argmax_h_low && w == argmax_w_low)\n    weight = (h + 1 - argmax_h) * (w + 1 - argmax_w);\n  if (h == argmax_h_low && w == argmax_w_high)\n    weight = (h + 1 - argmax_h) * (argmax_w + 1 - w);\n  if (h == argmax_h_high && w == argmax_w_low)\n    weight = (argmax_h + 1 - h) * (w + 1 - argmax_w);\n  if (h == argmax_h_high && w == argmax_w_high)\n    weight = (argmax_h + 1 - h) * (argmax_w + 1 - w);\n  return weight;\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t get_coordinate_weight(scalar_t argmax_h, scalar_t argmax_w,\n                                          const int height, const int width, const scalar_t *im_data,\n                                          const int data_width, const int bp_dir)\n{\n\n  if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)\n  {\n    //empty\n    return 0;\n  }\n\n  int argmax_h_low = floor(argmax_h);\n  int argmax_w_low = floor(argmax_w);\n  int argmax_h_high = argmax_h_low + 1;\n  int argmax_w_high = argmax_w_low + 1;\n\n  scalar_t weight = 0;\n\n  if (bp_dir == 0)\n  {\n    if (argmax_h_low >= 0 && argmax_w_low >= 0)\n      weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low];\n    if (argmax_h_low >= 0 && argmax_w_high <= width - 1)\n      weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high];\n    if (argmax_h_high <= height - 1 && argmax_w_low >= 0)\n      weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low];\n    if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)\n      weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  }\n  else if (bp_dir == 1)\n  {\n    if (argmax_h_low >= 0 && argmax_w_low >= 0)\n      weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low];\n    if (argmax_h_low >= 
0 && argmax_w_high <= width - 1)\n      weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high];\n    if (argmax_h_high <= height - 1 && argmax_w_low >= 0)\n      weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low];\n    if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)\n      weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  }\n\n  return weight;\n}\n\ntemplate <typename scalar_t>\n__global__ void deformable_im2col_gpu_kernel(const int n, const scalar_t *data_im, const scalar_t *data_offset,\n                                             const int height, const int width, const int kernel_h, const int kernel_w,\n                                             const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n                                             const int dilation_h, const int dilation_w, const int channel_per_deformable_group,\n                                             const int batch_size, const int num_channels, const int deformable_group,\n                                             const int height_col, const int width_col,\n                                             scalar_t *data_col)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    // index index of output matrix\n    const int w_col = index % width_col;\n    const int h_col = (index / width_col) % height_col;\n    const int b_col = (index / width_col / height_col) % batch_size;\n    const int c_im = (index / width_col / height_col) / batch_size;\n    const int c_col = c_im * kernel_h * kernel_w;\n\n    // compute deformable group index\n    const int deformable_group_index = c_im / channel_per_deformable_group;\n\n    const int h_in = h_col * stride_h - pad_h;\n    const int w_in = w_col * stride_w - pad_w;\n    scalar_t *data_col_ptr = data_col + ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col;\n    //const scalar_t* data_im_ptr = data_im + ((b_col * num_channels + c_im) * height + h_in) * width + w_in;\n    const scalar_t *data_im_ptr = data_im + (b_col * num_channels + c_im) * height * width;\n    const scalar_t *data_offset_ptr = data_offset + (b_col * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;\n\n    for (int i = 0; i < kernel_h; ++i)\n    {\n      for (int j = 0; j < kernel_w; ++j)\n      {\n        const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col;\n        const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col;\n        const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n        const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n        scalar_t val = static_cast<scalar_t>(0);\n        const scalar_t h_im = h_in + i * dilation_h + offset_h;\n        const scalar_t w_im = w_in + j * dilation_w + offset_w;\n        if (h_im > -1 && w_im > -1 && h_im < height && w_im < width)\n        {\n          //const scalar_t map_h = i * dilation_h + offset_h;\n          //const scalar_t map_w = j * dilation_w + offset_w;\n          //const int cur_height = height - h_in;\n          //const int cur_width = width - w_in;\n          //val = deformable_im2col_bilinear(data_im_ptr, width, cur_height, cur_width, map_h, map_w);\n          val = deformable_im2col_bilinear(data_im_ptr, width, height, width, h_im, w_im);\n        }\n        *data_col_ptr = val;\n        data_col_ptr += 
batch_size * height_col * width_col;\n      }\n    }\n  }\n}\n\nvoid deformable_im2col(\n    const at::Tensor data_im, const at::Tensor data_offset, const int channels,\n    const int height, const int width, const int ksize_h, const int ksize_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w, const int parallel_imgs,\n    const int deformable_group, at::Tensor data_col)\n{\n  // num_axes should be smaller than block size\n  // todo: check parallel_imgs is correctly passed in\n  int height_col = (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1;\n  int num_kernels = channels * height_col * width_col * parallel_imgs;\n  int channel_per_deformable_group = channels / deformable_group;\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_im.scalar_type(), \"deformable_im2col_gpu\", ([&] {\n        const scalar_t *data_im_ = data_im.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = data_offset.data_ptr<scalar_t>();\n        scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n\n        deformable_im2col_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_im_, data_offset_, height, width, ksize_h, ksize_w,\n            pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w,\n            channel_per_deformable_group, parallel_imgs, channels, deformable_group,\n            height_col, width_col, data_col_);\n      }));\n\n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in deformable_im2col: %s\\n\", cudaGetErrorString(err));\n  }\n}\n\ntemplate <typename scalar_t>\n__global__ void deformable_col2im_gpu_kernel(\n    const int n, const scalar_t *data_col, const scalar_t *data_offset,\n    const int channels, const int height, const int width,\n    const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int channel_per_deformable_group,\n    const int batch_size, const int deformable_group,\n    const int height_col, const int width_col,\n    scalar_t *grad_im)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    const int j = (index / width_col / height_col / batch_size) % kernel_w;\n    const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h;\n    const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / channel_per_deformable_group;\n\n    int w_out = index % width_col;\n    int h_out = (index / width_col) % height_col;\n    int b = (index / width_col / height_col) % batch_size;\n    int w_in = w_out * stride_w - pad_w;\n    int h_in = h_out * stride_h - pad_h;\n\n    const scalar_t *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) *\n                                                        2 * kernel_h * kernel_w * height_col * width_col;\n    const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out;\n    const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out;\n    const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n    const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n    const 
scalar_t cur_inv_h_data = h_in + i * dilation_h + offset_h;\n    const scalar_t cur_inv_w_data = w_in + j * dilation_w + offset_w;\n\n    const scalar_t cur_top_grad = data_col[index];\n    const int cur_h = (int)cur_inv_h_data;\n    const int cur_w = (int)cur_inv_w_data;\n    for (int dy = -2; dy <= 2; dy++)\n    {\n      for (int dx = -2; dx <= 2; dx++)\n      {\n        if (cur_h + dy >= 0 && cur_h + dy < height &&\n            cur_w + dx >= 0 && cur_w + dx < width &&\n            abs(cur_inv_h_data - (cur_h + dy)) < 1 &&\n            abs(cur_inv_w_data - (cur_w + dx)) < 1)\n        {\n          int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx;\n          scalar_t weight = get_gradient_weight(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width);\n          atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad);\n        }\n      }\n    }\n  }\n}\n\nvoid deformable_col2im(\n    const at::Tensor data_col, const at::Tensor data_offset, const int channels,\n    const int height, const int width, const int ksize_h,\n    const int ksize_w, const int pad_h, const int pad_w,\n    const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int parallel_imgs, const int deformable_group,\n    at::Tensor grad_im)\n{\n\n  // todo: make sure parallel_imgs is passed in correctly\n  int height_col = (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1;\n  int num_kernels = channels * ksize_h * ksize_w * height_col * width_col * parallel_imgs;\n  int channel_per_deformable_group = channels / deformable_group;\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_col.scalar_type(), \"deformable_col2im_gpu\", ([&] {\n        const scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = data_offset.data_ptr<scalar_t>();\n        scalar_t *grad_im_ = grad_im.data_ptr<scalar_t>();\n\n        deformable_col2im_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_col_, data_offset_, channels, height, width, ksize_h,\n            ksize_w, pad_h, pad_w, stride_h, stride_w,\n            dilation_h, dilation_w, channel_per_deformable_group,\n            parallel_imgs, deformable_group, height_col, width_col, grad_im_);\n      }));\n\n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in deformable_col2im: %s\\n\", cudaGetErrorString(err));\n  }\n}\n\ntemplate <typename scalar_t>\n__global__ void deformable_col2im_coord_gpu_kernel(const int n, const scalar_t *data_col,\n                                                   const scalar_t *data_im, const scalar_t *data_offset,\n                                                   const int channels, const int height, const int width,\n                                                   const int kernel_h, const int kernel_w,\n                                                   const int pad_h, const int pad_w,\n                                                   const int stride_h, const int stride_w,\n                                                   const int dilation_h, const int dilation_w,\n                                                   const int channel_per_deformable_group,\n                                                   const int batch_size, const int offset_channels, const int 
deformable_group,\n                                                   const int height_col, const int width_col, scalar_t *grad_offset)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    scalar_t val = 0;\n    int w = index % width_col;\n    int h = (index / width_col) % height_col;\n    int c = (index / width_col / height_col) % offset_channels;\n    int b = (index / width_col / height_col) / offset_channels;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / (2 * kernel_h * kernel_w);\n    const int col_step = kernel_h * kernel_w;\n    int cnt = 0;\n    const scalar_t *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group *\n                                                  batch_size * width_col * height_col;\n    const scalar_t *data_im_ptr = data_im + (b * deformable_group + deformable_group_index) *\n                                                channel_per_deformable_group / kernel_h / kernel_w * height * width;\n    const scalar_t *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 *\n                                                        kernel_h * kernel_w * height_col * width_col;\n\n    const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w;\n\n    for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step)\n    {\n      const int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w;\n      const int bp_dir = offset_c % 2;\n\n      int j = (col_pos / width_col / height_col / batch_size) % kernel_w;\n      int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h;\n      int w_out = col_pos % width_col;\n      int h_out = (col_pos / width_col) % height_col;\n      int w_in = w_out * stride_w - pad_w;\n      int h_in = h_out * stride_h - pad_h;\n      const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out);\n      const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out);\n      const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n      const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n      scalar_t inv_h = h_in + i * dilation_h + offset_h;\n      scalar_t inv_w = w_in + j * dilation_w + offset_w;\n      if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width)\n      {\n        inv_h = inv_w = -2;\n      }\n      const scalar_t weight = get_coordinate_weight(\n          inv_h, inv_w,\n          height, width, data_im_ptr + cnt * height * width, width, bp_dir);\n      val += weight * data_col_ptr[col_pos];\n      cnt += 1;\n    }\n\n    grad_offset[index] = val;\n  }\n}\n\nvoid deformable_col2im_coord(\n    const at::Tensor data_col, const at::Tensor data_im, const at::Tensor data_offset,\n    const int channels, const int height, const int width, const int ksize_h,\n    const int ksize_w, const int pad_h, const int pad_w, const int stride_h,\n    const int stride_w, const int dilation_h, const int dilation_w,\n    const int parallel_imgs, const int deformable_group, at::Tensor grad_offset)\n{\n\n  int height_col = (height + 2 * pad_h - (dilation_h * (ksize_h - 1) + 1)) / stride_h + 1;\n  int width_col = (width + 2 * pad_w - (dilation_w * (ksize_w - 1) + 1)) / stride_w + 1;\n  int num_kernels = height_col * width_col * 2 * ksize_h * ksize_w * deformable_group * parallel_imgs;\n  int channel_per_deformable_group = channels * ksize_h * ksize_w / deformable_group;\n\n  
AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_col.scalar_type(), \"deformable_col2im_coord_gpu\", ([&] {\n        const scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n        const scalar_t *data_im_ = data_im.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = data_offset.data_ptr<scalar_t>();\n        scalar_t *grad_offset_ = grad_offset.data_ptr<scalar_t>();\n\n        deformable_col2im_coord_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_col_, data_im_, data_offset_, channels, height, width,\n            ksize_h, ksize_w, pad_h, pad_w, stride_h, stride_w,\n            dilation_h, dilation_w, channel_per_deformable_group,\n            parallel_imgs, 2 * ksize_h * ksize_w * deformable_group, deformable_group,\n            height_col, width_col, grad_offset_);\n      }));\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t dmcn_im2col_bilinear(const scalar_t *bottom_data, const int data_width,\n                                         const int height, const int width, scalar_t h, scalar_t w)\n{\n  int h_low = floor(h);\n  int w_low = floor(w);\n  int h_high = h_low + 1;\n  int w_high = w_low + 1;\n\n  scalar_t lh = h - h_low;\n  scalar_t lw = w - w_low;\n  scalar_t hh = 1 - lh, hw = 1 - lw;\n\n  scalar_t v1 = 0;\n  if (h_low >= 0 && w_low >= 0)\n    v1 = bottom_data[h_low * data_width + w_low];\n  scalar_t v2 = 0;\n  if (h_low >= 0 && w_high <= width - 1)\n    v2 = bottom_data[h_low * data_width + w_high];\n  scalar_t v3 = 0;\n  if (h_high <= height - 1 && w_low >= 0)\n    v3 = bottom_data[h_high * data_width + w_low];\n  scalar_t v4 = 0;\n  if (h_high <= height - 1 && w_high <= width - 1)\n    v4 = bottom_data[h_high * data_width + w_high];\n\n  scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;\n\n  scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n  return val;\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t dmcn_get_gradient_weight(scalar_t argmax_h, scalar_t argmax_w,\n                                             const int h, const int w, const int height, const int width)\n{\n  if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)\n  {\n    //empty\n    return 0;\n  }\n\n  int argmax_h_low = floor(argmax_h);\n  int argmax_w_low = floor(argmax_w);\n  int argmax_h_high = argmax_h_low + 1;\n  int argmax_w_high = argmax_w_low + 1;\n\n  scalar_t weight = 0;\n  if (h == argmax_h_low && w == argmax_w_low)\n    weight = (h + 1 - argmax_h) * (w + 1 - argmax_w);\n  if (h == argmax_h_low && w == argmax_w_high)\n    weight = (h + 1 - argmax_h) * (argmax_w + 1 - w);\n  if (h == argmax_h_high && w == argmax_w_low)\n    weight = (argmax_h + 1 - h) * (w + 1 - argmax_w);\n  if (h == argmax_h_high && w == argmax_w_high)\n    weight = (argmax_h + 1 - h) * (argmax_w + 1 - w);\n  return weight;\n}\n\ntemplate <typename scalar_t>\n__device__ scalar_t dmcn_get_coordinate_weight(scalar_t argmax_h, scalar_t argmax_w,\n                                               const int height, const int width, const scalar_t *im_data,\n                                               const int data_width, const int bp_dir)\n{\n  if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width)\n  {\n    //empty\n    return 0;\n  }\n\n  int argmax_h_low = floor(argmax_h);\n  int argmax_w_low = floor(argmax_w);\n  int argmax_h_high = argmax_h_low + 1;\n  int argmax_w_high = argmax_w_low + 1;\n\n  scalar_t weight = 0;\n\n  if (bp_dir == 0)\n  {\n    
if (argmax_h_low >= 0 && argmax_w_low >= 0)\n      weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low];\n    if (argmax_h_low >= 0 && argmax_w_high <= width - 1)\n      weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high];\n    if (argmax_h_high <= height - 1 && argmax_w_low >= 0)\n      weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low];\n    if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)\n      weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  }\n  else if (bp_dir == 1)\n  {\n    if (argmax_h_low >= 0 && argmax_w_low >= 0)\n      weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low];\n    if (argmax_h_low >= 0 && argmax_w_high <= width - 1)\n      weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high];\n    if (argmax_h_high <= height - 1 && argmax_w_low >= 0)\n      weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low];\n    if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1)\n      weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high];\n  }\n\n  return weight;\n}\n\ntemplate <typename scalar_t>\n__global__ void modulated_deformable_im2col_gpu_kernel(const int n,\n                                                       const scalar_t *data_im, const scalar_t *data_offset, const scalar_t *data_mask,\n                                                       const int height, const int width, const int kernel_h, const int kernel_w,\n                                                       const int pad_h, const int pad_w,\n                                                       const int stride_h, const int stride_w,\n                                                       const int dilation_h, const int dilation_w,\n                                                       const int channel_per_deformable_group,\n                                                       const int batch_size, const int num_channels, const int deformable_group,\n                                                       const int height_col, const int width_col,\n                                                       scalar_t *data_col)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    // index index of output matrix\n    const int w_col = index % width_col;\n    const int h_col = (index / width_col) % height_col;\n    const int b_col = (index / width_col / height_col) % batch_size;\n    const int c_im = (index / width_col / height_col) / batch_size;\n    const int c_col = c_im * kernel_h * kernel_w;\n\n    // compute deformable group index\n    const int deformable_group_index = c_im / channel_per_deformable_group;\n\n    const int h_in = h_col * stride_h - pad_h;\n    const int w_in = w_col * stride_w - pad_w;\n\n    scalar_t *data_col_ptr = data_col + ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col;\n    //const float* data_im_ptr = data_im + ((b_col * num_channels + c_im) * height + h_in) * width + w_in;\n    const scalar_t *data_im_ptr = data_im + (b_col * num_channels + c_im) * height * width;\n    const scalar_t *data_offset_ptr = data_offset + (b_col * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;\n\n    const scalar_t *data_mask_ptr = data_mask + (b_col * deformable_group + deformable_group_index) * 
kernel_h * kernel_w * height_col * width_col;\n\n    for (int i = 0; i < kernel_h; ++i)\n    {\n      for (int j = 0; j < kernel_w; ++j)\n      {\n        const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col;\n        const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col;\n        const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_col) * width_col + w_col;\n        const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n        const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n        const scalar_t mask = data_mask_ptr[data_mask_hw_ptr];\n        scalar_t val = static_cast<scalar_t>(0);\n        const scalar_t h_im = h_in + i * dilation_h + offset_h;\n        const scalar_t w_im = w_in + j * dilation_w + offset_w;\n        //if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) {\n        if (h_im > -1 && w_im > -1 && h_im < height && w_im < width)\n        {\n          //const float map_h = i * dilation_h + offset_h;\n          //const float map_w = j * dilation_w + offset_w;\n          //const int cur_height = height - h_in;\n          //const int cur_width = width - w_in;\n          //val = dmcn_im2col_bilinear(data_im_ptr, width, cur_height, cur_width, map_h, map_w);\n          val = dmcn_im2col_bilinear(data_im_ptr, width, height, width, h_im, w_im);\n        }\n        *data_col_ptr = val * mask;\n        data_col_ptr += batch_size * height_col * width_col;\n        //data_col_ptr += height_col * width_col;\n      }\n    }\n  }\n}\n\ntemplate <typename scalar_t>\n__global__ void modulated_deformable_col2im_gpu_kernel(const int n,\n                                                       const scalar_t *data_col, const scalar_t *data_offset, const scalar_t *data_mask,\n                                                       const int channels, const int height, const int width,\n                                                       const int kernel_h, const int kernel_w,\n                                                       const int pad_h, const int pad_w,\n                                                       const int stride_h, const int stride_w,\n                                                       const int dilation_h, const int dilation_w,\n                                                       const int channel_per_deformable_group,\n                                                       const int batch_size, const int deformable_group,\n                                                       const int height_col, const int width_col,\n                                                       scalar_t *grad_im)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    const int j = (index / width_col / height_col / batch_size) % kernel_w;\n    const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h;\n    const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / channel_per_deformable_group;\n\n    int w_out = index % width_col;\n    int h_out = (index / width_col) % height_col;\n    int b = (index / width_col / height_col) % batch_size;\n    int w_in = w_out * stride_w - pad_w;\n    int h_in = h_out * stride_h - pad_h;\n\n    const scalar_t *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;\n    const scalar_t *data_mask_ptr = data_mask + 
(b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col;\n    const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out;\n    const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out;\n    const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_out) * width_col + w_out;\n    const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n    const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n    const scalar_t mask = data_mask_ptr[data_mask_hw_ptr];\n    const scalar_t cur_inv_h_data = h_in + i * dilation_h + offset_h;\n    const scalar_t cur_inv_w_data = w_in + j * dilation_w + offset_w;\n\n    const scalar_t cur_top_grad = data_col[index] * mask;\n    const int cur_h = (int)cur_inv_h_data;\n    const int cur_w = (int)cur_inv_w_data;\n    for (int dy = -2; dy <= 2; dy++)\n    {\n      for (int dx = -2; dx <= 2; dx++)\n      {\n        if (cur_h + dy >= 0 && cur_h + dy < height &&\n            cur_w + dx >= 0 && cur_w + dx < width &&\n            abs(cur_inv_h_data - (cur_h + dy)) < 1 &&\n            abs(cur_inv_w_data - (cur_w + dx)) < 1)\n        {\n          int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx;\n          scalar_t weight = dmcn_get_gradient_weight(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width);\n          atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad);\n        }\n      }\n    }\n  }\n}\n\ntemplate <typename scalar_t>\n__global__ void modulated_deformable_col2im_coord_gpu_kernel(const int n,\n                                                             const scalar_t *data_col, const scalar_t *data_im,\n                                                             const scalar_t *data_offset, const scalar_t *data_mask,\n                                                             const int channels, const int height, const int width,\n                                                             const int kernel_h, const int kernel_w,\n                                                             const int pad_h, const int pad_w,\n                                                             const int stride_h, const int stride_w,\n                                                             const int dilation_h, const int dilation_w,\n                                                             const int channel_per_deformable_group,\n                                                             const int batch_size, const int offset_channels, const int deformable_group,\n                                                             const int height_col, const int width_col,\n                                                             scalar_t *grad_offset, scalar_t *grad_mask)\n{\n  CUDA_KERNEL_LOOP(index, n)\n  {\n    scalar_t val = 0, mval = 0;\n    int w = index % width_col;\n    int h = (index / width_col) % height_col;\n    int c = (index / width_col / height_col) % offset_channels;\n    int b = (index / width_col / height_col) / offset_channels;\n    // compute the start and end of the output\n\n    const int deformable_group_index = c / (2 * kernel_h * kernel_w);\n    const int col_step = kernel_h * kernel_w;\n    int cnt = 0;\n    const scalar_t *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * batch_size * width_col * height_col;\n    const scalar_t *data_im_ptr = data_im + (b * deformable_group + 
deformable_group_index) * channel_per_deformable_group / kernel_h / kernel_w * height * width;\n    const scalar_t *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col;\n    const scalar_t *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col;\n\n    const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w;\n\n    for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step)\n    {\n      const int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w;\n      const int bp_dir = offset_c % 2;\n\n      int j = (col_pos / width_col / height_col / batch_size) % kernel_w;\n      int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h;\n      int w_out = col_pos % width_col;\n      int h_out = (col_pos / width_col) % height_col;\n      int w_in = w_out * stride_w - pad_w;\n      int h_in = h_out * stride_h - pad_h;\n      const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out);\n      const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out);\n      const int data_mask_hw_ptr = (((i * kernel_w + j) * height_col + h_out) * width_col + w_out);\n      const scalar_t offset_h = data_offset_ptr[data_offset_h_ptr];\n      const scalar_t offset_w = data_offset_ptr[data_offset_w_ptr];\n      const scalar_t mask = data_mask_ptr[data_mask_hw_ptr];\n      scalar_t inv_h = h_in + i * dilation_h + offset_h;\n      scalar_t inv_w = w_in + j * dilation_w + offset_w;\n      if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width)\n      {\n        inv_h = inv_w = -2;\n      }\n      else\n      {\n        mval += data_col_ptr[col_pos] * dmcn_im2col_bilinear(data_im_ptr + cnt * height * width, width, height, width, inv_h, inv_w);\n      }\n      const scalar_t weight = dmcn_get_coordinate_weight(\n          inv_h, inv_w,\n          height, width, data_im_ptr + cnt * height * width, width, bp_dir);\n      val += weight * data_col_ptr[col_pos] * mask;\n      cnt += 1;\n    }\n    // KERNEL_ASSIGN(grad_offset[index], offset_req, val);\n    grad_offset[index] = val;\n    if (offset_c % 2 == 0)\n      // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w], mask_req, mval);\n      grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w] = mval;\n  }\n}\n\nvoid modulated_deformable_im2col_cuda(\n    const at::Tensor data_im, const at::Tensor data_offset, const at::Tensor data_mask,\n    const int batch_size, const int channels, const int height_im, const int width_im,\n    const int height_col, const int width_col, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int deformable_group, at::Tensor data_col)\n{\n  // num_axes should be smaller than block size\n  const int channel_per_deformable_group = channels / deformable_group;\n  const int num_kernels = channels * batch_size * height_col * width_col;\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_im.scalar_type(), \"modulated_deformable_im2col_gpu\", ([&] {\n        const scalar_t *data_im_ = data_im.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = data_offset.data_ptr<scalar_t>();\n        const scalar_t *data_mask_ = data_mask.data_ptr<scalar_t>();\n        scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n\n        modulated_deformable_im2col_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_im_, data_offset_, data_mask_, height_im, width_im, kernel_h, kernel_w,\n            pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, channel_per_deformable_group,\n            batch_size, channels, deformable_group, height_col, width_col, data_col_);\n      }));\n\n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in modulated_deformable_im2col_cuda: %s\\n\", cudaGetErrorString(err));\n  }\n}\n\nvoid modulated_deformable_col2im_cuda(\n    const at::Tensor data_col, const at::Tensor data_offset, const at::Tensor data_mask,\n    const int batch_size, const int channels, const int height_im, const int width_im,\n    const int height_col, const int width_col, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int deformable_group, at::Tensor grad_im)\n{\n\n  const int channel_per_deformable_group = channels / deformable_group;\n  const int num_kernels = channels * kernel_h * kernel_w * batch_size * height_col * width_col;\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_col.scalar_type(), \"modulated_deformable_col2im_gpu\", ([&] {\n        const scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = data_offset.data_ptr<scalar_t>();\n        const scalar_t *data_mask_ = data_mask.data_ptr<scalar_t>();\n        scalar_t *grad_im_ = grad_im.data_ptr<scalar_t>();\n\n        modulated_deformable_col2im_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_col_, data_offset_, data_mask_, channels, height_im, width_im,\n            kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,\n            dilation_h, dilation_w, channel_per_deformable_group,\n            batch_size, deformable_group, height_col, width_col, grad_im_);\n      }));\n\n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in modulated_deformable_col2im_cuda: %s\\n\", cudaGetErrorString(err));\n  }\n}\n\nvoid modulated_deformable_col2im_coord_cuda(\n    const at::Tensor data_col, const at::Tensor data_im, const at::Tensor data_offset, const at::Tensor data_mask,\n    const int batch_size, const int channels, const int height_im, const int width_im,\n    const int height_col, const int width_col, const int kernel_h, const int kernel_w,\n    const int pad_h, const int pad_w, const int stride_h, const int stride_w,\n    const int dilation_h, const int dilation_w,\n    const int deformable_group,\n    at::Tensor grad_offset, at::Tensor grad_mask)\n{\n  const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * kernel_w * deformable_group;\n  const int channel_per_deformable_group = channels * kernel_h * kernel_w / deformable_group;\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(\n      data_col.scalar_type(), \"modulated_deformable_col2im_coord_gpu\", ([&] {\n        const scalar_t *data_col_ = data_col.data_ptr<scalar_t>();\n        const scalar_t *data_im_ = data_im.data_ptr<scalar_t>();\n        const scalar_t *data_offset_ = 
data_offset.data_ptr<scalar_t>();\n        const scalar_t *data_mask_ = data_mask.data_ptr<scalar_t>();\n        scalar_t *grad_offset_ = grad_offset.data_ptr<scalar_t>();\n        scalar_t *grad_mask_ = grad_mask.data_ptr<scalar_t>();\n\n        modulated_deformable_col2im_coord_gpu_kernel<<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, at::cuda::getCurrentCUDAStream()>>>(\n            num_kernels, data_col_, data_im_, data_offset_, data_mask_, channels, height_im, width_im,\n            kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,\n            dilation_h, dilation_w, channel_per_deformable_group,\n            batch_size, 2 * kernel_h * kernel_w * deformable_group, deformable_group, height_col, width_col,\n            grad_offset_, grad_mask_);\n      }));\n  cudaError_t err = cudaGetLastError();\n  if (err != cudaSuccess)\n  {\n    printf(\"error in modulated_deformable_col2im_coord_cuda: %s\\n\", cudaGetErrorString(err));\n  }\n}\n"
  },
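  {
    "path": "basicsr/ops/dcn/examples/im2col_shape_sketch.py",
    "content": "# Hypothetical illustration -- this example file is not part of the original repo.\n# It mirrors, in plain Python, the column-buffer geometry that deformable_im2col\n# in src/deform_conv_cuda_kernel.cu computes, so the CUDA indexing is easier to follow.\n\n\ndef conv_out_size(size, pad, dilation, ksize, stride):\n    # same formula as the CUDA host code:\n    # (size + 2 * pad - (dilation * (ksize - 1) + 1)) / stride + 1\n    return (size + 2 * pad - (dilation * (ksize - 1) + 1)) // stride + 1\n\n\nif __name__ == '__main__':\n    # a 3x3 kernel with pad=1, stride=1, dilation=1 preserves spatial size\n    height_col = conv_out_size(64, pad=1, dilation=1, ksize=3, stride=1)\n    width_col = conv_out_size(64, pad=1, dilation=1, ksize=3, stride=1)\n    print(height_col, width_col)  # -> 64 64\n    # deformable_im2col_gpu_kernel then fills data_col laid out as\n    # [channels * ksize_h * ksize_w, parallel_imgs, height_col, width_col]\n"
  },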
  {
    "path": "basicsr/ops/dcn/src/deform_conv_ext.cpp",
    "content": "// modify from\n// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c\n\n#include <torch/extension.h>\n#include <ATen/DeviceGuard.h>\n\n#include <cmath>\n#include <vector>\n\n#define WITH_CUDA  // always use cuda\n#ifdef WITH_CUDA\nint deform_conv_forward_cuda(at::Tensor input, at::Tensor weight,\n                             at::Tensor offset, at::Tensor output,\n                             at::Tensor columns, at::Tensor ones, int kW,\n                             int kH, int dW, int dH, int padW, int padH,\n                             int dilationW, int dilationH, int group,\n                             int deformable_group, int im2col_step);\n\nint deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset,\n                                    at::Tensor gradOutput, at::Tensor gradInput,\n                                    at::Tensor gradOffset, at::Tensor weight,\n                                    at::Tensor columns, int kW, int kH, int dW,\n                                    int dH, int padW, int padH, int dilationW,\n                                    int dilationH, int group,\n                                    int deformable_group, int im2col_step);\n\nint deform_conv_backward_parameters_cuda(\n    at::Tensor input, at::Tensor offset, at::Tensor gradOutput,\n    at::Tensor gradWeight,  // at::Tensor gradBias,\n    at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH,\n    int padW, int padH, int dilationW, int dilationH, int group,\n    int deformable_group, float scale, int im2col_step);\n\nvoid modulated_deform_conv_cuda_forward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns,\n    int kernel_h, int kernel_w, const int stride_h, const int stride_w,\n    const int pad_h, const int pad_w, const int dilation_h,\n    const int dilation_w, const int group, const int deformable_group,\n    const bool with_bias);\n\nvoid modulated_deform_conv_cuda_backward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor columns,\n    at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias,\n    at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output,\n    int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h,\n    int pad_w, int dilation_h, int dilation_w, int group, int deformable_group,\n    const bool with_bias);\n#endif\n\nint deform_conv_forward(at::Tensor input, at::Tensor weight,\n                             at::Tensor offset, at::Tensor output,\n                             at::Tensor columns, at::Tensor ones, int kW,\n                             int kH, int dW, int dH, int padW, int padH,\n                             int dilationW, int dilationH, int group,\n                             int deformable_group, int im2col_step) {\n  if (input.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return deform_conv_forward_cuda(input, weight, offset, output, columns,\n        ones, kW, kH, dW, dH, padW, padH, dilationW, dilationH, group,\n        deformable_group, im2col_step);\n#else\n    AT_ERROR(\"deform conv is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"deform conv is not implemented on CPU\");\n}\n\nint deform_conv_backward_input(at::Tensor input, at::Tensor offset,\n                                    at::Tensor gradOutput, at::Tensor 
gradInput,\n                                    at::Tensor gradOffset, at::Tensor weight,\n                                    at::Tensor columns, int kW, int kH, int dW,\n                                    int dH, int padW, int padH, int dilationW,\n                                    int dilationH, int group,\n                                    int deformable_group, int im2col_step) {\n  if (input.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return deform_conv_backward_input_cuda(input, offset, gradOutput,\n        gradInput, gradOffset, weight, columns, kW, kH, dW, dH, padW, padH,\n        dilationW, dilationH, group, deformable_group, im2col_step);\n#else\n    AT_ERROR(\"deform conv is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"deform conv is not implemented on CPU\");\n}\n\nint deform_conv_backward_parameters(\n    at::Tensor input, at::Tensor offset, at::Tensor gradOutput,\n    at::Tensor gradWeight,  // at::Tensor gradBias,\n    at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH,\n    int padW, int padH, int dilationW, int dilationH, int group,\n    int deformable_group, float scale, int im2col_step) {\n  if (input.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return deform_conv_backward_parameters_cuda(input, offset, gradOutput,\n        gradWeight, columns, ones, kW, kH, dW, dH, padW, padH, dilationW,\n        dilationH, group, deformable_group, scale, im2col_step);\n#else\n    AT_ERROR(\"deform conv is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"deform conv is not implemented on CPU\");\n}\n\nvoid modulated_deform_conv_forward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns,\n    int kernel_h, int kernel_w, const int stride_h, const int stride_w,\n    const int pad_h, const int pad_w, const int dilation_h,\n    const int dilation_w, const int group, const int deformable_group,\n    const bool with_bias) {\n  if (input.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return modulated_deform_conv_cuda_forward(input, weight, bias, ones,\n        offset, mask, output, columns, kernel_h, kernel_w, stride_h,\n        stride_w, pad_h, pad_w, dilation_h, dilation_w, group,\n        deformable_group, with_bias);\n#else\n    AT_ERROR(\"modulated deform conv is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"modulated deform conv is not implemented on CPU\");\n}\n\nvoid modulated_deform_conv_backward(\n    at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,\n    at::Tensor offset, at::Tensor mask, at::Tensor columns,\n    at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias,\n    at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output,\n    int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h,\n    int pad_w, int dilation_h, int dilation_w, int group, int deformable_group,\n    const bool with_bias) {\n  if (input.device().is_cuda()) {\n#ifdef WITH_CUDA\n    return modulated_deform_conv_cuda_backward(input, weight, bias, ones,\n        offset, mask, columns, grad_input, grad_weight, grad_bias, grad_offset,\n        grad_mask, grad_output, kernel_h, kernel_w, stride_h, stride_w,\n        pad_h, pad_w, dilation_h, dilation_w, group, deformable_group,\n        with_bias);\n#else\n    AT_ERROR(\"modulated deform conv is not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"modulated deform conv is not implemented on 
CPU\");\n}\n\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"deform_conv_forward\", &deform_conv_forward,\n        \"deform forward\");\n  m.def(\"deform_conv_backward_input\", &deform_conv_backward_input,\n        \"deform_conv_backward_input\");\n  m.def(\"deform_conv_backward_parameters\",\n        &deform_conv_backward_parameters,\n        \"deform_conv_backward_parameters\");\n  m.def(\"modulated_deform_conv_forward\",\n        &modulated_deform_conv_forward,\n        \"modulated deform conv forward\");\n  m.def(\"modulated_deform_conv_backward\",\n        &modulated_deform_conv_backward,\n        \"modulated deform conv backward\");\n}\n"
  },
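  {
    "path": "basicsr/ops/dcn/examples/dcn_usage_sketch.py",
    "content": "# Hypothetical usage sketch -- this example file is not part of the original repo.\n# It assumes the BasicSR-style Python wrapper ModulatedDeformConvPack is exported\n# from basicsr.ops.dcn and that the deform_conv_ext extension was built with CUDA.\nimport torch\n\nfrom basicsr.ops.dcn import ModulatedDeformConvPack\n\nif __name__ == '__main__':\n    # the pack predicts offsets and modulation masks with an internal conv layer,\n    # then calls the modulated deformable convolution bound in deform_conv_ext.cpp\n    dcn = ModulatedDeformConvPack(64, 64, kernel_size=3, padding=1, deformable_groups=1).cuda()\n    x = torch.randn(2, 64, 32, 32, device='cuda')\n    y = dcn(x)\n    print(y.shape)  # torch.Size([2, 64, 32, 32])\n"
  },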
  {
    "path": "basicsr/ops/fused_act/__init__.py",
    "content": "from .fused_act import FusedLeakyReLU, fused_leaky_relu\n\n__all__ = ['FusedLeakyReLU', 'fused_leaky_relu']\n"
  },
  {
    "path": "basicsr/ops/fused_act/fused_act.py",
    "content": "# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501\n\nimport torch\nfrom torch import nn\nfrom torch.autograd import Function\n\ntry:\n    from . import fused_act_ext\nexcept ImportError:\n    import os\n    BASICSR_JIT = os.getenv('BASICSR_JIT')\n    if BASICSR_JIT == 'True':\n        from torch.utils.cpp_extension import load\n        module_path = os.path.dirname(__file__)\n        fused_act_ext = load(\n            'fused',\n            sources=[\n                os.path.join(module_path, 'src', 'fused_bias_act.cpp'),\n                os.path.join(module_path, 'src', 'fused_bias_act_kernel.cu'),\n            ],\n        )\n\n\nclass FusedLeakyReLUFunctionBackward(Function):\n\n    @staticmethod\n    def forward(ctx, grad_output, out, negative_slope, scale):\n        ctx.save_for_backward(out)\n        ctx.negative_slope = negative_slope\n        ctx.scale = scale\n\n        empty = grad_output.new_empty(0)\n\n        grad_input = fused_act_ext.fused_bias_act(grad_output, empty, out, 3, 1, negative_slope, scale)\n\n        dim = [0]\n\n        if grad_input.ndim > 2:\n            dim += list(range(2, grad_input.ndim))\n\n        grad_bias = grad_input.sum(dim).detach()\n\n        return grad_input, grad_bias\n\n    @staticmethod\n    def backward(ctx, gradgrad_input, gradgrad_bias):\n        out, = ctx.saved_tensors\n        gradgrad_out = fused_act_ext.fused_bias_act(gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope,\n                                                    ctx.scale)\n\n        return gradgrad_out, None, None, None\n\n\nclass FusedLeakyReLUFunction(Function):\n\n    @staticmethod\n    def forward(ctx, input, bias, negative_slope, scale):\n        empty = input.new_empty(0)\n        out = fused_act_ext.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)\n        ctx.save_for_backward(out)\n        ctx.negative_slope = negative_slope\n        ctx.scale = scale\n\n        return out\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        out, = ctx.saved_tensors\n\n        grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(grad_output, out, ctx.negative_slope, ctx.scale)\n\n        return grad_input, grad_bias, None, None\n\n\nclass FusedLeakyReLU(nn.Module):\n\n    def __init__(self, channel, negative_slope=0.2, scale=2**0.5):\n        super().__init__()\n\n        self.bias = nn.Parameter(torch.zeros(channel))\n        self.negative_slope = negative_slope\n        self.scale = scale\n\n    def forward(self, input):\n        return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)\n\n\ndef fused_leaky_relu(input, bias, negative_slope=0.2, scale=2**0.5):\n    return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)\n"
  },
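  {
    "path": "basicsr/ops/fused_act/examples/fused_act_usage_sketch.py",
    "content": "# Hypothetical usage sketch -- this example file is not part of the original repo.\n# It exercises FusedLeakyReLU / fused_leaky_relu from fused_act.py; the compiled\n# fused_act_ext CUDA extension must be importable (or set BASICSR_JIT='True' to\n# JIT-compile it on first import).\nimport torch\n\nfrom basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu\n\nif __name__ == '__main__':\n    act = FusedLeakyReLU(channel=64).cuda()  # learnable per-channel bias\n    x = torch.randn(2, 64, 16, 16, device='cuda')\n    y = act(x)  # bias add + LeakyReLU(negative_slope=0.2) + scale by sqrt(2), fused\n    # functional form with an explicit bias tensor\n    y2 = fused_leaky_relu(x, torch.zeros(64, device='cuda'))\n    print(y.shape, y2.shape)\n"
  },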
  {
    "path": "basicsr/ops/fused_act/src/fused_bias_act.cpp",
    "content": "// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act.cpp\n#include <torch/extension.h>\n\n\ntorch::Tensor fused_bias_act_op(const torch::Tensor& input,\n                                const torch::Tensor& bias,\n                                const torch::Tensor& refer,\n                                int act, int grad, float alpha, float scale);\n\n#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x \" must be a CUDA tensor\")\n#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x \" must be contiguous\")\n#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)\n\ntorch::Tensor fused_bias_act(const torch::Tensor& input,\n                             const torch::Tensor& bias,\n                             const torch::Tensor& refer,\n                             int act, int grad, float alpha, float scale) {\n    CHECK_CUDA(input);\n    CHECK_CUDA(bias);\n\n    return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale);\n}\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"fused_bias_act\", &fused_bias_act, \"fused bias act (CUDA)\");\n}\n"
  },
  {
    "path": "basicsr/ops/fused_act/src/fused_bias_act_kernel.cu",
    "content": "// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act_kernel.cu\n// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#include <torch/types.h>\n\n#include <ATen/ATen.h>\n#include <ATen/AccumulateType.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n\ntemplate <typename scalar_t>\nstatic __global__ void fused_bias_act_kernel(scalar_t* out, const scalar_t* p_x, const scalar_t* p_b, const scalar_t* p_ref,\n    int act, int grad, scalar_t alpha, scalar_t scale, int loop_x, int size_x, int step_b, int size_b, int use_bias, int use_ref) {\n    int xi = blockIdx.x * loop_x * blockDim.x + threadIdx.x;\n\n    scalar_t zero = 0.0;\n\n    for (int loop_idx = 0; loop_idx < loop_x && xi < size_x; loop_idx++, xi += blockDim.x) {\n        scalar_t x = p_x[xi];\n\n        if (use_bias) {\n            x += p_b[(xi / step_b) % size_b];\n        }\n\n        scalar_t ref = use_ref ? p_ref[xi] : zero;\n\n        scalar_t y;\n\n        switch (act * 10 + grad) {\n            default:\n            case 10: y = x; break;\n            case 11: y = x; break;\n            case 12: y = 0.0; break;\n\n            case 30: y = (x > 0.0) ? x : x * alpha; break;\n            case 31: y = (ref > 0.0) ? x : x * alpha; break;\n            case 32: y = 0.0; break;\n        }\n\n        out[xi] = y * scale;\n    }\n}\n\n\ntorch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,\n    int act, int grad, float alpha, float scale) {\n    int curDevice = -1;\n    cudaGetDevice(&curDevice);\n    cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice);\n\n    auto x = input.contiguous();\n    auto b = bias.contiguous();\n    auto ref = refer.contiguous();\n\n    int use_bias = b.numel() ? 1 : 0;\n    int use_ref = ref.numel() ? 1 : 0;\n\n    int size_x = x.numel();\n    int size_b = b.numel();\n    int step_b = 1;\n\n    for (int i = 1 + 1; i < x.dim(); i++) {\n        step_b *= x.size(i);\n    }\n\n    int loop_x = 4;\n    int block_size = 4 * 32;\n    int grid_size = (size_x - 1) / (loop_x * block_size) + 1;\n\n    auto y = torch::empty_like(x);\n\n    AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), \"fused_bias_act_kernel\", [&] {\n        fused_bias_act_kernel<scalar_t><<<grid_size, block_size, 0, stream>>>(\n            y.data_ptr<scalar_t>(),\n            x.data_ptr<scalar_t>(),\n            b.data_ptr<scalar_t>(),\n            ref.data_ptr<scalar_t>(),\n            act,\n            grad,\n            alpha,\n            scale,\n            loop_x,\n            size_x,\n            step_b,\n            size_b,\n            use_bias,\n            use_ref\n        );\n    });\n\n    return y;\n}\n"
  },
  {
    "path": "basicsr/ops/upfirdn2d/__init__.py",
    "content": "from .upfirdn2d import upfirdn2d\n\n__all__ = ['upfirdn2d']\n"
  },
  {
    "path": "basicsr/ops/upfirdn2d/src/upfirdn2d.cpp",
    "content": "// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp\n#include <torch/extension.h>\n\n\ntorch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,\n                            int up_x, int up_y, int down_x, int down_y,\n                            int pad_x0, int pad_x1, int pad_y0, int pad_y1);\n\n#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x \" must be a CUDA tensor\")\n#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x \" must be contiguous\")\n#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)\n\ntorch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,\n                        int up_x, int up_y, int down_x, int down_y,\n                        int pad_x0, int pad_x1, int pad_y0, int pad_y1) {\n    CHECK_CUDA(input);\n    CHECK_CUDA(kernel);\n\n    return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);\n}\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"upfirdn2d\", &upfirdn2d, \"upfirdn2d (CUDA)\");\n}\n"
  },
  {
    "path": "basicsr/ops/upfirdn2d/src/upfirdn2d_kernel.cu",
    "content": "// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d_kernel.cu\n// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#include <torch/types.h>\n\n#include <ATen/ATen.h>\n#include <ATen/AccumulateType.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\nstatic __host__ __device__ __forceinline__ int floor_div(int a, int b) {\n  int c = a / b;\n\n  if (c * b > a) {\n    c--;\n  }\n\n  return c;\n}\n\nstruct UpFirDn2DKernelParams {\n  int up_x;\n  int up_y;\n  int down_x;\n  int down_y;\n  int pad_x0;\n  int pad_x1;\n  int pad_y0;\n  int pad_y1;\n\n  int major_dim;\n  int in_h;\n  int in_w;\n  int minor_dim;\n  int kernel_h;\n  int kernel_w;\n  int out_h;\n  int out_w;\n  int loop_major;\n  int loop_x;\n};\n\ntemplate <typename scalar_t>\n__global__ void upfirdn2d_kernel_large(scalar_t *out, const scalar_t *input,\n                                       const scalar_t *kernel,\n                                       const UpFirDn2DKernelParams p) {\n  int minor_idx = blockIdx.x * blockDim.x + threadIdx.x;\n  int out_y = minor_idx / p.minor_dim;\n  minor_idx -= out_y * p.minor_dim;\n  int out_x_base = blockIdx.y * p.loop_x * blockDim.y + threadIdx.y;\n  int major_idx_base = blockIdx.z * p.loop_major;\n\n  if (out_x_base >= p.out_w || out_y >= p.out_h ||\n      major_idx_base >= p.major_dim) {\n    return;\n  }\n\n  int mid_y = out_y * p.down_y + p.up_y - 1 - p.pad_y0;\n  int in_y = min(max(floor_div(mid_y, p.up_y), 0), p.in_h);\n  int h = min(max(floor_div(mid_y + p.kernel_h, p.up_y), 0), p.in_h) - in_y;\n  int kernel_y = mid_y + p.kernel_h - (in_y + 1) * p.up_y;\n\n  for (int loop_major = 0, major_idx = major_idx_base;\n       loop_major < p.loop_major && major_idx < p.major_dim;\n       loop_major++, major_idx++) {\n    for (int loop_x = 0, out_x = out_x_base;\n         loop_x < p.loop_x && out_x < p.out_w; loop_x++, out_x += blockDim.y) {\n      int mid_x = out_x * p.down_x + p.up_x - 1 - p.pad_x0;\n      int in_x = min(max(floor_div(mid_x, p.up_x), 0), p.in_w);\n      int w = min(max(floor_div(mid_x + p.kernel_w, p.up_x), 0), p.in_w) - in_x;\n      int kernel_x = mid_x + p.kernel_w - (in_x + 1) * p.up_x;\n\n      const scalar_t *x_p =\n          &input[((major_idx * p.in_h + in_y) * p.in_w + in_x) * p.minor_dim +\n                 minor_idx];\n      const scalar_t *k_p = &kernel[kernel_y * p.kernel_w + kernel_x];\n      int x_px = p.minor_dim;\n      int k_px = -p.up_x;\n      int x_py = p.in_w * p.minor_dim;\n      int k_py = -p.up_y * p.kernel_w;\n\n      scalar_t v = 0.0f;\n\n      for (int y = 0; y < h; y++) {\n        for (int x = 0; x < w; x++) {\n          v += static_cast<scalar_t>(*x_p) * static_cast<scalar_t>(*k_p);\n          x_p += x_px;\n          k_p += k_px;\n        }\n\n        x_p += x_py - w * x_px;\n        k_p += k_py - w * k_px;\n      }\n\n      out[((major_idx * p.out_h + out_y) * p.out_w + out_x) * p.minor_dim +\n          minor_idx] = v;\n    }\n  }\n}\n\ntemplate <typename scalar_t, int up_x, int up_y, int down_x, int down_y,\n          int kernel_h, int kernel_w, int tile_out_h, int tile_out_w>\n__global__ void upfirdn2d_kernel(scalar_t *out, const scalar_t *input,\n                                 const scalar_t *kernel,\n                                 const 
UpFirDn2DKernelParams p) {\n  const int tile_in_h = ((tile_out_h - 1) * down_y + kernel_h - 1) / up_y + 1;\n  const int tile_in_w = ((tile_out_w - 1) * down_x + kernel_w - 1) / up_x + 1;\n\n  __shared__ volatile float sk[kernel_h][kernel_w];\n  __shared__ volatile float sx[tile_in_h][tile_in_w];\n\n  int minor_idx = blockIdx.x;\n  int tile_out_y = minor_idx / p.minor_dim;\n  minor_idx -= tile_out_y * p.minor_dim;\n  tile_out_y *= tile_out_h;\n  int tile_out_x_base = blockIdx.y * p.loop_x * tile_out_w;\n  int major_idx_base = blockIdx.z * p.loop_major;\n\n  if (tile_out_x_base >= p.out_w | tile_out_y >= p.out_h |\n      major_idx_base >= p.major_dim) {\n    return;\n  }\n\n  for (int tap_idx = threadIdx.x; tap_idx < kernel_h * kernel_w;\n       tap_idx += blockDim.x) {\n    int ky = tap_idx / kernel_w;\n    int kx = tap_idx - ky * kernel_w;\n    scalar_t v = 0.0;\n\n    if (kx < p.kernel_w & ky < p.kernel_h) {\n      v = kernel[(p.kernel_h - 1 - ky) * p.kernel_w + (p.kernel_w - 1 - kx)];\n    }\n\n    sk[ky][kx] = v;\n  }\n\n  for (int loop_major = 0, major_idx = major_idx_base;\n       loop_major < p.loop_major & major_idx < p.major_dim;\n       loop_major++, major_idx++) {\n    for (int loop_x = 0, tile_out_x = tile_out_x_base;\n         loop_x < p.loop_x & tile_out_x < p.out_w;\n         loop_x++, tile_out_x += tile_out_w) {\n      int tile_mid_x = tile_out_x * down_x + up_x - 1 - p.pad_x0;\n      int tile_mid_y = tile_out_y * down_y + up_y - 1 - p.pad_y0;\n      int tile_in_x = floor_div(tile_mid_x, up_x);\n      int tile_in_y = floor_div(tile_mid_y, up_y);\n\n      __syncthreads();\n\n      for (int in_idx = threadIdx.x; in_idx < tile_in_h * tile_in_w;\n           in_idx += blockDim.x) {\n        int rel_in_y = in_idx / tile_in_w;\n        int rel_in_x = in_idx - rel_in_y * tile_in_w;\n        int in_x = rel_in_x + tile_in_x;\n        int in_y = rel_in_y + tile_in_y;\n\n        scalar_t v = 0.0;\n\n        if (in_x >= 0 & in_y >= 0 & in_x < p.in_w & in_y < p.in_h) {\n          v = input[((major_idx * p.in_h + in_y) * p.in_w + in_x) *\n                        p.minor_dim +\n                    minor_idx];\n        }\n\n        sx[rel_in_y][rel_in_x] = v;\n      }\n\n      __syncthreads();\n      for (int out_idx = threadIdx.x; out_idx < tile_out_h * tile_out_w;\n           out_idx += blockDim.x) {\n        int rel_out_y = out_idx / tile_out_w;\n        int rel_out_x = out_idx - rel_out_y * tile_out_w;\n        int out_x = rel_out_x + tile_out_x;\n        int out_y = rel_out_y + tile_out_y;\n\n        int mid_x = tile_mid_x + rel_out_x * down_x;\n        int mid_y = tile_mid_y + rel_out_y * down_y;\n        int in_x = floor_div(mid_x, up_x);\n        int in_y = floor_div(mid_y, up_y);\n        int rel_in_x = in_x - tile_in_x;\n        int rel_in_y = in_y - tile_in_y;\n        int kernel_x = (in_x + 1) * up_x - mid_x - 1;\n        int kernel_y = (in_y + 1) * up_y - mid_y - 1;\n\n        scalar_t v = 0.0;\n\n#pragma unroll\n        for (int y = 0; y < kernel_h / up_y; y++)\n#pragma unroll\n          for (int x = 0; x < kernel_w / up_x; x++)\n            v += sx[rel_in_y + y][rel_in_x + x] *\n                 sk[kernel_y + y * up_y][kernel_x + x * up_x];\n\n        if (out_x < p.out_w & out_y < p.out_h) {\n          out[((major_idx * p.out_h + out_y) * p.out_w + out_x) * p.minor_dim +\n              minor_idx] = v;\n        }\n      }\n    }\n  }\n}\n\ntorch::Tensor upfirdn2d_op(const torch::Tensor &input,\n                           const torch::Tensor &kernel, int up_x, int up_y,\n        
                   int down_x, int down_y, int pad_x0, int pad_x1,\n                           int pad_y0, int pad_y1) {\n  int curDevice = -1;\n  cudaGetDevice(&curDevice);\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice);\n\n  UpFirDn2DKernelParams p;\n\n  auto x = input.contiguous();\n  auto k = kernel.contiguous();\n\n  p.major_dim = x.size(0);\n  p.in_h = x.size(1);\n  p.in_w = x.size(2);\n  p.minor_dim = x.size(3);\n  p.kernel_h = k.size(0);\n  p.kernel_w = k.size(1);\n  p.up_x = up_x;\n  p.up_y = up_y;\n  p.down_x = down_x;\n  p.down_y = down_y;\n  p.pad_x0 = pad_x0;\n  p.pad_x1 = pad_x1;\n  p.pad_y0 = pad_y0;\n  p.pad_y1 = pad_y1;\n\n  p.out_h = (p.in_h * p.up_y + p.pad_y0 + p.pad_y1 - p.kernel_h + p.down_y) /\n            p.down_y;\n  p.out_w = (p.in_w * p.up_x + p.pad_x0 + p.pad_x1 - p.kernel_w + p.down_x) /\n            p.down_x;\n\n  auto out =\n      at::empty({p.major_dim, p.out_h, p.out_w, p.minor_dim}, x.options());\n\n  int mode = -1;\n\n  int tile_out_h = -1;\n  int tile_out_w = -1;\n\n  if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 &&\n      p.kernel_h <= 4 && p.kernel_w <= 4) {\n    mode = 1;\n    tile_out_h = 16;\n    tile_out_w = 64;\n  }\n\n  if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 &&\n      p.kernel_h <= 3 && p.kernel_w <= 3) {\n    mode = 2;\n    tile_out_h = 16;\n    tile_out_w = 64;\n  }\n\n  if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 &&\n      p.kernel_h <= 4 && p.kernel_w <= 4) {\n    mode = 3;\n    tile_out_h = 16;\n    tile_out_w = 64;\n  }\n\n  if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 &&\n      p.kernel_h <= 2 && p.kernel_w <= 2) {\n    mode = 4;\n    tile_out_h = 16;\n    tile_out_w = 64;\n  }\n\n  if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 &&\n      p.kernel_h <= 4 && p.kernel_w <= 4) {\n    mode = 5;\n    tile_out_h = 8;\n    tile_out_w = 32;\n  }\n\n  if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 &&\n      p.kernel_h <= 2 && p.kernel_w <= 2) {\n    mode = 6;\n    tile_out_h = 8;\n    tile_out_w = 32;\n  }\n\n  dim3 block_size;\n  dim3 grid_size;\n\n  if (tile_out_h > 0 && tile_out_w > 0) {\n    p.loop_major = (p.major_dim - 1) / 16384 + 1;\n    p.loop_x = 1;\n    block_size = dim3(32 * 8, 1, 1);\n    grid_size = dim3(((p.out_h - 1) / tile_out_h + 1) * p.minor_dim,\n                     (p.out_w - 1) / (p.loop_x * tile_out_w) + 1,\n                     (p.major_dim - 1) / p.loop_major + 1);\n  } else {\n    p.loop_major = (p.major_dim - 1) / 16384 + 1;\n    p.loop_x = 4;\n    block_size = dim3(4, 32, 1);\n    grid_size = dim3((p.out_h * p.minor_dim - 1) / block_size.x + 1,\n                     (p.out_w - 1) / (p.loop_x * block_size.y) + 1,\n                     (p.major_dim - 1) / p.loop_major + 1);\n  }\n\n  AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), \"upfirdn2d_cuda\", [&] {\n    switch (mode) {\n    case 1:\n      upfirdn2d_kernel<scalar_t, 1, 1, 1, 1, 4, 4, 16, 64>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    case 2:\n      upfirdn2d_kernel<scalar_t, 1, 1, 1, 1, 3, 3, 16, 64>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 
k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    case 3:\n      upfirdn2d_kernel<scalar_t, 2, 2, 1, 1, 4, 4, 16, 64>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    case 4:\n      upfirdn2d_kernel<scalar_t, 2, 2, 1, 1, 2, 2, 16, 64>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    case 5:\n      upfirdn2d_kernel<scalar_t, 1, 1, 2, 2, 4, 4, 8, 32>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    case 6:\n      upfirdn2d_kernel<scalar_t, 1, 1, 2, 2, 4, 4, 8, 32>\n          <<<grid_size, block_size, 0, stream>>>(out.data_ptr<scalar_t>(),\n                                                 x.data_ptr<scalar_t>(),\n                                                 k.data_ptr<scalar_t>(), p);\n\n      break;\n\n    default:\n      upfirdn2d_kernel_large<scalar_t><<<grid_size, block_size, 0, stream>>>(\n          out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(),\n          k.data_ptr<scalar_t>(), p);\n    }\n  });\n\n  return out;\n}\n"
  },
  {
    "path": "basicsr/ops/upfirdn2d/upfirdn2d.py",
    "content": "# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py  # noqa:E501\n\nimport torch\nfrom torch.autograd import Function\nfrom torch.nn import functional as F\n\ntry:\n    from . import upfirdn2d_ext\nexcept ImportError:\n    import os\n    BASICSR_JIT = os.getenv('BASICSR_JIT')\n    if BASICSR_JIT == 'True':\n        from torch.utils.cpp_extension import load\n        module_path = os.path.dirname(__file__)\n        upfirdn2d_ext = load(\n            'upfirdn2d',\n            sources=[\n                os.path.join(module_path, 'src', 'upfirdn2d.cpp'),\n                os.path.join(module_path, 'src', 'upfirdn2d_kernel.cu'),\n            ],\n        )\n\n\nclass UpFirDn2dBackward(Function):\n\n    @staticmethod\n    def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size):\n\n        up_x, up_y = up\n        down_x, down_y = down\n        g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad\n\n        grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)\n\n        grad_input = upfirdn2d_ext.upfirdn2d(\n            grad_output,\n            grad_kernel,\n            down_x,\n            down_y,\n            up_x,\n            up_y,\n            g_pad_x0,\n            g_pad_x1,\n            g_pad_y0,\n            g_pad_y1,\n        )\n        grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])\n\n        ctx.save_for_backward(kernel)\n\n        pad_x0, pad_x1, pad_y0, pad_y1 = pad\n\n        ctx.up_x = up_x\n        ctx.up_y = up_y\n        ctx.down_x = down_x\n        ctx.down_y = down_y\n        ctx.pad_x0 = pad_x0\n        ctx.pad_x1 = pad_x1\n        ctx.pad_y0 = pad_y0\n        ctx.pad_y1 = pad_y1\n        ctx.in_size = in_size\n        ctx.out_size = out_size\n\n        return grad_input\n\n    @staticmethod\n    def backward(ctx, gradgrad_input):\n        kernel, = ctx.saved_tensors\n\n        gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)\n\n        gradgrad_out = upfirdn2d_ext.upfirdn2d(\n            gradgrad_input,\n            kernel,\n            ctx.up_x,\n            ctx.up_y,\n            ctx.down_x,\n            ctx.down_y,\n            ctx.pad_x0,\n            ctx.pad_x1,\n            ctx.pad_y0,\n            ctx.pad_y1,\n        )\n        # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0],\n        #                                  ctx.out_size[1], ctx.in_size[3])\n        gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1])\n\n        return gradgrad_out, None, None, None, None, None, None, None, None\n\n\nclass UpFirDn2d(Function):\n\n    @staticmethod\n    def forward(ctx, input, kernel, up, down, pad):\n        up_x, up_y = up\n        down_x, down_y = down\n        pad_x0, pad_x1, pad_y0, pad_y1 = pad\n\n        kernel_h, kernel_w = kernel.shape\n        batch, channel, in_h, in_w = input.shape\n        ctx.in_size = input.shape\n\n        input = input.reshape(-1, in_h, in_w, 1)\n\n        ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))\n\n        out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1\n        out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1\n        ctx.out_size = (out_h, out_w)\n\n        ctx.up = (up_x, up_y)\n        ctx.down = (down_x, down_y)\n        ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)\n\n        g_pad_x0 = kernel_w - pad_x0 - 1\n        g_pad_y0 = kernel_h - pad_y0 - 1\n        g_pad_x1 = 
in_w * up_x - out_w * down_x + pad_x0 - up_x + 1\n        g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1\n\n        ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)\n\n        out = upfirdn2d_ext.upfirdn2d(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1)\n        # out = out.view(major, out_h, out_w, minor)\n        out = out.view(-1, channel, out_h, out_w)\n\n        return out\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        kernel, grad_kernel = ctx.saved_tensors\n\n        grad_input = UpFirDn2dBackward.apply(\n            grad_output,\n            kernel,\n            grad_kernel,\n            ctx.up,\n            ctx.down,\n            ctx.pad,\n            ctx.g_pad,\n            ctx.in_size,\n            ctx.out_size,\n        )\n\n        return grad_input, None, None, None, None\n\n\ndef upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):\n    if input.device.type == 'cpu':\n        out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1])\n    else:\n        out = UpFirDn2d.apply(input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]))\n\n    return out\n\n\ndef upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1):\n    _, channel, in_h, in_w = input.shape\n    input = input.reshape(-1, in_h, in_w, 1)\n\n    _, in_h, in_w, minor = input.shape\n    kernel_h, kernel_w = kernel.shape\n\n    out = input.view(-1, in_h, 1, in_w, 1, minor)\n    out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])\n    out = out.view(-1, in_h * up_y, in_w * up_x, minor)\n\n    out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)])\n    out = out[:, max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ]\n\n    out = out.permute(0, 3, 1, 2)\n    out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1])\n    w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)\n    out = F.conv2d(out, w)\n    out = out.reshape(\n        -1,\n        minor,\n        in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,\n        in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,\n    )\n    out = out.permute(0, 2, 3, 1)\n    out = out[:, ::down_y, ::down_x, :]\n\n    out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1\n    out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1\n\n    return out.view(-1, channel, out_h, out_w)\n"
  },
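The CPU branch of `upfirdn2d` above runs entirely through `upfirdn2d_native`, so the shared output-shape formula `out = (in * up + pad_0 + pad_1 - kernel) // down + 1` can be sanity-checked without the compiled CUDA extension. A minimal sketch, assuming the module is importable as `basicsr.ops.upfirdn2d`:

```python
# Verify the output-shape formula on the CPU path, which needs no compiled
# extension. The kernel and sizes here are illustrative.
import torch
from basicsr.ops.upfirdn2d import upfirdn2d

x = torch.randn(1, 3, 8, 8)                    # (B, C, H, W), CPU tensor
k = torch.ones(4, 4) / 16.0                    # normalized 4x4 box filter
y = upfirdn2d(x, k, up=2, down=1, pad=(1, 2))  # upsample x2, pad, filter

expected = (8 * 2 + 1 + 2 - 4) // 1 + 1        # = 16 per spatial dim
assert y.shape == (1, 3, expected, expected)
```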
  {
    "path": "basicsr/setup.py",
    "content": "#!/usr/bin/env python\n\nfrom setuptools import find_packages, setup\n\nimport os\nimport subprocess\nimport sys\nimport time\nfrom torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension\nfrom utils.misc import gpu_is_available\n\nversion_file = './basicsr/version.py'\n\n\ndef readme():\n    with open('README.md', encoding='utf-8') as f:\n        content = f.read()\n    return content\n\n\ndef get_git_hash():\n\n    def _minimal_ext_cmd(cmd):\n        # construct minimal environment\n        env = {}\n        for k in ['SYSTEMROOT', 'PATH', 'HOME']:\n            v = os.environ.get(k)\n            if v is not None:\n                env[k] = v\n        # LANGUAGE is used on win32\n        env['LANGUAGE'] = 'C'\n        env['LANG'] = 'C'\n        env['LC_ALL'] = 'C'\n        out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]\n        return out\n\n    try:\n        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n        sha = out.strip().decode('ascii')\n    except OSError:\n        sha = 'unknown'\n\n    return sha\n\n\ndef get_hash():\n    if os.path.exists('.git'):\n        sha = get_git_hash()[:7]\n    elif os.path.exists(version_file):\n        try:\n            from version import __version__\n            sha = __version__.split('+')[-1]\n        except ImportError:\n            raise ImportError('Unable to get git version')\n    else:\n        sha = 'unknown'\n\n    return sha\n\n\ndef write_version_py():\n    content = \"\"\"# GENERATED VERSION FILE\n# TIME: {}\n__version__ = '{}'\n__gitsha__ = '{}'\nversion_info = ({})\n\"\"\"\n    sha = get_hash()\n    with open('./basicsr/VERSION', 'r') as f:\n        SHORT_VERSION = f.read().strip()\n    VERSION_INFO = ', '.join([x if x.isdigit() else f'\"{x}\"' for x in SHORT_VERSION.split('.')])\n\n    version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO)\n    with open(version_file, 'w') as f:\n        f.write(version_file_str)\n\n\ndef get_version():\n    with open(version_file, 'r') as f:\n        exec(compile(f.read(), version_file, 'exec'))\n    return locals()['__version__']\n\n\ndef make_cuda_ext(name, module, sources, sources_cuda=None):\n    if sources_cuda is None:\n        sources_cuda = []\n    define_macros = []\n    extra_compile_args = {'cxx': []}\n\n    # if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1':\n    if gpu_is_available or os.getenv('FORCE_CUDA', '0') == '1':\n        define_macros += [('WITH_CUDA', None)]\n        extension = CUDAExtension\n        extra_compile_args['nvcc'] = [\n            '-D__CUDA_NO_HALF_OPERATORS__',\n            '-D__CUDA_NO_HALF_CONVERSIONS__',\n            '-D__CUDA_NO_HALF2_OPERATORS__',\n        ]\n        sources += sources_cuda\n    else:\n        print(f'Compiling {name} without CUDA')\n        extension = CppExtension\n\n    return extension(\n        name=f'{module}.{name}',\n        sources=[os.path.join(*module.split('.'), p) for p in sources],\n        define_macros=define_macros,\n        extra_compile_args=extra_compile_args)\n\n\ndef get_requirements(filename='requirements.txt'):\n    with open(os.path.join('.', filename), 'r') as f:\n        requires = [line.replace('\\n', '') for line in f.readlines()]\n    return requires\n\n\nif __name__ == '__main__':\n    if '--cuda_ext' in sys.argv:\n        ext_modules = [\n            make_cuda_ext(\n                name='deform_conv_ext',\n                module='ops.dcn',\n                
sources=['src/deform_conv_ext.cpp'],\n                sources_cuda=['src/deform_conv_cuda.cpp', 'src/deform_conv_cuda_kernel.cu']),\n            make_cuda_ext(\n                name='fused_act_ext',\n                module='ops.fused_act',\n                sources=['src/fused_bias_act.cpp'],\n                sources_cuda=['src/fused_bias_act_kernel.cu']),\n            make_cuda_ext(\n                name='upfirdn2d_ext',\n                module='ops.upfirdn2d',\n                sources=['src/upfirdn2d.cpp'],\n                sources_cuda=['src/upfirdn2d_kernel.cu']),\n        ]\n        sys.argv.remove('--cuda_ext')\n    else:\n        ext_modules = []\n\n    write_version_py()\n    setup(\n        name='basicsr',\n        version=get_version(),\n        description='Open Source Image and Video Super-Resolution Toolbox',\n        long_description=readme(),\n        long_description_content_type='text/markdown',\n        author='Xintao Wang',\n        author_email='xintao.wang@outlook.com',\n        keywords='computer vision, restoration, super resolution',\n        url='https://github.com/xinntao/BasicSR',\n        include_package_data=True,\n        packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')),\n        classifiers=[\n            'Development Status :: 4 - Beta',\n            'License :: OSI Approved :: Apache Software License',\n            'Operating System :: OS Independent',\n            'Programming Language :: Python :: 3',\n            'Programming Language :: Python :: 3.7',\n            'Programming Language :: Python :: 3.8',\n        ],\n        license='Apache License 2.0',\n        setup_requires=['cython', 'numpy'],\n        install_requires=get_requirements(),\n        ext_modules=ext_modules,\n        cmdclass={'build_ext': BuildExtension},\n        zip_safe=False)\n"
  },
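`write_version_py()` stitches the short version from `basicsr/VERSION` together with the git hash, and its quoting rule for `version_info` is easy to check in isolation. A sketch with made-up version strings ('1.3.2' and '1.4.0+abc1234' are not from the repo):

```python
# How write_version_py() builds the version_info literal: numeric parts stay
# bare, anything else is quoted.
SHORT_VERSION = '1.3.2'
VERSION_INFO = ', '.join(x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.'))
print(f'version_info = ({VERSION_INFO})')   # -> version_info = (1, 3, 2)

# get_hash() falls back to the '+<sha>' suffix of an existing version string:
print('1.4.0+abc1234'.split('+')[-1])       # -> abc1234
```

Passing `--cuda_ext` to this setup script is what appends the three `make_cuda_ext(...)` extensions to `ext_modules`; without it, no extension is compiled.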
  {
    "path": "basicsr/train.py",
    "content": "import argparse\nimport datetime\nimport logging\nimport math\nimport copy\nimport random\nimport time\nimport torch\nfrom os import path as osp\nimport os\nos.environ[\"PYTORCH_CUDA_ALLOC_CONF\"]=\"expandable_segments:True\"\n\nfrom basicsr.data import build_dataloader, build_dataset\nfrom basicsr.data.data_sampler import EnlargedSampler\nfrom basicsr.data.prefetch_dataloader import CPUPrefetcher, CUDAPrefetcher\nfrom basicsr.models import build_model\nfrom basicsr.utils import (MessageLogger, check_resume, get_env_info, get_root_logger, init_tb_logger,\n                           init_wandb_logger, make_exp_dirs, mkdir_and_rename, set_random_seed)\nfrom basicsr.utils.dist_util import get_dist_info, init_dist\nfrom basicsr.utils.options import dict2str, parse\n\nimport warnings\n# ignore UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\n\ndef parse_options(root_path, is_train=True):\n    parser = argparse.ArgumentParser()\n    parser.add_argument('-opt', type=str, required=True, help='Path to option YAML file.')\n    parser.add_argument('--launcher', choices=['none', 'pytorch', 'slurm'], default='none', help='job launcher')\n    parser.add_argument('--local-rank', type=int, default=0)\n    parser.add_argument('--rank', type=int, default=0)\n    args = parser.parse_args()\n    opt = parse(args.opt, root_path, is_train=is_train)\n\n    # distributed settings\n    if args.launcher == 'none':\n        opt['dist'] = False\n        print('Disable distributed.', flush=True)\n    else:\n        opt['dist'] = True\n        if args.launcher == 'slurm' and 'dist_params' in opt:\n            init_dist(args.launcher, **opt['dist_params'])\n        else:\n            init_dist(args.launcher)\n\n    opt['rank'], opt['world_size'] = get_dist_info()\n\n    # print(opt['rank'], opt['world_size'])\n    # exit()\n\n    # random seed\n    seed = opt.get('manual_seed')\n    if seed is None:\n        seed = random.randint(1, 10000)\n        opt['manual_seed'] = seed\n    set_random_seed(seed + opt['rank'])\n\n    return opt\n\n\ndef init_loggers(opt):\n    log_file = osp.join(opt['path']['log'], f\"train_{opt['name']}.log\")\n    logger = get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=log_file)\n    logger.info(get_env_info())\n    logger.info(dict2str(opt))\n\n    # initialize wandb logger before tensorboard logger to allow proper sync:\n    if (opt['logger'].get('wandb') is not None) and (opt['logger']['wandb'].get('project') is not None):\n        assert opt['logger'].get('use_tb_logger') is True, ('should turn on tensorboard when using wandb')\n        init_wandb_logger(opt)\n    tb_logger = None\n    if opt['logger'].get('use_tb_logger'):\n        tb_logger = init_tb_logger(log_dir=osp.join('tb_logger', opt['name']))\n    return logger, tb_logger\n\n\ndef create_train_val_dataloader(opt, logger):\n    # create train and val dataloaders\n    train_loader, val_loader = None, None\n    for phase, dataset_opt in opt['datasets'].items():\n        if phase == 'train':\n            dataset_enlarge_ratio = dataset_opt.get('dataset_enlarge_ratio', 1)\n            train_set = build_dataset(dataset_opt)\n            train_sampler = EnlargedSampler(train_set, opt['world_size'], opt['rank'], dataset_enlarge_ratio)\n            train_loader = build_dataloader(\n                train_set,\n                dataset_opt,\n                num_gpu=opt['num_gpu'],\n                
dist=opt['dist'],\n                sampler=train_sampler,\n                seed=opt['manual_seed'])\n\n            print(len(train_set), dataset_enlarge_ratio, dataset_opt['batch_size_per_gpu'], opt['world_size'])\n            num_iter_per_epoch = math.ceil(\n                len(train_set) * dataset_enlarge_ratio / (dataset_opt['batch_size_per_gpu'] * opt['world_size']))\n            total_iters = int(opt['train']['total_iter'])\n            total_epochs = math.ceil(total_iters / (num_iter_per_epoch))\n            logger.info('Training statistics:'\n                        f'\\n\\tNumber of train images: {len(train_set)}'\n                        f'\\n\\tDataset enlarge ratio: {dataset_enlarge_ratio}'\n                        f'\\n\\tBatch size per gpu: {dataset_opt[\"batch_size_per_gpu\"]}'\n                        f'\\n\\tWorld size (gpu number): {opt[\"world_size\"]}'\n                        f'\\n\\tRequire iter number per epoch: {num_iter_per_epoch}'\n                        f'\\n\\tTotal epochs: {total_epochs}; iters: {total_iters}.')\n\n        elif phase == 'val':\n            val_set = build_dataset(dataset_opt)\n            val_loader = build_dataloader(\n                val_set, dataset_opt, num_gpu=opt['num_gpu'], dist=opt['dist'], sampler=None, seed=opt['manual_seed'])\n            logger.info(f'Number of val images/folders in {dataset_opt[\"name\"]}: ' f'{len(val_set)}')\n        else:\n            raise ValueError(f'Dataset phase {phase} is not recognized.')\n\n    return train_loader, train_sampler, val_loader, total_epochs, total_iters\n\n\ndef train_pipeline(root_path):\n    # parse options, set distributed setting, set random seed\n    opt = parse_options(root_path, is_train=True)\n\n    torch.backends.cudnn.benchmark = True\n    # torch.backends.cudnn.deterministic = True\n\n    # load resume states if necessary\n    if opt['path'].get('resume_state'):\n        device_id = torch.cuda.current_device()\n        resume_state = torch.load(\n            opt['path']['resume_state'], map_location=lambda storage, loc: storage.cuda(device_id))\n    else:\n        resume_state = None\n\n    # mkdir for experiments and logger\n    if resume_state is None:\n        make_exp_dirs(opt)\n        if opt['logger'].get('use_tb_logger') and opt['rank'] == 0:\n            mkdir_and_rename(osp.join('tb_logger', opt['name']))\n\n    # initialize loggers\n    logger, tb_logger = init_loggers(opt)\n\n    # create train and validation dataloaders\n    result = create_train_val_dataloader(opt, logger)\n    train_loader, train_sampler, val_loader, total_epochs, total_iters = result\n\n    # create model\n    if resume_state:  # resume training\n        check_resume(opt, resume_state['iter'])\n        model = build_model(opt)\n        model.resume_training(resume_state)  # handle optimizers and schedulers\n        logger.info(f\"Resuming training from epoch: {resume_state['epoch']}, \" f\"iter: {resume_state['iter']}.\")\n        start_epoch = resume_state['epoch']\n        current_iter = resume_state['iter']\n    else:\n        model = build_model(opt)\n        start_epoch = 0\n        current_iter = 0\n\n    # create message logger (formatted outputs)\n    msg_logger = MessageLogger(opt, current_iter, tb_logger)\n\n    # dataloader prefetcher\n    prefetch_mode = opt['datasets']['train'].get('prefetch_mode')\n    if prefetch_mode is None or prefetch_mode == 'cpu':\n        prefetcher = CPUPrefetcher(train_loader)\n    elif prefetch_mode == 'cuda':\n        prefetcher = CUDAPrefetcher(train_loader, opt)\n
        logger.info(f'Use {prefetch_mode} prefetch dataloader')\n        if opt['datasets']['train'].get('pin_memory') is not True:\n            raise ValueError('Please set pin_memory=True for CUDAPrefetcher.')\n    else:\n        raise ValueError(f'Wrong prefetch_mode {prefetch_mode}. ' \"Supported ones are: None, 'cuda', 'cpu'.\")\n\n    # training\n    logger.info(f'Start training from epoch: {start_epoch}, iter: {current_iter+1}')\n    data_time, iter_time = time.time(), time.time()\n    start_time = time.time()\n\n    for epoch in range(start_epoch, total_epochs + 1):\n        train_sampler.set_epoch(epoch)\n        prefetcher.reset()\n        train_data = prefetcher.next()\n\n        while train_data is not None:\n            data_time = time.time() - data_time\n\n            current_iter += 1\n            if current_iter > total_iters:\n                break\n\n            # update learning rate\n            model.update_learning_rate(current_iter,\n                                       warmup_iter=opt['train'].get('warmup_iter', -1))\n            # training\n            model.feed_data(train_data)\n            model.optimize_parameters(current_iter)\n\n            iter_time = time.time() - iter_time\n            # log\n            if current_iter % opt['logger']['print_freq'] == 0:\n                log_vars = {'epoch': epoch, 'iter': current_iter}\n                log_vars.update({'lrs': model.get_current_learning_rate()})\n                log_vars.update({'time': iter_time, 'data_time': data_time})\n                log_vars.update(model.get_current_log())\n                msg_logger(log_vars)\n\n            # save models and training states\n            if current_iter % opt['logger']['save_checkpoint_freq'] == 0:\n                logger.info('Saving models and training states.')\n                model.save(epoch, current_iter)\n\n            # validation\n            if opt.get('val') is not None and opt['datasets'].get('val') is not None \\\n                and (current_iter % opt['val']['val_freq'] == 0):\n                model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])\n\n            data_time = time.time()\n            iter_time = time.time()\n            train_data = prefetcher.next()\n        # end of iter\n\n    # end of epoch\n\n    consumed_time = str(datetime.timedelta(seconds=int(time.time() - start_time)))\n    logger.info(f'End of training. Time consumed: {consumed_time}')\n    logger.info('Save the latest model.')\n    model.save(epoch=-1, current_iter=-1)  # -1 stands for the latest\n    if opt.get('val') is not None and opt['datasets'].get('val'):\n        model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])\n    if tb_logger:\n        tb_logger.close()\n\n\nif __name__ == '__main__':\n    root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))\n    train_pipeline(root_path)\n"
  },
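`create_train_val_dataloader()` derives the epoch count from the iteration budget rather than the other way around. The arithmetic, extracted as a standalone sketch (all numbers below are illustrative, not taken from the repo configs):

```python
# Epoch/iteration bookkeeping from create_train_val_dataloader().
import math

num_images, enlarge_ratio = 20000, 1      # len(train_set), dataset_enlarge_ratio
batch_per_gpu, world_size = 4, 2          # batch_size_per_gpu, gpu count
total_iters = 100000                      # opt['train']['total_iter']

num_iter_per_epoch = math.ceil(num_images * enlarge_ratio / (batch_per_gpu * world_size))
total_epochs = math.ceil(total_iters / num_iter_per_epoch)
print(num_iter_per_epoch, total_epochs)   # 2500 iterations/epoch -> 40 epochs
```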
  {
    "path": "basicsr/utils/__init__.py",
    "content": "from .file_client import FileClient\nfrom .img_util import crop_border, imfrombytes, img2tensor, imwrite, tensor2img, tensor2imgs, images_to_gif\nfrom .logger import MessageLogger, get_env_info, get_root_logger, init_tb_logger, init_wandb_logger\nfrom .misc import check_resume, get_time_str, make_exp_dirs, mkdir_and_rename, scandir, set_random_seed, sizeof_fmt\n\n__all__ = [\n    # file_client.py\n    'FileClient',\n    # img_util.py\n    'img2tensor',\n    'tensor2img',\n    'imfrombytes',\n    'imwrite',\n    'crop_border',\n    # logger.py\n    'MessageLogger',\n    'init_tb_logger',\n    'init_wandb_logger',\n    'get_root_logger',\n    'get_env_info',\n    # misc.py\n    'set_random_seed',\n    'get_time_str',\n    'mkdir_and_rename',\n    'make_exp_dirs',\n    'scandir',\n    'check_resume',\n    'sizeof_fmt',\n    \n    # new add\n    'tensor2imgs',\n    'images_to_gif',\n]\n"
  },
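The two newly added exports, `tensor2imgs` and `images_to_gif`, are designed to compose: one unbatches a tensor into numpy frames, the other strings the frames into a GIF. A sketch, assuming `basicsr` is on `PYTHONPATH`:

```python
# Turn a batch of frames into a GIF with the two newly exported helpers.
import torch
from basicsr.utils import tensor2imgs, images_to_gif

frames = torch.rand(8, 3, 64, 64)             # 8 RGB frames in [0, 1]
imgs = tensor2imgs(frames, rgb2bgr=True)      # list of 8 HxWx3 uint8 arrays (BGR)
images_to_gif(imgs, 'out.gif', duration=100)  # ~10 fps, loops forever
```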
  {
    "path": "basicsr/utils/dist_util.py",
    "content": "# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py  # noqa: E501\nimport functools\nimport os\nimport subprocess\nimport torch\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\n\n\ndef init_dist(launcher, backend='nccl', **kwargs):\n    if mp.get_start_method(allow_none=True) is None:\n        mp.set_start_method('spawn')\n    if launcher == 'pytorch':\n        _init_dist_pytorch(backend, **kwargs)\n    elif launcher == 'slurm':\n        _init_dist_slurm(backend, **kwargs)\n    else:\n        raise ValueError(f'Invalid launcher type: {launcher}')\n\n\ndef _init_dist_pytorch(backend, **kwargs):\n    rank = int(os.environ['RANK'])\n    num_gpus = torch.cuda.device_count()\n    print(f'Initializing PyTorch distributed with rank {rank} and {num_gpus} GPUs.')\n    # exit()\n    torch.cuda.set_device(rank % num_gpus)\n    dist.init_process_group(backend=backend, **kwargs)\n\n\ndef _init_dist_slurm(backend, port=None):\n    \"\"\"Initialize slurm distributed training environment.\n\n    If argument ``port`` is not specified, then the master port will be system\n    environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system\n    environment variable, then a default port ``29500`` will be used.\n\n    Args:\n        backend (str): Backend of torch.distributed.\n        port (int, optional): Master port. Defaults to None.\n    \"\"\"\n    proc_id = int(os.environ['SLURM_PROCID'])\n    ntasks = int(os.environ['SLURM_NTASKS'])\n    node_list = os.environ['SLURM_NODELIST']\n    num_gpus = torch.cuda.device_count()\n    torch.cuda.set_device(proc_id % num_gpus)\n    addr = subprocess.getoutput(f'scontrol show hostname {node_list} | head -n1')\n    # specify master port\n    if port is not None:\n        os.environ['MASTER_PORT'] = str(port)\n    elif 'MASTER_PORT' in os.environ:\n        pass  # use MASTER_PORT in the environment variable\n    else:\n        # 29500 is torch.distributed default port\n        os.environ['MASTER_PORT'] = '29500'\n    os.environ['MASTER_ADDR'] = addr\n    os.environ['WORLD_SIZE'] = str(ntasks)\n    os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)\n    os.environ['RANK'] = str(proc_id)\n    dist.init_process_group(backend=backend)\n\n\ndef get_dist_info():\n    if dist.is_available():\n        initialized = dist.is_initialized()\n    else:\n        initialized = False\n    if initialized:\n        rank = dist.get_rank()\n        world_size = dist.get_world_size()\n    else:\n        rank = 0\n        world_size = 1\n    return rank, world_size\n\n\ndef master_only(func):\n\n    @functools.wraps(func)\n    def wrapper(*args, **kwargs):\n        rank, _ = get_dist_info()\n        if rank == 0:\n            return func(*args, **kwargs)\n\n    return wrapper\n"
  },
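`get_dist_info()` degrades to `(0, 1)` when `torch.distributed` was never initialized, which is what makes `@master_only` safe in single-process runs too. A minimal sketch:

```python
# @master_only silently no-ops on non-zero ranks; without init_process_group
# the rank defaults to 0, so the call still runs in a plain python process.
from basicsr.utils.dist_util import get_dist_info, master_only

@master_only
def save_checkpoint(path):
    print(f'rank 0 writes {path}')

rank, world_size = get_dist_info()   # -> (0, 1) outside distributed mode
save_checkpoint('latest.pth')        # executes only where rank == 0
```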
  {
    "path": "basicsr/utils/download_util.py",
    "content": "import math\nimport os\nimport requests\nfrom torch.hub import download_url_to_file, get_dir\nfrom tqdm import tqdm\nfrom urllib.parse import urlparse\n\nfrom .misc import sizeof_fmt\n\n\ndef download_file_from_google_drive(file_id, save_path):\n    \"\"\"Download files from google drive.\n    Ref:\n    https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive  # noqa E501\n    Args:\n        file_id (str): File id.\n        save_path (str): Save path.\n    \"\"\"\n\n    session = requests.Session()\n    URL = 'https://docs.google.com/uc?export=download'\n    params = {'id': file_id}\n\n    response = session.get(URL, params=params, stream=True)\n    token = get_confirm_token(response)\n    if token:\n        params['confirm'] = token\n        response = session.get(URL, params=params, stream=True)\n\n    # get file size\n    response_file_size = session.get(URL, params=params, stream=True, headers={'Range': 'bytes=0-2'})\n    print(response_file_size)\n    if 'Content-Range' in response_file_size.headers:\n        file_size = int(response_file_size.headers['Content-Range'].split('/')[1])\n    else:\n        file_size = None\n\n    save_response_content(response, save_path, file_size)\n\n\ndef get_confirm_token(response):\n    for key, value in response.cookies.items():\n        if key.startswith('download_warning'):\n            return value\n    return None\n\n\ndef save_response_content(response, destination, file_size=None, chunk_size=32768):\n    if file_size is not None:\n        pbar = tqdm(total=math.ceil(file_size / chunk_size), unit='chunk')\n\n        readable_file_size = sizeof_fmt(file_size)\n    else:\n        pbar = None\n\n    with open(destination, 'wb') as f:\n        downloaded_size = 0\n        for chunk in response.iter_content(chunk_size):\n            downloaded_size += chunk_size\n            if pbar is not None:\n                pbar.update(1)\n                pbar.set_description(f'Download {sizeof_fmt(downloaded_size)} / {readable_file_size}')\n            if chunk:  # filter out keep-alive new chunks\n                f.write(chunk)\n        if pbar is not None:\n            pbar.close()\n\n\ndef load_file_from_url(url, model_dir=None, progress=True, file_name=None):\n    \"\"\"Load file form http url, will download models if necessary.\n    Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py\n    Args:\n        url (str): URL to be downloaded.\n        model_dir (str): The path to save the downloaded model. Should be a full path. If None, use pytorch hub_dir.\n            Default: None.\n        progress (bool): Whether to show the download progress. Default: True.\n        file_name (str): The downloaded file name. If None, use the file name in the url. Default: None.\n    Returns:\n        str: The path to the downloaded file.\n    \"\"\"\n    if model_dir is None:  # use the pytorch hub_dir\n        hub_dir = get_dir()\n        model_dir = os.path.join(hub_dir, 'checkpoints')\n\n    os.makedirs(model_dir, exist_ok=True)\n\n    parts = urlparse(url)\n    filename = os.path.basename(parts.path)\n    if file_name is not None:\n        filename = file_name\n    cached_file = os.path.abspath(os.path.join(model_dir, filename))\n    if not os.path.exists(cached_file):\n        print(f'Downloading: \"{url}\" to {cached_file}\\n')\n        download_url_to_file(url, cached_file, hash_prefix=None, progress=progress)\n    return cached_file"
  },
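`load_file_from_url()` is idempotent: it resolves the target filename from the URL (or `file_name`), creates `model_dir`, and only downloads when the cached file is missing. A usage sketch; the URL below is a placeholder, not a real checkpoint:

```python
# Cache a remote weight file locally; re-running skips the download.
from basicsr.utils.download_util import load_file_from_url

path = load_file_from_url(
    url='https://example.com/models/net_g.pth',  # placeholder URL
    model_dir='weights',                         # created if missing
    progress=True,
    file_name=None)                              # keep the name from the URL
print(path)                                      # absolute path to the cache
```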
  {
    "path": "basicsr/utils/file_client.py",
    "content": "# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py  # noqa: E501\nfrom abc import ABCMeta, abstractmethod\n\n\nclass BaseStorageBackend(metaclass=ABCMeta):\n    \"\"\"Abstract class of storage backends.\n\n    All backends need to implement two apis: ``get()`` and ``get_text()``.\n    ``get()`` reads the file as a byte stream and ``get_text()`` reads the file\n    as texts.\n    \"\"\"\n\n    @abstractmethod\n    def get(self, filepath):\n        pass\n\n    @abstractmethod\n    def get_text(self, filepath):\n        pass\n\n\nclass MemcachedBackend(BaseStorageBackend):\n    \"\"\"Memcached storage backend.\n\n    Attributes:\n        server_list_cfg (str): Config file for memcached server list.\n        client_cfg (str): Config file for memcached client.\n        sys_path (str | None): Additional path to be appended to `sys.path`.\n            Default: None.\n    \"\"\"\n\n    def __init__(self, server_list_cfg, client_cfg, sys_path=None):\n        if sys_path is not None:\n            import sys\n            sys.path.append(sys_path)\n        try:\n            import mc\n        except ImportError:\n            raise ImportError('Please install memcached to enable MemcachedBackend.')\n\n        self.server_list_cfg = server_list_cfg\n        self.client_cfg = client_cfg\n        self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, self.client_cfg)\n        # mc.pyvector servers as a point which points to a memory cache\n        self._mc_buffer = mc.pyvector()\n\n    def get(self, filepath):\n        filepath = str(filepath)\n        import mc\n        self._client.Get(filepath, self._mc_buffer)\n        value_buf = mc.ConvertBuffer(self._mc_buffer)\n        return value_buf\n\n    def get_text(self, filepath):\n        raise NotImplementedError\n\n\nclass HardDiskBackend(BaseStorageBackend):\n    \"\"\"Raw hard disks storage backend.\"\"\"\n\n    def get(self, filepath):\n        filepath = str(filepath)\n        with open(filepath, 'rb') as f:\n            value_buf = f.read()\n        return value_buf\n\n    def get_text(self, filepath):\n        filepath = str(filepath)\n        with open(filepath, 'r') as f:\n            value_buf = f.read()\n        return value_buf\n\n\nclass LmdbBackend(BaseStorageBackend):\n    \"\"\"Lmdb storage backend.\n\n    Args:\n        db_paths (str | list[str]): Lmdb database paths.\n        client_keys (str | list[str]): Lmdb client keys. Default: 'default'.\n        readonly (bool, optional): Lmdb environment parameter. If True,\n            disallow any write operations. Default: True.\n        lock (bool, optional): Lmdb environment parameter. If False, when\n            concurrent access occurs, do not lock the database. Default: False.\n        readahead (bool, optional): Lmdb environment parameter. 
If False,\n            disable the OS filesystem readahead mechanism, which may improve\n            random read performance when a database is larger than RAM.\n            Default: False.\n\n    Attributes:\n        db_paths (list): Lmdb database path.\n        _client (list): A list of several lmdb envs.\n    \"\"\"\n\n    def __init__(self, db_paths, client_keys='default', readonly=True, lock=False, readahead=False, **kwargs):\n        try:\n            import lmdb\n        except ImportError:\n            raise ImportError('Please install lmdb to enable LmdbBackend.')\n\n        if isinstance(client_keys, str):\n            client_keys = [client_keys]\n\n        if isinstance(db_paths, list):\n            self.db_paths = [str(v) for v in db_paths]\n        elif isinstance(db_paths, str):\n            self.db_paths = [str(db_paths)]\n        assert len(client_keys) == len(self.db_paths), ('client_keys and db_paths should have the same length, '\n                                                        f'but received {len(client_keys)} and {len(self.db_paths)}.')\n\n        self._client = {}\n        for client, path in zip(client_keys, self.db_paths):\n            self._client[client] = lmdb.open(path, readonly=readonly, lock=lock, readahead=readahead, **kwargs)\n\n    def get(self, filepath, client_key):\n        \"\"\"Get values according to the filepath from one lmdb named client_key.\n\n        Args:\n            filepath (str | obj:`Path`): Here, filepath is the lmdb key.\n            client_key (str): Used for distinguishing differnet lmdb envs.\n        \"\"\"\n        filepath = str(filepath)\n        assert client_key in self._client, (f'client_key {client_key} is not ' 'in lmdb clients.')\n        client = self._client[client_key]\n        with client.begin(write=False) as txn:\n            value_buf = txn.get(filepath.encode('ascii'))\n        return value_buf\n\n    def get_text(self, filepath):\n        raise NotImplementedError\n\n\nclass FileClient(object):\n    \"\"\"A general file client to access files in different backend.\n\n    The client loads a file or text in a specified backend from its path\n    and return it as a binary file. it can also register other backend\n    accessor with a given name and backend class.\n\n    Attributes:\n        backend (str): The storage backend type. Options are \"disk\",\n            \"memcached\" and \"lmdb\".\n        client (:obj:`BaseStorageBackend`): The backend object.\n    \"\"\"\n\n    _backends = {\n        'disk': HardDiskBackend,\n        'memcached': MemcachedBackend,\n        'lmdb': LmdbBackend,\n    }\n\n    def __init__(self, backend='disk', **kwargs):\n        if backend not in self._backends:\n            raise ValueError(f'Backend {backend} is not supported. Currently supported ones'\n                             f' are {list(self._backends.keys())}')\n        self.backend = backend\n        self.client = self._backends[backend](**kwargs)\n\n    def get(self, filepath, client_key='default'):\n        # client_key is used only for lmdb, where different fileclients have\n        # different lmdb environments.\n        if self.backend == 'lmdb':\n            return self.client.get(filepath, client_key)\n        else:\n            return self.client.get(filepath)\n\n    def get_text(self, filepath):\n        return self.client.get_text(filepath)\n"
  },
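The same `FileClient.get()` call fans out to whichever backend was chosen at construction; only the lmdb backend consumes `client_key`. A sketch with illustrative paths:

```python
# Backend dispatch in FileClient: raw bytes from disk vs. a keyed lmdb env.
from basicsr.utils.file_client import FileClient

disk = FileClient('disk')
img_bytes = disk.get('datasets/example/0001.png')      # plain file read

# lmdb keys are image names without extension; client_key picks the env
lq = FileClient('lmdb', db_paths=['datasets/lq.lmdb'], client_keys=['lq'])
buf = lq.get('0001', client_key='lq')                  # txn.get(b'0001')
```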
  {
    "path": "basicsr/utils/img_util.py",
    "content": "import cv2\nimport math\nimport numpy as np\nfrom PIL import Image\nimport os\nimport torch\nfrom torchvision.utils import make_grid\n\n\ndef img2tensor(imgs, bgr2rgb=True, float32=True):\n    \"\"\"Numpy array to tensor.\n\n    Args:\n        imgs (list[ndarray] | ndarray): Input images.\n        bgr2rgb (bool): Whether to change bgr to rgb.\n        float32 (bool): Whether to change to float32.\n\n    Returns:\n        list[tensor] | tensor: Tensor images. If returned results only have\n            one element, just return tensor.\n    \"\"\"\n\n    def _totensor(img, bgr2rgb, float32):\n        if img.shape[2] == 3 and bgr2rgb:\n            if img.dtype == 'float64':\n                img = img.astype('float32')\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n        \n        img = torch.from_numpy(img.transpose(2, 0, 1))\n        if float32:\n            img = img.float()\n        return img\n\n    if isinstance(imgs, list):\n        return [_totensor(img, bgr2rgb, float32) for img in imgs]\n    else:\n        return _totensor(imgs, bgr2rgb, float32)\n\n\ndef tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)):\n    \"\"\"Convert torch Tensors into image numpy arrays.\n\n    After clamping to [min, max], values will be normalized to [0, 1].\n\n    Args:\n        tensor (Tensor or list[Tensor]): Accept shapes:\n            1) 4D mini-batch Tensor of shape (B x 3/1 x H x W);\n            2) 3D Tensor of shape (3/1 x H x W);\n            3) 2D Tensor of shape (H x W).\n            Tensor channel should be in RGB order.\n        rgb2bgr (bool): Whether to change rgb to bgr.\n        out_type (numpy type): output types. If ``np.uint8``, transform outputs\n            to uint8 type with range [0, 255]; otherwise, float type with\n            range [0, 1]. Default: ``np.uint8``.\n        min_max (tuple[int]): min and max values for clamp.\n\n    Returns:\n        (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of\n        shape (H x W). The channel order is BGR.\n    \"\"\"\n    if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):\n        raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')\n\n    if torch.is_tensor(tensor):\n        tensor = [tensor]\n    result = []\n    for _tensor in tensor:\n        _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max)\n        _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0])\n\n        n_dim = _tensor.dim()\n        if n_dim == 4:\n            img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy()\n            img_np = img_np.transpose(1, 2, 0)\n            if rgb2bgr:\n                img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)\n        elif n_dim == 3:\n            img_np = _tensor.numpy()\n            img_np = img_np.transpose(1, 2, 0)\n            if img_np.shape[2] == 1:  # gray image\n                img_np = np.squeeze(img_np, axis=2)\n            else:\n                if rgb2bgr:\n                    img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)\n        elif n_dim == 2:\n            img_np = _tensor.numpy()\n        else:\n            raise TypeError('Only support 4D, 3D or 2D tensor. 
' f'But received with dimension: {n_dim}')\n        if out_type == np.uint8:\n            # Unlike MATLAB, numpy.uint8() WILL NOT round by default.\n            img_np = (img_np * 255.0).round()\n        img_np = img_np.astype(out_type)\n        result.append(img_np)\n    if len(result) == 1:\n        result = result[0]\n    return result\n\n\ndef tensor2img_fast(tensor, rgb2bgr=True, min_max=(0, 1)):\n    \"\"\"This implementation is slightly faster than tensor2img.\n    It now only supports torch tensor with shape (1, c, h, w).\n\n    Args:\n        tensor (Tensor): Now only support torch tensor with (1, c, h, w).\n        rgb2bgr (bool): Whether to change rgb to bgr. Default: True.\n        min_max (tuple[int]): min and max values for clamp.\n    \"\"\"\n    output = tensor.squeeze(0).detach().clamp_(*min_max).permute(1, 2, 0)\n    output = (output - min_max[0]) / (min_max[1] - min_max[0]) * 255\n    output = output.type(torch.uint8).cpu().numpy()\n    if rgb2bgr:\n        output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR)\n    return output\n\n\ndef tensor2imgs(tensor, rgb2bgr=True, min_max=(0, 1)):\n    \"\"\"Convert a 4D torch tensor to a list of numpy images.\n\n    Args:\n        tensor (Tensor): A 4D torch tensor with shape (B, C, H, W).\n        rgb2bgr (bool): Whether to change rgb to bgr. Default: True.\n        min_max (tuple[int]): min and max values for clamp.\n\n    Returns:\n        list: A list of numpy arrays representing images.\n    \"\"\"\n    # check that the input is a 4D tensor\n    if tensor.dim() != 4:\n        raise ValueError(f\"Input tensor should be 4D (B, C, H, W), but got {tensor.dim()}D tensor.\")\n\n    num_images = tensor.size(0)\n    image_list = []\n\n    # iterate over every image in the batch\n    for i in range(num_images):\n        single_image_tensor = tensor[i].unsqueeze(0)  # extract a single image and add a batch dim to match the input expected by tensor2img_fast\n        single_image_np = tensor2img_fast(single_image_tensor, rgb2bgr=rgb2bgr, min_max=min_max)\n        image_list.append(single_image_np)\n\n    return image_list\n\n\ndef imfrombytes(content, flag='color', float32=False):\n    \"\"\"Read an image from bytes.\n\n    Args:\n        content (bytes): Image bytes got from files or other streams.\n        flag (str): Flags specifying the color type of a loaded image,\n            candidates are `color`, `grayscale` and `unchanged`.\n        float32 (bool): Whether to change to float32. If True, will also\n            normalize to [0, 1]. Default: False.\n\n    Returns:\n        ndarray: Loaded image array.\n    \"\"\"\n    img_np = np.frombuffer(content, np.uint8)\n    imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED}\n    img = cv2.imdecode(img_np, imread_flags[flag])\n    if float32:\n        img = img.astype(np.float32) / 255.\n    return img\n\n\ndef imwrite(img, file_path, params=None, auto_mkdir=True):\n    \"\"\"Write image to file.\n\n    Args:\n        img (ndarray): Image array to be written.\n        file_path (str): Image file path.\n        params (None or list): Same as opencv's :func:`imwrite` interface.\n        auto_mkdir (bool): If the parent folder of `file_path` does not exist,\n            whether to create it automatically.\n\n    Returns:\n        bool: Successful or not.\n    \"\"\"\n    if auto_mkdir:\n        dir_name = os.path.abspath(os.path.dirname(file_path))\n        os.makedirs(dir_name, exist_ok=True)\n    return cv2.imwrite(file_path, img, params)\n\ndef images_to_gif(image_list, output_path, duration=100, loop=0):\n    \"\"\"Concatenate a list of numpy.ndarray images into a GIF animation.\n\n    Args:\n        image_list (list): A list of numpy.ndarray objects holding the image data.\n        output_path (str): Path of the output GIF file.\n        duration (int): Display time of each frame in milliseconds. Default: 100.\n        loop (int): Number of times the GIF loops; 0 means loop forever. Default: 0.\n    \"\"\"\n    # make sure image_list is not empty\n    if not image_list:\n        print(\"The image list is empty; cannot create a GIF.\")\n        return\n\n    pil_images = []\n    for img in image_list:\n        # check whether the image is a single-channel grayscale image\n        if len(img.shape) == 2:\n            pil_img = Image.fromarray(img, mode='L')\n        else:\n            # images read by OpenCV are usually BGR while PIL expects RGB, so convert\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) if 'cv2' in globals() else img\n            pil_img = Image.fromarray(img)\n        pil_images.append(pil_img)\n\n    # save as GIF\n    pil_images[0].save(\n        output_path,\n        save_all=True,\n        append_images=pil_images[1:],\n        duration=duration,\n        loop=loop\n    )\n\ndef crop_border(imgs, crop_border):\n    \"\"\"Crop borders of images.\n\n    Args:\n        imgs (list[ndarray] | ndarray): Images with shape (h, w, c).\n        crop_border (int): Crop border for each end of height and width.\n\n    Returns:\n        list[ndarray]: Cropped images.\n    \"\"\"\n    if crop_border == 0:\n        return imgs\n    else:\n        if isinstance(imgs, list):\n            return [v[crop_border:-crop_border, crop_border:-crop_border, ...] for v in imgs]\n        else:\n            return imgs[crop_border:-crop_border, crop_border:-crop_border, ...]\n\n"
  },
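The common round trip with these helpers: decode to a float BGR image, convert to an RGB tensor for the network, then back to a uint8 BGR image for saving. A sketch assuming `input.png` exists next to the script:

```python
# file -> float image -> CHW RGB tensor -> HWC BGR uint8 -> disk
import cv2
from basicsr.utils import img2tensor, tensor2img, imwrite

img = cv2.imread('input.png').astype('float32') / 255.0  # HWC, BGR, [0, 1]
t = img2tensor(img, bgr2rgb=True, float32=True)           # CHW, RGB
out = tensor2img(t, rgb2bgr=True, min_max=(0, 1))         # HWC, BGR, uint8
imwrite(out, 'results/output.png')                        # auto-creates dir
```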
  {
    "path": "basicsr/utils/lmdb_util.py",
    "content": "import cv2\nimport lmdb\nimport sys\nfrom multiprocessing import Pool\nfrom os import path as osp\nfrom tqdm import tqdm\n\n\ndef make_lmdb_from_imgs(data_path,\n                        lmdb_path,\n                        img_path_list,\n                        keys,\n                        batch=5000,\n                        compress_level=1,\n                        multiprocessing_read=False,\n                        n_thread=40,\n                        map_size=None):\n    \"\"\"Make lmdb from images.\n\n    Contents of lmdb. The file structure is:\n    example.lmdb\n    ├── data.mdb\n    ├── lock.mdb\n    ├── meta_info.txt\n\n    The data.mdb and lock.mdb are standard lmdb files and you can refer to\n    https://lmdb.readthedocs.io/en/release/ for more details.\n\n    The meta_info.txt is a specified txt file to record the meta information\n    of our datasets. It will be automatically created when preparing\n    datasets by our provided dataset tools.\n    Each line in the txt file records 1)image name (with extension),\n    2)image shape, and 3)compression level, separated by a white space.\n\n    For example, the meta information could be:\n    `000_00000000.png (720,1280,3) 1`, which means:\n    1) image name (with extension): 000_00000000.png;\n    2) image shape: (720,1280,3);\n    3) compression level: 1\n\n    We use the image name without extension as the lmdb key.\n\n    If `multiprocessing_read` is True, it will read all the images to memory\n    using multiprocessing. Thus, your server needs to have enough memory.\n\n    Args:\n        data_path (str): Data path for reading images.\n        lmdb_path (str): Lmdb save path.\n        img_path_list (str): Image path list.\n        keys (str): Used for lmdb keys.\n        batch (int): After processing batch images, lmdb commits.\n            Default: 5000.\n        compress_level (int): Compress level when encoding images. Default: 1.\n        multiprocessing_read (bool): Whether use multiprocessing to read all\n            the images to memory. Default: False.\n        n_thread (int): For multiprocessing.\n        map_size (int | None): Map size for lmdb env. If None, use the\n            estimated size from images. Default: None\n    \"\"\"\n\n    assert len(img_path_list) == len(keys), ('img_path_list and keys should have the same length, '\n                                             f'but got {len(img_path_list)} and {len(keys)}')\n    print(f'Create lmdb for {data_path}, save to {lmdb_path}...')\n    print(f'Totoal images: {len(img_path_list)}')\n    if not lmdb_path.endswith('.lmdb'):\n        raise ValueError(\"lmdb_path must end with '.lmdb'.\")\n    if osp.exists(lmdb_path):\n        print(f'Folder {lmdb_path} already exists. 
Exit.')\n        sys.exit(1)\n\n    if multiprocessing_read:\n        # read all the images to memory (multiprocessing)\n        dataset = {}  # use dict to keep the order for multiprocessing\n        shapes = {}\n        print(f'Read images with multiprocessing, #thread: {n_thread} ...')\n        pbar = tqdm(total=len(img_path_list), unit='image')\n\n        def callback(arg):\n            \"\"\"get the image data and update pbar.\"\"\"\n            key, dataset[key], shapes[key] = arg\n            pbar.update(1)\n            pbar.set_description(f'Read {key}')\n\n        pool = Pool(n_thread)\n        for path, key in zip(img_path_list, keys):\n            pool.apply_async(read_img_worker, args=(osp.join(data_path, path), key, compress_level), callback=callback)\n        pool.close()\n        pool.join()\n        pbar.close()\n        print(f'Finish reading {len(img_path_list)} images.')\n\n    # create lmdb environment\n    if map_size is None:\n        # obtain data size for one image\n        img = cv2.imread(osp.join(data_path, img_path_list[0]), cv2.IMREAD_UNCHANGED)\n        _, img_byte = cv2.imencode('.png', img, [cv2.IMWRITE_PNG_COMPRESSION, compress_level])\n        data_size_per_img = img_byte.nbytes\n        print('Data size per image is: ', data_size_per_img)\n        data_size = data_size_per_img * len(img_path_list)\n        map_size = data_size * 10\n\n    env = lmdb.open(lmdb_path, map_size=map_size)\n\n    # write data to lmdb\n    pbar = tqdm(total=len(img_path_list), unit='chunk')\n    txn = env.begin(write=True)\n    txt_file = open(osp.join(lmdb_path, 'meta_info.txt'), 'w')\n    for idx, (path, key) in enumerate(zip(img_path_list, keys)):\n        pbar.update(1)\n        pbar.set_description(f'Write {key}')\n        key_byte = key.encode('ascii')\n        if multiprocessing_read:\n            img_byte = dataset[key]\n            h, w, c = shapes[key]\n        else:\n            _, img_byte, img_shape = read_img_worker(osp.join(data_path, path), key, compress_level)\n            h, w, c = img_shape\n\n        txn.put(key_byte, img_byte)\n        # write meta information\n        txt_file.write(f'{key}.png ({h},{w},{c}) {compress_level}\\n')\n        if idx % batch == 0:\n            txn.commit()\n            txn = env.begin(write=True)\n    pbar.close()\n    txn.commit()\n    env.close()\n    txt_file.close()\n    print('\\nFinish writing lmdb.')\n\n\ndef read_img_worker(path, key, compress_level):\n    \"\"\"Read image worker.\n\n    Args:\n        path (str): Image path.\n        key (str): Image key.\n        compress_level (int): Compress level when encoding images.\n\n    Returns:\n        str: Image key.\n        byte: Image byte.\n        tuple[int]: Image shape.\n    \"\"\"\n\n    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)\n    if img.ndim == 2:\n        h, w = img.shape\n        c = 1\n    else:\n        h, w, c = img.shape\n    _, img_byte = cv2.imencode('.png', img, [cv2.IMWRITE_PNG_COMPRESSION, compress_level])\n    return (key, img_byte, (h, w, c))\n\n\nclass LmdbMaker():\n    \"\"\"LMDB Maker.\n\n    Args:\n        lmdb_path (str): Lmdb save path.\n        map_size (int): Map size for lmdb env. Default: 1024 ** 4, 1TB.\n        batch (int): After processing batch images, lmdb commits.\n            Default: 5000.\n        compress_level (int): Compress level when encoding images. 
Default: 1.\n    \"\"\"\n\n    def __init__(self, lmdb_path, map_size=1024**4, batch=5000, compress_level=1):\n        if not lmdb_path.endswith('.lmdb'):\n            raise ValueError(\"lmdb_path must end with '.lmdb'.\")\n        if osp.exists(lmdb_path):\n            print(f'Folder {lmdb_path} already exists. Exit.')\n            sys.exit(1)\n\n        self.lmdb_path = lmdb_path\n        self.batch = batch\n        self.compress_level = compress_level\n        self.env = lmdb.open(lmdb_path, map_size=map_size)\n        self.txn = self.env.begin(write=True)\n        self.txt_file = open(osp.join(lmdb_path, 'meta_info.txt'), 'w')\n        self.counter = 0\n\n    def put(self, img_byte, key, img_shape):\n        self.counter += 1\n        key_byte = key.encode('ascii')\n        self.txn.put(key_byte, img_byte)\n        # write meta information\n        h, w, c = img_shape\n        self.txt_file.write(f'{key}.png ({h},{w},{c}) {self.compress_level}\\n')\n        if self.counter % self.batch == 0:\n            self.txn.commit()\n            self.txn = self.env.begin(write=True)\n\n    def close(self):\n        self.txn.commit()\n        self.env.close()\n        self.txt_file.close()\n"
  },
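`LmdbMaker` is the streaming counterpart of `make_lmdb_from_imgs()`: the caller encodes and `put()`s entries one at a time, and commits happen every `batch` writes. A sketch with illustrative paths:

```python
# Incrementally build an .lmdb with LmdbMaker + read_img_worker.
from basicsr.utils.lmdb_util import LmdbMaker, read_img_worker

maker = LmdbMaker('datasets/example.lmdb', batch=5000, compress_level=1)
for path, key in [('datasets/gt/0001.png', '0001')]:        # illustrative
    _, img_byte, shape = read_img_worker(path, key, compress_level=1)
    maker.put(img_byte, key, shape)    # commits every `batch` puts
maker.close()                          # final commit + closes meta_info.txt
```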
  {
    "path": "basicsr/utils/logger.py",
    "content": "import datetime\nimport logging\nimport time\n\nfrom .dist_util import get_dist_info, master_only\n\ninitialized_logger = {}\n\n\nclass MessageLogger():\n    \"\"\"Message logger for printing.\n    Args:\n        opt (dict): Config. It contains the following keys:\n            name (str): Exp name.\n            logger (dict): Contains 'print_freq' (str) for logger interval.\n            train (dict): Contains 'total_iter' (int) for total iters.\n            use_tb_logger (bool): Use tensorboard logger.\n        start_iter (int): Start iter. Default: 1.\n        tb_logger (obj:`tb_logger`): Tensorboard logger. Default： None.\n    \"\"\"\n\n    def __init__(self, opt, start_iter=1, tb_logger=None):\n        self.exp_name = opt['name']\n        self.interval = opt['logger']['print_freq']\n        self.start_iter = start_iter\n        self.max_iters = opt['train']['total_iter']\n        self.use_tb_logger = opt['logger']['use_tb_logger']\n        self.tb_logger = tb_logger\n        self.start_time = time.time()\n        self.logger = get_root_logger()\n\n    @master_only\n    def __call__(self, log_vars):\n        \"\"\"Format logging message.\n        Args:\n            log_vars (dict): It contains the following keys:\n                epoch (int): Epoch number.\n                iter (int): Current iter.\n                lrs (list): List for learning rates.\n                time (float): Iter time.\n                data_time (float): Data time for each iter.\n        \"\"\"\n        # epoch, iter, learning rates\n        epoch = log_vars.pop('epoch')\n        current_iter = log_vars.pop('iter')\n        lrs = log_vars.pop('lrs')\n\n        message = (f'[{self.exp_name[:5]}..][epoch:{epoch:3d}, ' f'iter:{current_iter:8,d}, lr:(')\n        for v in lrs:\n            message += f'{v:.3e},'\n        message += ')] '\n\n        # time and estimated time\n        if 'time' in log_vars.keys():\n            iter_time = log_vars.pop('time')\n            data_time = log_vars.pop('data_time')\n\n            total_time = time.time() - self.start_time\n            time_sec_avg = total_time / (current_iter - self.start_iter + 1)\n            eta_sec = time_sec_avg * (self.max_iters - current_iter - 1)\n            eta_str = str(datetime.timedelta(seconds=int(eta_sec)))\n            message += f'[eta: {eta_str}, '\n            message += f'time (data): {iter_time:.3f} ({data_time:.3f})] '\n\n        # other items, especially losses\n        for k, v in log_vars.items():\n            message += f'{k}: {v:.4e} '\n            # tensorboard logger\n            if self.use_tb_logger:\n                # if k.startswith('l_'):\n                #     self.tb_logger.add_scalar(f'losses/{k}', v, current_iter)\n                # else:\n                self.tb_logger.add_scalar(k, v, current_iter)\n        self.logger.info(message)\n\n\n@master_only\ndef init_tb_logger(log_dir):\n    from torch.utils.tensorboard import SummaryWriter\n    tb_logger = SummaryWriter(log_dir=log_dir)\n    return tb_logger\n\n\n@master_only\ndef init_wandb_logger(opt):\n    \"\"\"We now only use wandb to sync tensorboard log.\"\"\"\n    import wandb\n    logger = logging.getLogger('basicsr')\n\n    project = opt['logger']['wandb']['project']\n    resume_id = opt['logger']['wandb'].get('resume_id')\n    if resume_id:\n        wandb_id = resume_id\n        resume = 'allow'\n        logger.warning(f'Resume wandb logger with id={wandb_id}.')\n    else:\n        wandb_id = wandb.util.generate_id()\n        resume = 'never'\n\n    
wandb_mode = opt['logger']['wandb'].get('mode', 'offline')  # tree mode : offline online disabled\n    \n    wandb.init(id=wandb_id, resume=resume, name=opt['name'], config=opt, project=project, sync_tensorboard=True, mode=wandb_mode)\n\n    logger.info(f'Use wandb logger with id={wandb_id}; project={project}; mode: {wandb_mode}. ')\n\n\ndef get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):\n    \"\"\"Get the root logger.\n    The logger will be initialized if it has not been initialized. By default a\n    StreamHandler will be added. If `log_file` is specified, a FileHandler will\n    also be added.\n    Args:\n        logger_name (str): root logger name. Default: 'basicsr'.\n        log_file (str | None): The log filename. If specified, a FileHandler\n            will be added to the root logger.\n        log_level (int): The root logger level. Note that only the process of\n            rank 0 is affected, while other processes will set the level to\n            \"Error\" and be silent most of the time.\n    Returns:\n        logging.Logger: The root logger.\n    \"\"\"\n    logger = logging.getLogger(logger_name)\n    # if the logger has been initialized, just return it\n    if logger_name in initialized_logger:\n        return logger\n\n    format_str = '%(asctime)s %(levelname)s: %(message)s'\n    stream_handler = logging.StreamHandler()\n    stream_handler.setFormatter(logging.Formatter(format_str))\n    logger.addHandler(stream_handler)\n    logger.propagate = False\n    rank, _ = get_dist_info()\n    if rank != 0:\n        logger.setLevel('ERROR')\n    elif log_file is not None:\n        logger.setLevel(log_level)\n        # add file handler\n        # file_handler = logging.FileHandler(log_file, 'w')\n        file_handler = logging.FileHandler(log_file, 'a') #Shangchen: keep the previous log\n        file_handler.setFormatter(logging.Formatter(format_str))\n        file_handler.setLevel(log_level)\n        logger.addHandler(file_handler)\n    initialized_logger[logger_name] = True\n    return logger\n\n\ndef get_env_info():\n    \"\"\"Get environment information.\n    Currently, only log the software version.\n    \"\"\"\n    import torch\n    import torchvision\n\n    from basicsr.version import __version__\n    msg = r\"\"\"\n                ____                _       _____  ____\n               / __ ) ____ _ _____ (_)_____/ ___/ / __ \\\n              / __  |/ __ `// ___// // ___/\\__ \\ / /_/ /\n             / /_/ // /_/ /(__  )/ // /__ ___/ // _, _/\n            /_____/ \\__,_//____//_/ \\___//____//_/ |_|\n     ______                   __   __                 __      __\n    / ____/____   ____   ____/ /  / /   __  __ _____ / /__   / /\n   / / __ / __ \\ / __ \\ / __  /  / /   / / / // ___// //_/  / /\n  / /_/ // /_/ // /_/ // /_/ /  / /___/ /_/ // /__ / /<    /_/\n  \\____/ \\____/ \\____/ \\____/  /_____/\\____/ \\___//_/|_|  (_)\n    \"\"\"\n    msg += ('\\nVersion Information: '\n            f'\\n\\tBasicSR: {__version__}'\n            f'\\n\\tPyTorch: {torch.__version__}'\n            f'\\n\\tTorchVision: {torchvision.__version__}')\n    return msg"
  },
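`get_root_logger()` memoizes by logger name and opens the log file in append mode, so repeated calls anywhere in the code base return the same configured logger. A minimal sketch; the file name is illustrative:

```python
# The logger is created once and cached; later calls reuse it.
import logging
from basicsr.utils.logger import get_root_logger

logger = get_root_logger(logger_name='basicsr',
                         log_level=logging.INFO,
                         log_file='train_demo.log')   # appended, not truncated
logger.info('hello')                  # stream + file on rank 0, ERROR-only elsewhere
assert get_root_logger() is logger    # cached by name in initialized_logger
```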
  {
    "path": "basicsr/utils/matlab_functions.py",
    "content": "import math\nimport numpy as np\nimport torch\n\n\ndef cubic(x):\n    \"\"\"cubic function used for calculate_weights_indices.\"\"\"\n    absx = torch.abs(x)\n    absx2 = absx**2\n    absx3 = absx**3\n    return (1.5 * absx3 - 2.5 * absx2 + 1) * (\n        (absx <= 1).type_as(absx)) + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * (((absx > 1) *\n                                                                                     (absx <= 2)).type_as(absx))\n\n\ndef calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):\n    \"\"\"Calculate weights and indices, used for imresize function.\n\n    Args:\n        in_length (int): Input length.\n        out_length (int): Output length.\n        scale (float): Scale factor.\n        kernel_width (int): Kernel width.\n        antialisaing (bool): Whether to apply anti-aliasing when downsampling.\n    \"\"\"\n\n    if (scale < 1) and antialiasing:\n        # Use a modified kernel (larger kernel width) to simultaneously\n        # interpolate and antialias\n        kernel_width = kernel_width / scale\n\n    # Output-space coordinates\n    x = torch.linspace(1, out_length, out_length)\n\n    # Input-space coordinates. Calculate the inverse mapping such that 0.5\n    # in output space maps to 0.5 in input space, and 0.5 + scale in output\n    # space maps to 1.5 in input space.\n    u = x / scale + 0.5 * (1 - 1 / scale)\n\n    # What is the left-most pixel that can be involved in the computation?\n    left = torch.floor(u - kernel_width / 2)\n\n    # What is the maximum number of pixels that can be involved in the\n    # computation?  Note: it's OK to use an extra pixel here; if the\n    # corresponding weights are all zero, it will be eliminated at the end\n    # of this function.\n    p = math.ceil(kernel_width) + 2\n\n    # The indices of the input pixels involved in computing the k-th output\n    # pixel are in row k of the indices matrix.\n    indices = left.view(out_length, 1).expand(out_length, p) + torch.linspace(0, p - 1, p).view(1, p).expand(\n        out_length, p)\n\n    # The weights used to compute the k-th output pixel are in row k of the\n    # weights matrix.\n    distance_to_center = u.view(out_length, 1).expand(out_length, p) - indices\n\n    # apply cubic kernel\n    if (scale < 1) and antialiasing:\n        weights = scale * cubic(distance_to_center * scale)\n    else:\n        weights = cubic(distance_to_center)\n\n    # Normalize the weights matrix so that each row sums to 1.\n    weights_sum = torch.sum(weights, 1).view(out_length, 1)\n    weights = weights / weights_sum.expand(out_length, p)\n\n    # If a column in weights is all zero, get rid of it. 
Only consider the\n    # first and last columns.\n    weights_zero_tmp = torch.sum((weights == 0), 0)\n    if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):\n        indices = indices.narrow(1, 1, p - 2)\n        weights = weights.narrow(1, 1, p - 2)\n    if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):\n        indices = indices.narrow(1, 0, p - 2)\n        weights = weights.narrow(1, 0, p - 2)\n    weights = weights.contiguous()\n    indices = indices.contiguous()\n    sym_len_s = -indices.min() + 1\n    sym_len_e = indices.max() - in_length\n    indices = indices + sym_len_s - 1\n    return weights, indices, int(sym_len_s), int(sym_len_e)\n\n\n@torch.no_grad()\ndef imresize(img, scale, antialiasing=True):\n    \"\"\"imresize function that behaves the same as MATLAB's imresize.\n\n    It now only supports bicubic.\n    The same scale applies for both height and width.\n\n    Args:\n        img (Tensor | Numpy array):\n            Tensor: Input image with shape (c, h, w), [0, 1] range.\n            Numpy: Input image with shape (h, w, c), [0, 1] range.\n        scale (float): Scale factor. The same scale applies for both height\n            and width.\n        antialiasing (bool): Whether to apply anti-aliasing when downsampling.\n            Default: True.\n\n    Returns:\n        Tensor | Numpy array: Output image with the same layout and type as\n            the input, [0, 1] range, without rounding.\n    \"\"\"\n    if type(img).__module__ == np.__name__:  # numpy type\n        numpy_type = True\n        img = torch.from_numpy(img.transpose(2, 0, 1)).float()\n    else:\n        numpy_type = False\n\n    in_c, in_h, in_w = img.size()\n    out_h, out_w = math.ceil(in_h * scale), math.ceil(in_w * scale)\n    kernel_width = 4\n    kernel = 'cubic'\n\n    # get weights and indices\n    weights_h, indices_h, sym_len_hs, sym_len_he = calculate_weights_indices(in_h, out_h, scale, kernel, kernel_width,\n                                                                             antialiasing)\n    weights_w, indices_w, sym_len_ws, sym_len_we = calculate_weights_indices(in_w, out_w, scale, kernel, kernel_width,\n                                                                             antialiasing)\n    # process H dimension\n    # symmetric copying\n    img_aug = torch.FloatTensor(in_c, in_h + sym_len_hs + sym_len_he, in_w)\n    img_aug.narrow(1, sym_len_hs, in_h).copy_(img)\n\n    sym_patch = img[:, :sym_len_hs, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    img_aug.narrow(1, 0, sym_len_hs).copy_(sym_patch_inv)\n\n    sym_patch = img[:, -sym_len_he:, :]\n    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(1, inv_idx)\n    img_aug.narrow(1, sym_len_hs + in_h, sym_len_he).copy_(sym_patch_inv)\n\n    out_1 = torch.FloatTensor(in_c, out_h, in_w)\n    kernel_width = weights_h.size(1)\n    for i in range(out_h):\n        idx = int(indices_h[i][0])\n        for j in range(in_c):\n            out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_h[i])\n\n    # process W dimension\n    # symmetric copying\n    out_1_aug = torch.FloatTensor(in_c, out_h, in_w + sym_len_ws + sym_len_we)\n    out_1_aug.narrow(2, sym_len_ws, in_w).copy_(out_1)\n\n    sym_patch = out_1[:, :, :sym_len_ws]\n    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(2, inv_idx)\n    out_1_aug.narrow(2, 0, sym_len_ws).copy_(sym_patch_inv)\n\n    sym_patch = out_1[:, :, 
-sym_len_we:]\n    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()\n    sym_patch_inv = sym_patch.index_select(2, inv_idx)\n    out_1_aug.narrow(2, sym_len_ws + in_w, sym_len_we).copy_(sym_patch_inv)\n\n    out_2 = torch.FloatTensor(in_c, out_h, out_w)\n    kernel_width = weights_w.size(1)\n    for i in range(out_w):\n        idx = int(indices_w[i][0])\n        for j in range(in_c):\n            out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_w[i])\n\n    if numpy_type:\n        out_2 = out_2.numpy().transpose(1, 2, 0)\n    return out_2\n\n\ndef rgb2ycbcr(img, y_only=False):\n    \"\"\"Convert an RGB image to a YCbCr image.\n\n    This function produces the same results as Matlab's `rgb2ycbcr` function.\n    It implements the ITU-R BT.601 conversion for standard-definition\n    television. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.\n\n    It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`.\n    In OpenCV, it implements a JPEG conversion. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.\n\n    Args:\n        img (ndarray): The input image. It accepts:\n            1. np.uint8 type with range [0, 255];\n            2. np.float32 type with range [0, 1].\n        y_only (bool): Whether to only return Y channel. Default: False.\n\n    Returns:\n        ndarray: The converted YCbCr image. The output image has the same type\n            and range as the input image.\n    \"\"\"\n    img_type = img.dtype\n    img = _convert_input_type_range(img)\n    if y_only:\n        out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0\n    else:\n        out_img = np.matmul(\n            img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]]) + [16, 128, 128]\n    out_img = _convert_output_type_range(out_img, img_type)\n    return out_img\n\n\ndef bgr2ycbcr(img, y_only=False):\n    \"\"\"Convert a BGR image to a YCbCr image.\n\n    The BGR version of rgb2ycbcr.\n    It implements the ITU-R BT.601 conversion for standard-definition\n    television. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.\n\n    It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.\n    In OpenCV, it implements a JPEG conversion. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.\n\n    Args:\n        img (ndarray): The input image. It accepts:\n            1. np.uint8 type with range [0, 255];\n            2. np.float32 type with range [0, 1].\n        y_only (bool): Whether to only return Y channel. Default: False.\n\n    Returns:\n        ndarray: The converted YCbCr image. The output image has the same type\n            and range as the input image.\n    \"\"\"\n    img_type = img.dtype\n    img = _convert_input_type_range(img)\n    if y_only:\n        out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0\n    else:\n        out_img = np.matmul(\n            img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128]\n    out_img = _convert_output_type_range(out_img, img_type)\n    return out_img\n\n\ndef ycbcr2rgb(img):\n    \"\"\"Convert a YCbCr image to an RGB image.\n\n    This function produces the same results as Matlab's ycbcr2rgb function.\n    It implements the ITU-R BT.601 conversion for standard-definition\n    television. 
See more details in\n    https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.\n\n    It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`.\n    In OpenCV, it implements a JPEG conversion. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.\n\n    Args:\n        img (ndarray): The input image. It accepts:\n            1. np.uint8 type with range [0, 255];\n            2. np.float32 type with range [0, 1].\n\n    Returns:\n        ndarray: The converted RGB image. The output image has the same type\n            and range as the input image.\n    \"\"\"\n    img_type = img.dtype\n    img = _convert_input_type_range(img) * 255\n    out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],\n                              [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]  # noqa: E126\n    out_img = _convert_output_type_range(out_img, img_type)\n    return out_img\n\n\ndef ycbcr2bgr(img):\n    \"\"\"Convert a YCbCr image to a BGR image.\n\n    The BGR version of ycbcr2rgb.\n    It implements the ITU-R BT.601 conversion for standard-definition\n    television. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.\n\n    It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`.\n    In OpenCV, it implements a JPEG conversion. See more details in\n    https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.\n\n    Args:\n        img (ndarray): The input image. It accepts:\n            1. np.uint8 type with range [0, 255];\n            2. np.float32 type with range [0, 1].\n\n    Returns:\n        ndarray: The converted BGR image. The output image has the same type\n            and range as the input image.\n    \"\"\"\n    img_type = img.dtype\n    img = _convert_input_type_range(img) * 255\n    out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0.00791071, -0.00153632, 0],\n                              [0, -0.00318811, 0.00625893]]) * 255.0 + [-276.836, 135.576, -222.921]  # noqa: E126\n    out_img = _convert_output_type_range(out_img, img_type)\n    return out_img\n\n\ndef _convert_input_type_range(img):\n    \"\"\"Convert the type and range of the input image.\n\n    It converts the input image to np.float32 type and range of [0, 1].\n    It is mainly used for pre-processing the input image in colorspace\n    conversion functions such as rgb2ycbcr and ycbcr2rgb.\n\n    Args:\n        img (ndarray): The input image. It accepts:\n            1. np.uint8 type with range [0, 255];\n            2. np.float32 type with range [0, 1].\n\n    Returns:\n        (ndarray): The converted image with type of np.float32 and range of\n            [0, 1].\n    \"\"\"\n    img_type = img.dtype\n    img = img.astype(np.float32)\n    if img_type == np.float32:\n        pass\n    elif img_type == np.uint8:\n        img /= 255.\n    else:\n        raise TypeError('The img type should be np.float32 or np.uint8, ' f'but got {img_type}')\n    return img\n\n\ndef _convert_output_type_range(img, dst_type):\n    \"\"\"Convert the type and range of the image according to dst_type.\n\n    It converts the image to the desired type and range. If `dst_type` is np.uint8,\n    images will be converted to np.uint8 type with range [0, 255]. 
If\n    `dst_type` is np.float32, it converts the image to np.float32 type with\n    range [0, 1].\n    It is mainly used for post-processing images in colorspace conversion\n    functions such as rgb2ycbcr and ycbcr2rgb.\n\n    Args:\n        img (ndarray): The image to be converted with np.float32 type and\n            range [0, 255].\n        dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it\n            converts the image to np.uint8 type with range [0, 255]. If\n            dst_type is np.float32, it converts the image to np.float32 type\n            with range [0, 1].\n\n    Returns:\n        (ndarray): The converted image with the desired type and range.\n    \"\"\"\n    if dst_type not in (np.uint8, np.float32):\n        raise TypeError('The dst_type should be np.float32 or np.uint8, ' f'but got {dst_type}')\n    if dst_type == np.uint8:\n        img = img.round()\n    else:\n        img /= 255.\n    return img.astype(dst_type)\n"
  },
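  {
    "path": "basicsr/utils/matlab_functions_example.py",
    "content": "\"\"\"Usage sketch for basicsr.utils.matlab_functions.\n\nIllustrative example only; this file is not part of the original release and\nthe image shapes and scale factor are arbitrary assumptions.\n\"\"\"\nimport numpy as np\n\nfrom basicsr.utils.matlab_functions import imresize, rgb2ycbcr\n\n# imresize accepts an (h, w, c) numpy array in [0, 1] and returns the same\n# layout; 64x48 is downscaled by 0.5 with MATLAB-compatible bicubic.\nimg = np.random.rand(64, 48, 3).astype(np.float32)\nsmall = imresize(img, 0.5, antialiasing=True)\nprint(small.shape)  # (32, 24, 3)\n\n# rgb2ycbcr keeps the input dtype and range; y_only=True returns only the Y\n# channel, matching MATLAB rather than OpenCV conventions.\ny = rgb2ycbcr(img, y_only=True)\nprint(y.shape, y.dtype)  # (64, 48) float32\n"
  },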
  {
    "path": "basicsr/utils/misc.py",
    "content": "import os\nimport re\nimport random\nimport time\nimport torch\nimport numpy as np\nfrom os import path as osp\n\nfrom .dist_util import master_only\nfrom .logger import get_root_logger\n\nIS_HIGH_VERSION = [int(m) for m in list(re.findall(r\"^([0-9]+)\\.([0-9]+)\\.([0-9]+)([^0-9][a-zA-Z0-9]*)?(\\+git.*)?$\",\\\n    torch.__version__)[0][:3])] >= [1, 12, 0]\n\ndef gpu_is_available():\n    if IS_HIGH_VERSION:\n        if torch.backends.mps.is_available():\n            return True\n    return True if torch.cuda.is_available() and torch.backends.cudnn.is_available() else False\n\ndef get_device(gpu_id=None):\n    if gpu_id is None:\n        gpu_str = ''\n    elif isinstance(gpu_id, int):\n        gpu_str = f':{gpu_id}'\n    else:\n        raise TypeError('Input should be int value.')\n\n    if IS_HIGH_VERSION:\n        if torch.backends.mps.is_available():\n            return torch.device('mps'+gpu_str)\n    return torch.device('cuda'+gpu_str if torch.cuda.is_available() and torch.backends.cudnn.is_available() else 'cpu')\n\n\ndef set_random_seed(seed):\n    \"\"\"Set random seeds.\"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed(seed)\n    torch.cuda.manual_seed_all(seed)\n\n\ndef get_time_str():\n    return time.strftime('%Y%m%d_%H%M%S', time.localtime())\n\n\ndef mkdir_and_rename(path):\n    \"\"\"mkdirs. If path exists, rename it with timestamp and create a new one.\n\n    Args:\n        path (str): Folder path.\n    \"\"\"\n    if osp.exists(path):\n        new_name = path + '_archived_' + get_time_str()\n        print(f'Path already exists. Rename it to {new_name}', flush=True)\n        os.rename(path, new_name)\n    os.makedirs(path, exist_ok=True)\n\n\n@master_only\ndef make_exp_dirs(opt):\n    \"\"\"Make dirs for experiments.\"\"\"\n    path_opt = opt['path'].copy()\n    if opt['is_train']:\n        mkdir_and_rename(path_opt.pop('experiments_root'))\n    else:\n        mkdir_and_rename(path_opt.pop('results_root'))\n    for key, path in path_opt.items():\n        if ('strict_load' not in key) and ('pretrain_network' not in key) and ('resume' not in key):\n            os.makedirs(path, exist_ok=True)\n\n\ndef scandir(dir_path, suffix=None, recursive=False, full_path=False):\n    \"\"\"Scan a directory to find the interested files.\n\n    Args:\n        dir_path (str): Path of the directory.\n        suffix (str | tuple(str), optional): File suffix that we are\n            interested in. Default: None.\n        recursive (bool, optional): If set to True, recursively scan the\n            directory. 
Default: False.\n        full_path (bool, optional): If set to True, include the dir_path.\n            Default: False.\n\n    Returns:\n        A generator for all the files of interest, with relative paths.\n    \"\"\"\n\n    if (suffix is not None) and not isinstance(suffix, (str, tuple)):\n        raise TypeError('\"suffix\" must be a string or tuple of strings')\n\n    root = dir_path\n\n    def _scandir(dir_path, suffix, recursive):\n        for entry in os.scandir(dir_path):\n            if not entry.name.startswith('.') and entry.is_file():\n                if full_path:\n                    return_path = entry.path\n                else:\n                    return_path = osp.relpath(entry.path, root)\n\n                if suffix is None:\n                    yield return_path\n                elif return_path.endswith(suffix):\n                    yield return_path\n            else:\n                if recursive:\n                    yield from _scandir(entry.path, suffix=suffix, recursive=recursive)\n                else:\n                    continue\n\n    return _scandir(dir_path, suffix=suffix, recursive=recursive)\n\n\ndef check_resume(opt, resume_iter):\n    \"\"\"Check resume states and pretrain_network paths.\n\n    Args:\n        opt (dict): Options.\n        resume_iter (int): Resume iteration.\n    \"\"\"\n    logger = get_root_logger()\n    if opt['path']['resume_state']:\n        # get all the networks\n        networks = [key for key in opt.keys() if key.startswith('network_')]\n        flag_pretrain = False\n        for network in networks:\n            if opt['path'].get(f'pretrain_{network}') is not None:\n                flag_pretrain = True\n        if flag_pretrain:\n            logger.warning('pretrain_network path will be ignored during resuming.')\n        # set pretrained model paths\n        for network in networks:\n            name = f'pretrain_{network}'\n            basename = network.replace('network_', '')\n            if opt['path'].get('ignore_resume_networks') is None or (basename\n                                                                     not in opt['path']['ignore_resume_networks']):\n                opt['path'][name] = osp.join(opt['path']['models'], f'net_{basename}_{resume_iter}.pth')\n                logger.info(f\"Set {name} to {opt['path'][name]}\")\n\n\ndef sizeof_fmt(size, suffix='B'):\n    \"\"\"Get human readable file size.\n\n    Args:\n        size (int): File size.\n        suffix (str): Suffix. Default: 'B'.\n\n    Returns:\n        str: Formatted file size.\n    \"\"\"\n    for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:\n        if abs(size) < 1024.0:\n            return f'{size:3.1f} {unit}{suffix}'\n        size /= 1024.0\n    return f'{size:3.1f} Y{suffix}'\n"
  },
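  {
    "path": "basicsr/utils/misc_example.py",
    "content": "\"\"\"Usage sketch for helpers in basicsr.utils.misc.\n\nIllustrative example only; this file is not part of the original release and\nthe scanned directory is an arbitrary assumption.\n\"\"\"\nfrom basicsr.utils.misc import get_device, scandir, set_random_seed, sizeof_fmt\n\n# Seed the python, numpy and torch RNGs in one call for reproducibility.\nset_random_seed(42)\n\n# scandir yields relative paths of files matching the suffix.\nfor path in scandir('basicsr/utils', suffix='.py', recursive=False):\n    print(path)\n\n# sizeof_fmt renders byte counts with binary (1024-based) units.\nprint(sizeof_fmt(3 * 1024 ** 2))  # '3.0 MB'\n\n# get_device resolves to mps, cuda, or cpu depending on availability.\nprint(get_device())\n"
  },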
  {
    "path": "basicsr/utils/options.py",
    "content": "import yaml\nimport time\nfrom collections import OrderedDict\nfrom os import path as osp\nfrom basicsr.utils.misc import get_time_str\n\ndef ordered_yaml():\n    \"\"\"Support OrderedDict for yaml.\n\n    Returns:\n        yaml Loader and Dumper.\n    \"\"\"\n    try:\n        from yaml import CDumper as Dumper\n        from yaml import CLoader as Loader\n    except ImportError:\n        from yaml import Dumper, Loader\n\n    _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG\n\n    def dict_representer(dumper, data):\n        return dumper.represent_dict(data.items())\n\n    def dict_constructor(loader, node):\n        return OrderedDict(loader.construct_pairs(node))\n\n    Dumper.add_representer(OrderedDict, dict_representer)\n    Loader.add_constructor(_mapping_tag, dict_constructor)\n    return Loader, Dumper\n\n\ndef parse(opt_path, root_path, is_train=True):\n    \"\"\"Parse option file.\n\n    Args:\n        opt_path (str): Option file path.\n        is_train (str): Indicate whether in training or not. Default: True.\n\n    Returns:\n        (dict): Options.\n    \"\"\"\n    with open(opt_path, mode='r') as f:\n        Loader, _ = ordered_yaml()\n        opt = yaml.load(f, Loader=Loader)\n\n    opt['is_train'] = is_train\n\n    # opt['name'] = f\"{get_time_str()}_{opt['name']}\"\n    if opt['path'].get('resume_state', None): # Shangchen added\n        resume_state_path = opt['path'].get('resume_state')\n        opt['name'] = resume_state_path.split(\"/\")[-3]\n    else:\n        opt['name'] = f\"{get_time_str()}_{opt['name']}\"\n\n\n    # datasets\n    for phase, dataset in opt['datasets'].items():\n        # for several datasets, e.g., test_1, test_2\n        phase = phase.split('_')[0]\n        dataset['phase'] = phase\n        if 'scale' in opt:\n            dataset['scale'] = opt['scale']\n        if dataset.get('dataroot_gt') is not None:\n            dataset['dataroot_gt'] = osp.expanduser(dataset['dataroot_gt'])\n        if dataset.get('dataroot_lq') is not None:\n            dataset['dataroot_lq'] = osp.expanduser(dataset['dataroot_lq'])\n\n    # paths\n    for key, val in opt['path'].items():\n        if (val is not None) and ('resume_state' in key or 'pretrain_network' in key):\n            opt['path'][key] = osp.expanduser(val)\n\n    if is_train:\n        experiments_root = osp.join(root_path, 'experiments', opt['name'])\n        opt['path']['experiments_root'] = experiments_root\n        opt['path']['models'] = osp.join(experiments_root, 'models')\n        opt['path']['training_states'] = osp.join(experiments_root, 'training_states')\n        opt['path']['log'] = experiments_root\n        opt['path']['visualization'] = osp.join(experiments_root, 'visualization')\n\n    else:  # test\n        results_root = osp.join(root_path, 'results', opt['name'])\n        opt['path']['results_root'] = results_root\n        opt['path']['log'] = results_root\n        opt['path']['visualization'] = osp.join(results_root, 'visualization')\n\n    return opt\n\n\ndef dict2str(opt, indent_level=1):\n    \"\"\"dict to string for printing options.\n\n    Args:\n        opt (dict): Option dict.\n        indent_level (int): Indent level. 
Default: 1.\n\n    Returns:\n        (str): Option string for printing.\n    \"\"\"\n    msg = '\\n'\n    for k, v in opt.items():\n        if isinstance(v, dict):\n            msg += ' ' * (indent_level * 2) + k + ':['\n            msg += dict2str(v, indent_level + 1)\n            msg += ' ' * (indent_level * 2) + ']\\n'\n        else:\n            msg += ' ' * (indent_level * 2) + k + ': ' + str(v) + '\\n'\n    return msg\n"
  },
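  {
    "path": "basicsr/utils/options_example.py",
    "content": "\"\"\"Usage sketch for basicsr.utils.options.\n\nIllustrative example only; this file is not part of the original release and\nthe option values are made up. parse() additionally expects a real YAML file\nwith name/path/datasets keys, so only ordered_yaml() and dict2str() are\nexercised here.\n\"\"\"\nfrom collections import OrderedDict\n\nimport yaml\n\nfrom basicsr.utils.options import dict2str, ordered_yaml\n\n# ordered_yaml returns a Loader that parses mappings as OrderedDict, so\n# config keys keep their file order.\nLoader, _ = ordered_yaml()\nopt = yaml.load('name: demo', Loader=Loader)\nassert isinstance(opt, OrderedDict)\n\n# dict2str pretty-prints nested option dicts with two-space indents.\nopt['train'] = OrderedDict([('lr', 0.0001), ('batch_size', 4)])\nprint(dict2str(opt))\n"
  },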
  {
    "path": "basicsr/utils/realesrgan_utils.py",
    "content": "import cv2\nimport math\nimport numpy as np\nimport os\nimport queue\nimport threading\nimport torch\nfrom torch.nn import functional as F\nfrom basicsr.utils.download_util import load_file_from_url\nfrom basicsr.utils.misc import get_device\n\n# ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nclass RealESRGANer():\n    \"\"\"A helper class for upsampling images with RealESRGAN.\n\n    Args:\n        scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.\n        model_path (str): The path to the pretrained model. It can be urls (will first download it automatically).\n        model (nn.Module): The defined network. Default: None.\n        tile (int): As too large images result in the out of GPU memory issue, so this tile option will first crop\n            input images into tiles, and then process each of them. Finally, they will be merged into one image.\n            0 denotes for do not use tile. Default: 0.\n        tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10.\n        pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.\n        half (float): Whether to use half precision during inference. Default: False.\n    \"\"\"\n\n    def __init__(self,\n                 scale,\n                 model_path,\n                 model=None,\n                 tile=0,\n                 tile_pad=10,\n                 pre_pad=10,\n                 half=False,\n                 device=None,\n                 gpu_id=None):\n        self.scale = scale\n        self.tile_size = tile\n        self.tile_pad = tile_pad\n        self.pre_pad = pre_pad\n        self.mod_scale = None\n        self.half = half\n\n        # initialize model\n        # if gpu_id:\n        #     self.device = torch.device(\n        #         f'cuda:{gpu_id}' if torch.cuda.is_available() else 'cpu') if device is None else device\n        # else:\n        #     self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device is None else device\n\n        self.device = get_device(gpu_id) if device is None else device\n        \n        # if the model_path starts with https, it will first download models to the folder: realesrgan/weights\n        if model_path.startswith('https://'):\n            model_path = load_file_from_url(\n                url=model_path, model_dir=os.path.join('weights/realesrgan'), progress=True, file_name=None)\n        loadnet = torch.load(model_path, map_location=torch.device('cpu'))\n        # prefer to use params_ema\n        if 'params_ema' in loadnet:\n            keyname = 'params_ema'\n        else:\n            keyname = 'params'\n        model.load_state_dict(loadnet[keyname], strict=True)\n        model.eval()\n        self.model = model.to(self.device)\n        if self.half:\n            self.model = self.model.half()\n\n    def pre_process(self, img):\n        \"\"\"Pre-process, such as pre-pad and mod pad, so that the images can be divisible\n        \"\"\"\n        img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()\n        self.img = img.unsqueeze(0).to(self.device)\n        if self.half:\n            self.img = self.img.half()\n\n        # pre_pad\n        if self.pre_pad != 0:\n            self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')\n        # mod pad for divisible borders\n        if self.scale == 2:\n            self.mod_scale = 2\n        elif self.scale == 1:\n            self.mod_scale = 4\n        if 
self.mod_scale is not None:\n            self.mod_pad_h, self.mod_pad_w = 0, 0\n            _, _, h, w = self.img.size()\n            if (h % self.mod_scale != 0):\n                self.mod_pad_h = (self.mod_scale - h % self.mod_scale)\n            if (w % self.mod_scale != 0):\n                self.mod_pad_w = (self.mod_scale - w % self.mod_scale)\n            self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')\n\n    def process(self):\n        # model inference\n        self.output = self.model(self.img)\n\n    def tile_process(self):\n        \"\"\"It will first crop input images into tiles, and then process each tile.\n        Finally, all the processed tiles are merged into one image.\n\n        Modified from: https://github.com/ata4/esrgan-launcher\n        \"\"\"\n        batch, channel, height, width = self.img.shape\n        output_height = height * self.scale\n        output_width = width * self.scale\n        output_shape = (batch, channel, output_height, output_width)\n\n        # start with black image\n        self.output = self.img.new_zeros(output_shape)\n        tiles_x = math.ceil(width / self.tile_size)\n        tiles_y = math.ceil(height / self.tile_size)\n\n        # loop over all tiles\n        for y in range(tiles_y):\n            for x in range(tiles_x):\n                # extract tile from input image\n                ofs_x = x * self.tile_size\n                ofs_y = y * self.tile_size\n                # input tile area on total image\n                input_start_x = ofs_x\n                input_end_x = min(ofs_x + self.tile_size, width)\n                input_start_y = ofs_y\n                input_end_y = min(ofs_y + self.tile_size, height)\n\n                # input tile area on total image with padding\n                input_start_x_pad = max(input_start_x - self.tile_pad, 0)\n                input_end_x_pad = min(input_end_x + self.tile_pad, width)\n                input_start_y_pad = max(input_start_y - self.tile_pad, 0)\n                input_end_y_pad = min(input_end_y + self.tile_pad, height)\n\n                # input tile dimensions\n                input_tile_width = input_end_x - input_start_x\n                input_tile_height = input_end_y - input_start_y\n                tile_idx = y * tiles_x + x + 1\n                input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]\n\n                # upscale tile\n                try:\n                    with torch.no_grad():\n                        output_tile = self.model(input_tile)\n                except RuntimeError as error:\n                    print('Error', error)\n                # print(f'\\tTile {tile_idx}/{tiles_x * tiles_y}')\n\n                # output tile area on total image\n                output_start_x = input_start_x * self.scale\n                output_end_x = input_end_x * self.scale\n                output_start_y = input_start_y * self.scale\n                output_end_y = input_end_y * self.scale\n\n                # output tile area without padding\n                output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale\n                output_end_x_tile = output_start_x_tile + input_tile_width * self.scale\n                output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale\n                output_end_y_tile = output_start_y_tile + input_tile_height * self.scale\n\n                # put tile into output image\n                self.output[:, :, output_start_y:output_end_y,\n 
                           output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,\n                                                                       output_start_x_tile:output_end_x_tile]\n\n    def post_process(self):\n        # remove extra pad\n        if self.mod_scale is not None:\n            _, _, h, w = self.output.size()\n            self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]\n        # remove prepad\n        if self.pre_pad != 0:\n            _, _, h, w = self.output.size()\n            self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]\n        return self.output\n\n    @torch.no_grad()\n    def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):\n        h_input, w_input = img.shape[0:2]\n        # img: numpy\n        img = img.astype(np.float32)\n        if np.max(img) > 256:  # 16-bit image\n            max_range = 65535\n            print('\\tInput is a 16-bit image')\n        else:\n            max_range = 255\n        img = img / max_range\n        if len(img.shape) == 2:  # gray image\n            img_mode = 'L'\n            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)\n        elif img.shape[2] == 4:  # RGBA image with alpha channel\n            img_mode = 'RGBA'\n            alpha = img[:, :, 3]\n            img = img[:, :, 0:3]\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n            if alpha_upsampler == 'realesrgan':\n                alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)\n        else:\n            img_mode = 'RGB'\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n        # ------------------- process image (without the alpha channel) ------------------- #\n        try:\n            with torch.no_grad():\n                self.pre_process(img)\n                if self.tile_size > 0:\n                    self.tile_process()\n                else:\n                    self.process()\n                output_img_t = self.post_process()\n                output_img = output_img_t.data.squeeze().float().cpu().clamp_(0, 1).numpy()\n                output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))\n                if img_mode == 'L':\n                    output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)\n            del output_img_t\n            torch.cuda.empty_cache()        \n        except RuntimeError as error:\n            print(f\"Failed inference for RealESRGAN: {error}\")      \n\n        # ------------------- process the alpha channel if necessary ------------------- #\n        if img_mode == 'RGBA':\n            if alpha_upsampler == 'realesrgan':\n                self.pre_process(alpha)\n                if self.tile_size > 0:\n                    self.tile_process()\n                else:\n                    self.process()\n                output_alpha = self.post_process()\n                output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()\n                output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))\n                output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)\n            else:  # use the cv2 resize for alpha channel\n                h, w = alpha.shape[0:2]\n                output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)\n\n            # merge the alpha channel\n            output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)\n     
       output_img[:, :, 3] = output_alpha\n\n        # ------------------------------ return ------------------------------ #\n        if max_range == 65535:  # 16-bit image\n            output = (output_img * 65535.0).round().astype(np.uint16)\n        else:\n            output = (output_img * 255.0).round().astype(np.uint8)\n\n        if outscale is not None and outscale != float(self.scale):\n            output = cv2.resize(\n                output, (\n                    int(w_input * outscale),\n                    int(h_input * outscale),\n                ), interpolation=cv2.INTER_LANCZOS4)\n\n        return output, img_mode\n\n\nclass PrefetchReader(threading.Thread):\n    \"\"\"Prefetch images.\n\n    Args:\n        img_list (list[str]): A image list of image paths to be read.\n        num_prefetch_queue (int): Number of prefetch queue.\n    \"\"\"\n\n    def __init__(self, img_list, num_prefetch_queue):\n        super().__init__()\n        self.que = queue.Queue(num_prefetch_queue)\n        self.img_list = img_list\n\n    def run(self):\n        for img_path in self.img_list:\n            img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)\n            self.que.put(img)\n\n        self.que.put(None)\n\n    def __next__(self):\n        next_item = self.que.get()\n        if next_item is None:\n            raise StopIteration\n        return next_item\n\n    def __iter__(self):\n        return self\n\n\nclass IOConsumer(threading.Thread):\n\n    def __init__(self, opt, que, qid):\n        super().__init__()\n        self._queue = que\n        self.qid = qid\n        self.opt = opt\n\n    def run(self):\n        while True:\n            msg = self._queue.get()\n            if isinstance(msg, str) and msg == 'quit':\n                break\n\n            output = msg['output']\n            save_path = msg['save_path']\n            cv2.imwrite(save_path, output)\n        print(f'IO worker {self.qid} is done.')"
  },
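  {
    "path": "basicsr/utils/realesrgan_utils_example.py",
    "content": "\"\"\"Usage sketch for basicsr.utils.realesrgan_utils.RealESRGANer.\n\nIllustrative example only; this file is not part of the original release. It\nassumes basicsr.archs.rrdbnet_arch.RRDBNet is available (as in\nCodeFormer-style repos), 'input.jpg' is a placeholder path, and the weight\nURL is the public RealESRGAN_x2plus release.\n\"\"\"\nimport cv2\n\nfrom basicsr.archs.rrdbnet_arch import RRDBNet\nfrom basicsr.utils.realesrgan_utils import RealESRGANer\n\nmodel = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)\nupsampler = RealESRGANer(\n    scale=2,\n    model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',\n    model=model,\n    tile=400,  # tile the input to bound GPU memory; 0 disables tiling\n    tile_pad=10,\n    pre_pad=0,\n    half=False)\n\nimg = cv2.imread('input.jpg', cv2.IMREAD_COLOR)\noutput, img_mode = upsampler.enhance(img, outscale=2)\ncv2.imwrite('output.png', output)\n"
  },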
  {
    "path": "basicsr/utils/registry.py",
    "content": "# Modified from: https://github.com/facebookresearch/fvcore/blob/master/fvcore/common/registry.py  # noqa: E501\n\n\nclass Registry():\n    \"\"\"\n    The registry that provides name -> object mapping, to support third-party\n    users' custom modules.\n\n    To create a registry (e.g. a backbone registry):\n\n    .. code-block:: python\n\n        BACKBONE_REGISTRY = Registry('BACKBONE')\n\n    To register an object:\n\n    .. code-block:: python\n\n        @BACKBONE_REGISTRY.register()\n        class MyBackbone():\n            ...\n\n    Or:\n\n    .. code-block:: python\n\n        BACKBONE_REGISTRY.register(MyBackbone)\n    \"\"\"\n\n    def __init__(self, name):\n        \"\"\"\n        Args:\n            name (str): the name of this registry\n        \"\"\"\n        self._name = name\n        self._obj_map = {}\n\n    def _do_register(self, name, obj):\n        assert (name not in self._obj_map), (f\"An object named '{name}' was already registered \"\n                                             f\"in '{self._name}' registry!\")\n        self._obj_map[name] = obj\n\n    def register(self, obj=None):\n        \"\"\"\n        Register the given object under the the name `obj.__name__`.\n        Can be used as either a decorator or not.\n        See docstring of this class for usage.\n        \"\"\"\n        if obj is None:\n            # used as a decorator\n            def deco(func_or_class):\n                name = func_or_class.__name__\n                self._do_register(name, func_or_class)\n                return func_or_class\n\n            return deco\n\n        # used as a function call\n        name = obj.__name__\n        self._do_register(name, obj)\n\n    def get(self, name):\n        ret = self._obj_map.get(name)\n        if ret is None:\n            raise KeyError(f\"No object named '{name}' found in '{self._name}' registry!\")\n        return ret\n\n    def __contains__(self, name):\n        return name in self._obj_map\n\n    def __iter__(self):\n        return iter(self._obj_map.items())\n\n    def keys(self):\n        return self._obj_map.keys()\n\n\nDATASET_REGISTRY = Registry('dataset')\nARCH_REGISTRY = Registry('arch')\nMODEL_REGISTRY = Registry('model')\nLOSS_REGISTRY = Registry('loss')\nMETRIC_REGISTRY = Registry('metric')\n"
  },
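  {
    "path": "basicsr/utils/registry_example.py",
    "content": "\"\"\"Usage sketch for basicsr.utils.registry.\n\nIllustrative example only; this file is not part of the original release and\nDemoArch is a made-up class.\n\"\"\"\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\n@ARCH_REGISTRY.register()\nclass DemoArch:\n    \"\"\"Stand-in architecture; real archs register themselves the same way.\"\"\"\n\n    def __init__(self, dim=16):\n        self.dim = dim\n\n\n# Look the class up by name (e.g. a string from a YAML config) and build it.\nnet = ARCH_REGISTRY.get('DemoArch')(dim=32)\nprint('DemoArch' in ARCH_REGISTRY, net.dim)  # True 32\n"
  },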
  {
    "path": "basicsr/utils/video_util.py",
    "content": "'''\nThe code is modified from the Real-ESRGAN:\nhttps://github.com/xinntao/Real-ESRGAN/blob/master/inference_realesrgan_video.py\n\n'''\nimport cv2\nimport sys\nimport numpy as np\n\ntry:\n    import ffmpeg\nexcept ImportError:\n    import pip\n    pip.main(['install', '--user', 'ffmpeg-python'])\n    import ffmpeg\n\ndef get_video_meta_info(video_path):\n    ret = {}\n    probe = ffmpeg.probe(video_path)\n    video_streams = [stream for stream in probe['streams'] if stream['codec_type'] == 'video']\n    has_audio = any(stream['codec_type'] == 'audio' for stream in probe['streams'])\n    ret['width'] = video_streams[0]['width']\n    ret['height'] = video_streams[0]['height']\n    ret['fps'] = eval(video_streams[0]['avg_frame_rate'])\n    ret['audio'] = ffmpeg.input(video_path).audio if has_audio else None\n    ret['nb_frames'] = int(video_streams[0]['nb_frames'])\n    return ret\n\nclass VideoReader:\n    def __init__(self, video_path):\n        self.paths = []  # for image&folder type\n        self.audio = None\n        try:\n            self.stream_reader = (\n                ffmpeg.input(video_path).output('pipe:', format='rawvideo', pix_fmt='bgr24',\n                                                loglevel='error').run_async(\n                                                    pipe_stdin=True, pipe_stdout=True, cmd='ffmpeg'))\n        except FileNotFoundError:\n            print('Please install ffmpeg (not ffmpeg-python) by running\\n',\n                  '\\t$ conda install -c conda-forge ffmpeg')\n            sys.exit(0)\n\n        meta = get_video_meta_info(video_path)\n        self.width = meta['width']\n        self.height = meta['height']\n        self.input_fps = meta['fps']\n        self.audio = meta['audio']\n        self.nb_frames = meta['nb_frames']\n\n        self.idx = 0\n\n    def get_resolution(self):\n        return self.height, self.width\n\n    def get_fps(self):\n        if self.input_fps is not None:\n            return self.input_fps\n        return 24\n\n    def get_audio(self):\n        return self.audio\n\n    def __len__(self):\n        return self.nb_frames\n\n    def get_frame_from_stream(self):\n        img_bytes = self.stream_reader.stdout.read(self.width * self.height * 3)  # 3 bytes for one pixel\n        if not img_bytes:\n            return None\n        img = np.frombuffer(img_bytes, np.uint8).reshape([self.height, self.width, 3])\n        return img\n\n    def get_frame_from_list(self):\n        if self.idx >= self.nb_frames:\n            return None\n        img = cv2.imread(self.paths[self.idx])\n        self.idx += 1\n        return img\n\n    def get_frame(self):\n        return self.get_frame_from_stream()\n\n\n    def close(self):\n        self.stream_reader.stdin.close()\n        self.stream_reader.wait()\n\n\nclass VideoWriter:\n    def __init__(self, video_save_path, height, width, fps, audio):\n        if height > 2160:\n            print('You are generating video that is larger than 4K, which will be very slow due to IO speed.',\n                  'We highly recommend to decrease the outscale(aka, -s).')\n        if audio is not None:\n            self.stream_writer = (\n                ffmpeg.input('pipe:', format='rawvideo', pix_fmt='bgr24', s=f'{width}x{height}',\n                            framerate=fps).output(\n                                audio,\n                                video_save_path,\n                                pix_fmt='yuv420p',\n                                vcodec='libx264',\n                
                loglevel='error',\n                                acodec='copy').overwrite_output().run_async(\n                                    pipe_stdin=True, pipe_stdout=True, cmd='ffmpeg'))\n        else:\n            self.stream_writer = (\n                ffmpeg.input('pipe:', format='rawvideo', pix_fmt='bgr24', s=f'{width}x{height}',\n                            framerate=fps).output(\n                                video_save_path, pix_fmt='yuv420p', vcodec='libx264',\n                                loglevel='error').overwrite_output().run_async(\n                                    pipe_stdin=True, pipe_stdout=True, cmd='ffmpeg'))\n\n    def write_frame(self, frame):\n        try:\n            frame = frame.astype(np.uint8).tobytes()\n            self.stream_writer.stdin.write(frame)\n        except BrokenPipeError:\n            print('Please re-install ffmpeg and libx264 by running\\n',\n                  '\\t$ conda install -c conda-forge ffmpeg\\n',\n                  '\\t$ conda install -c conda-forge x264')\n            sys.exit(0)\n\n    def close(self):\n        self.stream_writer.stdin.close()\n        self.stream_writer.wait()"
  },
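  {
    "path": "basicsr/utils/video_util_example.py",
    "content": "\"\"\"Usage sketch for basicsr.utils.video_util.\n\nIllustrative example only; this file is not part of the original release,\n'input.mp4'/'output.mp4' are placeholder paths, and ffmpeg plus ffmpeg-python\nmust be installed.\n\"\"\"\nfrom basicsr.utils.video_util import VideoReader, VideoWriter\n\nreader = VideoReader('input.mp4')\nheight, width = reader.get_resolution()\nwriter = VideoWriter('output.mp4', height, width, reader.get_fps(), reader.get_audio())\n\n# Copy the clip frame by frame; a restoration model would transform `frame`\n# (a BGR uint8 array) right here.\nfor _ in range(len(reader)):\n    frame = reader.get_frame()\n    if frame is None:\n        break\n    writer.write_frame(frame)\n\nreader.close()\nwriter.close()\n"
  },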
  {
    "path": "basicsr/version.py",
    "content": "# GENERATED VERSION FILE\n# TIME: Thu Jun 26 05:59:40 2025\n__version__ = '1.3.2'\n__gitsha__ = '536df45'\nversion_info = (1, 3, 2)\n"
  },
  {
    "path": "facelib/detection/__init__.py",
    "content": "import os\nimport torch\nfrom torch import nn\nfrom copy import deepcopy\n\nfrom facelib.utils import load_file_from_url\nfrom facelib.utils import download_pretrained_models\nfrom facelib.detection.yolov5face.models.common import Conv\n\nfrom .retinaface.retinaface import RetinaFace\nfrom .yolov5face.face_detector import YoloDetector\n\n\ndef init_detection_model(model_name, half=False, device='cuda'):\n    if 'retinaface' in model_name:\n        model = init_retinaface_model(model_name, half, device)\n    elif 'YOLOv5' in model_name:\n        model = init_yolov5face_model(model_name, device)\n    else:\n        raise NotImplementedError(f'{model_name} is not implemented.')\n\n    return model\n\n\ndef init_retinaface_model(model_name, half=False, device='cuda'):\n    if model_name == 'retinaface_resnet50':\n        model = RetinaFace(network_name='resnet50', half=half)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth'\n    elif model_name == 'retinaface_mobile0.25':\n        model = RetinaFace(network_name='mobile0.25', half=half)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_mobilenet0.25_Final.pth'\n    else:\n        raise NotImplementedError(f'{model_name} is not implemented.')\n\n    model_path = load_file_from_url(url=model_url, model_dir='ckpts/facelib', progress=True, file_name=None)\n    load_net = torch.load(model_path, map_location=lambda storage, loc: storage)\n    # remove unnecessary 'module.'\n    for k, v in deepcopy(load_net).items():\n        if k.startswith('module.'):\n            load_net[k[7:]] = v\n            load_net.pop(k)\n    model.load_state_dict(load_net, strict=True)\n    model.eval()\n    model = model.to(device)\n\n    return model\n\n\ndef init_yolov5face_model(model_name, device='cuda'):\n    if model_name == 'YOLOv5l':\n        model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5l.yaml', device=device)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth'\n    elif model_name == 'YOLOv5n':\n        model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5n.yaml', device=device)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5n-face.pth'\n    else:\n        raise NotImplementedError(f'{model_name} is not implemented.')\n    \n    model_path = load_file_from_url(url=model_url, model_dir='ckpts/facelib', progress=True, file_name=None)\n    load_net = torch.load(model_path, map_location=lambda storage, loc: storage)\n    model.detector.load_state_dict(load_net, strict=True)\n    model.detector.eval()\n    model.detector = model.detector.to(device).float()\n\n    for m in model.detector.modules():\n        if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:\n            m.inplace = True  # pytorch 1.7.0 compatibility\n        elif isinstance(m, Conv):\n            m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility\n\n    return model\n\n\n# Download from Google Drive\n# def init_yolov5face_model(model_name, device='cuda'):\n#     if model_name == 'YOLOv5l':\n#         model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5l.yaml', device=device)\n#         f_id = {'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV'}\n#     elif model_name == 'YOLOv5n':\n#         model = 
YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5n.yaml', device=device)\n#         f_id = {'yolov5n-face.pth': '1fhcpFvWZqghpGXjYPIne2sw1Fy4yhw6o'}\n#     else:\n#         raise NotImplementedError(f'{model_name} is not implemented.')\n\n#     model_path = os.path.join('weights/facelib', list(f_id.keys())[0])\n#     if not os.path.exists(model_path):\n#         download_pretrained_models(file_ids=f_id, save_path_root='weights/facelib')\n\n#     load_net = torch.load(model_path, map_location=lambda storage, loc: storage)\n#     model.detector.load_state_dict(load_net, strict=True)\n#     model.detector.eval()\n#     model.detector = model.detector.to(device).float()\n\n#     for m in model.detector.modules():\n#         if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:\n#             m.inplace = True  # pytorch 1.7.0 compatibility\n#         elif isinstance(m, Conv):\n#             m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility\n\n#     return model"
  },
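  {
    "path": "facelib/detection/detection_example.py",
    "content": "\"\"\"Usage sketch for facelib.detection.init_detection_model.\n\nIllustrative example only; this file is not part of the original release,\n'test.jpg' is a placeholder path, and the detect_faces call follows the\nRetinaFace helper shipped in this repo (assumed signature). Weights are\nauto-downloaded to ckpts/facelib on first use.\n\"\"\"\nimport cv2\nimport torch\n\nfrom facelib.detection import init_detection_model\n\ndet_net = init_detection_model('retinaface_resnet50', half=False, device='cuda')\nimg = cv2.imread('test.jpg')\n\nwith torch.no_grad():\n    # Each returned row holds [x1, y1, x2, y2, score] followed by five\n    # landmark (x, y) pairs (assumed layout).\n    faces = det_net.detect_faces(img, 0.97)\nprint(faces.shape)\n"
  },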
  {
    "path": "facelib/detection/align_trans.py",
    "content": "import cv2\nimport numpy as np\n\nfrom .matlab_cp2tform import get_similarity_transform_for_cv2\n\n# reference facial points, a list of coordinates (x,y)\nREFERENCE_FACIAL_POINTS = [[30.29459953, 51.69630051], [65.53179932, 51.50139999], [48.02519989, 71.73660278],\n                           [33.54930115, 92.3655014], [62.72990036, 92.20410156]]\n\nDEFAULT_CROP_SIZE = (96, 112)\n\n\nclass FaceWarpException(Exception):\n\n    def __str__(self):\n        return 'In File {}:{}'.format(__file__, super.__str__(self))\n\n\ndef get_reference_facial_points(output_size=None, inner_padding_factor=0.0, outer_padding=(0, 0), default_square=False):\n    \"\"\"\n    Function:\n    ----------\n        get reference 5 key points according to crop settings:\n        0. Set default crop_size:\n            if default_square:\n                crop_size = (112, 112)\n            else:\n                crop_size = (96, 112)\n        1. Pad the crop_size by inner_padding_factor in each side;\n        2. Resize crop_size into (output_size - outer_padding*2),\n            pad into output_size with outer_padding;\n        3. Output reference_5point;\n    Parameters:\n    ----------\n        @output_size: (w, h) or None\n            size of aligned face image\n        @inner_padding_factor: (w_factor, h_factor)\n            padding factor for inner (w, h)\n        @outer_padding: (w_pad, h_pad)\n            each row is a pair of coordinates (x, y)\n        @default_square: True or False\n            if True:\n                default crop_size = (112, 112)\n            else:\n                default crop_size = (96, 112);\n        !!! make sure, if output_size is not None:\n                (output_size - outer_padding)\n                = some_scale * (default crop_size * (1.0 +\n                inner_padding_factor))\n    Returns:\n    ----------\n        @reference_5point: 5x2 np.array\n            each row is a pair of transformed coordinates (x, y)\n    \"\"\"\n\n    tmp_5pts = np.array(REFERENCE_FACIAL_POINTS)\n    tmp_crop_size = np.array(DEFAULT_CROP_SIZE)\n\n    # 0) make the inner region a square\n    if default_square:\n        size_diff = max(tmp_crop_size) - tmp_crop_size\n        tmp_5pts += size_diff / 2\n        tmp_crop_size += size_diff\n\n    if (output_size and output_size[0] == tmp_crop_size[0] and output_size[1] == tmp_crop_size[1]):\n\n        return tmp_5pts\n\n    if (inner_padding_factor == 0 and outer_padding == (0, 0)):\n        if output_size is None:\n            return tmp_5pts\n        else:\n            raise FaceWarpException('No paddings to do, output_size must be None or {}'.format(tmp_crop_size))\n\n    # check output size\n    if not (0 <= inner_padding_factor <= 1.0):\n        raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)')\n\n    if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0) and output_size is None):\n        output_size = tmp_crop_size * \\\n            (1 + inner_padding_factor * 2).astype(np.int32)\n        output_size += np.array(outer_padding)\n    if not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1]):\n        raise FaceWarpException('Not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1])')\n\n    # 1) pad the inner region according inner_padding_factor\n    if inner_padding_factor > 0:\n        size_diff = tmp_crop_size * inner_padding_factor * 2\n        tmp_5pts += size_diff / 2\n        tmp_crop_size += np.round(size_diff).astype(np.int32)\n\n    # 2) 
resize the padded inner region\n    size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2\n\n    if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]:\n        raise FaceWarpException('Must have (output_size - outer_padding)'\n                                '= some_scale * (crop_size * (1.0 + inner_padding_factor)')\n\n    scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0]\n    tmp_5pts = tmp_5pts * scale_factor\n    #    size_diff = tmp_crop_size * (scale_factor - min(scale_factor))\n    #    tmp_5pts = tmp_5pts + size_diff / 2\n    tmp_crop_size = size_bf_outer_pad\n\n    # 3) add outer_padding to make output_size\n    reference_5point = tmp_5pts + np.array(outer_padding)\n    tmp_crop_size = output_size\n\n    return reference_5point\n\n\ndef get_affine_transform_matrix(src_pts, dst_pts):\n    \"\"\"\n    Function:\n    ----------\n        get affine transform matrix 'tfm' from src_pts to dst_pts\n    Parameters:\n    ----------\n        @src_pts: Kx2 np.array\n            source points matrix, each row is a pair of coordinates (x, y)\n        @dst_pts: Kx2 np.array\n            destination points matrix, each row is a pair of coordinates (x, y)\n    Returns:\n    ----------\n        @tfm: 2x3 np.array\n            transform matrix from src_pts to dst_pts\n    \"\"\"\n\n    tfm = np.float32([[1, 0, 0], [0, 1, 0]])\n    n_pts = src_pts.shape[0]\n    ones = np.ones((n_pts, 1), src_pts.dtype)\n    src_pts_ = np.hstack([src_pts, ones])\n    dst_pts_ = np.hstack([dst_pts, ones])\n\n    A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_, rcond=-1)\n\n    if rank == 3:\n        tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]], [A[0, 1], A[1, 1], A[2, 1]]])\n    elif rank == 2:\n        tfm = np.float32([[A[0, 0], A[1, 0], 0], [A[0, 1], A[1, 1], 0]])\n\n    return tfm\n\n\ndef warp_and_crop_face(src_img, facial_pts, reference_pts=None, crop_size=(96, 112), align_type='similarity'):\n    \"\"\"\n    Function:\n    ----------\n        warp the input image with a transform that maps facial_pts to\n        reference_pts, then crop the face to crop_size\n    Parameters:\n    ----------\n        @src_img: HxWxC np.array\n            input image\n        @facial_pts: could be\n            1) a list of K coordinates (x,y)\n        or\n            2) Kx2 or 2xK np.array\n            each row or col is a pair of coordinates (x, y)\n        @reference_pts: could be\n            1) a list of K coordinates (x,y)\n        or\n            2) Kx2 or 2xK np.array\n            each row or col is a pair of coordinates (x, y)\n        or\n            3) None\n            if None, use default reference facial points\n        @crop_size: (w, h)\n            output face image size\n        @align_type: transform type, could be one of\n            1) 'similarity': use similarity transform\n            2) 'cv2_affine': use the first 3 points to do affine transform,\n                    by calling cv2.getAffineTransform()\n            3) 'affine': use all points to do affine transform\n    Returns:\n    ----------\n        @face_img: output face image with size (w, h) = @crop_size\n    \"\"\"\n\n    if reference_pts is None:\n        if crop_size[0] == 96 and crop_size[1] == 112:\n            reference_pts = REFERENCE_FACIAL_POINTS\n        else:\n            default_square = False\n            inner_padding_factor = 0\n            outer_padding = (0, 0)\n            output_size = crop_size\n\n            reference_pts = get_reference_facial_points(output_size, inner_padding_factor, outer_padding,\n
             default_square)\n\n    ref_pts = np.float32(reference_pts)\n    ref_pts_shp = ref_pts.shape\n    if max(ref_pts_shp) < 3 or min(ref_pts_shp) != 2:\n        raise FaceWarpException('reference_pts.shape must be (K,2) or (2,K) and K>2')\n\n    if ref_pts_shp[0] == 2:\n        ref_pts = ref_pts.T\n\n    src_pts = np.float32(facial_pts)\n    src_pts_shp = src_pts.shape\n    if max(src_pts_shp) < 3 or min(src_pts_shp) != 2:\n        raise FaceWarpException('facial_pts.shape must be (K,2) or (2,K) and K>2')\n\n    if src_pts_shp[0] == 2:\n        src_pts = src_pts.T\n\n    if src_pts.shape != ref_pts.shape:\n        raise FaceWarpException('facial_pts and reference_pts must have the same shape')\n\n    if align_type == 'cv2_affine':\n        tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3])\n    elif align_type == 'affine':\n        tfm = get_affine_transform_matrix(src_pts, ref_pts)\n    else:\n        tfm = get_similarity_transform_for_cv2(src_pts, ref_pts)\n\n    face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1]))\n\n    return face_img\n"
  },
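  {
    "path": "facelib/detection/align_trans_example.py",
    "content": "\"\"\"Usage sketch for facelib.detection.align_trans.warp_and_crop_face.\n\nIllustrative example only; this file is not part of the original release and\nthe five landmark coordinates are made up, standing in for detector output.\n\"\"\"\nimport numpy as np\n\nfrom facelib.detection.align_trans import get_reference_facial_points, warp_and_crop_face\n\n# Square 112x112 reference template (default_square pads the 96x112 default).\nreference = get_reference_facial_points(default_square=True)\n\nsrc_img = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in frame\nlandmarks = np.array([[600.0, 300.0], [680.0, 295.0], [640.0, 350.0],\n                      [605.0, 400.0], [675.0, 398.0]])\n\n# Similarity-align the five points to the template and crop the face.\nface = warp_and_crop_face(src_img, landmarks, reference_pts=reference, crop_size=(112, 112))\nprint(face.shape)  # (112, 112, 3)\n"
  },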
  {
    "path": "facelib/detection/matlab_cp2tform.py",
    "content": "import numpy as np\nfrom numpy.linalg import inv, lstsq\nfrom numpy.linalg import matrix_rank as rank\nfrom numpy.linalg import norm\n\n\nclass MatlabCp2tormException(Exception):\n\n    def __str__(self):\n        return 'In File {}:{}'.format(__file__, super.__str__(self))\n\n\ndef tformfwd(trans, uv):\n    \"\"\"\n    Function:\n    ----------\n        apply affine transform 'trans' to uv\n\n    Parameters:\n    ----------\n        @trans: 3x3 np.array\n            transform matrix\n        @uv: Kx2 np.array\n            each row is a pair of coordinates (x, y)\n\n    Returns:\n    ----------\n        @xy: Kx2 np.array\n            each row is a pair of transformed coordinates (x, y)\n    \"\"\"\n    uv = np.hstack((uv, np.ones((uv.shape[0], 1))))\n    xy = np.dot(uv, trans)\n    xy = xy[:, 0:-1]\n    return xy\n\n\ndef tforminv(trans, uv):\n    \"\"\"\n    Function:\n    ----------\n        apply the inverse of affine transform 'trans' to uv\n\n    Parameters:\n    ----------\n        @trans: 3x3 np.array\n            transform matrix\n        @uv: Kx2 np.array\n            each row is a pair of coordinates (x, y)\n\n    Returns:\n    ----------\n        @xy: Kx2 np.array\n            each row is a pair of inverse-transformed coordinates (x, y)\n    \"\"\"\n    Tinv = inv(trans)\n    xy = tformfwd(Tinv, uv)\n    return xy\n\n\ndef findNonreflectiveSimilarity(uv, xy, options=None):\n    options = {'K': 2}\n\n    K = options['K']\n    M = xy.shape[0]\n    x = xy[:, 0].reshape((-1, 1))  # use reshape to keep a column vector\n    y = xy[:, 1].reshape((-1, 1))  # use reshape to keep a column vector\n\n    tmp1 = np.hstack((x, y, np.ones((M, 1)), np.zeros((M, 1))))\n    tmp2 = np.hstack((y, -x, np.zeros((M, 1)), np.ones((M, 1))))\n    X = np.vstack((tmp1, tmp2))\n\n    u = uv[:, 0].reshape((-1, 1))  # use reshape to keep a column vector\n    v = uv[:, 1].reshape((-1, 1))  # use reshape to keep a column vector\n    U = np.vstack((u, v))\n\n    # We know that X * r = U\n    if rank(X) >= 2 * K:\n        r, _, _, _ = lstsq(X, U, rcond=-1)\n        r = np.squeeze(r)\n    else:\n        raise Exception('cp2tform:twoUniquePointsReq')\n    sc = r[0]\n    ss = r[1]\n    tx = r[2]\n    ty = r[3]\n\n    Tinv = np.array([[sc, -ss, 0], [ss, sc, 0], [tx, ty, 1]])\n    T = inv(Tinv)\n    T[:, 2] = np.array([0, 0, 1])\n\n    return T, Tinv\n\n\ndef findSimilarity(uv, xy, options=None):\n    options = {'K': 2}\n\n    #    uv = np.array(uv)\n    #    xy = np.array(xy)\n\n    # Solve for trans1\n    trans1, trans1_inv = findNonreflectiveSimilarity(uv, xy, options)\n\n    # Solve for trans2\n\n    # manually reflect the xy data across the Y-axis\n    xyR = xy\n    xyR[:, 0] = -1 * xyR[:, 0]\n\n    trans2r, trans2r_inv = findNonreflectiveSimilarity(uv, xyR, options)\n\n    # manually reflect the tform to undo the reflection done on xyR\n    TreflectY = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])\n\n    trans2 = np.dot(trans2r, TreflectY)\n\n    # Figure out if trans1 or trans2 is better\n    xy1 = tformfwd(trans1, uv)\n    norm1 = norm(xy1 - xy)\n\n    xy2 = tformfwd(trans2, uv)\n    norm2 = norm(xy2 - xy)\n\n    if norm1 <= norm2:\n        return trans1, trans1_inv\n    else:\n        trans2_inv = inv(trans2)\n        return trans2, trans2_inv\n\n\ndef get_similarity_transform(src_pts, dst_pts, reflective=True):\n    \"\"\"\n    Function:\n    ----------\n        Find Similarity Transform Matrix 'trans':\n            u = src_pts[:, 0]\n            v = src_pts[:, 1]\n            x = dst_pts[:, 
0]\n            y = dst_pts[:, 1]\n            [x, y, 1] = [u, v, 1] * trans\n\n    Parameters:\n    ----------\n        @src_pts: Kx2 np.array\n            source points, each row is a pair of coordinates (x, y)\n        @dst_pts: Kx2 np.array\n            destination points, each row is a pair of transformed\n            coordinates (x, y)\n        @reflective: True or False\n            if True:\n                use reflective similarity transform\n            else:\n                use non-reflective similarity transform\n\n    Returns:\n    ----------\n       @trans: 3x3 np.array\n            transform matrix from uv to xy\n        trans_inv: 3x3 np.array\n            inverse of trans, transform matrix from xy to uv\n    \"\"\"\n\n    if reflective:\n        trans, trans_inv = findSimilarity(src_pts, dst_pts)\n    else:\n        trans, trans_inv = findNonreflectiveSimilarity(src_pts, dst_pts)\n\n    return trans, trans_inv\n\n\ndef cvt_tform_mat_for_cv2(trans):\n    \"\"\"\n    Function:\n    ----------\n        Convert Transform Matrix 'trans' into 'cv2_trans' which could be\n        directly used by cv2.warpAffine():\n            u = src_pts[:, 0]\n            v = src_pts[:, 1]\n            x = dst_pts[:, 0]\n            y = dst_pts[:, 1]\n            [x, y].T = cv_trans * [u, v, 1].T\n\n    Parameters:\n    ----------\n        @trans: 3x3 np.array\n            transform matrix from uv to xy\n\n    Returns:\n    ----------\n        @cv2_trans: 2x3 np.array\n            transform matrix from src_pts to dst_pts, could be directly used\n            for cv2.warpAffine()\n    \"\"\"\n    cv2_trans = trans[:, 0:2].T\n\n    return cv2_trans\n\n\ndef get_similarity_transform_for_cv2(src_pts, dst_pts, reflective=True):\n    \"\"\"\n    Function:\n    ----------\n        Find Similarity Transform Matrix 'cv2_trans' which could be\n        directly used by cv2.warpAffine():\n            u = src_pts[:, 0]\n            v = src_pts[:, 1]\n            x = dst_pts[:, 0]\n            y = dst_pts[:, 1]\n            [x, y].T = cv_trans * [u, v, 1].T\n\n    Parameters:\n    ----------\n        @src_pts: Kx2 np.array\n            source points, each row is a pair of coordinates (x, y)\n        @dst_pts: Kx2 np.array\n            destination points, each row is a pair of transformed\n            coordinates (x, y)\n        reflective: True or False\n            if True:\n                use reflective similarity transform\n            else:\n                use non-reflective similarity transform\n\n    Returns:\n    ----------\n        @cv2_trans: 2x3 np.array\n            transform matrix from src_pts to dst_pts, could be directly used\n            for cv2.warpAffine()\n    \"\"\"\n    trans, trans_inv = get_similarity_transform(src_pts, dst_pts, reflective)\n    cv2_trans = cvt_tform_mat_for_cv2(trans)\n\n    return cv2_trans\n\n\nif __name__ == '__main__':\n    \"\"\"\n    u = [0, 6, -2]\n    v = [0, 3, 5]\n    x = [-1, 0, 4]\n    y = [-1, -10, 4]\n\n    # In Matlab, run:\n    #\n    #   uv = [u'; v'];\n    #   xy = [x'; y'];\n    #   tform_sim=cp2tform(uv,xy,'similarity');\n    #\n    #   trans = tform_sim.tdata.T\n    #   ans =\n    #       -0.0764   -1.6190         0\n    #        1.6190   -0.0764         0\n    #       -3.2156    0.0290    1.0000\n    #   trans_inv = tform_sim.tdata.Tinv\n    #    ans =\n    #\n    #       -0.0291    0.6163         0\n    #       -0.6163   -0.0291         0\n    #       -0.0756    1.9826    1.0000\n    #    xy_m=tformfwd(tform_sim, u,v)\n    #\n    #    xy_m =\n  
  #\n    #       -3.2156    0.0290\n    #        1.1833   -9.9143\n    #        5.0323    2.8853\n    #    uv_m=tforminv(tform_sim, x,y)\n    #\n    #    uv_m =\n    #\n    #        0.5698    1.3953\n    #        6.0872    2.2733\n    #       -2.6570    4.3314\n    \"\"\"\n    u = [0, 6, -2]\n    v = [0, 3, 5]\n    x = [-1, 0, 4]\n    y = [-1, -10, 4]\n\n    uv = np.array((u, v)).T\n    xy = np.array((x, y)).T\n\n    print('\\n--->uv:')\n    print(uv)\n    print('\\n--->xy:')\n    print(xy)\n\n    trans, trans_inv = get_similarity_transform(uv, xy)\n\n    print('\\n--->trans matrix:')\n    print(trans)\n\n    print('\\n--->trans_inv matrix:')\n    print(trans_inv)\n\n    print('\\n---> apply transform to uv')\n    print('\\nxy_m = uv_augmented * trans')\n    uv_aug = np.hstack((uv, np.ones((uv.shape[0], 1))))\n    xy_m = np.dot(uv_aug, trans)\n    print(xy_m)\n\n    print('\\nxy_m = tformfwd(trans, uv)')\n    xy_m = tformfwd(trans, uv)\n    print(xy_m)\n\n    print('\\n---> apply inverse transform to xy')\n    print('\\nuv_m = xy_augmented * trans_inv')\n    xy_aug = np.hstack((xy, np.ones((xy.shape[0], 1))))\n    uv_m = np.dot(xy_aug, trans_inv)\n    print(uv_m)\n\n    print('\\nuv_m = tformfwd(trans_inv, xy)')\n    uv_m = tformfwd(trans_inv, xy)\n    print(uv_m)\n\n    uv_m = tforminv(trans, xy)\n    print('\\nuv_m = tforminv(trans, xy)')\n    print(uv_m)\n"
  },
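  {
    "path": "facelib/detection/cp2tform_convention_sketch.py",
    "content": "# Hypothetical sketch, not a file from the original repo (filename is illustrative):\n# a numeric check of the two transform conventions documented above.\n# get_similarity_transform() uses the row-vector convention [x, y, 1] = [u, v, 1] * trans,\n# while cv2.warpAffine() wants the column-vector convention [x, y].T = cv2_trans * [u, v, 1].T;\n# cvt_tform_mat_for_cv2() bridges them via cv2_trans = trans[:, 0:2].T.\n# The matrix below is the example quoted from MATLAB's cp2tform in the notes above.\nimport numpy as np\n\ntrans = np.array([[-0.0764, -1.6190, 0.0],\n                  [1.6190, -0.0764, 0.0],\n                  [-3.2156, 0.0290, 1.0]])\nuv1 = np.array([0.0, 0.0, 1.0])  # first source point (u, v) = (0, 0), augmented with 1\n\nxy_row = (uv1 @ trans)[:2]  # row-vector convention\ncv2_trans = trans[:, 0:2].T  # 2x3 matrix, as returned by cvt_tform_mat_for_cv2()\nxy_col = cv2_trans @ uv1  # column-vector convention\nassert np.allclose(xy_row, xy_col)\nprint(xy_col)  # ~[-3.2156, 0.0290], the first row of xy_m in the MATLAB notes\n"
  },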
  {
    "path": "facelib/detection/retinaface/retinaface.py",
    "content": "import cv2\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom PIL import Image\nfrom torchvision.models._utils import IntermediateLayerGetter as IntermediateLayerGetter\n\nfrom facelib.detection.align_trans import get_reference_facial_points, warp_and_crop_face\nfrom facelib.detection.retinaface.retinaface_net import FPN, SSH, MobileNetV1, make_bbox_head, make_class_head, make_landmark_head\nfrom facelib.detection.retinaface.retinaface_utils import (PriorBox, batched_decode, batched_decode_landm, decode, decode_landm,\n                                                 py_cpu_nms)\n\nfrom basicsr.utils.misc import get_device\n# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\ndevice = get_device()\n\n\ndef generate_config(network_name):\n    \n    cfg_mnet = {\n        'name': 'mobilenet0.25',\n        'min_sizes': [[16, 32], [64, 128], [256, 512]],\n        'steps': [8, 16, 32],\n        'variance': [0.1, 0.2],\n        'clip': False,\n        'loc_weight': 2.0,\n        'gpu_train': True,\n        'batch_size': 32,\n        'ngpu': 1,\n        'epoch': 250,\n        'decay1': 190,\n        'decay2': 220,\n        'image_size': 640,\n        'return_layers': {\n            'stage1': 1,\n            'stage2': 2,\n            'stage3': 3\n        },\n        'in_channel': 32,\n        'out_channel': 64\n    }\n\n    cfg_re50 = {\n        'name': 'Resnet50',\n        'min_sizes': [[16, 32], [64, 128], [256, 512]],\n        'steps': [8, 16, 32],\n        'variance': [0.1, 0.2],\n        'clip': False,\n        'loc_weight': 2.0,\n        'gpu_train': True,\n        'batch_size': 24,\n        'ngpu': 4,\n        'epoch': 100,\n        'decay1': 70,\n        'decay2': 90,\n        'image_size': 840,\n        'return_layers': {\n            'layer2': 1,\n            'layer3': 2,\n            'layer4': 3\n        },\n        'in_channel': 256,\n        'out_channel': 256\n    }\n\n    if network_name == 'mobile0.25':\n        return cfg_mnet\n    elif network_name == 'resnet50':\n        return cfg_re50\n    else:\n        raise NotImplementedError(f'network_name={network_name}')\n\n\nclass RetinaFace(nn.Module):\n\n    def __init__(self, network_name='resnet50', half=False, phase='test'):\n        super(RetinaFace, self).__init__()\n        self.half_inference = half\n        cfg = generate_config(network_name)\n        self.backbone = cfg['name']\n\n        self.model_name = f'retinaface_{network_name}'\n        self.cfg = cfg\n        self.phase = phase\n        self.target_size, self.max_size = 1600, 2150\n        self.resize, self.scale, self.scale1 = 1., None, None\n        self.mean_tensor = torch.tensor([[[[104.]], [[117.]], [[123.]]]]).to(device)\n        self.reference = get_reference_facial_points(default_square=True)\n        # Build network.\n        backbone = None\n        if cfg['name'] == 'mobilenet0.25':\n            backbone = MobileNetV1()\n            self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])\n        elif cfg['name'] == 'Resnet50':\n            import torchvision.models as models\n            backbone = models.resnet50(pretrained=False)\n            self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])\n\n        in_channels_stage2 = cfg['in_channel']\n        in_channels_list = [\n            in_channels_stage2 * 2,\n            in_channels_stage2 * 4,\n            in_channels_stage2 * 8,\n        ]\n\n        out_channels = cfg['out_channel']\n        
self.fpn = FPN(in_channels_list, out_channels)\n        self.ssh1 = SSH(out_channels, out_channels)\n        self.ssh2 = SSH(out_channels, out_channels)\n        self.ssh3 = SSH(out_channels, out_channels)\n\n        self.ClassHead = make_class_head(fpn_num=3, inchannels=cfg['out_channel'])\n        self.BboxHead = make_bbox_head(fpn_num=3, inchannels=cfg['out_channel'])\n        self.LandmarkHead = make_landmark_head(fpn_num=3, inchannels=cfg['out_channel'])\n\n        self.to(device)\n        self.eval()\n        if self.half_inference:\n            self.half()\n\n    def forward(self, inputs):\n        out = self.body(inputs)\n\n        if self.backbone == 'mobilenet0.25' or self.backbone == 'Resnet50':\n            out = list(out.values())\n        # FPN\n        fpn = self.fpn(out)\n\n        # SSH\n        feature1 = self.ssh1(fpn[0])\n        feature2 = self.ssh2(fpn[1])\n        feature3 = self.ssh3(fpn[2])\n        features = [feature1, feature2, feature3]\n\n        bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1)\n        classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)], dim=1)\n        tmp = [self.LandmarkHead[i](feature) for i, feature in enumerate(features)]\n        ldm_regressions = (torch.cat(tmp, dim=1))\n\n        if self.phase == 'train':\n            output = (bbox_regressions, classifications, ldm_regressions)\n        else:\n            output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions)\n        return output\n\n    def __detect_faces(self, inputs):\n        # get scale\n        height, width = inputs.shape[2:]\n        self.scale = torch.tensor([width, height, width, height], dtype=torch.float32).to(device)\n        tmp = [width, height, width, height, width, height, width, height, width, height]\n        self.scale1 = torch.tensor(tmp, dtype=torch.float32).to(device)\n\n        # forward\n        inputs = inputs.to(device)\n        if self.half_inference:\n            inputs = inputs.half()\n        loc, conf, landmarks = self(inputs)\n\n        # get priorbox\n        priorbox = PriorBox(self.cfg, image_size=inputs.shape[2:])\n        priors = priorbox.forward().to(device)\n\n        return loc, conf, landmarks, priors\n\n    # single image detection\n    def transform(self, image, use_origin_size):\n        # convert to opencv format\n        if isinstance(image, Image.Image):\n            image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)\n        image = image.astype(np.float32)\n\n        # testing scale\n        im_size_min = np.min(image.shape[0:2])\n        im_size_max = np.max(image.shape[0:2])\n        resize = float(self.target_size) / float(im_size_min)\n\n        # prevent bigger axis from being more than max_size\n        if np.round(resize * im_size_max) > self.max_size:\n            resize = float(self.max_size) / float(im_size_max)\n        resize = 1 if use_origin_size else resize\n\n        # resize\n        if resize != 1:\n            image = cv2.resize(image, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)\n\n        # convert to torch.tensor format\n        # image -= (104, 117, 123)\n        image = image.transpose(2, 0, 1)\n        image = torch.from_numpy(image).unsqueeze(0)\n\n        return image, resize\n\n    def detect_faces(\n        self,\n        image,\n        conf_threshold=0.8,\n        nms_threshold=0.4,\n        use_origin_size=True,\n    ):\n        \"\"\"\n        
Params:\n            image: BGR image\n        \"\"\"\n        image, self.resize = self.transform(image, use_origin_size)\n        image = image.to(device)\n        if self.half_inference:\n            image = image.half()\n        image = image - self.mean_tensor\n\n        loc, conf, landmarks, priors = self.__detect_faces(image)\n\n        boxes = decode(loc.data.squeeze(0), priors.data, self.cfg['variance'])\n        boxes = boxes * self.scale / self.resize\n        boxes = boxes.cpu().numpy()\n\n        scores = conf.squeeze(0).data.cpu().numpy()[:, 1]\n\n        landmarks = decode_landm(landmarks.squeeze(0), priors, self.cfg['variance'])\n        landmarks = landmarks * self.scale1 / self.resize\n        landmarks = landmarks.cpu().numpy()\n\n        # ignore low scores\n        inds = np.where(scores > conf_threshold)[0]\n        boxes, landmarks, scores = boxes[inds], landmarks[inds], scores[inds]\n\n        # sort\n        order = scores.argsort()[::-1]\n        boxes, landmarks, scores = boxes[order], landmarks[order], scores[order]\n\n        # do NMS\n        bounding_boxes = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)\n        keep = py_cpu_nms(bounding_boxes, nms_threshold)\n        bounding_boxes, landmarks = bounding_boxes[keep, :], landmarks[keep]\n        # self.t['forward_pass'].toc()\n        # print(self.t['forward_pass'].average_time)\n        # import sys\n        # sys.stdout.flush()\n        return np.concatenate((bounding_boxes, landmarks), axis=1)\n\n    def __align_multi(self, image, boxes, landmarks, limit=None):\n\n        if len(boxes) < 1:\n            return [], []\n\n        if limit:\n            boxes = boxes[:limit]\n            landmarks = landmarks[:limit]\n\n        faces = []\n        for landmark in landmarks:\n            facial5points = [[landmark[2 * j], landmark[2 * j + 1]] for j in range(5)]\n\n            warped_face = warp_and_crop_face(np.array(image), facial5points, self.reference, crop_size=(112, 112))\n            faces.append(warped_face)\n\n        return np.concatenate((boxes, landmarks), axis=1), faces\n\n    def align_multi(self, img, conf_threshold=0.8, limit=None):\n\n        rlt = self.detect_faces(img, conf_threshold=conf_threshold)\n        boxes, landmarks = rlt[:, 0:5], rlt[:, 5:]\n\n        return self.__align_multi(img, boxes, landmarks, limit)\n\n    # batched detection\n    def batched_transform(self, frames, use_origin_size):\n        \"\"\"\n        Arguments:\n            frames: a list of PIL.Image, or torch.Tensor(shape=[n, h, w, c],\n                dtype=torch.float32, BGR format).\n            use_origin_size: whether to use origin size.\n        \"\"\"\n        from_PIL = True if isinstance(frames[0], Image.Image) else False\n\n        # convert to opencv format\n        if from_PIL:\n            frames = [cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR) for frame in frames]\n            frames = np.asarray(frames, dtype=np.float32)\n\n        # testing scale\n        im_size_min = np.min(frames[0].shape[0:2])\n        im_size_max = np.max(frames[0].shape[0:2])\n        resize = float(self.target_size) / float(im_size_min)\n\n        # prevent bigger axis from being more than max_size\n        if np.round(resize * im_size_max) > self.max_size:\n            resize = float(self.max_size) / float(im_size_max)\n        resize = 1 if use_origin_size else resize\n\n        # resize\n        if resize != 1:\n            if not from_PIL:\n                frames = F.interpolate(frames, 
scale_factor=resize)\n            else:\n                frames = [\n                    cv2.resize(frame, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)\n                    for frame in frames\n                ]\n\n        # convert to torch.tensor format\n        if not from_PIL:\n            frames = frames.transpose(1, 2).transpose(1, 3).contiguous()\n        else:\n            frames = frames.transpose((0, 3, 1, 2))\n            frames = torch.from_numpy(frames)\n\n        return frames, resize\n\n    def batched_detect_faces(self, frames, conf_threshold=0.8, nms_threshold=0.4, use_origin_size=True):\n        \"\"\"\n        Arguments:\n            frames: a list of PIL.Image, or np.array(shape=[n, h, w, c],\n                type=np.uint8, BGR format).\n            conf_threshold: confidence threshold.\n            nms_threshold: nms threshold.\n            use_origin_size: whether to use origin size.\n        Returns:\n            final_bounding_boxes: list of np.array ([n_boxes, 5],\n                type=np.float32).\n            final_landmarks: list of np.array ([n_boxes, 10], type=np.float32).\n        \"\"\"\n        # self.t['forward_pass'].tic()\n        frames, self.resize = self.batched_transform(frames, use_origin_size)\n        frames = frames.to(device)\n        frames = frames - self.mean_tensor\n\n        b_loc, b_conf, b_landmarks, priors = self.__detect_faces(frames)\n\n        final_bounding_boxes, final_landmarks = [], []\n\n        # decode\n        priors = priors.unsqueeze(0)\n        b_loc = batched_decode(b_loc, priors, self.cfg['variance']) * self.scale / self.resize\n        b_landmarks = batched_decode_landm(b_landmarks, priors, self.cfg['variance']) * self.scale1 / self.resize\n        b_conf = b_conf[:, :, 1]\n\n        # index for selection\n        b_indice = b_conf > conf_threshold\n\n        # concat\n        b_loc_and_conf = torch.cat((b_loc, b_conf.unsqueeze(-1)), dim=2).float()\n\n        for pred, landm, inds in zip(b_loc_and_conf, b_landmarks, b_indice):\n\n            # ignore low scores\n            pred, landm = pred[inds, :], landm[inds, :]\n            if pred.shape[0] == 0:\n                final_bounding_boxes.append(np.array([], dtype=np.float32))\n                final_landmarks.append(np.array([], dtype=np.float32))\n                continue\n\n            # sort\n            # order = score.argsort(descending=True)\n            # box, landm, score = box[order], landm[order], score[order]\n\n            # to CPU\n            bounding_boxes, landm = pred.cpu().numpy(), landm.cpu().numpy()\n\n            # NMS\n            keep = py_cpu_nms(bounding_boxes, nms_threshold)\n            bounding_boxes, landmarks = bounding_boxes[keep, :], landm[keep]\n\n            # append\n            final_bounding_boxes.append(bounding_boxes)\n            final_landmarks.append(landmarks)\n        # self.t['forward_pass'].toc(average=True)\n        # self.batch_time += self.t['forward_pass'].diff\n        # self.total_frame += len(frames)\n        # print(self.batch_time / self.total_frame)\n\n        return final_bounding_boxes, final_landmarks\n"
  },
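  {
    "path": "facelib/detection/retinaface/retinaface_usage_sketch.py",
    "content": "# Hypothetical usage sketch, not a file from the original repo; the image path is a\n# placeholder and pretrained-weight loading (repo-specific) is omitted, so a freshly\n# constructed RetinaFace would run but detect nothing meaningful without weights.\nimport cv2\n\nfrom facelib.detection.retinaface.retinaface import RetinaFace\n\ndetector = RetinaFace(network_name='resnet50', half=False)\n# detector.load_state_dict(...)  # load the pretrained checkpoint here\nimg = cv2.imread('face.jpg')  # BGR uint8, as detect_faces() expects\ndetections = detector.detect_faces(img, conf_threshold=0.8, nms_threshold=0.4)\n# each row: x1, y1, x2, y2, score, then 5 (x, y) landmark pairs\nfor det in detections:\n    x1, y1, x2, y2, score = det[:5]\n    print(f'face {score:.2f} at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})')\n"
  },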
  {
    "path": "facelib/detection/retinaface/retinaface_net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef conv_bn(inp, oup, stride=1, leaky=0):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True))\n\n\ndef conv_bn_no_relu(inp, oup, stride):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 3, stride, 1, bias=False),\n        nn.BatchNorm2d(oup),\n    )\n\n\ndef conv_bn1X1(inp, oup, stride, leaky=0):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True))\n\n\ndef conv_dw(inp, oup, stride, leaky=0.1):\n    return nn.Sequential(\n        nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),\n        nn.BatchNorm2d(inp),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True),\n        nn.Conv2d(inp, oup, 1, 1, 0, bias=False),\n        nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True),\n    )\n\n\nclass SSH(nn.Module):\n\n    def __init__(self, in_channel, out_channel):\n        super(SSH, self).__init__()\n        assert out_channel % 4 == 0\n        leaky = 0\n        if (out_channel <= 64):\n            leaky = 0.1\n        self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1)\n\n        self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky)\n        self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)\n\n        self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky)\n        self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1)\n\n    def forward(self, input):\n        conv3X3 = self.conv3X3(input)\n\n        conv5X5_1 = self.conv5X5_1(input)\n        conv5X5 = self.conv5X5_2(conv5X5_1)\n\n        conv7X7_2 = self.conv7X7_2(conv5X5_1)\n        conv7X7 = self.conv7x7_3(conv7X7_2)\n\n        out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)\n        out = F.relu(out)\n        return out\n\n\nclass FPN(nn.Module):\n\n    def __init__(self, in_channels_list, out_channels):\n        super(FPN, self).__init__()\n        leaky = 0\n        if (out_channels <= 64):\n            leaky = 0.1\n        self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky)\n        self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky)\n        self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky)\n\n        self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky)\n        self.merge2 = conv_bn(out_channels, out_channels, leaky=leaky)\n\n    def forward(self, input):\n        # names = list(input.keys())\n        # input = list(input.values())\n\n        output1 = self.output1(input[0])\n        output2 = self.output2(input[1])\n        output3 = self.output3(input[2])\n\n        up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest')\n        output2 = output2 + up3\n        output2 = self.merge2(output2)\n\n        up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest')\n        output1 = output1 + up2\n        output1 = self.merge1(output1)\n\n        out = [output1, output2, output3]\n        return out\n\n\nclass MobileNetV1(nn.Module):\n\n    def __init__(self):\n        super(MobileNetV1, self).__init__()\n        self.stage1 = nn.Sequential(\n            conv_bn(3, 8, 2, leaky=0.1),  # 
3\n            conv_dw(8, 16, 1),  # 7\n            conv_dw(16, 32, 2),  # 11\n            conv_dw(32, 32, 1),  # 19\n            conv_dw(32, 64, 2),  # 27\n            conv_dw(64, 64, 1),  # 43\n        )\n        self.stage2 = nn.Sequential(\n            conv_dw(64, 128, 2),  # 43 + 16 = 59\n            conv_dw(128, 128, 1),  # 59 + 32 = 91\n            conv_dw(128, 128, 1),  # 91 + 32 = 123\n            conv_dw(128, 128, 1),  # 123 + 32 = 155\n            conv_dw(128, 128, 1),  # 155 + 32 = 187\n            conv_dw(128, 128, 1),  # 187 + 32 = 219\n        )\n        self.stage3 = nn.Sequential(\n            conv_dw(128, 256, 2),  # 219 + 32 = 251\n            conv_dw(256, 256, 1),  # 251 + 64 = 315\n        )\n        self.avg = nn.AdaptiveAvgPool2d((1, 1))\n        self.fc = nn.Linear(256, 1000)\n\n    def forward(self, x):\n        x = self.stage1(x)\n        x = self.stage2(x)\n        x = self.stage3(x)\n        x = self.avg(x)\n        # x = self.model(x)\n        x = x.view(-1, 256)\n        x = self.fc(x)\n        return x\n\n\nclass ClassHead(nn.Module):\n\n    def __init__(self, inchannels=512, num_anchors=3):\n        super(ClassHead, self).__init__()\n        self.num_anchors = num_anchors\n        self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0)\n\n    def forward(self, x):\n        out = self.conv1x1(x)\n        out = out.permute(0, 2, 3, 1).contiguous()\n\n        return out.view(out.shape[0], -1, 2)\n\n\nclass BboxHead(nn.Module):\n\n    def __init__(self, inchannels=512, num_anchors=3):\n        super(BboxHead, self).__init__()\n        self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0)\n\n    def forward(self, x):\n        out = self.conv1x1(x)\n        out = out.permute(0, 2, 3, 1).contiguous()\n\n        return out.view(out.shape[0], -1, 4)\n\n\nclass LandmarkHead(nn.Module):\n\n    def __init__(self, inchannels=512, num_anchors=3):\n        super(LandmarkHead, self).__init__()\n        self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0)\n\n    def forward(self, x):\n        out = self.conv1x1(x)\n        out = out.permute(0, 2, 3, 1).contiguous()\n\n        return out.view(out.shape[0], -1, 10)\n\n\ndef make_class_head(fpn_num=3, inchannels=64, anchor_num=2):\n    classhead = nn.ModuleList()\n    for i in range(fpn_num):\n        classhead.append(ClassHead(inchannels, anchor_num))\n    return classhead\n\n\ndef make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2):\n    bboxhead = nn.ModuleList()\n    for i in range(fpn_num):\n        bboxhead.append(BboxHead(inchannels, anchor_num))\n    return bboxhead\n\n\ndef make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2):\n    landmarkhead = nn.ModuleList()\n    for i in range(fpn_num):\n        landmarkhead.append(LandmarkHead(inchannels, anchor_num))\n    return landmarkhead\n"
  },
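  {
    "path": "facelib/detection/retinaface/retinaface_net_shapes_sketch.py",
    "content": "# Hypothetical shape check, not a file from the original repo: wires FPN and SSH\n# together with the resnet50 config's channel widths (in_channel=256 gives backbone\n# stages of 512/1024/2048 channels, out_channel=256) and dummy feature maps.\nimport torch\n\nfrom facelib.detection.retinaface.retinaface_net import FPN, SSH\n\nfpn = FPN([512, 1024, 2048], 256)\nfeats = [torch.zeros(1, 512, 80, 80),   # stride-8 stage\n         torch.zeros(1, 1024, 40, 40),  # stride-16 stage\n         torch.zeros(1, 2048, 20, 20)]  # stride-32 stage\nouts = fpn(feats)\n# all three pyramid levels come out with 256 channels at their input resolutions\nassert [tuple(o.shape) for o in outs] == [(1, 256, 80, 80), (1, 256, 40, 40), (1, 256, 20, 20)]\n\nssh = SSH(256, 256)  # context module: 3x3/5x5/7x7 branches concatenated back to 256 channels\nassert tuple(ssh(outs[0]).shape) == (1, 256, 80, 80)\nprint('FPN/SSH shape check passed')\n"
  },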
  {
    "path": "facelib/detection/retinaface/retinaface_utils.py",
    "content": "import numpy as np\nimport torch\nimport torchvision\nfrom itertools import product as product\nfrom math import ceil\n\n\nclass PriorBox(object):\n\n    def __init__(self, cfg, image_size=None, phase='train'):\n        super(PriorBox, self).__init__()\n        self.min_sizes = cfg['min_sizes']\n        self.steps = cfg['steps']\n        self.clip = cfg['clip']\n        self.image_size = image_size\n        self.feature_maps = [[ceil(self.image_size[0] / step), ceil(self.image_size[1] / step)] for step in self.steps]\n        self.name = 's'\n\n    def forward(self):\n        anchors = []\n        for k, f in enumerate(self.feature_maps):\n            min_sizes = self.min_sizes[k]\n            for i, j in product(range(f[0]), range(f[1])):\n                for min_size in min_sizes:\n                    s_kx = min_size / self.image_size[1]\n                    s_ky = min_size / self.image_size[0]\n                    dense_cx = [x * self.steps[k] / self.image_size[1] for x in [j + 0.5]]\n                    dense_cy = [y * self.steps[k] / self.image_size[0] for y in [i + 0.5]]\n                    for cy, cx in product(dense_cy, dense_cx):\n                        anchors += [cx, cy, s_kx, s_ky]\n\n        # back to torch land\n        output = torch.Tensor(anchors).view(-1, 4)\n        if self.clip:\n            output.clamp_(max=1, min=0)\n        return output\n\n\ndef py_cpu_nms(dets, thresh):\n    \"\"\"Pure Python NMS baseline.\"\"\"\n    keep = torchvision.ops.nms(\n        boxes=torch.Tensor(dets[:, :4]),\n        scores=torch.Tensor(dets[:, 4]),\n        iou_threshold=thresh,\n    )\n\n    return list(keep)\n\n\ndef point_form(boxes):\n    \"\"\" Convert prior_boxes to (xmin, ymin, xmax, ymax)\n    representation for comparison to point form ground truth data.\n    Args:\n        boxes: (tensor) center-size default boxes from priorbox layers.\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    return torch.cat(\n        (\n            boxes[:, :2] - boxes[:, 2:] / 2,  # xmin, ymin\n            boxes[:, :2] + boxes[:, 2:] / 2),\n        1)  # xmax, ymax\n\n\ndef center_size(boxes):\n    \"\"\" Convert prior_boxes to (cx, cy, w, h)\n    representation for comparison to center-size form ground truth data.\n    Args:\n        boxes: (tensor) point_form boxes\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    return torch.cat(\n        (boxes[:, 2:] + boxes[:, :2]) / 2,  # cx, cy\n        boxes[:, 2:] - boxes[:, :2],\n        1)  # w, h\n\n\ndef intersect(box_a, box_b):\n    \"\"\" We resize both tensors to [A,B,2] without new malloc:\n    [A,2] -> [A,1,2] -> [A,B,2]\n    [B,2] -> [1,B,2] -> [A,B,2]\n    Then we compute the area of intersect between box_a and box_b.\n    Args:\n      box_a: (tensor) bounding boxes, Shape: [A,4].\n      box_b: (tensor) bounding boxes, Shape: [B,4].\n    Return:\n      (tensor) intersection area, Shape: [A,B].\n    \"\"\"\n    A = box_a.size(0)\n    B = box_b.size(0)\n    max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2), box_b[:, 2:].unsqueeze(0).expand(A, B, 2))\n    min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2), box_b[:, :2].unsqueeze(0).expand(A, B, 2))\n    inter = torch.clamp((max_xy - min_xy), min=0)\n    return inter[:, :, 0] * inter[:, :, 1]\n\n\ndef jaccard(box_a, box_b):\n    \"\"\"Compute the jaccard overlap of two sets of boxes.  
The jaccard overlap\n    is simply the intersection over union of two boxes.  Here we operate on\n    ground truth boxes and default boxes.\n    E.g.:\n        A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)\n    Args:\n        box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]\n        box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]\n    Return:\n        jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]\n    \"\"\"\n    inter = intersect(box_a, box_b)\n    area_a = ((box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1])).unsqueeze(1).expand_as(inter)  # [A,B]\n    area_b = ((box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1])).unsqueeze(0).expand_as(inter)  # [A,B]\n    union = area_a + area_b - inter\n    return inter / union  # [A,B]\n\n\ndef matrix_iou(a, b):\n    \"\"\"\n    return iou of a and b, numpy version for data augmentation\n    \"\"\"\n    lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])\n    rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])\n\n    area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)\n    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)\n    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)\n    return area_i / (area_a[:, np.newaxis] + area_b - area_i)\n\n\ndef matrix_iof(a, b):\n    \"\"\"\n    return iof of a and b, numpy version for data augmentation\n    \"\"\"\n    lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])\n    rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])\n\n    area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)\n    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)\n    return area_i / np.maximum(area_a[:, np.newaxis], 1)\n\n\ndef match(threshold, truths, priors, variances, labels, landms, loc_t, conf_t, landm_t, idx):\n    \"\"\"Match each prior box with the ground truth box of the highest jaccard\n    overlap, encode the bounding boxes, then return the matched indices\n    corresponding to both confidence and location preds.\n    Args:\n        threshold: (float) The overlap threshold used when matching boxes.\n        truths: (tensor) Ground truth boxes, Shape: [num_obj, 4].\n        priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4].\n        variances: (tensor) Variances corresponding to each prior coord,\n            Shape: [num_priors, 4].\n        labels: (tensor) All the class labels for the image, Shape: [num_obj].\n        landms: (tensor) Ground truth landms, Shape [num_obj, 10].\n        loc_t: (tensor) Tensor to be filled w/ encoded location targets.\n        conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds.\n        landm_t: (tensor) Tensor to be filled w/ encoded landm targets.\n        idx: (int) current batch index\n    Return:\n        The matched indices corresponding to 1)location 2)confidence\n        3)landm preds.\n    \"\"\"\n    # jaccard index\n    overlaps = jaccard(truths, point_form(priors))\n    # (Bipartite Matching)\n    # [1,num_objects] best prior for each ground truth\n    best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True)\n\n    # ignore hard gt\n    valid_gt_idx = best_prior_overlap[:, 0] >= 0.2\n    best_prior_idx_filter = best_prior_idx[valid_gt_idx, :]\n    if best_prior_idx_filter.shape[0] <= 0:\n        loc_t[idx] = 0\n        conf_t[idx] = 0\n        return\n\n    # [1,num_priors] best ground truth for each prior\n    best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True)\n    best_truth_idx.squeeze_(0)\n    best_truth_overlap.squeeze_(0)\n    
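# best_prior_idx maps each gt box to its highest-overlap prior, while best_truth_idx\n    # maps each prior to its highest-overlap gt; the squeezes below turn both into\n    # 1-D index tensors before the two views are reconciled\n    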
best_prior_idx.squeeze_(1)\n    best_prior_idx_filter.squeeze_(1)\n    best_prior_overlap.squeeze_(1)\n    best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2)  # ensure best prior\n    # TODO refactor: index  best_prior_idx with long tensor\n    # ensure every gt matches with its prior of max overlap\n    for j in range(best_prior_idx.size(0)):  # determine which gt box this anchor is assigned to predict\n        best_truth_idx[best_prior_idx[j]] = j\n    matches = truths[best_truth_idx]  # Shape: [num_priors,4]  gather the matched gt bbox for every anchor\n    conf = labels[best_truth_idx]  # Shape: [num_priors]  gather the matched gt label for every anchor\n    conf[best_truth_overlap < threshold] = 0  # label as background: anchors with overlap below the threshold become negatives\n    loc = encode(matches, priors, variances)\n\n    matches_landm = landms[best_truth_idx]\n    landm = encode_landm(matches_landm, priors, variances)\n    loc_t[idx] = loc  # [num_priors,4] encoded offsets to learn\n    conf_t[idx] = conf  # [num_priors] top class label for each prior\n    landm_t[idx] = landm\n\n\ndef encode(matched, priors, variances):\n    \"\"\"Encode the variances from the priorbox layers into the ground truth boxes\n    we have matched (based on jaccard overlap) with the prior boxes.\n    Args:\n        matched: (tensor) Coords of ground truth for each prior in point-form\n            Shape: [num_priors, 4].\n        priors: (tensor) Prior boxes in center-offset form\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        encoded boxes (tensor), Shape: [num_priors, 4]\n    \"\"\"\n\n    # dist b/t match center and prior's center\n    g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2]\n    # encode variance\n    g_cxcy /= (variances[0] * priors[:, 2:])\n    # match wh / prior wh\n    g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]\n    g_wh = torch.log(g_wh) / variances[1]\n    # return target for smooth_l1_loss\n    return torch.cat([g_cxcy, g_wh], 1)  # [num_priors,4]\n\n\ndef encode_landm(matched, priors, variances):\n    \"\"\"Encode the variances from the priorbox layers into the ground truth boxes\n    we have matched (based on jaccard overlap) with the prior boxes.\n    Args:\n        matched: (tensor) Coords of ground truth for each prior in point-form\n            Shape: [num_priors, 10].\n        priors: (tensor) Prior boxes in center-offset form\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        encoded landm (tensor), Shape: [num_priors, 10]\n    \"\"\"\n\n    # dist b/t match center and prior's center\n    matched = torch.reshape(matched, (matched.size(0), 5, 2))\n    priors_cx = priors[:, 0].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_cy = priors[:, 1].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_w = priors[:, 2].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_h = priors[:, 3].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors = torch.cat([priors_cx, priors_cy, priors_w, priors_h], dim=2)\n    g_cxcy = matched[:, :, :2] - priors[:, :, :2]\n    # encode variance\n    g_cxcy /= (variances[0] * priors[:, :, 2:])\n    # g_cxcy /= priors[:, :, 2:]\n    g_cxcy = g_cxcy.reshape(g_cxcy.size(0), -1)\n    # return target for smooth_l1_loss\n    return g_cxcy\n\n\n# Adapted from https://github.com/Hakuyume/chainer-ssd\ndef decode(loc, priors, variances):\n    \"\"\"Decode locations from predictions using priors to undo\n    the encoding we did for offset regression at 
train time.\n    Args:\n        loc (tensor): location predictions for loc layers,\n            Shape: [num_priors,4]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded bounding box predictions\n    \"\"\"\n\n    boxes = torch.cat((priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],\n                       priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)\n    boxes[:, :2] -= boxes[:, 2:] / 2\n    boxes[:, 2:] += boxes[:, :2]\n    return boxes\n\n\ndef decode_landm(pre, priors, variances):\n    \"\"\"Decode landm from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        pre (tensor): landm predictions for loc layers,\n            Shape: [num_priors,10]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded landm predictions\n    \"\"\"\n    tmp = (\n        priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:],\n        priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:],\n        priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:],\n        priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:],\n        priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:],\n    )\n    landms = torch.cat(tmp, dim=1)\n    return landms\n\n\ndef batched_decode(b_loc, priors, variances):\n    \"\"\"Decode locations from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        b_loc (tensor): location predictions for loc layers,\n            Shape: [num_batches,num_priors,4]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [1,num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded bounding box predictions\n    \"\"\"\n    boxes = (\n        priors[:, :, :2] + b_loc[:, :, :2] * variances[0] * priors[:, :, 2:],\n        priors[:, :, 2:] * torch.exp(b_loc[:, :, 2:] * variances[1]),\n    )\n    boxes = torch.cat(boxes, dim=2)\n\n    boxes[:, :, :2] -= boxes[:, :, 2:] / 2\n    boxes[:, :, 2:] += boxes[:, :, :2]\n    return boxes\n\n\ndef batched_decode_landm(pre, priors, variances):\n    \"\"\"Decode landm from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        pre (tensor): landm predictions for loc layers,\n            Shape: [num_batches,num_priors,10]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [1,num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded landm predictions\n    \"\"\"\n    landms = (\n        priors[:, :, :2] + pre[:, :, :2] * variances[0] * priors[:, :, 2:],\n        priors[:, :, :2] + pre[:, :, 2:4] * variances[0] * priors[:, :, 2:],\n        priors[:, :, :2] + pre[:, :, 4:6] * variances[0] * priors[:, :, 2:],\n        priors[:, :, :2] + pre[:, :, 6:8] * variances[0] * priors[:, :, 2:],\n        priors[:, :, :2] + pre[:, :, 8:10] * variances[0] * priors[:, :, 2:],\n    )\n    landms = torch.cat(landms, dim=2)\n    return landms\n\n\ndef log_sum_exp(x):\n    \"\"\"Utility function for computing log_sum_exp in a numerically stable way.\n    This will be used to determine unaveraged confidence loss across\n    all examples in a batch.\n    Args:\n        x 
(Variable(tensor)): conf_preds from conf layers\n    \"\"\"\n    x_max = x.data.max()\n    return torch.log(torch.sum(torch.exp(x - x_max), 1, keepdim=True)) + x_max\n\n\n# Original author: Francisco Massa:\n# https://github.com/fmassa/object-detection.torch\n# Ported to PyTorch by Max deGroot (02/01/2017)\ndef nms(boxes, scores, overlap=0.5, top_k=200):\n    \"\"\"Apply non-maximum suppression at test time to avoid detecting too many\n    overlapping bounding boxes for a given object.\n    Args:\n        boxes: (tensor) The location preds for the img, Shape: [num_priors,4].\n        scores: (tensor) The class pred scores for the img, Shape:[num_priors].\n        overlap: (float) The overlap thresh for suppressing unnecessary boxes.\n        top_k: (int) The Maximum number of box preds to consider.\n    Return:\n        The indices of the kept boxes with respect to num_priors.\n    \"\"\"\n\n    keep = torch.Tensor(scores.size(0)).fill_(0).long()\n    if boxes.numel() == 0:\n        return keep\n    x1 = boxes[:, 0]\n    y1 = boxes[:, 1]\n    x2 = boxes[:, 2]\n    y2 = boxes[:, 3]\n    area = torch.mul(x2 - x1, y2 - y1)\n    v, idx = scores.sort(0)  # sort in ascending order\n    # I = I[v >= 0.01]\n    idx = idx[-top_k:]  # indices of the top-k largest vals\n    xx1 = boxes.new()\n    yy1 = boxes.new()\n    xx2 = boxes.new()\n    yy2 = boxes.new()\n    w = boxes.new()\n    h = boxes.new()\n\n    # keep = torch.Tensor()\n    count = 0\n    while idx.numel() > 0:\n        i = idx[-1]  # index of current largest val\n        # keep.append(i)\n        keep[count] = i\n        count += 1\n        if idx.size(0) == 1:\n            break\n        idx = idx[:-1]  # remove kept element from view\n        # load bboxes of next highest vals\n        torch.index_select(x1, 0, idx, out=xx1)\n        torch.index_select(y1, 0, idx, out=yy1)\n        torch.index_select(x2, 0, idx, out=xx2)\n        torch.index_select(y2, 0, idx, out=yy2)\n        # store element-wise max with next highest score\n        xx1 = torch.clamp(xx1, min=x1[i])\n        yy1 = torch.clamp(yy1, min=y1[i])\n        xx2 = torch.clamp(xx2, max=x2[i])\n        yy2 = torch.clamp(yy2, max=y2[i])\n        w.resize_as_(xx2)\n        h.resize_as_(yy2)\n        w = xx2 - xx1\n        h = yy2 - yy1\n        # check sizes of xx1 and xx2.. after each iteration\n        w = torch.clamp(w, min=0.0)\n        h = torch.clamp(h, min=0.0)\n        inter = w * h\n        # IoU = i / (area(a) + area(b) - i)\n        rem_areas = torch.index_select(area, 0, idx)  # load remaining areas\n        union = (rem_areas - inter) + area[i]\n        IoU = inter / union  # store result in iou\n        # keep only elements with an IoU <= overlap\n        idx = idx[IoU.le(overlap)]\n    return keep, count\n"
  },
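  {
    "path": "facelib/detection/retinaface/retinaface_utils_roundtrip_sketch.py",
    "content": "# Hypothetical round-trip check, not a file from the original repo: decode() should\n# invert encode() for the same priors and variances, since decode() undoes the\n# center/size offset encoding used at train time. The values below are made up.\nimport torch\n\nfrom facelib.detection.retinaface.retinaface_utils import decode, encode\n\npriors = torch.tensor([[0.5, 0.5, 0.2, 0.2],\n                       [0.3, 0.7, 0.1, 0.15]])  # center-size form (cx, cy, w, h)\nboxes = torch.tensor([[0.40, 0.42, 0.58, 0.61],\n                      [0.25, 0.62, 0.36, 0.80]])  # point form (xmin, ymin, xmax, ymax)\nvariances = [0.1, 0.2]\n\nloc = encode(boxes, priors, variances)  # offsets a detector would regress\nrecovered = decode(loc, priors, variances)  # back to point form\nassert torch.allclose(recovered, boxes, atol=1e-6)\nprint('encode/decode round trip passed')\n"
  },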
  {
    "path": "facelib/detection/yolov5face/__init__.py",
    "content": ""
  },
  {
    "path": "facelib/detection/yolov5face/face_detector.py",
    "content": "import cv2\nimport copy\nimport re\nimport torch\nimport numpy as np\n\nfrom pathlib import Path\nfrom facelib.detection.yolov5face.models.yolo import Model\nfrom facelib.detection.yolov5face.utils.datasets import letterbox\nfrom facelib.detection.yolov5face.utils.general import (\n    check_img_size,\n    non_max_suppression_face,\n    scale_coords,\n    scale_coords_landmarks,\n)\n\n# IS_HIGH_VERSION = tuple(map(int, torch.__version__.split('+')[0].split('.')[:2])) >= (1, 9)\nIS_HIGH_VERSION = [int(m) for m in list(re.findall(r\"^([0-9]+)\\.([0-9]+)\\.([0-9]+)([^0-9][a-zA-Z0-9]*)?(\\+git.*)?$\",\\\n    torch.__version__)[0][:3])] >= [1, 9, 0]\n\n\ndef isListempty(inList):\n    if isinstance(inList, list): # Is a list\n        return all(map(isListempty, inList))\n    return False # Not a list\n\nclass YoloDetector:\n    def __init__(\n        self,\n        config_name,\n        min_face=10,\n        target_size=None,\n        device='cuda',\n    ):\n        \"\"\"\n        config_name: name of .yaml config with network configuration from models/ folder.\n        min_face : minimal face size in pixels.\n        target_size : target size of smaller image axis (choose lower for faster work). e.g. 480, 720, 1080.\n                    None for original resolution.\n        \"\"\"\n        self._class_path = Path(__file__).parent.absolute()\n        self.target_size = target_size\n        self.min_face = min_face\n        self.detector = Model(cfg=config_name)\n        self.device = device\n\n\n    def _preprocess(self, imgs):\n        \"\"\"\n        Preprocessing image before passing through the network. Resize and conversion to torch tensor.\n        \"\"\"\n        pp_imgs = []\n        for img in imgs:\n            h0, w0 = img.shape[:2]  # orig hw\n            if self.target_size:\n                r = self.target_size / min(h0, w0)  # resize image to img_size\n                if r < 1:\n                    img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=cv2.INTER_LINEAR)\n\n            imgsz = check_img_size(max(img.shape[:2]), s=self.detector.stride.max())  # check img_size\n            img = letterbox(img, new_shape=imgsz)[0]\n            pp_imgs.append(img)\n        pp_imgs = np.array(pp_imgs)\n        pp_imgs = pp_imgs.transpose(0, 3, 1, 2)\n        pp_imgs = torch.from_numpy(pp_imgs).to(self.device)\n        pp_imgs = pp_imgs.float()  # uint8 to fp16/32\n        return pp_imgs / 255.0  # 0 - 255 to 0.0 - 1.0\n\n    def _postprocess(self, imgs, origimgs, pred, conf_thres, iou_thres):\n        \"\"\"\n        Postprocessing of raw pytorch model output.\n        Returns:\n            bboxes: list of arrays with 4 coordinates of bounding boxes with format x1,y1,x2,y2.\n            points: list of arrays with coordinates of 5 facial keypoints (eyes, nose, lips corners).\n        \"\"\"\n        bboxes = [[] for _ in range(len(origimgs))]\n        landmarks = [[] for _ in range(len(origimgs))]\n\n        pred = non_max_suppression_face(pred, conf_thres, iou_thres)\n\n        for image_id, origimg in enumerate(origimgs):\n            img_shape = origimg.shape\n            image_height, image_width = img_shape[:2]\n            gn = torch.tensor(img_shape)[[1, 0, 1, 0]]  # normalization gain whwh\n            gn_lks = torch.tensor(img_shape)[[1, 0, 1, 0, 1, 0, 1, 0, 1, 0]]  # normalization gain landmarks\n            det = pred[image_id].cpu()\n            scale_coords(imgs[image_id].shape[1:], det[:, :4], img_shape).round()\n            
scale_coords_landmarks(imgs[image_id].shape[1:], det[:, 5:15], img_shape).round()\n\n            for j in range(det.size()[0]):\n                box = (det[j, :4].view(1, 4) / gn).view(-1).tolist()\n                box = list(\n                    map(int, [box[0] * image_width, box[1] * image_height, box[2] * image_width, box[3] * image_height])\n                )\n                if box[3] - box[1] < self.min_face:\n                    continue\n                lm = (det[j, 5:15].view(1, 10) / gn_lks).view(-1).tolist()\n                lm = list(map(int, [i * image_width if j % 2 == 0 else i * image_height for j, i in enumerate(lm)]))\n                lm = [lm[i : i + 2] for i in range(0, len(lm), 2)]\n                bboxes[image_id].append(box)\n                landmarks[image_id].append(lm)\n        return bboxes, landmarks\n\n    def detect_faces(self, imgs, conf_thres=0.7, iou_thres=0.5):\n        \"\"\"\n        Get bbox coordinates and keypoints of faces on original image.\n        Params:\n            imgs: image or list of images to detect faces on with BGR order (convert to RGB order for inference)\n            conf_thres: confidence threshold for each prediction\n            iou_thres: threshold for NMS (filter of intersecting bboxes)\n        Returns:\n            bboxes: list of arrays with 4 coordinates of bounding boxes with format x1,y1,x2,y2.\n            points: list of arrays with coordinates of 5 facial keypoints (eyes, nose, lips corners).\n        \"\"\"\n        # Pass input images through face detector\n        images = imgs if isinstance(imgs, list) else [imgs]\n        images = [cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for img in images]\n        origimgs = copy.deepcopy(images)\n\n        images = self._preprocess(images)\n        \n        if IS_HIGH_VERSION:\n            with torch.inference_mode():  # for pytorch>=1.9 \n                pred = self.detector(images)[0]\n        else:\n            with torch.no_grad():  # for pytorch<1.9\n                pred = self.detector(images)[0]\n\n        bboxes, points = self._postprocess(images, origimgs, pred, conf_thres, iou_thres)\n\n        # return bboxes, points\n        if not isListempty(points):\n            bboxes = np.array(bboxes).reshape(-1,4)\n            points = np.array(points).reshape(-1,10)\n            padding = bboxes[:,0].reshape(-1,1)  # placeholder 'score' column (reuses x1) so rows match the [x1, y1, x2, y2, score, landmarks] layout\n            return np.concatenate((bboxes, padding, points), axis=1)\n        else:\n            return None\n\n    def __call__(self, *args):\n        return self.detect_faces(*args)\n"
  },
  {
    "path": "facelib/detection/yolov5face/models/__init__.py",
    "content": ""
  },
  {
    "path": "facelib/detection/yolov5face/models/common.py",
    "content": "# This file contains modules common to various models\n\nimport math\n\nimport numpy as np\nimport torch\nfrom torch import nn\n\nfrom facelib.detection.yolov5face.utils.datasets import letterbox\nfrom facelib.detection.yolov5face.utils.general import (\n    make_divisible,\n    non_max_suppression,\n    scale_coords,\n    xyxy2xywh,\n)\n\n\ndef autopad(k, p=None):  # kernel, padding\n    # Pad to 'same'\n    if p is None:\n        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad\n    return p\n\n\ndef channel_shuffle(x, groups):\n    batchsize, num_channels, height, width = x.data.size()\n    channels_per_group = torch.div(num_channels, groups, rounding_mode=\"trunc\")\n\n    # reshape\n    x = x.view(batchsize, groups, channels_per_group, height, width)\n    x = torch.transpose(x, 1, 2).contiguous()\n\n    # flatten\n    return x.view(batchsize, -1, height, width)\n\n\ndef DWConv(c1, c2, k=1, s=1, act=True):\n    # Depthwise convolution\n    return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act)\n\n\nclass Conv(nn.Module):\n    # Standard convolution\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)\n        self.bn = nn.BatchNorm2d(c2)\n        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())\n\n    def forward(self, x):\n        return self.act(self.bn(self.conv(x)))\n\n    def fuseforward(self, x):\n        return self.act(self.conv(x))\n\n\nclass StemBlock(nn.Module):\n    def __init__(self, c1, c2, k=3, s=2, p=None, g=1, act=True):\n        super().__init__()\n        self.stem_1 = Conv(c1, c2, k, s, p, g, act)\n        self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0)\n        self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1)\n        self.stem_2p = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)\n        self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0)\n\n    def forward(self, x):\n        stem_1_out = self.stem_1(x)\n        stem_2a_out = self.stem_2a(stem_1_out)\n        stem_2b_out = self.stem_2b(stem_2a_out)\n        stem_2p_out = self.stem_2p(stem_1_out)\n        return self.stem_3(torch.cat((stem_2b_out, stem_2p_out), 1))\n\n\nclass Bottleneck(nn.Module):\n    # Standard bottleneck\n    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c_, c2, 3, 1, g=g)\n        self.add = shortcut and c1 == c2\n\n    def forward(self, x):\n        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass BottleneckCSP(nn.Module):\n    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)\n        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)\n        self.cv4 = Conv(2 * c_, c2, 1, 1)\n        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)\n        self.act = nn.LeakyReLU(0.1, inplace=True)\n        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))\n\n    def forward(self, x):\n        y1 = 
self.cv3(self.m(self.cv1(x)))\n        y2 = self.cv2(x)\n        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))\n\n\nclass C3(nn.Module):\n    # CSP Bottleneck with 3 convolutions\n    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c1, c_, 1, 1)\n        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)\n        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))\n\n    def forward(self, x):\n        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))\n\n\nclass ShuffleV2Block(nn.Module):\n    def __init__(self, inp, oup, stride):\n        super().__init__()\n\n        if not 1 <= stride <= 3:\n            raise ValueError(\"illegal stride value\")\n        self.stride = stride\n\n        branch_features = oup // 2\n\n        if self.stride > 1:\n            self.branch1 = nn.Sequential(\n                self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1),\n                nn.BatchNorm2d(inp),\n                nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False),\n                nn.BatchNorm2d(branch_features),\n                nn.SiLU(),\n            )\n        else:\n            self.branch1 = nn.Sequential()\n\n        self.branch2 = nn.Sequential(\n            nn.Conv2d(\n                inp if (self.stride > 1) else branch_features,\n                branch_features,\n                kernel_size=1,\n                stride=1,\n                padding=0,\n                bias=False,\n            ),\n            nn.BatchNorm2d(branch_features),\n            nn.SiLU(),\n            self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1),\n            nn.BatchNorm2d(branch_features),\n            nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False),\n            nn.BatchNorm2d(branch_features),\n            nn.SiLU(),\n        )\n\n    @staticmethod\n    def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False):\n        return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i)\n\n    def forward(self, x):\n        if self.stride == 1:\n            x1, x2 = x.chunk(2, dim=1)\n            out = torch.cat((x1, self.branch2(x2)), dim=1)\n        else:\n            out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)\n        out = channel_shuffle(out, 2)\n        return out\n\n\nclass SPP(nn.Module):\n    # Spatial pyramid pooling layer used in YOLOv3-SPP\n    def __init__(self, c1, c2, k=(5, 9, 13)):\n        super().__init__()\n        c_ = c1 // 2  # hidden channels\n        self.cv1 = Conv(c1, c_, 1, 1)\n        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)\n        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])\n\n    def forward(self, x):\n        x = self.cv1(x)\n        return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))\n\n\nclass Focus(nn.Module):\n    # Focus wh information into c-space\n    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups\n        super().__init__()\n        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)\n\n    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)\n        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 
1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))\n\n\nclass Concat(nn.Module):\n    # Concatenate a list of tensors along dimension\n    def __init__(self, dimension=1):\n        super().__init__()\n        self.d = dimension\n\n    def forward(self, x):\n        return torch.cat(x, self.d)\n\n\nclass NMS(nn.Module):\n    # Non-Maximum Suppression (NMS) module\n    conf = 0.25  # confidence threshold\n    iou = 0.45  # IoU threshold\n    classes = None  # (optional list) filter by class\n\n    def forward(self, x):\n        return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes)\n\n\nclass AutoShape(nn.Module):\n    # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS\n    img_size = 640  # inference size (pixels)\n    conf = 0.25  # NMS confidence threshold\n    iou = 0.45  # NMS IoU threshold\n    classes = None  # (optional list) filter by class\n\n    def __init__(self, model):\n        super().__init__()\n        self.model = model.eval()\n\n    def autoshape(self):\n        print(\"autoShape already enabled, skipping... \")  # model already converted to model.autoshape()\n        return self\n\n    def forward(self, imgs, size=640, augment=False, profile=False):\n        # Inference from various sources. For height=720, width=1280, RGB images example inputs are:\n        #   OpenCV:          = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(720,1280,3)\n        #   PIL:             = Image.open('image.jpg')  # HWC x(720,1280,3)\n        #   numpy:           = np.zeros((720,1280,3))  # HWC\n        #   torch:           = torch.zeros(16,3,720,1280)  # BCHW\n        #   multiple:        = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  
# list of images\n\n        p = next(self.model.parameters())  # for device and type\n        if isinstance(imgs, torch.Tensor):  # torch\n            return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference\n\n        # Pre-process\n        n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs])  # number of images, list of images\n        shape0, shape1 = [], []  # image and inference shapes\n        for i, im in enumerate(imgs):\n            im = np.array(im)  # to numpy\n            if im.shape[0] < 5:  # image in CHW\n                im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)\n            im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3)  # enforce 3ch input\n            s = im.shape[:2]  # HWC\n            shape0.append(s)  # image shape\n            g = size / max(s)  # gain\n            shape1.append([y * g for y in s])\n            imgs[i] = im  # update\n        shape1 = [make_divisible(x, int(self.model.stride.max())) for x in np.stack(shape1, 0).max(0)]  # inference shape (stride lives on the wrapped model)\n        x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs]  # pad\n        x = np.stack(x, 0) if n > 1 else x[0][None]  # stack\n        x = np.ascontiguousarray(x.transpose((0, 3, 1, 2)))  # BHWC to BCHW\n        x = torch.from_numpy(x).to(p.device).type_as(p) / 255.0  # uint8 to fp16/32\n\n        # Inference\n        with torch.no_grad():\n            y = self.model(x, augment, profile)[0]  # forward\n        y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes)  # NMS\n\n        # Post-process\n        for i in range(n):\n            scale_coords(shape1, y[i][:, :4], shape0[i])\n\n        return Detections(imgs, y, self.model.names)\n\n\nclass Detections:\n    # detections class for YOLOv5 inference results\n    def __init__(self, imgs, pred, names=None):\n        super().__init__()\n        d = pred[0].device  # device\n        gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1.0, 1.0], device=d) for im in imgs]  # normalizations\n        self.imgs = imgs  # list of images as numpy arrays\n        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)\n        self.names = names  # class names\n        self.xyxy = pred  # xyxy pixels\n        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels\n        self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)]  # xyxy normalized\n        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized\n        self.n = len(self.pred)\n\n    def __len__(self):\n        return self.n\n\n    def tolist(self):\n        # return a list of Detections objects, i.e. 'for result in results.tolist():'\n        x = [Detections([self.imgs[i]], [self.pred[i]], self.names) for i in range(self.n)]\n        for d in x:\n            for k in [\"imgs\", \"pred\", \"xyxy\", \"xyxyn\", \"xywh\", \"xywhn\"]:\n                setattr(d, k, getattr(d, k)[0])  # pop out of list\n        return x\n"
  },
  {
    "path": "facelib/detection/yolov5face/models/experimental.py",
    "content": "# # This file contains experimental modules\n\nimport numpy as np\nimport torch\nfrom torch import nn\n\nfrom facelib.detection.yolov5face.models.common import Conv\n\n\nclass CrossConv(nn.Module):\n    # Cross Convolution Downsample\n    def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):\n        # ch_in, ch_out, kernel, stride, groups, expansion, shortcut\n        super().__init__()\n        c_ = int(c2 * e)  # hidden channels\n        self.cv1 = Conv(c1, c_, (1, k), (1, s))\n        self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)\n        self.add = shortcut and c1 == c2\n\n    def forward(self, x):\n        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass MixConv2d(nn.Module):\n    # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595\n    def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):\n        super().__init__()\n        groups = len(k)\n        if equal_ch:  # equal c_ per group\n            i = torch.linspace(0, groups - 1e-6, c2).floor()  # c2 indices\n            c_ = [(i == g).sum() for g in range(groups)]  # intermediate channels\n        else:  # equal weight.numel() per group\n            b = [c2] + [0] * groups\n            a = np.eye(groups + 1, groups, k=-1)\n            a -= np.roll(a, 1, axis=1)\n            a *= np.array(k) ** 2\n            a[0] = 1\n            c_ = np.linalg.lstsq(a, b, rcond=None)[0].round()  # solve for equal weight indices, ax = b\n\n        self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])\n        self.bn = nn.BatchNorm2d(c2)\n        self.act = nn.LeakyReLU(0.1, inplace=True)\n\n    def forward(self, x):\n        return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))\n"
  },
  {
    "path": "facelib/detection/yolov5face/models/yolo.py",
    "content": "import math\nfrom copy import deepcopy\nfrom pathlib import Path\n\nimport torch\nimport yaml  # for torch hub\nfrom torch import nn\n\nfrom facelib.detection.yolov5face.models.common import (\n    C3,\n    NMS,\n    SPP,\n    AutoShape,\n    Bottleneck,\n    BottleneckCSP,\n    Concat,\n    Conv,\n    DWConv,\n    Focus,\n    ShuffleV2Block,\n    StemBlock,\n)\nfrom facelib.detection.yolov5face.models.experimental import CrossConv, MixConv2d\nfrom facelib.detection.yolov5face.utils.autoanchor import check_anchor_order\nfrom facelib.detection.yolov5face.utils.general import make_divisible\nfrom facelib.detection.yolov5face.utils.torch_utils import copy_attr, fuse_conv_and_bn\n\n\nclass Detect(nn.Module):\n    stride = None  # strides computed during build\n    export = False  # onnx export\n\n    def __init__(self, nc=80, anchors=(), ch=()):  # detection layer\n        super().__init__()\n        self.nc = nc  # number of classes\n        self.no = nc + 5 + 10  # number of outputs per anchor\n\n        self.nl = len(anchors)  # number of detection layers\n        self.na = len(anchors[0]) // 2  # number of anchors\n        self.grid = [torch.zeros(1)] * self.nl  # init grid\n        a = torch.tensor(anchors).float().view(self.nl, -1, 2)\n        self.register_buffer(\"anchors\", a)  # shape(nl,na,2)\n        self.register_buffer(\"anchor_grid\", a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)\n        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv\n\n    def forward(self, x):\n        z = []  # inference output\n        if self.export:\n            for i in range(self.nl):\n                x[i] = self.m[i](x[i])\n            return x\n        for i in range(self.nl):\n            x[i] = self.m[i](x[i])  # conv\n            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)\n            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()\n\n            if not self.training:  # inference\n                if self.grid[i].shape[2:4] != x[i].shape[2:4]:\n                    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)\n\n                y = torch.full_like(x[i], 0)\n                y[..., [0, 1, 2, 3, 4, 15]] = x[i][..., [0, 1, 2, 3, 4, 15]].sigmoid()\n                y[..., 5:15] = x[i][..., 5:15]\n\n                y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy\n                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh\n\n                y[..., 5:7] = (\n                    y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]\n                )  # landmark x1 y1\n                y[..., 7:9] = (\n                    y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]\n                )  # landmark x2 y2\n                y[..., 9:11] = (\n                    y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]\n                )  # landmark x3 y3\n                y[..., 11:13] = (\n                    y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]\n                )  # landmark x4 y4\n                y[..., 13:15] = (\n                    y[..., 13:15] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]\n                )  # landmark x5 y5\n\n                z.append(y.view(bs, -1, self.no))\n\n        return x if self.training else (torch.cat(z, 1), x)\n\n   
 @staticmethod\n    def _make_grid(nx=20, ny=20):\n        # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)], indexing=\"ij\") # for pytorch>=1.10\n        yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])\n        return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()\n\n\nclass Model(nn.Module):\n    def __init__(self, cfg=\"yolov5s.yaml\", ch=3, nc=None):  # model, input channels, number of classes\n        super().__init__()\n        self.yaml_file = Path(cfg).name\n        with Path(cfg).open(encoding=\"utf8\") as f:\n            self.yaml = yaml.safe_load(f)  # model dict\n\n        # Define model\n        ch = self.yaml[\"ch\"] = self.yaml.get(\"ch\", ch)  # input channels\n        if nc and nc != self.yaml[\"nc\"]:\n            self.yaml[\"nc\"] = nc  # override yaml value\n\n        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist\n        self.names = [str(i) for i in range(self.yaml[\"nc\"])]  # default names\n\n        # Build strides, anchors\n        m = self.model[-1]  # Detect()\n        if isinstance(m, Detect):\n            s = 128  # 2x min stride\n            m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward\n            m.anchors /= m.stride.view(-1, 1, 1)\n            check_anchor_order(m)\n            self.stride = m.stride\n            self._initialize_biases()  # only run once\n\n    def forward(self, x):\n        return self.forward_once(x)  # single-scale inference, train\n\n    def forward_once(self, x):\n        y = []  # outputs\n        for m in self.model:\n            if m.f != -1:  # if not from previous layer\n                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers\n\n            x = m(x)  # run\n            y.append(x if m.i in self.save else None)  # save output\n\n        return x\n\n    def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency\n        # https://arxiv.org/abs/1708.02002 section 3.3\n        m = self.model[-1]  # Detect() module\n        for mi, s in zip(m.m, m.stride):  # from\n            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)\n            b.data[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)\n            b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum())  # cls\n            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)\n\n    def _print_biases(self):\n        m = self.model[-1]  # Detect() module\n        for mi in m.m:  # from\n            b = mi.bias.detach().view(m.na, -1).T  # conv.bias(255) to (3,85)\n            print((\"%6g Conv2d.bias:\" + \"%10.3g\" * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))\n\n    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers\n        print(\"Fusing layers... 
\")\n        for m in self.model.modules():\n            if isinstance(m, Conv) and hasattr(m, \"bn\"):\n                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv\n                delattr(m, \"bn\")  # remove batchnorm\n                m.forward = m.fuseforward  # update forward\n            elif type(m) is nn.Upsample:\n                m.recompute_scale_factor = None  # torch 1.11.0 compatibility\n        return self\n\n    def nms(self, mode=True):  # add or remove NMS module\n        present = isinstance(self.model[-1], NMS)  # last layer is NMS\n        if mode and not present:\n            print(\"Adding NMS... \")\n            m = NMS()  # module\n            m.f = -1  # from\n            m.i = self.model[-1].i + 1  # index\n            self.model.add_module(name=str(m.i), module=m)  # add\n            self.eval()\n        elif not mode and present:\n            print(\"Removing NMS... \")\n            self.model = self.model[:-1]  # remove\n        return self\n\n    def autoshape(self):  # add autoShape module\n        print(\"Adding autoShape... \")\n        m = AutoShape(self)  # wrap model\n        copy_attr(m, self, include=(\"yaml\", \"nc\", \"hyp\", \"names\", \"stride\"), exclude=())  # copy attributes\n        return m\n\n\ndef parse_model(d, ch):  # model_dict, input_channels(3)\n    anchors, nc, gd, gw = d[\"anchors\"], d[\"nc\"], d[\"depth_multiple\"], d[\"width_multiple\"]\n    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors\n    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)\n\n    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out\n    for i, (f, n, m, args) in enumerate(d[\"backbone\"] + d[\"head\"]):  # from, number, module, args\n        m = eval(m) if isinstance(m, str) else m  # eval strings\n        for j, a in enumerate(args):\n            try:\n                args[j] = eval(a) if isinstance(a, str) else a  # eval strings\n            except:\n                pass\n\n        n = max(round(n * gd), 1) if n > 1 else n  # depth gain\n        if m in [\n            Conv,\n            Bottleneck,\n            SPP,\n            DWConv,\n            MixConv2d,\n            Focus,\n            CrossConv,\n            BottleneckCSP,\n            C3,\n            ShuffleV2Block,\n            StemBlock,\n        ]:\n            c1, c2 = ch[f], args[0]\n\n            c2 = make_divisible(c2 * gw, 8) if c2 != no else c2\n\n            args = [c1, c2, *args[1:]]\n            if m in [BottleneckCSP, C3]:\n                args.insert(2, n)\n                n = 1\n        elif m is nn.BatchNorm2d:\n            args = [ch[f]]\n        elif m is Concat:\n            c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)\n        elif m is Detect:\n            args.append([ch[x + 1] for x in f])\n            if isinstance(args[1], int):  # number of anchors\n                args[1] = [list(range(args[1] * 2))] * len(f)\n        else:\n            c2 = ch[f]\n\n        m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module\n        t = str(m)[8:-2].replace(\"__main__.\", \"\")  # module type\n        np = sum(x.numel() for x in m_.parameters())  # number params\n        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params\n        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist\n        layers.append(m_)\n        ch.append(c2)\n    return nn.Sequential(*layers), sorted(save)\n"
  },
  {
    "path": "facelib/detection/yolov5face/models/yolov5l.yaml",
    "content": "# parameters\nnc: 1  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\n\n# anchors\nanchors:\n  - [4,5,  8,10,  13,16]  # P3/8\n  - [23,29,  43,55,  73,105]  # P4/16\n  - [146,217,  231,300,  335,433]  # P5/32\n\n# YOLOv5 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, StemBlock, [64, 3, 2]],  # 0-P1/2\n   [-1, 3, C3, [128]],\n   [-1, 1, Conv, [256, 3, 2]],      # 2-P3/8\n   [-1, 9, C3, [256]],\n   [-1, 1, Conv, [512, 3, 2]],      # 4-P4/16\n   [-1, 9, C3, [512]],\n   [-1, 1, Conv, [1024, 3, 2]],     # 6-P5/32\n   [-1, 1, SPP, [1024, [3,5,7]]],\n   [-1, 3, C3, [1024, False]],      # 8\n  ]\n\n# YOLOv5 head\nhead:\n  [[-1, 1, Conv, [512, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 5], 1, Concat, [1]],  # cat backbone P4\n   [-1, 3, C3, [512, False]],  # 12\n\n   [-1, 1, Conv, [256, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 3], 1, Concat, [1]],  # cat backbone P3\n   [-1, 3, C3, [256, False]],  # 16 (P3/8-small)\n\n   [-1, 1, Conv, [256, 3, 2]],\n   [[-1, 13], 1, Concat, [1]],  # cat head P4\n   [-1, 3, C3, [512, False]],  # 19 (P4/16-medium)\n\n   [-1, 1, Conv, [512, 3, 2]],\n   [[-1, 9], 1, Concat, [1]],  # cat head P5\n   [-1, 3, C3, [1024, False]],  # 22 (P5/32-large)\n\n   [[16, 19, 22], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]"
  },
  {
    "path": "facelib/detection/yolov5face/models/yolov5n.yaml",
    "content": "# parameters\nnc: 1  # number of classes\ndepth_multiple: 1.0  # model depth multiple\nwidth_multiple: 1.0  # layer channel multiple\n\n# anchors\nanchors:\n  - [4,5,  8,10,  13,16]  # P3/8\n  - [23,29,  43,55,  73,105]  # P4/16\n  - [146,217,  231,300,  335,433]  # P5/32\n\n# YOLOv5 backbone\nbackbone:\n  # [from, number, module, args]\n  [[-1, 1, StemBlock, [32, 3, 2]],    # 0-P2/4\n   [-1, 1, ShuffleV2Block, [128, 2]], # 1-P3/8\n   [-1, 3, ShuffleV2Block, [128, 1]], # 2\n   [-1, 1, ShuffleV2Block, [256, 2]], # 3-P4/16\n   [-1, 7, ShuffleV2Block, [256, 1]], # 4\n   [-1, 1, ShuffleV2Block, [512, 2]], # 5-P5/32\n   [-1, 3, ShuffleV2Block, [512, 1]], # 6\n  ]\n\n# YOLOv5 head\nhead:\n  [[-1, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 4], 1, Concat, [1]],  # cat backbone P4\n   [-1, 1, C3, [128, False]],  # 10\n\n   [-1, 1, Conv, [128, 1, 1]],\n   [-1, 1, nn.Upsample, [None, 2, 'nearest']],\n   [[-1, 2], 1, Concat, [1]],  # cat backbone P3\n   [-1, 1, C3, [128, False]],  # 14 (P3/8-small)\n\n   [-1, 1, Conv, [128, 3, 2]],\n   [[-1, 11], 1, Concat, [1]],  # cat head P4\n   [-1, 1, C3, [128, False]],  # 17 (P4/16-medium)\n\n   [-1, 1, Conv, [128, 3, 2]],\n   [[-1, 7], 1, Concat, [1]],  # cat head P5\n   [-1, 1, C3, [128, False]],  # 20 (P5/32-large)\n\n   [[14, 17, 20], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)\n  ]\n"
  },
  {
    "path": "facelib/detection/yolov5face/utils/__init__.py",
    "content": ""
  },
  {
    "path": "facelib/detection/yolov5face/utils/autoanchor.py",
    "content": "# Auto-anchor utils\n\n\ndef check_anchor_order(m):\n    # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary\n    a = m.anchor_grid.prod(-1).view(-1)  # anchor area\n    da = a[-1] - a[0]  # delta a\n    ds = m.stride[-1] - m.stride[0]  # delta s\n    if da.sign() != ds.sign():  # same order\n        print(\"Reversing anchor order\")\n        m.anchors[:] = m.anchors.flip(0)\n        m.anchor_grid[:] = m.anchor_grid.flip(0)\n"
  },
  {
    "path": "facelib/detection/yolov5face/utils/datasets.py",
    "content": "import cv2\nimport numpy as np\n\n\ndef letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scale_fill=False, scaleup=True):\n    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232\n    shape = img.shape[:2]  # current shape [height, width]\n    if isinstance(new_shape, int):\n        new_shape = (new_shape, new_shape)\n\n    # Scale ratio (new / old)\n    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])\n    if not scaleup:  # only scale down, do not scale up (for better test mAP)\n        r = min(r, 1.0)\n\n    # Compute padding\n    ratio = r, r  # width, height ratios\n    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))\n    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding\n    if auto:  # minimum rectangle\n        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding\n    elif scale_fill:  # stretch\n        dw, dh = 0.0, 0.0\n        new_unpad = (new_shape[1], new_shape[0])\n        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios\n\n    dw /= 2  # divide padding into 2 sides\n    dh /= 2\n\n    if shape[::-1] != new_unpad:  # resize\n        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)\n    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))\n    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))\n    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border\n    return img, ratio, (dw, dh)\n"
  },
  {
    "path": "facelib/detection/yolov5face/utils/extract_ckpt.py",
    "content": "import torch\nimport sys\nsys.path.insert(0,'./facelib/detection/yolov5face')\nmodel = torch.load('facelib/detection/yolov5face/yolov5n-face.pt', map_location='cpu')['model']\ntorch.save(model.state_dict(),'ckpts/facelib/yolov5n-face.pth')"
  },
  {
    "path": "facelib/detection/yolov5face/utils/general.py",
    "content": "import math\nimport time\n\nimport numpy as np\nimport torch\nimport torchvision\n\n\ndef check_img_size(img_size, s=32):\n    # Verify img_size is a multiple of stride s\n    new_size = make_divisible(img_size, int(s))  # ceil gs-multiple\n    # if new_size != img_size:\n    #     print(f\"WARNING: --img-size {img_size:g} must be multiple of max stride {s:g}, updating to {new_size:g}\")\n    return new_size\n\n\ndef make_divisible(x, divisor):\n    # Returns x evenly divisible by divisor\n    return math.ceil(x / divisor) * divisor\n\n\ndef xyxy2xywh(x):\n    # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x center\n    y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y center\n    y[:, 2] = x[:, 2] - x[:, 0]  # width\n    y[:, 3] = x[:, 3] - x[:, 1]  # height\n    return y\n\n\ndef xywh2xyxy(x):\n    # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right\n    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)\n    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x\n    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y\n    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x\n    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y\n    return y\n\n\ndef scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):\n    # Rescale coords (xyxy) from img1_shape to img0_shape\n    if ratio_pad is None:  # calculate from img0_shape\n        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain  = old / new\n        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding\n    else:\n        gain = ratio_pad[0][0]\n        pad = ratio_pad[1]\n\n    coords[:, [0, 2]] -= pad[0]  # x padding\n    coords[:, [1, 3]] -= pad[1]  # y padding\n    coords[:, :4] /= gain\n    clip_coords(coords, img0_shape)\n    return coords\n\n\ndef clip_coords(boxes, img_shape):\n    # Clip bounding xyxy bounding boxes to image shape (height, width)\n    boxes[:, 0].clamp_(0, img_shape[1])  # x1\n    boxes[:, 1].clamp_(0, img_shape[0])  # y1\n    boxes[:, 2].clamp_(0, img_shape[1])  # x2\n    boxes[:, 3].clamp_(0, img_shape[0])  # y2\n\n\ndef box_iou(box1, box2):\n    # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py\n    \"\"\"\n    Return intersection-over-union (Jaccard index) of boxes.\n    Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n    Arguments:\n        box1 (Tensor[N, 4])\n        box2 (Tensor[M, 4])\n    Returns:\n        iou (Tensor[N, M]): the NxM matrix containing the pairwise\n            IoU values for every element in boxes1 and boxes2\n    \"\"\"\n\n    def box_area(box):\n        return (box[2] - box[0]) * (box[3] - box[1])\n\n    area1 = box_area(box1.T)\n    area2 = box_area(box2.T)\n\n    inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)\n    return inter / (area1[:, None] + area2 - inter)\n\n\ndef non_max_suppression_face(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):\n    \"\"\"Performs Non-Maximum Suppression (NMS) on inference results\n    Returns:\n         detections with shape: nx6 (x1, y1, x2, y2, conf, cls)\n    \"\"\"\n\n    nc = prediction.shape[2] - 15  # number of classes\n    xc = prediction[..., 4] > conf_thres  # candidates\n\n    # Settings\n    # (pixels) maximum box 
width and height\n    max_wh = 4096\n    time_limit = 10.0  # seconds to quit after\n    redundant = True  # require redundant detections\n    multi_label = nc > 1  # multiple labels per box (adds 0.5ms/img)\n    merge = False  # use merge-NMS\n\n    t = time.time()\n    output = [torch.zeros((0, 16), device=prediction.device)] * prediction.shape[0]\n    for xi, x in enumerate(prediction):  # image index, image inference\n        # Apply constraints\n        x = x[xc[xi]]  # confidence\n\n        # Cat apriori labels if autolabelling\n        if labels and len(labels[xi]):\n            label = labels[xi]\n            v = torch.zeros((len(label), nc + 15), device=x.device)\n            v[:, :4] = label[:, 1:5]  # box\n            v[:, 4] = 1.0  # conf\n            v[range(len(label)), label[:, 0].long() + 15] = 1.0  # cls\n            x = torch.cat((x, v), 0)\n\n        # If none remain process next image\n        if not x.shape[0]:\n            continue\n\n        # Compute conf\n        x[:, 15:] *= x[:, 4:5]  # conf = obj_conf * cls_conf\n\n        # Box (center x, center y, width, height) to (x1, y1, x2, y2)\n        box = xywh2xyxy(x[:, :4])\n\n        # Detections matrix nx6 (xyxy, conf, landmarks, cls)\n        if multi_label:\n            i, j = (x[:, 15:] > conf_thres).nonzero(as_tuple=False).T\n            x = torch.cat((box[i], x[i, j + 15, None], x[i, 5:15], j[:, None].float()), 1)\n        else:  # best class only\n            conf, j = x[:, 15:].max(1, keepdim=True)\n            x = torch.cat((box, conf, x[:, 5:15], j.float()), 1)[conf.view(-1) > conf_thres]\n\n        # Filter by class\n        if classes is not None:\n            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]\n\n        # If none remain process next image\n        n = x.shape[0]  # number of boxes\n        if not n:\n            continue\n\n        # Batched NMS\n        c = x[:, 15:16] * (0 if agnostic else max_wh)  # classes\n        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores\n        i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS\n\n        if merge and (1 < n < 3e3):  # Merge NMS (boxes merged using weighted mean)\n            # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)\n            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix\n            weights = iou * scores[None]  # box weights\n            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes\n            if redundant:\n                i = i[iou.sum(1) > 1]  # require redundancy\n\n        output[xi] = x[i]\n        if (time.time() - t) > time_limit:\n            break  # time limit exceeded\n\n    return output\n\n\ndef non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()):\n    \"\"\"Performs Non-Maximum Suppression (NMS) on inference results\n\n    Returns:\n         detections with shape: nx6 (x1, y1, x2, y2, conf, cls)\n    \"\"\"\n\n    nc = prediction.shape[2] - 5  # number of classes\n    xc = prediction[..., 4] > conf_thres  # candidates\n\n    # Settings\n    # (pixels) maximum box width and height\n    max_wh = 4096\n    time_limit = 10.0  # seconds to quit after\n    redundant = True  # require redundant detections\n    multi_label = nc > 1  # multiple labels per box (adds 0.5ms/img)\n    merge = False  # use merge-NMS\n\n    t = time.time()\n    output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]\n    for xi, x in 
enumerate(prediction):  # image index, image inference\n        x = x[xc[xi]]  # confidence\n\n        # Cat apriori labels if autolabelling\n        if labels and len(labels[xi]):\n            label_id = labels[xi]\n            v = torch.zeros((len(label_id), nc + 5), device=x.device)\n            v[:, :4] = label_id[:, 1:5]  # box\n            v[:, 4] = 1.0  # conf\n            v[range(len(label_id)), label_id[:, 0].long() + 5] = 1.0  # cls\n            x = torch.cat((x, v), 0)\n\n        # If none remain process next image\n        if not x.shape[0]:\n            continue\n\n        # Compute conf\n        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf\n\n        # Box (center x, center y, width, height) to (x1, y1, x2, y2)\n        box = xywh2xyxy(x[:, :4])\n\n        # Detections matrix nx6 (xyxy, conf, cls)\n        if multi_label:\n            i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T\n            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)\n        else:  # best class only\n            conf, j = x[:, 5:].max(1, keepdim=True)\n            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]\n\n        # Filter by class\n        if classes is not None:\n            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]\n\n        # Check shape\n        n = x.shape[0]  # number of boxes\n        if not n:  # no boxes\n            continue\n\n        x = x[x[:, 4].argsort(descending=True)]  # sort by confidence\n\n        # Batched NMS\n        c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes\n        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores\n        i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS\n        if merge and (1 < n < 3e3):  # Merge NMS (boxes merged using weighted mean)\n            # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)\n            iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix\n            weights = iou * scores[None]  # box weights\n            x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes\n            if redundant:\n                i = i[iou.sum(1) > 1]  # require redundancy\n\n        output[xi] = x[i]\n        if (time.time() - t) > time_limit:\n            print(f\"WARNING: NMS time limit {time_limit}s exceeded\")\n            break  # time limit exceeded\n\n    return output\n\n\ndef scale_coords_landmarks(img1_shape, coords, img0_shape, ratio_pad=None):\n    # Rescale coords (xyxy) from img1_shape to img0_shape\n    if ratio_pad is None:  # calculate from img0_shape\n        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain  = old / new\n        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding\n    else:\n        gain = ratio_pad[0][0]\n        pad = ratio_pad[1]\n\n    coords[:, [0, 2, 4, 6, 8]] -= pad[0]  # x padding\n    coords[:, [1, 3, 5, 7, 9]] -= pad[1]  # y padding\n    coords[:, :10] /= gain\n    coords[:, 0].clamp_(0, img0_shape[1])  # x1\n    coords[:, 1].clamp_(0, img0_shape[0])  # y1\n    coords[:, 2].clamp_(0, img0_shape[1])  # x2\n    coords[:, 3].clamp_(0, img0_shape[0])  # y2\n    coords[:, 4].clamp_(0, img0_shape[1])  # x3\n    coords[:, 5].clamp_(0, img0_shape[0])  # y3\n    coords[:, 6].clamp_(0, img0_shape[1])  # x4\n    coords[:, 7].clamp_(0, img0_shape[0])  # y4\n    coords[:, 8].clamp_(0, img0_shape[1])  # x5\n    coords[:, 9].clamp_(0, img0_shape[0])  # y5\n    
return coords\n"
  },
  {
    "path": "facelib/detection/yolov5face/utils/torch_utils.py",
    "content": "import torch\nfrom torch import nn\n\n\ndef fuse_conv_and_bn(conv, bn):\n    # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/\n    fusedconv = (\n        nn.Conv2d(\n            conv.in_channels,\n            conv.out_channels,\n            kernel_size=conv.kernel_size,\n            stride=conv.stride,\n            padding=conv.padding,\n            groups=conv.groups,\n            bias=True,\n        )\n        .requires_grad_(False)\n        .to(conv.weight.device)\n    )\n\n    # prepare filters\n    w_conv = conv.weight.clone().view(conv.out_channels, -1)\n    w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))\n    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size()))\n\n    # prepare spatial bias\n    b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias\n    b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))\n    fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)\n\n    return fusedconv\n\n\ndef copy_attr(a, b, include=(), exclude=()):\n    # Copy attributes from b to a, options to only include [...] and to exclude [...]\n    for k, v in b.__dict__.items():\n        if (include and k not in include) or k.startswith(\"_\") or k in exclude:\n            continue\n\n        setattr(a, k, v)\n"
  },
  {
    "path": "facelib/parsing/__init__.py",
    "content": "import torch\n\nfrom facelib.utils import load_file_from_url\nfrom .bisenet import BiSeNet\nfrom .parsenet import ParseNet\n\n\ndef init_parsing_model(model_name='bisenet', half=False, device='cuda'):\n    if model_name == 'bisenet':\n        model = BiSeNet(num_class=19)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_bisenet.pth'\n    elif model_name == 'parsenet':\n        model = ParseNet(in_size=512, out_size=512, parsing_ch=19)\n        model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth'\n    else:\n        raise NotImplementedError(f'{model_name} is not implemented.')\n\n    model_path = load_file_from_url(url=model_url, model_dir='ckpts/facelib', progress=True, file_name=None)\n    load_net = torch.load(model_path, map_location=lambda storage, loc: storage)\n    model.load_state_dict(load_net, strict=True)\n    model.eval()\n    model = model.to(device)\n    return model\n"
  },
  {
    "path": "facelib/parsing/bisenet.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom .resnet import ResNet18\n\n\nclass ConvBNReLU(nn.Module):\n\n    def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1):\n        super(ConvBNReLU, self).__init__()\n        self.conv = nn.Conv2d(in_chan, out_chan, kernel_size=ks, stride=stride, padding=padding, bias=False)\n        self.bn = nn.BatchNorm2d(out_chan)\n\n    def forward(self, x):\n        x = self.conv(x)\n        x = F.relu(self.bn(x))\n        return x\n\n\nclass BiSeNetOutput(nn.Module):\n\n    def __init__(self, in_chan, mid_chan, num_class):\n        super(BiSeNetOutput, self).__init__()\n        self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1)\n        self.conv_out = nn.Conv2d(mid_chan, num_class, kernel_size=1, bias=False)\n\n    def forward(self, x):\n        feat = self.conv(x)\n        out = self.conv_out(feat)\n        return out, feat\n\n\nclass AttentionRefinementModule(nn.Module):\n\n    def __init__(self, in_chan, out_chan):\n        super(AttentionRefinementModule, self).__init__()\n        self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1)\n        self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size=1, bias=False)\n        self.bn_atten = nn.BatchNorm2d(out_chan)\n        self.sigmoid_atten = nn.Sigmoid()\n\n    def forward(self, x):\n        feat = self.conv(x)\n        atten = F.avg_pool2d(feat, feat.size()[2:])\n        atten = self.conv_atten(atten)\n        atten = self.bn_atten(atten)\n        atten = self.sigmoid_atten(atten)\n        out = torch.mul(feat, atten)\n        return out\n\n\nclass ContextPath(nn.Module):\n\n    def __init__(self):\n        super(ContextPath, self).__init__()\n        self.resnet = ResNet18()\n        self.arm16 = AttentionRefinementModule(256, 128)\n        self.arm32 = AttentionRefinementModule(512, 128)\n        self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)\n        self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)\n        self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0)\n\n    def forward(self, x):\n        feat8, feat16, feat32 = self.resnet(x)\n        h8, w8 = feat8.size()[2:]\n        h16, w16 = feat16.size()[2:]\n        h32, w32 = feat32.size()[2:]\n\n        avg = F.avg_pool2d(feat32, feat32.size()[2:])\n        avg = self.conv_avg(avg)\n        avg_up = F.interpolate(avg, (h32, w32), mode='nearest')\n\n        feat32_arm = self.arm32(feat32)\n        feat32_sum = feat32_arm + avg_up\n        feat32_up = F.interpolate(feat32_sum, (h16, w16), mode='nearest')\n        feat32_up = self.conv_head32(feat32_up)\n\n        feat16_arm = self.arm16(feat16)\n        feat16_sum = feat16_arm + feat32_up\n        feat16_up = F.interpolate(feat16_sum, (h8, w8), mode='nearest')\n        feat16_up = self.conv_head16(feat16_up)\n\n        return feat8, feat16_up, feat32_up  # x8, x8, x16\n\n\nclass FeatureFusionModule(nn.Module):\n\n    def __init__(self, in_chan, out_chan):\n        super(FeatureFusionModule, self).__init__()\n        self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0)\n        self.conv1 = nn.Conv2d(out_chan, out_chan // 4, kernel_size=1, stride=1, padding=0, bias=False)\n        self.conv2 = nn.Conv2d(out_chan // 4, out_chan, kernel_size=1, stride=1, padding=0, bias=False)\n        self.relu = nn.ReLU(inplace=True)\n        self.sigmoid = nn.Sigmoid()\n\n    def forward(self, fsp, fcp):\n        fcat = torch.cat([fsp, fcp], dim=1)\n       
 feat = self.convblk(fcat)\n        atten = F.avg_pool2d(feat, feat.size()[2:])\n        atten = self.conv1(atten)\n        atten = self.relu(atten)\n        atten = self.conv2(atten)\n        atten = self.sigmoid(atten)\n        feat_atten = torch.mul(feat, atten)\n        feat_out = feat_atten + feat\n        return feat_out\n\n\nclass BiSeNet(nn.Module):\n\n    def __init__(self, num_class):\n        super(BiSeNet, self).__init__()\n        self.cp = ContextPath()\n        self.ffm = FeatureFusionModule(256, 256)\n        self.conv_out = BiSeNetOutput(256, 256, num_class)\n        self.conv_out16 = BiSeNetOutput(128, 64, num_class)\n        self.conv_out32 = BiSeNetOutput(128, 64, num_class)\n\n    def forward(self, x, return_feat=False):\n        h, w = x.size()[2:]\n        feat_res8, feat_cp8, feat_cp16 = self.cp(x)  # return res3b1 feature\n        feat_sp = feat_res8  # replace spatial path feature with res3b1 feature\n        feat_fuse = self.ffm(feat_sp, feat_cp8)\n\n        out, feat = self.conv_out(feat_fuse)\n        out16, feat16 = self.conv_out16(feat_cp8)\n        out32, feat32 = self.conv_out32(feat_cp16)\n\n        out = F.interpolate(out, (h, w), mode='bilinear', align_corners=True)\n        out16 = F.interpolate(out16, (h, w), mode='bilinear', align_corners=True)\n        out32 = F.interpolate(out32, (h, w), mode='bilinear', align_corners=True)\n\n        if return_feat:\n            feat = F.interpolate(feat, (h, w), mode='bilinear', align_corners=True)\n            feat16 = F.interpolate(feat16, (h, w), mode='bilinear', align_corners=True)\n            feat32 = F.interpolate(feat32, (h, w), mode='bilinear', align_corners=True)\n            return out, out16, out32, feat, feat16, feat32\n        else:\n            return out, out16, out32\n"
  },
  {
    "path": "facelib/parsing/parsenet.py",
    "content": "\"\"\"Modified from https://github.com/chaofengc/PSFRGAN\n\"\"\"\nimport numpy as np\nimport torch.nn as nn\nfrom torch.nn import functional as F\n\n\nclass NormLayer(nn.Module):\n    \"\"\"Normalization Layers.\n\n    Args:\n        channels: input channels, for batch norm and instance norm.\n        input_size: input shape without batch size, for layer norm.\n    \"\"\"\n\n    def __init__(self, channels, normalize_shape=None, norm_type='bn'):\n        super(NormLayer, self).__init__()\n        norm_type = norm_type.lower()\n        self.norm_type = norm_type\n        if norm_type == 'bn':\n            self.norm = nn.BatchNorm2d(channels, affine=True)\n        elif norm_type == 'in':\n            self.norm = nn.InstanceNorm2d(channels, affine=False)\n        elif norm_type == 'gn':\n            self.norm = nn.GroupNorm(32, channels, affine=True)\n        elif norm_type == 'pixel':\n            self.norm = lambda x: F.normalize(x, p=2, dim=1)\n        elif norm_type == 'layer':\n            self.norm = nn.LayerNorm(normalize_shape)\n        elif norm_type == 'none':\n            self.norm = lambda x: x * 1.0\n        else:\n            assert 1 == 0, f'Norm type {norm_type} not support.'\n\n    def forward(self, x, ref=None):\n        if self.norm_type == 'spade':\n            return self.norm(x, ref)\n        else:\n            return self.norm(x)\n\n\nclass ReluLayer(nn.Module):\n    \"\"\"Relu Layer.\n\n    Args:\n        relu type: type of relu layer, candidates are\n            - ReLU\n            - LeakyReLU: default relu slope 0.2\n            - PRelu\n            - SELU\n            - none: direct pass\n    \"\"\"\n\n    def __init__(self, channels, relu_type='relu'):\n        super(ReluLayer, self).__init__()\n        relu_type = relu_type.lower()\n        if relu_type == 'relu':\n            self.func = nn.ReLU(True)\n        elif relu_type == 'leakyrelu':\n            self.func = nn.LeakyReLU(0.2, inplace=True)\n        elif relu_type == 'prelu':\n            self.func = nn.PReLU(channels)\n        elif relu_type == 'selu':\n            self.func = nn.SELU(True)\n        elif relu_type == 'none':\n            self.func = lambda x: x * 1.0\n        else:\n            assert 1 == 0, f'Relu type {relu_type} not support.'\n\n    def forward(self, x):\n        return self.func(x)\n\n\nclass ConvLayer(nn.Module):\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size=3,\n                 scale='none',\n                 norm_type='none',\n                 relu_type='none',\n                 use_pad=True,\n                 bias=True):\n        super(ConvLayer, self).__init__()\n        self.use_pad = use_pad\n        self.norm_type = norm_type\n        if norm_type in ['bn']:\n            bias = False\n\n        stride = 2 if scale == 'down' else 1\n\n        self.scale_func = lambda x: x\n        if scale == 'up':\n            self.scale_func = lambda x: nn.functional.interpolate(x, scale_factor=2, mode='nearest')\n\n        self.reflection_pad = nn.ReflectionPad2d(int(np.ceil((kernel_size - 1.) 
/ 2)))\n        self.conv2d = nn.Conv2d(in_channels, out_channels, kernel_size, stride, bias=bias)\n\n        self.relu = ReluLayer(out_channels, relu_type)\n        self.norm = NormLayer(out_channels, norm_type=norm_type)\n\n    def forward(self, x):\n        out = self.scale_func(x)\n        if self.use_pad:\n            out = self.reflection_pad(out)\n        out = self.conv2d(out)\n        out = self.norm(out)\n        out = self.relu(out)\n        return out\n\n\nclass ResidualBlock(nn.Module):\n    \"\"\"\n    Residual block recommended in: http://torch.ch/blog/2016/02/04/resnets.html\n    \"\"\"\n\n    def __init__(self, c_in, c_out, relu_type='prelu', norm_type='bn', scale='none'):\n        super(ResidualBlock, self).__init__()\n\n        if scale == 'none' and c_in == c_out:\n            self.shortcut_func = lambda x: x\n        else:\n            self.shortcut_func = ConvLayer(c_in, c_out, 3, scale)\n\n        scale_config_dict = {'down': ['none', 'down'], 'up': ['up', 'none'], 'none': ['none', 'none']}\n        scale_conf = scale_config_dict[scale]\n\n        self.conv1 = ConvLayer(c_in, c_out, 3, scale_conf[0], norm_type=norm_type, relu_type=relu_type)\n        self.conv2 = ConvLayer(c_out, c_out, 3, scale_conf[1], norm_type=norm_type, relu_type='none')\n\n    def forward(self, x):\n        identity = self.shortcut_func(x)\n\n        res = self.conv1(x)\n        res = self.conv2(res)\n        return identity + res\n\n\nclass ParseNet(nn.Module):\n\n    def __init__(self,\n                 in_size=128,\n                 out_size=128,\n                 min_feat_size=32,\n                 base_ch=64,\n                 parsing_ch=19,\n                 res_depth=10,\n                 relu_type='LeakyReLU',\n                 norm_type='bn',\n                 ch_range=[32, 256]):\n        super().__init__()\n        self.res_depth = res_depth\n        act_args = {'norm_type': norm_type, 'relu_type': relu_type}\n        min_ch, max_ch = ch_range\n\n        ch_clip = lambda x: max(min_ch, min(x, max_ch))  # noqa: E731\n        min_feat_size = min(in_size, min_feat_size)\n\n        down_steps = int(np.log2(in_size // min_feat_size))\n        up_steps = int(np.log2(out_size // min_feat_size))\n\n        # =============== define encoder-body-decoder ====================\n        self.encoder = []\n        self.encoder.append(ConvLayer(3, base_ch, 3, 1))\n        head_ch = base_ch\n        for i in range(down_steps):\n            cin, cout = ch_clip(head_ch), ch_clip(head_ch * 2)\n            self.encoder.append(ResidualBlock(cin, cout, scale='down', **act_args))\n            head_ch = head_ch * 2\n\n        self.body = []\n        for i in range(res_depth):\n            self.body.append(ResidualBlock(ch_clip(head_ch), ch_clip(head_ch), **act_args))\n\n        self.decoder = []\n        for i in range(up_steps):\n            cin, cout = ch_clip(head_ch), ch_clip(head_ch // 2)\n            self.decoder.append(ResidualBlock(cin, cout, scale='up', **act_args))\n            head_ch = head_ch // 2\n\n        self.encoder = nn.Sequential(*self.encoder)\n        self.body = nn.Sequential(*self.body)\n        self.decoder = nn.Sequential(*self.decoder)\n        self.out_img_conv = ConvLayer(ch_clip(head_ch), 3)\n        self.out_mask_conv = ConvLayer(ch_clip(head_ch), parsing_ch)\n\n    def forward(self, x):\n        feat = self.encoder(x)\n        x = feat + self.body(feat)\n        x = self.decoder(x)\n        out_img = self.out_img_conv(x)\n        out_mask = self.out_mask_conv(x)\n        
return out_mask, out_img\n"
  },
  {
    "path": "facelib/parsing/resnet.py",
    "content": "import torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)\n\n\nclass BasicBlock(nn.Module):\n\n    def __init__(self, in_chan, out_chan, stride=1):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(in_chan, out_chan, stride)\n        self.bn1 = nn.BatchNorm2d(out_chan)\n        self.conv2 = conv3x3(out_chan, out_chan)\n        self.bn2 = nn.BatchNorm2d(out_chan)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = None\n        if in_chan != out_chan or stride != 1:\n            self.downsample = nn.Sequential(\n                nn.Conv2d(in_chan, out_chan, kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(out_chan),\n            )\n\n    def forward(self, x):\n        residual = self.conv1(x)\n        residual = F.relu(self.bn1(residual))\n        residual = self.conv2(residual)\n        residual = self.bn2(residual)\n\n        shortcut = x\n        if self.downsample is not None:\n            shortcut = self.downsample(x)\n\n        out = shortcut + residual\n        out = self.relu(out)\n        return out\n\n\ndef create_layer_basic(in_chan, out_chan, bnum, stride=1):\n    layers = [BasicBlock(in_chan, out_chan, stride=stride)]\n    for i in range(bnum - 1):\n        layers.append(BasicBlock(out_chan, out_chan, stride=1))\n    return nn.Sequential(*layers)\n\n\nclass ResNet18(nn.Module):\n\n    def __init__(self):\n        super(ResNet18, self).__init__()\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)\n        self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)\n        self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)\n        self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(self.bn1(x))\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        feat8 = self.layer2(x)  # 1/8\n        feat16 = self.layer3(feat8)  # 1/16\n        feat32 = self.layer4(feat16)  # 1/32\n        return feat8, feat16, feat32\n"
  },
  {
    "path": "facelib/utils/__init__.py",
    "content": "from .face_utils import align_crop_face_landmarks, compute_increased_bbox, get_valid_bboxes, paste_face_back\nfrom .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir\n\n__all__ = [\n    'align_crop_face_landmarks', 'compute_increased_bbox', 'get_valid_bboxes', 'load_file_from_url', \n    'download_pretrained_models', 'paste_face_back', 'img2tensor', 'scandir'\n]\n"
  },
  {
    "path": "facelib/utils/face_restoration_helper.py",
    "content": "import cv2\nimport numpy as np\nimport os\nimport torch\nimport pdb\nimport dlib\nfrom torchvision.transforms.functional import normalize\n\nfrom facelib.detection import init_detection_model\nfrom facelib.parsing import init_parsing_model\nfrom facelib.utils.misc import img2tensor, imwrite, is_gray, bgr2gray, adain_npy\nfrom basicsr.utils.download_util import load_file_from_url\nfrom basicsr.utils.misc import get_device\n\ndlib_model_url = {\n    'face_detector': 'https://github.com/jnjaby/KEEP/releases/download/v0.1.0/mmod_human_face_detector-4cb19393.dat',\n    'shape_predictor_5': 'https://github.com/jnjaby/KEEP/releases/download/v0.1.0/shape_predictor_5_face_landmarks-c4b1e980.dat'\n}\n# is the test part\ndlib_model_path = {\n    'face_detector': \"./ckpts/dlib/mmod_human_face_detector.dat\",\n    'shape_predictor_5' : \"./ckpts/dlib/shape_predictor_5_face_landmarks.dat\"\n}\n\ndef get_largest_face(det_faces, h, w):\n\n    def get_location(val, length):\n        if val < 0:\n            return 0\n        elif val > length:\n            return length\n        else:\n            return val\n\n    face_areas = []\n    for det_face in det_faces:\n        left = get_location(det_face[0], w)\n        right = get_location(det_face[2], w)\n        top = get_location(det_face[1], h)\n        bottom = get_location(det_face[3], h)\n        face_area = (right - left) * (bottom - top)\n        face_areas.append(face_area)\n    largest_idx = face_areas.index(max(face_areas))\n    return det_faces[largest_idx], largest_idx\n\n\ndef get_center_face(det_faces, h=0, w=0, center=None):\n    if center is not None:\n        center = np.array(center)\n    else:\n        center = np.array([w / 2, h / 2])\n    center_dist = []\n    for det_face in det_faces:\n        face_center = np.array(\n            [(det_face[0] + det_face[2]) / 2, (det_face[1] + det_face[3]) / 2]\n        )\n        dist = np.linalg.norm(face_center - center)\n        center_dist.append(dist)\n    center_idx = center_dist.index(min(center_dist))\n    return det_faces[center_idx], center_idx\n\n\nclass FaceRestoreHelper(object):\n    \"\"\"Helper for the face restoration pipeline (base class).\"\"\"\n\n    def __init__(\n        self,\n        upscale_factor,\n        face_size=512,\n        crop_ratio=(1, 1),\n        det_model='retinaface_resnet50',\n        save_ext='png',\n        template_3points=False,\n        pad_blur=False,\n        use_parse=False,\n        device=None,\n    ):\n        self.template_3points = template_3points  # improve robustness\n        self.upscale_factor = int(upscale_factor)\n        # the cropped face ratio based on the square face\n        self.crop_ratio = crop_ratio  # (h, w)\n        assert (self.crop_ratio[0] >= 1 and self.crop_ratio[1] >= 1), 'crop ration only supports >=1'\n        self.face_size = (\n            int(face_size * self.crop_ratio[1]),\n            int(face_size * self.crop_ratio[0]),\n        )\n        self.det_model = det_model\n\n        if self.det_model == 'dlib':\n            # standard 5 landmarks for FFHQ faces with 1024 x 1024\n            self.face_template = np.array(\n                [\n                    [686.77227723, 488.62376238],\n                    [586.77227723, 493.59405941],\n                    [337.91089109, 488.38613861],\n                    [437.95049505, 493.51485149],\n                    [513.58415842, 678.5049505],\n                ]\n            )\n            self.face_template = self.face_template / (1024 // face_size)\n        
elif self.template_3points:\n            self.face_template = np.array([[192, 240], [319, 240], [257, 371]])\n        else:\n            # standard 5 landmarks for FFHQ faces with 512 x 512\n            # facexlib\n            self.face_template = np.array(\n                [\n                    [192.98138, 239.94708],\n                    [318.90277, 240.1936],\n                    [256.63416, 314.01935],\n                    [201.26117, 371.41043],\n                    [313.08905, 371.15118],\n                ]\n            )\n\n            # dlib: left_eye: 36:41  right_eye: 42:47  nose: 30,32,33,34  left mouth corner: 48  right mouth corner: 54\n            # self.face_template = np.array([[193.65928, 242.98541], [318.32558, 243.06108], [255.67984, 328.82894],\n            #                                 [198.22603, 372.82502], [313.91018, 372.75659]])\n\n        self.face_template = self.face_template * (face_size / 512.0)\n        if self.crop_ratio[0] > 1:\n            self.face_template[:, 1] += face_size * \\\n                (self.crop_ratio[0] - 1) / 2\n        if self.crop_ratio[1] > 1:\n            self.face_template[:, 0] += face_size * \\\n                (self.crop_ratio[1] - 1) / 2\n        self.save_ext = save_ext\n        self.pad_blur = pad_blur\n        if self.pad_blur is True:\n            self.template_3points = False\n\n        self.all_landmarks_5 = []\n        self.det_faces = []\n        self.affine_matrices = []\n        self.inverse_affine_matrices = []\n        self.cropped_faces = []\n        self.restored_faces = []\n        self.pad_input_imgs = []\n\n        if device is None:\n            # self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n            self.device = get_device()\n        else:\n            self.device = device\n\n        # init face detection model\n        if self.det_model == 'dlib':\n            self.face_detector, self.shape_predictor_5 = self.init_dlib(\n                dlib_model_path['face_detector'], dlib_model_path['shape_predictor_5'])\n        else:\n            self.face_detector = init_detection_model(\n                det_model, half=False, device=self.device)\n\n        # init face parsing model\n        self.use_parse = use_parse\n        self.face_parse = init_parsing_model(\n            model_name='parsenet', device=self.device)\n\n    def set_upscale_factor(self, upscale_factor):\n        self.upscale_factor = upscale_factor\n\n    def read_image(self, img):\n        \"\"\"img can be image path or cv2 loaded image.\"\"\"\n        # self.input_img is Numpy array, (h, w, c), BGR, uint8, [0, 255]\n        if isinstance(img, str):\n            img = cv2.imread(img)\n\n        if np.max(img) > 256:  # 16-bit image\n            img = img / 65535 * 255\n        if len(img.shape) == 2:  # gray image\n            img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)\n        elif img.shape[2] == 4:  # BGRA image with alpha channel\n            img = img[:, :, 0:3]\n\n        self.input_img = img\n        self.is_gray = is_gray(img, threshold=10)\n        if self.is_gray:\n            print('Grayscale input: True')\n\n        if min(self.input_img.shape[:2]) < 512:\n            f = 512.0/min(self.input_img.shape[:2])\n            self.input_img = cv2.resize(\n                self.input_img, (0, 0), fx=f, fy=f, interpolation=cv2.INTER_LINEAR)\n\n    def init_dlib(self, detection_path, landmark5_path):\n        \"\"\"Initialize the dlib detectors and predictors.\"\"\"\n        try:\n            import dlib\n      
  except ImportError:\n            print('Please install dlib by running: conda install -c conda-forge dlib')\n        # detection_path = load_file_from_url(\n        #     url=detection_path, model_dir='weights/dlib', progress=True, file_name=None)\n        # landmark5_path = load_file_from_url(\n        #     url=landmark5_path, model_dir='weights/dlib', progress=True, file_name=None)\n        face_detector = dlib.cnn_face_detection_model_v1(detection_path)\n        shape_predictor_5 = dlib.shape_predictor(landmark5_path)\n        return face_detector, shape_predictor_5\n\n    def get_face_landmarks_5_dlib(self,\n                                  only_keep_largest=False,\n                                  scale=1):\n        det_faces = self.face_detector(self.input_img, scale)\n\n        if len(det_faces) == 0:\n            # print('No face detected. Try to increase upsample_num_times.')\n            return 0\n        else:\n            if only_keep_largest:\n                # print('Detect several faces and only keep the largest.')\n                face_areas = []\n                for i in range(len(det_faces)):\n                    face_area = (det_faces[i].rect.right() - det_faces[i].rect.left()) * (\n                        det_faces[i].rect.bottom() - det_faces[i].rect.top())\n                    face_areas.append(face_area)\n                largest_idx = face_areas.index(max(face_areas))\n                self.det_faces = [det_faces[largest_idx]]\n            else:\n                self.det_faces = det_faces\n\n        if len(self.det_faces) == 0:\n            return 0\n\n        for face in self.det_faces:\n            shape = self.shape_predictor_5(self.input_img, face.rect)\n            landmark = np.array([[part.x, part.y] for part in shape.parts()])\n            self.all_landmarks_5.append(landmark)\n\n        return len(self.all_landmarks_5)\n    \n\n    def get_face_landmarks_5(self,\n                             only_keep_largest=False,\n                             only_center_face=False,\n                             resize=None,\n                             blur_ratio=0.01,\n                             eye_dist_threshold=None):\n        if self.det_model == 'dlib':\n            return self.get_face_landmarks_5_dlib(only_keep_largest)\n\n        if resize is None:\n            scale = 1\n            input_img = self.input_img\n        else:\n            h, w = self.input_img.shape[0:2]\n            scale = resize / min(h, w)\n            scale = max(1, scale)  # always scale up\n            h, w = int(h * scale), int(w * scale)\n            interp = cv2.INTER_AREA if scale < 1 else cv2.INTER_LINEAR\n            input_img = cv2.resize(\n                self.input_img, (w, h), interpolation=interp)\n\n        with torch.no_grad():\n            bboxes = self.face_detector.detect_faces(input_img)\n\n        if bboxes is None or bboxes.shape[0] == 0:\n            return 0\n        else:\n            bboxes = bboxes / scale\n\n        for bbox in bboxes:\n            # remove faces with too small eye distance: side faces or too small faces\n            eye_dist = np.linalg.norm([bbox[6] - bbox[8], bbox[7] - bbox[9]])\n            if eye_dist_threshold is not None and (eye_dist < eye_dist_threshold):\n                continue\n\n            if self.template_3points:\n                landmark = np.array([[bbox[i], bbox[i + 1]]\n                                    for i in range(5, 11, 2)])\n            else:\n                landmark = np.array([[bbox[i], bbox[i + 1]]\n        
                            for i in range(5, 15, 2)])\n            self.all_landmarks_5.append(landmark)\n            self.det_faces.append(bbox[0:5])\n\n        if len(self.det_faces) == 0:\n            return 0\n        if only_keep_largest:\n            h, w, _ = self.input_img.shape\n            self.det_faces, largest_idx = get_largest_face(\n                self.det_faces, h, w)\n            self.all_landmarks_5 = [self.all_landmarks_5[largest_idx]]\n        elif only_center_face:\n            h, w, _ = self.input_img.shape\n            self.det_faces, center_idx = get_center_face(self.det_faces, h, w)\n            self.all_landmarks_5 = [self.all_landmarks_5[center_idx]]\n\n        # pad blurry images\n        if self.pad_blur:\n            self.pad_input_imgs = []\n            for landmarks in self.all_landmarks_5:\n                # get landmarks\n                eye_left = landmarks[0, :]\n                eye_right = landmarks[1, :]\n                eye_avg = (eye_left + eye_right) * 0.5\n                mouth_avg = (landmarks[3, :] + landmarks[4, :]) * 0.5\n                eye_to_eye = eye_right - eye_left\n                eye_to_mouth = mouth_avg - eye_avg\n\n                # Get the oriented crop rectangle\n                # x: half width of the oriented crop rectangle\n                x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]\n                #  - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise\n                # norm with the hypotenuse: get the direction\n                x /= np.hypot(*x)  # get the hypotenuse of a right triangle\n                rect_scale = 1.5\n                x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale,\n                         np.hypot(*eye_to_mouth) * 1.8 * rect_scale)\n                # y: half height of the oriented crop rectangle\n                y = np.flipud(x) * [-1, 1]\n\n                # c: center\n                c = eye_avg + eye_to_mouth * 0.1\n                # quad: (left_top, left_bottom, right_bottom, right_top)\n                quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])\n                # qsize: side length of the square\n                qsize = np.hypot(*x) * 2\n                border = max(int(np.rint(qsize * 0.1)), 3)\n\n                # get pad\n                # pad: (width_left, height_top, width_right, height_bottom)\n                pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n                       int(np.ceil(max(quad[:, 1]))))\n                pad = [\n                    max(-pad[0] + border, 1),\n                    max(-pad[1] + border, 1),\n                    max(pad[2] - self.input_img.shape[0] + border, 1),\n                    max(pad[3] - self.input_img.shape[1] + border, 1)\n                ]\n\n                if max(pad) > 1:\n                    # pad image\n                    pad_img = np.pad(\n                        self.input_img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')\n                    # modify landmark coords\n                    landmarks[:, 0] += pad[0]\n                    landmarks[:, 1] += pad[1]\n                    # blur pad images\n                    h, w, _ = pad_img.shape\n                    y, x, _ = np.ogrid[:h, :w, :1]\n                    mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0],\n                                                       np.float32(w - 1 - x) / pad[2]),\n                                      1.0 - np.minimum(np.float32(y) / pad[1],\n   
                                                    np.float32(h - 1 - y) / pad[3]))\n                    blur = int(qsize * blur_ratio)\n                    if blur % 2 == 0:\n                        blur += 1\n                    blur_img = cv2.boxFilter(pad_img, 0, ksize=(blur, blur))\n                    # blur_img = cv2.GaussianBlur(pad_img, (blur, blur), 0)\n\n                    pad_img = pad_img.astype('float32')\n                    pad_img += (blur_img - pad_img) * \\\n                        np.clip(mask * 3.0 + 1.0, 0.0, 1.0)\n                    pad_img += (np.median(pad_img, axis=(0, 1)) -\n                                pad_img) * np.clip(mask, 0.0, 1.0)\n                    pad_img = np.clip(pad_img, 0, 255)  # float32, [0, 255]\n                    self.pad_input_imgs.append(pad_img)\n                else:\n                    self.pad_input_imgs.append(np.copy(self.input_img))\n\n        return len(self.all_landmarks_5)\n\n    def align_warp_face(self, save_cropped_path=None, border_mode='constant'):\n        \"\"\"Align and warp faces with face template.\n        \"\"\"\n        if self.pad_blur:\n            assert len(self.pad_input_imgs) == len(\n                self.all_landmarks_5), f'Mismatched samples: {len(self.pad_input_imgs)} and {len(self.all_landmarks_5)}'\n        for idx, landmark in enumerate(self.all_landmarks_5):\n            # use 5 landmarks to get affine matrix\n            # use cv2.LMEDS method for the equivalence to skimage transform\n            # ref: https://blog.csdn.net/yichxi/article/details/115827338\n            affine_matrix = cv2.estimateAffinePartial2D(\n                landmark, self.face_template, method=cv2.LMEDS\n            )[0]\n            self.affine_matrices.append(affine_matrix)\n            # warp and crop faces\n            if border_mode == 'constant':\n                border_mode = cv2.BORDER_CONSTANT\n            elif border_mode == 'reflect101':\n                border_mode = cv2.BORDER_REFLECT101\n            elif border_mode == 'reflect':\n                border_mode = cv2.BORDER_REFLECT\n            if self.pad_blur:\n                input_img = self.pad_input_imgs[idx]\n            else:\n                input_img = self.input_img\n            # pdb.set_trace()\n            cropped_face = cv2.warpAffine(\n                input_img,\n                affine_matrix,\n                self.face_size,\n                borderMode=border_mode,\n                borderValue=(135, 133, 132),\n            )  # gray\n            self.cropped_faces.append(cropped_face)\n            # save the cropped face\n            if save_cropped_path is not None:\n                path = os.path.splitext(save_cropped_path)[0]\n                save_path = f'{path}_{idx:02d}.{self.save_ext}'\n                imwrite(cropped_face, save_path)\n\n    def get_inverse_affine(self, save_inverse_affine_path=None):\n        \"\"\"Get inverse affine matrix.\"\"\"\n        for idx, affine_matrix in enumerate(self.affine_matrices):\n            inverse_affine = cv2.invertAffineTransform(affine_matrix)\n            inverse_affine *= self.upscale_factor\n            self.inverse_affine_matrices.append(inverse_affine)\n            # save inverse affine matrices\n            if save_inverse_affine_path is not None:\n                path, _ = os.path.splitext(save_inverse_affine_path)\n                save_path = f'{path}_{idx:02d}.pth'\n                torch.save(inverse_affine, save_path)\n\n    def add_restored_face(self, restored_face, input_face=None):\n   
     if self.is_gray:\n            # convert img into grayscale\n            restored_face = bgr2gray(restored_face)\n            if input_face is not None:\n                restored_face = adain_npy(\n                    restored_face, input_face)  # transfer the color\n        self.restored_faces.append(restored_face)\n\n    def paste_faces_to_input_image(\n        self, save_path=None, upsample_img=None, draw_box=False, face_upsampler=None\n    ):\n        h, w, _ = self.input_img.shape\n        h_up, w_up = int(h * self.upscale_factor), int(w * self.upscale_factor)\n\n        if upsample_img is None:\n            # simply resize the background\n            # upsample_img = cv2.resize(self.input_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4)\n            upsample_img = cv2.resize(\n                self.input_img, (w_up, h_up), interpolation=cv2.INTER_LINEAR\n            )\n        else:\n            upsample_img = cv2.resize(\n                upsample_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4\n            )\n\n        assert len(self.restored_faces) == len(\n            self.inverse_affine_matrices), ('length of restored_faces and affine_matrices are different.')\n\n        inv_mask_borders = []\n        for restored_face, inverse_affine in zip(\n            self.restored_faces, self.inverse_affine_matrices\n        ):\n            if face_upsampler is not None:\n                restored_face = face_upsampler.enhance(\n                    restored_face, outscale=self.upscale_factor\n                )[0]\n                inverse_affine /= self.upscale_factor\n                inverse_affine[:, 2] *= self.upscale_factor\n                face_size = (\n                    self.face_size[0] * self.upscale_factor,\n                    self.face_size[1] * self.upscale_factor,\n                )\n            else:\n                # Add an offset to inverse affine matrix, for more precise back alignment\n                if self.upscale_factor > 1:\n                    extra_offset = 0.5 * self.upscale_factor\n                else:\n                    extra_offset = 0\n                inverse_affine[:, 2] += extra_offset\n                face_size = self.face_size\n            inv_restored = cv2.warpAffine(\n                restored_face, inverse_affine, (w_up, h_up))\n\n            # always use square mask\n            mask = np.ones(face_size, dtype=np.float32)\n            inv_mask = cv2.warpAffine(mask, inverse_affine, (w_up, h_up))\n            # remove the black borders\n            inv_mask_erosion = cv2.erode(\n                inv_mask,\n                np.ones(\n                    (int(2 * self.upscale_factor), int(2 * self.upscale_factor)),\n                    np.uint8,\n                ),\n            )\n            pasted_face = inv_mask_erosion[:, :, None] * inv_restored\n            total_face_area = np.sum(inv_mask_erosion)  # // 3\n            # add border\n            if draw_box:\n                h, w = face_size\n                mask_border = np.ones((h, w, 3), dtype=np.float32)\n                border = int(1400 / np.sqrt(total_face_area))\n                mask_border[border : h - border, border : w - border, :] = 0\n                inv_mask_border = cv2.warpAffine(\n                    mask_border, inverse_affine, (w_up, h_up)\n                )\n                inv_mask_borders.append(inv_mask_border)\n            # compute the fusion edge based on the area of face\n            w_edge = int(total_face_area**0.5) // 20\n            erosion_radius = w_edge 
* 2\n            inv_mask_center = cv2.erode(\n                inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)\n            )\n            blur_size = w_edge * 2\n            inv_soft_mask = cv2.GaussianBlur(\n                inv_mask_center, (blur_size + 1, blur_size + 1), 0\n            )\n            if len(upsample_img.shape) == 2:  # upsample_img is gray image\n                upsample_img = upsample_img[:, :, None]\n            inv_soft_mask = inv_soft_mask[:, :, None]\n\n            # cv2.imwrite(\"inv_soft_mask_1.png\", (255 * inv_soft_mask).astype(np.uint8))\n\n            # parse mask\n            if self.use_parse:\n                # inference\n                face_input = cv2.resize(\n                    restored_face, (512, 512), interpolation=cv2.INTER_LINEAR)\n                face_input = img2tensor(face_input.astype(\n                    'float32') / 255., bgr2rgb=True, float32=True)\n                normalize(face_input, (0.5, 0.5, 0.5),\n                          (0.5, 0.5, 0.5), inplace=True)\n                face_input = torch.unsqueeze(face_input, 0).to(self.device)\n                with torch.no_grad():\n                    out = self.face_parse(face_input)[0]\n                out = out.argmax(dim=1).squeeze().cpu().numpy()\n\n                parse_mask = np.zeros(out.shape)\n                MASK_COLORMAP = [0, 255, 255, 255, 255, 255, 255,\n                                 255, 255, 255, 255, 255, 255, 255, 0, 255, 0, 0, 0]\n                for idx, color in enumerate(MASK_COLORMAP):\n                    parse_mask[out == idx] = color\n                #  blur the mask\n                parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11)\n                parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11)\n                # remove the black borders\n                thres = 10\n                parse_mask[:thres, :] = 0\n                parse_mask[-thres:, :] = 0\n                parse_mask[:, :thres] = 0\n                parse_mask[:, -thres:] = 0\n                parse_mask = parse_mask / 255.0\n\n                parse_mask = cv2.resize(parse_mask, face_size)\n                parse_mask = cv2.warpAffine(\n                    parse_mask, inverse_affine, (w_up, h_up), flags=3\n                )\n                inv_soft_parse_mask = parse_mask[:, :, None]\n                # pasted_face = inv_restored\n                fuse_mask = (inv_soft_parse_mask < inv_soft_mask).astype('int')\n                inv_soft_mask = inv_soft_parse_mask * fuse_mask + inv_soft_mask * (1 - fuse_mask)\n\n            # cv2.imwrite(\"z_inv_soft_mask.png\", (255 * inv_soft_mask).astype(np.uint8))\n            # cv2.imwrite(\"z_1-inv_soft_mask.png\", (255 * (1 - inv_soft_mask)).astype(np.uint8))\n            # cv2.imwrite(\"z_upsample_img.png\", upsample_img.astype(np.uint8))\n            # cv2.imwrite(\"z_pasted_face.png\", pasted_face.astype(np.uint8))\n\n            # alpha channel\n            if len(upsample_img.shape) == 3 and upsample_img.shape[2] == 4:\n                alpha = upsample_img[:, :, 3:]\n                upsample_img = inv_soft_mask * pasted_face + \\\n                    (1 - inv_soft_mask) * upsample_img[:, :, 0:3]\n                upsample_img = np.concatenate((upsample_img, alpha), axis=2)\n            else:\n                upsample_img = inv_soft_mask * pasted_face + \\\n                    (1 - inv_soft_mask) * upsample_img\n\n            # cv2.imwrite(\"z_merged.png\", upsample_img.astype(np.uint8))\n            # import time\n            
# time.sleep(100)\n\n        if np.max(upsample_img) > 256:  # 16-bit image\n            upsample_img = upsample_img.astype(np.uint16)\n        else:\n            upsample_img = upsample_img.astype(np.uint8)\n\n        # draw bounding box\n        if draw_box:\n            # upsample_input_img = cv2.resize(input_img, (w_up, h_up))\n            img_color = np.ones([*upsample_img.shape], dtype=np.float32)\n            img_color[:, :, 0] = 0\n            img_color[:, :, 1] = 255\n            img_color[:, :, 2] = 0\n            for inv_mask_border in inv_mask_borders:\n                upsample_img = inv_mask_border * img_color + \\\n                    (1 - inv_mask_border) * upsample_img\n                # upsample_input_img = inv_mask_border * img_color + (1 - inv_mask_border) * upsample_input_img\n\n        if save_path is not None:\n            path = os.path.splitext(save_path)[0]\n            save_path = f'{path}.{self.save_ext}'\n            imwrite(upsample_img, save_path)\n        return upsample_img\n\n    def clean_all(self):\n        self.all_landmarks_5 = []\n        self.restored_faces = []\n        self.affine_matrices = []\n        self.cropped_faces = []\n        self.inverse_affine_matrices = []\n        self.det_faces = []\n        self.pad_input_imgs = []\n\n\nclass FaceAligner(object):\n    def __init__(self,\n                 upscale_factor,\n                 face_size=512,\n                 crop_ratio=(1, 1),\n                 det_model='retinaface_resnet50',\n                 save_ext='png',\n                 template_3points=False,\n                 pad_blur=False,\n                 use_parse=False,\n                 device=None):\n        self.template_3points = template_3points  # improve robustness\n        self.upscale_factor = int(upscale_factor)\n        # the cropped face ratio based on the square face\n        self.crop_ratio = crop_ratio  # (h, w)\n        assert (self.crop_ratio[0] >= 1 and self.crop_ratio[1]\n                >= 1), 'crop_ratio only supports values >= 1'\n        self.face_size = (\n            int(face_size * self.crop_ratio[1]), int(face_size * self.crop_ratio[0]))\n        self.det_model = det_model\n\n        if self.det_model == 'dlib':\n            # standard 5 landmarks for FFHQ faces with 1024 x 1024\n            self.face_template = np.array([[686.77227723, 488.62376238], [586.77227723, 493.59405941],\n                                           [337.91089109, 488.38613861], [\n                                               437.95049505, 493.51485149],\n                                           [513.58415842, 678.5049505]])\n            self.face_template = self.face_template / (1024 // face_size)\n        elif self.template_3points:\n            self.face_template = np.array([[192, 240], [319, 240], [257, 371]])\n        else:\n            # standard 5 landmarks for FFHQ faces with 512 x 512\n            # facexlib\n            self.face_template = np.array([[192.98138, 239.94708], [318.90277, 240.1936], [256.63416, 314.01935],\n                                           [201.26117, 371.41043], [313.08905, 371.15118]])\n\n            # dlib: left_eye: 36:41  right_eye: 42:47  nose: 30,32,33,34  left mouth corner: 48  right mouth corner: 54\n            # self.face_template = np.array([[193.65928, 242.98541], [318.32558, 243.06108], [255.67984, 328.82894],\n            #                                 [198.22603, 372.82502], [313.91018, 372.75659]])\n\n        self.face_template = self.face_template * (face_size / 512.0)\n        if 
self.crop_ratio[0] > 1:\n            self.face_template[:, 1] += face_size * \\\n                (self.crop_ratio[0] - 1) / 2\n        if self.crop_ratio[1] > 1:\n            self.face_template[:, 0] += face_size * \\\n                (self.crop_ratio[1] - 1) / 2\n        self.save_ext = save_ext\n        self.pad_blur = pad_blur\n        if self.pad_blur is True:\n            self.template_3points = False\n\n        self.all_landmarks_5 = []\n        self.det_faces = []\n        self.affine_matrices = []\n        self.inverse_affine_matrices = []\n        self.cropped_faces = []\n        self.restored_faces = []\n        self.pad_input_imgs = []\n\n        if device is None:\n            # self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n            self.device = get_device()\n        else:\n            self.device = device\n\n    def set_image(self, img):\n        self.input_img = img\n\n    def align_pair_face(self, img_lq, img_gt, landmarks):\n        img_lq = (img_lq[:, :, ::-1] * 255).round().astype(np.uint8)\n        img_gt = (img_gt[:, :, ::-1] * 255).round().astype(np.uint8)\n\n        self.set_image(img_gt)\n        img_lq, img_gt = self.align_warp_face(img_lq, img_gt, landmarks)\n        img_lq = img_lq[:, :, ::-1] / 255.0\n        img_gt = img_gt[:, :, ::-1] / 255.0\n        return img_lq, img_gt\n\n    def align_single_face(self, img, landmarks, border_mode='constant'):\n        \"\"\"Align and warp a face with the face template.\n           Assumes the input image is a NumPy array, (h, w, c), RGB, float in [0, 1];\n           it is converted to BGR uint8 internally and converted back on return.\n        \"\"\"\n        # warp and crop faces\n        if border_mode == 'constant':\n            border_mode = cv2.BORDER_CONSTANT\n        elif border_mode == 'reflect101':\n            border_mode = cv2.BORDER_REFLECT101\n        elif border_mode == 'reflect':\n            border_mode = cv2.BORDER_REFLECT\n\n        img = (img[:, :, ::-1] * 255).round().astype(np.uint8)\n\n        affine_matrix = cv2.estimateAffinePartial2D(\n            landmarks, self.face_template, method=cv2.LMEDS)[0]\n        img = cv2.warpAffine(\n            img, affine_matrix, (img.shape[1], img.shape[0]), borderMode=border_mode, borderValue=(135, 133, 132))  # dsize is (w, h); gray border\n        img = img[:, :, ::-1] / 255.0\n        return img\n\n    def align_warp_face(self, img_lq, img_gt, landmarks, border_mode='constant'):\n        \"\"\"Align and warp faces with the face template.\n           Assumes input images are NumPy arrays, (h, w, c), BGR, uint8, [0, 255].\n        \"\"\"\n        # use 5 landmarks to get affine matrix\n        # use cv2.LMEDS method for the equivalence to skimage transform\n        # ref: https://blog.csdn.net/yichxi/article/details/115827338\n        scale = img_gt.shape[0] / img_lq.shape[0]\n        # warp and crop faces\n        if border_mode == 'constant':\n            border_mode = cv2.BORDER_CONSTANT\n        elif border_mode == 'reflect101':\n            border_mode = cv2.BORDER_REFLECT101\n        elif border_mode == 'reflect':\n            border_mode = cv2.BORDER_REFLECT\n\n        affine_matrix = cv2.estimateAffinePartial2D(\n            landmarks, self.face_template, method=cv2.LMEDS)[0]\n        img_gt = cv2.warpAffine(\n            img_gt, affine_matrix, (img_gt.shape[1], img_gt.shape[0]), borderMode=border_mode, borderValue=(135, 133, 132))  # dsize is (w, h); gray border\n\n        affine_matrix = cv2.estimateAffinePartial2D(\n            landmarks / scale, self.face_template / scale, method=cv2.LMEDS)[0]\n        img_lq = cv2.warpAffine(\n            img_lq, affine_matrix, (img_lq.shape[1], img_lq.shape[0]), 
borderMode=border_mode, borderValue=(135, 133, 132))  # gray\n\n        return img_lq, img_gt\n\n    def clean_all(self):\n        self.all_landmarks_5 = []\n        self.restored_faces = []\n        self.affine_matrices = []\n        self.cropped_faces = []\n        self.inverse_affine_matrices = []\n        self.det_faces = []\n        self.pad_input_imgs = []\n"
  },
  {
    "path": "facelib/utils/face_utils.py",
    "content": "import cv2\nimport numpy as np\nimport torch\n\n\ndef compute_increased_bbox(bbox, increase_area, preserve_aspect=True):\n    left, top, right, bot = bbox\n    width = right - left\n    height = bot - top\n\n    if preserve_aspect:\n        width_increase = max(increase_area, ((1 + 2 * increase_area) * height - width) / (2 * width))\n        height_increase = max(increase_area, ((1 + 2 * increase_area) * width - height) / (2 * height))\n    else:\n        width_increase = height_increase = increase_area\n    left = int(left - width_increase * width)\n    top = int(top - height_increase * height)\n    right = int(right + width_increase * width)\n    bot = int(bot + height_increase * height)\n    return (left, top, right, bot)\n\n\ndef get_valid_bboxes(bboxes, h, w):\n    left = max(bboxes[0], 0)\n    top = max(bboxes[1], 0)\n    right = min(bboxes[2], w)\n    bottom = min(bboxes[3], h)\n    return (left, top, right, bottom)\n\n\ndef align_crop_face_landmarks(img,\n                              landmarks,\n                              output_size,\n                              transform_size=None,\n                              enable_padding=True,\n                              return_inverse_affine=False,\n                              shrink_ratio=(1, 1)):\n    \"\"\"Align and crop face with landmarks.\n\n    The output_size and transform_size are based on width. The height is\n    adjusted based on shrink_ratio_h/shring_ration_w.\n\n    Modified from:\n    https://github.com/NVlabs/ffhq-dataset/blob/master/download_ffhq.py\n\n    Args:\n        img (Numpy array): Input image.\n        landmarks (Numpy array): 5 or 68 or 98 landmarks.\n        output_size (int): Output face size.\n        transform_size (ing): Transform size. Usually the four time of\n            output_size.\n        enable_padding (float): Default: True.\n        shrink_ratio (float | tuple[float] | list[float]): Shring the whole\n            face for height and width (crop larger area). 
Default: (1, 1).\n\n    Returns:\n        (Numpy array): Cropped face.\n    \"\"\"\n    lm_type = 'retinaface_5'  # Options: dlib_5, retinaface_5\n\n    if isinstance(shrink_ratio, (float, int)):\n        shrink_ratio = (shrink_ratio, shrink_ratio)\n    if transform_size is None:\n        transform_size = output_size * 4\n\n    # Parse landmarks\n    lm = np.array(landmarks)\n    if lm.shape[0] == 5 and lm_type == 'retinaface_5':\n        eye_left = lm[0]\n        eye_right = lm[1]\n        mouth_avg = (lm[3] + lm[4]) * 0.5\n    elif lm.shape[0] == 5 and lm_type == 'dlib_5':\n        lm_eye_left = lm[2:4]\n        lm_eye_right = lm[0:2]\n        eye_left = np.mean(lm_eye_left, axis=0)\n        eye_right = np.mean(lm_eye_right, axis=0)\n        mouth_avg = lm[4]\n    elif lm.shape[0] == 68:\n        lm_eye_left = lm[36:42]\n        lm_eye_right = lm[42:48]\n        eye_left = np.mean(lm_eye_left, axis=0)\n        eye_right = np.mean(lm_eye_right, axis=0)\n        mouth_avg = (lm[48] + lm[54]) * 0.5\n    elif lm.shape[0] == 98:\n        lm_eye_left = lm[60:68]\n        lm_eye_right = lm[68:76]\n        eye_left = np.mean(lm_eye_left, axis=0)\n        eye_right = np.mean(lm_eye_right, axis=0)\n        mouth_avg = (lm[76] + lm[82]) * 0.5\n\n    eye_avg = (eye_left + eye_right) * 0.5\n    eye_to_eye = eye_right - eye_left\n    eye_to_mouth = mouth_avg - eye_avg\n\n    # Get the oriented crop rectangle\n    # x: half width of the oriented crop rectangle\n    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]\n    #  - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise\n    # norm with the hypotenuse: get the direction\n    x /= np.hypot(*x)  # get the hypotenuse of a right triangle\n    rect_scale = 1  # TODO: you can edit it to get larger rect\n    x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale)\n    # y: half height of the oriented crop rectangle\n    y = np.flipud(x) * [-1, 1]\n\n    x *= shrink_ratio[1]  # width\n    y *= shrink_ratio[0]  # height\n\n    # c: center\n    c = eye_avg + eye_to_mouth * 0.1\n    # quad: (left_top, left_bottom, right_bottom, right_top)\n    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])\n    # qsize: side length of the square\n    qsize = np.hypot(*x) * 2\n\n    quad_ori = np.copy(quad)\n    # Shrink, for large face\n    # TODO: do we really need shrink\n    shrink = int(np.floor(qsize / output_size * 0.5))\n    if shrink > 1:\n        h, w = img.shape[0:2]\n        rsize = (int(np.rint(float(w) / shrink)), int(np.rint(float(h) / shrink)))\n        img = cv2.resize(img, rsize, interpolation=cv2.INTER_AREA)\n        quad /= shrink\n        qsize /= shrink\n\n    # Crop\n    h, w = img.shape[0:2]\n    border = max(int(np.rint(qsize * 0.1)), 3)\n    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n            int(np.ceil(max(quad[:, 1]))))\n    crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, w), min(crop[3] + border, h))\n    if crop[2] - crop[0] < w or crop[3] - crop[1] < h:\n        img = img[crop[1]:crop[3], crop[0]:crop[2], :]\n        quad -= crop[0:2]\n\n    # Pad\n    # pad: (width_left, height_top, width_right, height_bottom)\n    h, w = img.shape[0:2]\n    pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n           int(np.ceil(max(quad[:, 1]))))\n    pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - w + border, 0), max(pad[3] - h 
+ border, 0))\n    if enable_padding and max(pad) > border - 4:\n        pad = np.maximum(pad, int(np.rint(qsize * 0.3)))\n        img = np.pad(img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')\n        h, w = img.shape[0:2]\n        y, x, _ = np.ogrid[:h, :w, :1]\n        mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0],\n                                           np.float32(w - 1 - x) / pad[2]),\n                          1.0 - np.minimum(np.float32(y) / pad[1],\n                                           np.float32(h - 1 - y) / pad[3]))\n        blur = int(qsize * 0.02)\n        if blur % 2 == 0:\n            blur += 1\n        blur_img = cv2.boxFilter(img, 0, ksize=(blur, blur))\n\n        img = img.astype('float32')\n        img += (blur_img - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)\n        img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)\n        img = np.clip(img, 0, 255)  # float32, [0, 255]\n        quad += pad[:2]\n\n    # Transform use cv2\n    h_ratio = shrink_ratio[0] / shrink_ratio[1]\n    dst_h, dst_w = int(transform_size * h_ratio), transform_size\n    template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]])\n    # use cv2.LMEDS method for the equivalence to skimage transform\n    # ref: https://blog.csdn.net/yichxi/article/details/115827338\n    affine_matrix = cv2.estimateAffinePartial2D(quad, template, method=cv2.LMEDS)[0]\n    cropped_face = cv2.warpAffine(\n        img, affine_matrix, (dst_w, dst_h), borderMode=cv2.BORDER_CONSTANT, borderValue=(135, 133, 132))  # gray\n\n    if output_size < transform_size:\n        cropped_face = cv2.resize(\n            cropped_face, (output_size, int(output_size * h_ratio)), interpolation=cv2.INTER_LINEAR)\n\n    if return_inverse_affine:\n        dst_h, dst_w = int(output_size * h_ratio), output_size\n        template = np.array([[0, 0], [0, dst_h], [dst_w, dst_h], [dst_w, 0]])\n        # use cv2.LMEDS method for the equivalence to skimage transform\n        # ref: https://blog.csdn.net/yichxi/article/details/115827338\n        affine_matrix = cv2.estimateAffinePartial2D(\n            quad_ori, np.array([[0, 0], [0, output_size], [dst_w, dst_h], [dst_w, 0]]), method=cv2.LMEDS)[0]\n        inverse_affine = cv2.invertAffineTransform(affine_matrix)\n    else:\n        inverse_affine = None\n    return cropped_face, inverse_affine\n\n\ndef paste_face_back(img, face, inverse_affine):\n    h, w = img.shape[0:2]\n    face_h, face_w = face.shape[0:2]\n    inv_restored = cv2.warpAffine(face, inverse_affine, (w, h))\n    mask = np.ones((face_h, face_w, 3), dtype=np.float32)\n    inv_mask = cv2.warpAffine(mask, inverse_affine, (w, h))\n    # remove the black borders\n    inv_mask_erosion = cv2.erode(inv_mask, np.ones((2, 2), np.uint8))\n    inv_restored_remove_border = inv_mask_erosion * inv_restored\n    total_face_area = np.sum(inv_mask_erosion) // 3\n    # compute the fusion edge based on the area of face\n    w_edge = int(total_face_area**0.5) // 20\n    erosion_radius = w_edge * 2\n    inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8))\n    blur_size = w_edge * 2\n    inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0)\n    img = inv_soft_mask * inv_restored_remove_border + (1 - inv_soft_mask) * img\n    # float32, [0, 255]\n    return img\n\n\nif __name__ == '__main__':\n    import os\n\n    from facelib.detection import init_detection_model\n    from facelib.utils.face_restoration_helper import 
get_largest_face\n\n    img_path = '/home/wxt/datasets/ffhq/ffhq_wild/00009.png'\n    img_name = os.path.splitext(os.path.basename(img_path))[0]\n\n    # initialize model\n    det_net = init_detection_model('retinaface_resnet50', half=False)\n    img_ori = cv2.imread(img_path)\n    h, w = img_ori.shape[0:2]\n    # if larger than 800, scale it\n    scale = max(h / 800, w / 800)\n    if scale > 1:\n        img = cv2.resize(img_ori, (int(w / scale), int(h / scale)), interpolation=cv2.INTER_LINEAR)\n    else:\n        img = img_ori\n\n    with torch.no_grad():\n        bboxes = det_net.detect_faces(img, 0.97)\n    if scale > 1:\n        bboxes *= scale  # note: the confidence score column is scaled as well\n    bboxes = get_largest_face(bboxes, h, w)[0]\n\n    landmarks = np.array([[bboxes[i], bboxes[i + 1]] for i in range(5, 15, 2)])\n\n    cropped_face, inverse_affine = align_crop_face_landmarks(\n        img_ori,\n        landmarks,\n        output_size=512,\n        transform_size=None,\n        enable_padding=True,\n        return_inverse_affine=True,\n        shrink_ratio=(1, 1))\n\n    os.makedirs('tmp', exist_ok=True)  # cv2.imwrite fails silently if the folder is missing\n    cv2.imwrite(f'tmp/{img_name}_cropped_face.png', cropped_face)\n    img = paste_face_back(img_ori, cropped_face, inverse_affine)\n    cv2.imwrite(f'tmp/{img_name}_back.png', img)\n
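    # Added remark: paste_face_back feathers the seam with a soft mask whose\n    # edge width scales with the face area (w_edge = sqrt(total_face_area) // 20),\n    # so larger faces get a proportionally wider, smoother blend region.\n"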
  },
  {
    "path": "facelib/utils/misc.py",
    "content": "import cv2\nimport os\nimport os.path as osp\nimport numpy as np\nfrom PIL import Image\nimport torch\nfrom torch.hub import download_url_to_file, get_dir\nfrom urllib.parse import urlparse\n# from basicsr.utils.download_util import download_file_from_google_drive\n\nROOT_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\n\ndef download_pretrained_models(file_ids, save_path_root):\n    import gdown\n    \n    os.makedirs(save_path_root, exist_ok=True)\n\n    for file_name, file_id in file_ids.items():\n        file_url = 'https://drive.google.com/uc?id='+file_id\n        save_path = osp.abspath(osp.join(save_path_root, file_name))\n        if osp.exists(save_path):\n            user_response = input(f'{file_name} already exist. Do you want to cover it? Y/N\\n')\n            if user_response.lower() == 'y':\n                print(f'Covering {file_name} to {save_path}')\n                gdown.download(file_url, save_path, quiet=False)\n                # download_file_from_google_drive(file_id, save_path)\n            elif user_response.lower() == 'n':\n                print(f'Skipping {file_name}')\n            else:\n                raise ValueError('Wrong input. Only accepts Y/N.')\n        else:\n            print(f'Downloading {file_name} to {save_path}')\n            gdown.download(file_url, save_path, quiet=False)\n            # download_file_from_google_drive(file_id, save_path)\n\n\ndef imwrite(img, file_path, params=None, auto_mkdir=True):\n    \"\"\"Write image to file.\n\n    Args:\n        img (ndarray): Image array to be written.\n        file_path (str): Image file path.\n        params (None or list): Same as opencv's :func:`imwrite` interface.\n        auto_mkdir (bool): If the parent folder of `file_path` does not exist,\n            whether to create it automatically.\n\n    Returns:\n        bool: Successful or not.\n    \"\"\"\n    if auto_mkdir:\n        dir_name = os.path.abspath(os.path.dirname(file_path))\n        os.makedirs(dir_name, exist_ok=True)\n    return cv2.imwrite(file_path, img, params)\n\n\ndef img2tensor(imgs, bgr2rgb=True, float32=True):\n    \"\"\"Numpy array to tensor.\n\n    Args:\n        imgs (list[ndarray] | ndarray): Input images.\n        bgr2rgb (bool): Whether to change bgr to rgb.\n        float32 (bool): Whether to change to float32.\n\n    Returns:\n        list[tensor] | tensor: Tensor images. 
If returned results only have\n            one element, just return tensor.\n    \"\"\"\n\n    def _totensor(img, bgr2rgb, float32):\n        if img.shape[2] == 3 and bgr2rgb:\n            if img.dtype == 'float64':\n                img = img.astype('float32')\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n        img = torch.from_numpy(img.transpose(2, 0, 1))\n        if float32:\n            img = img.float()\n        return img\n\n    if isinstance(imgs, list):\n        return [_totensor(img, bgr2rgb, float32) for img in imgs]\n    else:\n        return _totensor(imgs, bgr2rgb, float32)\n\n\ndef load_file_from_url(url, model_dir=None, progress=True, file_name=None):\n    \"\"\"Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py\n    \"\"\"\n    if model_dir is None:\n        hub_dir = get_dir()\n        model_dir = os.path.join(hub_dir, 'checkpoints')\n\n    os.makedirs(os.path.join(ROOT_DIR, model_dir), exist_ok=True)\n\n    parts = urlparse(url)\n    filename = os.path.basename(parts.path)\n    if file_name is not None:\n        filename = file_name\n    cached_file = os.path.abspath(os.path.join(ROOT_DIR, model_dir, filename))\n    if not os.path.exists(cached_file):\n        print(f'Downloading: \"{url}\" to {cached_file}\\n')\n        download_url_to_file(url, cached_file, hash_prefix=None, progress=progress)\n    return cached_file\n\n\ndef scandir(dir_path, suffix=None, recursive=False, full_path=False):\n    \"\"\"Scan a directory to find the interested files.\n    Args:\n        dir_path (str): Path of the directory.\n        suffix (str | tuple(str), optional): File suffix that we are\n            interested in. Default: None.\n        recursive (bool, optional): If set to True, recursively scan the\n            directory. 
Default: False.\n        full_path (bool, optional): If set to True, include the dir_path.\n            Default: False.\n    Returns:\n        A generator for all the interested files with relative paths.\n    \"\"\"\n\n    if (suffix is not None) and not isinstance(suffix, (str, tuple)):\n        raise TypeError('\"suffix\" must be a string or tuple of strings')\n\n    root = dir_path\n\n    def _scandir(dir_path, suffix, recursive):\n        for entry in os.scandir(dir_path):\n            if not entry.name.startswith('.') and entry.is_file():\n                if full_path:\n                    return_path = entry.path\n                else:\n                    return_path = osp.relpath(entry.path, root)\n\n                if suffix is None:\n                    yield return_path\n                elif return_path.endswith(suffix):\n                    yield return_path\n            else:\n                if recursive:\n                    yield from _scandir(entry.path, suffix=suffix, recursive=recursive)\n                else:\n                    continue\n\n    return _scandir(dir_path, suffix=suffix, recursive=recursive)\n\n\ndef is_gray(img, threshold=10):\n    img = Image.fromarray(img)\n    if len(img.getbands()) == 1:\n        return True\n    img1 = np.asarray(img.getchannel(channel=0), dtype=np.int16)\n    img2 = np.asarray(img.getchannel(channel=1), dtype=np.int16)\n    img3 = np.asarray(img.getchannel(channel=2), dtype=np.int16)\n    diff1 = (img1 - img2).var()\n    diff2 = (img2 - img3).var()\n    diff3 = (img3 - img1).var()\n    diff_sum = (diff1 + diff2 + diff3) / 3.0\n    if diff_sum <= threshold:\n        return True\n    else:\n        return False\n\ndef rgb2gray(img, out_channel=3):\n    r, g, b = img[:,:,0], img[:,:,1], img[:,:,2]\n    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b\n    if out_channel == 3:\n        gray = gray[:,:,np.newaxis].repeat(3, axis=2)\n    return gray\n\ndef bgr2gray(img, out_channel=3):\n    b, g, r = img[:,:,0], img[:,:,1], img[:,:,2]\n    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b\n    if out_channel == 3:\n        gray = gray[:,:,np.newaxis].repeat(3, axis=2)\n    return gray\n\n\ndef calc_mean_std(feat, eps=1e-5):\n    \"\"\"Compute per-channel mean and std of a feature map.\n\n    Args:\n        feat (ndarray): 3D array, (h, w, c).\n    \"\"\"\n    size = feat.shape\n    assert len(size) == 3, 'The input feature should be a 3D array.'\n    c = size[2]\n    feat_var = feat.reshape(-1, c).var(axis=0) + eps\n    feat_std = np.sqrt(feat_var).reshape(1, 1, c)\n    feat_mean = feat.reshape(-1, c).mean(axis=0).reshape(1, 1, c)\n    return feat_mean, feat_std\n\n\ndef adain_npy(content_feat, style_feat):\n    \"\"\"Adaptive instance normalization for numpy.\n\n    Args:\n        content_feat (ndarray): The input feature.\n        style_feat (ndarray): The reference feature.\n    \"\"\"\n    size = content_feat.shape\n    style_mean, style_std = calc_mean_std(style_feat)\n    content_mean, content_std = calc_mean_std(content_feat)\n    normalized_feat = (content_feat - np.broadcast_to(content_mean, size)) / np.broadcast_to(content_std, size)\n    return normalized_feat * np.broadcast_to(style_std, size) + np.broadcast_to(style_mean, size)
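\n\n\nif __name__ == '__main__':\n    # Added usage sketch (not part of the original module): adain_npy transfers\n    # per-channel mean/std from a reference image onto another, which is how\n    # restored grayscale faces inherit the input face's color statistics.\n    rng = np.random.default_rng(0)\n    content = rng.uniform(0, 255, (8, 8, 3)).astype(np.float32)\n    style = rng.uniform(0, 255, (8, 8, 3)).astype(np.float32)\n    out = adain_npy(content, style)\n    # The transferred statistics should match the reference's (up to eps).\n    print(np.allclose(calc_mean_std(out)[0], calc_mean_std(style)[0], atol=1e-3))\n"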
  },
  {
    "path": "options/clip5_bs2_512_align_nofix_multiscale.yaml",
    "content": "# general settings\nname: BFR_test\nmodel_type: CodeFormerDirichletVideoModel\nnum_gpu: 1\nmanual_seed: 0\n\n# dataset and data loader settings\ndatasets:\n  train:\n    name: VFHQ-Train\n    type: VFHQRealDegradationDatasetNew\n    dataroot_gt:                    # replace your training data root path\n    global_meta_info_file:          # replace with your training data meta info\n    dataroot_meta_info:             # replace with the landmarks info of your training data\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True # make sure that dataroot_meta_info is the landmarks of your data\n    normalize: True\n    interval_list: [1]\n    random_reverse: True\n    use_flip: False\n    use_rot: False\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']\n    kernel_prob:  [0.7, 0.3]\n    blur_x_sigma: [0.1, 10]\n    blur_y_sigma: [0.1, 10] \n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40]  \n    crf_range:    [18, 25]\n    vcodec:       ['libx264']         \n    vcodec_prob:  [1]                 \n\n\n    # data loader\n    num_worker_per_gpu: 4\n    batch_size_per_gpu: 2\n    dataset_enlarge_ratio: 20\n    prefetch_mode: ~\n\n  val:\n    name: VFHQ-Test-50\n    type: VFHQRealDegradationDatasetNew\n    dataroot_gt:                           # replace with your test data root path\n    global_meta_info_file:                 # test data meta\n    dataroot_meta_info:                    # landmark info of your test data\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True\n    normalize: True\n    interval_list: [1]\n    random_reverse: False\n    use_flip: False\n    use_rot: False\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']    \n    kernel_prob:  [0.7, 0.3]\n    blur_x_sigma: [0.1, 10]\n    blur_y_sigma: [0.1, 10] \n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40] \n    crf_range:    [18, 25]\n    vcodec:       ['libx264']        \n    vcodec_prob:  [1]                \n    # data loader\n    num_worker_per_gpu: 2\n    batch_size_per_gpu: 1\n    dataset_enlarge_ratio: 1\n    prefetch_mode: ~\n\n# network structures\nnetwork_g:\n  type: TemporalCodeFormerDirDistMultiScale\n  dim_embed: 512\n  n_head: 8\n  n_layers: 9\n  codebook_size: 1024\n  connect_list: ['32', '64', '128', '256']\n  # fix_modules: ['encoder','quantize', 'fuse_convs_dict', 'feat_emb'] \n  fix_modules: []                         # you can fix some module \n  frame_length: 5\n\nnetwork_d:\n  type: VQGANDiscriminator\n  nc: 3\n  ndf: 64\n  n_layers: 4\n\n# path\npath:\n  pretrain_network_g: './ckpts/CodeFormer/codeformer.pth'\n  param_key_g: params_ema\n  strict_load_g: false\n  pretrain_network_d: './ckpts/CodeFormer/vqgan_discriminator.pth'\n  strict_load_d: true\n  resume_state: ~\n\n# base_lr(4.5e-6)*bach_size(4)\ntrain:\n  cross_entropy_loss: true\n  entropy_loss_weight: 0.5\n  fidelity_weight: 0\n\n  optim_g:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n  optim_d:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n\n  # scheduler:\n  #   type: MultiStepLR\n  #   milestones: [30000, 45000]\n  #   gamma: 0.5\n\n  scheduler:\n    type: CosineAnnealingRestartLR\n    periods: [100000]\n    restart_weights: [1]\n    eta_min: !!float 2e-5\n\n  total_iter: 100000\n\n  warmup_iter: -1  # no warm up\n  ema_decay: 0.997\n\n# training loss\n  pixel_opt:\n    type: L1Loss\n    loss_weight: 1.0\n    reduction: 
mean\n\n  perceptual_opt:\n    type: LPIPSLoss\n    loss_weight: 1.0\n    use_input_norm: true\n    range_norm: true\n\n  dirichletKL_opt:\n    type: DirichletKLLoss\n    loss_weight: 1.00\n    kl_coef: 0.1\n\n  gan_opt:\n    type: GANLoss\n    gan_type: hinge\n    loss_weight: !!float 1.0 # adaptive_weighting\n\n  use_adaptive_weight: true\n\n  net_g_start_iter: 0\n  net_d_iters: 1\n  net_d_start_iter: 6000000000\n  manual_seed: 0\n\n# validation settings\nval:\n  val_freq: !!float 10 # validation interval (iterations)\n  save_img: true\n\n  metrics:\n    psnr: # metric name, can be arbitrary\n      type: calculate_psnr\n      crop_border: 4\n      test_y_channel: false\n\n# logging settings\nlogger:\n  print_freq: 1                   # Frequency (iterations) to print training logs to console\n  save_checkpoint_freq: !!float 10  # Frequency (iterations) to save model checkpoints\n  use_tb_logger: true             # Enable TensorBoard logging\n  wandb:\n    mode: offline                 # Logging mode: 'offline' (local only) or 'online' (sync to Weights & Biases)\n                                  # Set to 'online' to upload training metrics to Weights & Biases\n    project: project_name         # WandB project name\n    resume_id: ~                  # ID to resume a previous WandB run (leave as ~ for new runs)\n\n# dist training settings\ndist_params:\n  backend: nccl\n  port: 29412\n\nfind_unused_parameters: false
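\n\n# Added usage note (an assumption based on the BasicSR framework this config\n# follows; adjust to the repo's actual entry point):\n#   python basicsr/train.py -opt options/clip5_bs2_512_align_nofix_multiscale.yaml\n"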
  },
  {
    "path": "options/clip5_bs2_512_align_nofix_multiscale_color.yaml",
    "content": "# general settings\nname: codeformer_dirichlet_clip5_bs2_align_nofix_multiscale_color\nmodel_type: CodeFormerDirichletVideoModel\nnum_gpu: 8\nmanual_seed: 0\n\n# dataset and data loader settings\ndatasets:\n  train:\n    name: VFHQ-Train\n    type: ColorizationDataset\n    dataroot_gt: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_DATA_512x512\n    global_meta_info_file:  # path to your training data meta file\n    dataroot_meta_info: #\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True\n    normalize: True\n    interval_list: [1]\n    random_reverse: True\n    use_flip: False\n    use_rot: False\n    # large degradation in stageII\n    # blur_kernel_size: 41\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']    # 模糊核的类型列表\n    # kernel_prob:  [0.5, 0.5]        # 模糊核类型的概率\n    kernel_prob:  [0.7, 0.3]\n    # blur_x_sigma: [0.2, 3]          # 模糊核在 x 方向的标准差范围\n    blur_x_sigma: [0.1, 10]\n    # blur_y_sigma: [0.2, 3]          # 模糊核在 y 方向的标准差范围\n    blur_y_sigma: [0.1, 10] \n    # noise_range:  [0, 25]           # 噪声范围\n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40]  # 不同插值方法的概率\n    # use_crf:      True              # 是否使用crf压缩\n    # crf_range:    [10, 30]          # CRF 压缩范围\n    crf_range:    [18, 25]\n    vcodec:       ['libx264']         # 视频编码格式\n    vcodec_prob:  [1]                 # 视频编码格式的概率\n\n    latent_gt_path: ~ # without pre-calculated latent code\n    # latent_gt_path: './experiments/pretrained_models/VQGAN/latent_gt_code1024.pth'\n\n    # data loader\n    num_worker_per_gpu: 4\n    batch_size_per_gpu: 2\n    dataset_enlarge_ratio: 10\n    prefetch_mode: ~\n\n  val:\n    name: VFHQ-Test-50\n    type: ColorizationDataset\n    # dataroot_gt: ../VFHQ_Test/VAL_cases\n    # global_meta_info_file: ./vfhq_val_data_info.txt\n    # dataroot_meta_info: ./vfhq_val_landmarks\n    dataroot_gt: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_Test/TEST_DATA\n    global_meta_info_file: ./vfhq_test.txt\n    dataroot_meta_info: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_Test/vfhq_test_landmarks\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True\n    normalize: True\n    interval_list: [1]\n    random_reverse: False\n    use_flip: False\n    use_rot: False\n    # large degradation in stageII\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']    # 模糊核的类型列表\n    # kernel_prob:  [0.5, 0.5]        # 模糊核类型的概率\n    kernel_prob:  [0.7, 0.3]\n    # blur_x_sigma: [0.2, 3]          # 模糊核在 x 方向的标准差范围\n    blur_x_sigma: [0.1, 10]\n    # blur_y_sigma: [0.2, 3]          # 模糊核在 y 方向的标准差范围\n    blur_y_sigma: [0.1, 10] \n    # noise_range:  [0, 25]           # 噪声范围\n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40]  # 不同插值方法的概率\n    # use_crf:      True              # 是否使用crf压缩\n    # crf_range:    [10, 30]          # CRF 压缩范围\n    crf_range:    [18, 25]\n    vcodec:       ['libx264']         # 视频编码格式\n    vcodec_prob:  [1]                 # 视频编码格式的概率\n    # data loader\n    num_worker_per_gpu: 4\n    batch_size_per_gpu: 2\n    dataset_enlarge_ratio: 1\n    prefetch_mode: ~\n\n# network structures\nnetwork_g:\n  type: TemporalCodeFormerDirDistMultiScale\n  dim_embed: 512\n  n_head: 8\n  n_layers: 9\n  codebook_size: 1024\n  connect_list: ['32', '64', '128', '256']\n  # fix_modules: ['encoder','quantize', 'fuse_convs_dict', 'feat_emb'] # decoder 放开, generator\n  fix_modules: []\n  # vqgan_path: './weights/CodeFormer/vqgan_code1024.pth' # pretrained VQGAN \n  
frame_length: 5\n\n# network_vqgan: # this config is needed if no pre-calculated latent\n#   type: VQAutoEncoder\n#   img_size: 512\n#   nf: 64\n#   ch_mult: [1, 2, 2, 4, 4, 8]\n#   quantizer: 'nearest'\n#   codebook_size: 1024\n\nnetwork_d:\n  type: VQGANDiscriminator\n  nc: 3\n  ndf: 64\n  n_layers: 4\n\n# path\npath:\n  pretrain_network_g: './weights/CodeFormer/codeformer.pth'\n  param_key_g: params_ema\n  strict_load_g: false\n  pretrain_network_d: './weights/CodeFormer/vqgan_discriminator.pth'\n  strict_load_d: true\n  resume_state: ~\n\n# base_lr (4.5e-6) * batch_size (4)\ntrain:\n  # use_hq_feat_loss: False\n  # feat_loss_weight: 1.0\n  cross_entropy_loss: true\n  entropy_loss_weight: 0.5\n  fidelity_weight: 0\n\n  optim_g:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n  optim_d:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n\n  # scheduler:\n  #   type: MultiStepLR\n  #   milestones: [30000, 45000]\n  #   gamma: 0.5\n\n  scheduler:\n    type: CosineAnnealingRestartLR\n    periods: [100000]\n    restart_weights: [1]\n    eta_min: !!float 2e-5\n\n  total_iter: 100000\n\n  warmup_iter: -1  # no warm up\n  ema_decay: 0.997\n\n# training loss\n  pixel_opt:\n    type: L1Loss\n    loss_weight: 1.0\n    reduction: mean\n\n  perceptual_opt:\n    type: LPIPSLoss\n    loss_weight: 1.0\n    use_input_norm: true\n    range_norm: true\n\n  dirichletKL_opt:\n    type: DirichletKLLoss\n    loss_weight: 0.00\n    kl_coef: 0.1\n\n  gan_opt:\n    type: GANLoss\n    gan_type: hinge\n    loss_weight: !!float 1.0 # adaptive_weighting\n\n  use_adaptive_weight: true\n\n  net_g_start_iter: 0\n  net_d_iters: 1\n  net_d_start_iter: 6000000000\n  manual_seed: 0\n\n# validation settings\nval:\n  val_freq: !!float 1000 # validation interval (iterations)\n  save_img: true\n\n  metrics:\n    psnr: # metric name, can be arbitrary\n      type: calculate_psnr\n      crop_border: 4\n      test_y_channel: false\n\n# logging settings\nlogger:\n  print_freq: 100\n  save_checkpoint_freq: !!float 1000\n  use_tb_logger: true\n  wandb:\n    mode: offline\n    project: codeformer_dirichlet_clip5_color_dloss\n    resume_id: ~\n\n# dist training settings\ndist_params:\n  backend: nccl\n  port: 29412\n\nfind_unused_parameters: false
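\n\n# Added note: to resume an interrupted run, point path.resume_state at a .state\n# file saved under experiments/<name>/training_states/ (standard BasicSR behavior).\n"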
  },
  {
    "path": "options/clip5_bs2_512_align_nofix_multiscale_inpaint.yaml",
    "content": "# general settings\nname: codeformer_dirichlet_clip5_bs2_align_nofix_multiscale_inpaint\nmodel_type: CodeFormerDirichletVideoModel\nnum_gpu: 8\nmanual_seed: 0\n\n# dataset and data loader settings\ndatasets:\n  train:\n    name: VFHQ-Train\n    type: InpaintingDataset\n    dataroot_gt: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_DATA_512x512\n    global_meta_info_file:  ./vfhq_train_data.txt\n    dataroot_meta_info: /sykj_002/datasets/VFHQ/VFHQ_DATAset/vfhq_train_landmarks\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True\n    normalize: True\n    interval_list: [1]\n    random_reverse: True\n    use_flip: False\n    use_rot: False\n    # large degradation in stageII\n    # blur_kernel_size: 41\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']    # 模糊核的类型列表\n    # kernel_prob:  [0.5, 0.5]        # 模糊核类型的概率\n    kernel_prob:  [0.7, 0.3]\n    # blur_x_sigma: [0.2, 3]          # 模糊核在 x 方向的标准差范围\n    blur_x_sigma: [0.1, 10]\n    # blur_y_sigma: [0.2, 3]          # 模糊核在 y 方向的标准差范围\n    blur_y_sigma: [0.1, 10] \n    # noise_range:  [0, 25]           # 噪声范围\n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40]  # 不同插值方法的概率\n    # use_crf:      True              # 是否使用crf压缩\n    # crf_range:    [10, 30]          # CRF 压缩范围\n    crf_range:    [18, 25]\n    vcodec:       ['libx264']         # 视频编码格式\n    vcodec_prob:  [1]                 # 视频编码格式的概率\n\n    latent_gt_path: ~ # without pre-calculated latent code\n    # latent_gt_path: './experiments/pretrained_models/VQGAN/latent_gt_code1024.pth'\n\n    # data loader\n    num_worker_per_gpu: 4\n    batch_size_per_gpu: 2\n    dataset_enlarge_ratio: 10\n    prefetch_mode: ~\n\n  val:\n    name: VFHQ-Test-50\n    type: InpaintingDataset\n    # dataroot_gt: ../VFHQ_Test/VAL_cases\n    # global_meta_info_file: ./vfhq_val_data_info.txt\n    # dataroot_meta_info: ./vfhq_val_landmarks\n    dataroot_gt: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_Test/TEST_DATA\n    global_meta_info_file: ./vfhq_test.txt\n    dataroot_meta_info: /sykj_002/datasets/VFHQ/VFHQ_DATAset/VFHQ_Test/vfhq_test_landmarks\n    io_backend:\n      type: disk\n\n    video_length: 5\n    scale: 4\n    need_align: True\n    normalize: True\n    interval_list: [1]\n    random_reverse: False\n    use_flip: False\n    use_rot: False\n    # large degradation in stageII\n    blur_kernel_size: 21\n    kernel_list:  ['iso', 'aniso']    # 模糊核的类型列表\n    # kernel_prob:  [0.5, 0.5]        # 模糊核类型的概率\n    kernel_prob:  [0.7, 0.3]\n    # blur_x_sigma: [0.2, 3]          # 模糊核在 x 方向的标准差范围\n    blur_x_sigma: [0.1, 10]\n    # blur_y_sigma: [0.2, 3]          # 模糊核在 y 方向的标准差范围\n    blur_y_sigma: [0.1, 10] \n    # noise_range:  [0, 25]           # 噪声范围\n    noise_range:  [0, 10]\n    resize_prob:  [0.20, 0.40, 0.40]  # 不同插值方法的概率\n    # use_crf:      True              # 是否使用crf压缩\n    # crf_range:    [10, 30]          # CRF 压缩范围\n    crf_range:    [18, 25]\n    vcodec:       ['libx264']         # 视频编码格式\n    vcodec_prob:  [1]                 # 视频编码格式的概率\n    # data loader\n    num_worker_per_gpu: 4\n    batch_size_per_gpu: 2\n    dataset_enlarge_ratio: 1\n    prefetch_mode: ~\n\n# network structures\nnetwork_g:\n  type: TemporalCodeFormerDirDistMultiScale\n  dim_embed: 512\n  n_head: 8\n  n_layers: 9\n  codebook_size: 1024\n  connect_list: ['32', '64', '128', '256']\n  # fix_modules: ['encoder','quantize', 'fuse_convs_dict', 'feat_emb'] # decoder 放开, generator\n  fix_modules: []\n  # vqgan_path: 
'./weights/CodeFormer/vqgan_code1024.pth' # pretrained VQGAN\n  frame_length: 5\n\n# network_vqgan: # this config is needed if no pre-calculated latent\n#   type: VQAutoEncoder\n#   img_size: 512\n#   nf: 64\n#   ch_mult: [1, 2, 2, 4, 4, 8]\n#   quantizer: 'nearest'\n#   codebook_size: 1024\n\nnetwork_d:\n  type: VQGANDiscriminator\n  nc: 3\n  ndf: 64\n  n_layers: 4\n\n# path\npath:\n  pretrain_network_g: './weights/CodeFormer/codeformer.pth'\n  param_key_g: params_ema\n  strict_load_g: false\n  pretrain_network_d: './weights/CodeFormer/vqgan_discriminator.pth'\n  strict_load_d: true\n  resume_state: ~\n\n# base_lr (4.5e-6) * batch_size (4)\ntrain:\n  # use_hq_feat_loss: False\n  # feat_loss_weight: 1.0\n  cross_entropy_loss: true\n  entropy_loss_weight: 0.5\n  fidelity_weight: 0\n\n  optim_g:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n  optim_d:\n    type: Adam\n    lr: !!float 5e-5\n    weight_decay: 0\n    betas: [0.9, 0.99]\n\n  # scheduler:\n  #   type: MultiStepLR\n  #   milestones: [30000, 45000]\n  #   gamma: 0.5\n\n  scheduler:\n    type: CosineAnnealingRestartLR\n    periods: [100000]\n    restart_weights: [1]\n    eta_min: !!float 2e-5\n\n  total_iter: 100000\n\n  warmup_iter: -1  # no warm up\n  ema_decay: 0.997\n\n# training loss\n  pixel_opt:\n    type: L1Loss\n    loss_weight: 1.0\n    reduction: mean\n\n  perceptual_opt:\n    type: LPIPSLoss\n    loss_weight: 1.0\n    use_input_norm: true\n    range_norm: true\n\n  dirichletKL_opt:\n    type: DirichletKLLoss\n    loss_weight: 0.00\n    kl_coef: 0.1\n\n  gan_opt:\n    type: GANLoss\n    gan_type: hinge\n    loss_weight: !!float 1.0 # adaptive_weighting\n\n  use_adaptive_weight: true\n\n  net_g_start_iter: 0\n  net_d_iters: 1\n  net_d_start_iter: 6000000000\n  manual_seed: 0\n\n# validation settings\nval:\n  val_freq: !!float 1000 # validation interval (iterations)\n  save_img: true\n\n  metrics:\n    psnr: # metric name, can be arbitrary\n      type: calculate_psnr\n      crop_border: 4\n      test_y_channel: false\n\n# logging settings\nlogger:\n  print_freq: 100\n  save_checkpoint_freq: !!float 1000\n  use_tb_logger: true\n  wandb:\n    mode: offline\n    project: codeformer_dirichlet_clip5_bs2_align_nofix_multiscale_inpaint\n    resume_id: ~\n\n# dist training settings\ndist_params:\n  backend: nccl\n  port: 29412\n\nfind_unused_parameters: false"
  },
  {
    "path": "requirements.txt",
    "content": "addict\nfuture\nlmdb\nnumpy\nopencv-python\nPillow\npyyaml\nrequests\nscikit-image\nscipy\n# tb-nightly\ntensorboard\ntorch>=1.7.1\ntorchvision\ntqdm\nyapf\nlpips\neinops\nav\nffmpeg-python\nwandb"
  },
  {
    "path": "scripts/inference.py",
    "content": "import os\nimport cv2\nimport argparse\nimport glob\nimport torch\nimport numpy as np\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.utils import imwrite, img2tensor, tensor2img\nfrom basicsr.utils.misc import gpu_is_available, get_device\nfrom scipy.ndimage import gaussian_filter1d\nfrom facelib.utils.face_restoration_helper import FaceRestoreHelper\nfrom facelib.utils.misc import is_gray\nfrom basicsr.utils.video_util import VideoReader, VideoWriter\nfrom einops import rearrange\n\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\ndef interpolate_sequence(sequence):\n    interpolated_sequence = np.copy(sequence)\n    missing_indices = np.isnan(sequence)\n\n    if np.any(missing_indices):\n        valid_indices = ~missing_indices\n        x = np.arange(len(sequence))\n\n        # Interpolate missing values using valid data points\n        interpolated_sequence[missing_indices] = np.interp(\n            x[missing_indices], x[valid_indices], sequence[valid_indices]\n        )\n\n    return interpolated_sequence\n\n\ndef set_realesrgan():\n    from basicsr.archs.rrdbnet_arch import RRDBNet\n    from basicsr.utils.realesrgan_utils import RealESRGANer\n\n    use_half = False\n    if torch.cuda.is_available():  # set False in CPU/MPS mode\n        # set False for GPUs that don't support f16\n        no_half_gpu_list = [\"1650\", \"1660\"]\n        if not True in [\n            gpu in torch.cuda.get_device_name(0) for gpu in no_half_gpu_list\n        ]:\n            use_half = True\n\n    model = RRDBNet(\n        num_in_ch=3,\n        num_out_ch=3,\n        num_feat=64,\n        num_block=23,\n        num_grow_ch=32,\n        scale=2,\n    )\n    upsampler = RealESRGANer(\n        scale=2,\n        model_path=\"./ckpts/realesrgan/RealESRGAN_x2plus.pth\",\n        model=model,\n        tile=args.bg_tile,\n        tile_pad=40,\n        pre_pad=0,\n        half=use_half,\n    )\n\n    if not gpu_is_available():  # CPU\n        import warnings\n\n        warnings.warn(\n            \"Running on CPU now! Make sure your PyTorch version matches your CUDA.\"\n            \"The unoptimized RealESRGAN is slow on CPU. \"\n            \"If you want to disable it, please remove `--bg_upsampler` and `--face_upsample` in command.\",\n            category=RuntimeWarning,\n        )\n    return upsampler\n\nif __name__ == \"__main__\":\n    device = get_device()\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\n        \"-i\",\n        \"--input_path\",\n        type=str,\n        default=\"None\",\n        help=\"Input image, video or folder. Default: inputs/whole_imgs\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--output_path\",\n        type=str,\n        default=\"results/\",\n        help=\"Output folder. Default: results/\",\n    )\n    parser.add_argument(\n        \"--save_video\", action=\"store_true\", help=\"Save output as video. Default: False\"\n    )\n    parser.add_argument(\n        \"-s\",\n        \"--upscale\",\n        type=int,\n        default=2,\n        help=\"The final upsampling scale of the image. Default: 1\",\n    )\n    parser.add_argument(\n        \"--max_length\",\n        type=int,\n        default=20,\n        help=\"Max length of per sub-clip depending of GPU memory. Default: 20\",\n    )\n    parser.add_argument(\n        \"--has_aligned\",\n        action=\"store_true\",\n        help=\"Input are cropped and aligned faces. 
Default: False\",\n    )\n    parser.add_argument(\n        \"--only_center_face\",\n        type=bool,\n        default=True,\n        help=\"Only restore the center face. Default: True\",\n    )\n    parser.add_argument(\n        \"--draw_box\",\n        action=\"store_true\",\n        help=\"Draw the bounding box for the detected faces. Default: False\",\n    )\n    parser.add_argument(\n        \"--detection_model\",\n        type=str,\n        default=\"retinaface_resnet50\",\n        help=\"Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n, dlib. \\\n                        Default: retinaface_resnet50\",\n    )\n    parser.add_argument(\n        \"--bg_upsampler\",\n        type=str,\n        default=\"None\",\n        help=\"Background upsampler. Optional: realesrgan\",\n    )\n    parser.add_argument(\n        \"--face_upsample\",\n        action=\"store_true\",\n        help=\"Face upsampler after enhancement. Default: False\",\n    )\n    parser.add_argument(\n        \"--bg_tile\",\n        type=int,\n        default=400,\n        help=\"Tile size for background sampler. Default: 400\",\n    )\n    parser.add_argument(\n        \"--save_video_fps\",\n        type=float,\n        default=20,\n        help=\"Frame rate for saving video. Default: 20\",\n    )\n    parser.add_argument(\n        \"--ckpt_path\", type=str, default=\"None\", help=\"the loaded ckpt file path\"\n    )\n\n    args = parser.parse_args()\n    input_video = False\n\n    ckpt_path = args.ckpt_path\n    weight_parameter = 1.0\n\n    # ------------------ set up background upsampler ------------------\n    print(\"------------------ set up background upsampler ------------------\")\n    if args.bg_upsampler == \"realesrgan\":\n        bg_upsampler = set_realesrgan()\n    else:\n        bg_upsampler = None\n\n    # ------------------ set up face upsampler ------------------\n    if args.face_upsample:\n        if bg_upsampler is not None:\n            face_upsampler = bg_upsampler\n        else:\n            face_upsampler = set_realesrgan()\n    else:\n        face_upsampler = None\n\n    os.makedirs(args.output_path, exist_ok=True)\n\n    # ------------------ set up restorer -------------------\n    net = ARCH_REGISTRY.get(\"TemporalCodeFormerDirDistMultiScale\")(\n        dim_embed=512,\n        n_head=8,\n        n_layers=9,\n        codebook_size=1024,\n        connect_list=[\"32\", \"64\", \"128\", \"256\"],\n        frame_length=5,\n    ).to(device)\n\n    checkpoint = torch.load(ckpt_path)[\"params_ema\"]\n    net.load_state_dict(checkpoint)\n    net.eval()\n\n    # ------------------ set up FaceRestoreHelper -------------------\n    # large det_model: 'YOLOv5l', 'retinaface_resnet50'\n    # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'\n    if not args.has_aligned:\n        print(f\"Face detection model: {args.detection_model}\")\n    if bg_upsampler is not None:\n        print(f\"Background upsampling: True. Face upsampling: {args.face_upsample}\")\n    else:\n        print(f\"Background upsampling: False. 
Face upsampling: {args.face_upsample}\")\n\n    face_helper = FaceRestoreHelper(\n        args.upscale,\n        face_size=512,\n        crop_ratio=(1, 1),\n        det_model=args.detection_model,\n        save_ext=\"png\",\n        use_parse=True,\n        device=device,\n    )\n\n    # -------------------- start processing ---------------------\n    input_img_list = []\n    restored_img_list = []\n\n    if args.input_path.endswith(\n        (\"mp4\", \"mov\", \"avi\", \"MP4\", \"MOV\", \"AVI\")\n    ):  # input video path\n        vidreader = VideoReader(args.input_path)\n        image = vidreader.get_frame()\n        while image is not None:\n            input_img_list.append(image)\n            image = vidreader.get_frame()\n        fps = (\n            vidreader.get_fps() if args.save_video_fps is None else args.save_video_fps\n        )\n        vidreader.close()\n\n        clip_name = os.path.basename(args.input_path)[:-4]\n        result_root = os.path.join(args.output_path, clip_name)\n        os.makedirs(result_root, exist_ok=True)\n\n    elif os.path.isdir(args.input_path):  # input img folder\n        # scan all the jpg and png images\n        for img_path in sorted(\n            glob.glob(os.path.join(args.input_path, \"*.[jpJP][pnPN]*[gG]\"))\n        ):\n            input_img_list.append(cv2.imread(img_path))\n        clip_name = os.path.basename(args.input_path)\n        result_root = os.path.join(args.output_path, clip_name)\n        os.makedirs(result_root, exist_ok=True)\n\n    else:\n        raise TypeError(f\"Unrecognized type of input video {args.input_path}.\")\n\n    if len(input_img_list) == 0:\n        raise FileNotFoundError(\n            \"No input image/video is found...\\n\"\n            \"\\tNote that --input_path for video should end with .mp4|.mov|.avi\"\n        )\n\n    if not args.has_aligned:\n        # Smoothing aligned landmarks\n        print(\"Detecting keypoints and smooth alignment ...\")\n        raw_landmarks = []\n        for i, img in enumerate(input_img_list):\n            # clean all the intermediate results to process the next image\n            face_helper.clean_all()\n            face_helper.read_image(img)\n\n            # get face landmarks for each face\n            num_det_faces = face_helper.get_face_landmarks_5(\n                only_center_face=args.only_center_face,\n                resize=640,\n                eye_dist_threshold=5,\n                only_keep_largest=True,\n            )\n\n            if num_det_faces == 1:\n                raw_landmarks.append(face_helper.all_landmarks_5[0].reshape((10,)))\n            elif num_det_faces == 0:\n                raw_landmarks.append(np.array([np.nan] * 10))\n\n        raw_landmarks = np.array(raw_landmarks)\n        for i in range(10):\n            raw_landmarks[:, i] = interpolate_sequence(raw_landmarks[:, i])\n        video_length = len(input_img_list)\n        avg_landmarks = gaussian_filter1d(raw_landmarks, 5, axis=0).reshape(\n            video_length, 5, 2\n        )\n\n    # Pack cropped faces.\n    cropped_faces = []\n    for i, img in enumerate(input_img_list):\n        if not args.has_aligned:\n            face_helper.clean_all()\n            face_helper.read_image(img)\n            face_helper.all_landmarks_5 = [avg_landmarks[i]]\n            face_helper.align_warp_face()\n        else:\n            img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)\n            face_helper.is_gray = is_gray(img, threshold=10)\n            if face_helper.is_gray:\n        
        print(\"Grayscale input: True\")\n            face_helper.cropped_faces = [img]\n\n        cropped_face_t = img2tensor(\n            face_helper.cropped_faces[0] / 255.0, bgr2rgb=True, float32=True\n        )\n        normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)\n        cropped_faces.append(cropped_face_t)\n    cropped_faces = torch.stack(cropped_faces, dim=0).unsqueeze(0).to(device)\n\n    print(\"Restoring faces ...\")\n    with torch.no_grad():\n        video_length = cropped_faces.shape[1]\n        output = []\n        for start_idx in range(0, video_length):\n            pre_length = args.max_length // 2\n            post_length = args.max_length - pre_length - 1\n\n            padding_begin_idx = start_idx - pre_length\n            padding_end_idx = start_idx + post_length\n\n            if padding_begin_idx < 0:\n                pre_padding = torch.zeros(\n                    (\n                        cropped_faces.shape[0],\n                        -padding_begin_idx,\n                        *cropped_faces.shape[2:],\n                    ),\n                    dtype=cropped_faces.dtype,\n                    device=cropped_faces.device,\n                )\n                pre_padding = pre_padding + cropped_faces[:, 0:1]\n                small_clip = torch.cat(\n                    [pre_padding, cropped_faces[:, : padding_end_idx + 1, ...]], dim=1\n                )\n            elif padding_end_idx >= video_length:\n                post_padding = torch.zeros(\n                    (\n                        cropped_faces.shape[0],\n                        padding_end_idx - video_length + 1,\n                        *cropped_faces.shape[2:],\n                    ),\n                    dtype=cropped_faces.dtype,\n                    device=cropped_faces.device,\n                )\n                post_padding = post_padding + cropped_faces[:, -1:]\n                small_clip = torch.cat(\n                    [cropped_faces[:, padding_begin_idx:, ...], post_padding], dim=1\n                )\n            else:\n                small_clip = cropped_faces[\n                    :, padding_begin_idx : padding_end_idx + 1, ...\n                ]\n\n            small_clip = rearrange(\n                small_clip, \"b t c h w -> (b t) c h w\", t=args.max_length\n            )\n            bt = small_clip.shape[0]\n            res, _, _ = net(\n                small_clip, w=weight_parameter\n            ) \n\n            res = rearrange(res, \"(b t) c h w -> b t c h w\", t=args.max_length)\n\n            res = res[:, pre_length : pre_length + 1, ...]\n            output.append(res)\n\n        output = torch.cat(output, dim=1).squeeze(0)\n        assert output.shape[0] == video_length, \"Differer number of frames\"\n\n        restored_faces = [tensor2img(x, rgb2bgr=True, min_max=(-1, 1)) for x in output]\n        del output\n        torch.cuda.empty_cache()\n\n    print(\"Pasting faces back ...\")\n\n    for i, img in enumerate(input_img_list):\n        # clean all the intermediate results to process the next image\n        face_helper.clean_all()\n\n        if args.has_aligned:\n            # the input faces are already cropped and aligned\n            img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)\n            face_helper.is_gray = is_gray(img, threshold=10)\n            if face_helper.is_gray:\n                print(\"Grayscale input: True\")\n            face_helper.cropped_faces = [img]\n        else:\n            # align and 
warp each face\n            face_helper.read_image(img)\n            face_helper.all_landmarks_5 = [avg_landmarks[i]]\n            face_helper.align_warp_face()\n\n        face_helper.add_restored_face(restored_faces[i].astype(\"uint8\"))\n\n        # paste_back\n        if not args.has_aligned:\n            # upsample the background\n            if bg_upsampler is not None:\n                # Now only support RealESRGAN for upsampling background\n                bg_img = bg_upsampler.enhance(img, outscale=args.upscale)[0]\n            else:\n                bg_img = None\n            face_helper.get_inverse_affine(None)\n            # paste each restored face to the input image\n            if args.face_upsample and face_upsampler is not None:\n                restored_img = face_helper.paste_faces_to_input_image(\n                    upsample_img=bg_img,\n                    draw_box=args.draw_box,\n                    face_upsampler=face_upsampler,\n                )\n            else:\n                restored_img = face_helper.paste_faces_to_input_image(\n                    upsample_img=bg_img, draw_box=args.draw_box\n                )\n\n            restored_img_list.append(restored_img)\n\n        # save faces\n        save_face_name = f\"{i:08d}.png\"\n\n        for face_idx, (cropped_face, restored_face) in enumerate(\n            zip(face_helper.cropped_faces, face_helper.restored_faces)\n        ):\n            # save cropped face\n            if not args.has_aligned:\n                save_crop_path = os.path.join(\n                    result_root, \"cropped_faces\", save_face_name\n                )\n                imwrite(cropped_face, save_crop_path)\n            # save restored face\n            save_restore_path = os.path.join(\n                result_root, \"restored_faces\", save_face_name\n            )\n            imwrite(restored_face, save_restore_path)\n\n        # save restored img\n        if not args.has_aligned and restored_img is not None:\n            save_restore_path = os.path.join(\n                result_root, \"final_results\", save_face_name\n            )\n            imwrite(restored_img, save_restore_path)\n\n    # save enhanced video\n    if args.save_video:\n        print(\"Saving video ...\")\n        # load images\n        video_frames = []\n        if not args.has_aligned:\n            img_list = sorted(\n                glob.glob(os.path.join(result_root, \"final_results\", \"*.[jp][pn]g\"))\n            )\n        else:\n            img_list = sorted(\n                glob.glob(os.path.join(result_root, \"restored_faces\", \"*.[jp][pn]g\"))\n            )\n\n        for img_path in img_list:\n            img = cv2.imread(img_path)\n            video_frames.append(img)\n\n        height, width = video_frames[0].shape[:2]\n        save_restore_path = os.path.join(args.output_path, f\"{clip_name}.mp4\")\n        vidwriter = VideoWriter(\n            save_restore_path, height, width, args.save_video_fps, audio=None\n        )\n\n        for f in video_frames:\n            vidwriter.write_frame(f)\n        vidwriter.close()\n\n    print(f\"\\nAll results are saved in {result_root}\")\n"
  },
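  {
    "path": "scripts/examples/sliding_window_sketch.py",
    "content": "# Illustrative sketch (hypothetical file, not part of the release): a compact\n# equivalent of the centered sliding-window batching in the inference scripts.\n# Each frame is restored from a clip of `max_length` frames centered on it, and\n# clips at the sequence boundaries are padded by replicating the first/last\n# frame (here via index clamping instead of explicit padding tensors).\nimport torch\n\n\ndef make_clip(frames: torch.Tensor, center: int, max_length: int) -> torch.Tensor:\n    # frames: (t, c, h, w) -> (max_length, c, h, w) clip centered at `center`.\n    pre = max_length // 2\n    post = max_length - pre - 1\n    idx = torch.arange(center - pre, center + post + 1).clamp(0, frames.shape[0] - 1)\n    return frames[idx]\n\n\nif __name__ == \"__main__\":\n    frames = torch.randn(7, 3, 8, 8)\n    clip = make_clip(frames, center=0, max_length=5)\n    # The two positions before frame 0 are replicated copies of frame 0.\n    assert torch.equal(clip[0], frames[0]) and torch.equal(clip[1], frames[0])\n    print(clip.shape)  # torch.Size([5, 3, 8, 8])\n"
  },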
  {
    "path": "scripts/inference_color_and_inpainting.py",
    "content": "import os\nimport cv2\nimport argparse\nimport glob\nimport torch\nimport numpy as np\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.utils import imwrite, img2tensor, tensor2img\nfrom basicsr.utils.misc import gpu_is_available, get_device\nfrom scipy.ndimage import gaussian_filter1d\nfrom facelib.utils.face_restoration_helper import FaceRestoreHelper\nfrom facelib.utils.misc import is_gray\nfrom basicsr.utils.video_util import VideoReader, VideoWriter\nfrom einops import rearrange\n\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\ndef interpolate_sequence(sequence):\n    interpolated_sequence = np.copy(sequence)\n    missing_indices = np.isnan(sequence)\n\n    if np.any(missing_indices):\n        valid_indices = ~missing_indices\n        x = np.arange(len(sequence))\n\n        # Interpolate missing values using valid data points\n        interpolated_sequence[missing_indices] = np.interp(\n            x[missing_indices], x[valid_indices], sequence[valid_indices]\n        )\n\n    return interpolated_sequence\n\n\ndef set_realesrgan():\n    from basicsr.archs.rrdbnet_arch import RRDBNet\n    from basicsr.utils.realesrgan_utils import RealESRGANer\n\n    use_half = False\n    if torch.cuda.is_available():  # set False in CPU/MPS mode\n        # set False for GPUs that don't support f16\n        no_half_gpu_list = [\"1650\", \"1660\"]\n        if not True in [\n            gpu in torch.cuda.get_device_name(0) for gpu in no_half_gpu_list\n        ]:\n            use_half = True\n\n    model = RRDBNet(\n        num_in_ch=3,\n        num_out_ch=3,\n        num_feat=64,\n        num_block=23,\n        num_grow_ch=32,\n        scale=2,\n    )\n    upsampler = RealESRGANer(\n        scale=2,\n        model_path=\"./ckpts/realesrgan/RealESRGAN_x2plus.pth\",\n        model=model,\n        tile=args.bg_tile,\n        tile_pad=40,\n        pre_pad=0,\n        half=use_half,\n    )\n\n    if not gpu_is_available():  # CPU\n        import warnings\n\n        warnings.warn(\n            \"Running on CPU now! Make sure your PyTorch version matches your CUDA.\"\n            \"The unoptimized RealESRGAN is slow on CPU. \"\n            \"If you want to disable it, please remove `--bg_upsampler` and `--face_upsample` in command.\",\n            category=RuntimeWarning,\n        )\n    return upsampler\n\n\nif __name__ == \"__main__\":\n    device = get_device()\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument(\n        \"-i\",\n        \"--input_path\",\n        type=str,\n        default=\"None\",\n        help=\"Input image, video or folder. Default: inputs/whole_imgs\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--output_path\",\n        type=str,\n        default=\"results/\",\n        help=\"Output folder. Default: results/\",\n    )\n    parser.add_argument(\n        \"--save_video\", action=\"store_true\", help=\"Save output as video. Default: False\"\n    )\n    parser.add_argument(\n        \"-s\",\n        \"--upscale\",\n        type=int,\n        default=2,\n        help=\"The final upsampling scale of the image. Default: 1\",\n    )\n    parser.add_argument(\n        \"--max_length\",\n        type=int,\n        default=20,\n        help=\"Max length of per sub-clip depending of GPU memory. Default: 20\",\n    )\n    parser.add_argument(\n        \"--has_aligned\",\n        action=\"store_true\",\n        help=\"Input are cropped and aligned faces. 
Default: False\",\n    )\n    parser.add_argument(\n        \"--only_center_face\",\n        type=bool,\n        default=True,\n        help=\"Only restore the center face. Default: True\",\n    )\n    parser.add_argument(\n        \"--draw_box\",\n        action=\"store_true\",\n        help=\"Draw the bounding box for the detected faces. Default: False\",\n    )\n    parser.add_argument(\n        \"--detection_model\",\n        type=str,\n        default=\"retinaface_resnet50\",\n        help=\"Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n, dlib. \\\n                        Default: retinaface_resnet50\",\n    )\n    parser.add_argument(\n        \"--bg_upsampler\",\n        type=str,\n        default=\"None\",\n        help=\"Background upsampler. Optional: realesrgan\",\n    )\n    parser.add_argument(\n        \"--face_upsample\",\n        action=\"store_true\",\n        help=\"Face upsampler after enhancement. Default: False\",\n    )\n    parser.add_argument(\n        \"--bg_tile\",\n        type=int,\n        default=400,\n        help=\"Tile size for background sampler. Default: 400\",\n    )\n    parser.add_argument(\n        \"--save_video_fps\",\n        type=float,\n        default=20,\n        help=\"Frame rate for saving video. Default: 20\",\n    )\n    parser.add_argument(\n        \"--ckpt_path\", type=str, default=\"None\", help=\"the loaded ckpt file path\"\n    )\n\n    args = parser.parse_args()\n    input_video = False\n\n    ckpt_path = args.ckpt_path\n    weight_parameter = 1.0\n\n    # ------------------ set up background upsampler ------------------\n    print(\"------------------ set up background upsampler ------------------\")\n    if args.bg_upsampler == \"realesrgan\":\n        bg_upsampler = set_realesrgan()\n    else:\n        bg_upsampler = None\n\n    # ------------------ set up face upsampler ------------------\n    if args.face_upsample:\n        if bg_upsampler is not None:\n            face_upsampler = bg_upsampler\n        else:\n            face_upsampler = set_realesrgan()\n    else:\n        face_upsampler = None\n\n    # ------------------ set up restorer -------------------\n    net = ARCH_REGISTRY.get(\"TemporalCodeFormerDirDistMultiScale\")(\n        dim_embed=512,\n        n_head=8,\n        n_layers=9,\n        codebook_size=1024,\n        connect_list=[\"32\", \"64\", \"128\", \"256\"],\n        frame_length=5,\n    ).to(device)\n\n    checkpoint = torch.load(ckpt_path)[\"params_ema\"]\n    net.load_state_dict(checkpoint)\n    net.eval()\n\n    # ------------------ set up FaceRestoreHelper -------------------\n    # large det_model: 'YOLOv5l', 'retinaface_resnet50'\n    # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'\n    if not args.has_aligned:\n        print(f\"Face detection model: {args.detection_model}\")\n    if bg_upsampler is not None:\n        print(f\"Background upsampling: True. Face upsampling: {args.face_upsample}\")\n    else:\n        print(f\"Background upsampling: False. 
Face upsampling: {args.face_upsample}\")\n\n    face_helper = FaceRestoreHelper(\n        args.upscale,\n        face_size=512,\n        crop_ratio=(1, 1),\n        det_model=args.detection_model,\n        save_ext=\"png\",\n        use_parse=True,\n        device=device,\n    )\n\n    # -------------------- start processing ---------------------\n    input_img_list = []\n    restored_img_list = []\n\n    if args.input_path.endswith(\n        (\"mp4\", \"mov\", \"avi\", \"MP4\", \"MOV\", \"AVI\")\n    ):  # input video path\n        vidreader = VideoReader(args.input_path)\n        image = vidreader.get_frame()\n        while image is not None:\n            input_img_list.append(image)\n            image = vidreader.get_frame()\n        fps = (\n            vidreader.get_fps() if args.save_video_fps is None else args.save_video_fps\n        )\n        vidreader.close()\n\n        clip_name = os.path.basename(args.input_path)[:-4]\n        result_root = os.path.join(args.output_path, clip_name)\n        os.makedirs(result_root, exist_ok=True)\n\n    elif os.path.isdir(args.input_path):  # input img folder\n        # scan all the jpg and png images\n        for img_path in sorted(\n            glob.glob(os.path.join(args.input_path, \"*.[jpJP][pnPN]*[gG]\"))\n        ):\n            input_img_list.append(cv2.imread(img_path))\n        clip_name = os.path.basename(args.input_path)\n        result_root = os.path.join(args.output_path, clip_name)\n        os.makedirs(result_root, exist_ok=True)\n\n    else:\n        raise TypeError(f\"Unrecognized type of input video {args.input_path}.\")\n\n    if len(input_img_list) == 0:\n        raise FileNotFoundError(\n            \"No input image/video is found...\\n\"\n            \"\\tNote that --input_path for video should end with .mp4|.mov|.avi\"\n        )\n\n    if not args.has_aligned:\n        # Smoothing aligned landmarks\n        print(\"Detecting keypoints and smooth alignment ...\")\n        raw_landmarks = []\n        for i, img in enumerate(input_img_list):\n            # clean all the intermediate results to process the next image\n            face_helper.clean_all()\n            face_helper.read_image(img)\n\n            # get face landmarks for each face\n            num_det_faces = face_helper.get_face_landmarks_5(\n                only_center_face=args.only_center_face,\n                resize=640,\n                eye_dist_threshold=5,\n                only_keep_largest=True,\n            )\n\n            if num_det_faces == 1:\n                raw_landmarks.append(face_helper.all_landmarks_5[0].reshape((10,)))\n            elif num_det_faces == 0:\n                raw_landmarks.append(np.array([np.nan] * 10))\n\n        raw_landmarks = np.array(raw_landmarks)\n        for i in range(10):\n            raw_landmarks[:, i] = interpolate_sequence(raw_landmarks[:, i])\n        video_length = len(input_img_list)\n        avg_landmarks = gaussian_filter1d(raw_landmarks, 5, axis=0).reshape(\n            video_length, 5, 2\n        )\n\n    # Pack cropped faces.\n    cropped_faces = []\n    for i, img in enumerate(input_img_list):\n        if not args.has_aligned:\n            face_helper.clean_all()\n            face_helper.read_image(img)\n            face_helper.all_landmarks_5 = [avg_landmarks[i]]\n            face_helper.align_warp_face()\n        else:\n            img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)\n            face_helper.is_gray = is_gray(img, threshold=10)\n            # if face_helper.is_gray:\n      
          # print(\"Grayscale input: True\")\n            face_helper.cropped_faces = [img]\n\n        cropped_face_t = img2tensor(\n            face_helper.cropped_faces[0] / 255.0, bgr2rgb=True, float32=True\n        )\n        normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)\n        cropped_faces.append(cropped_face_t)\n    cropped_faces = torch.stack(cropped_faces, dim=0).unsqueeze(0).to(device)\n\n    print(\"Restoring faces ...\")\n    with torch.no_grad():\n        video_length = cropped_faces.shape[1]\n        output = []\n        for start_idx in range(0, video_length):\n            pre_length = args.max_length // 2\n            post_length = args.max_length - pre_length - 1\n\n            padding_begin_idx = start_idx - pre_length\n            padding_end_idx = start_idx + post_length\n\n            if padding_begin_idx < 0:\n                pre_padding = torch.zeros(\n                    (\n                        cropped_faces.shape[0],\n                        -padding_begin_idx,\n                        *cropped_faces.shape[2:],\n                    ),\n                    dtype=cropped_faces.dtype,\n                    device=cropped_faces.device,\n                )\n                pre_padding = pre_padding + cropped_faces[:, 0:1]\n                small_clip = torch.cat(\n                    [pre_padding, cropped_faces[:, : padding_end_idx + 1, ...]], dim=1\n                )\n            elif padding_end_idx >= video_length:\n                post_padding = torch.zeros(\n                    (\n                        cropped_faces.shape[0],\n                        padding_end_idx - video_length + 1,\n                        *cropped_faces.shape[2:],\n                    ),\n                    dtype=cropped_faces.dtype,\n                    device=cropped_faces.device,\n                )\n                post_padding = post_padding + cropped_faces[:, -1:]\n                small_clip = torch.cat(\n                    [cropped_faces[:, padding_begin_idx:, ...], post_padding], dim=1\n                )\n            else:\n                small_clip = cropped_faces[\n                    :, padding_begin_idx : padding_end_idx + 1, ...\n                ]\n\n            small_clip = rearrange(\n                small_clip, \"b t c h w -> (b t) c h w\", t=args.max_length\n            )\n            bt = small_clip.shape[0]\n            res, _, _ = net(small_clip, w=weight_parameter)\n\n            res = rearrange(res, \"(b t) c h w -> b t c h w\", t=args.max_length)\n\n            res = res[:, pre_length : pre_length + 1, ...]\n            output.append(res)\n\n        output = torch.cat(output, dim=1).squeeze(0)\n        assert output.shape[0] == video_length, \"Differer number of frames\"\n\n        restored_faces = [tensor2img(x, rgb2bgr=True, min_max=(-1, 1)) for x in output]\n        del output\n        torch.cuda.empty_cache()\n\n    print(\"Saving result ...\")\n    \n    output_path = result_root\n    os.makedirs(output_path, mode=0o777, exist_ok=True)\n    if args.save_video:\n        writer = cv2.VideoWriter(\n            f\"{output_path}.mp4\",\n            fourcc=cv2.VideoWriter_fourcc(*\"mp4v\"),\n            fps=args.save_video_fps,\n            frameSize=(512, 512),\n        )\n\n    for idx, restored_img in enumerate(restored_faces):\n        img_abs_path = os.path.join(output_path, str(idx).zfill(8) + \".png\")\n        cv2.imwrite(img_abs_path, restored_img, [cv2.IMWRITE_PNG_COMPRESSION, 0])\n\n        if args.save_video:\n            
writer.write(restored_img)\n\n    if args.save_video:\n        writer.release()\n"
  },
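  {
    "path": "scripts/examples/landmark_smoothing_sketch.py",
    "content": "# Illustrative sketch (hypothetical file, not part of the release): the landmark\n# track cleanup both inference scripts perform before warping. Frames with no\n# detected face are stored as NaNs, filled per coordinate by linear\n# interpolation (interpolate_sequence), then the whole track is smoothed with a\n# 1-D Gaussian along time (gaussian_filter1d) to stabilise the alignment.\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter1d\n\n\ndef interpolate_sequence(sequence):\n    # Fill NaN entries by linear interpolation over the valid frames.\n    out = np.copy(sequence)\n    missing = np.isnan(sequence)\n    if np.any(missing):\n        x = np.arange(len(sequence))\n        out[missing] = np.interp(x[missing], x[~missing], sequence[~missing])\n    return out\n\n\nif __name__ == \"__main__\":\n    # 6 frames x 10 coordinates (5 landmarks); frame 2 has no detection.\n    track = np.tile(np.linspace(0.0, 5.0, 6)[:, None], (1, 10))\n    track[2] = np.nan\n    for i in range(10):\n        track[:, i] = interpolate_sequence(track[:, i])\n    smoothed = gaussian_filter1d(track, sigma=5, axis=0)\n    print(smoothed.shape)  # (6, 10); the scripts reshape this to (6, 5, 2)\n"
  },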
  {
    "path": "scripts/warp_images.py",
    "content": "import os\nimport cv2\nimport argparse\nimport glob\nimport torch\nimport pdb\nimport numpy as np\nfrom tqdm import tqdm\nfrom torchvision.transforms.functional import normalize\nfrom basicsr.utils import imwrite, img2tensor, tensor2img\nfrom basicsr.utils.download_util import load_file_from_url\nfrom basicsr.utils.misc import gpu_is_available, get_device\nfrom scipy.ndimage import gaussian_filter1d\nfrom facelib.utils.face_restoration_helper import FaceRestoreHelper\nfrom facelib.utils.misc import is_gray\nfrom basicsr.utils.video_util import VideoReader, VideoWriter\nfrom einops import rearrange\n\nfrom utils import TDCF_OPT, TCFDD_OPT\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\ndef interpolate_sequence(sequence):\n    interpolated_sequence = np.copy(sequence)\n    missing_indices = np.isnan(sequence)\n\n    if np.any(missing_indices):\n        valid_indices = ~missing_indices\n        x = np.arange(len(sequence))\n\n        # Interpolate missing values using valid data points\n        interpolated_sequence[missing_indices] = np.interp(\n            x[missing_indices], x[valid_indices], sequence[valid_indices])\n\n    return interpolated_sequence\n\n\ndef process_single(args, face_helper, input_path, ldmk_folder_path):\n    input_img_list = []\n\n    if input_path.endswith(('mp4', 'mov', 'avi', 'MP4', 'MOV', 'AVI')): # input video path\n        vidreader = VideoReader(input_path)\n        image = vidreader.get_frame()\n        while image is not None:\n            input_img_list.append(image)\n            image = vidreader.get_frame()\n        fps = vidreader.get_fps() if args.save_video_fps is None else args.save_video_fps   \n        vidreader.close()\n\n        clip_name = os.path.basename(input_path)[:-4]\n        result_root = os.path.join(args.output_path, clip_name)\n    elif os.path.isdir(args.input_path): # input img folder\n        # scan all the jpg and png images\n        for img_path in sorted(glob.glob(os.path.join(input_path, '*.[jpJP][pnPN]*[gG]'))):\n            input_img_list.append(cv2.imread(img_path))\n        clip_name = os.path.basename(input_path)\n        result_root = os.path.join(args.output_path, clip_name)\n    else:\n        raise TypeError(f'Unrecognized type of input video {input_path}.')\n\n    if len(input_img_list) == 0:\n        raise FileNotFoundError('No input image/video is found...\\n'\n                                '\\tNote that --input_path for video should end with .mp4|.mov|.avi')\n\n    # Smoothing aligned landmarks\n    print('Detecting keypoints and smooth alignment ...')\n    avg_landmarks = []\n    with open(f\"{ldmk_folder_path}/{clip_name}.txt\", \"r\") as f:\n        for line in f.readlines():\n            line = line.strip().split()\n            landmark = np.array([float(_) for _ in line]).reshape(5, 2)\n            avg_landmarks.append(landmark)\n\n    # Save cropped faces.\n    output_path = os.path.join(args.output_path, f'{clip_name}')\n    os.makedirs(output_path, mode=0o777, exist_ok=True)\n    if args.save_video:\n        writer = cv2.VideoWriter(os.path.join(output_path, f'{clip_name}.mp4'),\n                                 fourcc=cv2.VideoWriter_fourcc(*'mp4v'),\n                                 fps=args.save_video_fps,\n                                 frameSize=(512, 512))\n\n    for idx, img in enumerate(input_img_list):\n        face_helper.clean_all()\n        face_helper.read_image(img)\n        face_helper.all_landmarks_5 = [avg_landmarks[idx]]\n        face_helper.align_warp_face()\n\n    
    img_abs_path = os.path.join(output_path, str(idx).zfill(8)+'.png')\n        cv2.imwrite(img_abs_path, face_helper.cropped_faces[0], [cv2.IMWRITE_PNG_COMPRESSION, 0])\n\n        if args.save_video:\n            writer.write(face_helper.cropped_faces[0])\n\n    if args.save_video:\n        writer.release()\n\n    print(f'All results are saved in {result_root}')\n\n\nif __name__ == '__main__':\n    device = get_device()\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument('-i', '--input_path', type=str, default='../dataset/real_LQ/',\n                        help='no warped images')\n    parser.add_argument('-o', '--output_path', type=str, default='results/',\n                        help='Output folder. Default: results/')\n    parser.add_argument('-l', '--ldmk_folder_path', type=str, required=True,\n                        help='landmarks info folder.')\n    parser.add_argument('--save_video', action='store_true',\n                        help='Save output as video. Default: False')\n    parser.add_argument('-s', '--upscale', type=int, default=1,\n                        help='The final upsampling scale of the image. Default: 1')\n    parser.add_argument('--detection_model', type=str, default='retinaface_resnet50',\n                        help='Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n, dlib. \\\n                Default: retinaface_resnet50')\n    parser.add_argument('--bg_tile', type=int, default=0,\n                        help='Tile size for background sampler. Default: 400')\n    parser.add_argument('--save_video_fps', type=float, default=24,\n                        help='Frame rate for saving video. Default: 20')\n\n    args = parser.parse_args()\n\n    # ------------------ set up FaceRestoreHelper -------------------\n    face_helper = FaceRestoreHelper(\n        args.upscale,\n        face_size=512,\n        crop_ratio=(1, 1),\n        det_model=args.detection_model,\n        save_ext='png',\n        use_parse=False,\n        device=device)\n\n    for _, clip_name in enumerate(tqdm(os.listdir(args.input_path))):\n        process_single(args,\n                       face_helper,\n                       os.path.join(args.input_path, clip_name), args.ldmk_folder_path)"
  },
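  {
    "path": "scripts/examples/landmark_file_sketch.py",
    "content": "# Illustrative sketch (hypothetical file, not part of the release): the plain-text\n# landmark format consumed by scripts/warp_images.py -- one line per frame with\n# 10 whitespace-separated floats (x, y for 5 landmarks), stored at\n# <ldmk_folder_path>/<clip_name>.txt.\nimport numpy as np\n\n\ndef write_landmarks(path, landmarks):\n    # landmarks: (t, 5, 2) array -> one 'x1 y1 ... x5 y5' line per frame.\n    with open(path, \"w\") as f:\n        for frame in landmarks:\n            f.write(\" \".join(f\"{v:.2f}\" for v in frame.reshape(10)) + \"\\n\")\n\n\ndef read_landmarks(path):\n    # Mirrors the parsing loop in warp_images.py.\n    landmarks = []\n    with open(path, \"r\") as f:\n        for line in f:\n            vals = [float(v) for v in line.strip().split()]\n            landmarks.append(np.array(vals).reshape(5, 2))\n    return landmarks\n\n\nif __name__ == \"__main__\":\n    lm = np.random.rand(3, 5, 2) * 512\n    write_landmarks(\"clip0001.txt\", lm)\n    parsed = read_landmarks(\"clip0001.txt\")\n    print(len(parsed), parsed[0].shape)  # 3 (5, 2)\n"
  },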
  {
    "path": "train.sh",
    "content": "CUDA_VISIBLE_DEVICES=0 torchrun \\\n    --nproc_per_node=1 --master_port=29597 \\\n    basicsr/train.py \\\n    -opt options/clip5_bs2_512_align_nofix_multiscale.yaml \\\n    --launcher pytorch\n"
  }
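  ,
  {
    "path": "scripts/examples/inspect_ckpt_sketch.py",
    "content": "# Illustrative sketch (hypothetical file, not part of the release): the inference\n# scripts load the \"params_ema\" weights from checkpoints produced by\n# basicsr/train.py (launched via train.sh); this prints the stored keys so a\n# checkpoint can be sanity-checked before inference.\nimport argparse\nimport torch\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--ckpt_path\", type=str, required=True)\n    args = parser.parse_args()\n    ckpt = torch.load(args.ckpt_path, map_location=\"cpu\")\n    print(\"top-level keys:\", sorted(ckpt.keys()))\n    state = ckpt[\"params_ema\"]\n    print(f\"params_ema: {len(state)} tensors\")\n    for name, tensor in list(state.items())[:5]:\n        print(name, tuple(tensor.shape))\n"
  }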
]