Repository: IDKiro/CBDNet-pytorch
Branch: master
Commit: 09a2e55b2098
Files: 20
Total size: 68.7 KB

Directory structure:
CBDNet-pytorch/

├── .gitignore
├── LICENSE
├── README.md
├── dataset/
│   ├── __init__.py
│   └── loader.py
├── model/
│   ├── __init__.py
│   └── cbdnet.py
├── predict.py
├── train.py
└── utils/
    ├── __init__.py
    ├── common.py
    └── syn/
        ├── ISP_implement.py
        ├── generate_dataset.py
        ├── metadata/
        │   ├── 201_CRF_data.mat
        │   ├── cameras.json
        │   └── dorfCurvesInv.mat
        └── modules/
            ├── Demosaicing_malvar2004.py
            ├── __init__.py
            ├── masks.py
            └── tone_mapping_cython.pyx

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Add by user
.vscode/
result/
data/
save_model/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 IDKiro

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# CBDNet-pytorch

An unofficial PyTorch implementation of CBDNet.

We trained on higher-quality real and synthetic datasets and achieved better performance on DND.

[CBDNet in MATLAB](https://github.com/GuoShi28/CBDNet)

[CBDNet in Tensorflow](https://github.com/IDKiro/CBDNet-tensorflow)

## Quick Start

Download the dataset and pretrained model from [GoogleDrive](https://drive.google.com/drive/folders/1-e2nPCr_eP1cTDhFFes27Rjj-QXzMk5u?usp=sharing).

Extract the files into the `data` and `save_model` folders as follows:

```
~/
  data/
    SIDD_train/
      ... (scene id)
    Syn_train/
      ... (id)
    DND/
      images_srgb/
        ... (mat files)
      ... (mat files)
  save_model/
    checkpoint.pth.tar
```

Train the model:

```
python train.py
```

Predict using the trained model:

```
python predict.py input_filename output_filename
```

## Network Structure

![Image of Network](imgs/CBDNet_v13.png)

## Realistic Noise Model
Given a clean image `x`, the realistic noise model can be represented as:

![](http://latex.codecogs.com/gif.latex?\\textbf{y}=f(\\textbf{DM}(\\textbf{L}+n(\\textbf{L}))))

![](http://latex.codecogs.com/gif.latex?n(\\textbf{L})=n_s(\\textbf{L})+n_c)

where `y` is the noisy image, `f(.)` is the camera response function (CRF), the irradiance ![](http://latex.codecogs.com/gif.latex?\\textbf{L}=\\textbf{M}f^{-1}(\\textbf{x})), `M(.)` converts an sRGB image to a Bayer image, and `DM(.)` is the demosaicing function.

When denoising of compressed (JPEG) images is also considered, the noise model becomes:

![](http://latex.codecogs.com/gif.latex?\\textbf{y}=JPEG(f(\\textbf{DM}(\\textbf{L}+n(\\textbf{L})))))
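The noise model above can be sketched in a few lines of NumPy. This is only a minimal illustration, not this repo's synthesis code (the full ISP-based pipeline lives in `utils/syn/ISP_implement.py`): it omits mosaicing/demosaicing `DM(.)` and JPEG compression, uses a gamma curve as a stand-in for the CRF `f`, and the noise parameters `sigma_s`, `sigma_c` are arbitrary example values.

```python
import numpy as np

def synthesize_noisy(x, sigma_s=0.05, sigma_c=0.01, gamma=2.2, seed=0):
    """Sketch of y = f(L + n(L)) with n(L) = n_s(L) + n_c.

    Assumptions (not from this repo): a gamma curve stands in for the
    CRF f and its inverse, and DM(.)/JPEG are omitted, so this only
    illustrates the signal-dependent plus constant noise terms.
    """
    rng = np.random.default_rng(seed)
    L = np.clip(x, 0, 1) ** gamma                 # irradiance L = f^{-1}(x)
    std = np.sqrt(sigma_s ** 2 * L + sigma_c ** 2)  # per-pixel std: n_s(L) + n_c
    noisy_L = L + std * rng.standard_normal(L.shape)
    y = np.clip(noisy_L, 0, 1) ** (1.0 / gamma)   # back to sRGB via f
    return y.astype(np.float32)

clean = np.full((8, 8, 3), 0.5, dtype=np.float32)
noisy = synthesize_noisy(clean)
```

Note how the noise standard deviation grows with the irradiance `L`: brighter regions get more noise, which is the signal-dependent behavior the FCN subnetwork is trained to estimate.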

## Result

![](imgs/results.png)


================================================
FILE: dataset/__init__.py
================================================


================================================
FILE: dataset/loader.py
================================================
import os
import random
import torch
import numpy as np
import glob
from torch.utils.data import Dataset

from utils import read_img, hwc_to_chw


def get_patch(imgs, patch_size):
	H = imgs[0].shape[0]
	W = imgs[0].shape[1]

	ps_temp = min(H, W, patch_size)

	xx = np.random.randint(0, W-ps_temp) if W > ps_temp else 0
	yy = np.random.randint(0, H-ps_temp) if H > ps_temp else 0

	for i in range(len(imgs)):
		imgs[i] = imgs[i][yy:yy+ps_temp, xx:xx+ps_temp, :]

	if np.random.randint(2, size=1)[0] == 1:
		for i in range(len(imgs)):
			imgs[i] = np.flip(imgs[i], axis=1)
	if np.random.randint(2, size=1)[0] == 1: 
		for i in range(len(imgs)):
			imgs[i] = np.flip(imgs[i], axis=0)
	if np.random.randint(2, size=1)[0] == 1:
		for i in range(len(imgs)):
			imgs[i] = np.transpose(imgs[i], (1, 0, 2))

	return imgs


class Real(Dataset):
	def __init__(self, root_dir, sample_num, patch_size=128):
		self.patch_size = patch_size

		folders = glob.glob(root_dir + '/*')
		folders.sort()

		self.clean_fns = [None] * sample_num
		for i in range(sample_num):
			self.clean_fns[i] = []

		for ind, folder in enumerate(folders):
			clean_imgs = glob.glob(folder + '/*GT_SRGB*')
			clean_imgs.sort()

			for clean_img in clean_imgs:
				self.clean_fns[ind % sample_num].append(clean_img)

	def __len__(self):
		l = len(self.clean_fns)
		return l

	def __getitem__(self, idx):
		clean_fn = random.choice(self.clean_fns[idx])

		clean_img = read_img(clean_fn)
		noise_img = read_img(clean_fn.replace('GT_SRGB', 'NOISY_SRGB'))

		if self.patch_size > 0:
			[clean_img, noise_img] = get_patch([clean_img, noise_img], self.patch_size)

		return hwc_to_chw(noise_img), hwc_to_chw(clean_img), np.zeros((3, self.patch_size, self.patch_size)), np.zeros((3, self.patch_size, self.patch_size))


class Syn(Dataset):
	def __init__(self, root_dir, sample_num, patch_size=128):
		self.patch_size = patch_size

		folders = glob.glob(root_dir + '/*')
		folders.sort()

		self.clean_fns = [None] * sample_num
		for i in range(sample_num):
			self.clean_fns[i] = []

		for ind, folder in enumerate(folders):
			clean_imgs = glob.glob(folder + '/*GT_SRGB*')
			clean_imgs.sort()

			for clean_img in clean_imgs:
				self.clean_fns[ind % sample_num].append(clean_img)

	def __len__(self):
		l = len(self.clean_fns)
		return l

	def __getitem__(self, idx):
		clean_fn = random.choice(self.clean_fns[idx])

		clean_img = read_img(clean_fn)
		noise_img = read_img(clean_fn.replace('GT_SRGB', 'NOISY_SRGB'))
		sigma_img = read_img(clean_fn.replace('GT_SRGB', 'SIGMA_SRGB')) / 15.	# inverse scaling

		if self.patch_size > 0:
			[clean_img, noise_img, sigma_img] = get_patch([clean_img, noise_img, sigma_img], self.patch_size)

		return hwc_to_chw(noise_img), hwc_to_chw(clean_img), hwc_to_chw(sigma_img), np.ones((3, self.patch_size, self.patch_size))

================================================
FILE: model/__init__.py
================================================


================================================
FILE: model/cbdnet.py
================================================

import torch
import torch.nn as nn
import torch.nn.functional as F


class single_conv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(single_conv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.conv(x)


class up(nn.Module):
    def __init__(self, in_ch):
        super(up, self).__init__()
        self.up = nn.ConvTranspose2d(in_ch, in_ch//2, 2, stride=2)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        
        # input is CHW
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]

        x1 = F.pad(x1, (diffX // 2, diffX - diffX//2,
                        diffY // 2, diffY - diffY//2))

        x = x2 + x1
        return x


class outconv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(outconv, self).__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        x = self.conv(x)
        return x


class FCN(nn.Module):
    def __init__(self):
        super(FCN, self).__init__()
        self.fcn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
            nn.ReLU(inplace=True)
        )
    
    def forward(self, x):
        return self.fcn(x)


class UNet(nn.Module):
    def __init__(self):
        super(UNet, self).__init__()
        
        self.inc = nn.Sequential(
            single_conv(6, 64),
            single_conv(64, 64)
        )

        self.down1 = nn.AvgPool2d(2)
        self.conv1 = nn.Sequential(
            single_conv(64, 128),
            single_conv(128, 128),
            single_conv(128, 128)
        )

        self.down2 = nn.AvgPool2d(2)
        self.conv2 = nn.Sequential(
            single_conv(128, 256),
            single_conv(256, 256),
            single_conv(256, 256),
            single_conv(256, 256),
            single_conv(256, 256),
            single_conv(256, 256)
        )

        self.up1 = up(256)
        self.conv3 = nn.Sequential(
            single_conv(128, 128),
            single_conv(128, 128),
            single_conv(128, 128)
        )

        self.up2 = up(128)
        self.conv4 = nn.Sequential(
            single_conv(64, 64),
            single_conv(64, 64)
        )

        self.outc = outconv(64, 3)

    def forward(self, x):
        inx = self.inc(x)

        down1 = self.down1(inx)
        conv1 = self.conv1(down1)

        down2 = self.down2(conv1)
        conv2 = self.conv2(down2)

        up1 = self.up1(conv2, conv1)
        conv3 = self.conv3(up1)

        up2 = self.up2(conv3, inx)
        conv4 = self.conv4(up2)

        out = self.outc(conv4)
        return out


class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.fcn = FCN()
        self.unet = UNet()
    
    def forward(self, x):
        noise_level = self.fcn(x)
        concat_img = torch.cat([x, noise_level], dim=1)
        out = self.unet(concat_img) + x
        return noise_level, out


class fixed_loss(nn.Module):
    def __init__(self):
        super().__init__()
        
    def forward(self, out_image, gt_image, est_noise, gt_noise, if_asym):
        l2_loss = F.mse_loss(out_image, gt_image)

        asym_loss = torch.mean(if_asym * torch.abs(0.3 - torch.lt(gt_noise, est_noise).float()) * torch.pow(est_noise - gt_noise, 2))

        h_x = est_noise.size()[2]
        w_x = est_noise.size()[3]
        count_h = self._tensor_size(est_noise[:, :, 1:, :])
        count_w = self._tensor_size(est_noise[:, :, :, 1:])
        h_tv = torch.pow((est_noise[:, :, 1:, :] - est_noise[:, :, :h_x-1, :]), 2).sum()
        w_tv = torch.pow((est_noise[:, :, :, 1:] - est_noise[:, :, :, :w_x-1]), 2).sum()
        tvloss = h_tv / count_h + w_tv / count_w

        loss = l2_loss + 0.5 * asym_loss + 0.05 * tvloss

        return loss

    def _tensor_size(self, t):
        return t.size()[1]*t.size()[2]*t.size()[3]

================================================
FILE: predict.py
================================================
import os, time, scipy.io, shutil
import numpy as np
import torch
import torch.nn as nn
import argparse
import cv2

from model.cbdnet import Network
from utils import read_img, chw_to_hwc, hwc_to_chw

parser = argparse.ArgumentParser(description = 'Test')
parser.add_argument('input_filename', type=str)
parser.add_argument('output_filename', type=str)
args = parser.parse_args()

save_dir = './save_model/'

model = Network()
model.cuda()
model = nn.DataParallel(model)

model.eval()

if os.path.exists(os.path.join(save_dir, 'checkpoint.pth.tar')):
    # load existing model
    model_info = torch.load(os.path.join(save_dir, 'checkpoint.pth.tar'))
    model.load_state_dict(model_info['state_dict'])
else:
    print('Error: no trained model detected!')
    exit(1)

input_image = read_img(args.input_filename)
input_var = torch.from_numpy(hwc_to_chw(input_image)).unsqueeze(0).cuda()

with torch.no_grad():
    _, output = model(input_var)

output_image = chw_to_hwc(output[0,...].cpu().numpy())
output_image = np.uint8(np.round(np.clip(output_image, 0, 1) * 255.))[:, :, ::-1]

cv2.imwrite(args.output_filename, output_image)

================================================
FILE: train.py
================================================
import os, time, shutil
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F

from utils import AverageMeter
from dataset.loader import Real, Syn
from model.cbdnet import Network, fixed_loss


parser = argparse.ArgumentParser(description = 'Train')
parser.add_argument('--bs', default=32, type=int, help='batch size')
parser.add_argument('--ps', default=128, type=int, help='patch size')
parser.add_argument('--lr', default=2e-4, type=float, help='learning rate')
parser.add_argument('--epochs', default=5000, type=int, help='sum of epochs')
args = parser.parse_args()


def train(train_loader, model, criterion, optimizer):
	losses = AverageMeter()
	model.train()

	for (noise_img, clean_img, sigma_img, flag) in train_loader:
		input_var = noise_img.cuda()
		target_var = clean_img.cuda()
		sigma_var = sigma_img.cuda()
		flag_var = flag.cuda()

		noise_level_est, output = model(input_var)

		loss = criterion(output, target_var, noise_level_est, sigma_var, flag_var)
		losses.update(loss.item())

		optimizer.zero_grad()
		loss.backward()
		optimizer.step()
	
	return losses.avg


if __name__ == '__main__':
	save_dir = './save_model/'

	model = Network()
	model.cuda()
	model = nn.DataParallel(model)

	if os.path.exists(os.path.join(save_dir, 'checkpoint.pth.tar')):
		# load existing model
		model_info = torch.load(os.path.join(save_dir, 'checkpoint.pth.tar'))
		print('==> loading existing model:', os.path.join(save_dir, 'checkpoint.pth.tar'))
		model.load_state_dict(model_info['state_dict'])
		optimizer = torch.optim.Adam(model.parameters())
		optimizer.load_state_dict(model_info['optimizer'])
		scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=args.epochs)
		scheduler.load_state_dict(model_info['scheduler'])
		cur_epoch = model_info['epoch']
	else:
		if not os.path.isdir(save_dir):
			os.makedirs(save_dir)
		# create model
		optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
		scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=args.epochs)
		cur_epoch = 0
		
	criterion = fixed_loss()
	criterion.cuda()

	train_dataset = Real('./data/SIDD_train/', 320, args.ps) + Syn('./data/Syn_train/', 100, args.ps)
	train_loader = torch.utils.data.DataLoader(
		train_dataset, batch_size=args.bs, shuffle=True, num_workers=8, pin_memory=True, drop_last=True)

	for epoch in range(cur_epoch, args.epochs + 1):
		loss = train(train_loader, model, criterion, optimizer)
		scheduler.step()

		torch.save({
			'epoch': epoch + 1,
			'state_dict': model.state_dict(),
			'optimizer' : optimizer.state_dict(),
			'scheduler' : scheduler.state_dict()}, 
			os.path.join(save_dir, 'checkpoint.pth.tar'))

		print('Epoch [{0}]\t'
			'lr: {lr:.6f}\t'
			'Loss: {loss:.5f}'
			.format(
			epoch,
			lr=optimizer.param_groups[-1]['lr'],
			loss=loss))


================================================
FILE: utils/__init__.py
================================================
from .common import AverageMeter, ListAverageMeter, read_img, hwc_to_chw, chw_to_hwc

================================================
FILE: utils/common.py
================================================
import numpy as np
import cv2


class AverageMeter(object):
	def __init__(self):
		self.reset()

	def reset(self):
		self.val = 0
		self.avg = 0
		self.sum = 0
		self.count = 0

	def update(self, val, n=1):
		self.val = val
		self.sum += val * n
		self.count += n
		self.avg = self.sum / self.count


class ListAverageMeter(object):
	"""Computes and stores the average and current values of a list"""
	def __init__(self):
		self.len = 10000  # set up the maximum length
		self.reset()

	def reset(self):
		self.val = [0] * self.len
		self.avg = [0] * self.len
		self.sum = [0] * self.len
		self.count = 0

	def set_len(self, n):
		self.len = n
		self.reset()

	def update(self, vals, n=1):
		assert len(vals) == self.len, 'length of vals not equal to self.len'
		self.val = vals
		for i in range(self.len):
			self.sum[i] += self.val[i] * n
		self.count += n
		for i in range(self.len):
			self.avg[i] = self.sum[i] / self.count
			

def read_img(filename):
	img = cv2.imread(filename)
	img = img[:,:,::-1] / 255.0
		
	img = np.array(img).astype('float32')

	return img


def hwc_to_chw(img):
	return np.transpose(img, axes=[2, 0, 1]).astype('float32')


def chw_to_hwc(img):
	return np.transpose(img, axes=[1, 2, 0]).astype('float32')


================================================
FILE: utils/syn/ISP_implement.py
================================================
import random
import numpy as np
import cv2
import os
import json
import scipy.io
import math
import skimage

from modules import demosaicing_CFA_Bayer_Malvar2004, CRF_Map_Cython, ICRF_Map_Cython


class ISP:
    def __init__(self, curve_path='./'):
        filename = os.path.join(curve_path, 'metadata/201_CRF_data.mat')
        CRFs = scipy.io.loadmat(filename)
        self.I = CRFs['I']
        self.B = CRFs['B']
        filename = os.path.join(curve_path, 'metadata/dorfCurvesInv.mat')
        inverseCRFs = scipy.io.loadmat(filename)
        self.I_inv = inverseCRFs['invI']
        self.B_inv = inverseCRFs['invB']
        filename = os.path.join(curve_path, 'metadata/cameras.json')
        with open(filename, 'r') as load_f:
            self.cameras = json.load(load_f)

    def ICRF_Map(self, img):
        invI_temp = self.I_inv[self.icrf_index, :]
        invB_temp = self.B_inv[self.icrf_index, :]
        out = ICRF_Map_Cython(img.astype(np.double), invI_temp.astype(np.double), invB_temp.astype(np.double))
        return out

    def CRF_Map(self, img):
        I_temp = self.I[self.icrf_index, :]  # shape: (1024, 1)
        B_temp = self.B[self.icrf_index, :]  # shape: (1024, 1)
        out = CRF_Map_Cython(img.astype(np.double), I_temp.astype(np.double), B_temp.astype(np.double))
        return out

    def RGB2XYZ(self, img):
        xyz = skimage.color.rgb2xyz(img)
        return xyz

    def XYZ2RGB(self, img):
        rgb = skimage.color.xyz2rgb(img)
        return rgb

    def XYZ2CAM(self, img):
        M_xyz2cam = np.reshape(self.M_xyz2cam, (3, 3))
        M_xyz2cam = M_xyz2cam / np.tile(np.sum(M_xyz2cam, axis=1), [3, 1]).T
        cam = self.apply_cmatrix(img, M_xyz2cam)
        cam = np.clip(cam, 0, 1)
        return cam

    def CAM2XYZ(self, img):
        M_xyz2cam = np.reshape(self.M_xyz2cam, (3, 3))
        M_xyz2cam = M_xyz2cam / np.tile(np.sum(M_xyz2cam, axis=1), [3, 1]).T
        M_cam2xyz = np.linalg.inv(M_xyz2cam)
        xyz = self.apply_cmatrix(img, M_cam2xyz)
        xyz = np.clip(xyz, 0, 1)
        return xyz

    def apply_cmatrix(self, img, matrix):
        r = (matrix[0, 0] * img[:, :, 0] + matrix[0, 1] * img[:, :, 1]
             + matrix[0, 2] * img[:, :, 2])
        g = (matrix[1, 0] * img[:, :, 0] + matrix[1, 1] * img[:, :, 1]
             + matrix[1, 2] * img[:, :, 2])
        b = (matrix[2, 0] * img[:, :, 0] + matrix[2, 1] * img[:, :, 1]
             + matrix[2, 2] * img[:, :, 2])
        r = np.expand_dims(r, axis=2)
        g = np.expand_dims(g, axis=2)
        b = np.expand_dims(b, axis=2)
        results = np.concatenate((r, g, b), axis=2)
        return results

    def mosaic_bayer(self, rgb):
        # analysis pattern
        num = np.zeros(4, dtype=int)
        # note: OpenCV stores images in BGR order
        temp = list(self.find(self.pattern, 'R'))
        num[temp] = 0
        temp = list(self.find(self.pattern, 'G'))
        num[temp] = 1
        temp = list(self.find(self.pattern, 'B'))
        num[temp] = 2

        mosaic_img = np.zeros((rgb.shape[0], rgb.shape[1]), dtype=rgb.dtype)
        mosaic_img[0::2, 0::2] = rgb[0::2, 0::2, num[0]]
        mosaic_img[0::2, 1::2] = rgb[0::2, 1::2, num[1]]
        mosaic_img[1::2, 0::2] = rgb[1::2, 0::2, num[2]]
        mosaic_img[1::2, 1::2] = rgb[1::2, 1::2, num[3]]
        return mosaic_img

    def WB_Mask(self, img, fr_now, fb_now):
        wb_mask = np.ones(img.shape)
        if self.pattern == 'RGGB':
            wb_mask[0::2, 0::2] = fr_now
            wb_mask[1::2, 1::2] = fb_now
        elif self.pattern == 'BGGR':
            wb_mask[1::2, 1::2] = fr_now
            wb_mask[0::2, 0::2] = fb_now
        elif self.pattern == 'GRBG':
            wb_mask[0::2, 1::2] = fr_now
            wb_mask[1::2, 0::2] = fb_now
        elif self.pattern == 'GBRG':
            wb_mask[1::2, 0::2] = fr_now
            wb_mask[0::2, 1::2] = fb_now
        return wb_mask


    def find(self, s, ch):
        # yield the indices of character ch in string s
        for i, ltr in enumerate(s):
            if ltr == ch:
                yield i

    def Demosaic(self, bayer):
        results = demosaicing_CFA_Bayer_Malvar2004(bayer, self.pattern)
        results = np.clip(results, 0, 1)
        return results
    
    def add_PG_noise(self, img):
        min_log = np.log([0.0001])
        max_log_s = np.log([0.01])

        log_sigma_s = min_log + np.random.rand(1) * (max_log_s - min_log)
        sigma_s = np.exp(log_sigma_s)

        line_c = 2.2 * log_sigma_s + 1.2
        offset_c = np.random.normal(0.0, 0.26)
        log_sigma_c = line_c + offset_c
        sigma_c = np.exp(log_sigma_c)

        # add noise
        sigma_total = np.sqrt(sigma_s * img + sigma_c)

        noisy_img = img +  \
            sigma_total * np.random.randn(img.shape[0], img.shape[1])
        return noisy_img, sigma_s, sigma_c

    def noise_generate_srgb(self, img, configs='DND'):
        # -------------------------- CAMERA SETTING --------------------------
        cameras = self.cameras[configs]
        camera = cameras[random.randint(0, len(cameras)-1)]

        self.icrf_index = random.randint(0, 200)

        try:
            self.pattern = camera['bayertype']
        except KeyError:
            self.pattern = random.choice(['GRBG', 'RGGB', 'GBRG', 'BGGR'])

        try:
            ColorMatrix1 = camera['ColorMatrix1']
            ColorMatrix2 = camera['ColorMatrix2']
            alpha = np.random.random_sample([1])
            self.M_xyz2cam = alpha * ColorMatrix1 + (1 - alpha) * ColorMatrix2
        except KeyError:
            cam_index = np.random.random((1, 4))
            cam_index = cam_index / np.sum(cam_index)
            self.M_xyz2cam = ([1.0234,-0.2969,-0.2266,-0.5625,1.6328,-0.0469,-0.0703,0.2188,0.6406] * cam_index[0, 0] + \
                            [0.4913,-0.0541,-0.0202,-0.613,1.3513,0.2906,-0.1564,0.2151,0.7183] * cam_index[0, 1] + \
                            [0.838,-0.263,-0.0639,-0.2887,1.0725,0.2496,-0.0627,0.1427,0.5438] * cam_index[0, 2] + \
                            [0.6596,-0.2079,-0.0562,-0.4782,1.3016,0.1933,-0.097,0.1581,0.5181] * cam_index[0, 3])

        try:
            min_offset = -0.05
            max_offset = 0.05
            AsShotNeutral = camera['AsShotNeutral']
            self.fr_now = AsShotNeutral[0] + random.uniform(min_offset, max_offset)
            self.fb_now = AsShotNeutral[2] + random.uniform(min_offset, max_offset)
        except KeyError:
            min_fc = 0.75
            max_fc = 1
            self.fr_now = random.uniform(min_fc, max_fc)
            self.fb_now = random.uniform(min_fc, max_fc)
        
        try:
            blacklevel = camera['blacklevel']
            whitelevel = camera['whitelevel']
        except KeyError:
            blacklevel = 254
            whitelevel = 4094
        
        # -------------------------- INVERSE ISP PROCESS --------------------------
        img_rgb = img
        # Step 1 : inverse tone mapping
        img_L = self.ICRF_Map(img_rgb)
        # Step 2 : from RGB to XYZ
        img_XYZ = self.RGB2XYZ(img_L)
        # Step 3: from XYZ to Cam
        img_Cam = self.XYZ2CAM(img_XYZ)
        # Step 4: Mosaic
        img_mosaic = self.mosaic_bayer(img_Cam)
        # Step 5: inverse White Balance
        wb_mask = self.WB_Mask(img_mosaic, self.fr_now, self.fb_now)
        img_mosaic = img_mosaic * wb_mask
        img_mosaic_gt = img_mosaic

        # -------------------------- POISSON-GAUSSIAN NOISE ON RAW --------------------------
        img_mosaic_noise, sigma_s, sigma_c = self.add_PG_noise(img_mosaic)

        # -------------------------- QUANTIZATION NOISE AND CLIPPING EFFECT ON RAW --------------------------
        upper_bound = math.pow(2, math.ceil(math.log(whitelevel + 1, 2))) - 1
        img_mosaic_noise = np.clip(np.floor(img_mosaic_noise * (whitelevel - blacklevel) + blacklevel), 0, upper_bound)
        img_mosaic_noise = (img_mosaic_noise - blacklevel) / (whitelevel - blacklevel)
        img_mosaic_gt = np.clip(np.floor(img_mosaic_gt * (whitelevel - blacklevel) + blacklevel), 0, upper_bound)
        img_mosaic_gt = (img_mosaic_gt - blacklevel) / (whitelevel - blacklevel)

        # -------------------------- ISP PROCESS --------------------------
        # Step 5 : White Balance
        wb_mask = self.WB_Mask(img_mosaic_noise, 1/self.fr_now, 1/self.fb_now)
        img_mosaic_noise = img_mosaic_noise * wb_mask
        img_mosaic_noise = np.clip(img_mosaic_noise, 0, 1)
        img_mosaic_gt = img_mosaic_gt * wb_mask
        img_mosaic_gt = np.clip(img_mosaic_gt, 0, 1)
        # Step 4 : Demosaic
        img_demosaic = self.Demosaic(img_mosaic_noise)
        img_demosaic_gt = self.Demosaic(img_mosaic_gt)
        # Step 3 : from Cam to XYZ
        img_IXYZ = self.CAM2XYZ(img_demosaic)
        img_IXYZ_gt = self.CAM2XYZ(img_demosaic_gt)
        # Step 2 : frome XYZ to RGB
        img_IL = self.XYZ2RGB(img_IXYZ)
        img_IL_gt = self.XYZ2RGB(img_IXYZ_gt)
        # Step 1 : tone mapping
        img_Irgb = self.CRF_Map(img_IL)
        img_Irgb_gt = self.CRF_Map(img_IL_gt)

        # -------------------------- QUANTIZATION NOISE AND CLIPPING EFFECT ON RGB --------------------------
        noise = np.clip(img_Irgb, 0, 1) - np.clip(img_Irgb_gt, 0, 1)
        img_Irgb_gt = np.clip(img_rgb, 0, 1)
        img_Irgb = np.clip((img_rgb + noise), 0, 1)

        sigma_total = np.sqrt(sigma_s * img + sigma_c)  # noise level map

        return np.uint8(np.round(img_Irgb_gt*255)), np.uint8(np.round(img_Irgb*255)), sigma_total


================================================
FILE: utils/syn/generate_dataset.py
================================================
import os
import random, math
import torch
import numpy as np
import glob
import cv2
from tqdm import tqdm
from skimage import io

from ISP_implement import ISP


if __name__ == '__main__':
	isp = ISP()

	source_dir = './source/'
	target_dir = './target/'

	if not os.path.isdir(target_dir):
		os.makedirs(target_dir)

	fns = glob.glob(os.path.join(source_dir, '*.png'))

	patch_size = 256

	for fn in tqdm(fns):
		img_rgb = cv2.imread(fn)[:, :, ::-1] / 255.0

		H = img_rgb.shape[0]
		W = img_rgb.shape[1]

		H_s = H // patch_size
		W_s = W // patch_size

		patch_id = 0

		for i in range(H_s):
			for j in range(W_s):
	
				yy = i * patch_size
				xx = j * patch_size

				patch_img_rgb = img_rgb[yy:yy+patch_size, xx:xx+patch_size, :]

				gt, noise, sigma = isp.noise_generate_srgb(patch_img_rgb)

				sigma = np.uint8(np.round(np.clip(sigma * 15, 0, 1) * 255))	# store as uint8

				filename = os.path.basename(fn)
				foldername = filename.split('.')[0]

				out_folder = os.path.join(target_dir, foldername)

				if not os.path.isdir(out_folder):
					os.makedirs(out_folder)

				io.imsave(os.path.join(out_folder, 'GT_SRGB_%d_%d.png' % (i, j)), gt)
				io.imsave(os.path.join(out_folder, 'NOISY_SRGB_%d_%d.png' % (i, j)), noise)
				io.imsave(os.path.join(out_folder, 'SIGMA_SRGB_%d_%d.png' % (i, j)), sigma)


================================================
FILE: utils/syn/metadata/cameras.json
================================================
{
    "DND": [
        {
            "bayertype": "GBRG",
            "blacklevel": 52.0,
            "whitelevel": 1023.0,
            "ColorMatrix1": [
                0.8203,
                -0.2266,
                -0.125,
                -0.3203,
                1.2656,
                0.0391,
                -0.0391,
                0.2266,
                0.4531
            ],
            "ColorMatrix2": [
                1.0234,
                -0.2969,
                -0.2266,
                -0.5625,
                1.6328,
                -0.0469,
                -0.0703,
                0.2188,
                0.6406
            ],
            "AsShotNeutral": [
                0.4922,
                1.0078,
                0.6016
            ]
        },
        {
            "bayertype": "RGGB",
            "blacklevel": 256.0,
            "whitelevel": 7680.0,
            "ColorMatrix1": [
                0.7216,
                -0.2921,
                0.035,
                -0.4204,
                1.1461,
                0.3143,
                -0.0767,
                0.1485,
                0.7418
            ],
            "ColorMatrix2": [
                0.4913,
                -0.0541,
                -0.0202,
                -0.613,
                1.3513,
                0.2906,
                -0.1564,
                0.2151,
                0.7183
            ],
            "AsShotNeutral": [
                0.419,
                1.0,
                0.6275
            ]
        },
        {
            "bayertype": "RGGB",
            "blacklevel": 254.0,
            "whitelevel": 4094.0,
            "ColorMatrix1": [
                0.9033,
                -0.3597,
                0.026,
                -0.2351,
                0.97,
                0.3111,
                -0.0181,
                0.0807,
                0.5838
            ],
            "ColorMatrix2": [
                0.838,
                -0.263,
                -0.0639,
                -0.2887,
                1.0725,
                0.2496,
                -0.0627,
                0.1427,
                0.5438
            ],
            "AsShotNeutral": [
                0.5246,
                1.0,
                0.5424
            ]
        },
        {
            "bayertype": "GBRG",
            "blacklevel": 254.0,
            "whitelevel": 4094.0,
            "ColorMatrix1": [
                0.9033,
                -0.3597,
                0.026,
                -0.2351,
                0.97,
                0.3111,
                -0.0181,
                0.0807,
                0.5838
            ],
            "ColorMatrix2": [
                0.838,
                -0.263,
                -0.0639,
                -0.2887,
                1.0725,
                0.2496,
                -0.0627,
                0.1427,
                0.5438
            ],
            "AsShotNeutral": [
                0.5246,
                1.0,
                0.5424
            ]
        },
        {
            "bayertype": "RGGB",
            "blacklevel": 400.0,
            "whitelevel": 7680.0,
            "ColorMatrix1": [
                0.7366,
                -0.3213,
                0.038,
                -0.3609,
                1.1127,
                0.2852,
                -0.0218,
                0.0694,
                0.5821
            ],
            "ColorMatrix2": [
                0.6596,
                -0.2079,
                -0.0562,
                -0.4782,
                1.3016,
                0.1933,
                -0.097,
                0.1581,
                0.5181
            ],
            "AsShotNeutral": [
                0.4044,
                1.0,
                0.5356
            ]
        },
        {
            "bayertype": "GRBG",
            "blacklevel": 400.0,
            "whitelevel": 7680.0,
            "ColorMatrix1": [
                0.7366,
                -0.3213,
                0.038,
                -0.3609,
                1.1127,
                0.2852,
                -0.0218,
                0.0694,
                0.5821
            ],
            "ColorMatrix2": [
                0.6596,
                -0.2079,
                -0.0562,
                -0.4782,
                1.3016,
                0.1933,
                -0.097,
                0.1581,
                0.5181
            ],
            "AsShotNeutral": [
                0.4044,
                1.0,
                0.5356
            ]
        }
    ],
    "SIDD": [
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.56640625,
                1.0,
                0.4951171875
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.4140625,
                1.0,
                0.5859375
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.484375,
                1.0,
                0.6328125
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.4609375,
                1.0,
                0.6875
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.406349,
                1.0,
                0.601468
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.5703125,
                1.0078125,
                0.5078125
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.407806,
                1.0,
                0.640901
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.640625,
                1.0,
                0.515625
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.71875,
                1.0,
                0.4921875
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.5478515625,
                1.0,
                0.576171875
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.546875,
                1.0,
                0.625
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.515625,
                1.0,
                0.65625
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.583984375,
                1.0,
                0.5869140625
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.5546875,
                0.9921875,
                0.5078125
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.5752,
                1.0,
                0.407076
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.796875,
                0.9921875,
                0.390625
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.71875,
                1.0,
                0.484375
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.751953125,
                1.0,
                0.59375
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.65625,
                0.9921875,
                0.4296875
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.590712,
                1.0,
                0.453448
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.609375,
                1.0,
                0.5
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.5625,
                1.0,
                0.515625
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.4951171875,
                1.0,
                0.576171875
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.4453125,
                0.9921875,
                0.53125
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.343221,
                1.0,
                0.593365
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.578125,
                1.0,
                0.53125
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.5859375,
                1.0,
                0.515625
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.5234375,
                1.0,
                0.5947265625
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.4140625,
                0.9921875,
                0.5703125
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.383592,
                1.0,
                0.555993
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.625,
                1.0,
                0.609375
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.578125,
                1.0,
                0.609375
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.5556640625,
                1.0,
                0.626953125
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.546875,
                0.9921875,
                0.4375
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.524523,
                1.0,
                0.480638
            ]
        },
        {
            "ColorMatrix1": [
                0.6796875,
                -0.078125,
                -0.09375,
                -0.4609375,
                1.296875,
                0.1328125,
                -0.109375,
                0.25,
                0.5234375
            ],
            "ColorMatrix2": [
                1.1875,
                -0.4140625,
                -0.25,
                -0.4609375,
                1.5,
                0.015625,
                -0.046875,
                0.2109375,
                0.59375
            ],
            "AsShotNeutral": [
                0.71875,
                1.0,
                0.5234375
            ]
        },
        {
            "ColorMatrix1": [
                0.5859375,
                0.0546875,
                -0.125,
                -0.6484375,
                1.5546875,
                0.0546875,
                -0.2421875,
                0.5625,
                0.390625
            ],
            "ColorMatrix2": [
                1.15625,
                -0.2890625,
                -0.3203125,
                -0.53125,
                1.5625,
                0.0625,
                -0.078125,
                0.28125,
                0.5625
            ],
            "AsShotNeutral": [
                0.59375,
                1.0,
                0.5546875
            ]
        },
        {
            "ColorMatrix1": [
                0.6640625,
                -0.0458984375,
                -0.1201171875,
                -0.54296875,
                1.4384765625,
                0.0712890625,
                -0.1767578125,
                0.4052734375,
                0.4755859375
            ],
            "ColorMatrix2": [
                1.23828125,
                -0.4443359375,
                -0.2783203125,
                -0.4501953125,
                1.4697265625,
                0.0693359375,
                -0.08984375,
                0.2919921875,
                0.61328125
            ],
            "AsShotNeutral": [
                0.6005859375,
                1.0,
                0.6416015625
            ]
        },
        {
            "ColorMatrix1": [
                0.7265625,
                -0.1953125,
                -0.0859375,
                -0.5625,
                1.3515625,
                0.1640625,
                -0.2265625,
                0.3046875,
                0.53125
            ],
            "ColorMatrix2": [
                1.0703125,
                -0.3125,
                -0.28125,
                -0.5625,
                1.65625,
                -0.1171875,
                -0.0546875,
                0.1875,
                0.5859375
            ],
            "AsShotNeutral": [
                0.546875,
                0.9921875,
                0.4375
            ]
        },
        {
            "ColorMatrix1": [
                0.8353,
                -0.3171,
                -0.1289,
                -0.3878,
                1.1893,
                0.2237,
                -0.038,
                0.1056,
                0.6397
            ],
            "ColorMatrix2": [
                0.7418,
                -0.2398,
                -0.061,
                -0.5006,
                1.2972,
                0.2248,
                -0.1074,
                0.1419,
                0.59
            ],
            "AsShotNeutral": [
                0.630736,
                1.0,
                0.419586
            ]
        }
    ]
}

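As a minimal sketch (not code from this repo), the camera profiles above could be consumed as follows; `normalize_raw` and `wb_gains` are hypothetical helper names, and the black/white-level normalization follows the common DNG convention, which this ISP pipeline is assumed to share:

```python
import json

# A profile shaped like the entries in cameras.json; key names
# match the file, values copied from its first Sensor entry.
profile = {
    "bayertype": "GBRG",
    "blacklevel": 52.0,
    "whitelevel": 1023.0,
    "AsShotNeutral": [0.4922, 1.0078, 0.6016],
}

def normalize_raw(value, profile):
    """Map a raw sensor value into [0, 1] using the profile's
    black and white levels (values below black clip to 0)."""
    black = profile["blacklevel"]
    white = profile["whitelevel"]
    return max(value - black, 0.0) / (white - black)

def wb_gains(profile):
    """White-balance gains are the per-channel reciprocal of
    AsShotNeutral (green is the reference channel, gain ~1)."""
    return [1.0 / n for n in profile["AsShotNeutral"]]

print(normalize_raw(52.0, profile))    # black level maps to 0.0
print(normalize_raw(1023.0, profile))  # white level maps to 1.0
print(wb_gains(profile))
```

`ColorMatrix1`/`ColorMatrix2` are row-major 3x3 XYZ-to-camera matrices (two illuminants), which the ISP would interpolate and invert for color correction; that step is omitted here.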
================================================
FILE: utils/syn/modules/Demosaicing_malvar2004.py
================================================
# -*- coding: utf-8 -*-
"""
Malvar (2004) Bayer CFA Demosaicing
===================================

*Bayer* CFA (Colour Filter Array) *Malvar (2004)* demosaicing.

References
----------
-   :cite:`Malvar2004a` : Malvar, H. S., He, L.-W., Cutler, R., & Way, O. M.
    (2004). High-Quality Linear Interpolation for Demosaicing of
    Bayer-Patterned Color Images. In International Conference of Acoustic,
    Speech and Signal Processing (pp. 5-8). Institute of Electrical and
    Electronics Engineers, Inc. Retrieved from
    http://research.microsoft.com/apps/pubs/default.aspx?id=102068
    https://colour-demosaicing.readthedocs.io/en/develop/_modules/colour_demosaicing/bayer/demosaicing/malvar2004.html
"""

from __future__ import division, unicode_literals

import numpy as np
from scipy.ndimage import convolve  # scipy.ndimage.filters is deprecated and removed in recent SciPy

#from colour.utilities import tstack
import cv2

from .masks import masks_CFA_Bayer

__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2015-2018 - Colour Developers'
__license__ = 'New BSD License - http://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-science@googlegroups.com'
__status__ = 'Production'

__all__ = ['demosaicing_CFA_Bayer_Malvar2004']


def demosaicing_CFA_Bayer_Malvar2004(CFA, pattern='RGGB'):
    """
    Returns the demosaiced *RGB* colourspace array from given *Bayer* CFA using
    *Malvar (2004)* demosaicing algorithm.

    Parameters
    ----------
    CFA : array_like
        *Bayer* CFA.
    pattern : unicode, optional
        **{'RGGB', 'BGGR', 'GRBG', 'GBRG'}**,
        Arrangement of the colour filters on the pixel array.

    Returns
    -------
    ndarray
        *RGB* colourspace array.

    Notes
    -----
    -   The definition output is not clipped in range [0, 1] : this allows for
        direct HDRI / radiance image generation on *Bayer* CFA data and post
        demosaicing of the high dynamic range data as showcased in this
        `Jupyter Notebook <https://github.com/colour-science/colour-hdri/\
blob/develop/colour_hdri/examples/\
examples_merge_from_raw_files_with_post_demosaicing.ipynb>`_.

    References
    ----------
    -   :cite:`Malvar2004a`

    Examples
    --------
    >>> CFA = np.array(
    ...     [[0.30980393, 0.36078432, 0.30588236, 0.3764706],
    ...      [0.35686275, 0.39607844, 0.36078432, 0.40000001]])
    >>> demosaicing_CFA_Bayer_Malvar2004(CFA)
    array([[[ 0.30980393,  0.31666668,  0.32941177],
            [ 0.33039216,  0.36078432,  0.38112746],
            [ 0.30588236,  0.32794118,  0.34877452],
            [ 0.36274511,  0.3764706 ,  0.38480393]],
    <BLANKLINE>
           [[ 0.34828432,  0.35686275,  0.36568628],
            [ 0.35318628,  0.38186275,  0.39607844],
            [ 0.3379902 ,  0.36078432,  0.3754902 ],
            [ 0.37769609,  0.39558825,  0.40000001]]])
    >>> CFA = np.array(
    ...     [[0.3764706, 0.360784320, 0.40784314, 0.3764706],
    ...      [0.35686275, 0.30980393, 0.36078432, 0.29803923]])
    >>> demosaicing_CFA_Bayer_Malvar2004(CFA, 'BGGR')
    array([[[ 0.35539217,  0.37058825,  0.3764706 ],
            [ 0.34264707,  0.36078432,  0.37450981],
            [ 0.36568628,  0.39607844,  0.40784314],
            [ 0.36568629,  0.3764706 ,  0.3882353 ]],
    <BLANKLINE>
           [[ 0.34411765,  0.35686275,  0.36200981],
            [ 0.30980393,  0.32990197,  0.34975491],
            [ 0.33039216,  0.36078432,  0.38063726],
            [ 0.29803923,  0.30441178,  0.31740197]]])
    """

    CFA = np.asarray(CFA)
    R_m, G_m, B_m = masks_CFA_Bayer(CFA.shape, pattern)

    GR_GB = np.asarray(
        [[0, 0, -1, 0, 0],
         [0, 0, 2, 0, 0],
         [-1, 2, 4, 2, -1],
         [0, 0, 2, 0, 0],
         [0, 0, -1, 0, 0]]) / 8  # yapf: disable

    Rg_RB_Bg_BR = np.asarray(
        [[0, 0, 0.5, 0, 0],
         [0, -1, 0, -1, 0],
         [-1, 4, 5, 4, -1],
         [0, -1, 0, -1, 0],
         [0, 0, 0.5, 0, 0]]) / 8  # yapf: disable

    Rg_BR_Bg_RB = np.transpose(Rg_RB_Bg_BR)

    Rb_BB_Br_RR = np.asarray(
        [[0, 0, -1.5, 0, 0],
         [0, 2, 0, 2, 0],
         [-1.5, 0, 6, 0, -1.5],
         [0, 2, 0, 2, 0],
         [0, 0, -1.5, 0, 0]]) / 8  # yapf: disable

    R = CFA * R_m
    G = CFA * G_m
    B = CFA * B_m

    del G_m

    G = np.where(np.logical_or(R_m == 1, B_m == 1), convolve(CFA, GR_GB), G)

    RBg_RBBR = convolve(CFA, Rg_RB_Bg_BR)
    RBg_BRRB = convolve(CFA, Rg_BR_Bg_RB)
    RBgr_BBRR = convolve(CFA, Rb_BB_Br_RR)

    del GR_GB, Rg_RB_Bg_BR, Rg_BR_Bg_RB, Rb_BB_Br_RR

    # Red rows.
    R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
    # Red columns.
    R_c = np.any(R_m == 1, axis=0)[np.newaxis] * np.ones(R.shape)
    # Blue rows.
    B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
    # Blue columns
    B_c = np.any(B_m == 1, axis=0)[np.newaxis] * np.ones(B.shape)

    del R_m, B_m

    R = np.where(np.logical_and(R_r == 1, B_c == 1), RBg_RBBR, R)
    R = np.where(np.logical_and(B_r == 1, R_c == 1), RBg_BRRB, R)

    B = np.where(np.logical_and(B_r == 1, R_c == 1), RBg_RBBR, B)
    B = np.where(np.logical_and(R_r == 1, B_c == 1), RBg_BRRB, B)

    R = np.where(np.logical_and(B_r == 1, B_c == 1), RBgr_BBRR, R)
    B = np.where(np.logical_and(R_r == 1, R_c == 1), RBgr_BBRR, B)

    del RBg_RBBR, RBg_BRRB, RBgr_BBRR, R_r, R_c, B_r, B_c

    #return tstack((R, G, B))
    return cv2.merge([R, G, B])
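
A useful property of the Malvar (2004) kernels above is that each one integrates to 1 (after the division by 8), so a uniform CFA region stays uniform after interpolation. This quick standalone NumPy check (not part of the repo) verifies the normalization of all three kernel families:

```python
import numpy as np

# The three Malvar (2004) kernel families, copied from
# demosaicing_CFA_Bayer_Malvar2004 above; each is normalized by 8.
GR_GB = np.array(
    [[0, 0, -1, 0, 0],
     [0, 0, 2, 0, 0],
     [-1, 2, 4, 2, -1],
     [0, 0, 2, 0, 0],
     [0, 0, -1, 0, 0]]) / 8

Rg_RB_Bg_BR = np.array(
    [[0, 0, 0.5, 0, 0],
     [0, -1, 0, -1, 0],
     [-1, 4, 5, 4, -1],
     [0, -1, 0, -1, 0],
     [0, 0, 0.5, 0, 0]]) / 8

Rb_BB_Br_RR = np.array(
    [[0, 0, -1.5, 0, 0],
     [0, 2, 0, 2, 0],
     [-1.5, 0, 6, 0, -1.5],
     [0, 2, 0, 2, 0],
     [0, 0, -1.5, 0, 0]]) / 8

# Each kernel sums to 1, so flat (DC) regions pass through unchanged.
for k in (GR_GB, Rg_RB_Bg_BR, Rb_BB_Br_RR):
    assert np.isclose(k.sum(), 1.0)
```

The transposed kernel `Rg_BR_Bg_RB = Rg_RB_Bg_BR.T` trivially shares the same sum, which is why the function only needs to check the three bases.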

================================================
FILE: utils/syn/modules/__init__.py
================================================
from .Demosaicing_malvar2004 import demosaicing_CFA_Bayer_Malvar2004

# pyximport compiles tone_mapping_cython.pyx on first import,
# so a working C compiler is required at runtime.
import pyximport
pyximport.install()
from .tone_mapping_cython import CRF_Map_Cython, ICRF_Map_Cython

================================================
FILE: utils/syn/modules/masks.py
================================================
# -*- coding: utf-8 -*-
"""
Bayer CFA Masks
===============

*Bayer* CFA (Colour Filter Array) masks generation.
"""

from __future__ import division, unicode_literals

import numpy as np

__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2015-2018 - Colour Developers'
__license__ = 'New BSD License - http://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-science@googlegroups.com'
__status__ = 'Production'

__all__ = ['masks_CFA_Bayer']


def masks_CFA_Bayer(shape, pattern='RGGB'):
    """
    Returns the *Bayer* CFA red, green and blue masks for given pattern.

    Parameters
    ----------
    shape : array_like
        Dimensions of the *Bayer* CFA.
    pattern : unicode, optional
        **{'RGGB', 'BGGR', 'GRBG', 'GBRG'}**,
        Arrangement of the colour filters on the pixel array.

    Returns
    -------
    tuple
        *Bayer* CFA red, green and blue masks.

    Examples
    --------
    >>> from pprint import pprint
    >>> shape = (3, 3)
    >>> pprint(masks_CFA_Bayer(shape))
    (array([[ True, False,  True],
           [False, False, False],
           [ True, False,  True]], dtype=bool),
     array([[False,  True, False],
           [ True, False,  True],
           [False,  True, False]], dtype=bool),
     array([[False, False, False],
           [False,  True, False],
           [False, False, False]], dtype=bool))
    >>> pprint(masks_CFA_Bayer(shape, 'BGGR'))
    (array([[False, False, False],
           [False,  True, False],
           [False, False, False]], dtype=bool),
     array([[False,  True, False],
           [ True, False,  True],
           [False,  True, False]], dtype=bool),
     array([[ True, False,  True],
           [False, False, False],
           [ True, False,  True]], dtype=bool))
    """

    pattern = pattern.upper()

    channels = dict((channel, np.zeros(shape)) for channel in 'RGB')
    for channel, (y, x) in zip(pattern, [(0, 0), (0, 1), (1, 0), (1, 1)]):
        channels[channel][y::2, x::2] = 1

    return tuple(channels[c].astype(bool) for c in 'RGB')
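
Since the four pattern letters are assigned to the four positions of a 2x2 tile, the three masks always partition the sensor: every pixel belongs to exactly one channel, with green covering half the sites and red and blue a quarter each. A self-contained sketch (the function body is copied from `masks_CFA_Bayer` above so the check runs standalone):

```python
import numpy as np

def masks_CFA_Bayer(shape, pattern='RGGB'):
    # Standalone copy of the repo's mask generator for illustration.
    pattern = pattern.upper()
    channels = {channel: np.zeros(shape) for channel in 'RGB'}
    for channel, (y, x) in zip(pattern, [(0, 0), (0, 1), (1, 0), (1, 1)]):
        channels[channel][y::2, x::2] = 1
    return tuple(channels[c].astype(bool) for c in 'RGB')

for pattern in ('RGGB', 'BGGR', 'GRBG', 'GBRG'):
    R_m, G_m, B_m = masks_CFA_Bayer((4, 6), pattern)
    # Exactly one mask is True at every pixel.
    assert np.all(R_m.astype(int) + G_m.astype(int) + B_m.astype(int) == 1)
    # Green covers 1/2 of the 24 sites, red and blue 1/4 each.
    assert G_m.sum() == 12 and R_m.sum() == 6 and B_m.sum() == 6
```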


================================================
FILE: utils/syn/modules/tone_mapping_cython.pyx
================================================
# Power by Zongsheng Yue 2019-06-11 21:23:27

import numpy as np
from math import floor

def CRF_Map_Cython(double[:, :, :] img, double[:] I, double[:] B):
    cdef Py_ssize_t h = img.shape[0]
    cdef Py_ssize_t w = img.shape[1]
    cdef Py_ssize_t c = img.shape[2]
    cdef Py_ssize_t bin = I.shape[0]

    cdef int ii, jj, cc, start_bin, b, index
    out = np.zeros((h,w,c), dtype=np.float64)
    cdef double[:,:,:] out_view = out

    cdef double tiny_bin = 9.7656e-04      # 1/1024 = 9.7656e-04
    cdef double min_tiny_bin = 0.0039
    cdef double temp, tempI, comp1, comp2

    for ii in range(h):
        for jj in range(w):
            for cc in range(c):
                temp = img[ii, jj, cc]
                start_bin = 1
                if temp > min_tiny_bin:
                    start_bin = floor(temp / tiny_bin - 1)
                for b in range(start_bin, bin):
                    tempI = I[b]
                    if tempI >= temp:
                        index = b
                        if index > 1:
                            comp1 = tempI - temp
                            comp2 = temp - I[index - 1]
                            if comp2 < comp1:
                                index -= 1
                        out_view[ii, jj, cc] = B[index]
                        break
    return out

def ICRF_Map_Cython(double[:, :, :] img, double[:] invI, double[:] invB):
    cdef Py_ssize_t h = img.shape[0]
    cdef Py_ssize_t w = img.shape[1]
    cdef Py_ssize_t c = img.shape[2]
    cdef Py_ssize_t bin = invI.shape[0]

    cdef int ii, jj, cc, start_bin, b, index
    out = np.zeros((h,w,c), dtype=np.float64)
    cdef double[:,:,:] out_view = out

    cdef double tiny_bin = 9.7656e-04      # 1/1024 = 9.7656e-04
    cdef double min_tiny_bin = 0.0039
    cdef double temp, tempB, comp1, comp2

    for ii in range(h):
        for jj in range(w):
            for cc in range(c):
                temp = img[ii, jj, cc]
                start_bin = 1
                if temp > min_tiny_bin:
                    start_bin = floor(temp / tiny_bin - 1)
                for b in range(start_bin, bin):
                    tempB = invB[b]
                    if tempB >= temp:
                        index = b
                        if index > 1:
                            comp1 = tempB - temp
                            comp2 = temp - invB[index - 1]
                            if comp2 < comp1:
                                index -= 1
                        out_view[ii, jj, cc] = invI[index]
                        break
    return out
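
The inner loops above implement a nearest-neighbour lookup on a monotone curve: for each pixel intensity, find the closest sample on `I` (resp. `invB`) and emit the paired `B` (resp. `invI`) value. A vectorized NumPy sketch of the same idea (a hypothetical equivalent for illustration, not the repo's API, and without the `start_bin` early-skip optimization):

```python
import numpy as np

def crf_map_numpy(img, I, B):
    """Nearest-neighbour mapping of img through the sampled curve (I, B).

    I must be sorted ascending; B holds the curve values at those samples.
    """
    # First curve sample >= pixel value (I[idx-1] < img <= I[idx]).
    idx = np.searchsorted(I, img, side='left')
    idx = np.clip(idx, 0, len(I) - 1)
    # Snap to I[idx - 1] when it is strictly closer, as the Cython loop does.
    lower = np.clip(idx - 1, 0, len(I) - 1)
    use_lower = (idx > 0) & (img - I[lower] < I[idx] - img)
    idx = np.where(use_lower, lower, idx)
    return B[idx]

# With an identity curve, the mapping should quantize each pixel to the
# nearest of 1024 bins, i.e. stay within half a bin of the input.
I = np.linspace(0.0, 1.0, 1024)
B = I.copy()
img = np.random.rand(8, 8, 3)
out = crf_map_numpy(img, I, B)
assert np.allclose(out, img, atol=1.0 / 1024)
```

The inverse mapping (`ICRF_Map_Cython`) is the same operation with the roles of the two arrays swapped.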

SYMBOL INDEX (61 symbols across 7 files)

FILE: dataset/loader.py
  function get_patch (line 11) | def get_patch(imgs, patch_size):
  class Real (line 36) | class Real(Dataset):
    method __init__ (line 37) | def __init__(self, root_dir, sample_num, patch_size=128):
    method __len__ (line 54) | def __len__(self):
    method __getitem__ (line 58) | def __getitem__(self, idx):
  class Syn (line 70) | class Syn(Dataset):
    method __init__ (line 71) | def __init__(self, root_dir, sample_num, patch_size=128):
    method __len__ (line 88) | def __len__(self):
    method __getitem__ (line 92) | def __getitem__(self, idx):

FILE: model/cbdnet.py
  class single_conv (line 7) | class single_conv(nn.Module):
    method __init__ (line 8) | def __init__(self, in_ch, out_ch):
    method forward (line 15) | def forward(self, x):
  class up (line 19) | class up(nn.Module):
    method __init__ (line 20) | def __init__(self, in_ch):
    method forward (line 24) | def forward(self, x1, x2):
  class outconv (line 38) | class outconv(nn.Module):
    method __init__ (line 39) | def __init__(self, in_ch, out_ch):
    method forward (line 43) | def forward(self, x):
  class FCN (line 48) | class FCN(nn.Module):
    method __init__ (line 49) | def __init__(self):
    method forward (line 64) | def forward(self, x):
  class UNet (line 68) | class UNet(nn.Module):
    method __init__ (line 69) | def __init__(self):
    method forward (line 109) | def forward(self, x):
  class Network (line 128) | class Network(nn.Module):
    method __init__ (line 129) | def __init__(self):
    method forward (line 134) | def forward(self, x):
  class fixed_loss (line 141) | class fixed_loss(nn.Module):
    method __init__ (line 142) | def __init__(self):
    method forward (line 145) | def forward(self, out_image, gt_image, est_noise, gt_noise, if_asym):
    method _tensor_size (line 162) | def _tensor_size(self, t):

FILE: train.py
  function train (line 20) | def train(train_loader, model, criterion, optimizer):

FILE: utils/common.py
  class AverageMeter (line 5) | class AverageMeter(object):
    method __init__ (line 6) | def __init__(self):
    method reset (line 9) | def reset(self):
    method update (line 15) | def update(self, val, n=1):
  class ListAverageMeter (line 22) | class ListAverageMeter(object):
    method __init__ (line 24) | def __init__(self):
    method reset (line 28) | def reset(self):
    method set_len (line 34) | def set_len(self, n):
    method update (line 38) | def update(self, vals, n=1):
  function read_img (line 48) | def read_img(filename):
  function hwc_to_chw (line 57) | def hwc_to_chw(img):
  function chw_to_hwc (line 61) | def chw_to_hwc(img):

FILE: utils/syn/ISP_implement.py
  class ISP (line 13) | class ISP:
    method __init__ (line 14) | def __init__(self, curve_path='./'):
    method ICRF_Map (line 27) | def ICRF_Map(self, img):
    method CRF_Map (line 33) | def CRF_Map(self, img):
    method RGB2XYZ (line 39) | def RGB2XYZ(self, img):
    method XYZ2RGB (line 43) | def XYZ2RGB(self, img):
    method XYZ2CAM (line 47) | def XYZ2CAM(self, img):
    method CAM2XYZ (line 54) | def CAM2XYZ(self, img):
    method apply_cmatrix (line 62) | def apply_cmatrix(self, img, matrix):
    method mosaic_bayer (line 75) | def mosaic_bayer(self, rgb):
    method WB_Mask (line 93) | def WB_Mask(self, img, fr_now, fb_now):
    method find (line 110) | def find(self, str, ch):
    method Demosaic (line 115) | def Demosaic(self, bayer):
    method add_PG_noise (line 120) | def add_PG_noise(self, img):
    method noise_generate_srgb (line 139) | def noise_generate_srgb(self, img, configs='DND'):

FILE: utils/syn/modules/Demosaicing_malvar2004.py
  function demosaicing_CFA_Bayer_Malvar2004 (line 39) | def demosaicing_CFA_Bayer_Malvar2004(CFA, pattern='RGGB'):

FILE: utils/syn/modules/masks.py
  function masks_CFA_Bayer (line 23) | def masks_CFA_Bayer(shape, pattern='RGGB'):

About this extraction

This page contains the full source code of the IDKiro/CBDNet-pytorch GitHub repository, extracted as plain text for AI agents and large language models: 20 files (68.7 KB, ~20.6k tokens) plus a symbol index of 61 functions, classes, and methods.

Extracted by GitExtract, a free GitHub-repo-to-text converter built by Nikandr Surkov.