Full Code of minyuanye/SIUN for AI

Repository: minyuanye/SIUN
Branch: master
Commit: d62bc50915d0
Files: 19
Total size: 24.6 MB

Directory structure:
gitextract_k65vub75/

├── README.md
└── code/
    ├── __init__.py
    ├── deblur.py
    ├── model/
    │   ├── generator.h5
    │   └── generator.json
    ├── requirements.txt
    └── src/
        ├── __init__.py
        ├── application.py
        ├── config.py
        ├── lib/
        │   ├── MLVSharpnessMeasure.py
        │   ├── __init__.py
        │   ├── data_helper.py
        │   ├── data_producer.py
        │   └── tf_util.py
        ├── model/
        │   ├── __init__.py
        │   └── model.py
        ├── tester.py
        ├── trainer.py
        └── verification.py

================================================
FILE CONTENTS
================================================

================================================
FILE: README.md
================================================
# Scale-Iterative Upscaling Network for Image Deblurring
by Minyuan Ye, Dong Lyu and Gengsheng Chen<br>
pdf [[main](https://ieeexplore.ieee.org/document/8963625)][[backup](http://lab.zhuzhuguowang.cn:36900/croxline/Paper/Scale-Iterative%20Upscaling%20Network%20for%20Image%20Deblurring.pdf)]
### One real example
![/comparisons/images_in_paper/Real_building1_comparison.png](../master/comparisons/images_in_paper/Real_building1_comparison.png)<br>
(a) Result of Nah et al. (b) Result of Tao et al. (c) Result of Zhang et al. (d) Our result.
<br>
### Results on benchmark datasets
![/comparisons/images_in_paper/benchmark_comparison.png](../master/comparisons/images_in_paper/benchmark_comparison.png)<br>
From top to bottom are blurry input, deblurring results of Nah et al., Tao et al., Zhang et al. and ours.<br>
<br>
### Results on real-world blurred images
![/comparisons/images_in_paper/real_comparison.png](../master/comparisons/images_in_paper/real_comparison.png)<br>
From top to bottom are images restored by Pan et al., Nah et al., Tao et al., Zhang et al. and ours. Due to space limits, the original blurry images are omitted here.
They can be viewed in the Lai dataset under their names, from left to right: boy_statue, pietro, street4 and text1.
<br>
## Prerequisites
Please refer to "/code/requirements.txt".
<br>
## Installation

```
git clone https://github.com/minyuanye/SIUN.git
cd SIUN/code
```

## Basic usage
You can always add '--gpu=<gpu_id>' to specify the GPU ID; the default is 0.<br>

1. For deblurring an image:<br>
**python deblur.py --apply --file-path='</testpath/test.png>'**<br>


2. For deblurring all images in a folder:<br>
**python deblur.py --apply --dir-path='</testpath/testDir>'**<br>
Add '--result-dir=</output_path>' to specify the output path. If it is not specified, the default path is './output'.<br>

3. For testing the model:<br>
**python deblur.py --test**<br>
Note that this command can only be used to test on the GOPRO dataset, and it will load all images into memory first. We recommend using '--apply'
(Item 2) as an alternative.<br>
Please set 'test_directory_path' in 'config.py' to specify the GOPRO dataset path.<br>

4. For training a new model:<br>
**python deblur.py --train**<br>
Please remove the model files in 'model' first and set 'train_directory_path' in 'config.py' to specify the GOPRO dataset path.<br>
When it finishes, run:<br>
**python deblur.py --verify**<br>


## Advanced usage
Please refer to the source code. Most configuration parameters are listed in '/code/src/config.py'.

## Citation
If you use any part of our code, or if SIUN is useful for your research, please consider citing:
```bibtex
@ARTICLE{8963625,
author={M. {Ye} and D. {Lyu} and G. {Chen}},
journal={IEEE Access},
title={Scale-Iterative Upscaling Network for Image Deblurring},
year={2020},
volume={8},
number={},
pages={18316-18325},
keywords={Blind deblurring;curriculum learning;scale-iterative;upscaling network},
doi={10.1109/ACCESS.2020.2967823},
ISSN={2169-3536},
month={},}
```


================================================
FILE: code/__init__.py
================================================


================================================
FILE: code/deblur.py
================================================
import os
import sys
import argparse

# Extend sys.path before importing from src, so the package resolves
# regardless of the working directory the script is launched from.
_PATH_ = os.path.dirname(os.path.dirname(__file__))

if _PATH_ not in sys.path:
    sys.path.append(_PATH_)

from src.config import Config
from src.lib.tf_util import set_session_config



def getArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument("--train", help="train the model", action="store_true", default=False)
    parser.add_argument("--test", help="test the model", action="store_true", default=False)
    parser.add_argument("--apply", help="use the model", action="store_true", default=False)
    parser.add_argument("--verify", help="verify the model", action="store_true", default=False)
    parser.add_argument("--gpu", help="GPU device ID list, comma-separated", default="0")
    parser.add_argument("--file-path", help="file path of the input image")
    parser.add_argument("--dir-path", help="dir path of the input images")
    parser.add_argument("--result-dir", help="deblur result dir of the input images")
    parser.add_argument("--iter", help="iter times", default=0, type=int)
    return parser.parse_args()
    
if __name__ == "__main__":
    args = getArgs()
    config = Config()
    config.resource.create_directories()
    if args.file_path:
        config.application.deblurring_file_path = args.file_path
    if args.dir_path:
        config.application.deblurring_dir_path = args.dir_path
    if args.iter:
        config.application.iter = args.iter
    if args.result_dir:
        config.application.deblurring_result_dir = args.result_dir
    set_session_config(per_process_gpu_memory_fraction=1, allow_growth=True, device_list=args.gpu)
    gpus = args.gpu.split(",")
    config.trainer.gpu_num = len(gpus)
    if args.train:
        # trainer
        from src.trainer import Trainer
        Trainer(config).start()
    elif args.test:
        # tester
        from src.tester import Tester
        Tester(config).start()
    elif args.apply:
        # application
        from src.application import Application
        Application(config).start()
    elif args.verify:
        # verification
        from src.verification import Verification
        Verification(config).start()
    else:
        # info: print the generator architecture summary
        from src.model.model import DDModel
        model = DDModel(config)
        model.generator.summary(line_length=150)
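The mode flags in deblur.py are independent store_true booleans resolved by an if/elif chain, with the model summary as the fallback. A minimal, hypothetical sketch of that dispatch pattern (names shortened; not part of the repository):

```python
import argparse

def build_parser():
    # Mirrors the store_true mode flags used by deblur.py above
    p = argparse.ArgumentParser()
    for mode in ("train", "test", "apply", "verify"):
        p.add_argument(f"--{mode}", action="store_true")
    p.add_argument("--gpu", default="0")  # comma-separated GPU IDs, e.g. "0,1"
    return p

def pick_mode(args):
    # First flag set wins, matching the if/elif order; 'info' is the fallback
    for mode in ("train", "test", "apply", "verify"):
        if getattr(args, mode):
            return mode
    return "info"

args = build_parser().parse_args(["--apply", "--gpu", "0,1"])
print(pick_mode(args), len(args.gpu.split(",")))  # → apply 2
```

Because each flag defaults to False, passing no mode flag falls through to the summary branch, which is why running `python deblur.py` with no arguments prints the generator architecture.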




================================================
FILE: code/model/generator.h5
================================================
[File too large to display: 24.5 MB]
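The generator.h5 file stores only the weights; the architecture lives in the generator.json that follows. A hedged sketch of inspecting such a Keras model-config JSON with just the standard library (the excerpt below is a hypothetical miniature, not the real file):

```python
import json

# Hypothetical miniature of a Keras "Model" config shaped like code/model/generator.json
excerpt = '''{"class_name": "Model", "config": {"name": "generator", "layers": [
  {"name": "imageSmall", "class_name": "InputLayer"},
  {"name": "conv2d_1", "class_name": "Conv2D"},
  {"name": "activation_1", "class_name": "Activation"}]}}'''

cfg = json.loads(excerpt)
# Each entry in "layers" names the layer class and its wiring ("inbound_nodes" in the real file)
layer_types = [layer["class_name"] for layer in cfg["config"]["layers"]]
print(cfg["config"]["name"], layer_types)  # → generator ['InputLayer', 'Conv2D', 'Activation']
```

With the real pair of files, one would typically rebuild the model via Keras's `model_from_json` on the JSON contents and then call `model.load_weights("generator.h5")`, assuming a Keras version compatible with the serialized config.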

================================================
FILE: code/model/generator.json
================================================
{"class_name": "Model", "config": {"name": "generator", "layers": [{"name": "imageSmall", "class_name": "InputLayer", "config": {"batch_input_shape": [null, null, null, 6], "dtype": "float32", "sparse": false, "name": "imageSmall"}, "inbound_nodes": []}, {"name": "conv2d_1", "class_name": "Conv2D", "config": {"name": "conv2d_1", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["imageSmall", 0, 0, {}]]]}, {"name": "conv2d_2", "class_name": "Conv2D", "config": {"name": "conv2d_2", "trainable": true, "filters": 32, "kernel_size": [5, 5], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["conv2d_1", 0, 0, {}]]]}, {"name": "activation_1", "class_name": "Activation", "config": {"name": "activation_1", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_2", 0, 0, {}]]]}, {"name": "conv2d_3", "class_name": "Conv2D", "config": {"name": "conv2d_3", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", 
"use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_1", 0, 0, {}]]]}, {"name": "activation_2", "class_name": "Activation", "config": {"name": "activation_2", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_3", 0, 0, {}]]]}, {"name": "conv2d_4", "class_name": "Conv2D", "config": {"name": "conv2d_4", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_2", 0, 0, {}]]]}, {"name": "add_1", "class_name": "Add", "config": {"name": "add_1", "trainable": true}, "inbound_nodes": [[["conv2d_4", 0, 0, {}], ["activation_1", 0, 0, {}]]]}, {"name": "conv2d_5", "class_name": "Conv2D", "config": {"name": "conv2d_5", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, 
"bias_constraint": null}, "inbound_nodes": [[["add_1", 0, 0, {}]]]}, {"name": "activation_3", "class_name": "Activation", "config": {"name": "activation_3", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_5", 0, 0, {}]]]}, {"name": "conv2d_6", "class_name": "Conv2D", "config": {"name": "conv2d_6", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_3", 0, 0, {}]]]}, {"name": "add_2", "class_name": "Add", "config": {"name": "add_2", "trainable": true}, "inbound_nodes": [[["conv2d_6", 0, 0, {}], ["add_1", 0, 0, {}]]]}, {"name": "conv2d_7", "class_name": "Conv2D", "config": {"name": "conv2d_7", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_2", 0, 0, {}]]]}, {"name": "activation_4", "class_name": "Activation", "config": {"name": "activation_4", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_7", 0, 0, {}]]]}, {"name": "conv2d_8", "class_name": "Conv2D", "config": {"name": "conv2d_8", "trainable": true, "filters": 32, 
"kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_4", 0, 0, {}]]]}, {"name": "add_3", "class_name": "Add", "config": {"name": "add_3", "trainable": true}, "inbound_nodes": [[["conv2d_8", 0, 0, {}], ["add_2", 0, 0, {}]]]}, {"name": "conv2d_9", "class_name": "Conv2D", "config": {"name": "conv2d_9", "trainable": true, "filters": 64, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_3", 0, 0, {}]]]}, {"name": "activation_5", "class_name": "Activation", "config": {"name": "activation_5", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_9", 0, 0, {}]]]}, {"name": "conv2d_10", "class_name": "Conv2D", "config": {"name": "conv2d_10", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": 
{}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_5", 0, 0, {}]]]}, {"name": "activation_6", "class_name": "Activation", "config": {"name": "activation_6", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_10", 0, 0, {}]]]}, {"name": "conv2d_11", "class_name": "Conv2D", "config": {"name": "conv2d_11", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_6", 0, 0, {}]]]}, {"name": "add_4", "class_name": "Add", "config": {"name": "add_4", "trainable": true}, "inbound_nodes": [[["conv2d_11", 0, 0, {}], ["activation_5", 0, 0, {}]]]}, {"name": "conv2d_12", "class_name": "Conv2D", "config": {"name": "conv2d_12", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_4", 0, 0, {}]]]}, {"name": "activation_7", "class_name": "Activation", "config": {"name": "activation_7", "trainable": true, "activation": "relu"}, "inbound_nodes": 
[[["conv2d_12", 0, 0, {}]]]}, {"name": "conv2d_13", "class_name": "Conv2D", "config": {"name": "conv2d_13", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_7", 0, 0, {}]]]}, {"name": "add_5", "class_name": "Add", "config": {"name": "add_5", "trainable": true}, "inbound_nodes": [[["conv2d_13", 0, 0, {}], ["add_4", 0, 0, {}]]]}, {"name": "conv2d_14", "class_name": "Conv2D", "config": {"name": "conv2d_14", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_5", 0, 0, {}]]]}, {"name": "activation_8", "class_name": "Activation", "config": {"name": "activation_8", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_14", 0, 0, {}]]]}, {"name": "conv2d_15", "class_name": "Conv2D", "config": {"name": "conv2d_15", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": 
"VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_8", 0, 0, {}]]]}, {"name": "add_6", "class_name": "Add", "config": {"name": "add_6", "trainable": true}, "inbound_nodes": [[["conv2d_15", 0, 0, {}], ["add_5", 0, 0, {}]]]}, {"name": "conv2d_16", "class_name": "Conv2D", "config": {"name": "conv2d_16", "trainable": true, "filters": 128, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_6", 0, 0, {}]]]}, {"name": "activation_9", "class_name": "Activation", "config": {"name": "activation_9", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_16", 0, 0, {}]]]}, {"name": "conv2d_17", "class_name": "Conv2D", "config": {"name": "conv2d_17", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_9", 0, 0, 
{}]]]}, {"name": "activation_10", "class_name": "Activation", "config": {"name": "activation_10", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_17", 0, 0, {}]]]}, {"name": "conv2d_18", "class_name": "Conv2D", "config": {"name": "conv2d_18", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_10", 0, 0, {}]]]}, {"name": "add_7", "class_name": "Add", "config": {"name": "add_7", "trainable": true}, "inbound_nodes": [[["conv2d_18", 0, 0, {}], ["activation_9", 0, 0, {}]]]}, {"name": "conv2d_19", "class_name": "Conv2D", "config": {"name": "conv2d_19", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_7", 0, 0, {}]]]}, {"name": "activation_11", "class_name": "Activation", "config": {"name": "activation_11", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_19", 0, 0, {}]]]}, {"name": "conv2d_20", "class_name": "Conv2D", "config": {"name": "conv2d_20", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 
1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_11", 0, 0, {}]]]}, {"name": "add_8", "class_name": "Add", "config": {"name": "add_8", "trainable": true}, "inbound_nodes": [[["conv2d_20", 0, 0, {}], ["add_7", 0, 0, {}]]]}, {"name": "conv2d_21", "class_name": "Conv2D", "config": {"name": "conv2d_21", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_8", 0, 0, {}]]]}, {"name": "activation_12", "class_name": "Activation", "config": {"name": "activation_12", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_21", 0, 0, {}]]]}, {"name": "conv2d_22", "class_name": "Conv2D", "config": {"name": "conv2d_22", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": 
null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_12", 0, 0, {}]]]}, {"name": "add_9", "class_name": "Add", "config": {"name": "add_9", "trainable": true}, "inbound_nodes": [[["conv2d_22", 0, 0, {}], ["add_8", 0, 0, {}]]]}, {"name": "conv2d_23", "class_name": "Conv2D", "config": {"name": "conv2d_23", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_9", 0, 0, {}]]]}, {"name": "activation_13", "class_name": "Activation", "config": {"name": "activation_13", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_23", 0, 0, {}]]]}, {"name": "conv2d_24", "class_name": "Conv2D", "config": {"name": "conv2d_24", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_13", 0, 0, {}]]]}, {"name": "add_10", "class_name": "Add", "config": {"name": "add_10", "trainable": true}, "inbound_nodes": [[["conv2d_24", 0, 0, {}], ["add_9", 0, 0, {}]]]}, {"name": "conv2d_25", 
"class_name": "Conv2D", "config": {"name": "conv2d_25", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_10", 0, 0, {}]]]}, {"name": "activation_14", "class_name": "Activation", "config": {"name": "activation_14", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_25", 0, 0, {}]]]}, {"name": "conv2d_26", "class_name": "Conv2D", "config": {"name": "conv2d_26", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_14", 0, 0, {}]]]}, {"name": "add_11", "class_name": "Add", "config": {"name": "add_11", "trainable": true}, "inbound_nodes": [[["conv2d_26", 0, 0, {}], ["add_10", 0, 0, {}]]]}, {"name": "conv2d_27", "class_name": "Conv2D", "config": {"name": "conv2d_27", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": 
"fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_11", 0, 0, {}]]]}, {"name": "activation_15", "class_name": "Activation", "config": {"name": "activation_15", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_27", 0, 0, {}]]]}, {"name": "conv2d_28", "class_name": "Conv2D", "config": {"name": "conv2d_28", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_15", 0, 0, {}]]]}, {"name": "add_12", "class_name": "Add", "config": {"name": "add_12", "trainable": true}, "inbound_nodes": [[["conv2d_28", 0, 0, {}], ["add_11", 0, 0, {}]]]}, {"name": "conv2d_transpose_1", "class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_1", "trainable": true, "filters": 64, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}, "inbound_nodes": [[["add_12", 0, 0, {}]]]}, {"name": 
"activation_16", "class_name": "Activation", "config": {"name": "activation_16", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_transpose_1", 0, 0, {}]]]}, {"name": "add_13", "class_name": "Add", "config": {"name": "add_13", "trainable": true}, "inbound_nodes": [[["activation_16", 0, 0, {}], ["add_6", 0, 0, {}]]]}, {"name": "conv2d_29", "class_name": "Conv2D", "config": {"name": "conv2d_29", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_13", 0, 0, {}]]]}, {"name": "activation_17", "class_name": "Activation", "config": {"name": "activation_17", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_29", 0, 0, {}]]]}, {"name": "conv2d_30", "class_name": "Conv2D", "config": {"name": "conv2d_30", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_17", 0, 0, {}]]]}, {"name": "add_14", "class_name": "Add", "config": {"name": "add_14", "trainable": true}, "inbound_nodes": [[["conv2d_30", 0, 0, {}], ["add_13", 0, 0, {}]]]}, 
{"name": "conv2d_31", "class_name": "Conv2D", "config": {"name": "conv2d_31", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_14", 0, 0, {}]]]}, {"name": "activation_18", "class_name": "Activation", "config": {"name": "activation_18", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_31", 0, 0, {}]]]}, {"name": "conv2d_32", "class_name": "Conv2D", "config": {"name": "conv2d_32", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_18", 0, 0, {}]]]}, {"name": "add_15", "class_name": "Add", "config": {"name": "add_15", "trainable": true}, "inbound_nodes": [[["conv2d_32", 0, 0, {}], ["add_14", 0, 0, {}]]]}, {"name": "conv2d_33", "class_name": "Conv2D", "config": {"name": "conv2d_33", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": 
{"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_15", 0, 0, {}]]]}, {"name": "activation_19", "class_name": "Activation", "config": {"name": "activation_19", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_33", 0, 0, {}]]]}, {"name": "conv2d_34", "class_name": "Conv2D", "config": {"name": "conv2d_34", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_19", 0, 0, {}]]]}, {"name": "add_16", "class_name": "Add", "config": {"name": "add_16", "trainable": true}, "inbound_nodes": [[["conv2d_34", 0, 0, {}], ["add_15", 0, 0, {}]]]}, {"name": "conv2d_transpose_2", "class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_2", "trainable": true, "filters": 32, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}, "inbound_nodes": [[["add_16", 0, 0, {}]]]}, 
{"name": "activation_20", "class_name": "Activation", "config": {"name": "activation_20", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_transpose_2", 0, 0, {}]]]}, {"name": "add_17", "class_name": "Add", "config": {"name": "add_17", "trainable": true}, "inbound_nodes": [[["activation_20", 0, 0, {}], ["add_3", 0, 0, {}]]]}, {"name": "conv2d_35", "class_name": "Conv2D", "config": {"name": "conv2d_35", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_17", 0, 0, {}]]]}, {"name": "activation_21", "class_name": "Activation", "config": {"name": "activation_21", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_35", 0, 0, {}]]]}, {"name": "concatenate_1", "class_name": "Concatenate", "config": {"name": "concatenate_1", "trainable": true, "axis": 3}, "inbound_nodes": [[["add_17", 0, 0, {}], ["activation_21", 0, 0, {}]]]}, {"name": "conv2d_36", "class_name": "Conv2D", "config": {"name": "conv2d_36", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, 
"inbound_nodes": [[["concatenate_1", 0, 0, {}]]]}, {"name": "activation_22", "class_name": "Activation", "config": {"name": "activation_22", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_36", 0, 0, {}]]]}, {"name": "concatenate_2", "class_name": "Concatenate", "config": {"name": "concatenate_2", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_1", 0, 0, {}], ["activation_22", 0, 0, {}]]]}, {"name": "conv2d_37", "class_name": "Conv2D", "config": {"name": "conv2d_37", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_2", 0, 0, {}]]]}, {"name": "activation_23", "class_name": "Activation", "config": {"name": "activation_23", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_37", 0, 0, {}]]]}, {"name": "concatenate_3", "class_name": "Concatenate", "config": {"name": "concatenate_3", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_2", 0, 0, {}], ["activation_23", 0, 0, {}]]]}, {"name": "conv2d_38", "class_name": "Conv2D", "config": {"name": "conv2d_38", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": 
null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_3", 0, 0, {}]]]}, {"name": "activation_24", "class_name": "Activation", "config": {"name": "activation_24", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_38", 0, 0, {}]]]}, {"name": "concatenate_4", "class_name": "Concatenate", "config": {"name": "concatenate_4", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_3", 0, 0, {}], ["activation_24", 0, 0, {}]]]}, {"name": "conv2d_39", "class_name": "Conv2D", "config": {"name": "conv2d_39", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_4", 0, 0, {}]]]}, {"name": "activation_25", "class_name": "Activation", "config": {"name": "activation_25", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_39", 0, 0, {}]]]}, {"name": "concatenate_5", "class_name": "Concatenate", "config": {"name": "concatenate_5", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_4", 0, 0, {}], ["activation_25", 0, 0, {}]]]}, {"name": "conv2d_40", "class_name": "Conv2D", "config": {"name": "conv2d_40", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, 
"bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_5", 0, 0, {}]]]}, {"name": "activation_26", "class_name": "Activation", "config": {"name": "activation_26", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_40", 0, 0, {}]]]}, {"name": "concatenate_6", "class_name": "Concatenate", "config": {"name": "concatenate_6", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_5", 0, 0, {}], ["activation_26", 0, 0, {}]]]}, {"name": "conv2d_41", "class_name": "Conv2D", "config": {"name": "conv2d_41", "trainable": true, "filters": 32, "kernel_size": [1, 1], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_6", 0, 0, {}]]]}, {"name": "add_18", "class_name": "Add", "config": {"name": "add_18", "trainable": true}, "inbound_nodes": [[["conv2d_41", 0, 0, {}], ["add_17", 0, 0, {}]]]}, {"name": "conv2d_42", "class_name": "Conv2D", "config": {"name": "conv2d_42", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, 
"activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_18", 0, 0, {}]]]}, {"name": "activation_27", "class_name": "Activation", "config": {"name": "activation_27", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_42", 0, 0, {}]]]}, {"name": "concatenate_7", "class_name": "Concatenate", "config": {"name": "concatenate_7", "trainable": true, "axis": 3}, "inbound_nodes": [[["add_18", 0, 0, {}], ["activation_27", 0, 0, {}]]]}, {"name": "conv2d_43", "class_name": "Conv2D", "config": {"name": "conv2d_43", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_7", 0, 0, {}]]]}, {"name": "activation_28", "class_name": "Activation", "config": {"name": "activation_28", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_43", 0, 0, {}]]]}, {"name": "concatenate_8", "class_name": "Concatenate", "config": {"name": "concatenate_8", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_7", 0, 0, {}], ["activation_28", 0, 0, {}]]]}, {"name": "conv2d_44", "class_name": "Conv2D", "config": {"name": "conv2d_44", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": 
"Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_8", 0, 0, {}]]]}, {"name": "activation_29", "class_name": "Activation", "config": {"name": "activation_29", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_44", 0, 0, {}]]]}, {"name": "concatenate_9", "class_name": "Concatenate", "config": {"name": "concatenate_9", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_8", 0, 0, {}], ["activation_29", 0, 0, {}]]]}, {"name": "conv2d_45", "class_name": "Conv2D", "config": {"name": "conv2d_45", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_9", 0, 0, {}]]]}, {"name": "activation_30", "class_name": "Activation", "config": {"name": "activation_30", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_45", 0, 0, {}]]]}, {"name": "concatenate_10", "class_name": "Concatenate", "config": {"name": "concatenate_10", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_9", 0, 0, {}], ["activation_30", 0, 0, {}]]]}, {"name": "conv2d_46", "class_name": "Conv2D", "config": {"name": "conv2d_46", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, 
"mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_10", 0, 0, {}]]]}, {"name": "activation_31", "class_name": "Activation", "config": {"name": "activation_31", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_46", 0, 0, {}]]]}, {"name": "concatenate_11", "class_name": "Concatenate", "config": {"name": "concatenate_11", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_10", 0, 0, {}], ["activation_31", 0, 0, {}]]]}, {"name": "conv2d_47", "class_name": "Conv2D", "config": {"name": "conv2d_47", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_11", 0, 0, {}]]]}, {"name": "activation_32", "class_name": "Activation", "config": {"name": "activation_32", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_47", 0, 0, {}]]]}, {"name": "concatenate_12", "class_name": "Concatenate", "config": {"name": "concatenate_12", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_11", 0, 0, {}], ["activation_32", 0, 0, {}]]]}, {"name": "conv2d_48", "class_name": "Conv2D", "config": {"name": "conv2d_48", "trainable": true, "filters": 32, "kernel_size": [1, 1], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", 
"use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_12", 0, 0, {}]]]}, {"name": "add_19", "class_name": "Add", "config": {"name": "add_19", "trainable": true}, "inbound_nodes": [[["conv2d_48", 0, 0, {}], ["add_18", 0, 0, {}]]]}, {"name": "conv2d_49", "class_name": "Conv2D", "config": {"name": "conv2d_49", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_19", 0, 0, {}]]]}, {"name": "activation_33", "class_name": "Activation", "config": {"name": "activation_33", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_49", 0, 0, {}]]]}, {"name": "concatenate_13", "class_name": "Concatenate", "config": {"name": "concatenate_13", "trainable": true, "axis": 3}, "inbound_nodes": [[["add_19", 0, 0, {}], ["activation_33", 0, 0, {}]]]}, {"name": "conv2d_50", "class_name": "Conv2D", "config": {"name": "conv2d_50", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", 
"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_13", 0, 0, {}]]]}, {"name": "activation_34", "class_name": "Activation", "config": {"name": "activation_34", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_50", 0, 0, {}]]]}, {"name": "concatenate_14", "class_name": "Concatenate", "config": {"name": "concatenate_14", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_13", 0, 0, {}], ["activation_34", 0, 0, {}]]]}, {"name": "conv2d_51", "class_name": "Conv2D", "config": {"name": "conv2d_51", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_14", 0, 0, {}]]]}, {"name": "activation_35", "class_name": "Activation", "config": {"name": "activation_35", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_51", 0, 0, {}]]]}, {"name": "concatenate_15", "class_name": "Concatenate", "config": {"name": "concatenate_15", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_14", 0, 0, {}], ["activation_35", 0, 0, {}]]]}, {"name": "conv2d_52", "class_name": "Conv2D", "config": {"name": "conv2d_52", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": 
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_15", 0, 0, {}]]]}, {"name": "activation_36", "class_name": "Activation", "config": {"name": "activation_36", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_52", 0, 0, {}]]]}, {"name": "concatenate_16", "class_name": "Concatenate", "config": {"name": "concatenate_16", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_15", 0, 0, {}], ["activation_36", 0, 0, {}]]]}, {"name": "conv2d_53", "class_name": "Conv2D", "config": {"name": "conv2d_53", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_16", 0, 0, {}]]]}, {"name": "activation_37", "class_name": "Activation", "config": {"name": "activation_37", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_53", 0, 0, {}]]]}, {"name": "concatenate_17", "class_name": "Concatenate", "config": {"name": "concatenate_17", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_16", 0, 0, {}], ["activation_37", 0, 0, {}]]]}, {"name": "conv2d_54", "class_name": "Conv2D", "config": {"name": "conv2d_54", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": 
"channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_17", 0, 0, {}]]]}, {"name": "activation_38", "class_name": "Activation", "config": {"name": "activation_38", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_54", 0, 0, {}]]]}, {"name": "concatenate_18", "class_name": "Concatenate", "config": {"name": "concatenate_18", "trainable": true, "axis": 3}, "inbound_nodes": [[["concatenate_17", 0, 0, {}], ["activation_38", 0, 0, {}]]]}, {"name": "conv2d_55", "class_name": "Conv2D", "config": {"name": "conv2d_55", "trainable": true, "filters": 32, "kernel_size": [1, 1], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_18", 0, 0, {}]]]}, {"name": "add_20", "class_name": "Add", "config": {"name": "add_20", "trainable": true}, "inbound_nodes": [[["conv2d_55", 0, 0, {}], ["add_19", 0, 0, {}]]]}, {"name": "concatenate_19", "class_name": "Concatenate", "config": {"name": "concatenate_19", "trainable": true, "axis": 3}, "inbound_nodes": [[["add_18", 0, 0, {}], ["add_19", 0, 0, {}], ["add_20", 0, 0, {}]]]}, {"name": "conv2d_56", "class_name": "Conv2D", "config": {"name": "conv2d_56", "trainable": true, 
"filters": 32, "kernel_size": [1, 1], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_19", 0, 0, {}]]]}, {"name": "conv2d_57", "class_name": "Conv2D", "config": {"name": "conv2d_57", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["conv2d_56", 0, 0, {}]]]}, {"name": "add_21", "class_name": "Add", "config": {"name": "add_21", "trainable": true}, "inbound_nodes": [[["conv2d_57", 0, 0, {}], ["conv2d_1", 0, 0, {}]]]}, {"name": "conv2d_58", "class_name": "Conv2D", "config": {"name": "conv2d_58", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, 
"inbound_nodes": [[["add_21", 0, 0, {}]]]}, {"name": "lambda_1", "class_name": "Lambda", "config": {"name": "lambda_1", "trainable": true, "function": ["4wEAAAAAAAAAAQAAAAMAAABTAAAAcwwAAAB0AGoBfABkAYMCUwApAk7pAgAAACkC2gJ0ZtoOZGVw\ndGhfdG9fc3BhY2UpAdoBeKkAcgUAAAD6MC9ob21lL215eWUvRGVlcExlYXJuaW5nRGVibHVyL3Ny\nYy9tb2RlbC9tb2RlbC5wedoIPGxhbWJkYT5cAAAA8wAAAAA=\n", null, null], "function_type": "lambda", "output_shape": null, "output_shape_type": "raw", "arguments": {}}, "inbound_nodes": [[["conv2d_58", 0, 0, {}]]]}, {"name": "conv2d_59", "class_name": "Conv2D", "config": {"name": "conv2d_59", "trainable": true, "filters": 3, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["lambda_1", 0, 0, {}]]]}, {"name": "activation_39", "class_name": "Activation", "config": {"name": "activation_39", "trainable": true, "activation": "tanh"}, "inbound_nodes": [[["conv2d_59", 0, 0, {}]]]}, {"name": "imageUp", "class_name": "InputLayer", "config": {"batch_input_shape": [null, null, null, 3], "dtype": "float32", "sparse": false, "name": "imageUp"}, "inbound_nodes": []}, {"name": "lambda_2", "class_name": "Lambda", "config": {"name": "lambda_2", "trainable": true, "function": ["4wEAAAAAAAAAAQAAAAIAAABTAAAAcwwAAAB8AGQBGwBkAhcAUwApA07pAgAAAGcAAAAAAADgP6kA\nKQHaAXhyAgAAAHICAAAA+jAvaG9tZS9teXllL0RlZXBMZWFybmluZ0RlYmx1ci9zcmMvbW9kZWwv\nbW9kZWwucHnaCDxsYW1iZGE+XwAAAPMAAAAA\n", null, null], "function_type": "lambda", "output_shape": null, "output_shape_type": "raw", "arguments": {}}, "inbound_nodes": [[["activation_39", 0, 0, {}]]]}, 
{"name": "concatenate_20", "class_name": "Concatenate", "config": {"name": "concatenate_20", "trainable": true, "axis": 3}, "inbound_nodes": [[["imageUp", 0, 0, {}], ["lambda_2", 0, 0, {}]]]}, {"name": "conv2d_60", "class_name": "Conv2D", "config": {"name": "conv2d_60", "trainable": true, "filters": 32, "kernel_size": [5, 5], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_20", 0, 0, {}]]]}, {"name": "activation_40", "class_name": "Activation", "config": {"name": "activation_40", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_60", 0, 0, {}]]]}, {"name": "conv2d_61", "class_name": "Conv2D", "config": {"name": "conv2d_61", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_40", 0, 0, {}]]]}, {"name": "activation_41", "class_name": "Activation", "config": {"name": "activation_41", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_61", 0, 0, {}]]]}, {"name": "conv2d_62", "class_name": "Conv2D", "config": {"name": "conv2d_62", "trainable": true, "filters": 32, 
"kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_41", 0, 0, {}]]]}, {"name": "add_22", "class_name": "Add", "config": {"name": "add_22", "trainable": true}, "inbound_nodes": [[["conv2d_62", 0, 0, {}], ["activation_40", 0, 0, {}]]]}, {"name": "conv2d_63", "class_name": "Conv2D", "config": {"name": "conv2d_63", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_22", 0, 0, {}]]]}, {"name": "activation_42", "class_name": "Activation", "config": {"name": "activation_42", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_63", 0, 0, {}]]]}, {"name": "conv2d_64", "class_name": "Conv2D", "config": {"name": "conv2d_64", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": 
"Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_42", 0, 0, {}]]]}, {"name": "add_23", "class_name": "Add", "config": {"name": "add_23", "trainable": true}, "inbound_nodes": [[["conv2d_64", 0, 0, {}], ["add_22", 0, 0, {}]]]}, {"name": "conv2d_65", "class_name": "Conv2D", "config": {"name": "conv2d_65", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_23", 0, 0, {}]]]}, {"name": "activation_43", "class_name": "Activation", "config": {"name": "activation_43", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_65", 0, 0, {}]]]}, {"name": "conv2d_66", "class_name": "Conv2D", "config": {"name": "conv2d_66", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_43", 0, 0, {}]]]}, {"name": "add_24", "class_name": "Add", "config": {"name": "add_24", "trainable": true}, "inbound_nodes": [[["conv2d_66", 0, 0, {}], 
["add_23", 0, 0, {}]]]}, {"name": "conv2d_67", "class_name": "Conv2D", "config": {"name": "conv2d_67", "trainable": true, "filters": 64, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_24", 0, 0, {}]]]}, {"name": "activation_44", "class_name": "Activation", "config": {"name": "activation_44", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_67", 0, 0, {}]]]}, {"name": "conv2d_68", "class_name": "Conv2D", "config": {"name": "conv2d_68", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_44", 0, 0, {}]]]}, {"name": "activation_45", "class_name": "Activation", "config": {"name": "activation_45", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_68", 0, 0, {}]]]}, {"name": "conv2d_69", "class_name": "Conv2D", "config": {"name": "conv2d_69", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": 
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_45", 0, 0, {}]]]}, {"name": "add_25", "class_name": "Add", "config": {"name": "add_25", "trainable": true}, "inbound_nodes": [[["conv2d_69", 0, 0, {}], ["activation_44", 0, 0, {}]]]}, {"name": "conv2d_70", "class_name": "Conv2D", "config": {"name": "conv2d_70", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_25", 0, 0, {}]]]}, {"name": "activation_46", "class_name": "Activation", "config": {"name": "activation_46", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_70", 0, 0, {}]]]}, {"name": "conv2d_71", "class_name": "Conv2D", "config": {"name": "conv2d_71", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": 
[[["activation_46", 0, 0, {}]]]}, {"name": "add_26", "class_name": "Add", "config": {"name": "add_26", "trainable": true}, "inbound_nodes": [[["conv2d_71", 0, 0, {}], ["add_25", 0, 0, {}]]]}, {"name": "conv2d_72", "class_name": "Conv2D", "config": {"name": "conv2d_72", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_26", 0, 0, {}]]]}, {"name": "activation_47", "class_name": "Activation", "config": {"name": "activation_47", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_72", 0, 0, {}]]]}, {"name": "conv2d_73", "class_name": "Conv2D", "config": {"name": "conv2d_73", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_47", 0, 0, {}]]]}, {"name": "add_27", "class_name": "Add", "config": {"name": "add_27", "trainable": true}, "inbound_nodes": [[["conv2d_73", 0, 0, {}], ["add_26", 0, 0, {}]]]}, {"name": "conv2d_74", "class_name": "Conv2D", "config": {"name": "conv2d_74", "trainable": true, "filters": 128, "kernel_size": [5, 5], "strides": [2, 
2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_27", 0, 0, {}]]]}, {"name": "activation_48", "class_name": "Activation", "config": {"name": "activation_48", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_74", 0, 0, {}]]]}, {"name": "conv2d_75", "class_name": "Conv2D", "config": {"name": "conv2d_75", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_48", 0, 0, {}]]]}, {"name": "activation_49", "class_name": "Activation", "config": {"name": "activation_49", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_75", 0, 0, {}]]]}, {"name": "conv2d_76", "class_name": "Conv2D", "config": {"name": "conv2d_76", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": 
{}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_49", 0, 0, {}]]]}, {"name": "add_28", "class_name": "Add", "config": {"name": "add_28", "trainable": true}, "inbound_nodes": [[["conv2d_76", 0, 0, {}], ["activation_48", 0, 0, {}]]]}, {"name": "conv2d_77", "class_name": "Conv2D", "config": {"name": "conv2d_77", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_28", 0, 0, {}]]]}, {"name": "activation_50", "class_name": "Activation", "config": {"name": "activation_50", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_77", 0, 0, {}]]]}, {"name": "conv2d_78", "class_name": "Conv2D", "config": {"name": "conv2d_78", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_50", 0, 0, {}]]]}, {"name": "add_29", "class_name": "Add", "config": {"name": "add_29", "trainable": true}, "inbound_nodes": [[["conv2d_78", 0, 0, {}], ["add_28", 
0, 0, {}]]]}, {"name": "conv2d_79", "class_name": "Conv2D", "config": {"name": "conv2d_79", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_29", 0, 0, {}]]]}, {"name": "activation_51", "class_name": "Activation", "config": {"name": "activation_51", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_79", 0, 0, {}]]]}, {"name": "conv2d_80", "class_name": "Conv2D", "config": {"name": "conv2d_80", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_51", 0, 0, {}]]]}, {"name": "add_30", "class_name": "Add", "config": {"name": "add_30", "trainable": true}, "inbound_nodes": [[["conv2d_80", 0, 0, {}], ["add_29", 0, 0, {}]]]}, {"name": "conv2d_81", "class_name": "Conv2D", "config": {"name": "conv2d_81", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", 
"config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_30", 0, 0, {}]]]}, {"name": "activation_52", "class_name": "Activation", "config": {"name": "activation_52", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_81", 0, 0, {}]]]}, {"name": "conv2d_82", "class_name": "Conv2D", "config": {"name": "conv2d_82", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_52", 0, 0, {}]]]}, {"name": "add_31", "class_name": "Add", "config": {"name": "add_31", "trainable": true}, "inbound_nodes": [[["conv2d_82", 0, 0, {}], ["add_30", 0, 0, {}]]]}, {"name": "conv2d_83", "class_name": "Conv2D", "config": {"name": "conv2d_83", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_31", 0, 0, {}]]]}, {"name": 
"activation_53", "class_name": "Activation", "config": {"name": "activation_53", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_83", 0, 0, {}]]]}, {"name": "conv2d_84", "class_name": "Conv2D", "config": {"name": "conv2d_84", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_53", 0, 0, {}]]]}, {"name": "add_32", "class_name": "Add", "config": {"name": "add_32", "trainable": true}, "inbound_nodes": [[["conv2d_84", 0, 0, {}], ["add_31", 0, 0, {}]]]}, {"name": "conv2d_85", "class_name": "Conv2D", "config": {"name": "conv2d_85", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_32", 0, 0, {}]]]}, {"name": "activation_54", "class_name": "Activation", "config": {"name": "activation_54", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_85", 0, 0, {}]]]}, {"name": "conv2d_86", "class_name": "Conv2D", "config": {"name": "conv2d_86", "trainable": true, "filters": 128, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", 
"data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_54", 0, 0, {}]]]}, {"name": "add_33", "class_name": "Add", "config": {"name": "add_33", "trainable": true}, "inbound_nodes": [[["conv2d_86", 0, 0, {}], ["add_32", 0, 0, {}]]]}, {"name": "conv2d_transpose_3", "class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_3", "trainable": true, "filters": 64, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}, "inbound_nodes": [[["add_33", 0, 0, {}]]]}, {"name": "activation_55", "class_name": "Activation", "config": {"name": "activation_55", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_transpose_3", 0, 0, {}]]]}, {"name": "add_34", "class_name": "Add", "config": {"name": "add_34", "trainable": true}, "inbound_nodes": [[["activation_55", 0, 0, {}], ["add_27", 0, 0, {}]]]}, {"name": "conv2d_87", "class_name": "Conv2D", "config": {"name": "conv2d_87", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": 
"VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_34", 0, 0, {}]]]}, {"name": "activation_56", "class_name": "Activation", "config": {"name": "activation_56", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_87", 0, 0, {}]]]}, {"name": "conv2d_88", "class_name": "Conv2D", "config": {"name": "conv2d_88", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_56", 0, 0, {}]]]}, {"name": "add_35", "class_name": "Add", "config": {"name": "add_35", "trainable": true}, "inbound_nodes": [[["conv2d_88", 0, 0, {}], ["add_34", 0, 0, {}]]]}, {"name": "conv2d_89", "class_name": "Conv2D", "config": {"name": "conv2d_89", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_35", 0, 0, {}]]]}, 
{"name": "activation_57", "class_name": "Activation", "config": {"name": "activation_57", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_89", 0, 0, {}]]]}, {"name": "conv2d_90", "class_name": "Conv2D", "config": {"name": "conv2d_90", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_57", 0, 0, {}]]]}, {"name": "add_36", "class_name": "Add", "config": {"name": "add_36", "trainable": true}, "inbound_nodes": [[["conv2d_90", 0, 0, {}], ["add_35", 0, 0, {}]]]}, {"name": "conv2d_91", "class_name": "Conv2D", "config": {"name": "conv2d_91", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_36", 0, 0, {}]]]}, {"name": "activation_58", "class_name": "Activation", "config": {"name": "activation_58", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_91", 0, 0, {}]]]}, {"name": "conv2d_92", "class_name": "Conv2D", "config": {"name": "conv2d_92", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": 
"same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_58", 0, 0, {}]]]}, {"name": "add_37", "class_name": "Add", "config": {"name": "add_37", "trainable": true}, "inbound_nodes": [[["conv2d_92", 0, 0, {}], ["add_36", 0, 0, {}]]]}, {"name": "conv2d_transpose_4", "class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_4", "trainable": true, "filters": 32, "kernel_size": [5, 5], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}, "inbound_nodes": [[["add_37", 0, 0, {}]]]}, {"name": "activation_59", "class_name": "Activation", "config": {"name": "activation_59", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_transpose_4", 0, 0, {}]]]}, {"name": "add_38", "class_name": "Add", "config": {"name": "add_38", "trainable": true}, "inbound_nodes": [[["activation_59", 0, 0, {}], ["add_24", 0, 0, {}]]]}, {"name": "conv2d_93", "class_name": "Conv2D", "config": {"name": "conv2d_93", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": 
"VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_38", 0, 0, {}]]]}, {"name": "activation_60", "class_name": "Activation", "config": {"name": "activation_60", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_93", 0, 0, {}]]]}, {"name": "conv2d_94", "class_name": "Conv2D", "config": {"name": "conv2d_94", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_60", 0, 0, {}]]]}, {"name": "add_39", "class_name": "Add", "config": {"name": "add_39", "trainable": true}, "inbound_nodes": [[["conv2d_94", 0, 0, {}], ["add_38", 0, 0, {}]]]}, {"name": "conv2d_95", "class_name": "Conv2D", "config": {"name": "conv2d_95", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_39", 0, 0, {}]]]}, 
{"name": "activation_61", "class_name": "Activation", "config": {"name": "activation_61", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_95", 0, 0, {}]]]}, {"name": "conv2d_96", "class_name": "Conv2D", "config": {"name": "conv2d_96", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_61", 0, 0, {}]]]}, {"name": "add_40", "class_name": "Add", "config": {"name": "add_40", "trainable": true}, "inbound_nodes": [[["conv2d_96", 0, 0, {}], ["add_39", 0, 0, {}]]]}, {"name": "conv2d_97", "class_name": "Conv2D", "config": {"name": "conv2d_97", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_40", 0, 0, {}]]]}, {"name": "activation_62", "class_name": "Activation", "config": {"name": "activation_62", "trainable": true, "activation": "relu"}, "inbound_nodes": [[["conv2d_97", 0, 0, {}]]]}, {"name": "conv2d_98", "class_name": "Conv2D", "config": {"name": "conv2d_98", "trainable": true, "filters": 32, "kernel_size": [3, 3], "strides": [1, 1], "padding": 
"same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["activation_62", 0, 0, {}]]]}, {"name": "add_41", "class_name": "Add", "config": {"name": "add_41", "trainable": true}, "inbound_nodes": [[["conv2d_98", 0, 0, {}], ["add_40", 0, 0, {}]]]}, {"name": "conv2d_99", "class_name": "Conv2D", "config": {"name": "conv2d_99", "trainable": true, "filters": 3, "kernel_size": [5, 5], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["add_41", 0, 0, {}]]]}, {"name": "activation_63", "class_name": "Activation", "config": {"name": "activation_63", "trainable": true, "activation": "tanh"}, "inbound_nodes": [[["conv2d_99", 0, 0, {}]]]}, {"name": "lambda_3", "class_name": "Lambda", "config": {"name": "lambda_3", "trainable": true, "function": ["4wEAAAAAAAAAAQAAAAIAAABTAAAAcwwAAAB8AGQBGwBkAhcAUwApA07pAgAAAGcAAAAAAADgP6kA\nKQHaAXhyAgAAAHICAAAA+jAvaG9tZS9teXllL0RlZXBMZWFybmluZ0RlYmx1ci9zcmMvbW9kZWwv\nbW9kZWwucHnaCDxsYW1iZGE+LwAAAPMAAAAA\n", null, null], "function_type": "lambda", "output_shape": null, "output_shape_type": "raw", "arguments": {}}, "inbound_nodes": [[["activation_63", 0, 0, {}]]]}], "input_layers": [["imageSmall", 0, 0], 
["imageUp", 0, 0]], "output_layers": [["lambda_3", 0, 0]]}, "keras_version": "2.2.2", "backend": "tensorflow"}

================================================
FILE: code/requirements.txt
================================================
h5py==2.7.1
tensorflow-gpu==1.4.0
Keras==2.2.2
scikit-image==0.14.3

================================================
FILE: code/src/__init__.py
================================================


================================================
FILE: code/src/application.py
================================================
import os
from src.model.model import DDModel
from src.lib.data_helper import DataHelper
from skimage import io,transform,feature,color,img_as_float
import numpy as np
import math
import time

class Application():
#note that the input image must be color; a grayscale image should be expanded to 3 channels,
#and the image height and width must be even

    def __init__(self,config):
        self.config = config
        self.model = DDModel(config)
        if(config.application.deblurring_result_dir is None):
            config.application.deblurring_result_dir = config.resource.output_dir
        if not os.path.exists(config.application.deblurring_result_dir):
            os.makedirs(config.application.deblurring_result_dir)
        self.__fileBlurList=[]

    def start(self):
        self.application()

    def __tuneSize(self,shape):
        pad = []
        for i in range(2):
            size = shape[i]
            if(size % 256 == 0):
                pad.append(0)
            else:
                n = size // 256 + 1
                pad.append((n*256 - size) // 2)
        return pad

    def __getImage(self,fileFullPath):#self.config.application.deblurring_file_path
        imageBlur = img_as_float(io.imread(fileFullPath))
        #make sure row&col are even
        row = imageBlur.shape[0]
        col = imageBlur.shape[1]
        row = row-1 if row%2==1 else row
        col = col-1 if col%2==1 else col
        imageBlur = imageBlur[0:row,0:col]
        imageOrigin = imageBlur
        pad = self.__tuneSize(imageBlur.shape)
        imageBlur = np.pad(imageBlur,((pad[0],pad[0]),(pad[1],pad[1]),(0,0)),'reflect')
        return imageBlur,imageOrigin

    def __getData(self,root):
        for parent,dirnames,filenames in os.walk(root):
            for filename in filenames:
                self.__fileBlurList.append(os.path.join(parent,filename))
        self.data_length = len(self.__fileBlurList)
        print(f'total data:{self.data_length}!')

    def __deblur(self,imageBlur,imageOrigin):
        pyramid = tuple(transform.pyramid_gaussian(imageBlur, downscale=2, max_layer=self.max_iter, multichannel=True))
        deblurs = []
        for iter in self.iters:
            batch_blur2x = []
            batch_blur1x = []
            runtime = 0
            for i in range(iter,0,-1):
                if(i == iter):#first iter
                    imageBlur2x = pyramid[i]
                    batch_blur2x.append(imageBlur2x)
                    batch_gen = batch_blur2x
                else:
                    batch_blur2x = batch_blur1x
                    batch_blur1x = []
                imageBlur1x = pyramid[i-1]
                batch_blur1x.append(imageBlur1x)
                data_X1 = np.concatenate((batch_blur2x,batch_gen), axis=3)#6channels
                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blur1x)}
                start = time.time()
                batch_gen = self.model.generator.predict(data_X)
                elapsed = time.time()-start
                print(f'Runtime @scale {i}:{elapsed:4.3f}')
                runtime += elapsed
            print(f'Runtime total @iter {iter}:{runtime:4.3f}')
            deblur = self.__clipOutput(batch_gen[0],imageOrigin.shape)
            deblurs.append(deblur)
        return deblurs

    def application(self):
        if(self.config.application.iter == 0):
            self.iters = [1,2,3,4]
        else:
            self.iters = [self.config.application.iter]
        self.max_iter = max(self.iters)
        deblurring_file_path = self.config.application.deblurring_file_path
        deblurring_dir_path = self.config.application.deblurring_dir_path
        if(deblurring_file_path and os.path.exists(deblurring_file_path)):
            imageBlur,imageOrigin = self.__getImage(deblurring_file_path)
            deblurs = self.__deblur(imageBlur,imageOrigin)
            filename = os.path.basename(deblurring_file_path)#robust when the path has no '/'
            iter_times = len(deblurs)
            for i in range(iter_times):
                deblur = deblurs[i]
                deblur = (deblur * 255).astype('uint8')
                iter = self.iters[i]
                io.imsave(os.path.join(self.config.application.deblurring_result_dir, 'deblur'+str(iter)+'_'+filename),deblur)
            print('file saved')
        elif(deblurring_dir_path and os.path.exists(deblurring_dir_path)):
            self.__getData(deblurring_dir_path)
            index = 0
            for fileFullPath in self.__fileBlurList:
                imageBlur,imageOrigin = self.__getImage(fileFullPath)
                deblurs = self.__deblur(imageBlur,imageOrigin)
                infos = os.path.basename(fileFullPath)
                iter_times = len(deblurs)
                for j in range(iter_times):
                    deblur = deblurs[j]
                    deblur = (deblur * 255).astype('uint8')
                    iter = self.iters[j]
                    io.imsave(os.path.join(self.config.application.deblurring_result_dir, 'deblur'+str(iter)+'_'+infos),deblur)
                index += 1
                print(f'{index}/{self.data_length} done!')
            print(f'all saved')
        else:
            print(f"no deblur file(s)")

    def __clipOutput(self,image,outSize):
        inSize = image.shape
        start = []
        for i in range(2):
            start.append((inSize[i] - outSize[i]) // 2)
        return image[start[0]:start[0]+outSize[0],start[1]:start[1]+outSize[1]]

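The padding arithmetic in `Application.__tuneSize` and the matching centre crop in `__clipOutput` can be checked in isolation. The sketch below mirrors that round trip with standalone helper names (`tune_size`, `clip_output` are illustrative, not part of the repo): pad each spatial dimension up to the next multiple of 256 with reflection, then crop the centred original-sized region back out.

```python
import numpy as np

def tune_size(shape, multiple=256):
    """Per-axis half-padding so each spatial dim reaches the next multiple."""
    pad = []
    for size in shape[:2]:
        if size % multiple == 0:
            pad.append(0)
        else:
            n = size // multiple + 1
            pad.append((n * multiple - size) // 2)
    return pad

def clip_output(image, out_shape):
    """Crop the centred region matching the original (pre-pad) size."""
    start = [(image.shape[i] - out_shape[i]) // 2 for i in range(2)]
    return image[start[0]:start[0] + out_shape[0],
                 start[1]:start[1] + out_shape[1]]

# round trip on a GoPro-sized frame: 720 rows pad to 768, 1280 cols stay
img = np.zeros((720, 1280, 3))
pad = tune_size(img.shape)
padded = np.pad(img, ((pad[0], pad[0]), (pad[1], pad[1]), (0, 0)), 'reflect')
restored = clip_output(padded, img.shape)
```

Because 1280 is already a multiple of 256, only the row axis is padded (by 24 on each side), which is exactly the 24-row offset the tester later strips with `pImage[24:744]`.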
================================================
FILE: code/src/config.py
================================================
import os
import getpass

def _project_dir():
    d = os.path.dirname
    return d(d(os.path.abspath(__file__)))

class Config:
    def __init__(self):
        self.resource = ResourceConfig()
        self.trainer = TrainConfig()
        self.tester = TestConfig()
        self.application = Application()

class ResourceConfig:
    def __init__(self):
        self.project_dir = os.environ.get("PROJECT_DIR", _project_dir())
        self.data_dir = os.environ.get("DATA_DIR", os.path.join(_project_dir(), "data"))
        self.model_dir = os.environ.get("MODEL_DIR", os.path.join(self.project_dir, "model"))
        self.debug_dir = os.environ.get("DEBUG_DIR", os.path.join(self.project_dir, "debug"))
        self.output_dir = os.environ.get("OUTPUT_DIR", os.path.join(self.project_dir, "output"))
        
        self.generator_json_path = os.path.join(self.model_dir, "generator.json")
        self.generator_weights_path = os.path.join(self.model_dir, "generator.h5")
        self.train_directory_path = "/mnt/SD_1/myye/Deblur/GoPro/train"
        self.test_directory_path = "/mnt/SD_1/myye/Deblur/GoPro/test"

    def create_directories(self):
        dirs = [self.project_dir, self.data_dir, self.model_dir, self.debug_dir, self.output_dir]
        for d in dirs:
            if not os.path.exists(d):
                os.makedirs(d)

class TrainConfig:
    def __init__(self):
        self.generatorImageSize = 256
        self.generatorImageChannels = 3
        self.batch_size = 8
        self.maxEpoch = 2000
        self.gpu_num = 1

class TestConfig:
    def __init__(self):
        self.iter = 0

class Application:
    def __init__(self):
        self.iter = 4#set to 0 to try all iters (1,2,3,4)
        self.deblurring_file_path = None
        self.deblurring_dir_path = None
        self.deblurring_result_dir = None

================================================
FILE: code/src/lib/MLVSharpnessMeasure.py
================================================
import numpy as np
from scipy.special import gamma
from skimage import color

class MLVMeasurement():
    def __init__(self):
        self.gam = np.linspace(0.2,10,9801)

    def __estimateggdparam(self,vec):
        #only the sigma (scale) estimate is used; the GGD shape-parameter
        #grid in self.gam is kept for reference but not consumed here
        sigma_sq = np.mean(vec ** 2)
        sigma = np.sqrt(sigma_sq)
        return sigma

    def __MLVMap(self,img):
        xs, ys = img.shape
        x=img
        x1=np.zeros((xs,ys))
        x2=np.zeros((xs,ys))
        x3=np.zeros((xs,ys))
        x4=np.zeros((xs,ys))
        x5=np.zeros((xs,ys))
        x6=np.zeros((xs,ys))
        x7=np.zeros((xs,ys))
        x8=np.zeros((xs,ys))
        x9=np.zeros((xs,ys))
        x1[0:xs-2,0:ys-2] = x[1:xs-1,1:ys-1]
        x2[0:xs-2,1:ys-1] = x[1:xs-1,1:ys-1]
        x3[0:xs-2,2:ys]   = x[1:xs-1,1:ys-1]
        x4[1:xs-1,0:ys-2] = x[1:xs-1,1:ys-1]
        x5[1:xs-1,1:ys-1] = x[1:xs-1,1:ys-1]
        x6[1:xs-1,2:ys]   = x[1:xs-1,1:ys-1]
        x7[2:xs,0:ys-2]   = x[1:xs-1,1:ys-1]
        x8[2:xs,1:ys-1]   = x[1:xs-1,1:ys-1]
        x9[2:xs,2:ys]     = x[1:xs-1,1:ys-1]
        x1=x1[1:xs-1,1:ys-1]
        x2=x2[1:xs-1,1:ys-1]
        x3=x3[1:xs-1,1:ys-1]
        x4=x4[1:xs-1,1:ys-1]
        x5=x5[1:xs-1,1:ys-1]
        x6=x6[1:xs-1,1:ys-1]
        x7=x7[1:xs-1,1:ys-1]
        x8=x8[1:xs-1,1:ys-1]
        x9=x9[1:xs-1,1:ys-1]
        dd=[]
        dd.append(x1-x5)
        dd.append(x2-x5)
        dd.append(x3-x5)
        dd.append(x4-x5)
        dd.append(x6-x5)
        dd.append(x7-x5)
        dd.append(x8-x5)
        dd.append(x9-x5)
        map = np.max(dd,axis=0)
        return map

    def getScore(self,x):#x should be a float grayscale image
        if(x.ndim == 3):#color
            x = color.rgb2gray(x)
        map = self.__MLVMap(x)
        xs,ys = map.shape
        xy_number=xs*ys
        vec = map.reshape((xy_number,))
        vec[::-1].sort()#in-place sort of the reversed view leaves vec in descending order
        svec=vec[0:xy_number]
        a=np.arange(xy_number)
        q=np.exp(-0.01*a)
        svec=svec*q
        svec=svec[0:1000]
        return self.__estimateggdparam(svec)

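The nine shift-and-crop arrays `x1`..`x9` above build an 8-neighbour maximum-local-variation map. The same map can be expressed more compactly with slicing; the function name `mlv_map` below is illustrative, and the construction matches the class's behaviour (signed differences, no absolute value).

```python
import numpy as np

def mlv_map(img):
    """Maximum local variation: for each interior pixel, the largest
    difference between its 8 neighbours and the pixel itself (the same
    quantity the shift-and-crop arrays x1..x9 construct)."""
    h, w = img.shape
    c = img[1:-1, 1:-1]  # centre pixels
    shifts = [img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
              if (dr, dc) != (0, 0)]
    return np.max([s - c for s in shifts], axis=0)

img = np.arange(25, dtype=float).reshape(5, 5)  # linear ramp, row step 5, col step 1
m = mlv_map(img)                                # 3x3 interior map
```

On the ramp, the largest neighbour difference at every interior pixel comes from the (+1, +1) neighbour, so the map is constant 6.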
================================================
FILE: code/src/lib/__init__.py
================================================


================================================
FILE: code/src/lib/data_helper.py
================================================
import os
from skimage import io,img_as_float # image process
import numpy as np

class DataHelper:
    def __init__(self):
        self.__fileBlurList=[]
        self.__directoryList=[]
        self.__blurSharpPairs=[]

    def __traversalDir(self,root):
        for name in os.listdir(root):
          fullPath = os.path.join(root, name)
          if os.path.isdir(fullPath):
            self.__directoryList.append(fullPath)
        for directory in self.__directoryList:
          for parent,dirnames,filenames in os.walk(os.path.join(directory,'blur')):
            for filename in filenames:
              self.__fileBlurList.append(os.path.join(parent,filename))

    def load_data(self, path, number):#shuffles file order when number > 0
        self.__traversalDir(path)
        if(number>0):
            np.random.shuffle(self.__fileBlurList)
        totalLoaded = 0
        print(f'start loading dataset...')
        for fileFullPath in self.__fileBlurList:
          #imageBlur = io.imread(fileFullPath,as_gray=True)
          #imageSharp = io.imread(fileFullPath.replace('/blur','/sharp'),as_gray=True)
          imageBlur = img_as_float(io.imread(fileFullPath))
          imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))
          self.__blurSharpPairs.append((imageBlur,imageSharp))
          totalLoaded += 1
          if(totalLoaded == number):#if number < 1, the whole dataset is loaded
            break
        print(f'dataset loaded:{totalLoaded}!')

    def getRandomTrainDatas(self,config):
        X_train=[]
        Y_train=[]
        patchW = patchH = config.trainer.generatorImageSize
        for imageBlur,imageSharp in self.__blurSharpPairs:
          trainImageH = imageBlur.shape[0]
          trainImageW = imageBlur.shape[1]
          rowStart = np.random.randint(0, trainImageH-patchH)
          colStart = np.random.randint(0, trainImageW-patchW)
          X_train.append(imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW])
          Y_train.append(imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW])
        return X_train,Y_train#(row,col)

    def getTestDatas(self):
        #for imageBlur,imageSharp in self.__blurSharpPairs:
        return self.__fileBlurList

    def getLoadedPairs(self):
        return self.__blurSharpPairs

    def loadDataList(self, path):
        self.__traversalDir(path)
        data_length = len(self.__fileBlurList)
        print(f'dataset got:{data_length}!')
        return data_length

    def getAPair(self,index,config):
        fileFullPath = self.__fileBlurList[index]
        imageBlur = img_as_float(io.imread(fileFullPath))
        imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))
        patchW = patchH = config.trainer.generatorImageSize
        trainImageH = imageBlur.shape[0]
        trainImageW = imageBlur.shape[1]
        rowStart = np.random.randint(0, trainImageH-patchH)
        colStart = np.random.randint(0, trainImageW-patchW)
        return imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW],imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW]

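`DataHelper.getAPair` crops the same random window from both images so the blur/sharp supervision stays pixel-aligned. A minimal standalone sketch of that idea (the helper name and the optional seeded-generator parameter are illustrative, not from the repo):

```python
import numpy as np

def random_aligned_patch(blur, sharp, patch=256, rng=None):
    """Crop an identical random window from a blur/sharp pair so the
    supervision stays pixel-aligned; pass a seeded Generator via rng
    for reproducibility."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = blur.shape[:2]
    r = int(rng.integers(0, h - patch))
    c = int(rng.integers(0, w - patch))
    window = (slice(r, r + patch), slice(c, c + patch))
    return blur[window], sharp[window]

# identical coordinates produce identical crops from identical images
blur = np.arange(720 * 1280).reshape(720, 1280)
sharp = blur.copy()
b, s = random_aligned_patch(blur, sharp)
```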
================================================
FILE: code/src/lib/data_producer.py
================================================
import os
from skimage import io,img_as_float # image process
import numpy as np
import threading

class DataProducer(threading.Thread):
    def __init__(self, name,queue,config):
        threading.Thread.__init__(self, name=name,daemon=True)
        self.data=queue
        self.__fileBlurList=[]
        self.__directoryList=[]
        self.__blurSharpPairs=[]
        self.config = config
        self.running = True

    def __traversalDir(self,root):
        for name in os.listdir(root):
          fullPath = os.path.join(root, name)
          if os.path.isdir(fullPath):
            self.__directoryList.append(fullPath)
        for directory in self.__directoryList:
          for parent,dirnames,filenames in os.walk(os.path.join(directory,'blur')):
            for filename in filenames:
              self.__fileBlurList.append(os.path.join(parent,filename))

    def loadDataList(self, path):
        self.__traversalDir(path)
        self.data_length = len(self.__fileBlurList)
        print(f'dataset got:{self.data_length}!')
        return self.data_length

    def __produceAPair(self,index):
        fileFullPath = self.__fileBlurList[index]
        imageBlur = img_as_float(io.imread(fileFullPath))
        imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))
        patchW = patchH = self.config.trainer.generatorImageSize
        trainImageH = imageBlur.shape[0]
        trainImageW = imageBlur.shape[1]
        rowStart = np.random.randint(0, trainImageH-patchH)
        colStart = np.random.randint(0, trainImageW-patchW)
        blur = imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW]
        sharp = imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW]
        self.data.put((blur,sharp),block=True)#blocks while the queue is full

    def run(self):
        arr = np.arange(self.data_length)
        while(True):
            #an epoch
            np.random.shuffle(arr)
            for i in range(self.data_length):
                index = arr[i]
                self.__produceAPair(index)

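`DataProducer` feeds a bounded `queue.Queue` from a daemon thread; when the queue is full, `put` blocks, so the producer never outruns the training loop. A stdlib-only sketch of the same pattern (names are illustrative):

```python
import queue
import threading

def producer(q, items):
    # put() blocks whenever the queue is full, throttling production
    # to the consumer's pace -- the role DataProducer plays for training
    for item in items:
        q.put(item, block=True)

q = queue.Queue(maxsize=4)  # the trainer sizes its queue as batch_size * 4
t = threading.Thread(target=producer, args=(q, range(32)), daemon=True)
t.start()
consumed = [q.get() for _ in range(32)]  # drain in FIFO order
t.join()
```

With a single producer, FIFO ordering guarantees the consumer sees items exactly in production order even though the queue holds at most 4 at a time.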
================================================
FILE: code/src/lib/tf_util.py
================================================

def set_session_config(per_process_gpu_memory_fraction=None, allow_growth=None, device_list='0'):
    """

    :param allow_growth: When necessary, reserve memory
    :param float per_process_gpu_memory_fraction: specify GPU memory usage as 0 to 1

    :return:
    """
    import tensorflow as tf
    import keras.backend as K

    config = tf.ConfigProto(
        gpu_options=tf.GPUOptions(
            per_process_gpu_memory_fraction=per_process_gpu_memory_fraction,
            allow_growth=allow_growth,
            visible_device_list=device_list
        )
    )
    sess = tf.Session(config=config)
    K.set_session(sess)


================================================
FILE: code/src/model/__init__.py
================================================


================================================
FILE: code/src/model/model.py
================================================
import os
import tensorflow as tf
from keras.layers import *
from keras.initializers import glorot_uniform
from keras.models import Sequential,Model,load_model
from keras.layers.advanced_activations import LeakyReLU
import keras.backend as K

class DDModel:#Details Deblurring Model

    def __init__(self,config):
        self.config = config
        self.generator = self.build_generator((None,None,6),(None,None,3))

    def __resblock(self,X,filter_num):
        # Save the input value.
        X_shortcut = X
        
        X = Conv2D(filters = filter_num, kernel_size = (3, 3), strides = (1,1), padding = 'same')(X)
        X = Activation('relu')(X)
        
        X = Conv2D(filters = filter_num, kernel_size = (3, 3), strides = (1,1), padding = 'same')(X)
        X = Add()([X, X_shortcut])
        
        return X

    def __eblock(self,X,filter_num,stride):
        X = Conv2D(filters = filter_num, kernel_size = (5, 5), strides = (stride,stride), padding = 'same')(X)
        X = Activation('relu')(X)
        for i in range(3):
            X = self.__resblock(X,filter_num)
        return X

    def __dblock(self,X,filter_num,stride):
        for i in range(3):
            X = self.__resblock(X,filter_num*2)
        X = Conv2DTranspose(filter_num, kernel_size = (5, 5), strides = (stride, stride), padding='same')(X)
        X = Activation('relu')(X)
        return X

    def __outblock(self,X,filter_num):
        for i in range(3):
            X = self.__resblock(X,filter_num)
        X = Conv2D(3, kernel_size = (5, 5), strides = (1, 1), padding='same')(X)
        X = Activation('tanh')(X)
        X = Lambda(lambda x: x/2+0.5)(X)
        return X

    def __unet1(self,X):
        e32 = self.__eblock(X,32,1)#None,None,32
        e64 = self.__eblock(e32,64,2)#/2,64
        e128 = self.__eblock(e64,128,2)#/4,128
        d64 = self.__dblock(e128,64,2)#/2,64
        d64e64 = Add()([d64, e64])
        d32 = self.__dblock(d64e64,32,2)#None,None,32
        d32e32 = Add()([d32, e32])
        #d3 = self.__outblock(d32e32,32)
        return d32e32

    def __unet2(self,X):
        e32 = self.__eblock(X,32,1)#None,None,32
        e64 = self.__eblock(e32,64,2)#/2,64
        e128 = self.__eblock(e64,128,2)#/4,128
        d64 = self.__dblock(e128,64,2)#/2,64
        d64e64 = Add()([d64, e64])
        d32 = self.__dblock(d64e64,32,2)#None,None,32
        d32e32 = Add()([d32, e32])
        d3 = self.__outblock(d32e32,32)
        return d3

    def __makeDense(self,X,growthRate):
        out = Conv2D(filters = growthRate, kernel_size = (3, 3), strides = (1,1), padding = 'same', use_bias=False)(X)
        out = Activation('relu')(out)
        out = concatenate([X,out], axis=3)
        return out

    def __RDB(self,X,nChannels,nDenselayer,growthRate):
        X_shortcut = X
        for i in range(nDenselayer):    
            X = self.__makeDense(X, growthRate)
        X = Conv2D(filters = nChannels, kernel_size = (1, 1), strides = (1,1), padding = 'same', use_bias=False)(X)
        X = Add()([X, X_shortcut])
        return X

    def build_generator(self,input_shapeA,input_shapeB):#unet
        if(self.load(self.config.resource.generator_json_path,self.config.resource.generator_weights_path)):
            return self.model
        else:#init
            print(f'init network parameters')
            inputsA = Input(input_shapeA,name='imageSmall')#None,None,6
            inputsB = Input(input_shapeB,name='imageUp')#None,None,3
            #layer 1
            F_ = Conv2D(filters = 32, kernel_size = (3, 3), strides = (1,1), padding = 'same')(inputsA)#conv1
            F_0 = self.__unet1(F_)#32
            F_1 = self.__RDB(F_0,32,6,32)#RDB1
            F_2 = self.__RDB(F_1,32,6,32)#RDB2
            F_3 = self.__RDB(F_2,32,6,32)#RDB3
            FF = concatenate([F_1, F_2,F_3], axis=3)
            FdLF = Conv2D(filters = 32, kernel_size = (1, 1), strides = (1,1), padding = 'same')(FF)
            FGF = Conv2D(filters = 32, kernel_size = (3, 3), strides = (1,1), padding = 'same')(FdLF)
            FDF = Add()([FGF, F_])
            us = Conv2D(filters = 32*4, kernel_size = (3, 3), strides = (1,1), padding = 'same')(FDF)
            us = Lambda(lambda x: tf.depth_to_space(x,2))(us)#x2(upsample),32
            d3 = Conv2D(filters = 3, kernel_size = (3, 3), strides = (1,1), padding = 'same')(us)
            d3 = Activation('tanh')(d3)
            d3 = Lambda(lambda x: x/2+0.5)(d3)
            combined = concatenate([inputsB, d3], axis=3)#blur-generator,6
            o2 = self.__unet2(combined)
            model = Model(inputs=[inputsA,inputsB], outputs=o2, name='generator')
            return model

    def load(self, json_path, weights_path):
        from keras.models import model_from_json
        if os.path.exists(json_path) and os.path.exists(weights_path):
            json_file = open(json_path, 'r')
            loaded_model_json = json_file.read()
            json_file.close()
            self.model = model_from_json(loaded_model_json,custom_objects={'tf':tf})
            # load weights into new model
            self.model.load_weights(weights_path)
            print("Loaded model from disk")
            return True
        else:
            return False

    def save(self, model, json_path, weights_path):
        # serialize model to JSON
        model_json = model.to_json()
        with open(json_path, "w") as json_file:
            json_file.write(model_json)
        # serialize weights to HDF5
        model.save_weights(weights_path)
        print("Saved model to disk")

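`build_generator` upsamples with `tf.depth_to_space(x, 2)` inside a Lambda layer. For reference, a NumPy sketch of the equivalent NHWC pixel-shuffle rearrangement (assuming TF's default channel ordering, where the channel index decomposes as row-offset, column-offset, output-channel):

```python
import numpy as np

def depth_to_space(x, block=2):
    """NumPy counterpart of TF's NHWC depth_to_space: fold C = b*b*oc
    channels into a (b x b) spatial neighbourhood, (N,H,W,4c)->(N,2H,2W,c)."""
    n, h, w, c = x.shape
    oc = c // (block * block)
    x = x.reshape(n, h, w, block, block, oc)  # channel -> (row, col, oc)
    x = x.transpose(0, 1, 3, 2, 4, 5)         # interleave rows, then cols
    return x.reshape(n, h * block, w * block, oc)

x = np.arange(16).reshape(1, 2, 2, 4)  # one 2x2 map with 4 channels
y = depth_to_space(x)                  # -> shape (1, 4, 4, 1)
```

Each input pixel's 4 channels become a 2x2 block of output pixels, which is why the preceding Conv2D widens the feature map to 32*4 channels before the shuffle.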
================================================
FILE: code/src/tester.py
================================================
import os
from src.model.model import DDModel
from src.lib.data_helper import DataHelper
from src.lib.MLVSharpnessMeasure import MLVMeasurement
from skimage import io,transform #resize images
import numpy as np
import pickle
import math

class Tester():
    def __init__(self,config):
        self.config = config
        self.model = DDModel(config)
        self.batch_size = 8
        self.current_size = 0
        self.pyramid_blurs = []
        self.batch_sharps = []
        #metrics
        self.all_psnrs = {}

    def start(self):
        if(self.config.tester.iter == 0):
            self.iters = [1,2,3,4]
        else:
            self.iters = [self.config.tester.iter]
        self.max_iter = max(self.iters)
        for iter in self.iters:
            self.all_psnrs[iter] = []
        #json_path=self.config.resource.generator_json_path
        #infos = json_path.split('generator')
        #infos = infos[1].split('.')
        #json_info = infos[0]
        #weights_path=self.config.resource.generator_weights_path
        #infos = weights_path.split('generator')
        #infos = infos[1].split('.')
        #weights_info = infos[0]
        #print(f'json/weight:{json_info}/{weights_info}')
        print(f'test strategy:{self.iters}')
        self.test()

    def __compute_psnr(self, x, label, max_diff):
        mse = np.mean((x - label) ** 2)
        return 10 * math.log10(max_diff ** 2 / mse)

    def __doBatchTest(self):
        n = len(self.pyramid_blurs)
        for iter in self.iters:
            batch_blurs2x = []
            batch_blurs1x = []
            for i in range(iter,0,-1):
                if(i == iter):#first iter
                    #generate batch_blurs2x
                    for j in range(n):
                        pyramid_blur = self.pyramid_blurs[j]
                        imageBlur2x = pyramid_blur[i]
                        batch_blurs2x.append(imageBlur2x)
                    batch_gen = batch_blurs2x
                else:
                    #generate batch_blurs2x
                    batch_blurs2x = batch_blurs1x
                    batch_blurs1x = []
                #generate batch_blurs1x
                for j in range(n):
                    pyramid_blur = self.pyramid_blurs[j]
                    imageBlur1x = pyramid_blur[i-1]
                    batch_blurs1x.append(imageBlur1x)
                #data prepare end
                
                #predict 2x
                data_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels
                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blurs1x)}
                batch_gen = self.model.generator.predict(data_X)
            #calculate metrics
            for i in range(n):
                pImage = batch_gen[i]
                pImage = pImage[24:744]#drop the 24-row reflect padding added in __doInteration
                psnr = self.__compute_psnr(pImage, self.batch_sharps[i], 1)
                self.all_psnrs[iter].append(psnr)
        #reset
        self.current_size = 0
        self.pyramid_blurs = []
        self.batch_sharps = []

    def __doInteration(self,blur,sharp):
        #self.sharpness.append(self.measure.getScore(blur))
        if(self.current_size < self.batch_size):
            blur = np.pad(blur,((24,24),(0,0),(0,0)),'reflect')#pad 720 rows to 768 so height is divisible by 256
            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))
            self.batch_sharps.append(sharp)
            self.current_size += 1
        if(self.current_size == self.batch_size):#train a batch
            self.__doBatchTest()

    def test(self):
        dataHelper = DataHelper()
        dataHelper.load_data(self.config.resource.test_directory_path,0)
        
        blurSharpPairs = dataHelper.getLoadedPairs()
        for imageBlur,imageSharp in blurSharpPairs:
            self.__doInteration(imageBlur,imageSharp)
        if(self.pyramid_blurs):#flush the last partial batch
            self.__doBatchTest()
        
        #analyse results
        psnrs = []
        for iter in self.iters:
            psnrs.append(self.all_psnrs[iter])
        psnrs = np.array(psnrs)
        psnrs_by_iter = np.mean(psnrs,axis=1)
        for i in range(len(psnrs_by_iter)):
            print(f'PSNR:{psnrs_by_iter[i]}@{self.iters[i]}')
        best_psnrs = np.amax(psnrs,axis=0)
        path=os.path.join(self.config.resource.output_dir, "psnrs.pkl")
        with open(path, 'wb') as pfile:
          pickle.dump(best_psnrs, pfile, protocol=pickle.HIGHEST_PROTOCOL)
        best_iters_index = np.argmax(psnrs,axis=0)
        iters = np.array(self.iters)
        best_iters = iters[best_iters_index]
        path=os.path.join(self.config.resource.output_dir, "iters.pkl")
        with open(path, 'wb') as pfile:
          pickle.dump(best_iters, pfile, protocol=pickle.HIGHEST_PROTOCOL)
        #path=os.path.join(self.config.resource.output_dir, "sharpness.pkl")
        #with open(path, 'wb') as pfile:
        #  pickle.dump(self.sharpness, pfile, protocol=pickle.HIGHEST_PROTOCOL)
        calculate_data_n = len(best_psnrs)
        print(f'{calculate_data_n}/{len(blurSharpPairs)} done! Average PSNRs(Best):{np.mean(best_psnrs)}')

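The tester's PSNR helper follows the standard `10*log10(max^2/MSE)` definition. A standalone NumPy sketch with a hand-checkable case (the function name mirrors the private helper above):

```python
import numpy as np

def compute_psnr(x, ref, max_diff=1.0):
    """PSNR in dB for images on [0, max_diff], mirroring
    Tester.__compute_psnr."""
    mse = np.mean((x - ref) ** 2)
    return 10 * np.log10(max_diff ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)   # uniform error of 0.1 => MSE = 0.01
psnr = compute_psnr(a, b)  # 10 * log10(1 / 0.01) = 20 dB
```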
================================================
FILE: code/src/trainer.py
================================================
from src.model.model import DDModel
from src.lib.data_producer import DataProducer
import tensorflow as tf
import keras.backend as K
from keras.optimizers import RMSprop,Adam
from skimage import io,transform,feature,color
import numpy as np
import sys
from keras.utils.training_utils import multi_gpu_model
import queue
import threading

class Trainer():
    def __init__(self,config):
        self.config = config
        self.model = DDModel(config)
        self.batch_size = config.trainer.batch_size
        self.learningSteps = [1e-4,3e-5,5e-6,1e-6]
        #self.learningSteps = [1e-4,3e-5]
        self.currentStep = 0
        self.bestLoss = 2#sentinel; L1 loss on [0,1] images is always below this
        self.bestEpoch = 0
        self.current_size = 0
        self.iters = [3]
        self.iter_length = len(self.iters)
        self.pyramid_blurs = []
        self.pyramid_sharps = []
        

    def start(self):
        #json_path=self.config.resource.generator_json_path
        #infos = json_path.split('generator')
        #infos = infos[1].split('.')
        #json_info = infos[0]
        #weights_path=self.config.resource.generator_weights_path
        #infos = weights_path.split('generator')
        #infos = infos[1].split('.')
        #weights_info = infos[0]
        #print(f'json/weight:{json_info}/{weights_info}')
        self.train(self.config.trainer.maxEpoch)

    def __trainBatch(self):
        batch_blurs2x = []
        batch_blurs1x = []
        batch_sharps1x = []
        n = len(self.pyramid_blurs)
        for i in range(self.max_iter,0,-1):
            if(i == self.max_iter):#first iter
                #generate batch_blurs2x
                for j in range(n):
                    pyramid_blur = self.pyramid_blurs[j]
                    imageBlur2x = pyramid_blur[i]
                    batch_blurs2x.append(imageBlur2x)
                batch_gen = batch_blurs2x
            else:
                #generate batch_blurs2x
                batch_blurs2x = batch_blurs1x
                batch_blurs1x = []
                batch_sharps1x = []
            #generate batch_blurs1x
            for j in range(n):
                pyramid_blur = self.pyramid_blurs[j]
                imageBlur1x = pyramid_blur[i-1]
                batch_blurs1x.append(imageBlur1x)
            #generate batch_sharps1x
            for j in range(n):
                pyramid_sharp = self.pyramid_sharps[j]
                imageSharp1x = pyramid_sharp[i-1]
                batch_sharps1x.append(imageSharp1x)
            #data generate end
            
            #train Generator 2x
            train_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels
            train_X = {'imageSmall':train_X1,'imageUp':np.array(batch_blurs1x)}
            g_loss = self.generator.train_on_batch(train_X,np.array(batch_sharps1x))
            if(i == 1):#last iter
                self.g_loss += g_loss * n
            else:
                batch_gen = self.generator.predict(train_X)
        #train end,reset
        self.current_size = 0
        self.pyramid_blurs = []
        self.pyramid_sharps = []

    def __doInteration(self,blur,sharp,epoch):
        iter_index = epoch%self.iter_length
        self.max_iter = self.iters[iter_index]
        if(self.current_size < self.batch_size):
            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))
            self.pyramid_sharps.append(tuple(transform.pyramid_gaussian(sharp, downscale=2, max_layer=self.max_iter, multichannel=True)))
            self.current_size += 1
        if(self.current_size == self.batch_size):#train a batch
            self.__trainBatch()

    def __nextStep(self):
        #lr = K.get_value(self.generator.optimizer.lr)
        self.currentStep += 1
        if(self.currentStep < len(self.learningSteps)):
            lr = self.learningSteps[self.currentStep]
            K.set_value(self.generator.optimizer.lr, lr)
            self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)
            f_lr = "{:.2e}".format(lr)
            print(f'learning rate:{f_lr}')
            return False
        else:#early end
            return True

    def __learningScheduler(self,epoch):
        if(epoch == 0):
            lr = K.get_value(self.generator.optimizer.lr)
            f_lr = "{:.2e}".format(lr)
            print(f'learning rate:{f_lr}')
            return False
        if(self.bestLoss>self.g_loss):
            self.bestLoss = self.g_loss
            self.bestEpoch = epoch
            if(self.currentStep == len(self.learningSteps)-1):#last step
                self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)
            return False
        #self.bestLoss<=self.g_loss, model not improved
        if(self.currentStep == len(self.learningSteps)-1):#last step
            patience = 50
        else:
            patience = 30
        if(epoch-self.bestEpoch >= patience):
            return self.__nextStep()

    def train(self,maxEpoch):
        optimizer = Adam(self.learningSteps[self.currentStep])
        if(self.config.trainer.gpu_num>1):
            self.generator = multi_gpu_model(self.model.generator, self.config.trainer.gpu_num)
        else:
            self.generator = self.model.generator
        self.generator.compile(loss='mean_absolute_error', optimizer=optimizer)
        print(f'generator:{self.generator.metrics_names}')
        print(f'training strategy:{self.iters}')
        
        image_queue = queue.Queue(maxsize=self.config.trainer.batch_size*4)
        dataProducer = DataProducer('Producer',image_queue,self.config)
        n = dataProducer.loadDataList(self.config.resource.train_directory_path)
        dataProducer.start()
        for epoch in range(maxEpoch):
            #tune learning rate
            
            if(self.__learningScheduler(epoch)):#early end
                print('early end')
                sys.exit()
            '''
            if(epoch == 0):
                lr = K.get_value(self.generator.optimizer.lr)
                f_lr = "{:.2e}".format(lr)
                print(f'learning rate:{f_lr}')
            elif(epoch % 300 == 0):
                earlyEnd = self.__nextStep()
                if(earlyEnd):
                    break
            '''
            self.g_loss = 0
            for i in range(n):
                imageBlur, imageSharp = image_queue.get(True)  # blocking get
                self.__doInteration(imageBlur, imageSharp, epoch)
            if(self.pyramid_blurs):
                #last batch, may be smaller than batch_size
                self.__trainBatch()
            #f_g_loss = ["{:.2f}".format(x) for x in self.g_loss]
            self.g_loss = self.g_loss/n
            f_g_loss = "{:.3e}".format(self.g_loss)
            print(f'epoch:{epoch+1}/{maxEpoch},[G loss:{f_g_loss}]')
        self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)

================================================
FILE: code/src/verification.py
================================================
from src.model.model import DDModel
from src.lib.data_producer import DataProducer
from src.lib.data_helper import DataHelper
import tensorflow as tf
import keras.backend as K
from keras.optimizers import RMSprop,Adam
from skimage import io,transform,feature,color,img_as_float
import numpy as np
import sys
from keras.utils.training_utils import multi_gpu_model
import queue
import threading
import math

class Verification():
    def __init__(self,config):
        self.config = config
        self.model = DDModel(config)
        self.batch_size = config.trainer.batch_size
        self.learningRate = 1e-6
        self.bestMetric = 0#psnr
        self.bestEpoch = 0
        self.patience = 200
        self.current_size = 0
        self.iters = [3]
        self.iter_length = len(self.iters)
        self.pyramid_blurs = []
        self.pyramid_sharps = []

    def start(self):
        #json_path=self.config.resource.generator_json_path
        #infos = json_path.split('generator')
        #infos = infos[1].split('.')
        #json_info = infos[0]
        #weights_path=self.config.resource.generator_weights_path
        #infos = weights_path.split('generator')
        #infos = infos[1].split('.')
        #weights_info = infos[0]
        #print(f'json/weight:{json_info}/{weights_info}')
        print(f'verification strategy:{self.iters}')
        self.bestMetric = self.__getMetric()#init
        print(f'init metric:{self.bestMetric}')
        self.train()

    def __trainBatch(self):
        batch_blurs2x = []
        batch_blurs1x = []
        batch_sharps1x = []
        n = len(self.pyramid_blurs)
        for i in range(self.max_iter,0,-1):
            if(i == self.max_iter):#first iter
                #generate batch_blurs2x
                for j in range(n):
                    pyramid_blur = self.pyramid_blurs[j]
                    imageBlur2x = pyramid_blur[i]
                    batch_blurs2x.append(imageBlur2x)
                batch_gen = batch_blurs2x
            else:
                #generate batch_blurs2x
                batch_blurs2x = batch_blurs1x
                batch_blurs1x = []
                batch_sharps1x = []
            #generate batch_blurs1x
            for j in range(n):
                pyramid_blur = self.pyramid_blurs[j]
                imageBlur1x = pyramid_blur[i-1]
                batch_blurs1x.append(imageBlur1x)
            #generate batch_sharps1x
            for j in range(n):
                pyramid_sharp = self.pyramid_sharps[j]
                imageSharp1x = pyramid_sharp[i-1]
                batch_sharps1x.append(imageSharp1x)
            #data generate end
            
            #train Generator 2x
            train_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6 channels: current-scale blur + previous output
            train_X = {'imageSmall':train_X1,'imageUp':np.array(batch_blurs1x)}
            g_loss = self.generator.train_on_batch(train_X,np.array(batch_sharps1x))
            if(i == 1):#last iter
                self.g_loss += g_loss * n
            else:
                batch_gen = self.generator.predict(train_X)
        #train end,reset
        self.current_size = 0
        self.pyramid_blurs = []
        self.pyramid_sharps = []

    def __doInteration(self,blur,sharp,epoch):
        iter_index = epoch%self.iter_length
        self.max_iter = self.iters[iter_index]
        if(self.current_size < self.batch_size):
            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))
            self.pyramid_sharps.append(tuple(transform.pyramid_gaussian(sharp, downscale=2, max_layer=self.max_iter, multichannel=True)))
            self.current_size += 1
        if(self.current_size == self.batch_size):#train a batch
            self.__trainBatch()

    def __compute_psnr(self, x, label, max_diff):
        mse = np.mean((x - label) ** 2)
        return 10 * math.log10(max_diff ** 2 / mse)

    def __testBatch(self,pyramid_blurs,batch_sharps):
        n = len(pyramid_blurs)
        psnrs = []
        for n_iter in self.iters:#avoid shadowing the builtin iter
            batch_blurs2x = []
            batch_blurs1x = []
            for i in range(n_iter,0,-1):
                if(i == n_iter):#first iter
                    #generate batch_blurs2x
                    for j in range(n):
                        pyramid_blur = pyramid_blurs[j]
                        imageBlur2x = pyramid_blur[i]
                        batch_blurs2x.append(imageBlur2x)
                    batch_gen = batch_blurs2x
                else:
                    #generate batch_blurs2x
                    batch_blurs2x = batch_blurs1x
                    batch_blurs1x = []
                #generate batch_blurs1x
                for j in range(n):
                    pyramid_blur = pyramid_blurs[j]
                    imageBlur1x = pyramid_blur[i-1]
                    batch_blurs1x.append(imageBlur1x)
                #data prepare end
                
                #predict 2x
                data_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels
                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blurs1x)}
                batch_gen = self.model.generator.predict(data_X)
            #calculate metrics
            batch_psnrs = []
            for i in range(n):
                pImage = batch_gen[i]
                pImage = pImage[24:744]#crop off the 24-pixel reflect padding added before the pyramid
                psnr = self.__compute_psnr(pImage, batch_sharps[i], 1)
                batch_psnrs.append(psnr)
            psnrs.append(batch_psnrs)
        psnrs = np.array(psnrs)
        best_index = np.argmax(psnrs,axis=0)
        for i in range(n):
            best_psnr = psnrs[best_index[i]][i]
            best_iter = self.iters[best_index[i]]
            self.best_psnrs.append(best_psnr)
            self.best_iters.append(best_iter)

    def __getMetric(self):
        dataHelper = DataHelper()
        dataHelper.loadDataList(self.config.resource.test_directory_path)
        fileBlurList = dataHelper.getTestDatas()
        batch_size = 8
        max_iter = max(self.iters)
        #metrics
        self.best_psnrs = []
        self.best_iters = []
        
        current_size = 0
        pyramid_blurs = []
        batch_sharps = []
        for fileFullPath in fileBlurList:
            blur = img_as_float(io.imread(fileFullPath))
            sharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))
            if(current_size < batch_size):
                blur = np.pad(blur,((24,24),(0,0),(0,0)),'reflect')#pad height (e.g. 720 -> 768) so it is divisible by 256
                pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=max_iter, multichannel=True)))
                batch_sharps.append(sharp)
                current_size += 1
            if(current_size == batch_size):#verify a batch
                self.__testBatch(pyramid_blurs,batch_sharps)
                current_size = 0
                pyramid_blurs = []
                batch_sharps = []
        if(pyramid_blurs):
            self.__testBatch(pyramid_blurs,batch_sharps)
            current_size = 0
            pyramid_blurs = []
            batch_sharps = []
        return np.mean(self.best_psnrs)

    def __verify(self,epoch):
        if(epoch % 50 != 0):
            return False
        metric = self.__getMetric()
        print(f'current metric:{metric}')
        if(metric > self.bestMetric):
            self.bestMetric = metric
            self.bestEpoch = epoch
            self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)
            return False
        elif(epoch - self.bestEpoch < self.patience):
            return False
        else:
            return True
    
    def train(self):
        optimizer = Adam(self.learningRate)
        if(self.config.trainer.gpu_num>1):
            self.generator = multi_gpu_model(self.model.generator, self.config.trainer.gpu_num)
        else:
            self.generator = self.model.generator
        self.generator.compile(loss='mean_absolute_error', optimizer=optimizer)
        print(f'generator:{self.generator.metrics_names}')
        
        image_queue = queue.Queue(maxsize=self.config.trainer.batch_size*4)
        dataProducer = DataProducer('Producer',image_queue,self.config)
        n = dataProducer.loadDataList(self.config.resource.train_directory_path)
        dataProducer.start()
        epoch = 0
        while(True):
            self.g_loss = 0
            for i in range(n):
                imageBlur, imageSharp = image_queue.get(True)  # blocking get
                self.__doInteration(imageBlur, imageSharp, epoch)
            if(self.pyramid_blurs):
                #last batch, may be smaller than batch_size
                self.__trainBatch()
            #f_g_loss = ["{:.2f}".format(x) for x in self.g_loss]
            self.g_loss = self.g_loss/n
            f_g_loss = "{:.3e}".format(self.g_loss)
            print(f'verification epoch:{epoch},[G loss:{f_g_loss}]')
            epoch += 1
            if(self.__verify(epoch)):
                break
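
Verification scores each checkpoint with PSNR and, for every test image, keeps the best result across several iteration counts via `np.argmax` over the iteration axis. A small self-contained sketch of both steps follows; `compute_psnr` mirrors `__compute_psnr` above, and the `psnrs` matrix is made-up example data, not repo output:

```python
import math
import numpy as np

def compute_psnr(x, label, max_diff):
    """PSNR in dB: 10 * log10(max_diff^2 / MSE).
    max_diff is 1.0 for float images scaled to [0, 1]."""
    mse = np.mean((x - label) ** 2)
    return 10 * math.log10(max_diff ** 2 / mse)

# A restored image that is off by a constant 0.1 everywhere:
label = np.full((8, 8, 3), 0.5)
x = label + 0.1
print(round(compute_psnr(x, label, 1.0), 4))  # 20.0 (mse = 0.01)

# Per-image best-iteration selection as in __testBatch:
# rows index the tried iteration counts, columns index images.
psnrs = np.array([[28.1, 30.2],
                  [29.4, 29.8]])
best_index = np.argmax(psnrs, axis=0)
print(best_index)  # [1 0]: image 0 prefers the second iteration count, image 1 the first
```

This is why `__getMetric` returns `np.mean(self.best_psnrs)`: each image contributes its best achievable PSNR over the candidate iteration counts, not the PSNR at a fixed count.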
SYMBOL INDEX (81 symbols across 11 files)

FILE: code/deblur.py
  function getArgs (line 14) | def getArgs():

FILE: code/src/application.py
  class Application (line 9) | class Application():
    method __init__ (line 13) | def __init__(self,config):
    method start (line 22) | def start(self):
    method __tuneSize (line 25) | def __tuneSize(self,shape):
    method __getImage (line 36) | def __getImage(self,fileFullPath):#self.config.application.deblurring_...
    method __getData (line 49) | def __getData(self,root):
    method __deblur (line 56) | def __deblur(self,imageBlur,imageOrigin):
    method application (line 84) | def application(self):
    method __clipOutput (line 122) | def __clipOutput(self,image,outSize):

FILE: code/src/config.py
  function _project_dir (line 4) | def _project_dir():
  class Config (line 8) | class Config:
    method __init__ (line 9) | def __init__(self):
  class ResourceConfig (line 15) | class ResourceConfig:
    method __init__ (line 16) | def __init__(self):
    method create_directories (line 28) | def create_directories(self):
  class TrainConfig (line 34) | class TrainConfig:
    method __init__ (line 35) | def __init__(self):
  class TestConfig (line 42) | class TestConfig:
    method __init__ (line 43) | def __init__(self):
  class Application (line 46) | class Application:
    method __init__ (line 47) | def __init__(self):

FILE: code/src/lib/MLVSharpnessMeasure.py
  class MLVMeasurement (line 5) | class MLVMeasurement():
    method __init__ (line 6) | def __init__(self):
    method __estimateggdparam (line 9) | def __estimateggdparam(self,vec):
    method __MLVMap (line 16) | def __MLVMap(self,img):
    method getScore (line 58) | def getScore(self,x):#x should be double gray image

FILE: code/src/lib/data_helper.py
  class DataHelper (line 5) | class DataHelper:
    method __init__ (line 6) | def __init__(self):
    method __traversalDir (line 11) | def __traversalDir(self,root):
    method load_data (line 21) | def load_data(self, path, number):#shuffle
    method getRandomTrainDatas (line 38) | def getRandomTrainDatas(self,config):
    method getTestDatas (line 51) | def getTestDatas(self):
    method getLoadedPairs (line 55) | def getLoadedPairs(self):
    method loadDataList (line 58) | def loadDataList(self, path):
    method getAPair (line 64) | def getAPair(self,index,config):

FILE: code/src/lib/data_producer.py
  class DataProducer (line 6) | class DataProducer(threading.Thread):
    method __init__ (line 7) | def __init__(self, name,queue,config):
    method __traversalDir (line 16) | def __traversalDir(self,root):
    method loadDataList (line 26) | def loadDataList(self, path):
    method __produceAPair (line 32) | def __produceAPair(self,index):
    method run (line 45) | def run(self):

FILE: code/src/lib/tf_util.py
  function set_session_config (line 2) | def set_session_config(per_process_gpu_memory_fraction=None, allow_growt...

FILE: code/src/model/model.py
  class DDModel (line 9) | class DDModel:#Details Deblurring Model
    method __init__ (line 11) | def __init__(self,config):
    method __resblock (line 15) | def __resblock(self,X,filter_num):
    method __eblock (line 27) | def __eblock(self,X,filter_num,stride):
    method __dblock (line 34) | def __dblock(self,X,filter_num,stride):
    method __outblock (line 41) | def __outblock(self,X,filter_num):
    method __unet1 (line 49) | def __unet1(self,X):
    method __unet2 (line 60) | def __unet2(self,X):
    method __makeDense (line 71) | def __makeDense(self,X,growthRate):
    method __RDB (line 77) | def __RDB(self,X,nChannels,nDenselayer,growthRate):
    method build_generator (line 85) | def build_generator(self,input_shapeA,input_shapeB):#unet
    method load (line 112) | def load(self, json_path, weights_path):
    method save (line 126) | def save(self, model, json_path, weights_path):

FILE: code/src/tester.py
  class Tester (line 10) | class Tester():
    method __init__ (line 11) | def __init__(self,config):
    method start (line 21) | def start(self):
    method __compute_psnr (line 41) | def __compute_psnr(self, x , label , max_diff):
    method __doBatchTest (line 45) | def __doBatchTest(self):
    method __doInteration (line 84) | def __doInteration(self,blur,sharp):
    method test (line 94) | def test(self):

FILE: code/src/trainer.py
  class Trainer (line 13) | class Trainer():
    method __init__ (line 14) | def __init__(self,config):
    method start (line 30) | def start(self):
    method __trainBatch (line 42) | def __trainBatch(self):
    method __doInteration (line 85) | def __doInteration(self,blur,sharp,epoch):
    method __nextStep (line 95) | def __nextStep(self):
    method __learningScheduler (line 108) | def __learningScheduler(self,epoch):
    method train (line 128) | def train(self,maxEpoch):

FILE: code/src/verification.py
  class Verification (line 15) | class Verification():
    method __init__ (line 16) | def __init__(self,config):
    method start (line 30) | def start(self):
    method __trainBatch (line 45) | def __trainBatch(self):
    method __doInteration (line 88) | def __doInteration(self,blur,sharp,epoch):
    method __compute_psnr (line 98) | def __compute_psnr(self, x , label , max_diff):
    method __testBatch (line 102) | def __testBatch(self,pyramid_blurs,batch_sharps):
    method __getMetric (line 147) | def __getMetric(self):
    method __verify (line 180) | def __verify(self,epoch):
    method train (line 195) | def train(self):