[
  {
    "path": "README.md",
    "content": "# Scale-Iterative Upscaling Network for Image Deblurring\nby Minyuan Ye, Dong Lyu and Gengsheng Chen<br>\npdf [[main](https://ieeexplore.ieee.org/document/8963625)][[backup](http://lab.zhuzhuguowang.cn:36900/croxline/Paper/Scale-Iterative%20Upscaling%20Network%20for%20Image%20Deblurring.pdf)]\n### One real example\n![/comparisions/images_in_paper/real_building1_comparision.png](../master/comparisons/images_in_paper/Real_building1_comparison.png)<br>\n(a) Result of Nah et al. (b) Result of Tao et al. (c) Result of Zhang et al. (d) Our result.\n<br>\n### Results on benchmark datasets\n![/comparisions/images_in_paper/benchmark_comparison.png](../master/comparisons/images_in_paper/benchmark_comparison.png)<br>\nFrom top to bottom are blurry input, deblurring results of Nah et al., Tao et al., Zhang et al. and ours.<br>\n<br>\n### Results on real-world blurred images\n![/comparisions/images_in_paper/real_comparison.png](../master/comparisons/images_in_paper/real_comparison.png)<br>\nFrom top to bottom are images restored by Pan et al., Nah et al., Tao et al., Zhang et al. and ours. As space limits, the original blurry images are omitted here. \nThey can be viewed in Lai dataset with their names, from left to right: boy_statue, pietro, street4 and text1.\n<br>\n## Prerequisites\nPlease refer to \"/code/requirements.txt\".\n<br>\n## Installation\n\n```\ngit clone https://github.com/minyuanye/SIUN.git\ncd code\n```\n\n## Basic usage\nYou can always add '--gpu=<gpu_id>' to specify GPU ID, the default ID is 0.<br>\n\n1. For deblurring an image:<br>\n**python deblur.py --apply --file-path='</testpath/test.png>'**<br>\n\n\n2. For deblurring all images in a folder:<br>\n**python deblur.py --apply --dir-path='</testpath/testDir>'**<br>\nAdd '--result-dir=</output_path>' to specify output path. If it is not specified, the default path is './output'.<br>\n\n3. 
For testing the model:<br>\n**python deblur.py --test**<br>\nNote that this command can only be used to test GOPRO dataset. And it will load all images into memory first. We recommand to use '--apply'\nas an alternative (Item 2).<br>\nPlease set value of 'test_directory_path' to specify the GOPRO dataset path in file 'config.py'.<br>\n\n4. For training a new model:<br>\n**python deblur.py --train**<br>\nPlease remove the model file in 'model' first and set value of 'train_directory_path' to specify the GOPRO dataset path in file 'config.py'.<br>\nWhen it finishes, run:<br>\n**python deblur.py --verify**<br>\n\n\n## Advanced usage\nPlease refer to the source code. Most configuration parameters are listed in '/code/src/config.py'.\n\n## Citation\nIf you use any part of our code, or SIUN is useful for your research, please consider citing:\n```bibtex\n@ARTICLE{8963625,\nauthor={M. {Ye} and D. {Lyu} and G. {Chen}},\njournal={IEEE Access},\ntitle={Scale-Iterative Upscaling Network for Image Deblurring},\nyear={2020},\nvolume={8},\nnumber={},\npages={18316-18325},\nkeywords={Blind deblurring;curriculum learning;scale-iterative;upscaling network},\ndoi={10.1109/ACCESS.2020.2967823},\nISSN={2169-3536},\nmonth={},}\n```\n"
  },
  {
    "path": "code/__init__.py",
    "content": ""
  },
  {
    "path": "code/deblur.py",
    "content": "import os\nimport sys\nimport argparse\nfrom src.config import Config\nfrom src.lib.tf_util import set_session_config\n\n_PATH_ = os.path.dirname(os.path.dirname(__file__))\n\nif _PATH_ not in sys.path:\n    sys.path.append(_PATH_)\n\n\n\ndef getArgs():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--train\", help=\"train the model\", action=\"store_true\", default=False)\n    parser.add_argument(\"--test\", help=\"test the model\", action=\"store_true\", default=False)\n    parser.add_argument(\"--apply\", help=\"use the model\", action=\"store_true\", default=False)\n    parser.add_argument(\"--verify\", help=\"verify the model\", action=\"store_true\", default=False)\n    parser.add_argument(\"--gpu\", help=\"test device list\", default=\"0\")\n    parser.add_argument(\"--file-path\", help=\"file path of the input image\")\n    parser.add_argument(\"--dir-path\", help=\"dir path of the input images\")\n    parser.add_argument(\"--result-dir\", help=\"deblur result dir of the input images\")\n    parser.add_argument(\"--iter\", help=\"iter times\", default=0, type=int)\n    return parser.parse_args()\n    \nif __name__ == \"__main__\":\n    args = getArgs()\n    config = Config()\n    config.resource.create_directories()\n    if(args.file_path):\n        config.application.deblurring_file_path = args.file_path\n    if(args.dir_path):\n        config.application.deblurring_dir_path = args.dir_path\n    if(args.iter):\n        config.application.iter = args.iter\n    if(args.result_dir):\n        config.application.deblurring_result_dir = args.result_dir\n    set_session_config(per_process_gpu_memory_fraction=1, allow_growth=True, device_list=args.gpu)\n    gpus = args.gpu.split(\",\")\n    config.trainer.gpu_num = len(gpus)\n    if(args.train):\n        #trainer\n        from src.trainer import Trainer\n        Trainer(config).start()\n    elif(args.test):\n        #tester\n        from src.tester import Tester\n        
Tester(config).start()\n    elif(args.apply):\n        #application\n        from src.application import Application\n        Application(config).start()\n    elif(args.verify):\n        #verification\n        from src.verification import Verification\n        Verification(config).start()\n    else:\n        #info\n        from src.model.model import DDModel\n        model = DDModel(config)\n        model.generator.summary(line_length=150)\n\n\n"
  },
  {
    "path": "code/model/generator.json",
    "content": "{\"class_name\": \"Model\", \"config\": {\"name\": \"generator\", \"layers\": [{\"name\": \"imageSmall\", \"class_name\": \"InputLayer\", \"config\": {\"batch_input_shape\": [null, null, null, 6], \"dtype\": \"float32\", \"sparse\": false, \"name\": \"imageSmall\"}, \"inbound_nodes\": []}, {\"name\": \"conv2d_1\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_1\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"imageSmall\", 0, 0, {}]]]}, {\"name\": \"conv2d_2\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_2\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [5, 5], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"conv2d_1\", 0, 0, {}]]]}, {\"name\": \"activation_1\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_1\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_2\", 0, 0, {}]]]}, {\"name\": 
\"conv2d_3\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_3\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_1\", 0, 0, {}]]]}, {\"name\": \"activation_2\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_2\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_3\", 0, 0, {}]]]}, {\"name\": \"conv2d_4\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_4\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_2\", 0, 0, {}]]]}, {\"name\": \"add_1\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_1\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_4\", 0, 0, {}], [\"activation_1\", 0, 0, {}]]]}, {\"name\": \"conv2d_5\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_5\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 
3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_1\", 0, 0, {}]]]}, {\"name\": \"activation_3\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_3\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_5\", 0, 0, {}]]]}, {\"name\": \"conv2d_6\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_6\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_3\", 0, 0, {}]]]}, {\"name\": \"add_2\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_2\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_6\", 0, 0, {}], [\"add_1\", 0, 0, {}]]]}, {\"name\": \"conv2d_7\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_7\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": 
true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_2\", 0, 0, {}]]]}, {\"name\": \"activation_4\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_4\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_7\", 0, 0, {}]]]}, {\"name\": \"conv2d_8\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_8\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_4\", 0, 0, {}]]]}, {\"name\": \"add_3\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_3\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_8\", 0, 0, {}], [\"add_2\", 0, 0, {}]]]}, {\"name\": \"conv2d_9\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_9\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", 
\"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_3\", 0, 0, {}]]]}, {\"name\": \"activation_5\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_5\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_9\", 0, 0, {}]]]}, {\"name\": \"conv2d_10\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_10\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_5\", 0, 0, {}]]]}, {\"name\": \"activation_6\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_6\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_10\", 0, 0, {}]]]}, {\"name\": \"conv2d_11\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_11\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, 
\"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_6\", 0, 0, {}]]]}, {\"name\": \"add_4\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_4\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_11\", 0, 0, {}], [\"activation_5\", 0, 0, {}]]]}, {\"name\": \"conv2d_12\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_12\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_4\", 0, 0, {}]]]}, {\"name\": \"activation_7\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_7\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_12\", 0, 0, {}]]]}, {\"name\": \"conv2d_13\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_13\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": 
[[[\"activation_7\", 0, 0, {}]]]}, {\"name\": \"add_5\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_5\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_13\", 0, 0, {}], [\"add_4\", 0, 0, {}]]]}, {\"name\": \"conv2d_14\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_14\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_5\", 0, 0, {}]]]}, {\"name\": \"activation_8\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_8\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_14\", 0, 0, {}]]]}, {\"name\": \"conv2d_15\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_15\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_8\", 0, 0, {}]]]}, {\"name\": \"add_6\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_6\", \"trainable\": true}, 
\"inbound_nodes\": [[[\"conv2d_15\", 0, 0, {}], [\"add_5\", 0, 0, {}]]]}, {\"name\": \"conv2d_16\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_16\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_6\", 0, 0, {}]]]}, {\"name\": \"activation_9\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_9\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_16\", 0, 0, {}]]]}, {\"name\": \"conv2d_17\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_17\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_9\", 0, 0, {}]]]}, {\"name\": \"activation_10\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_10\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_17\", 0, 0, {}]]]}, {\"name\": \"conv2d_18\", \"class_name\": 
\"Conv2D\", \"config\": {\"name\": \"conv2d_18\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_10\", 0, 0, {}]]]}, {\"name\": \"add_7\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_7\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_18\", 0, 0, {}], [\"activation_9\", 0, 0, {}]]]}, {\"name\": \"conv2d_19\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_19\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_7\", 0, 0, {}]]]}, {\"name\": \"activation_11\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_11\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_19\", 0, 0, {}]]]}, {\"name\": \"conv2d_20\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_20\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], 
\"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_11\", 0, 0, {}]]]}, {\"name\": \"add_8\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_8\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_20\", 0, 0, {}], [\"add_7\", 0, 0, {}]]]}, {\"name\": \"conv2d_21\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_21\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_8\", 0, 0, {}]]]}, {\"name\": \"activation_12\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_12\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_21\", 0, 0, {}]]]}, {\"name\": \"conv2d_22\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_22\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, 
\"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_12\", 0, 0, {}]]]}, {\"name\": \"add_9\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_9\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_22\", 0, 0, {}], [\"add_8\", 0, 0, {}]]]}, {\"name\": \"conv2d_23\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_23\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_9\", 0, 0, {}]]]}, {\"name\": \"activation_13\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_13\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_23\", 0, 0, {}]]]}, {\"name\": \"conv2d_24\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_24\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": 
\"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_13\", 0, 0, {}]]]}, {\"name\": \"add_10\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_10\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_24\", 0, 0, {}], [\"add_9\", 0, 0, {}]]]}, {\"name\": \"conv2d_25\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_25\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_10\", 0, 0, {}]]]}, {\"name\": \"activation_14\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_14\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_25\", 0, 0, {}]]]}, {\"name\": \"conv2d_26\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_26\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, 
\"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_14\", 0, 0, {}]]]}, {\"name\": \"add_11\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_11\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_26\", 0, 0, {}], [\"add_10\", 0, 0, {}]]]}, {\"name\": \"conv2d_27\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_27\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_11\", 0, 0, {}]]]}, {\"name\": \"activation_15\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_15\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_27\", 0, 0, {}]]]}, {\"name\": \"conv2d_28\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_28\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": 
[[[\"activation_15\", 0, 0, {}]]]}, {\"name\": \"add_12\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_12\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_28\", 0, 0, {}], [\"add_11\", 0, 0, {}]]]}, {\"name\": \"conv2d_transpose_1\", \"class_name\": \"Conv2DTranspose\", \"config\": {\"name\": \"conv2d_transpose_1\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null, \"output_padding\": null}, \"inbound_nodes\": [[[\"add_12\", 0, 0, {}]]]}, {\"name\": \"activation_16\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_16\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_transpose_1\", 0, 0, {}]]]}, {\"name\": \"add_13\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_13\", \"trainable\": true}, \"inbound_nodes\": [[[\"activation_16\", 0, 0, {}], [\"add_6\", 0, 0, {}]]]}, {\"name\": \"conv2d_29\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_29\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": 
null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_13\", 0, 0, {}]]]}, {\"name\": \"activation_17\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_17\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_29\", 0, 0, {}]]]}, {\"name\": \"conv2d_30\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_30\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_17\", 0, 0, {}]]]}, {\"name\": \"add_14\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_14\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_30\", 0, 0, {}], [\"add_13\", 0, 0, {}]]]}, {\"name\": \"conv2d_31\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_31\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_14\", 0, 0, {}]]]}, {\"name\": 
\"activation_18\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_18\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_31\", 0, 0, {}]]]}, {\"name\": \"conv2d_32\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_32\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_18\", 0, 0, {}]]]}, {\"name\": \"add_15\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_15\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_32\", 0, 0, {}], [\"add_14\", 0, 0, {}]]]}, {\"name\": \"conv2d_33\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_33\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_15\", 0, 0, {}]]]}, {\"name\": \"activation_19\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_19\", \"trainable\": true, \"activation\": 
\"relu\"}, \"inbound_nodes\": [[[\"conv2d_33\", 0, 0, {}]]]}, {\"name\": \"conv2d_34\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_34\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_19\", 0, 0, {}]]]}, {\"name\": \"add_16\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_16\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_34\", 0, 0, {}], [\"add_15\", 0, 0, {}]]]}, {\"name\": \"conv2d_transpose_2\", \"class_name\": \"Conv2DTranspose\", \"config\": {\"name\": \"conv2d_transpose_2\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null, \"output_padding\": null}, \"inbound_nodes\": [[[\"add_16\", 0, 0, {}]]]}, {\"name\": \"activation_20\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_20\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_transpose_2\", 0, 0, {}]]]}, {\"name\": \"add_17\", \"class_name\": 
\"Add\", \"config\": {\"name\": \"add_17\", \"trainable\": true}, \"inbound_nodes\": [[[\"activation_20\", 0, 0, {}], [\"add_3\", 0, 0, {}]]]}, {\"name\": \"conv2d_35\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_35\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_17\", 0, 0, {}]]]}, {\"name\": \"activation_21\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_21\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_35\", 0, 0, {}]]]}, {\"name\": \"concatenate_1\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_1\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"add_17\", 0, 0, {}], [\"activation_21\", 0, 0, {}]]]}, {\"name\": \"conv2d_36\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_36\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, 
\"inbound_nodes\": [[[\"concatenate_1\", 0, 0, {}]]]}, {\"name\": \"activation_22\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_22\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_36\", 0, 0, {}]]]}, {\"name\": \"concatenate_2\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_2\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_1\", 0, 0, {}], [\"activation_22\", 0, 0, {}]]]}, {\"name\": \"conv2d_37\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_37\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_2\", 0, 0, {}]]]}, {\"name\": \"activation_23\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_23\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_37\", 0, 0, {}]]]}, {\"name\": \"concatenate_3\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_3\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_2\", 0, 0, {}], [\"activation_23\", 0, 0, {}]]]}, {\"name\": \"conv2d_38\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_38\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": 
{\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_3\", 0, 0, {}]]]}, {\"name\": \"activation_24\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_24\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_38\", 0, 0, {}]]]}, {\"name\": \"concatenate_4\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_4\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_3\", 0, 0, {}], [\"activation_24\", 0, 0, {}]]]}, {\"name\": \"conv2d_39\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_39\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_4\", 0, 0, {}]]]}, {\"name\": \"activation_25\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_25\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_39\", 0, 0, {}]]]}, {\"name\": \"concatenate_5\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_5\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_4\", 0, 0, {}], [\"activation_25\", 0, 
0, {}]]]}, {\"name\": \"conv2d_40\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_40\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_5\", 0, 0, {}]]]}, {\"name\": \"activation_26\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_26\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_40\", 0, 0, {}]]]}, {\"name\": \"concatenate_6\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_6\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_5\", 0, 0, {}], [\"activation_26\", 0, 0, {}]]]}, {\"name\": \"conv2d_41\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_41\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [1, 1], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_6\", 0, 0, {}]]]}, {\"name\": \"add_18\", \"class_name\": \"Add\", \"config\": {\"name\": 
\"add_18\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_41\", 0, 0, {}], [\"add_17\", 0, 0, {}]]]}, {\"name\": \"conv2d_42\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_42\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_18\", 0, 0, {}]]]}, {\"name\": \"activation_27\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_27\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_42\", 0, 0, {}]]]}, {\"name\": \"concatenate_7\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_7\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"add_18\", 0, 0, {}], [\"activation_27\", 0, 0, {}]]]}, {\"name\": \"conv2d_43\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_43\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_7\", 0, 
0, {}]]]}, {\"name\": \"activation_28\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_28\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_43\", 0, 0, {}]]]}, {\"name\": \"concatenate_8\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_8\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_7\", 0, 0, {}], [\"activation_28\", 0, 0, {}]]]}, {\"name\": \"conv2d_44\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_44\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_8\", 0, 0, {}]]]}, {\"name\": \"activation_29\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_29\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_44\", 0, 0, {}]]]}, {\"name\": \"concatenate_9\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_9\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_8\", 0, 0, {}], [\"activation_29\", 0, 0, {}]]]}, {\"name\": \"conv2d_45\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_45\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": 
{\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_9\", 0, 0, {}]]]}, {\"name\": \"activation_30\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_30\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_45\", 0, 0, {}]]]}, {\"name\": \"concatenate_10\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_10\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_9\", 0, 0, {}], [\"activation_30\", 0, 0, {}]]]}, {\"name\": \"conv2d_46\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_46\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_10\", 0, 0, {}]]]}, {\"name\": \"activation_31\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_31\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_46\", 0, 0, {}]]]}, {\"name\": \"concatenate_11\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_11\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_10\", 0, 0, {}], [\"activation_31\", 0, 0, {}]]]}, {\"name\": \"conv2d_47\", 
\"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_47\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_11\", 0, 0, {}]]]}, {\"name\": \"activation_32\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_32\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_47\", 0, 0, {}]]]}, {\"name\": \"concatenate_12\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_12\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_11\", 0, 0, {}], [\"activation_32\", 0, 0, {}]]]}, {\"name\": \"conv2d_48\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_48\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [1, 1], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_12\", 0, 0, {}]]]}, {\"name\": \"add_19\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_19\", \"trainable\": true}, 
\"inbound_nodes\": [[[\"conv2d_48\", 0, 0, {}], [\"add_18\", 0, 0, {}]]]}, {\"name\": \"conv2d_49\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_49\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_19\", 0, 0, {}]]]}, {\"name\": \"activation_33\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_33\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_49\", 0, 0, {}]]]}, {\"name\": \"concatenate_13\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_13\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"add_19\", 0, 0, {}], [\"activation_33\", 0, 0, {}]]]}, {\"name\": \"conv2d_50\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_50\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_13\", 0, 0, {}]]]}, {\"name\": 
\"activation_34\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_34\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_50\", 0, 0, {}]]]}, {\"name\": \"concatenate_14\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_14\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_13\", 0, 0, {}], [\"activation_34\", 0, 0, {}]]]}, {\"name\": \"conv2d_51\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_51\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_14\", 0, 0, {}]]]}, {\"name\": \"activation_35\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_35\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_51\", 0, 0, {}]]]}, {\"name\": \"concatenate_15\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_15\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_14\", 0, 0, {}], [\"activation_35\", 0, 0, {}]]]}, {\"name\": \"conv2d_52\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_52\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, 
\"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_15\", 0, 0, {}]]]}, {\"name\": \"activation_36\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_36\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_52\", 0, 0, {}]]]}, {\"name\": \"concatenate_16\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_16\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_15\", 0, 0, {}], [\"activation_36\", 0, 0, {}]]]}, {\"name\": \"conv2d_53\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_53\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_16\", 0, 0, {}]]]}, {\"name\": \"activation_37\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_37\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_53\", 0, 0, {}]]]}, {\"name\": \"concatenate_17\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_17\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_16\", 0, 0, {}], [\"activation_37\", 0, 0, {}]]]}, {\"name\": \"conv2d_54\", \"class_name\": 
\"Conv2D\", \"config\": {\"name\": \"conv2d_54\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_17\", 0, 0, {}]]]}, {\"name\": \"activation_38\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_38\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_54\", 0, 0, {}]]]}, {\"name\": \"concatenate_18\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_18\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"concatenate_17\", 0, 0, {}], [\"activation_38\", 0, 0, {}]]]}, {\"name\": \"conv2d_55\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_55\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [1, 1], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": false, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_18\", 0, 0, {}]]]}, {\"name\": \"add_20\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_20\", \"trainable\": true}, 
\"inbound_nodes\": [[[\"conv2d_55\", 0, 0, {}], [\"add_19\", 0, 0, {}]]]}, {\"name\": \"concatenate_19\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_19\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"add_18\", 0, 0, {}], [\"add_19\", 0, 0, {}], [\"add_20\", 0, 0, {}]]]}, {\"name\": \"conv2d_56\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_56\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [1, 1], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_19\", 0, 0, {}]]]}, {\"name\": \"conv2d_57\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_57\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"conv2d_56\", 0, 0, {}]]]}, {\"name\": \"add_21\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_21\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_57\", 0, 0, {}], [\"conv2d_1\", 0, 0, {}]]]}, {\"name\": 
\"conv2d_58\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_58\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_21\", 0, 0, {}]]]}, {\"name\": \"lambda_1\", \"class_name\": \"Lambda\", \"config\": {\"name\": \"lambda_1\", \"trainable\": true, \"function\": [\"4wEAAAAAAAAAAQAAAAMAAABTAAAAcwwAAAB0AGoBfABkAYMCUwApAk7pAgAAACkC2gJ0ZtoOZGVw\\ndGhfdG9fc3BhY2UpAdoBeKkAcgUAAAD6MC9ob21lL215eWUvRGVlcExlYXJuaW5nRGVibHVyL3Ny\\nYy9tb2RlbC9tb2RlbC5wedoIPGxhbWJkYT5cAAAA8wAAAAA=\\n\", null, null], \"function_type\": \"lambda\", \"output_shape\": null, \"output_shape_type\": \"raw\", \"arguments\": {}}, \"inbound_nodes\": [[[\"conv2d_58\", 0, 0, {}]]]}, {\"name\": \"conv2d_59\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_59\", \"trainable\": true, \"filters\": 3, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"lambda_1\", 0, 0, {}]]]}, {\"name\": 
\"activation_39\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_39\", \"trainable\": true, \"activation\": \"tanh\"}, \"inbound_nodes\": [[[\"conv2d_59\", 0, 0, {}]]]}, {\"name\": \"imageUp\", \"class_name\": \"InputLayer\", \"config\": {\"batch_input_shape\": [null, null, null, 3], \"dtype\": \"float32\", \"sparse\": false, \"name\": \"imageUp\"}, \"inbound_nodes\": []}, {\"name\": \"lambda_2\", \"class_name\": \"Lambda\", \"config\": {\"name\": \"lambda_2\", \"trainable\": true, \"function\": [\"4wEAAAAAAAAAAQAAAAIAAABTAAAAcwwAAAB8AGQBGwBkAhcAUwApA07pAgAAAGcAAAAAAADgP6kA\\nKQHaAXhyAgAAAHICAAAA+jAvaG9tZS9teXllL0RlZXBMZWFybmluZ0RlYmx1ci9zcmMvbW9kZWwv\\nbW9kZWwucHnaCDxsYW1iZGE+XwAAAPMAAAAA\\n\", null, null], \"function_type\": \"lambda\", \"output_shape\": null, \"output_shape_type\": \"raw\", \"arguments\": {}}, \"inbound_nodes\": [[[\"activation_39\", 0, 0, {}]]]}, {\"name\": \"concatenate_20\", \"class_name\": \"Concatenate\", \"config\": {\"name\": \"concatenate_20\", \"trainable\": true, \"axis\": 3}, \"inbound_nodes\": [[[\"imageUp\", 0, 0, {}], [\"lambda_2\", 0, 0, {}]]]}, {\"name\": \"conv2d_60\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_60\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [5, 5], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"concatenate_20\", 0, 0, {}]]]}, {\"name\": \"activation_40\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_40\", \"trainable\": true, 
\"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_60\", 0, 0, {}]]]}, {\"name\": \"conv2d_61\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_61\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_40\", 0, 0, {}]]]}, {\"name\": \"activation_41\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_41\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_61\", 0, 0, {}]]]}, {\"name\": \"conv2d_62\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_62\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_41\", 0, 0, {}]]]}, {\"name\": \"add_22\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_22\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_62\", 0, 0, {}], [\"activation_40\", 0, 0, {}]]]}, {\"name\": \"conv2d_63\", \"class_name\": 
\"Conv2D\", \"config\": {\"name\": \"conv2d_63\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_22\", 0, 0, {}]]]}, {\"name\": \"activation_42\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_42\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_63\", 0, 0, {}]]]}, {\"name\": \"conv2d_64\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_64\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_42\", 0, 0, {}]]]}, {\"name\": \"add_23\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_23\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_64\", 0, 0, {}], [\"add_22\", 0, 0, {}]]]}, {\"name\": \"conv2d_65\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_65\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], 
\"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_23\", 0, 0, {}]]]}, {\"name\": \"activation_43\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_43\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_65\", 0, 0, {}]]]}, {\"name\": \"conv2d_66\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_66\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_43\", 0, 0, {}]]]}, {\"name\": \"add_24\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_24\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_66\", 0, 0, {}], [\"add_23\", 0, 0, {}]]]}, {\"name\": \"conv2d_67\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_67\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, 
\"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_24\", 0, 0, {}]]]}, {\"name\": \"activation_44\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_44\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_67\", 0, 0, {}]]]}, {\"name\": \"conv2d_68\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_68\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_44\", 0, 0, {}]]]}, {\"name\": \"activation_45\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_45\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_68\", 0, 0, {}]]]}, {\"name\": \"conv2d_69\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_69\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", 
\"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_45\", 0, 0, {}]]]}, {\"name\": \"add_25\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_25\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_69\", 0, 0, {}], [\"activation_44\", 0, 0, {}]]]}, {\"name\": \"conv2d_70\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_70\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_25\", 0, 0, {}]]]}, {\"name\": \"activation_46\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_46\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_70\", 0, 0, {}]]]}, {\"name\": \"conv2d_71\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_71\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, 
\"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_46\", 0, 0, {}]]]}, {\"name\": \"add_26\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_26\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_71\", 0, 0, {}], [\"add_25\", 0, 0, {}]]]}, {\"name\": \"conv2d_72\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_72\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_26\", 0, 0, {}]]]}, {\"name\": \"activation_47\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_47\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_72\", 0, 0, {}]]]}, {\"name\": \"conv2d_73\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_73\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": 
null}, \"inbound_nodes\": [[[\"activation_47\", 0, 0, {}]]]}, {\"name\": \"add_27\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_27\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_73\", 0, 0, {}], [\"add_26\", 0, 0, {}]]]}, {\"name\": \"conv2d_74\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_74\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_27\", 0, 0, {}]]]}, {\"name\": \"activation_48\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_48\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_74\", 0, 0, {}]]]}, {\"name\": \"conv2d_75\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_75\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_48\", 0, 0, {}]]]}, {\"name\": \"activation_49\", \"class_name\": \"Activation\", \"config\": 
{\"name\": \"activation_49\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_75\", 0, 0, {}]]]}, {\"name\": \"conv2d_76\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_76\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_49\", 0, 0, {}]]]}, {\"name\": \"add_28\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_28\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_76\", 0, 0, {}], [\"activation_48\", 0, 0, {}]]]}, {\"name\": \"conv2d_77\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_77\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_28\", 0, 0, {}]]]}, {\"name\": \"activation_50\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_50\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_77\", 0, 0, {}]]]}, 
{\"name\": \"conv2d_78\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_78\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_50\", 0, 0, {}]]]}, {\"name\": \"add_29\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_29\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_78\", 0, 0, {}], [\"add_28\", 0, 0, {}]]]}, {\"name\": \"conv2d_79\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_79\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_29\", 0, 0, {}]]]}, {\"name\": \"activation_51\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_51\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_79\", 0, 0, {}]]]}, {\"name\": \"conv2d_80\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_80\", \"trainable\": true, \"filters\": 128, 
\"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_51\", 0, 0, {}]]]}, {\"name\": \"add_30\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_30\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_80\", 0, 0, {}], [\"add_29\", 0, 0, {}]]]}, {\"name\": \"conv2d_81\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_81\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_30\", 0, 0, {}]]]}, {\"name\": \"activation_52\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_52\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_81\", 0, 0, {}]]]}, {\"name\": \"conv2d_82\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_82\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], 
\"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_52\", 0, 0, {}]]]}, {\"name\": \"add_31\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_31\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_82\", 0, 0, {}], [\"add_30\", 0, 0, {}]]]}, {\"name\": \"conv2d_83\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_83\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_31\", 0, 0, {}]]]}, {\"name\": \"activation_53\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_53\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_83\", 0, 0, {}]]]}, {\"name\": \"conv2d_84\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_84\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 
1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_53\", 0, 0, {}]]]}, {\"name\": \"add_32\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_32\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_84\", 0, 0, {}], [\"add_31\", 0, 0, {}]]]}, {\"name\": \"conv2d_85\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_85\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_32\", 0, 0, {}]]]}, {\"name\": \"activation_54\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_54\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_85\", 0, 0, {}]]]}, {\"name\": \"conv2d_86\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_86\", \"trainable\": true, \"filters\": 128, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", 
\"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_54\", 0, 0, {}]]]}, {\"name\": \"add_33\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_33\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_86\", 0, 0, {}], [\"add_32\", 0, 0, {}]]]}, {\"name\": \"conv2d_transpose_3\", \"class_name\": \"Conv2DTranspose\", \"config\": {\"name\": \"conv2d_transpose_3\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null, \"output_padding\": null}, \"inbound_nodes\": [[[\"add_33\", 0, 0, {}]]]}, {\"name\": \"activation_55\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_55\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_transpose_3\", 0, 0, {}]]]}, {\"name\": \"add_34\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_34\", \"trainable\": true}, \"inbound_nodes\": [[[\"activation_55\", 0, 0, {}], [\"add_27\", 0, 0, {}]]]}, {\"name\": \"conv2d_87\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_87\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": 
\"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_34\", 0, 0, {}]]]}, {\"name\": \"activation_56\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_56\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_87\", 0, 0, {}]]]}, {\"name\": \"conv2d_88\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_88\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_56\", 0, 0, {}]]]}, {\"name\": \"add_35\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_35\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_88\", 0, 0, {}], [\"add_34\", 0, 0, {}]]]}, {\"name\": \"conv2d_89\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_89\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, 
\"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_35\", 0, 0, {}]]]}, {\"name\": \"activation_57\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_57\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_89\", 0, 0, {}]]]}, {\"name\": \"conv2d_90\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_90\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_57\", 0, 0, {}]]]}, {\"name\": \"add_36\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_36\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_90\", 0, 0, {}], [\"add_35\", 0, 0, {}]]]}, {\"name\": \"conv2d_91\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_91\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": 
[[[\"add_36\", 0, 0, {}]]]}, {\"name\": \"activation_58\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_58\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_91\", 0, 0, {}]]]}, {\"name\": \"conv2d_92\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_92\", \"trainable\": true, \"filters\": 64, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_58\", 0, 0, {}]]]}, {\"name\": \"add_37\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_37\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_92\", 0, 0, {}], [\"add_36\", 0, 0, {}]]]}, {\"name\": \"conv2d_transpose_4\", \"class_name\": \"Conv2DTranspose\", \"config\": {\"name\": \"conv2d_transpose_4\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [5, 5], \"strides\": [2, 2], \"padding\": \"same\", \"data_format\": \"channels_last\", \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null, \"output_padding\": null}, \"inbound_nodes\": [[[\"add_37\", 0, 0, {}]]]}, {\"name\": \"activation_59\", \"class_name\": \"Activation\", \"config\": {\"name\": 
\"activation_59\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_transpose_4\", 0, 0, {}]]]}, {\"name\": \"add_38\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_38\", \"trainable\": true}, \"inbound_nodes\": [[[\"activation_59\", 0, 0, {}], [\"add_24\", 0, 0, {}]]]}, {\"name\": \"conv2d_93\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_93\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_38\", 0, 0, {}]]]}, {\"name\": \"activation_60\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_60\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_93\", 0, 0, {}]]]}, {\"name\": \"conv2d_94\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_94\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_60\", 0, 0, {}]]]}, 
{\"name\": \"add_39\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_39\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_94\", 0, 0, {}], [\"add_38\", 0, 0, {}]]]}, {\"name\": \"conv2d_95\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_95\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_39\", 0, 0, {}]]]}, {\"name\": \"activation_61\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_61\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_95\", 0, 0, {}]]]}, {\"name\": \"conv2d_96\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_96\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_61\", 0, 0, {}]]]}, {\"name\": \"add_40\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_40\", \"trainable\": true}, \"inbound_nodes\": 
[[[\"conv2d_96\", 0, 0, {}], [\"add_39\", 0, 0, {}]]]}, {\"name\": \"conv2d_97\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_97\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_40\", 0, 0, {}]]]}, {\"name\": \"activation_62\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_62\", \"trainable\": true, \"activation\": \"relu\"}, \"inbound_nodes\": [[[\"conv2d_97\", 0, 0, {}]]]}, {\"name\": \"conv2d_98\", \"class_name\": \"Conv2D\", \"config\": {\"name\": \"conv2d_98\", \"trainable\": true, \"filters\": 32, \"kernel_size\": [3, 3], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"activation_62\", 0, 0, {}]]]}, {\"name\": \"add_41\", \"class_name\": \"Add\", \"config\": {\"name\": \"add_41\", \"trainable\": true}, \"inbound_nodes\": [[[\"conv2d_98\", 0, 0, {}], [\"add_40\", 0, 0, {}]]]}, {\"name\": \"conv2d_99\", \"class_name\": \"Conv2D\", \"config\": {\"name\": 
\"conv2d_99\", \"trainable\": true, \"filters\": 3, \"kernel_size\": [5, 5], \"strides\": [1, 1], \"padding\": \"same\", \"data_format\": \"channels_last\", \"dilation_rate\": [1, 1], \"activation\": \"linear\", \"use_bias\": true, \"kernel_initializer\": {\"class_name\": \"VarianceScaling\", \"config\": {\"scale\": 1.0, \"mode\": \"fan_avg\", \"distribution\": \"uniform\", \"seed\": null}}, \"bias_initializer\": {\"class_name\": \"Zeros\", \"config\": {}}, \"kernel_regularizer\": null, \"bias_regularizer\": null, \"activity_regularizer\": null, \"kernel_constraint\": null, \"bias_constraint\": null}, \"inbound_nodes\": [[[\"add_41\", 0, 0, {}]]]}, {\"name\": \"activation_63\", \"class_name\": \"Activation\", \"config\": {\"name\": \"activation_63\", \"trainable\": true, \"activation\": \"tanh\"}, \"inbound_nodes\": [[[\"conv2d_99\", 0, 0, {}]]]}, {\"name\": \"lambda_3\", \"class_name\": \"Lambda\", \"config\": {\"name\": \"lambda_3\", \"trainable\": true, \"function\": [\"4wEAAAAAAAAAAQAAAAIAAABTAAAAcwwAAAB8AGQBGwBkAhcAUwApA07pAgAAAGcAAAAAAADgP6kA\\nKQHaAXhyAgAAAHICAAAA+jAvaG9tZS9teXllL0RlZXBMZWFybmluZ0RlYmx1ci9zcmMvbW9kZWwv\\nbW9kZWwucHnaCDxsYW1iZGE+LwAAAPMAAAAA\\n\", null, null], \"function_type\": \"lambda\", \"output_shape\": null, \"output_shape_type\": \"raw\", \"arguments\": {}}, \"inbound_nodes\": [[[\"activation_63\", 0, 0, {}]]]}], \"input_layers\": [[\"imageSmall\", 0, 0], [\"imageUp\", 0, 0]], \"output_layers\": [[\"lambda_3\", 0, 0]]}, \"keras_version\": \"2.2.2\", \"backend\": \"tensorflow\"}"
  },
  {
    "path": "code/requirements.txt",
    "content": "h5py==2.7.1\ntensorflow-gpu==1.4.0\nKeras==2.2.2\nscikit-image==0.14.3"
  },
  {
    "path": "code/src/__init__.py",
    "content": ""
  },
  {
    "path": "code/src/application.py",
    "content": "import os\nfrom src.model.model import DDModel\nfrom src.lib.data_helper import DataHelper\nfrom skimage import io,transform,feature,color,img_as_float\nimport numpy as np\nimport math\nimport time\n\nclass Application():\n#note that input image must be color, gray image should be expand to 3 channels\n#and image size must be even\n\n    def __init__(self,config):\n        self.config = config\n        self.model = DDModel(config)\n        if(config.application.deblurring_result_dir is None):\n            config.application.deblurring_result_dir = config.resource.output_dir\n        if not os.path.exists(config.application.deblurring_result_dir):\n            os.makedirs(config.application.deblurring_result_dir)\n        self.__fileBlurList=[]\n\n    def start(self):\n        self.application()\n\n    def __tuneSize(self,shape):\n        pad = []\n        for i in range(2):\n            size = shape[i]\n            if(size % 256 == 0):\n                pad.append(0)\n            else:\n                n = size // 256 + 1\n                pad.append((n*256 - size) // 2)\n        return pad\n\n    def __getImage(self,fileFullPath):#self.config.application.deblurring_file_path\n        imageBlur = img_as_float(io.imread(fileFullPath))\n        #make sure row&col are even\n        row = imageBlur.shape[0]\n        col = imageBlur.shape[1]\n        row = row-1 if row%2==1 else row\n        col = col-1 if col%2==1 else col\n        imageBlur = imageBlur[0:row,0:col]\n        imageOrigin = imageBlur\n        pad = self.__tuneSize(imageBlur.shape)\n        imageBlur = np.pad(imageBlur,((pad[0],pad[0]),(pad[1],pad[1]),(0,0)),'reflect')\n        return imageBlur,imageOrigin\n\n    def __getData(self,root):\n        for parent,dirnames,filenames in os.walk(root):\n            for filename in filenames:\n                self.__fileBlurList.append(os.path.join(parent,filename))\n        self.data_length = len(self.__fileBlurList)\n        print(f'total 
data:{self.data_length}!')\n\n    def __deblur(self,imageBlur,imageOrigin):\n        pyramid = tuple(transform.pyramid_gaussian(imageBlur, downscale=2, max_layer=self.max_iter, multichannel=True))\n        deblurs = []\n        for iter in self.iters:\n            batch_blur2x = []\n            batch_blur1x = []\n            runtime = 0;\n            for i in range(iter,0,-1):\n                if(i == iter):#first iter\n                    imageBlur2x = pyramid[i]\n                    batch_blur2x.append(imageBlur2x)\n                    batch_gen = batch_blur2x\n                else:\n                    batch_blur2x = batch_blur1x\n                    batch_blur1x = []\n                imageBlur1x = pyramid[i-1]\n                batch_blur1x.append(imageBlur1x)\n                data_X1 = np.concatenate((batch_blur2x,batch_gen), axis=3)#6channels\n                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blur1x)}\n                start = time.time()\n                batch_gen = self.model.generator.predict(data_X)\n                print(f'Runtime @scale {i}:{time.time()-start:4.3f}')\n                runtime += time.time()-start;\n            print(f'Runtime total @iter {iter}:{runtime:4.3f}')\n            deblur = self.__clipOutput(batch_gen[0],imageOrigin.shape)\n            deblurs.append(deblur)\n        return deblurs\n\n    def application(self):\n        if(self.config.application.iter == 0):\n            self.iters = [1,2,3,4]\n        else:\n            self.iters = [self.config.application.iter]\n        self.max_iter = max(self.iters)\n        deblurring_file_path = self.config.application.deblurring_file_path\n        deblurring_dir_path = self.config.application.deblurring_dir_path\n        if(deblurring_file_path and os.path.exists(deblurring_file_path)):\n            imageBlur,imageOrigin = self.__getImage(deblurring_file_path)\n            deblurs = self.__deblur(imageBlur,imageOrigin)\n            infos = 
deblurring_file_path.rsplit('/', 1)\n            iter_times = len(deblurs)\n            for i in range(iter_times):\n                deblur = deblurs[i]\n                deblur = (deblur * 255).astype('uint8')\n                iter = self.iters[i]\n                io.imsave(os.path.join(self.config.application.deblurring_result_dir, 'deblur'+str(iter)+'_'+infos[1]),deblur)\n            print(f'file saved')\n        elif(deblurring_dir_path and os.path.exists(deblurring_dir_path)):\n            self.__getData(deblurring_dir_path)\n            index = 0\n            for fileFullPath in self.__fileBlurList:\n                imageBlur,imageOrigin = self.__getImage(fileFullPath)\n                deblurs = self.__deblur(imageBlur,imageOrigin)\n                infos = os.path.basename(fileFullPath)\n                iter_times = len(deblurs)\n                for j in range(iter_times):\n                    deblur = deblurs[j]\n                    deblur = (deblur * 255).astype('uint8')\n                    iter = self.iters[j]\n                    io.imsave(os.path.join(self.config.application.deblurring_result_dir, 'deblur'+str(iter)+'_'+infos),deblur)\n                index += 1\n                print(f'{index}/{self.data_length} done!')\n            print(f'all saved')\n        else:\n            print(f\"no deblur file(s)\")\n\n    def __clipOutput(self,image,outSize):\n        inSize = image.shape\n        start = []\n        for i in range(2):\n            start.append((inSize[i] - outSize[i]) // 2)\n        return image[start[0]:start[0]+outSize[0],start[1]:start[1]+outSize[1]]"
  },
  {
    "path": "code/src/config.py",
    "content": "import os\nimport getpass\n\ndef _project_dir():\n    d = os.path.dirname\n    return d(d(os.path.abspath(__file__)))\n\nclass Config:\n    def __init__(self):\n        self.resource = ResourceConfig()\n        self.trainer = TrainConfig()\n        self.tester = TestConfig()\n        self.application = Application()\n\nclass ResourceConfig:\n    def __init__(self):\n        self.project_dir = os.environ.get(\"PROJECT_DIR\", _project_dir())\n        self.data_dir = os.environ.get(\"DATA_DIR\", os.path.join(_project_dir(), \"data\"))\n        self.model_dir = os.environ.get(\"MODEL_DIR\", os.path.join(self.project_dir, \"model\"))\n        self.debug_dir = os.environ.get(\"DEBUG_DIR\", os.path.join(self.project_dir, \"debug\"))\n        self.output_dir = os.environ.get(\"OUTPUT_DIR\", os.path.join(self.project_dir, \"output\"))\n        \n        self.generator_json_path = os.path.join(self.model_dir, \"generator.json\")\n        self.generator_weights_path = os.path.join(self.model_dir, \"generator.h5\")\n        self.train_directory_path = \"/mnt/SD_1/myye/Deblur/GoPro/train\"\n        self.test_directory_path = \"/mnt/SD_1/myye/Deblur/GoPro/test\"\n\n    def create_directories(self):\n        dirs = [self.project_dir, self.data_dir, self.model_dir, self.debug_dir, self.output_dir]\n        for d in dirs:\n            if not os.path.exists(d):\n                os.makedirs(d)\n\nclass TrainConfig:\n    def __init__(self):\n        self.generatorImageSize = 256\n        self.generatorImageChannels = 3\n        self.batch_size = 8\n        self.maxEpoch = 2000\n        self.gpu_num = 1\n\nclass TestConfig:\n    def __init__(self):\n        self.iter = 0\n\nclass Application:\n    def __init__(self):\n        self.iter = 4#try all iter(1,2,3,4) if set 0\n        self.deblurring_file_path = None\n        self.deblurring_dir_path = None\n        self.deblurring_result_dir = None"
  },
  {
    "path": "code/src/lib/MLVSharpnessMeasure.py",
    "content": "import numpy as np\nfrom scipy.special import gamma\nfrom skimage import color\n\nclass MLVMeasurement():\n    def __init__(self):\n        self.gam = np.linspace(0.2,10,9801)\n\n    def __estimateggdparam(self,vec):\n        gam = self.gam\n        r_gam = (gamma(1/gam)*gamma(3/gam))/((gamma(2/gam)) ** 2)\n        sigma_sq = np.mean(vec ** 2)\n        sigma = np.sqrt(sigma_sq)\n        return sigma\n\n    def __MLVMap(self,img):\n        xs, ys = img.shape\n        x=img\n        x1=np.zeros((xs,ys))\n        x2=np.zeros((xs,ys))\n        x3=np.zeros((xs,ys))\n        x4=np.zeros((xs,ys))\n        x5=np.zeros((xs,ys))\n        x6=np.zeros((xs,ys))\n        x7=np.zeros((xs,ys))\n        x8=np.zeros((xs,ys))\n        x9=np.zeros((xs,ys))\n        x1[0:xs-2,0:ys-2] = x[1:xs-1,1:ys-1]\n        x2[0:xs-2,1:ys-1] = x[1:xs-1,1:ys-1]\n        x3[0:xs-2,2:ys]   = x[1:xs-1,1:ys-1]\n        x4[1:xs-1,0:ys-2] = x[1:xs-1,1:ys-1]\n        x5[1:xs-1,1:ys-1] = x[1:xs-1,1:ys-1]\n        x6[1:xs-1,2:ys]   = x[1:xs-1,1:ys-1]\n        x7[2:xs,0:ys-2]   = x[1:xs-1,1:ys-1]\n        x8[2:xs,1:ys-1]   = x[1:xs-1,1:ys-1]\n        x9[2:xs,2:ys]     = x[1:xs-1,1:ys-1]\n        x1=x1[1:xs-1,1:ys-1]\n        x2=x2[1:xs-1,1:ys-1]\n        x3=x3[1:xs-1,1:ys-1]\n        x4=x4[1:xs-1,1:ys-1]\n        x5=x5[1:xs-1,1:ys-1]\n        x6=x6[1:xs-1,1:ys-1]\n        x7=x7[1:xs-1,1:ys-1]\n        x8=x8[1:xs-1,1:ys-1]\n        x9=x9[1:xs-1,1:ys-1]\n        dd=[]\n        dd.append(x1-x5)\n        dd.append(x2-x5)\n        dd.append(x3-x5)\n        dd.append(x4-x5)\n        dd.append(x6-x5)\n        dd.append(x7-x5)\n        dd.append(x8-x5)\n        dd.append(x9-x5)\n        map = np.max(dd,axis=0)\n        return map\n\n    def getScore(self,x):#x should be double gray image\n        if(x.ndim == 3):#color\n            x = color.rgb2gray(x)\n        map = self.__MLVMap(x)\n        xs,ys = map.shape\n        xy_number=xs*ys\n        vec = map.reshape((xy_number,))\n        
vec[::-1].sort()#descend\n        svec=vec[0:xy_number]\n        a=np.arange(xy_number)\n        q=np.exp(-0.01*a)\n        svec=svec*q\n        svec=svec[0:1000]\n        return self.__estimateggdparam(svec)"
  },
  {
    "path": "code/src/lib/__init__.py",
    "content": ""
  },
  {
    "path": "code/src/lib/data_helper.py",
    "content": "import os\nfrom skimage import io,img_as_float # image process\nimport numpy as np\n\nclass DataHelper:\n    def __init__(self):\n        self.__fileBlurList=[]\n        self.__directoryList=[]\n        self.__blurSharpPairs=[]\n\n    def __traversalDir(self,root):\n        for name in os.listdir(root):\n          fullPath = os.path.join(root, name)\n          if os.path.isdir(fullPath):\n            self.__directoryList.append(fullPath)\n        for directory in self.__directoryList:\n          for parent,dirnames,filenames in os.walk(os.path.join(directory,'blur')):\n            for filename in filenames:\n              self.__fileBlurList.append(os.path.join(parent,filename))\n\n    def load_data(self, path, number):#shuffle\n        self.__traversalDir(path)\n        if(number>0):\n            np.random.shuffle(self.__fileBlurList)\n        totalLoaded = 0\n        print(f'start loading dataset...')\n        for fileFullPath in self.__fileBlurList:\n          #imageBlur = io.imread(fileFullPath,as_gray=True)\n          #imageSharp = io.imread(fileFullPath.replace('/blur','/sharp'),as_gray=True)\n          imageBlur = img_as_float(io.imread(fileFullPath))\n          imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))\n          self.__blurSharpPairs.append((imageBlur,imageSharp))\n          totalLoaded += 1\n          if(totalLoaded == number):#if number < 1, all datas loaded\n            break\n        print(f'dataset loaded:{totalLoaded}!')\n\n    def getRandomTrainDatas(self,config):\n        X_train=[]\n        Y_train=[]\n        patchW = patchH = config.trainer.generatorImageSize\n        for imageBlur,imageSharp in self.__blurSharpPairs:\n          trainImageH = imageBlur.shape[0]\n          trainImageW = imageBlur.shape[1]\n          rowStart = np.random.randint(0, trainImageH-patchH)\n          colStart = np.random.randint(0, trainImageW-patchW)\n          
X_train.append(imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW])\n          Y_train.append(imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW])\n        return X_train,Y_train#(row,col)\n\n    def getTestDatas(self):\n        #for imageBlur,imageSharp in self.__blurSharpPairs:\n        return self.__fileBlurList\n\n    def getLoadedPairs(self):\n        return self.__blurSharpPairs\n\n    def loadDataList(self, path):\n        self.__traversalDir(path)\n        data_length = len(self.__fileBlurList)\n        print(f'dataset got:{data_length}!')\n        return data_length\n\n    def getAPair(self,index,config):\n        fileFullPath = self.__fileBlurList[index]\n        imageBlur = img_as_float(io.imread(fileFullPath))\n        imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))\n        patchW = patchH = config.trainer.generatorImageSize\n        trainImageH = imageBlur.shape[0]\n        trainImageW = imageBlur.shape[1]\n        rowStart = np.random.randint(0, trainImageH-patchH)\n        colStart = np.random.randint(0, trainImageW-patchW)\n        return imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW],imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW]"
  },
  {
    "path": "code/src/lib/data_producer.py",
    "content": "import os\nfrom skimage import io,img_as_float # image process\nimport numpy as np\nimport threading\n\nclass DataProducer(threading.Thread):\n    def __init__(self, name,queue,config):\n        threading.Thread.__init__(self, name=name,daemon=True)\n        self.data=queue\n        self.__fileBlurList=[]\n        self.__directoryList=[]\n        self.__blurSharpParis=[]\n        self.config = config\n        self.running = True\n\n    def __traversalDir(self,root):\n        for name in os.listdir(root):\n          fullPath = os.path.join(root, name)\n          if os.path.isdir(fullPath):\n            self.__directoryList.append(fullPath)\n        for directory in self.__directoryList:\n          for parent,dirnames,filenames in os.walk(os.path.join(directory,'blur')):\n            for filename in filenames:\n              self.__fileBlurList.append(os.path.join(parent,filename))\n\n    def loadDataList(self, path):\n        self.__traversalDir(path)\n        self.data_length = len(self.__fileBlurList)\n        print(f'dataset got:{self.data_length}!')\n        return self.data_length\n\n    def __produceAPair(self,index):\n        fileFullPath = self.__fileBlurList[index]\n        imageBlur = img_as_float(io.imread(fileFullPath))\n        imageSharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))\n        patchW = patchH = self.config.trainer.generatorImageSize\n        trainImageH = imageBlur.shape[0]\n        trainImageW = imageBlur.shape[1]\n        rowStart = np.random.randint(0, trainImageH-patchH)\n        colStart = np.random.randint(0, trainImageW-patchW)\n        blur = imageBlur[rowStart:rowStart+patchH,colStart:colStart+patchW]\n        sharp = imageSharp[rowStart:rowStart+patchH,colStart:colStart+patchW]\n        self.data.put((blur,sharp),1)#block\n\n    def run(self):\n        arr = np.arange(self.data_length)\n        while(True):\n            #an epoch\n            np.random.shuffle(arr)\n            for i in 
range(self.data_length):\n                index = arr[i]\n                self.__produceAPair(index)"
  },
  {
    "path": "code/src/lib/tf_util.py",
    "content": "\ndef set_session_config(per_process_gpu_memory_fraction=None, allow_growth=None, device_list='0'):\n    \"\"\"\n\n    :param allow_growth: When necessary, reserve memory\n    :param float per_process_gpu_memory_fraction: specify GPU memory usage as 0 to 1\n\n    :return:\n    \"\"\"\n    import tensorflow as tf\n    import keras.backend as K\n\n    config = tf.ConfigProto(\n        gpu_options=tf.GPUOptions(\n            per_process_gpu_memory_fraction=per_process_gpu_memory_fraction,\n            allow_growth=allow_growth,\n            visible_device_list=device_list\n        )\n    )\n    sess = tf.Session(config=config)\n    K.set_session(sess)\n"
  },
  {
    "path": "code/src/model/__init__.py",
    "content": ""
  },
  {
    "path": "code/src/model/model.py",
    "content": "import os\nimport tensorflow as tf\nfrom keras.layers import *\nfrom keras.initializers import glorot_uniform\nfrom keras.models import Sequential,Model,load_model\nfrom keras.layers.advanced_activations import LeakyReLU\nimport keras.backend as K\n\nclass DDModel:#Details Deblurring Model\n\n    def __init__(self,config):\n        self.config = config\n        self.generator = self.build_generator((None,None,6),(None,None,3))\n\n    def __resblock(self,X,filter_num):\n        # Save the input value.\n        X_shortcut = X\n        \n        X = Conv2D(filters = filter_num, kernel_size = (3, 3), strides = (1,1), padding = 'same')(X)\n        X = Activation('relu')(X)\n        \n        X = Conv2D(filters = filter_num, kernel_size = (3, 3), strides = (1,1), padding = 'same')(X)\n        X = Add()([X, X_shortcut])\n        \n        return X\n\n    def __eblock(self,X,filter_num,stride):\n        X = Conv2D(filters = filter_num, kernel_size = (5, 5), strides = (stride,stride), padding = 'same')(X)\n        X = Activation('relu')(X)\n        for i in range(3):\n            X = self.__resblock(X,filter_num)\n        return X\n\n    def __dblock(self,X,filter_num,stride):\n        for i in range(3):\n            X = self.__resblock(X,filter_num*2)\n        X = Conv2DTranspose(filter_num, kernel_size = (5, 5), strides = (stride, stride), padding='same')(X)\n        X = Activation('relu')(X)\n        return X\n\n    def __outblock(self,X,filter_num):\n        for i in range(3):\n            X = self.__resblock(X,filter_num)\n        X = Conv2D(3, kernel_size = (5, 5), strides = (1, 1), padding='same')(X)\n        X = Activation('tanh')(X)\n        X = Lambda(lambda x: x/2+0.5)(X)\n        return X\n\n    def __unet1(self,X):\n        e32 = self.__eblock(X,32,1)#None,None,32\n        e64 = self.__eblock(e32,64,2)#/2,64\n        e128 = self.__eblock(e64,128,2)#/4,128\n        d64 = self.__dblock(e128,64,2)#/2,64\n        d64e64 = Add()([d64, e64])\n        
d32 = self.__dblock(d64e64,32,2)#None,None,32\n        d32e32 = Add()([d32, e32])\n        #d3 = self.__outblock(d32e32,32)\n        return d32e32\n\n    def __unet2(self,X):\n        e32 = self.__eblock(X,32,1)#None,None,32\n        e64 = self.__eblock(e32,64,2)#/2,64\n        e128 = self.__eblock(e64,128,2)#/4,128\n        d64 = self.__dblock(e128,64,2)#/2,64\n        d64e64 = Add()([d64, e64])\n        d32 = self.__dblock(d64e64,32,2)#None,None,32\n        d32e32 = Add()([d32, e32])\n        d3 = self.__outblock(d32e32,32)\n        return d3\n\n    def __makeDense(self,X,growthRate):\n        out = Conv2D(filters = growthRate, kernel_size = (3, 3), strides = (1,1), padding = 'same', use_bias=False)(X)\n        out = Activation('relu')(out)\n        out = concatenate([X,out], axis=3)\n        return out\n\n    def __RDB(self,X,nChannels,nDenselayer,growthRate):\n        X_shortcut = X\n        for i in range(nDenselayer):    \n            X = self.__makeDense(X, growthRate)\n        X = Conv2D(filters = nChannels, kernel_size = (1, 1), strides = (1,1), padding = 'same', use_bias=False)(X)\n        X = Add()([X, X_shortcut])\n        return X\n\n    def build_generator(self,input_shapeA,input_shapeB):#unet\n        if(self.load(self.config.resource.generator_json_path,self.config.resource.generator_weights_path)):\n            return self.model\n        else:#init\n            print(f'init network parameters')\n            inputsA = Input(input_shapeA,name='imageSmall')#None,None,6\n            inputsB = Input(input_shapeB,name='imageUp')#None,None,3\n            #layer 1\n            F_ = Conv2D(filters = 32, kernel_size = (3, 3), strides = (1,1), padding = 'same')(inputsA)#conv1\n            F_0 = self.__unet1(F_)#32\n            F_1 = self.__RDB(F_0,32,6,32)#RDB1\n            F_2 = self.__RDB(F_1,32,6,32)#RDB2\n            F_3 = self.__RDB(F_2,32,6,32)#RDB3\n            FF = concatenate([F_1, F_2,F_3], axis=3)\n            FdLF = Conv2D(filters = 32, 
kernel_size = (1, 1), strides = (1,1), padding = 'same')(FF)\n            FGF = Conv2D(filters = 32, kernel_size = (3, 3), strides = (1,1), padding = 'same')(FdLF)\n            FDF = Add()([FGF, F_])\n            us = Conv2D(filters = 32*4, kernel_size = (3, 3), strides = (1,1), padding = 'same')(FDF)\n            us = Lambda(lambda x: tf.depth_to_space(x,2))(us)#x2(upsample),32\n            d3 = Conv2D(filters = 3, kernel_size = (3, 3), strides = (1,1), padding = 'same')(us)\n            d3 = Activation('tanh')(d3)\n            d3 = Lambda(lambda x: x/2+0.5)(d3)\n            combined = concatenate([inputsB, d3], axis=3)#blur-generator,6\n            o2 = self.__unet2(combined)\n            model = Model(inputs=[inputsA,inputsB], outputs=o2, name='generator')\n            return model\n\n    def load(self, json_path, weights_path):\n        from keras.models import model_from_json\n        if os.path.exists(json_path) and os.path.exists(weights_path):\n            json_file = open(json_path, 'r')\n            loaded_model_json = json_file.read()\n            json_file.close()\n            self.model = model_from_json(loaded_model_json,custom_objects={'tf':tf})\n            # load weights into new model\n            self.model.load_weights(weights_path)\n            print(\"Loaded model from disk\")\n            return True\n        else:\n            return False\n\n    def save(self, model, json_path, weights_path):\n        # serialize model to JSON\n        model_json = model.to_json()\n        with open(json_path, \"w\") as json_file:\n            json_file.write(model_json)\n        # serialize weights to HDF5\n        model.save_weights(weights_path)\n        print(\"Saved model to disk\")"
  },
  {
    "path": "code/src/tester.py",
    "content": "import os\nfrom src.model.model import DDModel\nfrom src.lib.data_helper import DataHelper\nfrom src.lib.MLVSharpnessMeasure import MLVMeasurement\nfrom skimage import io,transform #reize image\nimport numpy as np\nimport pickle\nimport math\n\nclass Tester():\n    def __init__(self,config):\n        self.config = config\n        self.model = DDModel(config)\n        self.batch_size = 8\n        self.current_size = 0\n        self.pyramid_blurs = []\n        self.batch_sharps = []\n        #metrics\n        self.all_psnrs = {}\n\n    def start(self):\n        if(self.config.tester.iter == 0):\n            self.iters = [1,2,3,4]\n        else:\n            self.iters = [self.config.application.iter]\n        self.max_iter = max(self.iters)\n        for iter in self.iters:\n            self.all_psnrs[iter] = []\n        #json_path=self.config.resource.generator_json_path\n        #infos = json_path.split('generator')\n        #infos = infos[1].split('.')\n        #json_info = infos[0]\n        #weights_path=self.config.resource.generator_weights_path\n        #infos = weights_path.split('generator')\n        #infos = infos[1].split('.')\n        #weights_info = infos[0]\n        #print(f'json/weight:{json_info}/{weights_info}')\n        print(f'test strategy:{self.iters}')\n        self.test()\n\n    def __compute_psnr(self, x , label , max_diff):\n        mse =  np.mean(( x - label ) **2 )\n        return 10*math.log10( max_diff**2 / mse )\n\n    def __doBatchTest(self):\n        n = len(self.pyramid_blurs)\n        for iter in self.iters:\n            batch_blurs2x = []\n            batch_blurs1x = []\n            for i in range(iter,0,-1):\n                if(i == iter):#first iter\n                    #generate batch_blurs2x\n                    for j in range(n):\n                        pyramid_blur = self.pyramid_blurs[j]\n                        imageBlur2x = pyramid_blur[i]\n                        batch_blurs2x.append(imageBlur2x)\n          
          batch_gen = batch_blurs2x\n                else:\n                    #generate batch_blurs2x\n                    batch_blurs2x = batch_blurs1x\n                    batch_blurs1x = []\n                #generate batch_blurs1x\n                for j in range(n):\n                    pyramid_blur = self.pyramid_blurs[j]\n                    imageBlur1x = pyramid_blur[i-1]\n                    batch_blurs1x.append(imageBlur1x)\n                #data prepare end\n                \n                #predict 2x\n                data_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels\n                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blurs1x)}\n                batch_gen = self.model.generator.predict(data_X)\n            #calculate metrics\n            for i in range(n):\n                pImage = batch_gen[i]\n                pImage = pImage[24:744]\n                psnr = self.__compute_psnr(pImage, self.batch_sharps[i], 1)\n                self.all_psnrs[iter].append(psnr)\n        #reset\n        self.current_size = 0\n        self.pyramid_blurs = []\n        self.batch_sharps = []\n\n    def __doInteration(self,blur,sharp):\n        #self.sharpness.append(self.measure.getScore(blur))\n        if(self.current_size < self.batch_size):\n            blur = np.pad(blur,((24,24),(0,0),(0,0)),'reflect')#be divided by 256\n            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))\n            self.batch_sharps.append(sharp)\n            self.current_size += 1\n        if(self.current_size == self.batch_size):#train a batch\n            self.__doBatchTest()\n\n    def test(self):\n        dataHelper = DataHelper()\n        dataHelper.load_data(self.config.resource.test_directory_path,0)\n        \n        blurSharpParis = dataHelper.getLoadedPairs()\n        for imageBlur,imageSharp in blurSharpParis:\n            
self.__doInteration(imageBlur,imageSharp)\n        if(self.pyramid_blurs):\n            self.__doBatchTest()\n        \n        #analyse results\n        psnrs = []\n        for iter in self.iters:\n            psnrs.append(self.all_psnrs[iter])\n        psnrs = np.array(psnrs)\n        psnrs_by_iter = np.mean(psnrs,axis=1)\n        for i in range(len(psnrs_by_iter)):\n            print(f'PSNR:{psnrs_by_iter[i]}@{self.iters[i]}')\n        best_psnrs = np.amax(psnrs,axis=0)\n        path=os.path.join(self.config.resource.output_dir, \"psnrs.pkl\")\n        with open(path, 'wb') as pfile:\n          pickle.dump(best_psnrs, pfile, protocol=pickle.HIGHEST_PROTOCOL)\n        best_iters_index = np.argmax(psnrs,axis=0)\n        iters = np.array(self.iters)\n        best_iters = iters[best_iters_index]\n        path=os.path.join(self.config.resource.output_dir, \"iters.pkl\")\n        with open(path, 'wb') as pfile:\n          pickle.dump(best_iters, pfile, protocol=pickle.HIGHEST_PROTOCOL)\n        calculate_data_n = len(best_psnrs)\n        #path=os.path.join(self.config.resource.output_dir, \"sharpness.pkl\")\n        #with open(path, 'wb') as pfile:\n        #  pickle.dump(self.sharpness, pfile, protocol=pickle.HIGHEST_PROTOCOL)\n        calculate_data_n = len(best_psnrs)\n        print(f'{calculate_data_n}/{len(blurSharpParis)} done! Average PSNRs(Best):{np.mean(best_psnrs)}')"
  },
  {
    "path": "code/src/trainer.py",
    "content": "from src.model.model import DDModel\nfrom src.lib.data_producer import DataProducer\nimport tensorflow as tf\nimport keras.backend as K\nfrom keras.optimizers import RMSprop,Adam\nfrom skimage import io,transform,feature,color\nimport numpy as np\nimport sys\nfrom keras.utils.training_utils import multi_gpu_model\nimport queue\nimport threading\n\nclass Trainer():\n    def __init__(self,config):\n        self.config = config\n        self.model = DDModel(config)\n        self.batch_size = config.trainer.batch_size\n        self.learningSteps = [1e-4,3e-5,5e-6,1e-6]\n        #self.learningSteps = [1e-4,3e-5]\n        self.currentStep = 0\n        self.bestLoss = 2\n        self.bestEpoch = 0\n        self.current_size = 0\n        self.iters = [3]\n        self.iter_length = len(self.iters)\n        self.pyramid_blurs = []\n        self.pyramid_sharps = []\n        \n\n    def start(self):\n        #json_path=self.config.resource.generator_json_path\n        #infos = json_path.split('generator')\n        #infos = infos[1].split('.')\n        #json_info = infos[0]\n        #weights_path=self.config.resource.generator_weights_path\n        #infos = weights_path.split('generator')\n        #infos = infos[1].split('.')\n        #weights_info = infos[0]\n        #print(f'json/weight:{json_info}/{weights_info}')\n        self.train(self.config.trainer.maxEpoch)\n\n    def __trainBatch(self):\n        batch_blurs2x = []\n        batch_blurs1x = []\n        batch_sharps1x = []\n        n = len(self.pyramid_blurs)\n        for i in range(self.max_iter,0,-1):\n            if(i == self.max_iter):#first iter\n                #generate batch_blurs2x\n                for j in range(n):\n                    pyramid_blur = self.pyramid_blurs[j]\n                    imageBlur2x = pyramid_blur[i]\n                    batch_blurs2x.append(imageBlur2x)\n                batch_gen = batch_blurs2x\n            else:\n                #generate batch_blurs2x\n                
batch_blurs2x = batch_blurs1x\n                batch_blurs1x = []\n                batch_sharps1x = []\n            #generate batch_blurs1x\n            for j in range(n):\n                pyramid_blur = self.pyramid_blurs[j]\n                imageBlur1x = pyramid_blur[i-1]\n                batch_blurs1x.append(imageBlur1x)\n            #generate batch_sharps1x\n            for j in range(n):\n                pyramid_sharp = self.pyramid_sharps[j]\n                imageSharp1x = pyramid_sharp[i-1]\n                batch_sharps1x.append(imageSharp1x)\n            #data generate end\n            \n            #train Generator 2x\n            train_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels\n            train_X = {'imageSmall':train_X1,'imageUp':np.array(batch_blurs1x)}\n            g_loss = self.generator.train_on_batch(train_X,np.array(batch_sharps1x))\n            if(i == 1):#last iter\n                self.g_loss += g_loss * n\n            else:\n                batch_gen = self.generator.predict(train_X)\n        #train end,reset\n        self.current_size = 0\n        self.pyramid_blurs = []\n        self.pyramid_sharps = []\n\n    def __doInteration(self,blur,sharp,epoch):\n        iter_index = epoch%self.iter_length\n        self.max_iter = self.iters[iter_index]\n        if(self.current_size < self.batch_size):\n            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))\n            self.pyramid_sharps.append(tuple(transform.pyramid_gaussian(sharp, downscale=2, max_layer=self.max_iter, multichannel=True)))\n            self.current_size += 1\n        if(self.current_size == self.batch_size):#train a batch\n            self.__trainBatch()\n\n    def __nextStep(self):\n        #lr = K.get_value(self.generator.optimizer.lr)\n        self.currentStep += 1\n        if(self.currentStep < len(self.learningSteps)):\n            lr = 
self.learningSteps[self.currentStep]\n            K.set_value(self.generator.optimizer.lr, lr)\n            self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)\n            f_lr = \"{:.2e}\".format(lr)\n            print(f'learning rate:{f_lr}')\n            return False\n        else:#early end\n            return True\n\n    def __learningScheduler(self,epoch):\n        if(epoch == 0):\n            lr = K.get_value(self.generator.optimizer.lr)\n            f_lr = \"{:.2e}\".format(lr)\n            print(f'learning rate:{f_lr}')\n            return False\n        if(self.bestLoss>self.g_loss):\n            self.bestLoss = self.g_loss\n            self.bestEpoch = epoch\n            if(self.currentStep == len(self.learningSteps)-1):#last step\n                self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)\n            return False\n        #self.bestLoss<=self.g_loss, model not improved\n        if(self.currentStep == len(self.learningSteps)-1):#last step\n            patience = 50\n        else:\n            patience = 30\n        if(epoch-self.bestEpoch >= patience):\n            return self.__nextStep()\n\n    def train(self,maxEpoch):\n        optimizer = Adam(self.learningSteps[self.currentStep])\n        if(self.config.trainer.gpu_num>1):\n            self.generator = multi_gpu_model(self.model.generator, self.config.trainer.gpu_num)\n        else:\n            self.generator = self.model.generator\n        self.generator.compile(loss='mean_absolute_error', optimizer=optimizer)\n        print(f'generator:{self.generator.metrics_names}')\n        print(f'training strategy:{self.iters}')\n        \n        image_queue = queue.Queue(maxsize=self.config.trainer.batch_size*4)\n        dataProducer = DataProducer('Producer',image_queue,self.config)\n        n = 
dataProducer.loadDataList(self.config.resource.train_directory_path)\n        dataProducer.start()\n        for epoch in range(maxEpoch):\n            #tune learning rate\n            if(self.__learningScheduler(epoch)):#early end\n                print('early end')\n                sys.exit()\n            '''\n            if(epoch == 0):\n                lr = K.get_value(self.generator.optimizer.lr)\n                f_lr = \"{:.2e}\".format(lr)\n                print(f'learning rate:{f_lr}')\n            elif(epoch % 300 == 0):\n                earlyEnd = self.__nextStep()\n                if(earlyEnd):\n                    break\n            '''\n            self.g_loss = 0\n            for i in range(n):\n              imageBlur,imageSharp = image_queue.get(1)#blocking get\n              self.__doInteration(imageBlur,imageSharp,epoch)\n            if(self.pyramid_blurs):\n              #last batch may be smaller than batch_size\n              self.__trainBatch()\n            #f_g_loss = [\"{:.2f}\".format(x) for x in self.g_loss]\n            self.g_loss = self.g_loss/n\n            f_g_loss = \"{:.3e}\".format(self.g_loss)\n            print(f'epoch:{epoch+1}/{maxEpoch},[G loss:{f_g_loss}]')\n        self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)"
  },
  {
    "path": "code/src/verification.py",
    "content": "from src.model.model import DDModel\nfrom src.lib.data_producer import DataProducer\nfrom src.lib.data_helper import DataHelper\nimport tensorflow as tf\nimport keras.backend as K\nfrom keras.optimizers import RMSprop,Adam\nfrom skimage import io,transform,feature,color,img_as_float\nimport numpy as np\nimport sys\nfrom keras.utils.training_utils import multi_gpu_model\nimport queue\nimport threading\nimport math\n\nclass Verification():\n    def __init__(self,config):\n        self.config = config\n        self.model = DDModel(config)\n        self.batch_size = config.trainer.batch_size\n        self.learningRate = 1e-6\n        self.bestMetric = 0#psnr\n        self.bestEpoch = 0\n        self.patience = 200\n        self.current_size = 0\n        self.iters = [3]\n        self.iter_length = len(self.iters)\n        self.pyramid_blurs = []\n        self.pyramid_sharps = []\n\n    def start(self):\n        #json_path=self.config.resource.generator_json_path\n        #infos = json_path.split('generator')\n        #infos = infos[1].split('.')\n        #json_info = infos[0]\n        #weights_path=self.config.resource.generator_weights_path\n        #infos = weights_path.split('generator')\n        #infos = infos[1].split('.')\n        #weights_info = infos[0]\n        #print(f'json/weight:{json_info}/{weights_info}')\n        print(f'verification strategy:{self.iters}')\n        self.bestMetric = self.__getMetric()#init\n        print(f'init metric:{self.bestMetric}')\n        self.train()\n\n    def __trainBatch(self):\n        batch_blurs2x = []\n        batch_blurs1x = []\n        batch_sharps1x = []\n        n = len(self.pyramid_blurs)\n        for i in range(self.max_iter,0,-1):\n            if(i == self.max_iter):#first iter\n                #generate batch_blurs2x\n                for j in range(n):\n                    pyramid_blur = self.pyramid_blurs[j]\n                    imageBlur2x = pyramid_blur[i]\n                    
batch_blurs2x.append(imageBlur2x)\n                batch_gen = batch_blurs2x\n            else:\n                #generate batch_blurs2x\n                batch_blurs2x = batch_blurs1x\n                batch_blurs1x = []\n                batch_sharps1x = []\n            #generate batch_blurs1x\n            for j in range(n):\n                pyramid_blur = self.pyramid_blurs[j]\n                imageBlur1x = pyramid_blur[i-1]\n                batch_blurs1x.append(imageBlur1x)\n            #generate batch_sharps1x\n            for j in range(n):\n                pyramid_sharp = self.pyramid_sharps[j]\n                imageSharp1x = pyramid_sharp[i-1]\n                batch_sharps1x.append(imageSharp1x)\n            #data generate end\n            \n            #train Generator 2x\n            train_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels\n            train_X = {'imageSmall':train_X1,'imageUp':np.array(batch_blurs1x)}\n            g_loss = self.generator.train_on_batch(train_X,np.array(batch_sharps1x))\n            if(i == 1):#last iter\n                self.g_loss += g_loss * n\n            else:\n                batch_gen = self.generator.predict(train_X)\n        #train end,reset\n        self.current_size = 0\n        self.pyramid_blurs = []\n        self.pyramid_sharps = []\n\n    def __doInteration(self,blur,sharp,epoch):\n        iter_index = epoch%self.iter_length\n        self.max_iter = self.iters[iter_index]\n        if(self.current_size < self.batch_size):\n            self.pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=self.max_iter, multichannel=True)))\n            self.pyramid_sharps.append(tuple(transform.pyramid_gaussian(sharp, downscale=2, max_layer=self.max_iter, multichannel=True)))\n            self.current_size += 1\n        if(self.current_size == self.batch_size):#train a batch\n            self.__trainBatch()\n\n    def __compute_psnr(self, x , label , max_diff):\n        mse =  
np.mean(( x - label ) **2 )\n        return 10*math.log10( max_diff**2 / mse )\n\n    def __testBatch(self,pyramid_blurs,batch_sharps):\n        n = len(pyramid_blurs)\n        psnrs = []\n        for iter in self.iters:\n            batch_blurs2x = []\n            batch_blurs1x = []\n            for i in range(iter,0,-1):\n                if(i == iter):#first iter\n                    #generate batch_blurs2x\n                    for j in range(n):\n                        pyramid_blur = pyramid_blurs[j]\n                        imageBlur2x = pyramid_blur[i]\n                        batch_blurs2x.append(imageBlur2x)\n                    batch_gen = batch_blurs2x\n                else:\n                    #generate batch_blurs2x\n                    batch_blurs2x = batch_blurs1x\n                    batch_blurs1x = []\n                #generate batch_blurs1x\n                for j in range(n):\n                    pyramid_blur = pyramid_blurs[j]\n                    imageBlur1x = pyramid_blur[i-1]\n                    batch_blurs1x.append(imageBlur1x)\n                #data prepare end\n                \n                #predict 2x\n                data_X1 = np.concatenate((batch_blurs2x,batch_gen), axis=3)#6channels\n                data_X = {'imageSmall':data_X1,'imageUp':np.array(batch_blurs1x)}\n                batch_gen = self.model.generator.predict(data_X)\n            #calculate metrics\n            batch_psnrs = []\n            for i in range(n):\n                pImage = batch_gen[i]\n                pImage = pImage[24:744]\n                psnr = self.__compute_psnr(pImage, batch_sharps[i], 1)\n                batch_psnrs.append(psnr)\n            psnrs.append(batch_psnrs)\n        psnrs = np.array(psnrs)\n        best_index = np.argmax(psnrs,axis=0)\n        for i in range(n):\n            best_psnr = psnrs[best_index[i]][i]\n            best_iter = self.iters[best_index[i]]\n            self.best_psnrs.append(best_psnr)\n            
self.best_iters.append(best_iter)\n\n    def __getMetric(self):\n        dataHelper = DataHelper()\n        dataHelper.loadDataList(self.config.resource.test_directory_path)\n        fileBlurList = dataHelper.getTestDatas()\n        batch_size = 8\n        max_iter = max(self.iters)\n        #metrics\n        self.best_psnrs = []\n        self.best_iters = []\n        \n        current_size = 0\n        pyramid_blurs = []\n        batch_sharps = []\n        for fileFullPath in fileBlurList:\n            blur = img_as_float(io.imread(fileFullPath))\n            sharp = img_as_float(io.imread(fileFullPath.replace('/blur','/sharp')))\n            if(current_size < batch_size):\n                blur = np.pad(blur,((24,24),(0,0),(0,0)),'reflect')#pad height 720 -> 768 so it is divisible by 256\n                pyramid_blurs.append(tuple(transform.pyramid_gaussian(blur, downscale=2, max_layer=max_iter, multichannel=True)))\n                batch_sharps.append(sharp)\n                current_size += 1\n            if(current_size == batch_size):#verify a batch\n                self.__testBatch(pyramid_blurs,batch_sharps)\n                current_size = 0\n                pyramid_blurs = []\n                batch_sharps = []\n        if(pyramid_blurs):\n            self.__testBatch(pyramid_blurs,batch_sharps)\n            current_size = 0\n            pyramid_blurs = []\n            batch_sharps = []\n        return np.mean(self.best_psnrs)\n\n    def __verify(self,epoch):\n        if(epoch % 50 != 0):\n            return False\n        metric = self.__getMetric()\n        print(f'current metric:{metric}')\n        if(metric > self.bestMetric):\n            self.bestMetric = metric\n            self.bestEpoch = epoch\n            self.model.save(self.model.generator,self.config.resource.generator_json_path,self.config.resource.generator_weights_path)\n            return False\n        elif(epoch - self.bestEpoch < self.patience):\n            return False\n        else:\n            return True\n\n    def train(self):\n        optimizer = Adam(self.learningRate)\n        if(self.config.trainer.gpu_num>1):\n            self.generator = multi_gpu_model(self.model.generator, self.config.trainer.gpu_num)\n        else:\n            self.generator = self.model.generator\n        self.generator.compile(loss='mean_absolute_error', optimizer=optimizer)\n        print(f'generator:{self.generator.metrics_names}')\n        \n        image_queue = queue.Queue(maxsize=self.config.trainer.batch_size*4)\n        dataProducer = DataProducer('Producer',image_queue,self.config)\n        n = dataProducer.loadDataList(self.config.resource.train_directory_path)\n        dataProducer.start()\n        epoch = 0\n        while(True):\n            self.g_loss = 0\n            for i in range(n):\n              imageBlur,imageSharp = image_queue.get(1)#blocking get\n              self.__doInteration(imageBlur,imageSharp,epoch)\n            if(self.pyramid_blurs):\n              #last batch may be smaller than batch_size\n              self.__trainBatch()\n            #f_g_loss = [\"{:.2f}\".format(x) for x in self.g_loss]\n            self.g_loss = self.g_loss/n\n            f_g_loss = \"{:.3e}\".format(self.g_loss)\n            print(f'verification epoch:{epoch},[G loss:{f_g_loss}]')\n            epoch += 1\n            if(self.__verify(epoch)):\n                break"
  }
]