Repository: moodoki/semantic_image_inpainting
Branch: extensions
Commit: c9201b0707b0
Files: 26
Total size: 68.8 KB

Directory structure:
gitextract_vv0wxru7/

├── .gitignore
├── LICENSE
├── README.md
├── graphs/
│   └── .gitignore
├── requirements.txt
├── runall.sh
└── src/
    ├── colorize.py
    ├── corrupt_image.py
    ├── denoising.py
    ├── external/
    │   ├── __init__.py
    │   └── poissonblending.py
    ├── helper.py
    ├── imgsnr.py
    ├── inpaint.py
    ├── inpaint_test.py
    ├── model.py
    ├── model_base.py
    ├── model_colorize.py
    ├── model_denoising.py
    ├── model_inpaint.py
    ├── model_inpaint_test.py
    ├── model_quantize.py
    ├── model_superres.py
    ├── quantize.py
    ├── superres.py
    └── tools.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
*.swp
__pycache__
*.pbtext
completions
*.pyc


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2017 TeckYian Lim

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
Semantic Image Inpainting with Deep Generative Models and Image Restoration with GANs
=====================================================
[[Inpainting CVPR2017]](http://www.isle.illinois.edu/~yeh17/projects/semantic_inpaint/index.html)
[[Image Restore ICASSP2018]](https://teckyianlim.me/image-restore/image-restore.html)

Tensorflow implementation for semantic image inpainting:

![](http://www.isle.illinois.edu/~yeh17/projects/semantic_inpaint/img/process.png)

Semantic Image Inpainting With Deep Generative Models

[Raymond A. Yeh*](http://www.isle.illinois.edu/~yeh17/),
[Teck Yian Lim*](http://tlim11.web.engr.illinois.edu/),
[Chen Chen](http://cchen156.web.engr.illinois.edu/),
[Alexander G. Schwing](http://www.alexander-schwing.de/),
[Mark Hasegawa-Johnson](http://www.ifp.illinois.edu/~hasegawa/),
[Minh N. Do](http://minhdo.ece.illinois.edu/)

In CVPR 2017

\* indicates equal contribution.

Overview
--------
Implementation of the proposed cost function and backpropagation to the generator's input.

In this code release, we load a pretrained DCGAN model, and apply our proposed
objective function for the task of image completion. 
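
The optimization can be sketched in a few lines. The following is a hedged toy (a linear stand-in "generator", numerical gradients, made-up dimensions), not the repository's TensorFlow graph; the real code backpropagates through a frozen DCGAN:

```python
# Toy sketch of the objective: min_z ||M * (G(z) - y)||_1 + lambda_p * prior(z)
# Assumptions: G is a linear stand-in, gradients are finite differences.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))               # toy generator: G(z) = W @ z
y = W @ rng.normal(size=4)                 # "observed" image, flattened
M = (rng.random(16) > 0.3).astype(float)   # 1 = known pixel, 0 = missing

def loss(z, lambda_p=0.1):
    context = np.abs(M * (W @ z - y)).sum()   # masked L1 context loss
    prior = lambda_p * (z ** 2).sum()         # stand-in for the GAN prior term
    return context + prior

z0 = rng.normal(size=4)
z = z0.copy()
for _ in range(500):                       # gradient descent on z, not on G
    grad = np.array([(loss(z + 1e-4 * e) - loss(z - 1e-4 * e)) / 2e-4
                     for e in np.eye(4)])
    z -= 0.01 * grad
```

After descent, `G(z)` agrees with the known pixels of `y`, and the masked region is filled in by the generator.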

Notes
-----
To reproduce the CVPR2017 work, run the inpainting example.

Dependencies
------------
 - Tensorflow >= 1.0
 - scipy + PIL/pillow (image io)
 - pyamg (for Poisson blending)

Tested to work with both Python 2.7 and Python 3.5


Files
-----
 - src/model.py - Main implementation
 - src/inpaint.py - command line application
 - src/external - external code; citations included in the source files
 - graphs/dcgan-100.pb - frozen pretrained DCGAN with 100-dimension latent space
 
Weights
-------

Git doesn't work nicely with large binary files. Please download our weights from 
[here](https://www.dropbox.com/s/3uo97fzu4jfi2ms/dcgan-100.pb?dl=0), trained on the 
[CelebA dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html).

Alternatively, train your own GAN using your dataset. Conversion from checkpoint to 
Tensorflow ProtoBuf format can be done with 
[this script](https://gist.github.com/moodoki/e37a85fb0258b045c005ca3db9cbc7f6).


Running
-------


Generate multiple candidates for completion:
```
python src/inpaint.py --model_file graphs/dcgan-100.pb \
    --maskType center --in_image testimages/face1.png \
    --nIter 1000 --blend
```

Generate completions for multiple input images:
```
python src/inpaint.py --model_file graphs/dcgan-100.pb \
    --maskType center --inDir testimages \
    --nIter 1000 --blend
```
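
For reference, the `center` mask used above keeps the border and zeroes a square covering the middle half of each axis. A minimal numpy sketch of the center branch of `gen_mask` in `src/inpaint.py` (with the default `imgSize = 64`):

```python
import numpy as np

# Center mask: ones everywhere, zeros on the central square.
img_size = 64
scale = 0.25                       # fraction kept on each border
mask = np.ones((img_size, img_size))
l = int(img_size * scale)          # 16
u = int(img_size * (1.0 - scale))  # 48
mask[l:u, l:u] = 0.0               # 32x32 hole in the middle
```

Pixels where the mask is 1 contribute to the context loss; pixels where it is 0 are filled in by the generator.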


Citation
--------

~~~
@inproceedings{
    yeh2017semantic,
    title={Semantic Image Inpainting with Deep Generative Models},
    author={Yeh$^\ast$, Raymond A. and Chen$^\ast$, Chen and Lim, Teck Yian and Schwing, Alexander G. and Hasegawa-Johnson, Mark and Do, Minh N.},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
    year={2017},
    note = {$^\ast$ equal contribution},
}
~~~



================================================
FILE: graphs/.gitignore
================================================
*
!.gitignore


================================================
FILE: requirements.txt
================================================
Pillow
tensorflow
pyamg


================================================
FILE: runall.sh
================================================
batches=`seq 0 15`

for b in $batches
do
    echo test_batches/quantized/batch_$b
    python src/quantize.py --model_file graphs/dcgan-100.pb --inDir test_batches/quantized/batch_$b --nIter 1000 --blend --outDir test_out
done

for b in $batches
do
    echo test_batches/gaussian_noise/batch_$b
    python src/denoising.py --model_file graphs/dcgan-100.pb --inDir test_batches/gaussian_noise/batch_$b --nIter 1000 --blend --outDir test_out
done

for b in $batches
do
    echo test_batches/testset/batch_$b
    python src/colorize.py --model_file graphs/dcgan-100.pb --inDir test_batches/testset/batch_$b --nIter 1000 --blend --outDir test_out
done

for b in $batches
do
    echo test_batches/sr_linear/batch_$b
    python src/superres.py --model_file graphs/dcgan-100.pb --inDir test_batches/sr_linear/batch_$b --nIter 1000 --blend --outDir test_out
done

for b in $batches
do
    echo test_batches/sr_nn/batch_$b
    python src/superres.py --model_file graphs/dcgan-100.pb --inDir test_batches/sr_nn/batch_$b --nIter 1000 --blend --outDir test_out/sr_nn
done




================================================
FILE: src/colorize.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob
from helper import loadimage_gray, saveimages

from model_colorize import ModelColorize

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.01)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=1000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.1)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='colorization')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')
parser.add_argument('-c', action='store_true', help='corrupt image on the fly')

args = parser.parse_args()

def main():
    m = ModelColorize(args.model_file, args)

    # Generate some samples from the model as a test
    imout = m.sample()
    #saveimages(imout)

    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage_gray(f) for f in imgfilenames])
    elif args.in_image is not None:
        imgfilenames = [args.in_image]
        in_img = np.array([loadimage_gray(args.in_image)])
    else:
        print('Input image needs to be specified')
        exit(1)

    saveimages(in_img, 'colorize_in', imgfilenames, args.outDir)

    inpaint_out, g_out = m.colorize(in_img, args.blend)
    saveimages(g_out, 'colorize_gen', imgfilenames, args.outDir)
    saveimages(inpaint_out, 'colorize', imgfilenames, args.outDir)


if __name__ == '__main__':
    main()


================================================
FILE: src/corrupt_image.py
================================================
from scipy.misc import imread, imsave, imresize
import argparse
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', type=str, required=True)
parser.add_argument('-o', '--output', type=str, required=True)
parser.add_argument('-t', '--type', type=str, 
                    choices=(['gaussian_noise',
                              'quantize',
                              'sr_linear',
                              'sr_nn',
                              'sr_black'
                             ]),
                    default='gaussian_noise')

def tofloat(img):
    img = img.astype(np.float32)
    return img/np.max(img)

def gaussian_noise(img, std=0.1):
    i = tofloat(img)
    i = i + std*np.random.normal(size=img.shape)
    return i

def quantize_2(img, levels=4):
    i = tofloat(img)
    i = np.floor(i*levels)/levels
    return i

def quantize(img, quantize_factor=55):
    img = img.astype(float)
    img /= 255.0
    img *= (255.0 / quantize_factor)
    images = img
    images = np.floor(images)
    images /= (255.0 / quantize_factor)
    images *= 255.0
    images = images.astype(int)
    return images


def resize(img, rate=4., interp='bilinear'):
    return imresize(img, float(rate), interp=interp) 

def sr_linear(img, rate=4.):
    s = resize(img, 1./rate, interp='bilinear')
    return resize(s, rate, interp='bilinear')

def sr_nn(img, rate=4.):
    s = resize(img, 1./rate, interp='bilinear')
    return resize(s, rate, interp='nearest')

def sr_black(img, rate=4):
    o = np.zeros_like(img)
    s = resize(img, 1./rate, interp='bilinear')
    o[::rate, ::rate, :] = s
    return o

def main(args):
    # Dispatch by name instead of eval() so only known corruptions can run
    corruptions = {'gaussian_noise': gaussian_noise,
                   'quantize': quantize,
                   'sr_linear': sr_linear,
                   'sr_nn': sr_nn,
                   'sr_black': sr_black}
    i = imread(args.input)
    o = corruptions[args.type](i)
    imsave(args.output, o)

if __name__ == '__main__':
    args = parser.parse_args()
    main(args)
    


================================================
FILE: src/denoising.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob
from helper import loadimage, saveimages

from model_denoising import ModelDenoising

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.0001)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=1000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.03)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='denoising')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')
parser.add_argument('-c', action='store_true', help='corrupt image on the fly')

args = parser.parse_args()


def corrupt_image(img, noise_scale=0.9):
    """Corrupt the image with additive zero-mean Gaussian noise."""
    noisy = img + noise_scale*img.std()*np.random.normal(size=img.shape)
    return noisy

def main():
    m = ModelDenoising(args.model_file, args)
    m.sigma=0.1

    # Generate some samples from the model as a test
    #imout = m.sample()
    #saveimages(imout)

    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage(f) for f in imgfilenames])
        if args.c:
            in_corrupt_img = np.array([corrupt_image(loadimage(f))
                                       for f in imgfilenames])
        else:
            in_corrupt_img = np.copy(in_img)
    elif args.in_image is not None:
        imgfilenames = [args.in_image]
        in_img = np.array([loadimage(args.in_image)])
        in_corrupt_img = np.copy(in_img)
    else:
        print('Input image needs to be specified')
        exit(1)
    #saveimages(in_corrupt_img, prefix='input')
    inpaint_out, g_out = m.restore_image(in_corrupt_img)
    saveimages(g_out, 'denoise_gen', imgfilenames, args.outDir)
    saveimages(inpaint_out, 'denoise', imgfilenames, args.outDir)


if __name__ == '__main__':
    main()


================================================
FILE: src/external/__init__.py
================================================


================================================
FILE: src/external/poissonblending.py
================================================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#
# Original code from https://github.com/parosky/poissonblending 

import numpy as np
import scipy.sparse
import PIL.Image
import pyamg

# pre-process the mask array so that uint64 types from opencv.imread can be adapted
def prepare_mask(mask):
    if type(mask[0][0]) is np.ndarray:
        result = np.ndarray((mask.shape[0], mask.shape[1]), dtype=np.uint8)
        for i in range(mask.shape[0]):
            for j in range(mask.shape[1]):
                if sum(mask[i][j]) > 0:
                    result[i][j] = 1
                else:
                    result[i][j] = 0
        mask = result
    return mask

def blend(img_target, img_source, img_mask, offset=(0, 0)):
    # compute regions to be blended
    region_source = (
            max(-offset[0], 0),
            max(-offset[1], 0),
            min(img_target.shape[0]-offset[0], img_source.shape[0]),
            min(img_target.shape[1]-offset[1], img_source.shape[1]))
    region_target = (
            max(offset[0], 0),
            max(offset[1], 0),
            min(img_target.shape[0], img_source.shape[0]+offset[0]),
            min(img_target.shape[1], img_source.shape[1]+offset[1]))
    region_size = (region_source[2]-region_source[0], region_source[3]-region_source[1])

    # clip and normalize mask image
    img_mask = img_mask[region_source[0]:region_source[2], region_source[1]:region_source[3]]
    img_mask = prepare_mask(img_mask)
    img_mask[img_mask==0] = False
    img_mask[img_mask!=False] = True

    # create coefficient matrix
    A = scipy.sparse.identity(np.prod(region_size), format='lil')
    for y in range(region_size[0]):
        for x in range(region_size[1]):
            if img_mask[y,x]:
                index = x+y*region_size[1]
                A[index, index] = 4
                if index+1 < np.prod(region_size):
                    A[index, index+1] = -1
                if index-1 >= 0:
                    A[index, index-1] = -1
                if index+region_size[1] < np.prod(region_size):
                    A[index, index+region_size[1]] = -1
                if index-region_size[1] >= 0:
                    A[index, index-region_size[1]] = -1
    A = A.tocsr()
    
    # create poisson matrix for b
    P = pyamg.gallery.poisson(img_mask.shape)

    # for each layer (ex. RGB)
    for num_layer in range(img_target.shape[2]):
        # get subimages
        t = img_target[region_target[0]:region_target[2],region_target[1]:region_target[3],num_layer]
        s = img_source[region_source[0]:region_source[2], region_source[1]:region_source[3],num_layer]
        t = t.flatten()
        s = s.flatten()

        # create b
        b = P * s
        for y in range(region_size[0]):
            for x in range(region_size[1]):
                if not img_mask[y,x]:
                    index = x+y*region_size[1]
                    b[index] = t[index]

        # solve Ax = b
        x = pyamg.solve(A,b,verb=False,tol=1e-10)

        # assign x to target image
        x = np.reshape(x, region_size)
        x[x>255] = 255
        x[x<0] = 0
        x = np.array(x, img_target.dtype)
        img_target[region_target[0]:region_target[2],region_target[1]:region_target[3],num_layer] = x

    return img_target


def test():
    img_mask = np.asarray(PIL.Image.open('./testimages/test1_mask.png'))
    img_mask.flags.writeable = True
    img_source = np.asarray(PIL.Image.open('./testimages/test1_src.png'))
    img_source.flags.writeable = True
    img_target = np.asarray(PIL.Image.open('./testimages/test1_target.png'))
    img_target.flags.writeable = True
    img_ret = blend(img_target, img_source, img_mask, offset=(40,-30))
    img_ret = PIL.Image.fromarray(np.uint8(img_ret))
    img_ret.save('./testimages/test1_ret.png')


if __name__ == '__main__':
    test()


================================================
FILE: src/helper.py
================================================
import scipy.misc
import os
import numpy as np

def loadimage(filename):
    img = scipy.misc.imread(filename).astype(np.float)
    return img

def loadimage_gray(filename):
    img = scipy.misc.imread(filename, mode='L').astype(np.float)[:,:,np.newaxis]
    return img

def saveimages(outimages, prefix='samples', filenames=None, outdir='out'):
    numimages = len(outimages)
    print("Array shape {}".format(outimages.shape))

    if not os.path.exists(outdir):
        os.mkdir(outdir)

    for i in range(numimages):
        if filenames is None:
            filename = '{}_{}.png'.format(prefix, i)
        else:
            filename = '{}_{}'.format(prefix, os.path.basename(filenames[i]))
        filename = os.path.join(outdir, filename)
        scipy.misc.imsave(filename, np.squeeze(outimages[i, :, :, :]))




================================================
FILE: src/imgsnr.py
================================================
from scipy.misc import imread
import numpy as np
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', type=str, help="original")
parser.add_argument('-o', '--output', type=str, help="processed")

def psnr(a, b):
    a = normalize(a)
    b = normalize(b)
    e = a-b
    n = np.mean(np.multiply(e, e))
    return 10*np.log10(1/n)

def normalize(i):
    i = i.astype(np.float32)
    return i/np.max(i)

def main(args):
    a = imread(args.input)
    b = imread(args.output)
    print(args.output, psnr(a, b))

if __name__ == '__main__':
    args = parser.parse_args()
    main(args)

    


================================================
FILE: src/inpaint.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob

from model_inpaint import ModelInpaint
from helper import loadimage, saveimages

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.0003)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=3000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.2)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='completions')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--maskType', type=str,
                    choices=['random', 'center', 'left', 'file'],
                    default='center')
parser.add_argument('--maskFile', type=str,
                    default=None,
                    help='Input binary mask for file mask type')
parser.add_argument('--maskThresh', type=int,
                    default=128,
                    help='Threshold in case input mask is not binary')
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')

args = parser.parse_args()


#def loadimage(filename):
#    img = scipy.misc.imread(filename, mode='RGB').astype(np.float)
#    return img
#
#
#def saveimages(outimages, prefix='samples'):
#    numimages = len(outimages)
#
#    if not os.path.exists(args.outDir):
#        os.mkdir(args.outDir)
#
#    for i in range(numimages):
#        filename = '{}_{}.png'.format(prefix, i)
#        filename = os.path.join(args.outDir, filename)
#        scipy.misc.imsave(filename, outimages[i, :, :, :])


def gen_mask(maskType):
    image_shape = [args.imgSize, args.imgSize]
    if maskType == 'random':
        fraction_masked = 0.2
        mask = np.ones(image_shape)
        mask[np.random.random(image_shape[:2]) < fraction_masked] = 0.0
    elif maskType == 'center':
        scale = 0.25
        assert(scale <= 0.5)
        mask = np.ones(image_shape)
        sz = args.imgSize
        l = int(args.imgSize*scale)
        u = int(args.imgSize*(1.0-scale))
        mask[l:u, l:u] = 0.0
    elif maskType == 'left':
        mask = np.ones(image_shape)
        c = args.imgSize // 2
        mask[:, :c] = 0.0
    elif maskType == 'file':
        mask = loadmask(args.maskFile, args.maskThresh)
    else:
        assert(False)
    return mask


def loadmask(filename, thresh=128):
    immask = scipy.misc.imread(filename, mode='L')
    image_shape = [args.imgSize, args.imgSize]
    mask = np.ones(image_shape)
    mask[immask < thresh] = 0
    mask[immask >= thresh] = 1
    return mask


def main():
    m = ModelInpaint(args.model_file, args)

    # Generate some samples from the model as a test
    #imout = m.sample()
    #saveimages(imout)

    mask = gen_mask(args.maskType)
    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage(f) for f in imgfilenames])
    elif args.in_image is not None:
        imgfilenames = [args.in_image]
        in_img = np.array([loadimage(args.in_image)])
    else:
        print('Input image needs to be specified')
        exit(1)

    #saveimages(in_img*mask[:,:,np.newaxis], 'inpaint_in', imgfilenames, args.outDir)
    inpaint_out, g_out = m.restore_image(in_img, mask, args.blend)
    if not os.path.exists(args.outDir):
        os.mkdir(args.outDir)
    scipy.misc.imsave(os.path.join(args.outDir, 'mask.png'), mask)

    saveimages(g_out, 'inpaint_gen', imgfilenames, args.outDir)
    saveimages(inpaint_out, 'inpaint', imgfilenames, args.outDir)

if __name__ == '__main__':
    main()


================================================
FILE: src/inpaint_test.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob

from model_inpaint_test import ModelInpaintTest as ModelInpaint

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.01)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=1000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.1)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='completions')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--maskType', type=str,
                    choices=['random', 'center', 'left', 'file'],
                    default='center')
parser.add_argument('--maskFile', type=str,
                    default=None,
                    help='Input binary mask for file mask type')
parser.add_argument('--maskThresh', type=int,
                    default=128,
                    help='Threshold in case input mask is not binary')
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')

args = parser.parse_args()


def loadimage(filename):
    img = scipy.misc.imread(filename, mode='RGB').astype(np.float)
    return img


def saveimages(outimages, prefix='samples'):
    numimages = len(outimages)

    if not os.path.exists(args.outDir):
        os.mkdir(args.outDir)

    for i in range(numimages):
        filename = '{}_{}.png'.format(prefix, i)
        filename = os.path.join(args.outDir, filename)
        scipy.misc.imsave(filename, outimages[i, :, :, :])


def gen_mask(maskType):
    image_shape = [args.imgSize, args.imgSize]
    if maskType == 'random':
        fraction_masked = 0.2
        mask = np.ones(image_shape)
        mask[np.random.random(image_shape[:2]) < fraction_masked] = 0.0
    elif maskType == 'center':
        scale = 0.25
        assert(scale <= 0.5)
        mask = np.ones(image_shape)
        sz = args.imgSize
        l = int(args.imgSize*scale)
        u = int(args.imgSize*(1.0-scale))
        mask[l:u, l:u] = 0.0
    elif maskType == 'left':
        mask = np.ones(image_shape)
        c = args.imgSize // 2
        mask[:, :c] = 0.0
    elif maskType == 'file':
        mask = loadmask(args.maskFile, args.maskThresh)
    else:
        assert(False)
    return mask


def loadmask(filename, thresh=128):
    immask = scipy.misc.imread(filename, mode='L')
    image_shape = [args.imgSize, args.imgSize]
    mask = np.ones(image_shape)
    mask[immask < thresh] = 0
    mask[immask >= thresh] = 1
    return mask


def main():
    m = ModelInpaint(args.model_file, args)

    # Generate some samples from the model as a test
    imout = m.sample()
    saveimages(imout)

    mask = gen_mask(args.maskType)
    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage(f) for f in imgfilenames])
    elif args.in_image is not None:
        in_img = loadimage(args.in_image)
    else:
        print('Input image needs to be specified')
        exit(1)

    inpaint_out, g_out = m.restore_image(in_img, mask, args.blend)
    scipy.misc.imsave(os.path.join(args.outDir, 'mask.png'), mask)
    saveimages(g_out, 'gen')
    saveimages(inpaint_out, 'inpaint')


if __name__ == '__main__':
    main()


================================================
FILE: src/model.py
================================================
import tensorflow as tf
import numpy as np
import external.poissonblending as blending
from scipy.signal import convolve2d


class ModelInpaint():
    def __init__(self, modelfilename, config,
                 model_name='dcgan',
                 gen_input='z:0', gen_output='Tanh:0', gen_loss='Mean_2:0',
                 disc_input='real_images:0', disc_output='Sigmoid:0',
                 z_dim=100, batch_size=64):
        """
        Model for Semantic image inpainting.
        Loads frozen weights of a GAN and create the graph according to the
        loss function as described in paper

        Arguments:
            modelfilename - tensorflow .pb file with weights to be loaded
            config - training parameters: lambda_p, nIter
            gen_input - node name for generator input
            gen_output - node name for generator output
            disc_input - node name for discriminator input
            disc_output - node name for discriminator output
            z_dim - latent space dimension of GAN
            batch_size - training batch size
        """

        self.config = config

        self.batch_size = batch_size
        self.z_dim = z_dim
        self.graph, self.graph_def = ModelInpaint.loadpb(modelfilename,
                                                         model_name)

        self.gi = self.graph.get_tensor_by_name(model_name+'/'+gen_input)
        self.go = self.graph.get_tensor_by_name(model_name+'/'+gen_output)
        self.gl = self.graph.get_tensor_by_name(model_name+'/'+gen_loss)
        self.di = self.graph.get_tensor_by_name(model_name+'/'+disc_input)
        self.do = self.graph.get_tensor_by_name(model_name+'/'+disc_output)

        self.image_shape = self.go.shape[1:].as_list()

        self.l = config.lambda_p

        self.sess = tf.Session(graph=self.graph)

        self.init_z()

    def init_z(self):
        """Initializes latent variable z"""
        self.z = np.random.randn(self.batch_size, self.z_dim)

    def sample(self, z=None):
        """GAN sampler. Useful for checking if the GAN was loaded correctly"""
        if z is None:
            z = self.z
        sample_out = self.sess.run(self.go, feed_dict={self.gi: z})
        return sample_out

    def preprocess(self, images, imask, useWeightedMask = True, nsize=7):
        """Default preprocessing pipeline
        Prepare the data to be fed to the network. Weighted mask is computed
        and images and masks are duplicated to fill the batch.

        Arguments:
            images - input image or batch of images
            imask - input mask

        Returns:
            None
        """
        images = ModelInpaint.imtransform(images)
        if useWeightedMask:
            mask = ModelInpaint.createWeightedMask(imask, nsize)
        else:
            mask = imask
        mask = ModelInpaint.create3ChannelMask(mask)
        
        bin_mask = ModelInpaint.binarizeMask(imask, dtype='uint8')
        self.bin_mask = ModelInpaint.create3ChannelMask(bin_mask)

        self.masks_data = np.repeat(mask[np.newaxis, :, :, :],
                                    self.batch_size,
                                    axis=0)

        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            self.images_data = np.repeat(images[np.newaxis, :, :, :],
                                         self.batch_size,
                                         axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            self.images_data = np.repeat(images[np.newaxis, 0, :, :, :],
                                         self.batch_size,
                                         axis=0)
            ncpy = min(num_images, self.batch_size)
            self.images_data[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

    def postprocess(self, g_out, blend=True):
        """Default post-processing pipeline.
        Applies Poisson blending using the binary mask (default).

        Arguments:
            g_out - generator output
            blend - use Poisson blending (True) or alpha blending (False)
        """
        images_out = ModelInpaint.iminvtransform(g_out)
        images_in = ModelInpaint.iminvtransform(self.images_data)

        if blend:
            for i in range(len(g_out)):
                images_out[i] = ModelInpaint.poissonblending(
                    images_in[i], images_out[i], self.bin_mask
                )
        else:
            images_out = np.multiply(images_out, 1-self.masks_data) \
                         + np.multiply(images_in, self.masks_data)

        return images_out

    def build_inpaint_graph(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
            self.masks = tf.placeholder(tf.float32,
                                        [None] + self.image_shape,
                                        name='mask')
            self.images = tf.placeholder(tf.float32,
                                         [None] + self.image_shape,
                                         name='images')
            self.context_loss = tf.reduce_sum(
                    tf.contrib.layers.flatten(
                        tf.abs(tf.multiply(self.masks, self.go) -
                               tf.multiply(self.masks, self.images))), 1
                )

            self.perceptual_loss = self.gl
            self.inpaint_loss = self.context_loss + self.l*self.perceptual_loss
            self.inpaint_grad = tf.gradients(self.inpaint_loss, self.gi)

    def inpaint(self, image, mask, blend=True):
        """Performs inpainting on the given image and mask with the standard
        pipeline as described in the paper. To skip steps or try other
        pre/post-processing, the methods can be called separately.

        Arguments:
            image - input 3-channel image
            mask - input binary mask, single channel. Nonzero values are
                   treated as 1
            blend - flag to apply Poisson blending on the output (default True)

        Returns:
            post-processed image (merged/blended), raw generator output
        """
        self.build_inpaint_graph()
        self.preprocess(image, mask)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout

    def backprop_to_input(self, verbose=True):
        """Main worker function. To be called after all initialization is done.
        Performs backpropagation to the input using (accelerated) gradient
        descent to obtain a latent space representation of the target image.

        Returns:
            generator output image
        """
        v = 0
        for i in range(self.config.nIter):
            out_vars = [self.inpaint_loss, self.inpaint_grad, self.go]
            in_dict = {self.masks: self.masks_data,
                       self.gi: self.z,
                       self.images: self.images_data}

            loss, grad, imout = self.sess.run(out_vars, feed_dict=in_dict)

            v_prev = np.copy(v)
            v = self.config.momentum*v - self.config.lr*grad[0]
            self.z += (-self.config.momentum * v_prev +
                       (1 + self.config.momentum) * v)
            self.z = np.clip(self.z, -1, 1)

            if verbose:
                print('Iteration {}: {}'.format(i, np.mean(loss)))

        return imout

    @staticmethod
    def loadpb(filename, model_name='dcgan'):
        """Loads pretrained graph from ProtoBuf file

        Arguments:
            filename - path to ProtoBuf graph definition
            model_name - prefix to assign to loaded graph node names

        Returns:
            graph, graph_def - as per Tensorflow definitions
        """
        with tf.gfile.GFile(filename, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        with tf.Graph().as_default() as graph:
            tf.import_graph_def(graph_def,
                                input_map=None,
                                return_elements=None,
                                op_dict=None,
                                producer_op_list=None,
                                name=model_name)

        return graph, graph_def

    @staticmethod
    def imtransform(img):
        """Helper: rescales pixel values to the range [-1, 1]"""
        return np.array(img) / 127.5 - 1.0

    @staticmethod
    def iminvtransform(img):
        """Helper: Rescale pixel value ranges to 0 and 1"""
        return (np.array(img) + 1.0) / 2.0

    @staticmethod
    def poissonblending(img1, img2, mask):
        """Helper: interface to external poisson blending"""
        return blending.blend(img1, img2, 1 - mask)

    @staticmethod
    def createWeightedMask(mask, nsize=7):
        """Takes a binary mask and creates the weighted mask described in the
        paper: each pixel is scaled by the mean mask value in its
        nsize x nsize neighbourhood.

        Arguments:
            mask - binary mask input, numpy float32 array
            nsize - pixel neighbourhood size (default 7)
        """
        ker = np.ones((nsize,nsize), dtype=np.float32)
        ker = ker/np.sum(ker)
        wmask = mask * convolve2d(mask, ker, mode='same', boundary='symm')
        return wmask

    @staticmethod
    def binarizeMask(mask, dtype=np.float32):
        """Helper function; ensures the mask is 0/1 or 0/255 and single channel.
        If dtype is float32 (default), the output mask takes values 0 and 1;
        if dtype is uint8, the output mask takes values 0 and 255.
        """
        assert(np.dtype(dtype) == np.float32 or np.dtype(dtype) == np.uint8)
        bmask = np.array(mask, dtype=np.float32)
        bmask[bmask > 0] = 1.0
        bmask[bmask <= 0] = 0
        # Compare via np.dtype so string arguments like 'uint8' also match
        if np.dtype(dtype) == np.uint8:
            bmask = np.array(bmask*255, dtype=np.uint8)
        return bmask
    
    @staticmethod
    def create3ChannelMask(mask):
        """Helper function, repeats single channel mask to 3 channels"""
        assert(len(mask.shape)==2)
        return np.repeat(mask[:,:,np.newaxis], 3, axis=2)
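The weighted mask above multiplies the binary mask by a box-filter average of itself. The following is a standalone NumPy sketch (illustrative, not repo code) that reproduces the same arithmetic without scipy; the loop-based box filter and the 8x8 test mask are assumptions for the demo.

```python
# NumPy sketch of the weighting in ModelInpaint.createWeightedMask:
# each known pixel (mask==1) is scaled by the fraction of known pixels
# in its nsize x nsize neighbourhood, so pixels bordering the hole
# contribute less to the context loss.
import numpy as np

def weighted_mask(mask, nsize=7):
    """mask: 2-D float array of 0s (hole) and 1s (known)."""
    h, w = mask.shape
    pad = nsize // 2
    padded = np.pad(mask, pad, mode='symmetric')  # mirrors boundary='symm'
    out = np.zeros_like(mask, dtype=np.float32)
    for r in range(h):
        for c in range(w):
            window = padded[r:r + nsize, c:c + nsize]
            out[r, c] = mask[r, c] * window.mean()  # box-filter response
    return out

mask = np.ones((8, 8), dtype=np.float32)
mask[2:6, 2:6] = 0.0                 # square hole
w = weighted_mask(mask, nsize=3)
# Inside the hole the weight stays 0; far from the hole it stays 1;
# known pixels next to the hole fall strictly between 0 and 1.
```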


================================================
FILE: src/model_base.py
================================================
"""Model base class."""

import tensorflow as tf
import numpy as np
import tools
import abc

class ModelBase(object):
  __metaclass__ = abc.ABCMeta  # Python 2 ABC idiom; on Python 3, derive from abc.ABC instead
  def __init__(self, modelfilename, config,
               model_name='dcgan',
               gen_input='z:0', gen_output='Tanh:0', gen_loss='Mean_2:0',
               disc_input='real_images:0', disc_output='Sigmoid:0',
               z_dim=100, batch_size=64):
    """
    Model for Semantic image inpainting.
    Loads frozen weights of a GAN and create the graph according to the
    loss function as described in paper

    Args:
      modelfilename: tensorflow .pb file with weights to be loaded
      config: training parameters: lambda_p, nIter
      gen_input: node name for generator input
      gen_output: node name for generator output
      disc_input: node name for discriminator input
      disc_output: node name for discriminator output
      z_dim: latent space dimension of GAN
      batch_size: training batch size
    """
    self.config = config

    self.batch_size = batch_size
    self.z_dim = z_dim
    self.graph, self.graph_def = tools.loadpb(modelfilename,
    model_name)

    self.gi = self.graph.get_tensor_by_name(model_name+'/'+gen_input)
    self.go = self.graph.get_tensor_by_name(model_name+'/'+gen_output)
    self.gl = self.graph.get_tensor_by_name(model_name+'/'+gen_loss)
    self.di = self.graph.get_tensor_by_name(model_name+'/'+disc_input)
    self.do = self.graph.get_tensor_by_name(model_name+'/'+disc_output)

    self.image_shape = self.go.shape[1:].as_list()

    self.l = config.lambda_p

    self.sess = tf.Session(graph=self.graph)

    self.init_z()

  def init_z(self):
    """Initializes latent variable z"""
    self.z = np.random.randn(self.batch_size, self.z_dim)
    #self.z = np.random.uniform(size=[self.batch_size, self.z_dim])

  def sample(self, z=None):
    """GAN sampler. Useful for checking if the GAN was loaded correctly"""
    if z is None:
      z = self.z
    sample_out = self.sess.run(self.go, feed_dict={self.gi: z})
    return sample_out

  @abc.abstractmethod
  def preprocess(self, **kwargs):
    """Prepares input data for the network; must set self.images_data."""
    pass

  @abc.abstractmethod
  def postprocess(self, g_out, **kwargs):
    """Converts raw generator output into the final restored image."""
    pass

  @abc.abstractmethod
  def build_context_loss(self):
    """Builds context loss function.
    """
    pass

  @abc.abstractmethod
  def build_input_placeholders(self):
    pass

  @abc.abstractmethod
  def restore_image(self, image, **kwargs):
    """Runs the full restoration pipeline on the given image."""
    pass

  def build_restore_graph(self):
    """Assembles the restoration objective: context loss plus the weighted
    prior (perceptual) loss, and its gradient w.r.t. the latent input."""
    self.build_input_placeholders()
    self.build_context_loss()

    self.perceptual_loss = self.gl
    self.inpaint_loss = self.context_loss + self.l*self.perceptual_loss
    self.inpaint_grad = tf.gradients(self.inpaint_loss, self.gi)


  def backprop_to_input(self, verbose=True):
    """Main worker function. To be called after all initialization is done.

    Performs backpropagation to the input using (accelerated) gradient
    descent to obtain a latent space representation of the target image.

    Returns:
      generator output image
    """
    v = 0
    for i in range(self.config.nIter):
      out_vars = [self.inpaint_loss, self.inpaint_grad, self.go]
      if hasattr(self, 'masks_data'):
        in_dict = {self.masks: self.masks_data,
                   self.gi: self.z,
                   self.images: self.images_data}
      else:
        in_dict = {self.gi: self.z,
                   self.images: self.images_data}


      loss, grad, imout = self.sess.run(out_vars, feed_dict=in_dict)

      v_prev = np.copy(v)
      v = self.config.momentum*v - self.config.lr*grad[0]
      self.z += (-self.config.momentum * v_prev +
                 (1 + self.config.momentum) * v)
      self.z = np.clip(self.z, -1, 1)

      if verbose:
        print('Iteration {}: {}'.format(i, np.mean(loss)))
    return imout
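`backprop_to_input` runs a projected Nesterov-momentum update on the latent code `z`. The following pure-NumPy toy applies the same update rule to a simple quadratic loss so its behaviour can be checked; the `lr`, `momentum`, and `target` values are hypothetical (the real values come from `config`), not the repo's defaults.

```python
# Pure-NumPy sketch of the projected Nesterov update used in
# ModelBase.backprop_to_input, run on a toy quadratic loss ||z - target||^2.
import numpy as np

lr, momentum, n_iter = 0.1, 0.9, 200
z = np.array([0.9, -0.7])           # stand-in for the latent code
target = np.array([0.2, 0.1])       # minimiser of the toy loss

v = 0.0
for _ in range(n_iter):
    grad = 2.0 * (z - target)       # gradient of the toy loss
    v_prev = np.copy(v)
    v = momentum * v - lr * grad
    z += -momentum * v_prev + (1 + momentum) * v   # Nesterov step
    z = np.clip(z, -1, 1)           # keep z inside the GAN prior's support

# z converges to the (in-range) minimiser of the toy loss
```

The clip to [-1, 1] mirrors the code's assumption that the GAN was trained with latent samples in that range.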


================================================
FILE: src/model_colorize.py
================================================
import tensorflow as tf
import numpy as np
import external.poissonblending as blending
from scipy.signal import convolve2d
from PIL import Image


class ModelColorize():
    def __init__(self, modelfilename, config,
                 model_name='dcgan',
                 gen_input='z:0', gen_output='Tanh:0', gen_loss='Mean_2:0',
                 disc_input='real_images:0', disc_output='Sigmoid:0',
                 z_dim=100, batch_size=64):
        """
        Model for Semantic image inpainting.
        Loads frozen weights of a GAN and create the graph according to the
        loss function as described in paper

        Arguments:
            modelfilename - tensorflow .pb file with weights to be loaded
            config - training parameters: lambda_p, nIter
            gen_input - node name for generator input
            gen_output - node name for generator output
            disc_input - node name for discriminator input
            disc_output - node name for discriminator output
            z_dim - latent space dimension of GAN
            batch_size - training batch size
        """

        self.config = config

        self.batch_size = batch_size
        self.z_dim = z_dim
        self.graph, self.graph_def = ModelColorize.loadpb(modelfilename,
                                                         model_name)

        self.gi = self.graph.get_tensor_by_name(model_name+'/'+gen_input)
        self.go = self.graph.get_tensor_by_name(model_name+'/'+gen_output)
        self.gl = self.graph.get_tensor_by_name(model_name+'/'+gen_loss)
        self.di = self.graph.get_tensor_by_name(model_name+'/'+disc_input)
        self.do = self.graph.get_tensor_by_name(model_name+'/'+disc_output)

        self.image_shape = self.go.shape[1:-1].as_list() + [1]

        self.l = config.lambda_p

        self.sess = tf.Session(graph=self.graph)

        self.init_z()

    def init_z(self):
        """Initializes latent variable z"""
        self.z = np.random.randn(self.batch_size, self.z_dim)
        #self.z = np.random.uniform(size=[self.batch_size, self.z_dim])

    def sample(self, z=None):
        """GAN sampler. Useful for checking if the GAN was loaded correctly"""
        if z is None:
            z = self.z
        sample_out = self.sess.run(self.go, feed_dict={self.gi: z})
        return sample_out

    def preprocess(self, images):
        """Default preprocessing pipeline.
        Rescales the input grayscale image(s) and duplicates them to fill
        the batch.

        Arguments:
            images - input grayscale image(s)

        Returns:
            None (sets self.images_data)
        """
        images = ModelColorize.imtransform(images)

        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            self.images_data = np.repeat(images[np.newaxis, :, :, :],
                                         self.batch_size,
                                         axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            self.images_data = np.repeat(images[np.newaxis, 0, :, :, :],
                                         self.batch_size,
                                         axis=0)
            ncpy = min(num_images, self.batch_size)
            self.images_data[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

    def postprocess(self, g_out, blend=False):
        """Default post-processing pipeline.
        Rescales the generator output to [0, 1] and optionally transfers its
        colour onto the grayscale input via an HSV blend.

        Arguments:
            g_out - generator output
            blend - apply HSV colour blending (default False)
        """
        images_out = ModelColorize.iminvtransform(g_out)
        images_in = ModelColorize.iminvtransform(self.images_data)

        if blend:
            for idx, (i, o) in enumerate(zip(images_in, images_out)):
                images_out[idx, :, :, :] = ModelColorize.colorblend( i, o )

        return images_out

    def build_colorization_graph(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
            self.images = tf.placeholder(tf.float32,
                                         [None] + self.image_shape,
                                         name='images')
            self.context_loss = tf.reduce_sum(
                    tf.contrib.layers.flatten(
                        tf.abs(tf.image.rgb_to_grayscale(self.go) -
                               self.images)), 1
                )

            self.prior_loss = self.gl
            self.colorization_loss = self.context_loss + self.l*self.prior_loss
            self.colorization_grad = tf.gradients(self.colorization_loss, self.gi)

    def colorize(self, image, blend=False):
        """Performs colorization on the given image with the standard
        pipeline as described in the paper. To skip steps or try other
        pre/post-processing, the methods can be called separately.

        Arguments:
            image - input grayscale image
            blend - flag to apply HSV colour blending on the output
                    (default False)

        Returns:
            post-processed image (merged/blended), raw generator output
        """
        self.build_colorization_graph()
        self.preprocess(image)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout

    def backprop_to_input(self, verbose=True):
        """Main worker function. To be called after all initialization is done.
        Performs backpropagation to the input using (accelerated) gradient
        descent to obtain a latent space representation of the target image.

        Returns:
            generator output image
        """
        v = 0
        for i in range(self.config.nIter):
            out_vars = [self.colorization_loss, self.colorization_grad, self.go]
            in_dict = {self.gi: self.z,
                       self.images: self.images_data}

            loss, grad, imout = self.sess.run(out_vars, feed_dict=in_dict)

            v_prev = np.copy(v)
            v = self.config.momentum*v - self.config.lr*grad[0]
            self.z += (-self.config.momentum * v_prev +
                       (1 + self.config.momentum) * v)
            self.z = np.clip(self.z, -1, 1)

            if verbose:
                print('Iteration {}: {}'.format(i, np.mean(loss)))

        return imout

    @staticmethod
    def loadpb(filename, model_name='dcgan'):
        """Loads pretrained graph from ProtoBuf file

        Arguments:
            filename - path to ProtoBuf graph definition
            model_name - prefix to assign to loaded graph node names

        Returns:
            graph, graph_def - as per Tensorflow definitions
        """
        with tf.gfile.GFile(filename, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())

        with tf.Graph().as_default() as graph:
            tf.import_graph_def(graph_def,
                                input_map=None,
                                return_elements=None,
                                op_dict=None,
                                producer_op_list=None,
                                name=model_name)

        return graph, graph_def

    @staticmethod
    def imtransform(img):
        """Helper: rescales pixel values to the range [-1, 1]"""
        return np.array(img) / 127.5 - 1.0

    @staticmethod
    def iminvtransform(img):
        """Helper: Rescale pixel value ranges to 0 and 1"""
        return (np.array(img) + 1.0) / 2.0

    @staticmethod
    def poissonblending(img1, img2, mask):
        """Helper: interface to external poisson blending"""
        return blending.blend(img1, img2, 1 - mask)

    @staticmethod
    def colorblend(img_gray, img_color):
        """Helper to apply color from one image to another"""
        img_blended_hsv = np.zeros_like(img_color, dtype=np.uint8)
        img_blended_hsv[:,:,2:] = np.uint8(np.copy(img_gray*255))
        img_color_hsv = np.array(Image.fromarray(np.uint8(img_color*255), mode='RGB').convert('HSV'))
        img_blended_hsv[:,:,:2] = img_color_hsv[:,:,:2]
        img_out = np.array(Image.fromarray(img_blended_hsv, mode='HSV').convert('RGB'))

        return img_out
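The colorization context loss compares `tf.image.rgb_to_grayscale(self.go)` against the grey input. A small NumPy sketch of that luma projection (using the BT.601-style weights 0.299/0.587/0.114, which approximate what TensorFlow applies; the 2x2 red patch is an illustrative input):

```python
# NumPy sketch of the grayscale projection inside the colorization
# context loss: project RGB onto a weighted luma channel.
import numpy as np

def rgb_to_gray(img):
    weights = np.array([0.299, 0.587, 0.114])  # BT.601-style luma weights
    return img @ weights                       # (H, W, 3) -> (H, W)

rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 1.0                   # pure red patch
gray = rgb_to_gray(rgb)             # every pixel maps to the red weight
```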


================================================
FILE: src/model_denoising.py
================================================
import tensorflow as tf
import numpy as np
import tools
from scipy.ndimage import gaussian_filter

from model_base import ModelBase

class ModelDenoising(ModelBase):
    def preprocess(self, images, mask=None):
        """Default preprocessing pipeline.
        Rescales the input image(s), duplicates them to fill the batch, and
        applies a Gaussian blur (self.sigma must be set by the caller) to
        produce the corrupted targets.

        Arguments:
            images - input image(s)
            mask - unused, kept for interface compatibility

        Returns:
            None (sets self.images_data)
        """
        images = tools.imtransform(images)
        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            ii = np.repeat(images[np.newaxis, :, :, :],
                           self.batch_size,
                           axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            ii = np.repeat(images[np.newaxis, 0, :, :, :],
                           self.batch_size,
                           axis=0)
            ncpy = min(num_images, self.batch_size)
            ii[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

        self.images_data = np.stack([gaussian_filter(i, self.sigma)
                                     for i in ii])

    def postprocess(self, g_out, blend=True):
        """Default post-processing pipeline.
        Rescales the generator output to [0, 1]; no blending is applied.

        Arguments:
            g_out - generator output
            blend - unused, kept for interface compatibility
        """
        return tools.iminvtransform(g_out)

    def build_input_placeholders(self):
      with self.graph.as_default():
        self.masks = tf.placeholder(tf.float32,
                                    [None] + self.image_shape,
                                    name='mask')
        self.images = tf.placeholder(tf.float32,
                                     [None] + self.image_shape,
                                     name='images')

    def perform_corruption(self, images):
      # Identity: the Gaussian blur is applied to the targets in preprocess
      return images

    def build_context_loss(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
          self.context_loss = tf.reduce_sum(
            tf.contrib.layers.flatten(
              tf.abs(self.perform_corruption(self.go) -
                     self.perform_corruption(self.images))), 1
          )

    def restore_image(self, image, mask=None, blend=True):
        """Performs denoising on the given image with the standard pipeline
        as described in the paper. To skip steps or try other pre/post
        processing, the methods can be called separately.

        Arguments:
            image - input 3-channel image
            mask - unused, kept for interface compatibility
            blend - passed through to postprocess (which ignores it)

        Returns:
            post-processed image, raw generator output
        """
        self.build_restore_graph()
        self.preprocess(image, mask)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout
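`ModelDenoising.preprocess` corrupts the targets with scipy's `gaussian_filter`. The following self-contained NumPy separable blur captures the same operation on a 2-D image; the truncation radius, boundary handling, and `sigma` here are simplifications for illustration, not scipy's exact behaviour.

```python
# Minimal NumPy sketch of the Gaussian-blur corruption used by
# ModelDenoising: a normalized 1-D Gaussian kernel applied along
# rows and then columns (separable 2-D blur).
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                     # normalize to unit mass

def blur2d(img, sigma=1.0):
    radius = int(3 * sigma)                # simple 3-sigma truncation
    k = gaussian_kernel1d(sigma, radius)
    # separable blur: filter columns, then rows
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

img = np.zeros((9, 9))
img[4, 4] = 1.0                            # unit impulse
out = blur2d(img, sigma=1.0)               # impulse spreads, mass preserved
```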


================================================
FILE: src/model_inpaint.py
================================================
import tensorflow as tf
import numpy as np
import tools

from model_base import ModelBase

class ModelInpaint(ModelBase):
    def preprocess(self, images, imask, useWeightedMask=True, nsize=15):
        """Default preprocessing pipeline.
        Prepares the data to be fed to the network: a weighted mask is
        computed, and the images and masks are duplicated to fill the batch.

        Arguments:
            images - input image(s), HxWx3 or NxHxWx3
            imask - input mask, single channel
            useWeightedMask - weight mask by local context (default True)
            nsize - neighbourhood size for the weighted mask (default 15)

        Returns:
            None (sets self.images_data, self.masks_data and self.bin_mask)
        """
        images = tools.imtransform(images)
        if useWeightedMask:
            mask = tools.createWeightedMask(imask, nsize)
        else:
            mask = imask
        mask = tools.create3ChannelMask(mask)

        bin_mask = tools.binarizeMask(imask, dtype='uint8')
        self.bin_mask = tools.create3ChannelMask(bin_mask)

        self.masks_data = np.repeat(mask[np.newaxis, :, :, :],
                                    self.batch_size,
                                    axis=0)

        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            self.images_data = np.repeat(images[np.newaxis, :, :, :],
                                         self.batch_size,
                                         axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            self.images_data = np.repeat(images[np.newaxis, 0, :, :, :],
                                         self.batch_size,
                                         axis=0)
            ncpy = min(num_images, self.batch_size)
            self.images_data[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

    def postprocess(self, g_out, blend=True):
        """Default post-processing pipeline.
        Applies Poisson blending using the binary mask (default).

        Arguments:
            g_out - generator output
            blend - use Poisson blending (True) or alpha blending (False)
        """
        images_out = tools.iminvtransform(g_out)
        images_in = tools.iminvtransform(self.images_data)

        if blend:
            for i in range(len(g_out)):
                images_out[i] = tools.poissonblending(
                    images_in[i], images_out[i], self.bin_mask
                )
        else:
            images_out = np.multiply(images_out, 1-self.masks_data) \
                         + np.multiply(images_in, self.masks_data)

        return images_out

    def build_input_placeholders(self):
      with self.graph.as_default():
        self.masks = tf.placeholder(tf.float32,
                                    [None] + self.image_shape,
                                    name='mask')
        self.images = tf.placeholder(tf.float32,
                                     [None] + self.image_shape,
                                     name='images')

    def build_context_loss(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
          self.context_loss = tf.reduce_sum(
            tf.contrib.layers.flatten(
              tf.abs(tf.multiply(self.masks, self.go) -
                     tf.multiply(self.masks, self.images))), 1
          )

    def restore_image(self, image, mask, blend=True):
        """Performs inpainting on the given image and mask with the standard
        pipeline as described in the paper. To skip steps or try other
        pre/post-processing, the methods can be called separately.

        Arguments:
            image - input 3-channel image
            mask - input binary mask, single channel. Nonzero values are
                   treated as 1
            blend - flag to apply Poisson blending on the output (default True)

        Returns:
            post-processed image (merged/blended), raw generator output
        """
        self.build_restore_graph()
        self.preprocess(image, mask)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout
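The `blend=False` branch of `postprocess` is a plain alpha blend: known pixels come from the input, hole pixels from the generator. A NumPy sketch with made-up constant images and a 4x4 mask shows the arithmetic:

```python
# NumPy sketch of the alpha-blend fallback in ModelInpaint.postprocess:
# out * (1 - mask) + in * mask, so mask==1 keeps the input pixel and
# mask==0 takes the generator's pixel.
import numpy as np

gen  = np.full((4, 4, 3), 0.25)     # stand-in generator output, in [0, 1]
real = np.full((4, 4, 3), 0.75)     # stand-in input image, in [0, 1]
mask = np.ones((4, 4, 3))
mask[1:3, 1:3, :] = 0.0             # hole in the middle

merged = gen * (1 - mask) + real * mask
```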


================================================
FILE: src/model_inpaint_test.py
================================================
import tensorflow as tf
import numpy as np
import tools

from model_inpaint import ModelInpaint

def p_standard_gaussian(z):
    """Unnormalized standard Gaussian density, evaluated per batch row."""
    return tf.exp(-.5 * tf.reduce_sum(tf.square(z), axis=1))

class ModelInpaintTest(ModelInpaint):
    """Test of a small modification to the prior loss"""

    def build_restore_graph(self):
        super(ModelInpaintTest, self).build_restore_graph()

        p = p_standard_gaussian
        self.perceptual_loss = (self.gl + tf.log(tf.clip_by_value(
            1-tf.exp(tf.clip_by_value(self.gl, -5, 5)), 1e-4, 99999)
        ))*p(self.gi)

        self.inpaint_loss = self.context_loss + self.l*self.perceptual_loss
        self.inpaint_grad = tf.gradients(self.inpaint_loss, self.gi)
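The prior weight `p_standard_gaussian` is just the unnormalized standard Gaussian density exp(-||z||^2 / 2) per batch row; a NumPy mirror makes its values easy to check (the 2-D test vectors are illustrative):

```python
# NumPy mirror of p_standard_gaussian from model_inpaint_test.py:
# exp(-0.5 * sum(z^2)) evaluated independently for each batch row.
import numpy as np

def p_standard_gaussian_np(z):
    return np.exp(-0.5 * np.sum(np.square(z), axis=1))

z = np.array([[0.0, 0.0],           # at the origin the weight is 1
              [1.0, 1.0]])          # ||z||^2 = 2 -> weight exp(-1)
p = p_standard_gaussian_np(z)
```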



================================================
FILE: src/model_quantize.py
================================================
import tensorflow as tf
import numpy as np
import tools

from model_base import ModelBase

class ModelQuantize(ModelBase):
    def preprocess(self, images, mask=None):
        """Default preprocessing pipeline.
        Rescales the input image(s) and duplicates them to fill the batch.

        Arguments:
            images - input image(s)
            mask - unused, kept for interface compatibility

        Returns:
            None (sets self.images_data)
        """
        images = tools.imtransform(images)
        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            self.images_data = np.repeat(images[np.newaxis, :, :, :],
                                         self.batch_size,
                                         axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            self.images_data = np.repeat(images[np.newaxis, 0, :, :, :],
                                         self.batch_size,
                                         axis=0)
            ncpy = min(num_images, self.batch_size)
            self.images_data[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

    def postprocess(self, g_out, blend=True):
        """Default post-processing pipeline.
        Rescales the generator output to [0, 1]; no blending is applied.

        Arguments:
            g_out - generator output
            blend - unused, kept for interface compatibility
        """
        return tools.iminvtransform(g_out)

    def build_input_placeholders(self):
      with self.graph.as_default():
        self.masks = tf.placeholder(tf.float32,
                                    [None] + self.image_shape,
                                    name='mask')
        self.images = tf.placeholder(tf.float32,
                                     [None] + self.image_shape,
                                     name='images')

    def perform_corruption(self, images):
      """Quantizes images in [-1, 1]; self.quantize_factor must be set by the
      caller before use."""
      images = (images + 1.0) / 2.0
      images *= (255.0 / self.quantize_factor)
      images = tf.floor(images)
      images /= (255.0 / self.quantize_factor)
      images *= 2.
      images -= 1.
      return images
    
    def perform_corruption_new(self, images):
      images = (images + 1.0)/2.0
      images = tf.floor(images*self.levels)/self.levels
      images *= 2.
      images -= 1.
      return images

    def build_context_loss(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
          self.context_loss = tf.reduce_sum(
            tf.contrib.layers.flatten(
              tf.abs(self.go -
                     self.perform_corruption(self.images))), 1
          )

    def restore_image(self, image, mask=None, blend=True):
        """Performs dequantization on the given image with the standard
        pipeline as described in the paper. To skip steps or try other
        pre/post-processing, the methods can be called separately.

        Arguments:
            image - input 3-channel image
            mask - unused, kept for interface compatibility
            blend - passed through to postprocess (which ignores it)

        Returns:
            post-processed image, raw generator output
        """
        self.build_restore_graph()
        self.preprocess(image, mask)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout
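`ModelQuantize.perform_corruption` maps [-1, 1] onto a coarse grid via floor and maps back. A NumPy mirror of the same arithmetic (the `quantize_factor` of 32 is illustrative; the class leaves the attribute for the caller to set):

```python
# NumPy mirror of ModelQuantize.perform_corruption: rescale [-1, 1] to
# the quantisation grid, floor away sub-level detail, and rescale back.
import numpy as np

def quantize(images, quantize_factor=32):
    x = (images + 1.0) / 2.0                   # [-1, 1] -> [0, 1]
    x *= 255.0 / quantize_factor               # scale to quantisation grid
    x = np.floor(x)                            # drop sub-level detail
    x /= 255.0 / quantize_factor               # back to [0, 1]
    return x * 2.0 - 1.0                       # back to [-1, 1]

x = np.linspace(-1, 1, 5)
q = quantize(x, quantize_factor=32)
# flooring never increases a value, so q <= x elementwise
```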


================================================
FILE: src/model_superres.py
================================================
import tensorflow as tf
import numpy as np
import tools

from model_base import ModelBase

class ModelSuperres(ModelBase):
    def preprocess(self, images, mask=None):
        """Default preprocessing pipeline.
        Rescales the input image(s) and duplicates them to fill the batch.

        Arguments:
            images - input image(s)
            mask - unused, kept for interface compatibility

        Returns:
            None (sets self.images_data)
        """
        images = tools.imtransform(images)
        #Generate multiple candidates for completion if single image is given
        if len(images.shape) == 3:
            self.images_data = np.repeat(images[np.newaxis, :, :, :],
                                         self.batch_size,
                                         axis=0)
        elif len(images.shape) == 4:
            #Ensure batch is filled
            num_images = images.shape[0]
            self.images_data = np.repeat(images[np.newaxis, 0, :, :, :],
                                         self.batch_size,
                                         axis=0)
            ncpy = min(num_images, self.batch_size)
            self.images_data[:ncpy, :, :, :] = images[:ncpy, :, :, :].copy()

    def postprocess(self, g_out, blend=True):
        """Default post-processing pipeline.
        Rescales the generator output back to [0, 1]. No blending is
        applied; the blend flag is kept for interface compatibility.

        Arguments:
            g_out - generator output
            blend - unused
        """
        return tools.iminvtransform(g_out)

    def build_input_placeholders(self):
        with self.graph.as_default():
            self.masks = tf.placeholder(tf.float32,
                                        [None] + self.image_shape,
                                        name='mask')
            self.images = tf.placeholder(tf.float32,
                                         [None] + self.image_shape,
                                         name='images')

    def perform_corruption(self, images):
        """Corrupt by bilinear downsampling then upsampling in-graph."""
        down_sample_factor = self.down_sample_factor

        ret = tf.image.resize_bilinear(
            images,
            [tf.shape(images)[1] // down_sample_factor,
             tf.shape(images)[2] // down_sample_factor],
            align_corners=True)

        ret = tf.image.resize_bilinear(
            ret,
            [tf.shape(images)[1],
             tf.shape(images)[2]],
            align_corners=True)
        return ret

    def build_context_loss(self):
        """Builds the context and prior loss objective"""
        with self.graph.as_default():
          self.context_loss = tf.reduce_sum(
            tf.contrib.layers.flatten(
              tf.abs(self.perform_corruption(self.go) -
                     self.perform_corruption(self.images))), 1
          )

    def restore_image(self, image, mask=None, blend=True):
        """Perform inpainting with the given image and mask with the standard
        pipeline as described in paper. To skip steps or try other pre/post
        processing, the methods can be called seperately.

        Arguments:
            image - input 3 channel image
            mask - input binary mask, single channel. Nonzeros values are
                   treated as 1
            blend - Flag to apply Poisson blending on output, Default = True

        Returns:
            post processed image (merged/blneded), raw generator output
        """
        self.build_restore_graph()
        self.preprocess(image, mask)

        imout = self.backprop_to_input()

        return self.postprocess(imout, blend), imout


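The context loss built above compares a corrupted version of the generator output against the equally corrupted input, so only low-frequency content is constrained. A minimal NumPy sketch of the same idea, using a box-filter downsample and nearest-neighbour upsample as a simplified stand-in for the graph's bilinear resizes (these helper names are illustrative, not part of the repository):

```python
import numpy as np

def corrupt_np(images, factor=4):
    # Box-filter downsample then nearest-neighbour upsample; a simplified
    # analogue of the two bilinear resizes in perform_corruption.
    n, h, w, c = images.shape
    down = images.reshape(n, h // factor, factor,
                          w // factor, factor, c).mean(axis=(2, 4))
    return down.repeat(factor, axis=1).repeat(factor, axis=2)

def context_loss_np(g_out, images, factor=4):
    # L1 distance between corrupted candidates and corrupted input,
    # one scalar per batch element, mirroring build_context_loss.
    diff = np.abs(corrupt_np(g_out, factor) - corrupt_np(images, factor))
    return diff.reshape(diff.shape[0], -1).sum(axis=1)
```

A candidate identical to the input gives zero loss; any difference that survives downsampling contributes.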
================================================
FILE: src/quantize.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob
from helper import loadimage, saveimages

from model_quantize import ModelQuantize

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.0001)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=1000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.001)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='quantize')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')
parser.add_argument('-c', action='store_true', help='corrupt image on the fly')

args = parser.parse_args()


def corrupt_image(img, quantize_factor=4):
    """Quantize pixel values; the chain of rescales snaps each pixel
    down to a multiple of quantize_factor."""
    img = img / 255.0            # avoid mutating the caller's array
    img *= (255.0 / quantize_factor)
    images = np.floor(img)
    images /= (255.0 / quantize_factor)
    images *= 255.0
    return images.astype(int)

def main():
    m = ModelQuantize(args.model_file, args)
    dd = 50
    m.quantize_factor = dd
    #m.levels = 4

    # Generate some samples from the model as a test
    #imout = m.sample()
    #saveimages(imout)

    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage(f) for f in imgfilenames])
        if args.c:
            in_corrupt_img = np.array([corrupt_image(loadimage(f), dd) for f in imgfilenames])
        else:
            in_corrupt_img = np.copy(in_img)

    elif args.in_image is not None:
        imgfilenames = [args.in_image]
        in_img = loadimage(args.in_image)
    else:
        print('Input image needs to be specified')
        exit(1)
    #saveimages(in_corrupt_img, prefix='input')
    inpaint_out, g_out = m.restore_image(in_img)
    saveimages(g_out, 'quantize_gen', imgfilenames, args.outDir)
    saveimages(inpaint_out, 'quantize', imgfilenames, args.outDir)


if __name__ == '__main__':
    main()


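The rescale chain in corrupt_image above simplifies, up to floating-point rounding at exact multiples, to snapping each pixel down to a multiple of quantize_factor, so the argument is really a quantisation step size (which is why main() sets it to 50). A sketch of the closed form (quantize_step is an illustrative name, not part of the repository):

```python
import numpy as np

def quantize_step(img, step=4):
    # Closed form of the corrupt_image rescale chain: dividing by 255,
    # scaling by 255/step, flooring, and scaling back reduces to
    # floor(v / step) * step, i.e. floor(255 / step) + 1 grey levels.
    return (np.floor(np.asarray(img, dtype=float) / step) * step).astype(int)
```

For example, with step=50 a pixel value of 130 maps to 100 and 255 maps to 250.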
================================================
FILE: src/superres.py
================================================
import tensorflow as tf
import scipy.misc
import argparse
import os
import numpy as np
from glob import glob
from helper import loadimage, saveimages

from model_superres import ModelSuperres

parser = argparse.ArgumentParser()
parser.add_argument('--model_file', type=str, help="Pretrained GAN model")
parser.add_argument('--lr', type=float, default=0.0001)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--nIter', type=int, default=1000)
parser.add_argument('--imgSize', type=int, default=64)
parser.add_argument('--batch_size', type=int, default=64)
parser.add_argument('--lambda_p', type=float, default=0.01)
parser.add_argument('--checkpointDir', type=str, default='checkpoint')
parser.add_argument('--outDir', type=str, default='superres')
parser.add_argument('--blend', action='store_true', default=False,
                    help="Blend predicted image to original image")
parser.add_argument('--in_image', type=str, default=None,
                    help='Input Image (ignored if inDir is specified)')
parser.add_argument('--inDir', type=str, default=None,
                    help='Path to input images')
parser.add_argument('--imgExt', type=str, default='png',
                    help='input images file extension')
parser.add_argument('-c', action='store_true', help='corrupt image on the fly')

args = parser.parse_args()


def corrupt_image(img, down_sample_factor=4):
    """Simulate a low-resolution input: bilinear downsample then upsample.
    Note: scipy.misc.imresize was removed in SciPy 1.3."""
    ret = scipy.misc.imresize(img, 1. / down_sample_factor)
    ret = scipy.misc.imresize(ret, 1. * down_sample_factor)
    return ret

def main():
    m = ModelSuperres(args.model_file, args)
    dd = 4
    m.down_sample_factor = dd

    # Generate some samples from the model as a test
    #imout = m.sample()
    #saveimages(imout)

    if args.inDir is not None:
        imgfilenames = glob( args.inDir + '/*.' + args.imgExt )
        print('{} images found'.format(len(imgfilenames)))
        in_img = np.array([loadimage(f) for f in imgfilenames])
        if args.c:
            in_corrupt_img = np.array([corrupt_image(loadimage(f), dd) for f in imgfilenames])
        else:
            in_corrupt_img = np.copy(in_img)


    elif args.in_image is not None:
        imgfilenames = [args.in_image]
        in_img = loadimage(args.in_image)
    else:
        print('Input image needs to be specified')
        exit(1)
    #saveimages(in_corrupt_img, prefix='input')
    inpaint_out, g_out = m.restore_image(in_img)

    saveimages(g_out, 'superres_gen', imgfilenames, args.outDir)
    saveimages(inpaint_out, 'superres', imgfilenames, args.outDir)

if __name__ == '__main__':
    main()


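Since corrupt_image above depends on scipy.misc.imresize, which was removed in SciPy 1.3, here is a sketch of an equivalent corruption using Pillow (already listed in requirements.txt). corrupt_image_pil is a hypothetical name, not part of this repository:

```python
import numpy as np
from PIL import Image  # Pillow is listed in requirements.txt

def corrupt_image_pil(img, down_sample_factor=4):
    # Same bilinear down/up-sampling as corrupt_image, via Pillow.
    h, w = img.shape[:2]
    small = Image.fromarray(np.uint8(img)).resize(
        (w // down_sample_factor, h // down_sample_factor), Image.BILINEAR)
    return np.asarray(small.resize((w, h), Image.BILINEAR), dtype=np.uint8)
```

Note that Pillow's resize takes (width, height) order, the reverse of NumPy's (rows, cols).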
================================================
FILE: src/tools.py
================================================
"""Helper functions
"""
import tensorflow as tf
import numpy as np
import external.poissonblending as blending
from scipy.signal import convolve2d

###################Helpers for image inapinting #######################
def imtransform(img):
	"""Helper: Rescale pixel values to the range [-1, 1]"""
	return np.array(img) / 127.5 - 1.0

def iminvtransform(img):
	"""Helper: Rescale pixel values to the range [0, 1]"""
	return (np.array(img) + 1.0) / 2.0

def poissonblending(img1, img2, mask):
	"""Helper: interface to external poisson blending"""
	return blending.blend(img1, img2, 1 - mask)

def createWeightedMask(mask, nsize=7):
	"""Takes a binary mask and creates the weighted mask described in the
	paper, weighting each known pixel by the fraction of missing pixels
	in its neighbourhood.
	Args:
		mask: binary mask input. numpy float32 array
		nsize: pixel neighbourhood size. default = 7
	"""
	ker = np.ones((nsize,nsize), dtype=np.float32)
	ker = ker/np.sum(ker)
	wmask = mask * convolve2d(1-mask, ker, mode='same', boundary='symm')
	return wmask

def binarizeMask(mask, dtype=np.float32):
	"""Helper function, ensures mask is 0/1 or 0/255 and single channel.

	If dtype is float32 (default), the output mask takes values 0 and 1;
	if dtype is uint8, the output mask takes values 0 and 255.

	Args:
		mask: input mask array; values > 0 are treated as 1.
		dtype: output dtype, np.float32 or np.uint8.
	"""
	assert(np.dtype(dtype) == np.float32 or np.dtype(dtype) == np.uint8)
	bmask = np.array(mask, dtype=np.float32)
	bmask[bmask>0] = 1.0
	bmask[bmask<=0] = 0
	if dtype == np.uint8:
		bmask = np.array(bmask*255, dtype=np.uint8)
	return bmask

def create3ChannelMask(mask):
	"""Helper function, repeats single channel mask to 3 channels"""
	assert(len(mask.shape)==2)
	return np.repeat(mask[:,:,np.newaxis], 3, axis=2)

def loadpb(filename, model_name='dcgan'):
  """Loads pretrained graph from ProtoBuf file
  Args:
    filename: path to ProtoBuf graph definition.
    model_name: prefix to assign to loaded graph node names.
  Returns:
    graph, graph_def: as per Tensorflow definitions.
  """
  with tf.gfile.GFile(filename, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

  with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def,
                        input_map=None,
                        return_elements=None,
                        op_dict=None,
                        producer_op_list=None,
                        name=model_name)

  return graph, graph_def
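To see what createWeightedMask computes, here is a toy example (the mask below is hypothetical): known pixels get a weight equal to the fraction of missing pixels in their local window, so the context loss concentrates on known pixels bordering the hole.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy demonstration of the createWeightedMask computation.
mask = np.ones((9, 9), dtype=np.float32)   # hypothetical toy mask: 1 = known
mask[3:6, 3:6] = 0.0                       # 3x3 hole in the centre
ker = np.ones((3, 3), dtype=np.float32) / 9.0
wmask = mask * convolve2d(1 - mask, ker, mode='same', boundary='symm')
```

Pixels inside the hole and pixels far from it both end up with zero weight; only known pixels near the hole boundary carry weight.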