[
  {
    "path": ".gitignore",
    "content": "*MNIST_data*\nChapter02/test/*\nChapter02/train/*\nChapter03/inception5h.zip\nChapter03/classify_image_graph_def.pb\nChapter03/imagenet_2012_challenge_label_map_proto.pbtxt\nChapter03/imagenet_comp_graph_label_strings.txt\nChapter03/imagenet_synset_to_human_label_map.txt\nChapter03/inception-2015-12-05.tgz\nChapter03/LICENSE\nChapter03/tensorflow_inception_graph.pb\nChapter03/stitched_filters_3x3.png\nChapter03/cropped_panda.jpg\nChapter08/gen_*\n.idea/*\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n"
  },
  {
    "path": "Chapter01/1_hello_tensorflow.py",
    "content": "import tensorflow as tf\nhello = tf.constant('Hello, TensorFlow!')\nsession = tf.Session()\nprint(session.run(hello))"
  },
  {
    "path": "Chapter01/2_add.py",
    "content": "import tensorflow as tf\n\nx = tf.placeholder(tf.float32)\ny = tf.placeholder(tf.float32)\n\nz = x + y\n\nsession = tf.Session()\n\nvalues = {x: 5.0, y: 4.0}\n\nresult = session.run([z], values)\nprint(result)\n"
  },
  {
    "path": "Chapter01/3_add_tensorboard.py",
    "content": "import tensorflow as tf\n\nx = tf.placeholder(tf.float32, name='x')\ny = tf.placeholder(tf.float32, name='y')\nz = tf.add(x, y, name='sum')\nsession = tf.Session()\nsummary_writer = tf.summary.FileWriter('/tmp/1', session.graph)\nsummary_writer.flush()\n\n"
  },
  {
    "path": "Chapter02/1_mnist_tf_perceptron.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 200\n\nx_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\n\nweights = tf.Variable(tf.random_normal([input_size, no_classes]))\nbias = tf.Variable(tf.random_normal([no_classes]))\n\nlogits = tf.matmul(x_input, weights) + bias\n\nsoftmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_input, logits=logits)\nloss_operation = tf.reduce_mean(softmax_cross_entropy)\noptimiser = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nfor batch_no in range(total_batches):\n    mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, train_labels = mnist_batch[0], mnist_batch[1]\n    _, loss_value = session.run([optimiser, loss_operation], feed_dict={x_input: train_images,\n                                                                        y_input: train_labels})\n    print(loss_value)\n\npredictions = tf.argmax(logits, 1)\ncorrect_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))\ntest_images, test_labels = mnist_data.test.images, mnist_data.test.labels\naccuracy_value = session.run(accuracy_operation, feed_dict={x_input: test_images,\n                                                            y_input: test_labels})\n                                                            \nprint('Accuracy : ', accuracy_value)\nsession.close()\n"
  },
  {
    "path": "Chapter02/2_mnist_cnn.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 200\n\nx_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\nx_input_reshape = tf.reshape(x_input, [-1, 28, 28, 1],\n                             name='input_reshape')\n\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3],\n                      activation=tf.nn.relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\nconvolution_layer_1 = convolution_layer(x_input_reshape, 64)\npooling_layer_1 = 
pooling_layer(convolution_layer_1)\nconvolution_layer_2 = convolution_layer(pooling_layer_1, 128)\npooling_layer_2 = pooling_layer(convolution_layer_2)\nflattened_pool = tf.reshape(pooling_layer_2, [-1, 5 * 5 * 128],\n                            name='flattened_pool')\ndense_layer_bottleneck = dense_layer(flattened_pool, 1024)\n\ndropout_bool = tf.placeholder(tf.bool)\ndropout_layer = tf.layers.dropout(\n        inputs=dense_layer_bottleneck,\n        rate=0.4,\n        training=dropout_bool\n    )\nlogits = dense_layer(dropout_layer, no_classes)\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(\n        labels=y_input, logits=logits)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\n\nwith tf.name_scope('accuracy'):\n    with tf.name_scope('correct_prediction'):\n        predictions = tf.argmax(logits, 1)\n        correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\n    with tf.name_scope('accuracy'):\n        accuracy_operation = tf.reduce_mean(\n            tf.cast(correct_predictions, tf.float32))\ntf.summary.scalar('accuracy', accuracy_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\ntest_summary_writer = tf.summary.FileWriter('/tmp/test')\n\ntest_images, test_labels = mnist_data.test.images, mnist_data.test.labels\n\nfor batch_no in range(total_batches):\n    mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, train_labels = mnist_batch[0], mnist_batch[1]\n    _, merged_summary = session.run([optimiser, merged_summary_operation],\n                                    feed_dict={\n        x_input: train_images,\n        y_input: 
train_labels,\n        dropout_bool: True\n    })\n    train_summary_writer.add_summary(merged_summary, batch_no)\n    if batch_no % 10 == 0:\n        merged_summary, _ = session.run([merged_summary_operation,\n                                         accuracy_operation], feed_dict={\n            x_input: test_images,\n            y_input: test_labels,\n            dropout_bool: False\n        })\n        test_summary_writer.add_summary(merged_summary, batch_no)\n"
  },
  {
    "path": "Chapter02/3_mnist_keras.py",
    "content": "import tensorflow as tf\n\nbatch_size = 128\nno_classes = 10\nepochs = 50\nimage_height, image_width = 28, 28\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\nx_train = x_train.reshape(x_train.shape[0], image_height, image_width, 1)\nx_test = x_test.reshape(x_test.shape[0], image_height, image_width, 1)\ninput_shape = (image_height, image_width, 1)\n\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\n\nx_train /= 255\nx_test /= 255\n\ny_train = tf.keras.utils.to_categorical(y_train, no_classes)\ny_test = tf.keras.utils.to_categorical(y_test, no_classes)\n\n\ndef simple_cnn(input_shape):\n    model = tf.keras.models.Sequential()\n    model.add(tf.keras.layers.Conv2D(\n        filters=64,\n        kernel_size=(3, 3),\n        activation='relu',\n        input_shape=input_shape\n    ))\n    model.add(tf.keras.layers.Conv2D(\n        filters=128,\n        kernel_size=(3, 3),\n        activation='relu'\n    ))\n    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Flatten())\n    model.add(tf.keras.layers.Dense(units=1024, activation='relu'))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Dense(units=no_classes, activation='softmax'))\n    model.compile(loss=tf.keras.losses.categorical_crossentropy,\n                  optimizer=tf.keras.optimizers.Adam(),\n                  metrics=['accuracy'])\n    return model\n\nsimple_cnn_model = simple_cnn(input_shape)\n\nsimple_cnn_model.fit(x_train, y_train, batch_size, epochs, (x_test, y_test))\ntrain_loss, train_accuracy = simple_cnn_model.evaluate(\n    x_train, y_train, verbose=0)\nprint('Train data loss:', train_loss)\nprint('Train data accuracy:', train_accuracy)\n\ntest_loss, test_accuracy = simple_cnn_model.evaluate(\n    x_test, y_test, verbose=0)\nprint('Test data loss:', test_loss)\nprint('Test data accuracy:', test_accuracy)\n"
  },
  {
    "path": "Chapter02/4_cat_vs_dog_data_prep.py",
    "content": "import os\nimport shutil\n\nwork_dir = ''\nimage_names = sorted(os.listdir(os.path.join(work_dir, 'train')))\n\n\ndef copy_files(prefix_str, range_start, range_end, target_dir):\n    image_paths = [os.path.join(work_dir, 'train', prefix_str + '.' + str(i) + '.jpg')\n                   for i in range(range_start, range_end)]\n    dest_dir = os.path.join(work_dir, 'data', target_dir, prefix_str)\n    os.makedirs(dest_dir)\n    for image_path in image_paths:\n        shutil.copy(image_path, dest_dir)\n\n\ncopy_files('dog', 0, 1000, 'train')\ncopy_files('cat', 0, 1000, 'train')\ncopy_files('dog', 1000, 1400, 'test')\ncopy_files('cat', 1000, 1400, 'test')\n"
  },
  {
    "path": "Chapter02/5_cat_vs_dog_cnn.py",
    "content": "import numpy as np\nimport os\nimport tensorflow as tf\n\nwork_dir = ''\n\nimage_height, image_width = 150, 150\ntrain_dir = os.path.join(work_dir, 'train')\ntest_dir = os.path.join(work_dir, 'test')\nno_classes = 2\nno_validation = 800\nepochs = 2\nbatch_size = 200\nno_train = 2000\nno_test = 800\ninput_shape = (image_height, image_width, 3)\nepoch_steps = no_train // batch_size\ntest_steps = no_test // batch_size\n\n\ndef simple_cnn(input_shape):\n    model = tf.keras.models.Sequential()\n    model.add(tf.keras.layers.Conv2D(\n        filters=64,\n        kernel_size=(3, 3),\n        activation='relu',\n        input_shape=input_shape\n    ))\n    model.add(tf.keras.layers.Conv2D(\n        filters=128,\n        kernel_size=(3, 3),\n        activation='relu'\n    ))\n    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Flatten())\n    model.add(tf.keras.layers.Dense(units=1024, activation='relu'))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Dense(units=no_classes, activation='softmax'))\n    model.compile(loss=tf.keras.losses.categorical_crossentropy,\n                  optimizer=tf.keras.optimizers.Adam(),\n                  metrics=['accuracy'])\n    return model\n\nsimple_cnn_model = simple_cnn(input_shape)\n\ngenerator_train = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)\ngenerator_test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. 
/ 255)\n\ntrain_images = generator_train.flow_from_directory(\n    train_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height))\n\ntest_images = generator_test.flow_from_directory(\n    test_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height))\n\nsimple_cnn_model.fit_generator(\n    train_images,\n    steps_per_epoch=epoch_steps,\n    epochs=epochs,\n    validation_data=test_images,\n    validation_steps=test_steps)\n\n"
  },
  {
    "path": "Chapter02/6_cat_vs_dog_augmentation.py",
    "content": "import tensorflow as tf\nimport os\n\nwork_dir = ''\n\n\nimage_height, image_width = 150, 150\ntrain_dir = os.path.join(work_dir, 'train')\ntest_dir = os.path.join(work_dir, 'test')\nno_classes = 2\nno_validation = 800\nepochs = 50\nbatch_size = 32\nno_train = 2000\nno_test = 800\ninput_shape = (image_height, image_width, 3)\nepoch_steps = no_train // batch_size\ntest_steps = no_test // batch_size\n\ndef simple_cnn(input_shape):\n    model = tf.keras.models.Sequential()\n    model.add(tf.keras.layers.Conv2D(\n        filters=64,\n        kernel_size=(3, 3),\n        activation='relu',\n        input_shape=input_shape\n    ))\n    model.add(tf.keras.layers.Conv2D(\n        filters=128,\n        kernel_size=(3, 3),\n        activation='relu'\n    ))\n    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Flatten())\n    model.add(tf.keras.layers.Dense(units=1024, activation='relu'))\n    model.add(tf.keras.layers.Dropout(rate=0.3))\n    model.add(tf.keras.layers.Dense(units=no_classes, activation='softmax'))\n    model.compile(loss=tf.keras.losses.categorical_crossentropy,\n                  optimizer=tf.keras.optimizers.Adam(),\n                  metrics=['accuracy'])\n    return model\n\nsimple_cnn_model = simple_cnn(input_shape)\n\ngenerator_train = tf.keras.preprocessing.image.ImageDataGenerator(\n    rescale=1. / 255,\n    horizontal_flip=True,\n    zoom_range=0.3,\n    shear_range=0.3,)\n\ngenerator_test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. 
/ 255)\n\ntrain_images = generator_train.flow_from_directory(\n    train_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height))\n\ntest_images = generator_test.flow_from_directory(\n    test_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height))\n\nsimple_cnn_model.fit_generator(\n    train_images,\n    steps_per_epoch=epoch_steps,\n    epochs=epochs,\n    validation_data=test_images,\n    validation_steps=test_steps)\n\n\n\n"
  },
  {
    "path": "Chapter02/7_cat_vs_dog_bottleneck.py",
    "content": "import numpy as np\nimport os\nimport tensorflow as tf\n\nwork_dir = ''\n\nimage_height, image_width = 150, 150\ntrain_dir = os.path.join(work_dir, 'train')\ntest_dir = os.path.join(work_dir, 'test')\nno_classes = 2\nno_validation = 800\nepochs = 50\nbatch_size = 32\nno_train = 2000\nno_test = 800\ninput_shape = (image_height, image_width, 3)\nepoch_steps = no_train // batch_size\ntest_steps = no_test // batch_size\n\ngenerator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)\n\nmodel = tf.keras.applications.VGG16(include_top=False)\n\ntrain_images = generator.flow_from_directory(\n    train_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height),\n    class_mode=None,\n    shuffle=False\n)\ntrain_bottleneck_features = model.predict_generator(train_images, epoch_steps)\n\ntest_images = generator.flow_from_directory(\n    test_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height),\n    class_mode=None,\n    shuffle=False\n)\n\ntest_bottleneck_features = model.predict_generator(test_images, test_steps)\n\ntrain_labels = np.array([0] * int(no_train / 2) + [1] * int(no_train / 2))\ntest_labels = np.array([0] * int(no_test / 2) + [1] * int(no_test / 2))\n\nmodel = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.Flatten(input_shape=train_bottleneck_features.shape[1:]))\nmodel.add(tf.keras.layers.Dense(1024, activation='relu'))\nmodel.add(tf.keras.layers.Dropout(0.3))\nmodel.add(tf.keras.layers.Dense(1, activation='softmax'))\nmodel.compile(loss=tf.keras.losses.categorical_crossentropy,\n              optimizer=tf.keras.optimizers.Adam(),\n              metrics=['accuracy'])\n\nmodel.fit(\n    train_bottleneck_features,\n    train_labels,\n    batch_size=batch_size,\n    epochs=epochs,\n    validation_data=(test_bottleneck_features, test_labels))\n"
  },
  {
    "path": "Chapter02/8_cat_vs_dog_fine_tune.py",
    "content": "import tensorflow as tf\nimport os\n\nwork_dir = ''\n\nweights_path = '../keras/examples/vgg16_weights.h5'\ntop_model_weights_path = 'fc_model.h5'\n\nimage_height, image_width = 150, 150\ntrain_dir = os.path.join(work_dir, 'train')\ntest_dir = os.path.join(work_dir, 'test')\nno_classes = 2\nno_validation = 800\nepochs = 50\nbatch_size = 32\nno_train = 2000\nno_test = 800\ninput_shape = (image_height, image_width, 3)\nepoch_steps = no_train // batch_size\ntest_steps = no_test // batch_size\n\nmodel = tf.keras.applications.VGG16(include_top=False)\nmodel_fine_tune = tf.keras.models.Sequential()\nmodel_fine_tune.add(tf.keras.layers.Flatten(input_shape=model.output_shape))\nmodel_fine_tune.add(tf.keras.layers.Dense(256, activation='relu'))\nmodel_fine_tune.add(tf.keras.layers.Dropout(0.5))\nmodel_fine_tune.add(tf.keras.layers.Dense(no_classes, activation='softmax'))\n\nmodel_fine_tune.load_weights(top_model_weights_path)\nmodel.add(model_fine_tune)\n\nfor vgg_layer in model.layers[:25]:\n    vgg_layer.trainable = False\n\nmodel.compile(loss='binary_crossentropy',\n              optimizer=tf.keras.optimizers.SGD(lr=1e-4, momentum=0.9),\n              metrics=['accuracy'])\n\ngenerator_train = tf.keras.preprocessing.image.ImageDataGenerator(\n    rescale=1. / 255,\n    horizontal_flip=True,\n    zoom_range=0.3,\n    shear_range=0.3\n    )\n\ngenerator_test = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)\n\ngenerator_train = generator_train.flow_from_directory(\n    train_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height)\n)\n\ngenerator_test = generator_test.flow_from_directory(\n    test_dir,\n    batch_size=batch_size,\n    target_size=(image_width, image_height)\n)\n\nmodel.fit_generator(\n    generator_train,\n    steps_per_epoch=epoch_steps,\n    epochs=epochs,\n    validation_data=generator_test,\n    validation_steps=test_steps\n)"
  },
  {
    "path": "Chapter03/1_embedding_vis.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport os\nimport numpy as np\n\n\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 100\n\nx_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\nx_input_reshape = tf.reshape(x_input, [-1, 28, 28, 1], name='input_reshape')\n\nconvolution_layer_1 = tf.layers.conv2d(\n  inputs=x_input_reshape,\n  filters=64,\n  kernel_size=[3, 3],\n  activation=tf.nn.relu,\n)\nadd_variable_summary(convolution_layer_1, 'convolution1')\npooling_layer_1 = tf.layers.max_pooling2d(\n    inputs=convolution_layer_1,\n    pool_size=[2, 2],\n    strides=2\n)\nadd_variable_summary(pooling_layer_1, 'pooling1')\n\n\nconvolution_layer_2 = tf.layers.conv2d(\n  inputs=pooling_layer_1,\n  filters=128,\n  kernel_size=[3, 3],\n  activation=tf.nn.relu,\n)\nadd_variable_summary(convolution_layer_2, 'convolution2')\npooling_layer_2 = tf.layers.max_pooling2d(\n    inputs=convolution_layer_2,\n    pool_size=[2, 2],\n    strides=2\n)\nadd_variable_summary(pooling_layer_2, 'pool2')\n\nflattened_pool = tf.reshape(pooling_layer_2, [-1, 5 * 5 * 128], name='flattened_pool')\n\ndense_layer = tf.layers.dense(\n    inputs=flattened_pool,\n    units=1024,\n    activation=tf.nn.relu,\n    
name='dense'\n)\nadd_variable_summary(dense_layer, 'dense')\n\ndropout_bool = tf.placeholder(tf.bool)\ndropout_layer = tf.layers.dropout(\n    inputs=dense_layer,\n    rate=0.4,\n    training=dropout_bool,\n    name='dropout'\n)\n\nlogits = tf.layers.dense(inputs=dropout_layer, units=no_classes, name='logits')\nadd_variable_summary(logits, 'logits')\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_input,\n                                                                    logits=logits)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\n\nwith tf.name_scope('accuracy'):\n    with tf.name_scope('correct_prediction'):\n        predictions = tf.argmax(logits, 1)\n        correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\n    with tf.name_scope('accuracy'):\n        accuracy_operation = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))\ntf.summary.scalar('accuracy', accuracy_operation)\n\nsession = tf.Session()\n\n# Adding the variable in between creating the session and initialising the graph\nno_embedding_data = 1000\nembedding_variable = tf.Variable(tf.stack(\n    mnist.test.images[:no_embedding_data], axis=0), trainable=False)\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\n\ntest_images, test_labels = mnist.test.images, mnist.test.labels\n\nfor batch_no in range(total_batches):\n    image_batch = mnist.train.next_batch(100)\n    _, merged_summary = session.run([optimiser, merged_summary_operation], feed_dict={\n        x_input: image_batch[0],\n        y_input: image_batch[1],\n        dropout_bool: True\n    })\n    train_summary_writer.add_summary(merged_summary, 
batch_no)\n\nwork_dir = ''  # change path\nmetadata_path = '/tmp/train/metadata.tsv'\n\nwith open(metadata_path, 'w') as metadata_file:\n    for i in range(no_embedding_data):\n        # argmax of a one-hot row gives the digit label for that test image\n        metadata_file.write('{}\\n'.format(np.argmax(mnist.test.labels[i])))\n\nfrom tensorflow.contrib.tensorboard.plugins import projector\nprojector_config = projector.ProjectorConfig()\nembedding_projection = projector_config.embeddings.add()\nembedding_projection.tensor_name = embedding_variable.name\nembedding_projection.metadata_path = metadata_path\nembedding_projection.sprite.image_path = os.path.join(work_dir, 'mnist_10k_sprite.png')\nembedding_projection.sprite.single_image_dim.extend([28, 28])\nprojector.visualize_embeddings(train_summary_writer, projector_config)\ntf.train.Saver().save(session, '/tmp/train/model.ckpt', global_step=1)\n"
  },
  {
    "path": "Chapter03/2_guided_back_prop.py",
    "content": "from scipy.misc import imsave\nimport numpy as np\nimport tensorflow as tf\n\nimage_width, image_height = 128, 128\nvgg_model = tf.keras.applications.vgg16.VGG16(include_top=False)\n\ninput_image = vgg_model.input\nvgg_layer_dict = dict([(vgg_layer.name, vgg_layer) for vgg_layer in vgg_model.layers[1:]])\nvgg_layer_output = vgg_layer_dict['block5_conv1'].output\n\n\nfilters = []\nfor filter_idx in range(20):\n    loss = tf.keras.backend.mean(vgg_layer_output[:, :, :, filter_idx])\n    gradients = tf.keras.backend.gradients(loss, input_image)[0]\n    gradient_mean_square = tf.keras.backend.mean(tf.keras.backend.square(gradients))\n    gradients /= (tf.keras.backend.sqrt(gradient_mean_square) + 1e-5)\n    evaluator = tf.keras.backend.function([input_image], [loss, gradients])\n\n    gradient_ascent_step = 1.\n    input_image_data = np.random.random((1, image_width, image_height, 3))\n    input_image_data = (input_image_data - 0.5) * 20 + 128\n\n    for i in range(20):\n        loss_value, gradient_values = evaluator([input_image_data])\n        input_image_data += gradient_values * gradient_ascent_step\n        # print('Loss :', loss_value)\n        if loss_value <= 0.:\n            break\n\n    if loss_value > 0:\n        filter = input_image_data[0]\n        filter -= filter.mean()\n        filter /= (filter.std() + 1e-5)\n        filter *= 0.1\n        filter += 0.5\n        filter = np.clip(filter, 0, 1)\n        filter *= 255\n        filter = np.clip(filter, 0, 255).astype('uint8')\n        filters.append((filter, loss_value))\n\n\n# For visualisation, not in book\n\nn = 3\n\nfilters.sort(key=lambda x: x[1], reverse=True)\nfilters = filters[:n * n]\n\nmargin = 5\nwidth = n * image_width + (n - 1) * margin\nheight = n * image_height + (n - 1) * margin\nstitched_filters = np.zeros((width, height, 3))\n\nfor i in range(n):\n    for j in range(n):\n        img, loss = filters[i * n + j]\n        stitched_filters[(image_width + margin) * i: 
(image_width + margin) * i + image_width,\n                         (image_height + margin) * j: (image_height + margin) * j + image_height, :] = img\n\nimsave('stitched_filters_%dx%d.png' % (n, n), stitched_filters)\n"
  },
  {
    "path": "Chapter03/3_deep_dream.py",
    "content": "import os\nimport numpy as np\nimport PIL.Image\nimport urllib.request\nfrom tensorflow.python.platform import gfile\nimport zipfile\n\nimport tensorflow as tf\n\nwork_dir = ''\n\nmodel_url = 'https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip'\n\nfile_name = model_url.split('/')[-1]\n\nfile_path = os.path.join(work_dir, file_name)\n\nif not os.path.exists(file_path):\n    file_path, _ = urllib.request.urlretrieve(model_url, file_path)\n\nzip_handle = zipfile.ZipFile(file_path, 'r')\nzip_handle.extractall(work_dir)\nzip_handle.close()\n\ngraph = tf.Graph()\nsession = tf.InteractiveSession(graph=graph)\nmodel_path = os.path.join(work_dir, 'tensorflow_inception_graph.pb')\nwith gfile.FastGFile(model_path, 'rb') as f:\n    graph_defnition = tf.GraphDef()\n    graph_defnition.ParseFromString(f.read())\n\ninput_placeholder = tf.placeholder(np.float32, name='input')\nimagenet_mean_value = 117.0\npreprocessed_input = tf.expand_dims(input_placeholder-imagenet_mean_value, 0)\ntf.import_graph_def(graph_defnition, {'input': preprocessed_input})\n\n\ndef resize_image(image, size):\n    resize_placeholder = tf.placeholder(tf.float32)\n    resize_placeholder_expanded = tf.expand_dims(resize_placeholder, 0)\n    resized_image = tf.image.resize_bilinear(resize_placeholder_expanded, size)[0, :, :, :]\n    return session.run(resized_image, feed_dict={resize_placeholder: image})\n\n\nimage_name = 'mountain.jpg'\nimage = PIL.Image.open(image_name)\nimage = np.float32(image)\nobjective_fn = tf.square(graph.get_tensor_by_name(\"import/mixed4c:0\"))\n\nno_octave = 4\nscale = 1.4\nwindow_size = 51\n\nscore = tf.reduce_mean(objective_fn)\ngradients = tf.gradients(score, input_placeholder)[0]\n\noctave_images = []\nfor i in range(no_octave - 1):\n    image_height_width = image.shape[:2]\n    scaled_image = resize_image(image, np.int32(np.float32(image_height_width) / scale))\n    image_difference = image - resize_image(scaled_image, 
image_height_width)\n    image = scaled_image\n    octave_images.append(image_difference)\n\nfor octave_idx in range(no_octave):\n    if octave_idx > 0:\n        image_difference = octave_images[-octave_idx]\n        image = resize_image(image, image_difference.shape[:2]) + image_difference\n\n    for i in range(10):\n        image_heigth, image_width = image.shape[:2]\n        sx, sy = np.random.randint(window_size, size=2)\n        shifted_image = np.roll(np.roll(image, sx, 1), sy, 0)\n        gradient_values = np.zeros_like(image)\n\n        for y in range(0, max(image_heigth - window_size // 2, window_size), window_size):\n            for x in range(0, max(image_width - window_size // 2, window_size), window_size):\n                sub = shifted_image[y:y + window_size, x:x + window_size]\n                gradient_windows = session.run(gradients, {input_placeholder: sub})\n                gradient_values[y:y + window_size, x:x + window_size] = gradient_windows\n\n        gradient_windows = np.roll(np.roll(gradient_values, -sx, 1), -sy, 0)\n        image += gradient_windows * (1.5 / (np.abs(gradient_windows).mean() + 1e-7))\n\nimage /= 255.0\nimage = np.uint8(np.clip(image, 0, 1) * 255)\nPIL.Image.fromarray(image).save('dream_' + image_name, 'jpeg')\n\n"
  },
  {
    "path": "Chapter03/4_export_model.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport os\n\n\nwork_dir = '/tmp'\nmodel_version = 9\ntraining_iteration = 1000\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 200\n\ntf_example = tf.parse_example(tf.placeholder(tf.string, name='tf_example'),\n                              {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32), })\nx_input = tf.identity(tf_example['x'], name='x')\n\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\nweights = tf.Variable(tf.random_normal([input_size, no_classes]))\nbias = tf.Variable(tf.random_normal([no_classes]))\nlogits = tf.matmul(x_input, weights) + bias\nsoftmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_input, logits=logits)\nloss_operation = tf.reduce_mean(softmax_cross_entropy)\noptimiser = tf.train.GradientDescentOptimizer(0.5).minimize(loss_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\nfor batch_no in range(total_batches):\n    mnist_batch = mnist.train.next_batch(batch_size)\n    _, loss_value = session.run([optimiser, loss_operation], feed_dict={\n        x_input: mnist_batch[0],\n        y_input: mnist_batch[1]\n    })\n    print(loss_value)\n\nsignature_def = (\n      tf.saved_model.signature_def_utils.build_signature_def(\n          inputs={'x': tf.saved_model.utils.build_tensor_info(x_input)},\n          outputs={'y': tf.saved_model.utils.build_tensor_info(y_input)},\n          method_name=\"tensorflow/serving/predict\"))\n\nmodel_path = os.path.join(work_dir, str(model_version))\nsaved_model_builder = tf.saved_model.builder.SavedModelBuilder(model_path)\nsaved_model_builder.add_meta_graph_and_variables(\n      session, [tf.saved_model.tag_constants.SERVING],\n      signature_def_map={\n          'prediction': signature_def\n      },\n      legacy_init_op=tf.group(tf.tables_initializer(), 
name='legacy_init_op'))\nsaved_model_builder.save()\n"
  },
  {
    "path": "Chapter03/5_serving_client.py",
    "content": "from grpc.beta import implementations\nimport numpy\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nfrom tensorflow_serving.apis import predict_pb2\nfrom tensorflow_serving.apis import prediction_service_pb2\n\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n\nconcurrency = 1\nnum_tests = 100\nhost = ''\nport = 8000\nwork_dir = '/tmp'\n\n\ndef _create_rpc_callback():\n  def _callback(result):\n      response = numpy.array(\n        result.result().outputs['y'].float_val)\n      prediction = numpy.argmax(response)\n      print(prediction)\n  return _callback\n\n\ntest_data_set = mnist.test\ntest_image = mnist.test.images[0]\n\npredict_request = predict_pb2.PredictRequest()\npredict_request.model_spec.name = 'mnist'\npredict_request.model_spec.signature_name = 'prediction'\n\npredict_channel = implementations.insecure_channel(host, int(port))\npredict_stub = prediction_service_pb2.beta_create_PredictionService_stub(predict_channel)\n\npredict_request.inputs['x'].CopyFrom(\n    tf.contrib.util.make_tensor_proto(test_image, shape=[1, test_image.size]))\nresult = predict_stub.Predict.future(predict_request, 3.0)\nresult.add_done_callback(\n    _create_rpc_callback())\n"
  },
  {
    "path": "Chapter03/6_bottleneck_features.py",
    "content": "import tensorflow as tf\nimport os\nimport urllib.request\nfrom tensorflow.python.platform import gfile\nimport tarfile\nimport numpy as np\n\nwork_dir = ''\n\nmodel_url = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'\nfile_name = model_url.split('/')[-1]\nfile_path = os.path.join(work_dir, file_name)\n\nif not os.path.exists(file_path):\n    file_path, _ = urllib.request.urlretrieve(model_url, file_path)\ntarfile.open(file_path, 'r:gz').extractall(work_dir)\n\nmodel_path = os.path.join(work_dir, 'classify_image_graph_def.pb')\nwith gfile.FastGFile(model_path, 'rb') as f:\n    graph_defnition = tf.GraphDef()\n    graph_defnition.ParseFromString(f.read())\n\n\nbottleneck, image, resized_input = (\n    tf.import_graph_def(\n        graph_defnition,\n        name='',\n        return_elements=['pool_3/_reshape:0',\n                         'DecodeJpeg/contents:0',\n                         'ResizeBilinear:0'])\n)\n\nquery_image_path = os.path.join(work_dir, 'cat.1000.jpg')\nquery_image = gfile.FastGFile(query_image_path, 'rb').read()\ntarget_image_path = os.path.join(work_dir, 'cat.1001.jpg')\ntarget_image = gfile.FastGFile(target_image_path, 'rb').read()\n\n\ndef get_bottleneck_data(session, image_data):\n    bottleneck_data = session.run(bottleneck, {image: image_data})\n    bottleneck_data = np.squeeze(bottleneck_data)\n    return bottleneck_data\n\n\nsession = tf.Session()\nquery_feature = get_bottleneck_data(session, query_image)\nprint(query_feature)\ntarget_feature = get_bottleneck_data(session, target_image)\nprint(target_feature)\ndist = np.linalg.norm(np.asarray(query_feature) - np.asarray(target_feature))\nprint(dist)\n"
  },
  {
    "path": "Chapter03/7_annoy.py",
    "content": "import os\nfrom annoy import AnnoyIndex\n\nwork_dir = ''\nlayer_dimension = 256\ntarget_features = []\nquery_feature = []\n\n\ndef create_annoy(target_features):\n    t = AnnoyIndex(layer_dimension)\n    for idx, target_feature in enumerate(target_features):\n        t.add_item(idx, target_feature)\n    t.build(10)\n    t.save(os.path.join(work_dir, 'annoy.ann'))\n\ncreate_annoy(target_features)\n\nannoy_index = AnnoyIndex(10)\nannoy_index.load(os.path.join(work_dir, 'annoy.ann'))\nmatches = annoy_index.get_nns_by_vector(query_feature, 20)\n"
  },
  {
    "path": "Chapter03/8_auto_encoder.py",
    "content": "import tensorflow as tf\n\n\ndef fully_connected_layer(input_layer, units):\n    return tf.layers.dense(\n        input_layer,\n        units=units,\n        activation=tf.nn.relu\n    )\n\n\ndef convolution_layer(input_layer, filter_size):\n    return  tf.layers.conv2d(\n        input_layer,\n        filters=filter_size,\n        kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d(),\n        kernel_size=3,\n        strides=2\n    )\n\n\ndef deconvolution_layer(input_layer, filter_size, activation=tf.nn.relu):\n    return tf.layers.conv2d_transpose(\n        input_layer,\n        filters=filter_size,\n        kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d(),\n        kernel_size=3,\n        activation=activation,\n        strides=2\n    )\n\n\ninput_layer = tf.placeholder(tf.float32, [None, 128, 128, 3])\nconvolution_layer_1 = convolution_layer(input_layer, 1024)\nconvolution_layer_2 = convolution_layer(convolution_layer_1, 512)\nconvolution_layer_3 = convolution_layer(convolution_layer_2, 256)\nconvolution_layer_4 = convolution_layer(convolution_layer_3, 128)\nconvolution_layer_5 = convolution_layer(convolution_layer_4, 32)\n\nconvolution_layer_5_flattened = tf.layers.flatten(convolution_layer_5)\nbottleneck_layer = fully_connected_layer(convolution_layer_5_flattened, 16)\nc5_shape = convolution_layer_5.get_shape().as_list()\nc5f_flat_shape = convolution_layer_5_flattened.get_shape().as_list()[1]\nfully_connected = fully_connected_layer(bottleneck_layer, c5f_flat_shape)\nfully_connected = tf.reshape(fully_connected,\n                             [-1, c5_shape[1], c5_shape[2], c5_shape[3]])\n\ndeconvolution_layer_1 = deconvolution_layer(fully_connected, 128)\ndeconvolution_layer_2 = deconvolution_layer(deconvolution_layer_1, 256)\ndeconvolution_layer_3 = deconvolution_layer(deconvolution_layer_2, 512)\ndeconvolution_layer_4 = deconvolution_layer(deconvolution_layer_3, 1024)\ndeconvolution_layer_5 = 
deconvolution_layer(deconvolution_layer_4, 3,\n                                            activation=tf.nn.tanh)\n"
  },
  {
    "path": "Chapter03/9_denoising.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport numpy as np\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 2000\n\nx_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, input_size])\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.tanh):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\nlayer_1 = dense_layer(x_input, 500)\nlayer_2 = dense_layer(layer_1, 250)\nlayer_3 = dense_layer(layer_2, 50)\nlayer_4 = dense_layer(layer_3, 250)\nlayer_5 = dense_layer(layer_4, 500)\nlayer_6 = dense_layer(layer_5, 784)\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(\n        labels=y_input, logits=layer_6)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\nx_input_reshaped = tf.reshape(x_input, [-1, 28, 28, 1])\ntf.summary.image(\"noisy_images\", x_input_reshaped)\n\ny_input_reshaped = tf.reshape(y_input, [-1, 28, 28, 
1])\ntf.summary.image(\"original_images\", y_input_reshaped)\n\nlayer_6_reshaped = tf.reshape(layer_6, [-1, 28, 28, 1])\ntf.summary.image(\"reconstructed_images\", layer_6_reshaped)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\n\nfor batch_no in range(total_batches):\n    mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, _ = mnist_batch[0], mnist_batch[1]\n    train_images_noise = train_images + 0.2 * np.random.normal(size=train_images.shape)\n    train_images_noise = np.clip(train_images_noise, 0., 1.)\n    _, merged_summary = session.run([optimiser, merged_summary_operation],\n                                    feed_dict={\n        x_input: train_images_noise,\n        y_input: train_images,\n    })\n    train_summary_writer.add_summary(merged_summary, batch_no)\n"
  },
  {
    "path": "Chapter04/1_iou.py",
    "content": "import tensorflow as tf\n\n\ndef calculate_iou(gt_bb, pred_bb):\n    '''\n    :param gt_bb: ground truth bounding box\n    :param pred_bb: predicted bounding box\n    '''\n    gt_bb = tf.stack([\n        gt_bb[:, :, :, :, 0] - gt_bb[:, :, :, :, 2] / 2.0,\n        gt_bb[:, :, :, :, 1] - gt_bb[:, :, :, :, 3] / 2.0,\n        gt_bb[:, :, :, :, 0] + gt_bb[:, :, :, :, 2] / 2.0,\n        gt_bb[:, :, :, :, 1] + gt_bb[:, :, :, :, 3] / 2.0])\n    gt_bb = tf.transpose(gt_bb, [1, 2, 3, 4, 0])\n    pred_bb = tf.stack([\n        pred_bb[:, :, :, :, 0] - pred_bb[:, :, :, :, 2] / 2.0,\n        pred_bb[:, :, :, :, 1] - pred_bb[:, :, :, :, 3] / 2.0,\n        pred_bb[:, :, :, :, 0] + pred_bb[:, :, :, :, 2] / 2.0,\n        pred_bb[:, :, :, :, 1] + pred_bb[:, :, :, :, 3] / 2.0])\n    pred_bb = tf.transpose(pred_bb, [1, 2, 3, 4, 0])\n    area = tf.maximum(\n        0.0,\n        tf.minimum(gt_bb[:, :, :, :, 2:], pred_bb[:, :, :, :, 2:]) -\n        tf.maximum(gt_bb[:, :, :, :, :2], pred_bb[:, :, :, :, :2]))\n    intersection_area= area[:, :, :, :, 0] * area[:, :, :, :, 1]\n    gt_bb_area = (gt_bb[:, :, :, :, 2] - gt_bb[:, :, :, :, 0]) * \\\n                 (gt_bb[:, :, :, :, 3] - gt_bb[:, :, :, :, 1])\n    pred_bb_area = (pred_bb[:, :, :, :, 2] - pred_bb[:, :, :, :, 0]) * \\\n                   (pred_bb[:, :, :, :, 3] - pred_bb[:, :, :, :, 1])\n    union_area = tf.maximum(gt_bb_area + pred_bb_area - intersection_area, 1e-10)\n    iou = tf.clip_by_value(intersection_area / union_area, 0.0, 1.0)\n    return iou\n"
  },
  {
    "path": "Chapter04/2_overfeat.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 300\n\nx_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\nx_input_reshape = tf.reshape(x_input, [-1, 28, 28, 1],\n                             name='input_reshape')\n\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3],\n                      activation=tf.nn.relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\nconvolution_layer_1 = convolution_layer(x_input_reshape, 64)\npooling_layer_1 = pooling_layer(convolution_layer_1)\nconvolution_layer_2 = convolution_layer(pooling_layer_1, 128)\npooling_layer_2 = pooling_layer(convolution_layer_2)\ndense_layer_bottleneck = convolution_layer(pooling_layer_2, 1024, [5, 5])\nlogits = 
convolution_layer(dense_layer_bottleneck, no_classes, [1, 1])\nlogits = tf.reshape(logits, [-1, 10])\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(\n        labels=y_input, logits=logits)\n    print(softmax_cross_entropy)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    print(loss_operation)\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\n\nwith tf.name_scope('accuracy'):\n    with tf.name_scope('correct_prediction'):\n        predictions = tf.argmax(logits, 1)\n        correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\n    with tf.name_scope('accuracy'):\n        accuracy_operation = tf.reduce_mean(\n            tf.cast(correct_predictions, tf.float32))\ntf.summary.scalar('accuracy', accuracy_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\ntest_summary_writer = tf.summary.FileWriter('/tmp/test')\n\ntest_images, test_labels = mnist_data.test.images, mnist_data.test.labels\n\nfor batch_no in range(total_batches):\n    mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, train_labels = mnist_batch[0], mnist_batch[1]\n    _, merged_summary = session.run([optimiser, merged_summary_operation],\n                                    feed_dict={\n        x_input: train_images,\n        y_input: train_labels,\n    })\n    train_summary_writer.add_summary(merged_summary, batch_no)\n    if batch_no % 10 == 0:\n        merged_summary, _ = session.run([merged_summary_operation,\n                                         accuracy_operation], feed_dict={\n            x_input: test_images,\n            y_input: test_labels,\n        })\n        test_summary_writer.add_summary(merged_summary, 
batch_no)\n"
  },
  {
    "path": "Chapter04/3_object_detection_api.py",
    "content": ""
  },
  {
    "path": "Chapter04/4_yolo.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport os\nfrom pascal_voc import pascal_voc\n\n\ndef calculate_iou(gt_bb, pred_bb):\n    '''\n    :param gt_bb: ground truth bounding box\n    :param pred_bb: predicted bounding box\n    '''\n    gt_bb = tf.stack([\n        gt_bb[:, :, :, :, 0] - gt_bb[:, :, :, :, 2] / 2.0,\n        gt_bb[:, :, :, :, 1] - gt_bb[:, :, :, :, 3] / 2.0,\n        gt_bb[:, :, :, :, 0] + gt_bb[:, :, :, :, 2] / 2.0,\n        gt_bb[:, :, :, :, 1] + gt_bb[:, :, :, :, 3] / 2.0])\n    gt_bb = tf.transpose(gt_bb, [1, 2, 3, 4, 0])\n    pred_bb = tf.stack([\n        pred_bb[:, :, :, :, 0] - pred_bb[:, :, :, :, 2] / 2.0,\n        pred_bb[:, :, :, :, 1] - pred_bb[:, :, :, :, 3] / 2.0,\n        pred_bb[:, :, :, :, 0] + pred_bb[:, :, :, :, 2] / 2.0,\n        pred_bb[:, :, :, :, 1] + pred_bb[:, :, :, :, 3] / 2.0])\n    pred_bb = tf.transpose(pred_bb, [1, 2, 3, 4, 0])\n    area = tf.maximum(\n        0.0,\n        tf.minimum(gt_bb[:, :, :, :, 2:], pred_bb[:, :, :, :, 2:]) -\n        tf.maximum(gt_bb[:, :, :, :, :2], pred_bb[:, :, :, :, :2]))\n    intersection_area= area[:, :, :, :, 0] * area[:, :, :, :, 1]\n    gt_bb_area = (gt_bb[:, :, :, :, 2] - gt_bb[:, :, :, :, 0]) * \\\n                 (gt_bb[:, :, :, :, 3] - gt_bb[:, :, :, :, 1])\n    pred_bb_area = (pred_bb[:, :, :, :, 2] - pred_bb[:, :, :, :, 0]) * \\\n                   (pred_bb[:, :, :, :, 3] - pred_bb[:, :, :, :, 1])\n    union_area = tf.maximum(gt_bb_area + pred_bb_area - intersection_area, 1e-10)\n    iou = tf.clip_by_value(intersection_area / union_area, 0.0, 1.0)\n    return iou\n\n\n\nDATA_PATH = 'data'\nPASCAL_PATH = os.path.join(DATA_PATH, 'pascal_voc')\nCACHE_PATH = os.path.join(PASCAL_PATH, 'cache')\nOUTPUT_DIR = os.path.join(PASCAL_PATH, 'output')\nWEIGHTS_DIR = os.path.join(PASCAL_PATH, 'weight')\nFLIPPED = True\nDISP_CONSOLE = False\n\nGPU = ''\n\n\nTHRESHOLD = 0.2\n\nIOU_THRESHOLD = 0.5\n\nclasses = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',\n          
 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',\n           'motorbike', 'person', 'pottedplant', 'sheep', 'sofa',\n           'train', 'tvmonitor']\n\nnum_class = len(classes)\nimage_size = 448\ncell_size = 7\nboxes_per_cell = 2\noutput_size = (cell_size * cell_size) * (num_class + boxes_per_cell * 5)\nscale = 1.0 * image_size / cell_size\nboundary1 = cell_size * cell_size * num_class\nboundary2 = boundary1 + cell_size * cell_size * boxes_per_cell\n\nobject_scale = 1.0\nnoobject_scale = 1.0\nclass_scale = 2.0\ncoord_scale = 5.0\n\nlearning_rate = 0.0001\nbatch_size = 45\nalpha = 0.1\n\nweights_file = None\nmax_iter = 15000\ninitial_learning_rate = 0.0001\ndecay_steps = 30000\ndecay_rate = 0.1\nstaircase = True\n\n\n\noffset = np.transpose(np.reshape(np.array(\n    [np.arange(cell_size)] * cell_size * boxes_per_cell),\n    (boxes_per_cell, cell_size, cell_size)), (1, 2, 0))\n\nimages = tf.placeholder(tf.float32, [None, image_size, image_size, 3], name='images')\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2, padding='valid'):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides,\n        padding=padding\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3], strides=1,\n                      padding='valid', activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        strides=strides,\n        activation=activation,\n        padding=padding,\n        kernel_initializer=tf.truncated_normal_initializer(0.0, 0.01),\n        kernel_regularizer=tf.contrib.layers.l2_regularizer(0.0005)\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.leaky_relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation,\n        kernel_initializer=tf.truncated_normal_initializer(0.0, 0.01),\n        kernel_regularizer=tf.contrib.layers.l2_regularizer(0.0005)\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\nyolo = tf.pad(images, np.array([[0, 0], [3, 3], [3, 3], [0, 0]]), name='pad_1')\nyolo = convolution_layer(yolo, 64, 7, 2)\nyolo = pooling_layer(yolo, [2, 2], 2, 'same')\nyolo = convolution_layer(yolo, 192, 3)\nyolo = pooling_layer(yolo, 2, 2, 'same')\nyolo = convolution_layer(yolo, 128, 1)\nyolo = convolution_layer(yolo, 256, 3)\nyolo = convolution_layer(yolo, 256, 1)\nyolo = convolution_layer(yolo, 512, 3)\nyolo = pooling_layer(yolo, 2, 2, 'same')\nyolo = convolution_layer(yolo, 256, 1)\nyolo = convolution_layer(yolo, 512, 3)\nyolo = convolution_layer(yolo, 256, 1)\nyolo = convolution_layer(yolo, 512, 3)\nyolo = convolution_layer(yolo, 256, 1)\nyolo = convolution_layer(yolo, 512, 3)\nyolo = convolution_layer(yolo, 256, 1)\nyolo = convolution_layer(yolo, 512, 3)\nyolo = convolution_layer(yolo, 512, 1)\nyolo = convolution_layer(yolo, 1024, 3)\nyolo = pooling_layer(yolo, 2)\nyolo = convolution_layer(yolo, 512, 1)\nyolo = convolution_layer(yolo, 1024, 3)\nyolo = convolution_layer(yolo, 512, 1)\nyolo = convolution_layer(yolo, 1024, 3)\nyolo = convolution_layer(yolo, 1024, 3)\nyolo = tf.pad(yolo, np.array([[0, 0], [1, 1], [1, 1], [0, 0]]))\nyolo = convolution_layer(yolo, 1024, 3, 2)\nyolo = convolution_layer(yolo, 1024, 3)\nyolo = 
convolution_layer(yolo, 1024, 3)\nyolo = tf.transpose(yolo, [0, 3, 1, 2])\nyolo = tf.layers.flatten(yolo)\nyolo = dense_layer(yolo, 512)\nyolo = dense_layer(yolo, 4096)\n\ndropout_bool = tf.placeholder(tf.bool)\nyolo = tf.layers.dropout(\n        inputs=yolo,\n        rate=0.4,\n        training=dropout_bool\n    )\nyolo = dense_layer(yolo, output_size, None)\n\npredicts = yolo\nlabels = tf.placeholder(tf.float32, [None, cell_size, cell_size, 5 + num_class])\n\npredict_classes = tf.reshape(predicts[:, :boundary1], [batch_size, cell_size, cell_size, num_class])\npredict_scales = tf.reshape(predicts[:, boundary1:boundary2], [batch_size, cell_size, cell_size, boxes_per_cell])\npredict_boxes = tf.reshape(predicts[:, boundary2:], [batch_size, cell_size, cell_size, boxes_per_cell, 4])\n\nresponse = tf.reshape(labels[:, :, :, 0], [batch_size, cell_size, cell_size, 1])\nboxes = tf.reshape(labels[:, :, :, 1:5], [batch_size, cell_size, cell_size, 1, 4])\nboxes = tf.tile(boxes, [1, 1, 1, boxes_per_cell, 1]) / image_size\nclasses = labels[:, :, :, 5:]\n\noffset = tf.constant(offset, dtype=tf.float32)\noffset = tf.reshape(offset, [1, cell_size, cell_size, boxes_per_cell])\noffset = tf.tile(offset, [batch_size, 1, 1, 1])\npredict_boxes_tran = tf.stack([(predict_boxes[:, :, :, :, 0] + offset) / cell_size,\n                               (predict_boxes[:, :, :, :, 1] + tf.transpose(offset, (0, 2, 1, 3))) / cell_size,\n                               tf.square(predict_boxes[:, :, :, :, 2]),\n                               tf.square(predict_boxes[:, :, :, :, 3])])\npredict_boxes_tran = tf.transpose(predict_boxes_tran, [1, 2, 3, 4, 0])\n\niou_predict_truth = calculate_iou(predict_boxes_tran, boxes)\n\nobject_mask = tf.reduce_max(iou_predict_truth, 3, keep_dims=True)\nobject_mask = tf.cast((iou_predict_truth >= object_mask), tf.float32) * response\n\nnoobject_mask = tf.ones_like(object_mask, dtype=tf.float32) - object_mask\n\nboxes_tran = tf.stack([boxes[:, :, :, :, 0] * cell_size - 
offset,\n                       boxes[:, :, :, :, 1] * cell_size - tf.transpose(offset, (0, 2, 1, 3)),\n                       tf.sqrt(boxes[:, :, :, :, 2]),\n                       tf.sqrt(boxes[:, :, :, :, 3])])\nboxes_tran = tf.transpose(boxes_tran, [1, 2, 3, 4, 0])\n\nclass_delta = response * (predict_classes - classes)\nclass_loss = tf.reduce_mean(tf.reduce_sum(tf.square(class_delta), axis=[1, 2, 3]), name='class_loss') * class_scale\n\nobject_delta = object_mask * (predict_scales - iou_predict_truth)\nobject_loss = tf.reduce_mean(tf.reduce_sum(tf.square(object_delta), axis=[1, 2, 3]), name='object_loss') * object_scale\n\nnoobject_delta = noobject_mask * predict_scales\nnoobject_loss = tf.reduce_mean(tf.reduce_sum(tf.square(noobject_delta), axis=[1, 2, 3]), name='noobject_loss') * noobject_scale\n\ncoord_mask = tf.expand_dims(object_mask, 4)\nboxes_delta = coord_mask * (predict_boxes - boxes_tran)\ncoord_loss = tf.reduce_mean(tf.reduce_sum(tf.square(boxes_delta), axis=[1, 2, 3, 4]), name='coord_loss') * coord_scale\n\ntf.losses.add_loss(class_loss)\ntf.losses.add_loss(object_loss)\ntf.losses.add_loss(noobject_loss)\ntf.losses.add_loss(coord_loss)\n\ntotal_loss = tf.losses.get_total_loss()\n\ndata = pascal_voc('train')\n\nglobal_step = tf.get_variable(\n    'global_step', [], initializer=tf.constant_initializer(0), trainable=False)\nlearning_rate = tf.train.exponential_decay(\n    initial_learning_rate, global_step, decay_steps,\n    decay_rate, staircase, name='learning_rate')\noptimizer = tf.train.GradientDescentOptimizer(\n    learning_rate=learning_rate).minimize(\n    total_loss, global_step=global_step)\nema = tf.train.ExponentialMovingAverage(decay=0.9999)\naverages_op = ema.apply(tf.trainable_variables())\nwith tf.control_dependencies([optimizer]):\n    train_op = tf.group(averages_op)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n\nfor step in range(1, max_iter + 1):\n    batch_images, batch_labels = data.get()\n    feed_dict = {images: batch_images, labels: batch_labels}\n    sess.run(train_op, feed_dict=feed_dict)\n"
  },
  {
    "path": "Chapter04/pascal_voc.py",
    "content": "import os\nimport xml.etree.ElementTree as ET\nimport numpy as np\nimport cv2\nimport cPickle\nimport copy\nimport yolo.config as cfg\n\n\nclass pascal_voc(object):\n    def __init__(self, phase, rebuild=False):\n        self.devkil_path = os.path.join(cfg.PASCAL_PATH, 'VOCdevkit')\n        self.data_path = os.path.join(self.devkil_path, 'VOC2007')\n        self.cache_path = cfg.CACHE_PATH\n        self.batch_size = cfg.BATCH_SIZE\n        self.image_size = cfg.IMAGE_SIZE\n        self.cell_size = cfg.CELL_SIZE\n        self.classes = cfg.CLASSES\n        self.class_to_ind = dict(zip(self.classes, xrange(len(self.classes))))\n        self.flipped = cfg.FLIPPED\n        self.phase = phase\n        self.rebuild = rebuild\n        self.cursor = 0\n        self.epoch = 1\n        self.gt_labels = None\n        self.prepare()\n\n    def get(self):\n        images = np.zeros((self.batch_size, self.image_size, self.image_size, 3))\n        labels = np.zeros((self.batch_size, self.cell_size, self.cell_size, 25))\n        count = 0\n        while count < self.batch_size:\n            imname = self.gt_labels[self.cursor]['imname']\n            flipped = self.gt_labels[self.cursor]['flipped']\n            images[count, :, :, :] = self.image_read(imname, flipped)\n            labels[count, :, :, :] = self.gt_labels[self.cursor]['label']\n            count += 1\n            self.cursor += 1\n            if self.cursor >= len(self.gt_labels):\n                np.random.shuffle(self.gt_labels)\n                self.cursor = 0\n                self.epoch += 1\n        return images, labels\n\n    def image_read(self, imname, flipped=False):\n        image = cv2.imread(imname)\n        image = cv2.resize(image, (self.image_size, self.image_size))\n        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)\n        image = (image / 255.0) * 2.0 - 1.0\n        if flipped:\n            image = image[:, ::-1, :]\n        return image\n\n    def 
prepare(self):\n        gt_labels = self.load_labels()\n        if self.flipped:\n            print('Appending horizontally-flipped training examples ...')\n            gt_labels_cp = copy.deepcopy(gt_labels)\n            for idx in range(len(gt_labels_cp)):\n                gt_labels_cp[idx]['flipped'] = True\n                gt_labels_cp[idx]['label'] = gt_labels_cp[idx]['label'][:, ::-1, :]\n                for i in range(self.cell_size):\n                    for j in range(self.cell_size):\n                        if gt_labels_cp[idx]['label'][i, j, 0] == 1:\n                            gt_labels_cp[idx]['label'][i, j, 1] = self.image_size - 1 - gt_labels_cp[idx]['label'][i, j, 1]\n            gt_labels += gt_labels_cp\n        np.random.shuffle(gt_labels)\n        self.gt_labels = gt_labels\n        return gt_labels\n\n    def load_labels(self):\n        cache_file = os.path.join(self.cache_path, 'pascal_' + self.phase + '_gt_labels.pkl')\n\n        if os.path.isfile(cache_file) and not self.rebuild:\n            print('Loading gt_labels from: ' + cache_file)\n            with open(cache_file, 'rb') as f:\n                gt_labels = cPickle.load(f)\n            return gt_labels\n\n        print('Processing gt_labels from: ' + self.data_path)\n\n        if not os.path.exists(self.cache_path):\n            os.makedirs(self.cache_path)\n\n        if self.phase == 'train':\n            txtname = os.path.join(self.data_path, 'ImageSets', 'Main',\n                                   'trainval.txt')\n        else:\n            txtname = os.path.join(self.data_path, 'ImageSets', 'Main',\n                                   'test.txt')\n        with open(txtname, 'r') as f:\n            self.image_index = [x.strip() for x in f.readlines()]\n\n        gt_labels = []\n        for index in self.image_index:\n            label, num = self.load_pascal_annotation(index)\n            if num == 0:\n                continue\n            imname = os.path.join(self.data_path, 
'JPEGImages', index + '.jpg')\n            gt_labels.append({'imname': imname, 'label': label, 'flipped': False})\n        print('Saving gt_labels to: ' + cache_file)\n        with open(cache_file, 'wb') as f:\n            cPickle.dump(gt_labels, f)\n        return gt_labels\n\n    def load_pascal_annotation(self, index):\n        \"\"\"\n        Load image and bounding boxes info from XML file in the PASCAL VOC\n        format.\n        \"\"\"\n\n        imname = os.path.join(self.data_path, 'JPEGImages', index + '.jpg')\n        im = cv2.imread(imname)\n        h_ratio = 1.0 * self.image_size / im.shape[0]\n        w_ratio = 1.0 * self.image_size / im.shape[1]\n        # im = cv2.resize(im, [self.image_size, self.image_size])\n\n        label = np.zeros((self.cell_size, self.cell_size, 25))\n        filename = os.path.join(self.data_path, 'Annotations', index + '.xml')\n        tree = ET.parse(filename)\n        objs = tree.findall('object')\n\n        for obj in objs:\n            bbox = obj.find('bndbox')\n            # Make pixel indexes 0-based\n            x1 = max(min((float(bbox.find('xmin').text) - 1) * w_ratio, self.image_size - 1), 0)\n            y1 = max(min((float(bbox.find('ymin').text) - 1) * h_ratio, self.image_size - 1), 0)\n            x2 = max(min((float(bbox.find('xmax').text) - 1) * w_ratio, self.image_size - 1), 0)\n            y2 = max(min((float(bbox.find('ymax').text) - 1) * h_ratio, self.image_size - 1), 0)\n            cls_ind = self.class_to_ind[obj.find('name').text.lower().strip()]\n            boxes = [(x2 + x1) / 2.0, (y2 + y1) / 2.0, x2 - x1, y2 - y1]\n            x_ind = int(boxes[0] * self.cell_size / self.image_size)\n            y_ind = int(boxes[1] * self.cell_size / self.image_size)\n            if label[y_ind, x_ind, 0] == 1:\n                continue\n            label[y_ind, x_ind, 0] = 1\n            label[y_ind, x_ind, 1:5] = boxes\n            label[y_ind, x_ind, 5 + cls_ind] = 1\n\n        return label, len(objs)"
  },
  {
    "path": "Chapter05/1_segnet.py",
    "content": "import tensorflow as tf\n\ninput_height = 360\ninput_width = 480\nkernel = 3\nfilter_size = 64\npad = 1\npool_size = 2\nnClasses = 12  # number of output segmentation classes; set for your dataset\n\n# the model below uses (channels, height, width) ordering throughout\ntf.keras.backend.set_image_data_format('channels_first')\n\nmodel = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.InputLayer(input_shape=(3, input_height, input_width)))\n\n# encoder\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(filter_size, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\nmodel.add(tf.keras.layers.Activation('relu'))\nmodel.add(tf.keras.layers.MaxPooling2D(pool_size=(pool_size, pool_size)))\n\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(128, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\nmodel.add(tf.keras.layers.Activation('relu'))\nmodel.add(tf.keras.layers.MaxPooling2D(pool_size=(pool_size, pool_size)))\n\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(256, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\nmodel.add(tf.keras.layers.Activation('relu'))\nmodel.add(tf.keras.layers.MaxPooling2D(pool_size=(pool_size, pool_size)))\n\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(512, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\nmodel.add(tf.keras.layers.Activation('relu'))\n\n# decoder\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(512, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\n\nmodel.add(tf.keras.layers.UpSampling2D(size=(pool_size, pool_size)))\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(256, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\n\nmodel.add(tf.keras.layers.UpSampling2D(size=(pool_size, pool_size)))\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(128, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\n\nmodel.add(tf.keras.layers.UpSampling2D(size=(pool_size, pool_size)))\nmodel.add(tf.keras.layers.ZeroPadding2D(padding=(pad, pad)))\nmodel.add(tf.keras.layers.Conv2D(filter_size, (kernel, kernel), padding='valid'))\nmodel.add(tf.keras.layers.BatchNormalization())\n\nmodel.add(tf.keras.layers.Conv2D(nClasses, (1, 1), padding='valid'))\n\nmodel.outputHeight = model.output_shape[-2]\nmodel.outputWidth = model.output_shape[-1]\n\nmodel.add(tf.keras.layers.Reshape((nClasses, model.output_shape[-2] * model.output_shape[-1]),\n                  input_shape=(nClasses, model.output_shape[-2], model.output_shape[-1])))\n\nmodel.add(tf.keras.layers.Permute((2, 1)))\nmodel.add(tf.keras.layers.Activation('softmax'))\n\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])\n"
  },
  {
    "path": "Chapter05/2_nerve_segmentation.py",
    "content": "import os\nfrom skimage.transform import resize\nfrom skimage.io import imsave\nimport numpy as np\nimport tensorflow as tf\n\nfrom data import load_train_data, load_test_data\n\n\nimage_height, image_width = 96, 96\nsmoothness = 1.0\nwork_dir = ''\n\n\ndef dice_coefficient(y1, y2):\n    y1 = tf.keras.backend.flatten(y1)\n    y2 = tf.keras.backend.flatten(y2)\n    return (2. * tf.keras.backend.sum(y1 * y2) + smoothness) / (\n        tf.keras.backend.sum(y1) + tf.keras.backend.sum(y2) + smoothness)\n\n\ndef dice_coefficient_loss(y1, y2):\n    return -dice_coefficient(y1, y2)\n\n\ndef preprocess(imgs):\n    imgs_p = np.ndarray((imgs.shape[0], image_height, image_width), dtype=np.uint8)\n    for i in range(imgs.shape[0]):\n        # skimage.transform.resize expects (rows, cols), i.e. (height, width)\n        imgs_p[i] = resize(imgs[i], (image_height, image_width), preserve_range=True)\n    imgs_p = imgs_p[..., np.newaxis]\n    return imgs_p\n\n\ndef convolution_layer(filters, kernel=(3, 3), activation='relu', input_shape=None):\n    # 'same' padding keeps the spatial size so the final map matches the masks\n    if input_shape is None:\n        return tf.keras.layers.Conv2D(\n            filters=filters,\n            kernel_size=kernel,\n            activation=activation,\n            padding='same')\n    else:\n        return tf.keras.layers.Conv2D(\n            filters=filters,\n            kernel_size=kernel,\n            activation=activation,\n            padding='same',\n            input_shape=input_shape)\n\n\ndef concatenated_de_convolution_layer(filters):\n    # A Sequential model cannot express true U-Net skip connections, so this\n    # stage only up-samples with a transposed convolution; use the functional\n    # API if you want to concatenate the matching encoder feature maps.\n    return tf.keras.layers.Conv2DTranspose(\n        filters=filters,\n        kernel_size=(2, 2),\n        strides=(2, 2),\n        padding='same')\n\n\ndef pooling_layer():\n    return tf.keras.layers.MaxPooling2D(pool_size=(2, 2))\n\n\nunet = tf.keras.models.Sequential()\ninput_shape = (image_height, image_width, 1)\nunet.add(convolution_layer(32, input_shape=input_shape))\nunet.add(convolution_layer(32))\nunet.add(pooling_layer())\n\nunet.add(convolution_layer(64))\nunet.add(convolution_layer(64))\nunet.add(pooling_layer())\n\nunet.add(convolution_layer(128))\nunet.add(convolution_layer(128))\nunet.add(pooling_layer())\n\nunet.add(convolution_layer(256))\nunet.add(convolution_layer(256))\nunet.add(pooling_layer())\n\nunet.add(convolution_layer(512))\nunet.add(convolution_layer(512))\n\nunet.add(concatenated_de_convolution_layer(256))\nunet.add(convolution_layer(256))\nunet.add(convolution_layer(256))\n\nunet.add(concatenated_de_convolution_layer(128))\nunet.add(convolution_layer(128))\nunet.add(convolution_layer(128))\n\nunet.add(concatenated_de_convolution_layer(64))\nunet.add(convolution_layer(64))\nunet.add(convolution_layer(64))\n\nunet.add(concatenated_de_convolution_layer(32))\nunet.add(convolution_layer(32))\nunet.add(convolution_layer(32))\n\nunet.add(convolution_layer(1, kernel=(1, 1), activation='sigmoid'))\n\nunet.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-5),\n              loss=dice_coefficient_loss,\n              metrics=[dice_coefficient])\n\n\nx_train, y_train_mask = load_train_data()\n\nx_train = preprocess(x_train)\ny_train_mask = preprocess(y_train_mask)\n\nx_train = x_train.astype('float32')\nmean = np.mean(x_train)\nstd = np.std(x_train)\n\nx_train -= mean\nx_train /= std\n\ny_train_mask = y_train_mask.astype('float32')\ny_train_mask /= 255.\n\nunet.fit(x_train, y_train_mask, batch_size=32, epochs=20, verbose=1, shuffle=True,\n          validation_split=0.2)\n\n# load_test_data returns image ids, not masks\nx_test, image_ids = load_test_data()\nx_test = preprocess(x_test)\n\nx_test = x_test.astype('float32')\nx_test -= mean\nx_test /= std\n\ny_test_pred = unet.predict(x_test, verbose=1)\n\nfor image, image_id in zip(y_test_pred, image_ids):\n    image = (image[:, :, 0] * 255.).astype(np.uint8)\n    imsave(os.path.join(work_dir, str(image_id) + '.png'), image)\n"
  },
  {
    "path": "Chapter05/3_satellite.py",
    "content": "import tensorflow as tf\n\n\nfrom resnet50 import ResNet50\nnb_labels = 6\ninput_shape = [28, 28, 3]  # height, width, channels\n\nimg_height, img_width, _ = input_shape\ninput_tensor = tf.keras.layers.Input(shape=input_shape)\nweights = 'imagenet'\n\nresnet50_model = ResNet50(\n    include_top=False, weights=weights, input_tensor=input_tensor)\n\nfinal_32 = resnet50_model.get_layer('final_32').output\nfinal_16 = resnet50_model.get_layer('final_16').output\nfinal_x8 = resnet50_model.get_layer('final_x8').output\n\nc32 = tf.keras.layers.Conv2D(nb_labels, (1, 1))(final_32)\nc16 = tf.keras.layers.Conv2D(nb_labels, (1, 1))(final_16)\nc8 = tf.keras.layers.Conv2D(nb_labels, (1, 1))(final_x8)\n\n\ndef resize_bilinear(images):\n    return tf.image.resize_bilinear(images, [img_height, img_width])\n\n\nr32 = tf.keras.layers.Lambda(resize_bilinear)(c32)\nr16 = tf.keras.layers.Lambda(resize_bilinear)(c16)\nr8 = tf.keras.layers.Lambda(resize_bilinear)(c8)\n\nm = tf.keras.layers.Add()([r32, r16, r8])\n\nx = tf.keras.layers.Reshape((img_height * img_width, nb_labels))(m)\nx = tf.keras.layers.Activation('softmax')(x)\nx = tf.keras.layers.Reshape((img_height, img_width, nb_labels))(x)\n\nfcn_model = tf.keras.models.Model(inputs=input_tensor, outputs=x)"
  },
  {
    "path": "Chapter05/data.py",
    "content": "from __future__ import print_function\n\nimport os\nimport numpy as np\n\nfrom skimage.io import imsave, imread\n\ndata_path = 'raw/'\n\nimage_rows = 420\nimage_cols = 580\n\n\ndef create_train_data():\n    train_data_path = os.path.join(data_path, 'train')\n    images = os.listdir(train_data_path)\n    total = int(len(images) / 2)\n\n    imgs = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)\n    imgs_mask = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)\n\n    i = 0\n    print('-'*30)\n    print('Creating training images...')\n    print('-'*30)\n    for image_name in images:\n        if 'mask' in image_name:\n            continue\n        image_mask_name = image_name.split('.')[0] + '_mask.tif'\n        img = imread(os.path.join(train_data_path, image_name), as_grey=True)\n        img_mask = imread(os.path.join(train_data_path, image_mask_name), as_grey=True)\n\n        img = np.array([img])\n        img_mask = np.array([img_mask])\n\n        imgs[i] = img\n        imgs_mask[i] = img_mask\n\n        if i % 100 == 0:\n            print('Done: {0}/{1} images'.format(i, total))\n        i += 1\n    print('Loading done.')\n\n    np.save('imgs_train.npy', imgs)\n    np.save('imgs_mask_train.npy', imgs_mask)\n    print('Saving to .npy files done.')\n\n\ndef load_train_data():\n    imgs_train = np.load('imgs_train.npy')\n    imgs_mask_train = np.load('imgs_mask_train.npy')\n    return imgs_train, imgs_mask_train\n\n\ndef create_test_data():\n    train_data_path = os.path.join(data_path, 'test')\n    images = os.listdir(train_data_path)\n    total = len(images)\n\n    imgs = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)\n    imgs_id = np.ndarray((total, ), dtype=np.int32)\n\n    i = 0\n    print('-'*30)\n    print('Creating test images...')\n    print('-'*30)\n    for image_name in images:\n        img_id = int(image_name.split('.')[0])\n        img = imread(os.path.join(train_data_path, image_name), 
as_grey=True)\n\n        img = np.array([img])\n\n        imgs[i] = img\n        imgs_id[i] = img_id\n\n        if i % 100 == 0:\n            print('Done: {0}/{1} images'.format(i, total))\n        i += 1\n    print('Loading done.')\n\n    np.save('imgs_test.npy', imgs)\n    np.save('imgs_id_test.npy', imgs_id)\n    print('Saving to .npy files done.')\n\n\ndef load_test_data():\n    imgs_test = np.load('imgs_test.npy')\n    imgs_id = np.load('imgs_id_test.npy')\n    return imgs_test, imgs_id\n\nif __name__ == '__main__':\n    create_train_data()\n    create_test_data()"
  },
  {
    "path": "Chapter06/1_contrastive_loss.py",
    "content": "import tensorflow as tf\n\n\ndef contrastive_loss(model_1, model_2, label, margin=0.1):\n    distance = tf.reduce_sum(tf.square(model_1 - model_2), 1)\n    loss = label * tf.square(\n        tf.maximum(0., margin - tf.sqrt(distance))) + (1 - label) * distance\n    loss = 0.5 * tf.reduce_mean(loss)\n    return loss"
  },
  {
    "path": "Chapter06/2_siamese_network.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 300\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3],\n                      activation=tf.nn.relu, name=None):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        name=name\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.relu, name=None):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation,\n        name=name\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_model(input_, reuse=False):\n    # the two towers of a siamese network must share weights, so the layers\n    # get fixed names inside a variable scope that the second call reuses\n    with tf.variable_scope('siamese', reuse=reuse):\n        input_reshape = tf.reshape(input_, [-1, 28, 28, 1],\n                                   name='input_reshape')\n        convolution_layer_1 = convolution_layer(input_reshape, 64, name='conv_1')\n        pooling_layer_1 = pooling_layer(convolution_layer_1)\n        convolution_layer_2 = convolution_layer(pooling_layer_1, 128, name='conv_2')\n        pooling_layer_2 = pooling_layer(convolution_layer_2)\n        flattened_pool = tf.reshape(pooling_layer_2, [-1, 5 * 5 * 128],\n                                    name='flattened_pool')\n        dense_layer_bottleneck = dense_layer(flattened_pool, 1024, name='dense_bottleneck')\n    return dense_layer_bottleneck\n\nleft_input = tf.placeholder(tf.float32, shape=[None, input_size])\nright_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\nleft_bottleneck = get_model(left_input)\nright_bottleneck = get_model(right_input, reuse=True)\ndense_layer_bottleneck = tf.concat([left_bottleneck, right_bottleneck], 1)\ndropout_bool = tf.placeholder(tf.bool)\ndropout_layer = tf.layers.dropout(\n        inputs=dense_layer_bottleneck,\n        rate=0.4,\n        training=dropout_bool\n    )\n# no activation here: the cross-entropy op expects raw logits\nlogits = dense_layer(dropout_layer, no_classes, activation=None)\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(\n        labels=y_input, logits=logits)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\n\nwith tf.name_scope('accuracy'):\n    with tf.name_scope('correct_prediction'):\n        predictions = tf.argmax(logits, 1)\n        correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\n    with tf.name_scope('accuracy'):\n        accuracy_operation = tf.reduce_mean(\n            tf.cast(correct_predictions, tf.float32))\ntf.summary.scalar('accuracy', accuracy_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\ntest_summary_writer = tf.summary.FileWriter('/tmp/test')\n\ntest_images, test_labels = mnist_data.test.images, mnist_data.test.labels\n\nfor batch_no in range(total_batches):\n    
mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, train_labels = mnist_batch[0], mnist_batch[1]\n    _, merged_summary = session.run([optimiser, merged_summary_operation],\n                                    feed_dict={\n        left_input: train_images,\n        right_input: train_images,\n        y_input: train_labels,\n        dropout_bool: True\n    })\n    train_summary_writer.add_summary(merged_summary, batch_no)\n    if batch_no % 10 == 0:\n        merged_summary, _ = session.run([merged_summary_operation,\n                                         accuracy_operation], feed_dict={\n            left_input: test_images,\n            right_input: test_images,\n            y_input: test_labels,\n            dropout_bool: False\n        })\n        test_summary_writer.add_summary(merged_summary, batch_no)\n"
  },
  {
    "path": "Chapter06/3_triplet_loss.py",
    "content": "import tensorflow as tf\n\n\ndef triplet_loss(anchor_face, positive_face, negative_face, margin):\n    def get_distance(x, y):\n        return tf.reduce_sum(tf.square(tf.subtract(x, y)), 1)\n\n    positive_distance = get_distance(anchor_face, positive_face)\n    negative_distance = get_distance(anchor_face, negative_face)\n    total_distance = tf.add(tf.subtract(positive_distance, negative_distance), margin)\n    return tf.reduce_mean(tf.maximum(total_distance, 0.0), 0)\n"
  },
  {
    "path": "Chapter06/4_triplet_mining.py",
    "content": "from scipy.spatial.distance import cdist\nimport numpy as np\n\n\ndef mine_triplets(anchor, targets, negative_samples):\n    # query-to-target distance matrix, kept as an ndarray for the mining below\n    QnT_dists = cdist(anchor, targets, 'cosine')\n    distances = QnT_dists.tolist()\n    # indices whose distance equals the query's own entry are duplicates\n    QnQ_duplicated = [\n        [target_index for target_index, dist in enumerate(QnQ_dist) if dist == QnQ_dist[query_index]]\n        for query_index, QnQ_dist in enumerate(distances)]\n    # push duplicates to infinity so they are never mined as negatives\n    for i, QnT_dist in enumerate(QnT_dists):\n        for j in QnQ_duplicated[i]:\n            QnT_dist.itemset(j, np.inf)\n\n    # the hardest negatives are the closest remaining targets\n    QnT_dists_topk = QnT_dists.argsort(axis=1)[:, :negative_samples]\n    top_k_index = np.array([np.insert(QnT_dist, 0, i) for i, QnT_dist in enumerate(QnT_dists_topk)])\n    return top_k_index"
  },
  {
    "path": "Chapter06/5_fiducial_points.py",
    "content": "import fiducial_data\nimport tensorflow as tf\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3],\n                      activation=tf.nn.tanh):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.tanh):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\nimage_size = 40\nno_landmark = 10\nno_gender_classes = 2\nno_smile_classes = 2\nno_glasses_classes = 2\nno_headpose_classes = 5\nbatch_size = 100\ntotal_batches = 300\n\nimage_input = tf.placeholder(tf.float32, shape=[None, image_size, image_size])\n\nlandmark_input = tf.placeholder(tf.float32, shape=[None, no_landmark])\ngender_input = tf.placeholder(tf.float32, shape=[None, no_gender_classes])\nsmile_input = tf.placeholder(tf.float32, shape=[None, no_smile_classes])\nglasses_input = tf.placeholder(tf.float32, 
shape=[None, no_glasses_classes])\nheadpose_input = tf.placeholder(tf.float32, shape=[None, no_headpose_classes])\n\nimage_input_reshape = tf.reshape(image_input, [-1, image_size, image_size, 1],\n                             name='input_reshape')\n\nconvolution_layer_1 = convolution_layer(image_input_reshape, 16)\npooling_layer_1 = pooling_layer(convolution_layer_1)\nconvolution_layer_2 = convolution_layer(pooling_layer_1, 48)\npooling_layer_2 = pooling_layer(convolution_layer_2)\nconvolution_layer_3 = convolution_layer(pooling_layer_2, 64)\npooling_layer_3 = pooling_layer(convolution_layer_3)\nconvolution_layer_4 = convolution_layer(pooling_layer_3, 64)\n# flatten robustly instead of hard-coding the spatial size\nflattened_pool = tf.layers.flatten(convolution_layer_4, name='flattened_pool')\ndense_layer_bottleneck = dense_layer(flattened_pool, 1024)\ndropout_bool = tf.placeholder(tf.bool)\ndropout_layer = tf.layers.dropout(\n        inputs=dense_layer_bottleneck,\n        rate=0.4,\n        training=dropout_bool\n    )\n# the heads emit raw logits; softmax_cross_entropy_with_logits applies the softmax\nlandmark_logits = dense_layer(dropout_layer, 10, activation=None)\nsmile_logits = dense_layer(dropout_layer, 2, activation=None)\nglass_logits = dense_layer(dropout_layer, 2, activation=None)\ngender_logits = dense_layer(dropout_layer, 2, activation=None)\nheadpose_logits = dense_layer(dropout_layer, 5, activation=None)\n\nlandmark_loss = 0.5 * tf.reduce_mean(\n    tf.square(landmark_input - landmark_logits))\n\ngender_loss = tf.reduce_mean(\n    tf.nn.softmax_cross_entropy_with_logits(\n        labels=gender_input, logits=gender_logits))\n\nsmile_loss = tf.reduce_mean(\n    tf.nn.softmax_cross_entropy_with_logits(\n        labels=smile_input, logits=smile_logits))\n\nglass_loss = tf.reduce_mean(\n    tf.nn.softmax_cross_entropy_with_logits(\n        labels=glasses_input, logits=glass_logits))\n\nheadpose_loss = tf.reduce_mean(\n    tf.nn.softmax_cross_entropy_with_logits(\n        labels=headpose_input, logits=headpose_logits))\n\nloss_operation = landmark_loss + gender_loss + \\\n                 smile_loss + glass_loss + headpose_loss\n\noptimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\nfiducial_test_data = fiducial_data.test\n\nfor batch_no in range(total_batches):\n    fiducial_data_batch = fiducial_data.train.next_batch(batch_size)\n    loss, _landmark_loss, _ = session.run(\n        [loss_operation, landmark_loss, optimiser],\n        feed_dict={\n            image_input: fiducial_data_batch.images,\n            landmark_input: fiducial_data_batch.landmarks,\n            gender_input: fiducial_data_batch.gender,\n            smile_input: fiducial_data_batch.smile,\n            glasses_input: fiducial_data_batch.glasses,\n            headpose_input: fiducial_data_batch.pose,\n            dropout_bool: True\n    })\n    if batch_no % 10 == 0:\n        loss, _landmark_loss = session.run(\n            [loss_operation, landmark_loss],\n            feed_dict={\n                image_input: fiducial_test_data.images,\n                landmark_input: fiducial_test_data.landmarks,\n                gender_input: fiducial_test_data.gender,\n                smile_input: fiducial_test_data.smile,\n                glasses_input: fiducial_test_data.glasses,\n                headpose_input: fiducial_test_data.pose,\n                dropout_bool: False\n            })"
  },
  {
    "path": "Chapter06/6_extract_features.py",
    "content": "from scipy import misc\nimport tensorflow as tf\nimport numpy as np\nimport os\nfrom facenet import load_model, prewhiten\nimport align.detect_face\n\n\ndef load_and_align_data(image_paths,\n                        image_size=160,\n                        margin=44,\n                        gpu_memory_fraction=1.0):\n    minsize = 20\n    threshold = [0.6, 0.7, 0.7]\n    factor = 0.709\n\n    print('Creating networks and loading parameters')\n    with tf.Graph().as_default():\n        gpu_options = tf.GPUOptions(\n            per_process_gpu_memory_fraction=gpu_memory_fraction)\n        sess = tf.Session(config=tf.ConfigProto(\n            gpu_options=gpu_options, log_device_placement=False))\n        with sess.as_default():\n            pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None)\n\n    nrof_samples = len(image_paths)\n    img_list = [None] * nrof_samples\n    for i in range(nrof_samples):\n        img = misc.imread(os.path.expanduser(image_paths[i]), mode='RGB')\n        img_size = np.asarray(img.shape)[0:2]\n        bounding_boxes, _ = align.detect_face.detect_face(\n            img, minsize, pnet, rnet, onet, threshold, factor)\n        det = np.squeeze(bounding_boxes[0, 0:4])\n        bb = np.zeros(4, dtype=np.int32)\n        bb[0] = np.maximum(det[0] - margin / 2, 0)\n        bb[1] = np.maximum(det[1] - margin / 2, 0)\n        bb[2] = np.minimum(det[2] + margin / 2, img_size[1])\n        bb[3] = np.minimum(det[3] + margin / 2, img_size[0])\n        cropped = img[bb[1]:bb[3], bb[0]:bb[2], :]\n        aligned = misc.imresize(\n            cropped, (image_size, image_size), interp='bilinear')\n        prewhitened = prewhiten(aligned)\n        img_list[i] = prewhitened\n    images = np.stack(img_list)\n    return images\n\n\ndef get_face_embeddings(image_paths,\n                        model=''):\n    images = load_and_align_data(image_paths)\n    with tf.Graph().as_default():\n        with 
tf.Session() as sess:\n            load_model(model)\n            images_placeholder = tf.get_default_graph().get_tensor_by_name(\n                \"input:0\")\n            embeddings = tf.get_default_graph().get_tensor_by_name(\n                \"embeddings:0\")\n            phase_train_placeholder = tf.get_default_graph().get_tensor_by_name(\n                \"phase_train:0\")\n            feed_dict = {images_placeholder: images,\n                         phase_train_placeholder: False}\n            emb = sess.run(embeddings, feed_dict=feed_dict)\n\n    return emb\n\n\ndef compute_distance(embedding_1, embedding_2):\n    dist = np.sqrt(np.sum(np.square(np.subtract(embedding_1, embedding_2))))\n    return dist\n"
  },
  {
    "path": "Chapter07/1_caption_attention.py",
    "content": "import tensorflow as tf\nimport numpy as np\n# Keras 1-style API used by the custom recurrent layer below\nfrom keras import activations, initializations, regularizers\nfrom keras import backend as K\nfrom keras.engine import InputSpec\nfrom keras.layers.recurrent import Recurrent, time_distributed_dense\n\n# placeholder values to be set for a real dataset/model\ntraining = True\nsequence_length = 0\nvocabulary_size = 0\ninput_tensor = 0\ninput_shape = 0\nembedding_dimension = 0\ndropout_prob = 0.3\nprevious_words = 0\nheight = 0\nshape = 0\ncnn_features = 0\ndepth = 0\n\nvgg_model = tf.keras.applications.vgg16.VGG16(weights='imagenet',\n                                              include_top=False,\n                                              input_tensor=input_tensor,\n                                              input_shape=input_shape)\n\nword_embedding = tf.keras.layers.Embedding(\n    vocabulary_size, embedding_dimension, input_length=sequence_length)\nembedding = word_embedding(previous_words)\nembedding = tf.keras.layers.Activation('relu')(embedding)\nembedding = tf.keras.layers.Dropout(dropout_prob)(embedding)\n\ncnn_features_flattened = tf.keras.layers.Reshape((height * height, shape))(cnn_features)\nnet = tf.keras.layers.GlobalAveragePooling1D()(cnn_features_flattened)\n\nnet = tf.keras.layers.Dense(embedding_dimension, activation='relu')(net)\nnet = tf.keras.layers.Dropout(dropout_prob)(net)\nnet = tf.keras.layers.RepeatVector(sequence_length)(net)\nnet = tf.keras.layers.concatenate([net, embedding])\nnet = tf.keras.layers.Dropout(dropout_prob)(net)\n\n\n\n\nclass LSTM_sent(Recurrent):\n    def __init__(self, output_dim,\n                 init='glorot_uniform', inner_init='orthogonal',\n                 forget_bias_init='one', activation='tanh',\n                 inner_activation='hard_sigmoid',\n                 W_regularizer=None, U_regularizer=None, b_regularizer=None,\n                 dropout_W=0., dropout_U=0., sentinel=True, **kwargs):\n        self.output_dim = output_dim\n        self.init = initializations.get(init)\n        self.inner_init = initializations.get(inner_init)\n        self.forget_bias_init = initializations.get(forget_bias_init)\n        self.activation = activations.get(activation)\n        
self.inner_activation = activations.get(inner_activation)\n        self.W_regularizer = regularizers.get(W_regularizer)\n        self.U_regularizer = regularizers.get(U_regularizer)\n        self.b_regularizer = regularizers.get(b_regularizer)\n        self.dropout_W, self.dropout_U = dropout_W, dropout_U\n        self.sentinel = sentinel\n        if self.dropout_W or self.dropout_U:\n            self.uses_learning_phase = True\n        super(LSTM_sent, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n        self.input_spec = [InputSpec(shape=input_shape)]\n        input_dim = input_shape[2]\n        self.input_dim = input_dim\n\n        if self.stateful:\n            self.reset_states()\n        else:\n            if self.sentinel:\n                self.states = [None, None]\n            else:\n                self.states = [None]\n\n        self.W_i = self.init((input_dim, self.output_dim),\n                             name='{}_W_i'.format(self.name))\n        self.U_i = self.inner_init((self.output_dim, self.output_dim),\n                                   name='{}_U_i'.format(self.name))\n        self.b_i = K.zeros((self.output_dim,), name='{}_b_i'.format(self.name))\n\n        self.W_f = self.init((input_dim, self.output_dim),\n                             name='{}_W_f'.format(self.name))\n        self.U_f = self.inner_init((self.output_dim, self.output_dim),\n                                   name='{}_U_f'.format(self.name))\n        self.b_f = self.forget_bias_init((self.output_dim,),\n                                         name='{}_b_f'.format(self.name))\n\n        self.W_c = self.init((input_dim, self.output_dim),\n                             name='{}_W_c'.format(self.name))\n        self.U_c = self.inner_init((self.output_dim, self.output_dim),\n                                   name='{}_U_c'.format(self.name))\n        self.b_c = K.zeros((self.output_dim,), name='{}_b_c'.format(self.name))\n\n        self.W_o = self.init((input_dim, 
self.output_dim),\n                             name='{}_W_o'.format(self.name))\n        self.U_o = self.inner_init((self.output_dim, self.output_dim),\n                                   name='{}_U_o'.format(self.name))\n        self.b_o = K.zeros((self.output_dim,), name='{}_b_o'.format(self.name))\n\n        if self.sentinel:\n            # sentinel gate\n            self.W_g = self.init((input_dim, self.output_dim),\n                                 name='{}_W_g'.format(self.name))\n            self.U_g = self.inner_init((self.output_dim, self.output_dim),\n                                       name='{}_U_g'.format(self.name))\n            self.b_g = K.zeros((self.output_dim,), name='{}_b_g'.format(self.name))\n\n\n            self.trainable_weights = [self.W_i, self.U_i, self.b_i,\n                                      self.W_c, self.U_c, self.b_c,\n                                      self.W_f, self.U_f, self.b_f,\n                                      self.W_o, self.U_o, self.b_o,\n                                      self.W_g, self.U_g, self.b_g]\n        else:\n            self.trainable_weights = [self.W_i, self.U_i, self.b_i,\n                                      self.W_c, self.U_c, self.b_c,\n                                      self.W_f, self.U_f, self.b_f,\n                                      self.W_o, self.U_o, self.b_o]\n\n        if self.initial_weights is not None:\n            self.set_weights(self.initial_weights)\n            del self.initial_weights\n\n    def reset_states(self):\n        assert self.stateful, 'Layer must be stateful.'\n        input_shape = self.input_spec[0].shape\n        if not input_shape[0]:\n            raise Exception('If a RNN is stateful, a complete ' +\n                            'input_shape must be provided (including batch size).')\n        if hasattr(self, 'states'):\n            K.set_value(self.states[0],\n                        np.zeros((input_shape[0], self.output_dim)))\n            
K.set_value(self.states[1],\n                        np.zeros((input_shape[0], self.output_dim)))\n        else:\n            self.states = [K.zeros((input_shape[0], self.output_dim)),\n                           K.zeros((input_shape[0], self.output_dim))]\n\n    def preprocess_input(self, x, train=False):\n        if self.consume_less == 'cpu':\n            if train and (0 < self.dropout_W < 1):\n                dropout = self.dropout_W\n            else:\n                dropout = 0\n            input_shape = self.input_spec[0].shape\n            input_dim = input_shape[2]\n            timesteps = input_shape[1]\n\n            x_i = time_distributed_dense(x, self.W_i, self.b_i, dropout,\n                                         input_dim, self.output_dim, timesteps)\n            x_f = time_distributed_dense(x, self.W_f, self.b_f, dropout,\n                                         input_dim, self.output_dim, timesteps)\n            x_c = time_distributed_dense(x, self.W_c, self.b_c, dropout,\n                                         input_dim, self.output_dim, timesteps)\n            x_o = time_distributed_dense(x, self.W_o, self.b_o, dropout,\n                                         input_dim, self.output_dim, timesteps)\n            if self.sentinel:\n                x_g = time_distributed_dense(x, self.W_g, self.b_g, dropout,\n                                             input_dim, self.output_dim, timesteps)\n                return K.concatenate([x_i, x_f, x_c, x_o,x_g], axis=2)\n            else:\n                return K.concatenate([x_i, x_f, x_c, x_o], axis=2)\n\n        else:\n            return x\n\n    def step(self, x, states):\n        h_tm1 = states[0]\n        c_tm1 = states[1]\n        B_U = states[2]\n        B_W = states[3]\n\n        if self.consume_less == 'cpu':\n            x_i = x[:, :self.output_dim]\n            x_f = x[:, self.output_dim: 2 * self.output_dim]\n            x_c = x[:, 2 * self.output_dim: 3 * self.output_dim]\n            
x_o = x[:, 3 * self.output_dim: 4 * self.output_dim]\n            if self.sentinel:\n                x_g = x[:, 4 * self.output_dim:]\n        else:\n            x_i = K.dot(x, self.W_i) + self.b_i\n            x_f = K.dot(x * B_W[1], self.W_f) + self.b_f\n            x_c = K.dot(x * B_W[2], self.W_c) + self.b_c\n            x_o = K.dot(x * B_W[3], self.W_o) + self.b_o\n            if self.sentinel:\n                x_g = K.dot(x * B_W[4], self.W_g) + self.b_g\n\n        i = self.inner_activation(x_i + K.dot(h_tm1 * B_U[0], self.U_i))\n        f = self.inner_activation(x_f + K.dot(h_tm1 * B_U[1], self.U_f))\n        c = f * c_tm1 + i * self.activation(x_c + K.dot(h_tm1 * B_U[2], self.U_c))\n        o = self.inner_activation(x_o + K.dot(h_tm1 * B_U[3], self.U_o))\n        h = o * self.activation(c)\n        if self.sentinel:\n            g = self.inner_activation(x_g + K.dot(h_tm1 * B_U[4], self.U_g))\n            s = g * self.activation(c)\n            return [h,s], [h, c]\n        else:\n            return h, [h, c]\n\n    def get_constants(self, x):\n        constants = []\n        if self.sentinel:\n            Ngate = 5\n        else:\n            Ngate = 4\n        if 0 < self.dropout_U < 1:\n            ones = K.ones_like(K.reshape(x[:, 0, 0], (-1, 1)))\n            ones = K.concatenate([ones] * self.output_dim, 1)\n            B_U = [K.dropout(ones, self.dropout_U) for _ in range(Ngate)]\n            constants.append(B_U)\n        else:\n            constants.append([K.cast_to_floatx(1.) 
for _ in range(Ngate)])\n\n        if self.consume_less == 'cpu' and 0 < self.dropout_W < 1:\n            input_shape = self.input_spec[0].shape\n            input_dim = input_shape[-1]\n            ones = K.ones_like(K.reshape(x[:, 0, 0], (-1, 1)))\n            ones = K.concatenate([ones] * input_dim, 1)\n            B_W = [K.dropout(ones, self.dropout_W) for _ in range(Ngate)]\n            constants.append(B_W)\n        else:\n            constants.append([K.cast_to_floatx(1.) for _ in range(Ngate)])\n        return constants\n\n\n    def get_output_shape_for(self, input_shape):\n        if isinstance(input_shape, list) and len(input_shape) > 1:\n            input_shape = input_shape[0]\n        if self.return_sequences:\n            output_shape = (input_shape[0], input_shape[1], self.output_dim)\n        else:\n            output_shape = (input_shape[0], self.output_dim)\n        #the hidden state and the sentinel have the same shape\n        if self.sentinel:\n            return [output_shape, output_shape]\n        else:\n            return output_shape\n\n    def compute_mask(self, input, mask):\n        if self.return_sequences:\n            if self.sentinel:\n                return [mask, mask]\n            else :\n                return mask\n        else:\n            if self.sentinel:\n                return [None, None]\n            else:\n                return None\n\n    def call(self, x, mask=None):\n        input_shape = self.input_spec[0].shape\n        if K._BACKEND == 'tensorflow':\n            if not input_shape[1]:\n                raise Exception('When using TensorFlow, you should define '\n                                'explicitly the number of timesteps of '\n                                'your sequences.\\n'\n                                'If your first layer is an Embedding, '\n                                'make sure to pass it an \"input_length\" '\n                                'argument. 
Otherwise, make sure '\n                                'the first layer has '\n                                'an \"input_shape\" or \"batch_input_shape\" '\n                                'argument, including the time axis. '\n                                'Found input shape at layer ' + self.name +\n                                ': ' + str(input_shape))\n        if self.stateful:\n            initial_states = self.states\n        else:\n            initial_states = self.get_initial_states(x)\n        constants = self.get_constants(x)\n        preprocessed_input = self.preprocess_input(x)\n\n        last_output, outputs, states = K.rnn(self.step, preprocessed_input,\n                                             initial_states,\n                                             go_backwards=self.go_backwards,\n                                             mask=mask,\n                                             constants=constants,\n                                             unroll=self.unroll,\n                                             input_length=input_shape[1])\n\n\n        if self.stateful:\n            self.updates = []\n            for i in range(len(states)):\n                self.updates.append((self.states[i], states[i]))\n        if self.sentinel:\n            outputs = K.permute_dimensions(outputs, [0,2,1,3])\n            if self.return_sequences:\n                return [outputs[0], outputs[1]]\n            else:\n                return [last_output[0],last_output[1]]\n        else:\n            if self.return_sequences:\n                return outputs\n            else:\n                return last_output\n\n\n    def get_config(self):\n        config = {\"output_dim\": self.output_dim,\n                  \"init\": self.init.__name__,\n                  \"inner_init\": self.inner_init.__name__,\n                  \"forget_bias_init\": self.forget_bias_init.__name__,\n                  \"activation\": self.activation.__name__,\n                  
\"inner_activation\": self.inner_activation.__name__,\n                  \"W_regularizer\": self.W_regularizer.get_config() if self.W_regularizer else None,\n                  \"U_regularizer\": self.U_regularizer.get_config() if self.U_regularizer else None,\n                  \"b_regularizer\": self.b_regularizer.get_config() if self.b_regularizer else None,\n                  \"dropout_W\": self.dropout_W,\n                  \"dropout_U\": self.dropout_U,\n                  \"sentinel\": self.sentinel}\n        base_config = super(LSTM_sent, self).get_config()\n        return dict(list(base_config.items()) + list(config.items()))\n\n\n\n\nlstm_ = LSTM_sent(output_dim=args_dict.lstm_dim,\n                  return_sequences=True, stateful=True,\n                  dropout_W=dropout_prob,\n                  dropout_U=dropout_prob,\n                  sentinel=True, name='hs')\nh, s = lstm_(x)\n\nnum_vfeats = wh * wh\nnum_vfeats = num_vfeats + 1\n\n\nh_out_linear = tf.keras.layers.Convolution1D(\n    depth, 1, activation='tanh', padding='same')(h)\nh_out_linear = tf.keras.layers.Dropout(\n    dropout_prob)(h_out_linear)\nh_out_embed = tf.keras.layers.Convolution1D(\n    embedding_dimension, 1, padding='same')(h_out_linear)\nz_h_embed = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.RepeatVector(num_vfeats))(h_out_embed)\n\nVi = tf.keras.layers.Convolution1D(\n    depth, 1, padding='same', activation='relu')(V)\n\nVi = tf.keras.layers.Dropout(dropout_prob)(Vi)\nVi_emb = tf.keras.layers.Convolution1D(\n    embedding_dimension, 1, padding='same', activation='relu')(Vi)\n\nz_v_linear = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.RepeatVector(sequence_length))(Vi)\nz_v_embed = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.RepeatVector(sequence_length))(Vi_emb)\n\nz_v_linear = tf.keras.layers.Permute((2, 1, 3))(z_v_linear)\nz_v_embed = tf.keras.layers.Permute((2, 1, 3))(z_v_embed)\n\nfake_feat = tf.keras.layers.Convolution1D(\n   
 depth, 1, activation='relu', padding='same')(s)\nfake_feat = tf.keras.layers.Dropout(dropout_prob)(fake_feat)\n\nfake_feat_embed = tf.keras.layers.Convolution1D(\n    embedding_dimension, 1, padding='same')(fake_feat)\nz_s_linear = tf.keras.layers.Reshape((sequence_length, 1, depth))(fake_feat)\nz_s_embed = tf.keras.layers.Reshape(\n    (sequence_length, 1, embedding_dimension))(fake_feat_embed)\n\nz_v_linear = tf.keras.layers.Concatenate(axis=-2)([z_v_linear, z_s_linear])\nz_v_embed = tf.keras.layers.Concatenate(axis=-2)([z_v_embed, z_s_embed])\n\nz = tf.keras.layers.Add()([z_h_embed, z_v_embed])\nz = tf.keras.layers.Dropout(dropout_prob)(z)\nz = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.Activation('tanh'))(z)\nattention = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.Convolution1D(1, 1, padding='same'))(z)\n\nattention = tf.keras.layers.Reshape((sequence_length, num_vfeats))(attention)\nattention = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.Activation('softmax'))(attention)\nattention = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.RepeatVector(depth))(attention)\nattention = tf.keras.layers.Permute((1, 3, 2))(attention)\nw_Vi = tf.keras.layers.Add()([attention, z_v_linear])\nsumpool = tf.keras.layers.Lambda(lambda x: K.sum(x, axis=-2),\n               output_shape=(depth,))\nc_vec = tf.keras.layers.TimeDistributed(sumpool)(w_Vi)\natten_out = tf.keras.layers.Add()([h_out_linear, c_vec])\nh = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.Dense(embedding_dimension, activation='tanh'))(atten_out)\nh = tf.keras.layers.Dropout(dropout_prob)(h)\n\npredictions = tf.keras.layers.TimeDistributed(\n    tf.keras.layers.Dense(vocabulary_size, activation='softmax'))(h)\n\nmodel = tf.keras.models.Model(inputs=[cnn_features, prev_words], outputs=predictions)\nopt = get_opt(args_dict)\n\n\nin_im = tf.keras.layers.Input(\n    batch_shape=(args_dict.bs, args_dict.imsize, args_dict.imsize, 3), 
name='image')\n\nwh = vgg_model.output_shape[1]\ndim = vgg_model.output_shape[3]\n\nif not args_dict.cnn_train:\n    for i, layer in enumerate(vgg_model.layers):\n        if i > args_dict.finetune_start_layer:\n            layer.trainable = False\n\nimfeats = vgg_model(in_im)\ncnn_features = tf.keras.layers.Input(batch_shape=(args_dict.bs, wh, wh, dim))\nprev_words = tf.keras.layers.Input(batch_shape=(args_dict.bs, sequence_length))\nlang_model = language_model(args_dict, wh, dim, cnn_features, prev_words)\n\nout = lang_model([imfeats, prev_words])\n\nmodel = tf.keras.models.Model(inputs=[in_im, prev_words], outputs=out)\n"
  },
  {
    "path": "Chapter08/1_style_transfer.py",
    "content": "import numpy as np\nfrom PIL import Image\nfrom scipy.optimize import fmin_l_bfgs_b\nfrom scipy.misc import imsave\nfrom vgg16_avg import VGG16_Avg\nfrom keras import metrics\nfrom keras.models import Model\nfrom keras import backend as K\n\nimport tensorflow as tf\n\nwork_dir = ''\n\ncontent_image = Image.open(work_dir + 'bird_orig.png')\n\nimagenet_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)\n\n\ndef subtract_imagenet_mean(image):\n    return (image - imagenet_mean)[:, :, :, ::-1]\n\n\ndef add_imagenet_mean(image, s):\n    return np.clip(image.reshape(s)[:, :, :, ::-1] + imagenet_mean, 0, 255)\n\n\nvgg_model = VGG16_Avg(include_top=False)\ncontent_layer = vgg_model.get_layer('block5_conv1').output\ncontent_model = Model(vgg_model.input, content_layer)\n\ncontent_image_array = subtract_imagenet_mean(np.expand_dims(np.array(content_image), 0))\ncontent_image_shape = content_image_array.shape\ntarget = K.variable(content_model.predict(content_image_array))\n\n\nclass ConvexOptimiser(object):\n    def __init__(self, cost_function, tensor_shape):\n        self.cost_function = cost_function\n        self.tensor_shape = tensor_shape\n        self.gradient_values = None\n\n    def loss(self, point):\n        loss_value, self.gradient_values = self.cost_function([point.reshape(self.tensor_shape)])\n        return loss_value.astype(np.float64)\n\n    def gradients(self, point):\n        return self.gradient_values.flatten().astype(np.float64)\n\n\nmse_loss = metrics.mean_squared_error(content_layer, target)\ngrads = K.gradients(mse_loss, vgg_model.input)\ncost_function = K.function([vgg_model.input], [mse_loss]+grads)\noptimiser = ConvexOptimiser(cost_function, content_image_shape)\n\n\ndef optimise(optimiser, iterations, point, tensor_shape, file_name):\n    for i in range(iterations):\n        point, min_val, info = fmin_l_bfgs_b(optimiser.loss, point.flatten(),\n                                         fprime=optimiser.gradients, 
maxfun=20)\n        point = np.clip(point, -127, 127)\n        print('Loss:', min_val)\n        imsave(work_dir + 'gen_{}_{}.png'.format(file_name, i), add_imagenet_mean(point.copy(), tensor_shape)[0])\n    return point\n\n\ndef generate_rand_img(shape):\n    return np.random.uniform(-2.5, 2.5, shape)\ngenerated_image = generate_rand_img(content_image_shape)\n\niterations = 2\ngenerated_image = optimise(optimiser, iterations, generated_image, content_image_shape, 'content')\n\n\n# Style transfer\nstyle_image = Image.open(work_dir + 'starry_night.png')\nstyle_image = style_image.resize(tuple(np.divide(style_image.size, 3.5).astype('int32')))\nstyle_image_array = subtract_imagenet_mean(np.expand_dims(style_image, 0)[:, :, :, :3])\nstyle_image_shape = style_image_array.shape\nvgg_model = VGG16_Avg(include_top=False, input_shape=style_image_shape[1:])\nstyle_layers = {layer.name: layer.output for layer in vgg_model.layers}\nstyle_features = [style_layers['block{}_conv1'.format(o)] for o in range(1, 3)]\nlayers_model = Model(vgg_model.input, style_features)\nstyle_targets = [K.variable(feature) for feature in layers_model.predict(style_image_array)]\n\n\ndef grammian_matrix(matrix):\n    flattened_matrix = K.batch_flatten(K.permute_dimensions(matrix, (2, 0, 1)))\n    matrix_transpose_dot = K.dot(flattened_matrix, K.transpose(flattened_matrix))\n    element_count = matrix.get_shape().num_elements()\n    return matrix_transpose_dot / element_count\n\n\ndef style_mse_loss(x, y):\n    return metrics.mse(grammian_matrix(x), grammian_matrix(y))\n\n\nstyle_loss = sum(style_mse_loss(l1[0], l2[0]) for l1, l2 in zip(style_features, style_targets))\ngrads = K.gradients(style_loss, vgg_model.input)\nstyle_fn = K.function([vgg_model.input], [style_loss]+grads)\noptimiser = ConvexOptimiser(style_fn, style_image_shape)\n\ngenerated_image = generate_rand_img(style_image_shape)\ngenerated_image = optimise(optimiser, iterations, generated_image, style_image_shape, 'style')\n\n\n\nw, h = 
style_image.size\nsrc = content_image_array[:, :h, :w]\noutputs = {l.name: l.output for l in vgg_model.layers}\n\nstyle_layers = [outputs['block{}_conv2'.format(o)] for o in range(1, 6)]\ncontent_name = 'block4_conv2'\ncontent_layer = outputs[content_name]\n\nstyle_model = Model(vgg_model.input, style_layers)\nstyle_targs = [K.variable(o) for o in style_model.predict(style_image_array)]\n\ncontent_model = Model(vgg_model.input, content_layer)\ncontent_targ = K.variable(content_model.predict(src))\n\nstyle_wgts = [0.05, 0.2, 0.2, 0.25, 0.3]\n\nloss = sum(style_mse_loss(l1[0], l2[0])*w for l1, l2, w in zip(style_layers, style_targs, style_wgts))\nloss += metrics.mse(content_layer, content_targ)/10\n\ngrads = K.gradients(loss, vgg_model.input)\ntransfer_fn = K.function([vgg_model.input], [loss]+grads)\noptimiser = ConvexOptimiser(transfer_fn, style_image_shape)\ngenerated_image = generate_rand_img(style_image_shape)\ngenerated_image = optimise(optimiser, iterations, generated_image, style_image_shape, 'transfer')\n\n\n\n# Content plus style transfer\n\n\nstyle_width, style_height = style_image.size\ncontent_image_array = content_image_array[:, :style_height, :style_width]\n\nstyle_layers_2 = [outputs['block{}_conv2'.format(block_no)] for block_no in range(1, 6)]\ncontent_layer = outputs['block4_conv2']\n\nstyle_model = Model(vgg_model.input, style_layers_2)\nstyle_targets = [K.variable(o) for o in style_model.predict(style_image_array)]\n\ncontent_model = Model(vgg_model.input, content_layer)\ncontent_target = K.variable(content_model.predict(content_image_array))\n\nstyle_weights = [0.05, 0.2, 0.2, 0.25, 0.3]\n\nstyle_loss = sum(style_mse_loss(l1[0], l2[0])*w for l1, l2, w in zip(style_layers_2, style_targets, style_weights))\ncontent_loss = metrics.mse(content_layer, content_target)/10\nloss = style_loss + content_loss\n\ngradients = K.gradients(loss, vgg_model.input)\ntransfer_fn = K.function([vgg_model.input], [loss]+gradients)\noptimiser = 
ConvexOptimiser(transfer_fn, style_image_shape)\ngenerated_image = generate_rand_img(style_image_shape)\ngenerated_image = optimise(optimiser, iterations, generated_image, style_image_shape, 'content_style')\n"
  },
  {
    "path": "Chapter08/2_vanilla_gan.py",
    "content": "import tensorflow as tf\n\nbatch_size = 32\ninput_dimension = [227, 227]\nreal_images = None\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer,\n                      filters,\n                      kernel_size=[4, 4],\n                      activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef transpose_convolution_layer(input_layer,\n                                filters,\n                                kernel_size=[4, 4],\n                                activation=tf.nn.relu,\n                                strides=2):\n    layer = tf.layers.conv2d_transpose(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        strides=strides,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\ndef pooling_layer(input_layer,\n                  pool_size=[2, 2],\n                  strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    
add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer,\n                units,\n                activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_generator(input_noise, is_training=True):\n    generator = dense_layer(input_noise, 1024)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = dense_layer(generator, 7 * 7 * 256)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = tf.reshape(generator,  [-1, 7, 7, 256])\n    generator = transpose_convolution_layer(generator, 64)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = transpose_convolution_layer(generator, 32)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = convolution_layer(generator, 3)\n    generator = convolution_layer(generator, 1, activation=tf.nn.tanh)\n    print(generator)\n    return generator\n\n\ndef get_discriminator(image, is_training=True):\n    x_input_reshape = tf.reshape(image, [-1, 28, 28, 1],\n                                 name='input_reshape')\n    discriminator = convolution_layer(x_input_reshape, 64)\n    discriminator = convolution_layer(discriminator, 128)\n    discriminator = tf.layers.flatten(discriminator)\n    discriminator = dense_layer(discriminator, 1024)\n    discriminator = tf.layers.batch_normalization(discriminator, training=is_training)\n    discriminator = dense_layer(discriminator, 2)\n    return discriminator\n\ninput_noise = tf.random_normal([batch_size, input_dimension])\n\n\ngan = tf.contrib.gan.gan_model(\n    get_generator,\n    get_discriminator,\n    real_images,\n    input_noise)\n\ntf.contrib.gan.gan_train(\n    tf.contrib.gan.gan_train_ops(\n        gan,\n        
tf.contrib.gan.gan_loss(gan),\n        tf.train.AdamOptimizer(0.001),\n        tf.train.AdamOptimizer(0.0001)))\n"
  },
  {
    "path": "Chapter08/3_conditional_gan.py",
    "content": "import tensorflow as tf\nbatch_size = 32\ninput_dimension = [227, 227]\nreal_images = None\nlabels = None\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer,\n                      filters,\n                      kernel_size=[4, 4],\n                      activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef transpose_convolution_layer(input_layer,\n                                filters,\n                                kernel_size=[4, 4],\n                                activation=tf.nn.relu,\n                                strides=2):\n    layer = tf.layers.conv2d_transpose(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        strides=strides,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\ndef pooling_layer(input_layer,\n                  pool_size=[2, 2],\n                  strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    
add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer,\n                units,\n                activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_generator(input_noise, is_training=True):\n    generator = dense_layer(input_noise, 1024)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = dense_layer(generator, 7 * 7 * 256)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = tf.reshape(generator,  [-1, 7, 7, 256])\n    generator = transpose_convolution_layer(generator, 64)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = transpose_convolution_layer(generator, 32)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = convolution_layer(generator, 3)\n    generator = convolution_layer(generator, 1, activation=tf.nn.tanh)\n    print(generator)\n    return generator\n\n\ndef get_discriminator(image, is_training=True):\n    x_input_reshape = tf.reshape(image, [-1, 28, 28, 1],\n                                 name='input_reshape')\n    discriminator = convolution_layer(x_input_reshape, 64)\n    discriminator = convolution_layer(discriminator, 128)\n    discriminator = tf.layers.flatten(discriminator)\n    discriminator = dense_layer(discriminator, 1024)\n    discriminator = tf.layers.batch_normalization(discriminator, training=is_training)\n    discriminator = dense_layer(discriminator, 2)\n    return discriminator\n\ninput_noise = tf.random_normal([batch_size, input_dimension])\ngan = tf.contrib.gan.gan_model(\n    get_generator,\n    get_discriminator,\n    real_images,\n    (input_noise, labels))"
  },
  {
    "path": "Chapter08/4_adverserial_loss.py",
    "content": "import tensorflow as tf\n\nbatch_size = 32\ninput_dimension = [227, 227]\nreal_images = None\nlabels = None\n\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer,\n                      filters,\n                      kernel_size=[4, 4],\n                      activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef transpose_convolution_layer(input_layer,\n                                filters,\n                                kernel_size=[4, 4],\n                                activation=tf.nn.relu,\n                                strides=2):\n    layer = tf.layers.conv2d_transpose(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        strides=strides,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\ndef pooling_layer(input_layer,\n                  pool_size=[2, 2],\n                  strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    
add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer,\n                units,\n                activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_generator(input_noise, is_training=True):\n    generator = dense_layer(input_noise, 1024)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = dense_layer(generator, 7 * 7 * 256)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = tf.reshape(generator,  [-1, 7, 7, 256])\n    generator = transpose_convolution_layer(generator, 64)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = transpose_convolution_layer(generator, 32)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = convolution_layer(generator, 3)\n    generator = convolution_layer(generator, 1, activation=tf.nn.tanh)\n    print(generator)\n    return generator\n\n\ndef get_discriminator(image, is_training=True):\n    x_input_reshape = tf.reshape(image, [-1, 28, 28, 1],\n                                 name='input_reshape')\n    discriminator = convolution_layer(x_input_reshape, 64)\n    discriminator = convolution_layer(discriminator, 128)\n    discriminator = tf.layers.flatten(discriminator)\n    discriminator = dense_layer(discriminator, 1024)\n    discriminator = tf.layers.batch_normalization(discriminator, training=is_training)\n    discriminator = dense_layer(discriminator, 2)\n    return discriminator\n\ndef fully_connected_layer(input_layer, units):\n    return tf.layers.dense(\n        input_layer,\n        units=units,\n        activation=tf.nn.relu\n    )\n\n\ndef convolution_layer(input_layer, filter_size):\n    return  tf.layers.conv2d(\n        input_layer,\n        
filters=filter_size,\n        kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d(),\n        kernel_size=3,\n        strides=2\n    )\n\n\ndef deconvolution_layer(input_layer, filter_size, activation=tf.nn.relu):\n    return tf.layers.conv2d_transpose(\n        input_layer,\n        filters=filter_size,\n        kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d(),\n        kernel_size=3,\n        activation=activation,\n        strides=2\n    )\n\ndef get_autoencoder():\n    input_layer = tf.placeholder(tf.float32, [None, 128, 128, 3])\n    convolution_layer_1 = convolution_layer(input_layer, 1024)\n    convolution_layer_2 = convolution_layer(convolution_layer_1, 512)\n    convolution_layer_3 = convolution_layer(convolution_layer_2, 256)\n    convolution_layer_4 = convolution_layer(convolution_layer_3, 128)\n    convolution_layer_5 = convolution_layer(convolution_layer_4, 32)\n\n    convolution_layer_5_flattened = tf.layers.flatten(convolution_layer_5)\n    bottleneck_layer = fully_connected_layer(convolution_layer_5_flattened, 16)\n    c5_shape = convolution_layer_5.get_shape().as_list()\n    c5f_flat_shape = convolution_layer_5_flattened.get_shape().as_list()[1]\n    fully_connected = fully_connected_layer(bottleneck_layer, c5f_flat_shape)\n    fully_connected = tf.reshape(fully_connected,\n                                 [-1, c5_shape[1], c5_shape[2], c5_shape[3]])\n\n    deconvolution_layer_1 = deconvolution_layer(fully_connected, 128)\n    deconvolution_layer_2 = deconvolution_layer(deconvolution_layer_1, 256)\n    deconvolution_layer_3 = deconvolution_layer(deconvolution_layer_2, 512)\n    deconvolution_layer_4 = deconvolution_layer(deconvolution_layer_3, 1024)\n    deconvolution_layer_5 = deconvolution_layer(deconvolution_layer_4, 3,\n                                                activation=tf.nn.tanh)\n    return deconvolution_layer_5\n\ngan = tf.contrib.gan.gan_model(\n    get_autoencoder,\n    get_discriminator,\n    
real_images,\n    real_images)\n\nloss = tf.contrib.gan.gan_loss(\n    gan, gradient_penalty=1.0)\n\nl1_pixel_loss = tf.norm(gan.real_data - gan.generated_data, ord=1)\n\nloss = tf.contrib.gan.losses.combine_adversarial_loss(\n    loss, gan, l1_pixel_loss, weight_factor=1)"
  },
  {
    "path": "Chapter08/5_image_translation.py",
    "content": "import tensorflow as tf\nbatch_size = 32\ninput_dimension = [227, 227]\nreal_images = None\nlabels = None\ninput_images = None\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer,\n                      filters,\n                      kernel_size=[4, 4],\n                      activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef transpose_convolution_layer(input_layer,\n                                filters,\n                                kernel_size=[4, 4],\n                                activation=tf.nn.relu,\n                                strides=2):\n    layer = tf.layers.conv2d_transpose(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        strides=strides,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\ndef pooling_layer(input_layer,\n                  pool_size=[2, 2],\n                  strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        
strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer,\n                units,\n                activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_generator(input_noise, is_training=True):\n    generator = dense_layer(input_noise, 1024)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = dense_layer(generator, 7 * 7 * 256)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = tf.reshape(generator,  [-1, 7, 7, 256])\n    generator = transpose_convolution_layer(generator, 64)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = transpose_convolution_layer(generator, 32)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = convolution_layer(generator, 3)\n    generator = convolution_layer(generator, 1, activation=tf.nn.tanh)\n    print(generator)\n    return generator\n\n\ndef get_discriminator(image, is_training=True):\n    x_input_reshape = tf.reshape(image, [-1, 28, 28, 1],\n                                 name='input_reshape')\n    discriminator = convolution_layer(x_input_reshape, 64)\n    discriminator = convolution_layer(discriminator, 128)\n    discriminator = tf.layers.flatten(discriminator)\n    discriminator = dense_layer(discriminator, 1024)\n    discriminator = tf.layers.batch_normalization(discriminator, training=is_training)\n    discriminator = dense_layer(discriminator, 2)\n    return discriminator\n\n\ngan = tf.contrib.gan.gan_model(\n    get_generator,\n    get_discriminator,\n    real_images,\n    input_images)\n\nloss = tf.contrib.gan.gan_loss(\n    gan,\n    tf.contrib.gan.losses.least_squares_generator_loss,\n    
tf.contrib.gan.losses.least_squares_discriminator_loss)\n\nl1_loss = tf.norm(gan.real_data - gan.generated_data, ord=1)\n\ngan_loss = tf.contrib.gan.losses.combine_adversarial_loss(\n    loss, gan, l1_loss, weight_factor=1)\n"
  },
  {
    "path": "Chapter08/6_infogan.py",
    "content": "import tensorflow as tf\nbatch_size = 32\ninput_dimension = [227, 227]\nreal_images = None\nlabels = None\nunstructured_input = None\nstructured_input = None\n\ndef add_variable_summary(tf_variable, summary_name):\n  with tf.name_scope(summary_name + '_summary'):\n    mean = tf.reduce_mean(tf_variable)\n    tf.summary.scalar('Mean', mean)\n    with tf.name_scope('standard_deviation'):\n        standard_deviation = tf.sqrt(tf.reduce_mean(\n            tf.square(tf_variable - mean)))\n    tf.summary.scalar('StandardDeviation', standard_deviation)\n    tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n    tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n    tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer,\n                      filters,\n                      kernel_size=[4, 4],\n                      activation=tf.nn.leaky_relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef transpose_convolution_layer(input_layer,\n                                filters,\n                                kernel_size=[4, 4],\n                                activation=tf.nn.relu,\n                                strides=2):\n    layer = tf.layers.conv2d_transpose(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation,\n        strides=strides,\n        kernel_regularizer=tf.nn.l2_loss,\n        bias_regularizer=tf.nn.l2_loss,\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\ndef pooling_layer(input_layer,\n                  pool_size=[2, 2],\n                  strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        
pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer,\n                units,\n                activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_generator(input_noise, is_training=True):\n    generator = dense_layer(input_noise, 1024)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = dense_layer(generator, 7 * 7 * 256)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = tf.reshape(generator,  [-1, 7, 7, 256])\n    generator = transpose_convolution_layer(generator, 64)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = transpose_convolution_layer(generator, 32)\n    generator = tf.layers.batch_normalization(generator, training=is_training)\n    generator = convolution_layer(generator, 3)\n    generator = convolution_layer(generator, 1, activation=tf.nn.tanh)\n    print(generator)\n    return generator\n\n\ndef get_discriminator(image, is_training=True):\n    x_input_reshape = tf.reshape(image, [-1, 28, 28, 1],\n                                 name='input_reshape')\n    discriminator = convolution_layer(x_input_reshape, 64)\n    discriminator = convolution_layer(discriminator, 128)\n    discriminator = tf.layers.flatten(discriminator)\n    discriminator = dense_layer(discriminator, 1024)\n    discriminator = tf.layers.batch_normalization(discriminator, training=is_training)\n    discriminator = dense_layer(discriminator, 2)\n    return discriminator\n\n\ninfo_gan = tf.contrib.gan.infogan_model(\n    get_generator,\n    get_discriminator,\n    real_images,\n    unstructured_input,\n    structured_input)\n\nloss = tf.contrib.gan.gan_loss(\n    info_gan,\n    
gradient_penalty_weight=1,\n    gradient_penalty_epsilon=1e-10,\n    mutual_information_penalty_weight=1)\n"
  },
  {
    "path": "Chapter08/utils.py",
    "content": "import math, keras, datetime, pandas as pd, numpy as np, keras.backend as K, threading, json, re, collections\nimport tarfile, tensorflow as tf, matplotlib.pyplot as plt, xgboost, operator, random, pickle, glob, os, bcolz\nimport shutil, sklearn, functools, itertools, scipy\nfrom PIL import Image\nfrom concurrent.futures import ProcessPoolExecutor, as_completed, ThreadPoolExecutor\nimport matplotlib.patheffects as PathEffects\nfrom sklearn.preprocessing import LabelEncoder, StandardScaler\nfrom sklearn.neighbors import NearestNeighbors, LSHForest\nimport IPython\nfrom IPython.display import display, Audio\nfrom numpy.random import normal\nfrom gensim.models import word2vec\nfrom keras.preprocessing.text import Tokenizer\nfrom nltk.tokenize import ToktokTokenizer, StanfordTokenizer\nfrom functools import reduce\nfrom itertools import chain\n\nfrom tensorflow.python.framework import ops\n#from tensorflow.contrib import rnn, legacy_seq2seq as seq2seq\n\nfrom keras_tqdm import TQDMNotebookCallback\nfrom keras import initializations\nfrom keras.applications.resnet50 import ResNet50, decode_predictions, conv_block, identity_block\nfrom keras.applications.vgg16 import VGG16\nfrom keras.preprocessing import image\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.models import Model, Sequential\nfrom keras.layers import *\nfrom keras.optimizers import Adam\nfrom keras.regularizers import l2\nfrom keras.utils.data_utils import get_file\nfrom keras.applications.imagenet_utils import decode_predictions, preprocess_input\n\n\nnp.set_printoptions(threshold=50, edgeitems=20)\ndef beep(): return Audio(filename='/home/jhoward/beep.mp3', autoplay=True)\ndef dump(obj, fname): pickle.dump(obj, open(fname, 'wb'))\ndef load(fname): return pickle.load(open(fname, 'rb'))\n\n\ndef limit_mem():\n    K.get_session().close()\n    cfg = K.tf.ConfigProto()\n    cfg.gpu_options.allow_growth = True\n    K.set_session(K.tf.Session(config=cfg))\n\n\ndef 
autolabel(plt, fmt='%.2f'):\n    rects = plt.patches\n    ax = rects[0].axes\n    y_bottom, y_top = ax.get_ylim()\n    y_height = y_top - y_bottom\n    for rect in rects:\n        height = rect.get_height()\n        if height / y_height > 0.95:\n            label_position = height - (y_height * 0.06)\n        else:\n            label_position = height + (y_height * 0.01)\n        txt = ax.text(rect.get_x() + rect.get_width()/2., label_position,\n                fmt % height, ha='center', va='bottom')\n        txt.set_path_effects([PathEffects.withStroke(linewidth=3, foreground='w')])\n\n\ndef column_chart(lbls, vals, val_lbls='%.2f'):\n    n = len(lbls)\n    p = plt.bar(np.arange(n), vals)\n    plt.xticks(np.arange(n), lbls)\n    if val_lbls: autolabel(p, val_lbls)\n\n\ndef save_array(fname, arr):\n    c=bcolz.carray(arr, rootdir=fname, mode='w')\n    c.flush()\n\n\ndef load_array(fname): return bcolz.open(fname)[:]\n\n\ndef load_glove(loc):\n    return (load_array(loc+'.dat'),\n        pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'),\n        pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1'))\n\ndef plot_multi(im, dim=(4,4), figsize=(6,6), **kwargs ):\n    plt.figure(figsize=figsize)\n    for i,img in enumerate(im):\n        plt.subplot(*((dim)+(i+1,)))\n        plt.imshow(img, **kwargs)\n        plt.axis('off')\n    plt.tight_layout()\n\n\ndef plot_train(hist):\n    h = hist.history\n    if 'acc' in h:\n        meas='acc'\n        loc='lower right'\n    else:\n        meas='loss'\n        loc='upper right'\n    plt.plot(hist.history[meas])\n    plt.plot(hist.history['val_'+meas])\n    plt.title('model '+meas)\n    plt.ylabel(meas)\n    plt.xlabel('epoch')\n    plt.legend(['train', 'validation'], loc=loc)\n\n\ndef fit_gen(gen, fn, eval_fn, nb_iter):\n    for i in range(nb_iter):\n        fn(*next(gen))\n        if i % (nb_iter//10) == 0: eval_fn()\n\n\ndef wrap_config(layer):\n    return {'class_name': layer.__class__.__name__, 'config': 
layer.get_config()}\n\n\ndef copy_layer(layer): return layer_from_config(wrap_config(layer))\n\n\ndef copy_layers(layers): return [copy_layer(layer) for layer in layers]\n\n\ndef copy_weights(from_layers, to_layers):\n    for from_layer,to_layer in zip(from_layers, to_layers):\n        to_layer.set_weights(from_layer.get_weights())\n\n\ndef copy_model(m):\n    res = Sequential(copy_layers(m.layers))\n    copy_weights(m.layers, res.layers)\n    return res\n\n\ndef insert_layer(model, new_layer, index):\n    res = Sequential()\n    for i,layer in enumerate(model.layers):\n        if i==index: res.add(new_layer)\n        copied = layer_from_config(wrap_config(layer))\n        res.add(copied)\n        copied.set_weights(layer.get_weights())\n    return res\n"
  },
  {
    "path": "Chapter08/vgg16_avg.py",
    "content": "from __future__ import print_function\nfrom __future__ import absolute_import\n\nimport warnings\n\nfrom keras.models import Model\nfrom keras.layers import Flatten, Dense, Input\nfrom keras.layers import Convolution2D, AveragePooling2D\nfrom keras.engine.topology import get_source_inputs\nfrom keras.utils.layer_utils import convert_all_kernels_in_model\nfrom keras.utils.data_utils import get_file\nfrom keras import backend as K\nfrom keras.applications.imagenet_utils import _obtain_input_shape\n\n\nTH_WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_th_dim_ordering_th_kernels.h5'\nTF_WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'\nTH_WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_th_dim_ordering_th_kernels_notop.h5'\nTF_WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'\n\n\ndef VGG16_Avg(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, classes=1000):\n    if weights not in {'imagenet', None}:\n        raise ValueError('The `weights` argument should be either '\n                         '`None` (random initialization) or `imagenet` '\n                         '(pre-training on ImageNet).')\n\n    if weights == 'imagenet' and include_top and classes != 1000:\n        raise ValueError('If using `weights` as imagenet with `include_top`'\n                         ' as true, `classes` should be 1000')\n    # Determine proper input shape\n    input_shape = _obtain_input_shape(input_shape,\n                                      default_size=224,\n                                      min_size=48,\n                                      dim_ordering=K.image_dim_ordering(),\n                                      include_top=include_top)\n\n   
 if input_tensor is None:\n        img_input = Input(shape=input_shape)\n    else:\n        if not K.is_keras_tensor(input_tensor):\n            img_input = Input(tensor=input_tensor, shape=input_shape)\n        else:\n            img_input = input_tensor\n    # Block 1\n    x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv1')(img_input)\n    x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv2')(x)\n    x = AveragePooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)\n\n    # Block 2\n    x = Convolution2D(128, 3, 3, activation='relu', border_mode='same', name='block2_conv1')(x)\n    x = Convolution2D(128, 3, 3, activation='relu', border_mode='same', name='block2_conv2')(x)\n    x = AveragePooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)\n\n    # Block 3\n    x = Convolution2D(256, 3, 3, activation='relu', border_mode='same', name='block3_conv1')(x)\n    x = Convolution2D(256, 3, 3, activation='relu', border_mode='same', name='block3_conv2')(x)\n    x = Convolution2D(256, 3, 3, activation='relu', border_mode='same', name='block3_conv3')(x)\n    x = AveragePooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)\n\n    # Block 4\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block4_conv1')(x)\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block4_conv2')(x)\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block4_conv3')(x)\n    x = AveragePooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)\n\n    # Block 5\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block5_conv1')(x)\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block5_conv2')(x)\n    x = Convolution2D(512, 3, 3, activation='relu', border_mode='same', name='block5_conv3')(x)\n    x = AveragePooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)\n\n    if include_top:\n     
   # Classification block\n        x = Flatten(name='flatten')(x)\n        x = Dense(4096, activation='relu', name='fc1')(x)\n        x = Dense(4096, activation='relu', name='fc2')(x)\n        x = Dense(classes, activation='softmax', name='predictions')(x)\n\n    # Ensure that the model takes into account\n    # any potential predecessors of `input_tensor`.\n    if input_tensor is not None:\n        inputs = get_source_inputs(input_tensor)\n    else:\n        inputs = img_input\n    # Create model.\n    model = Model(inputs, x, name='vgg16')\n\n    # load weights\n    if weights == 'imagenet':\n        if K.image_dim_ordering() == 'th':\n            if include_top:\n                weights_path = get_file('vgg16_weights_th_dim_ordering_th_kernels.h5',\n                                        TH_WEIGHTS_PATH,\n                                        cache_subdir='models')\n            else:\n                weights_path = get_file('vgg16_weights_th_dim_ordering_th_kernels_notop.h5',\n                                        TH_WEIGHTS_PATH_NO_TOP,\n                                        cache_subdir='models')\n            model.load_weights(weights_path)\n            if K.backend() == 'tensorflow':\n                warnings.warn('You are using the TensorFlow backend, yet you '\n                              'are using the Theano '\n                              'image dimension ordering convention '\n                              '(`image_dim_ordering=\"th\"`). 
'\n                              'For best performance, set '\n                              '`image_dim_ordering=\"tf\"` in '\n                              'your Keras config '\n                              'at ~/.keras/keras.json.')\n                convert_all_kernels_in_model(model)\n        else:\n            if include_top:\n                weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels.h5',\n                                        TF_WEIGHTS_PATH,\n                                        cache_subdir='models')\n            else:\n                weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',\n                                        TF_WEIGHTS_PATH_NO_TOP,\n                                        cache_subdir='models')\n            model.load_weights(weights_path)\n            if K.backend() == 'theano':\n                convert_all_kernels_in_model(model)\n    return model\n"
  },
  {
    "path": "Chapter09/1_video_to_frames_1.py",
    "content": "import cv2\nvideo_path = '/Users/i335713/Desktop/epat/lecture recordings and live lectures/batch35epat  (batch 35) Lecture Recordings  Live Lec  Additional Lecture on Machine Learning (.mp4'\nvideo_handle = cv2.VideoCapture(video_path)\nframe_no = 0\nwhile True:\n  eof, frame = video_handle.read()\n  if not eof:\n      break\n  cv2.imwrite(\"frame%d.jpg\" % frame_no, frame)\n  frame_no += 1"
  },
  {
    "path": "Chapter09/2_parallel_stream.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ninput_size = 784\nno_classes = 10\nbatch_size = 100\ntotal_batches = 300\n\n\ndef add_variable_summary(tf_variable, summary_name):\n    with tf.name_scope(summary_name + '_summary'):\n        mean = tf.reduce_mean(tf_variable)\n        tf.summary.scalar('Mean', mean)\n        with tf.name_scope('standard_deviation'):\n            standard_deviation = tf.sqrt(tf.reduce_mean(\n                tf.square(tf_variable - mean)))\n        tf.summary.scalar('StandardDeviation', standard_deviation)\n        tf.summary.scalar('Maximum', tf.reduce_max(tf_variable))\n        tf.summary.scalar('Minimum', tf.reduce_min(tf_variable))\n        tf.summary.histogram('Histogram', tf_variable)\n\n\ndef convolution_layer(input_layer, filters, kernel_size=[3, 3],\n                      activation=tf.nn.relu):\n    layer = tf.layers.conv2d(\n        inputs=input_layer,\n        filters=filters,\n        kernel_size=kernel_size,\n        activation=activation\n    )\n    add_variable_summary(layer, 'convolution')\n    return layer\n\n\ndef pooling_layer(input_layer, pool_size=[2, 2], strides=2):\n    layer = tf.layers.max_pooling2d(\n        inputs=input_layer,\n        pool_size=pool_size,\n        strides=strides\n    )\n    add_variable_summary(layer, 'pooling')\n    return layer\n\n\ndef dense_layer(input_layer, units, activation=tf.nn.relu):\n    layer = tf.layers.dense(\n        inputs=input_layer,\n        units=units,\n        activation=activation\n    )\n    add_variable_summary(layer, 'dense')\n    return layer\n\n\ndef get_model(input_):\n    input_reshape = tf.reshape(input_, [-1, 28, 28, 1],\n                               name='input_reshape')\n    convolution_layer_1 = convolution_layer(input_reshape, 64)\n    pooling_layer_1 = pooling_layer(convolution_layer_1)\n    convolution_layer_2 = 
convolution_layer(pooling_layer_1, 128)\n    pooling_layer_2 = pooling_layer(convolution_layer_2)\n    flattened_pool = tf.reshape(pooling_layer_2, [-1, 5 * 5 * 128],\n                                name='flattened_pool')\n    return flattened_pool\n\n\nhigh_resolution_input = tf.placeholder(tf.float32, shape=[None, input_size])\nlow_resolution_input = tf.placeholder(tf.float32, shape=[None, input_size])\ny_input = tf.placeholder(tf.float32, shape=[None, no_classes])\nhigh_resolution_cnn = get_model(high_resolution_input)\nlow_resolution_cnn = get_model(low_resolution_input)\ndense_layer_1 = tf.concat([high_resolution_cnn, low_resolution_cnn], 1)\ndense_layer_bottleneck = dense_layer(dense_layer_1, 1024)\nlogits = dense_layer(dense_layer_bottleneck, no_classes)\n\nwith tf.name_scope('loss'):\n    softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(\n        labels=y_input, logits=logits)\n    loss_operation = tf.reduce_mean(softmax_cross_entropy, name='loss')\n    tf.summary.scalar('loss', loss_operation)\n\nwith tf.name_scope('optimiser'):\n    optimiser = tf.train.AdamOptimizer().minimize(loss_operation)\n\nwith tf.name_scope('accuracy'):\n    with tf.name_scope('correct_prediction'):\n        predictions = tf.argmax(logits, 1)\n        correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))\n    with tf.name_scope('accuracy'):\n        accuracy_operation = tf.reduce_mean(\n            tf.cast(correct_predictions, tf.float32))\ntf.summary.scalar('accuracy', accuracy_operation)\n\nsession = tf.Session()\nsession.run(tf.global_variables_initializer())\n\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train', session.graph)\ntest_summary_writer = tf.summary.FileWriter('/tmp/test')\n\ntest_images, test_labels = mnist_data.test.images, mnist_data.test.labels\n\nfor batch_no in range(total_batches):\n    mnist_batch = mnist_data.train.next_batch(batch_size)\n    train_images, train_labels = 
mnist_batch[0], mnist_batch[1]\n    _, merged_summary = session.run([optimiser, merged_summary_operation],\n                                    feed_dict={\n                                        high_resolution_input: train_images,\n                                        low_resolution_input: train_images,\n                                        y_input: train_labels\n                                    })\n    train_summary_writer.add_summary(merged_summary, batch_no)\n    if batch_no % 10 == 0:\n        merged_summary, _ = session.run([merged_summary_operation,\n                                         accuracy_operation], feed_dict={\n            high_resolution_input: test_images,\n            low_resolution_input: test_images,\n            y_input: test_labels\n        })\n        test_summary_writer.add_summary(merged_summary, batch_no)\n"
  },
  {
    "path": "Chapter09/3_lstm_after_cnn.py",
    "content": "import tensorflow as tf\ninput_shape = [500,500]\nno_classes = 2\n\nnet = tf.keras.models.Sequential()\nnet.add(tf.keras.layers.LSTM(2048,\n             return_sequences=False,\n             input_shape=input_shape,\n             dropout=0.5))\nnet.add(tf.keras.layers.Dense(512, activation='relu'))\nnet.add(tf.keras.layers.Dropout(0.5))\nnet.add(tf.keras.layers.Dense(no_classes, activation='softmax'))"
  },
  {
    "path": "Chapter09/4_3d_convolution.py",
    "content": "import tensorflow as tf\ninput_shape = 227, 227, 200, 3\nno_classes = 2\n\nnet = tf.keras.models.Sequential()\nnet.add(tf.keras.layers.Conv3D(32,\n                               kernel_size=(3, 3, 3),\n                               input_shape=(input_shape)))\nnet.add(tf.keras.layers.Activation('relu'))\nnet.add(tf.keras.layers.Conv3D(32, (3, 3, 3)))\nnet.add(tf.keras.layers.Activation('softmax'))\nnet.add(tf.keras.layers.MaxPooling3D())\nnet.add(tf.keras.layers.Dropout(0.25))\n\nnet.add(tf.keras.layers.Conv3D(64, (3, 3, 3)))\nnet.add(tf.keras.layers.Activation('relu'))\nnet.add(tf.keras.layers.Conv3D(64, (3, 3, 3)))\nnet.add(tf.keras.layers.Activation('softmax'))\nnet.add(tf.keras.layers.MaxPool3D())\nnet.add(tf.keras.layers.Dropout(0.25))\n\nnet.add(tf.keras.layers.Flatten())\nnet.add(tf.keras.layers.Dense(512, activation='sigmoid'))\nnet.add(tf.keras.layers.Dropout(0.5))\nnet.add(tf.keras.layers.Dense(no_classes, activation='softmax'))\nnet.compile(loss=tf.keras.losses.categorical_crossentropy,\n            optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])"
  },
  {
    "path": "Chapter10/1_ios.py",
    "content": "import tfcoreml as tf_converter\ntf_converter.convert(tf_model_path='tf_model_path.pb',\n                     mlmodel_path='mlmodel_path.mlmodel',\n                     output_feature_names=['softmax:0'],\n                     input_name_shape_dict={'input:0': [1, 227, 227, 3]})"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Packt\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "\n\n\n# Deep-Learning-for-Computer-Vision\nCode repository for Deep Learning for Computer Vision, by Packt\n\nThis is the code repository for [Deep Learning for Computer Vision](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-computer-vision?utm_source=github&utm_medium=repository&utm_campaign=9781788295628), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.\n## About the Book\nDeep learning has shown its power in several application areas of Artificial Intelligence, especially in Computer Vision. Computer Vision is the science of understanding and manipulating images, and finds enormous applications in the areas of robotics, automation, and so on. This book will also show you, with practical examples, how to develop Computer Vision applications by leveraging the power of deep learning.\n\n\n## Instructions and Navigation\nAll of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.\n\n\n\nThe code will look like the following:\n```\nmerged_summary_operation = tf.summary.merge_all()\ntrain_summary_writer = tf.summary.FileWriter('/tmp/train' , session.graph)\ntest_summary_writer = tf.summary.FileWriter('/tmp/test' )\n```\n\nThe examples covered in this book can be run with Windows, Ubuntu, or Mac. All the installation instructions are covered. Basic knowledge of Python and machine learning is required. 
It's preferable that the reader has GPU hardware but it's not necessary\n\n## Related Products\n* [Deep Learning with Keras](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-keras?utm_source=github&utm_medium=repository&utm_campaign=9781787128422)\n\n* [TensorFlow 1.x Deep Learning Cookbook](https://www.packtpub.com/big-data-and-business-intelligence/tensorflow-1x-deep-learning-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781788293594)\n\n* [Deep Learning with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow?utm_source=github&utm_medium=repository&utm_campaign=9781786469786)\n\n### Download a free PDF\n\n <i>If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.<br>Simply click on the link to claim your free PDF.</i>\n<p align=\"center\"> <a href=\"https://packt.link/free-ebook/9781788295628\">https://packt.link/free-ebook/9781788295628 </a> </p>"
  }
]