[
  {
    "path": ".github/FUNDING.yml",
    "content": "patreon: theaiepiphany\n"
  },
  {
    "path": ".gitignore",
    "content": ".idea\n__pycache__\n\n.ipynb_checkpoints\n\nruns\n\nmodels/binaries\nmodels/checkpoints\n\ndata/interpolated_imagery\ndata/generated_imagery\ndata/debug_imagery\ndata/MNIST\ndata/CelebA"
  },
  {
    "path": "LICENCE",
    "content": "MIT License\n\nCopyright (c) 2020 Aleksa Gordić\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "README.md",
    "content": "## PyTorch GANs :computer: vs :computer: = :heart:\nThis repo contains PyTorch implementation of various GAN architectures. <br/>\nIt's aimed at making it **easy for beginners** to start playing and learning about GANs. <br/>\n\nAll of the repos I found do obscure things like setting bias in some network layer to `False` without explaining <br/>\nwhy certain design decisions were made. This repo makes **every design decision transparent.**\n\n## Table of Contents\n  * [What are GANs?](#what-are-gans)\n  * [Setup](#setup)\n  * [Implementations](#implementations)\n    + [Vanilla GAN](#vanilla-gan)\n    + [Conditional GAN](#conditional-gan)\n    + [DCGAN](#dcgan)\n\n## What are GANs?\n\nGANs were originally proposed by Ian Goodfellow et al. in a seminal paper called [Generative Adversarial Nets](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf).\n\nGANs are a framework where 2 models (usually neural networks), called generator (G) and discriminator (D), play a **minimax game** against each other.\nThe generator is trying to **learn the distribution of real data** and is the network which we're usually interested in.\nDuring the game the goal of the generator is to trick the discriminator into \"thinking\" that the data it generates is real.\nThe goal of the discriminator, on the other hand, is to correctly discriminate between the generated (fake) images and real images coming from some dataset (e.g. MNIST).\n\n## Setup\n\n1. `git clone https://github.com/gordicaleksa/pytorch-gans`\n2. Open Anaconda console and navigate into project directory `cd path_to_repo`\n3. Run `conda env create` from project directory (this will create a brand new conda environment).\n4. Run `activate pytorch-gans` (for running scripts from your console or set the interpreter in your IDE)\n\nThat's it! 
It should work out of the box, using the `environment.yml` file which deals with the dependencies.\n\n-----\n\nThe PyTorch package will pull some version of CUDA with it, but it is highly recommended that you install system-wide CUDA beforehand, mostly because of GPU drivers. I also recommend using the Miniconda installer as a way to get conda on your system. \n\nFollow through points 1 and 2 of [this setup](https://github.com/Petlja/PSIML/blob/master/docs/MachineSetup.md) and use the most up-to-date versions of Miniconda and CUDA/cuDNN.\n\n## Implementations\n\nImportant note: you don't need to train the GANs to use this project, as I've checked in pre-trained models. <br/>\nYou can just use the `generate_imagery.py` script to play with the models.\n\n## Vanilla GAN\n\nVanilla GAN is my implementation of the [original GAN paper (Goodfellow et al.)](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) with certain modifications, mostly in the model architecture,\nlike the usage of LeakyReLU and 1D batch normalization (it didn't even exist back then) instead of the maxout activation and dropout.\n\n### Examples\n\nThe GAN was trained on data from the MNIST dataset. Here is what the digits from the dataset look like:\n\n<p align=\"center\">\n<img src=\"data/examples/real_samples/mnist.jpg\" width=\"850\"/>\n</p>\n\nYou can see how the network is slowly learning to capture the data distribution during training:\n\n<p align=\"center\">\n<img src=\"data/examples/training_progress/training_progress_vgan.gif\" />\n</p>\n\nAfter the generator is trained we can use it to generate all 10 digits! 
Looks like it's coming directly from MNIST, right!?\n\n<p align=\"center\">\n<img src=\"data/examples/generated_samples/generated_vgan.jpg\" width=\"850\"/>\n</p>\n\nWe can also pick 2 generated numbers that we like, save their latent vectors, and subsequently [linearly](https://en.wikipedia.org/wiki/Linear_interpolation) or [spherically](https://en.wikipedia.org/wiki/Slerp)<br/>\ninterpolate between them to generate new images and understand how the latent space (z-space) is structured:\n\n<p align=\"center\">\n<img src=\"data/examples/interpolation/vgan_interpolated.jpg\" width=\"850\"/>\n</p>\n\nWe can see how the number 4 is slowly morphing into 9 and then into the number 3. <br/>\n\nThe idea behind spherical interpolation is super easy - instead of moving over the shortest possible path<br/>\n(a line, i.e. linear interpolation) from the first vector (p0) to the second (p1), you take the sphere's arc path: \n\n<p align=\"center\">\n<img src=\"data/examples/interpolation/slerp.png\" width=\"330\"/>\n</p>\n\n### Usage\n\n#### Option 1: Jupyter Notebook\n\nJust run `jupyter notebook` from your Anaconda console and it will open the session in your default browser. <br/>\nOpen `Vanilla GAN (PyTorch).ipynb` and you're ready to play! <br/>\n\nIf you created the env before I added Jupyter, just do `pip install jupyter==1.0.0` and you're ready.\n\n---\n\n**Note:** if you get `DLL load failed while importing win32api: The specified module could not be found` <br/>\njust do `pip uninstall pywin32` and then either `pip install pywin32` or `conda install pywin32` - that [should fix it](https://github.com/jupyter/notebook/issues/4980)!\n\n#### Option 2: Use your IDE of choice\n\n#### Training\n\nIt's really easy to kick off a new training, just run this: <br/>\n`python train_vanilla_gan.py --batch_size <number which won't break your GPU's VRAM>`\n\nThe code is well commented so you can understand exactly how the training itself works. 
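For orientation, the adversarial update at the heart of such a training script can be sketched as follows (a minimal illustration, not the repo's actual code - the tiny linear `generator`/`discriminator` and the batch of random `real_images` are placeholders):

```python
import torch
from torch import nn

batch_size, latent_dim = 4, 100
generator = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(28 * 28, 1), nn.Sigmoid())
adversarial_loss = nn.BCELoss()
real_labels = torch.ones(batch_size, 1)
fake_labels = torch.zeros(batch_size, 1)

real_images = torch.rand(batch_size, 28 * 28)  # stand-in for a real MNIST batch
noise = torch.randn(batch_size, latent_dim)
fake_images = generator(noise)

# Discriminator step: push D(real) towards 1 and D(fake) towards 0
# (detach() so this loss doesn't backprop into the generator)
d_loss = (adversarial_loss(discriminator(real_images), real_labels)
          + adversarial_loss(discriminator(fake_images.detach()), fake_labels))

# Generator step: push D(fake) towards 1, i.e. try to fool the discriminator
g_loss = adversarial_loss(discriminator(fake_images), real_labels)
print(d_loss.item(), g_loss.item())
```

In a real loop each loss would be followed by `zero_grad()`, `backward()` and an optimizer step for the corresponding network.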
<br/>\n\nThe script will:\n* Dump checkpoint *.pth models into `models/checkpoints/`\n* Dump the final *.pth model into `models/binaries/`\n* Dump intermediate generated imagery into `data/debug_imagery/`\n* Download MNIST (~100 MB) the first time you run it and place it into `data/MNIST/`\n* Dump tensorboard data into `runs/`, just run `tensorboard --logdir=runs` from your Anaconda console\n\nAnd that's it, you can track the training both visually (dumped imagery) and through G's and D's loss progress.\n\n<p align=\"center\">\n<img src=\"data/examples/intermediate_imagery.PNG\" height=\"250\"/>\n<img src=\"data/examples/losses.PNG\" height=\"250\"/>\n</p>\n\nTracking the loss can be helpful but I mostly relied on visually analyzing the intermediate imagery. <br/>\n\nNote1: also make sure to check out the **playground.py** file if you're having problems understanding adversarial loss.<br/>\nNote2: Images are dumped both to the file system (`data/debug_imagery/`) and to tensorboard.\n\n#### Generating imagery and interpolating\n\nTo generate a single image just run the script with defaults: <br/>\n`python generate_imagery.py`\n\nIt will display and dump the generated image into `data/generated_imagery/` using the checked-in generator model. <br/>\n\nMake sure to change the `--model_name` param to your model's name (once you train your own model). <br/>\n\n-----\n\nIf you want to play with interpolation, just set the `--generation_mode` to `GenerationMode.INTERPOLATION`. 
<br/>\nAnd optionally set `--slerp` to true if you want to use spherical interpolation.\n\nThe first time you run it in this mode the script will start generating images, <br/>\nand ask you to pick 2 images you like by entering `'y'` into the console.\n\nFinally it will start displaying the interpolated imagery and dump the results to `data/interpolated_imagery`.\n\n## Conditional GAN\n\nConditional GAN (cGAN) is my implementation of the [cGAN paper (Mirza et al.)](https://arxiv.org/pdf/1411.1784.pdf).<br/>\nIt basically just adds conditioning vectors (one-hot encodings of the digit labels) to the vanilla GAN above.\n\n### Examples\n\nIn addition to everything that we could do with the original GAN, here we can exactly control which digit we want to generate!\nWe make it dump a 10x10 grid where each column is a single digit and this is how the learning proceeds:\n\n<p align=\"center\">\n<img src=\"data/examples/training_progress/training_progress_cgan.gif\" />\n</p>\n\n### Usage\n\nFor training just check out [vanilla GAN](#training) (just make sure to use `train_cgan.py` instead).\n\n#### Generating imagery\n\nSame as for [vanilla GAN](#generating-imagery-and-interpolating) but you can additionally set `cgan_digit` to a number between 0 and 9 to generate that exact digit!\nThere is no cGAN-specific interpolation support, it's the same as for vanilla GAN - feel free to use that.\n\nNote: make sure to set `--model_name` to either `CGAN_000000.pth` (pre-trained and checked-in) or your own model.\n\n## DCGAN\n\nDCGAN is my implementation of the [DCGAN paper (Radford et al.)](https://arxiv.org/pdf/1511.06434.pdf).<br/>\nThe main contribution of the paper was that they were the first to make CNNs work successfully in the GAN setup.<br/>\nBatch normalization had been invented in the meantime, and that's basically what got CNNs to work.\n\n### Examples\n\nI trained DCGAN on the preprocessed [CelebA dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). 
Here are some samples from the dataset:\n\n<p align=\"center\">\n<img src=\"data/examples/real_samples/celeba.jpg\" width=\"850\"/>\n</p>\n\nAgain, you can see how the network is slowly learning to capture the data distribution during training:\n\n<p align=\"center\">\n<img src=\"data/examples/training_progress/training_progress_dcgan.gif\" />\n</p>\n\nAfter the generator is trained we can use it to generate new faces! This problem is much harder than generating MNIST digits,\nso the generated faces are not indistinguishable from the real ones.\n\n<p align=\"center\">\n<img src=\"data/examples/generated_samples/generated_dcgan.jpg\" width=\"850\"/>\n</p>\n\nSome SOTA GAN papers did a much better job at generating faces; currently the best model is [StyleGAN2](https://github.com/NVlabs/stylegan2).\n\nSimilarly we can explore the structure of the latent space via interpolations:\n\n<p align=\"center\">\n<img src=\"data/examples/interpolation/dcgan_interpolated.jpg\" width=\"850\"/>\n</p>\n\nWe can see how the man's face is slowly morphing into a woman's face, and the skin tone is also changing gradually.\n\nFinally, because the latent space has some nice properties (linear structure) we can do some interesting things.<br/>\nSubtracting a neutral woman's latent vector from a smiling woman's latent vector gives us the \"smile vector\". <br/>\nAdding that vector to a neutral man's latent vector, we hopefully get a smiling man's latent vector. And so we do!\n\n<p align=\"center\">\n<img src=\"data/examples/vector_arithmetic/vector_arithmetic.jpg\" />\n</p>\n\nYou can also create the \"sunglasses vector\" and use it to add sunglasses to other faces, etc.\n\n*Note: I've created an interactive script so you can play with this - check out `GenerationMode.VECTOR_ARITHMETIC`.*\n\n### Usage\n\nFor training just check out [vanilla GAN](#training) (just make sure to use `train_dcgan.py` instead). 
<br/>\nThe only difference is that this script will download the [pre-processed CelebA dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip) instead of MNIST.\n\n#### Generating imagery\n\nAgain just use the `generate_imagery.py` script.\n\nYou have 3 options; you can set the `generation_mode` to:\n* `GenerationMode.SINGLE_IMAGE` <- generate a single face image\n* `GenerationMode.INTERPOLATION` <- pick 2 face images you like and the script will interpolate between them\n* `GenerationMode.VECTOR_ARITHMETIC` <- pick 9 images and the script will do vector arithmetic\n\n`GenerationMode.VECTOR_ARITHMETIC` will give you an **interactive matplotlib plot** to pick the 9 images.\n\nNote: make sure to set `--model_name` to either `DCGAN_000000.pth` (pre-trained and checked-in) or your own model.\n\n## Acknowledgements\n\nI found these repos useful (while developing this one):\n* [gans](https://github.com/diegoalejogm/gans) (PyTorch & TensorFlow)\n* [PyTorch-GAN](https://github.com/eriklindernoren/PyTorch-GAN) (PyTorch)\n\n## Citation\n\nIf you find this code useful for your research, please cite the following:\n\n```\n@misc{Gordić2020PyTorchGANs,\n  author = {Gordić, Aleksa},\n  title = {pytorch-gans},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https://github.com/gordicaleksa/pytorch-gans}},\n}\n```\n\n## Connect with me\n\nIf you'd love to have some more AI-related content in your life :nerd_face:, consider:\n* Subscribing to my YouTube channel [The AI Epiphany](https://www.youtube.com/c/TheAiEpiphany) :bell:\n* Following me on [LinkedIn](https://www.linkedin.com/in/aleksagordic/) and [Twitter](https://twitter.com/gordic_aleksa) :bulb:\n* Following me on [Medium](https://gordicaleksa.medium.com/) :books: :heart:\n\n## Licence\n\n[![License: 
MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/gordicaleksa/pytorch-gans/blob/master/LICENCE)"
  },
  {
    "path": "Vanilla GAN (PyTorch).ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Ultimate beginner's guide to GANs 👨🏽‍💻\\n\",\n    \"\\n\",\n    \"In this notebook you'll learn:\\n\",\n    \"\\n\",\n    \"✅ What are GANs exactly? <br/>\\n\",\n    \"✅ How to train them? <br/>\\n\",\n    \"✅ How to use them? <br/>\\n\",\n    \"\\n\",\n    \"After you complete this one you'll have a much better understanding of GANs!\\n\",\n    \"\\n\",\n    \"So, let's start!\\n\",\n    \"\\n\",\n    \"---\\n\",\n    \"\\n\",\n    \"## What the heck are GANs and how they came to be?\\n\",\n    \"\\n\",\n    \"GANs were originally proposed by Ian Goodfellow et al. in a seminal paper called [Generative Adversarial Nets](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf).\\n\",\n    \"\\n\",\n    \"(`et al.` - fancy Latin phrase you'll be seing around it means `and others`)\\n\",\n    \"\\n\",\n    \"You DON'T need to understand the paper in order to understand this notebook. That being said:\\n\",\n    \"\\n\",\n    \"---\\n\",\n    \"\\n\",\n    \"GANs are a `framework` where 2 models (usually `neural networks`), called `generator (G)` and `discriminator (D)`, play a `minimax game` against each other. The generator is trying to learn the `distribution of real data` and is the network which we're usually interested in. During the game the goal of the generator is to trick the discriminator into \\\"thinking\\\" that the images it generates is real. The goal of the discriminator, on the other hand, is to correctly discriminate between the generated (fake) images and real images coming from some dataset (e.g. MNIST). \\n\",\n    \"\\n\",\n    \"At the equilibrium of the game the generator learns to generate images indistinguishable from the real images and the best that discriminator can do is output 0.5 - meaning it's 50% sure that what you gave him is a real image (and 50% sure that it's fake) - i.e. 
it doesn't have a clue of what's happening!\\n\",\n    \"\\n\",\n    \"Potentially confusing parts: <br/><br/>\\n\",\n    \"`minimax game` - basically they have some goal (objective function) and one is trying to minimize it, the other to maximize it, that's it. <br/><br/>\\n\",\n    \"`distribution of real data` - basically you can think of any data you use as a point in `n-dimensional` space. For example, a 28x28 MNIST image, when flattened, has 784 numbers. So 1 image is simply a point in the 784-dimensional space. That's it. When I say `n`, in order to visualize it just think of `3` or `2` dimensions - that's how everybody does it. So you can think of your data as a 3D/2D cloud of points. Each point has some probability associated with it - how likely it is to appear - that's the `distribution` part. So if your model has an internal representation of this 3D/2D point cloud there is nothing stopping it from generating more points from that cloud! And those are new images (be it human faces, digits or whatever) that never existed!\\n\",\n    \"\\n\",\n    \"<img src=\\\"data/examples/jupyter/data_distribution.PNG\\\" alt=\\\"example of a simple 2D data distribution\\\" align=\\\"center\\\" style=\\\"width: 550px;\\\"/> <br/>\\n\",\n    \"\\n\",\n    \"Here is an example of a simple data distribution. The data here is 2-dimensional and the height of the plot is the probability of a certain datapoint appearing. You can see that points around (0, 0) have the highest probability of appearing. Those datapoints could be your 784-dimensional images projected into 2-dimensional space via PCA, t-SNE, UMAP, etc. (you don't need to know what these are, they are just some dimensionality reduction methods out there).\\n\",\n    \"\\n\",\n    \"In reality this plot would have multiple peaks (`multi-modal`) and wouldn't be this nice.\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"---\\n\",\n    \"\\n\",\n    \"That was everything you need to know for now as a beginner! 
Let's code!\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"# I always like to structure my imports into Python's native libs,\\n\",\n    \"# stuff I installed via conda/pip and local file imports (we don't have those here)\\n\",\n    \"import os\\n\",\n    \"import re\\n\",\n    \"import time\\n\",\n    \"import enum\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"import cv2 as cv\\n\",\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import git\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"import torch\\n\",\n    \"from torch import nn\\n\",\n    \"from torch.optim import Adam\\n\",\n    \"from torchvision import transforms, datasets\\n\",\n    \"from torchvision.utils import make_grid, save_image\\n\",\n    \"from torch.utils.data import DataLoader\\n\",\n    \"from torch.utils.tensorboard import SummaryWriter\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"# Let's create some constant to make stuff a bit easier\\n\",\n    \"BINARIES_PATH = os.path.join(os.getcwd(), 'models', 'binaries')  # location where trained models are located\\n\",\n    \"CHECKPOINTS_PATH = os.path.join(os.getcwd(), 'models', 'checkpoints')  # semi-trained models during training will be dumped here\\n\",\n    \"DATA_DIR_PATH = os.path.join(os.getcwd(), 'data')  # all data both input (MNIST) and generated will be stored here\\n\",\n    \"DEBUG_IMAGERY_PATH = os.path.join(DATA_DIR_PATH, 'debug_imagery')  # we'll be dumping images here during GAN training\\n\",\n    \"\\n\",\n    \"MNIST_IMG_SIZE = 28  # MNIST images have 28x28 resolution, it's just convinient to put this into a constant you'll see later why\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Understand your data - Become One With Your Data!\\n\",\n 
   \"\\n\",\n    \"You should always invest time to understand your data. You should be able to answer questions like:\\n\",\n    \"1. How many images do I have?\\n\",\n    \"2. What's the shape of my image?\\n\",\n    \"3. How do my images look like?\\n\",\n    \"\\n\",\n    \"So let's first answer those questions!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"metadata\": {},\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Dataset size: 60000 images.\\n\",\n      \"Image shape torch.Size([1, 28, 28])\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": \"iVBORw0KGgoAAAANSUhEUgAAAW4AAAF1CAYAAADIswDXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nOydebxUcx/H3197hDalVUm2ynqFxxZJ0uKxZcmWHiHLYwllC61Esif7kmRfIiIP2XOzVCqKUmlDRSWRfs8fM985Z+bO3Dt39nPn+3697mvO/M72O2fmnvn8vr/vIs45DMMwjOCwUb47YBiGYVQOe3AbhmEEDHtwG4ZhBAx7cBuGYQQMe3AbhmEEDHtwG4ZhBAx7cBtRiMiNIvJUho8pIvKoiKwQkcmZPHaqiMhjIjIw3/3IFyLSVESciGyS774Ylcce3AWCiBwsIh+LyG8islxEPhKR/fLdrwxxMNAeaOSca5Prk4vI2SLyYRaP/174IbhnTPvL4fa24fc3ht+f5Ntmk3Bb0/D7qB8UEekpIrNEZJWILBWR10VkaxEZLyKrw39/i8hfvvcjM3x9bUVkYSaPmc/zVAXswV0AiMg2wDjgbqAW0BC4CViXz35lkB2Aec65NfFWVhHV9x1wpr4RkdrAAcDPMdstB24WkY0rOqCIHAYMBk51zm0N7AY8C+Cc6+icq+6cqw6MBm7V98658zNyRUbBYg/uwmBnAOfcGOfcP865tc65Cc65qQAi0lxE3hWRX0XkFxEZLSI1dGcRmSciV4rIVBFZIyIPi0i9sCpbJSLviEjN8LY6RO4lIotEZLGIXJGoYyJyQHgksFJEvlb1GF53toj8ED7HXBHpHmf/nsBDwIFhNXiTKisRuVpElgCPhrc9V0TmhEccr4pIA99xnIj0FpHZ4fMNCN+XT0TkdxF5VkQ2i3P+3YCRvvOv9K2uGVawq0TkMxFp7ttvVxF5O9yXb0WkWwWf4WjgZN8D+VTgJeCvmO3eDLedXsHxAPYDPnHOfQngnFvunHvcObcqiX2jEJGNReS28PfnB6BTzPoeIjIzfC9+EJHzwu1bAeOBBj5F30BE2oTv/crwd+gevf8S4g4RWSahEeRUEWkVXrd5uB/zwyOIkSJSLdF5KnudRYNzzv7y/AdsA/wKPA50BGrGrN+JkKlhc2A7YBIwwrd+HvApUI+QWl8GfAHsHd7nXaB/eNumgAPGAFsBrQmpwiPD628EngovNwz36xhCP/Ltw++3C+/7O7BLeNv6QMsE13c28KHvfVtgPXBLuH/VgCOAX4B9wm1
3A5N8+zjg1fC9akloNDIR2BHYFpgBnJXM+cNtjxFSv22ATQg9eJ8Jr9sKWAD0CK/bJ9y3RNf3HvAfYALQMdw2GTgQWAi09d9boCvwA7Bp+PgOaOrr18Dw8iHAWkKjr4OAzROcP7JPOd+x84FZQGNCo7r/hc+7SXh9J6A5IMBhwB/APr7Pa2HM8fYlNKLYhNB3aiZwaXhdB2AKUCN8vN2A+uF1I8KfYy1ga+A1YEii89hf/D9T3AWAc+53QnZgBzwI/BxWnPXC6+c45952zq1zzv0MDCf0z+XnbufcUufcT8AHwGfOuS+dc+sIKb+9Y7a/yTm3xjk3jZDiPTVO104H3nDOveGc2+CcexsoJfQgB9gAtBKRas65xc65bypx2RsI/Zisc86tBboDjzjnvgj3uR8hldzUt88tzrnfw+eZDkxwzv3gnPuNkFqLvcaKeNE5N9k5t57Qg3uvcHtnQqadR51z651zXwAvACdWcLwngDNFZBeghnPuk3gbOedeJfRj+Z/yDuac+wA4ntAPx+vAryIyPBkzSxy6EfqxX+CcWw4MiTnX6865712I9wn9CB1STt+mOOc+Dd+fecADeN/Jvwk9lHcFxDk30zm3WEQEOBe4zIVGD6sImYJOSeF6ihp7cBcI4S/32c65RkAroAEhdYKI1BWRZ0TkJxH5nZBqqxNziKW+5bVx3leP2X6Bb/nH8Pli2QE4KTwcXhk2MxxMSD2tAU4mpOQWh00Ou1bikn92zv3pe98g3A8AnHOrCan7hr5tKnuNFbHEt/yHb/8dgP1jrrs7sH0Fx3uR0MjhYuDJCra9DrgW2KK8jZxz451zXQgp1GMJjR7KfeAnoAFlP/MIItJRRD4Nm4ZWEvpxjv2O+bffWUTGiciS8HdysG7vnHsXuAe4F1gqIqMkNI+zHbAlMMV3X98MtxuVwB7cBYhzbhah4W+rcNMQQmp8D+fcNoSUsKR5msa+5SbAojjbLACedM7V8P1t5ZwbGu7nW8659oTMJLMIjRaSJTYt5SJCD0wgYlutDfxUiWMme66KWAC8H3Pd1Z1zF5R7Euf+IKT8L6CCB3d49DIH6J1Mh8IjnomEzF6tKto+Dosp+5kDIbszoRHFbUA951wN4A2871i8+3c/oc+8Rfg7eY1ve5xzdznn9iVk1toZuJKQuWktIZOT3tdtXWiCNdF5jDjYg7sACE+EXSEijcLvGxMyXXwa3mRrYDWwUkQaEvonSJfrRWRLEWlJyJY7Ns42TwFdRKRDeHJri/DEYiMJTX52DT9g14X7908a/Xka6CEie4UfJIMJmXvmpXFMZSnQKN7kZQLGATuLyBkismn4b7/wRGdFXAMclmS/rwWuSrRSRI4VkVNEpGZ4wq8NIXPEp4n2KYdngUvCn11NoK9v3WaE5hV+BtaLSEfgKN/6pUBtEdnW17Y1oTmO1eGRVuRHLXyv9heRTYE1wJ/AP865DYR+3O8QkbrhbRuKSIdyzmPEwR7chcEqYH/gMxFZQ+gfczqg3h43EbJz/kbI1vliBs75PiHFNxG4zTk3IXYD59wCQsPzawj9Uy8g9KOxUfjvCkJKeTmhB0pS6jEeYTV5PSHlt5jQRFmmbJ/vAt8AS0TklyT6sorQg+sUQte3BG8itaJ9FznnkvIZd859RGgSMxErCNmEZxN6SD4FDHPOjU7m+DE8CLwFfE1o4jryHQpf7yWEHu4rgNMITSDq+lmEJrN/CJs4GgB9wtutCh/b/8O/TbhtBSGTzK+E1DzA1YS+d5+GTSzvALuUcx4jDuKcjU6KifBk31xg0/CknGEYAcMUt2EYRsCwB7dhGEbAyNqDW0SOllDE2RwR6VvxHkYucM7Nc86JmUkMI7hkxcYdDhD4jlCk3ULgc0L5FmZk/GSGYRhFRrYUdxtgTjiq7S/gGULeCYZhGEaaZCsrW0Oio7QWEnJ3i0udOnVc06ZNs9QVwzCM4DFv3jx++eWXuIF22XpwxztZlE1GRHoBvQCaNGl
CaWlplrpiGIYRPEpKShKuy5apZCHR4bWNiAmpds6Ncs6VOOdKttvOUhUYhmEkS7YU9+dACxFpRijXxCmEoqySIpRErLiInSQutntQ7NcPdg+K/fqh7D1IRFYe3M659SJyEaEQ240JpeusTMpPwzAMIwFZKxnlnHuDUIYxwzAMI4NY5KRhGEbAsAe3YRhGwLAHt2EYRsCwB7dhGEbAsAe3YRhGwLAHt2EYRsDImjtgIfDvf/8bgOeffx6AhQsXAtC+fXsAZs+enZ+OGYZhpIEpbsMwjIBR5RT3LrvsElkePnw44IWRNmzYEIAxY8YA5SdxMSpG7/V7770HwPTp0wFvRFNVqF27NgBTpkwBoG3btkAoe1uQ2WyzUNH7mjVrAnD++ecD0L9/fwA2bNgAwG23her83nHHHWWOsXTp0qz3szL4w+TPOOMMAK677joAdtppJyD5sPJ4NG/eHMj/Z2+K2zAMI2BUOcX9wgsvRJYbN24cte7tt9+Ou0+bNm2AaLU+ePBgAD7++GMAxo0bB8D8+fMBeP/99zPU4+By7733AlC3bl0AFi9enM/uZJzNN98cgNdffx2ALbbYAoC//vorb32qLJp5U5WiqmqA+vXrA3DEEUdE7aNKW5XpFVdcEfXqp2fPngA88cQTmex2ytSpUyeyfPvttwOwbt06AM4991zAU8sHHnggAPXq1StznJUrVwLeaFJH6UcddRQAo0aNynDPK4cpbsMwjIBRZRR3s2bNAE/9+dFfzSFDhgBw8MEHA55NVn9xa9SoEdlHbWXHH3981Ovq1asBmDRpEuDZ0YOmwLfddlsABg4cCMCgQYMAWLJkSYX77rHHHoBn6/3zzz8B6NevX6a7mVdUce+3336A91kvWrQo4T75Zp999gHgsssuA2DfffcFYOeddwbSs+/GY+TIkYDnsfXuu+9m9PiV5eeff44s60jp2GNDVRM/+OADAL777jsAJk6cmPRxDzjgAAB++OGHjPQzXUxxG4ZhBAx7cBuGYQSMKmMq6dixI+C5NoFnIunRowcAd999NwCdO3dO+TzVq1cH4JhjjgHgsMMOAzyXI/ACfpIxO+SL5557DoB27doB8MUXXwDw6KOPVrjviBEjANhoo9Dv/pVXXgnATz/9lPF+5pM+ffpEvfdPfBcaLVq0ALzhv35Py0Mnk2NNBmomfOONUDr9GTNmlNlX91FXyauuugrIv6nEj07E6iTs5MmTAe9e+c0qFVEoJhLFFLdhGEbAqDKK+5xzzinT1rJlSwBeeuklAPbaa6+4+/74449AdDCBqg6d7Nxhhx3i7rvVVlsB0cEJqnaGDh2a/AXkgBtuuCGyrC5gOqn04osvlruvTmaC5za5Zs0aAF555ZWM9jOf7LbbbpFlHUkoOqlViFSrVg1IrLT/+OMPAO67775Im7q4ff3115U+3/r166PeF2LBb500v/DCCwFvVNmtWzfAc2cNIqa4DcMwAkbgFbe66am69qMqQF/Xrl0LQGlpKeDZaqdOnQrED2NVN8MzzzwTgLPPPhuARo0aJezTNddcA3g23yeffDLJq8kO+++/PxCtIFUt/+c//wHgt99+K/cYw4YNiyxvv/32AFxyySVA1bJtqwsgeCHhzzzzDACrVq3KS5+SQe3Vffv2jWr//PPPAc99NR0OOeSQyPLWW28NePMcqugLkTlz5gBeMJ26BZviNgzDMHJG4BW3BhZsskniS9HZYw0yqcwv7dy5cwG46aabAHjkkUcAz26+5557ltlH7Y133XVX1LYavJMrttlmG8CzX2+55ZaRdbfccguQOA2Aol46nTp1irTpdVQl27Zy6aWXRpb/+ecfAB588EEA/v7777z0KRn0O65h3plAU0bo3JB+98H7jmv4v6rYQka9vTQJmnqijR8/HohOUNWhQwcATjzxRMDzotHvgI7OdfSea0xxG4ZhBIzAK26/TTIW9dvUggqV8dtMxIIFCwAvjPa1114DoHXr1mW21Rn+3r1
7A3Drrbemff7KoDZ89WP1++smq5B0Rl6PAZ5KV7uqpsfVpD3qLx8k1LdXPQ4Ali9fDnjxAMWC2vYfeughoGwSKj86n6M+34WMjhDvuecewEsapikc/B43+l1W270+Q1SV6+hV5350RA7eSC2bmOI2DMMIGIFX3Ndffz0QP3mORkxmQmnHop4U5513HuDNWBcCqoBPOeUUwLs3atcGz8c1Eaq6Yo8Bnp1PS7+pj/uECROAYCpuHTH5R3BPP/10vrqTF3bffXcAxo4dC8Cuu+6acFv1XtHEW0FAnwP6vdXvqdrr1UMGvOvSUav+v+s2vXr1Arz5Mr/Hkd8DK1uY4jYMwwgYgVfc+guoyd9zjfrJvvrqq5E2tX8r/tnqXPDyyy8DZe3/b731VmRZvWViPSVUOWy88cZAdCShorPxK1asADz/cE3xGUQ0dav6+gM8/PDD+epOxlGb7AUXXBBpO+GEEwBvxKTbqI92eSlgNY+LpjPOl3dFKnz00UeAF22tqllLtEHi2AR9zqj9X4szdO/ePbKNKW7DMAyjDPbgNgzDCBiBN5XE1sfLF/7zx/YlV307+eSTgbI19L7//nsgOjGQukJpAFNFvPPOO5FlDWTSSZ4g15rUdAiaAvTbb7+NrPvmm2/y0qdsoOYP/wR1IvT7umzZMsALwtIJa/Dum7rc6v3Ldy3GZFDTngYr6fe4Muj/kpol1eyUK0xxG4ZhBIzAK+7y0Pp72UzHqS6HGiKbT7QIgk4s6qSLhnH7AwPUBUoDT5SmTZsCnpLQyUtNlA+ppQEtVDR5mKYH0ACrqoYmEfMXhzjjjDMAr1DCm2++CXiuolo4QgNw/N8BdR1s0KAB4I3C9H+tkIOWdAI+E0nDcp3GQjHFbRiGETBSVtwi0hh4Atge2ACMcs7dKSK1gLFAU2Ae0M05tyL9rsZH1UI8tzW1O2lazkyirkRaQEFtxn7U/qeuQ9lGQ+pr1aoFeJW+44XgqtubprRV1L1J7Zzjxo0DqpbKLo9CLk+WDqqiNaAkdrk8tByZvyyZuhCqzVz/1/S9lsTLlyLNFbHzSbkiHcW9HrjCObcbcABwoYjsDvQFJjrnWgATw+8NwzCMDJGy4nbOLQYWh5dXichMoCFwLNA2vNnjwHvA1Wn1shw0LHnAgAHZOgXgFQ/QWfNDDz0UiE6VqmhorSam0WRF2ebGG29MeV9Ni3vkkUcCXtDQ4MGD0+5XIaNh3f/73/8AGD16dD67Exi03J++KjqvpHMl06dPz2m/coV66egc1/3335/T82fExi0iTYG9gc+AeuGHuj7c62biHIZhGEaItL1KRKQ68AJwqXPu92TDu0WkF9ALoEmTJul2Iy4HHXQQ4KVrvPrqkPBPZjZZQ6C1wK7a9OLZ0iHaF1S9ObKR3CpbdO7cGfD8ujXk1+/XXJXQ4sc6wtCQ7XylTggq+n+iaAj8zJkz89GdSqFlCB977LFK79ulSxfASyuh6Z1zRVqKW0Q2JfTQHu2c0zLhS0Wkfnh9fWBZvH2dc6OccyXOuZJCrBBtGIZRqKTjVSLAw8BM55w/t+OrwFnA0PBrVutb/frrr4CnlPypGfUHQT0l1Pf0l19+qfC4ap+uKOpR07lq1CLAkiVLkup7IaElmhRNOl9VvQLUj129gbSgrJEcOhJt27Yt4P3/rVy5EshNMYFU0ZSsOn9TGcXdqlUrwCsArvdhypQpGexhxaRjKjkIOAOYJiJfhduuIfTAflZEegLzgZPS66JhGIbhJx2vkg+BRAbtdqket7JoIVdV0/60lRpBqKjNOx3U/1nTuZ522mkALF26NO1j55N999036n0Qck6kg46KdPS100475bM7GaNZs2aA5/Wg8yzp5JPREnw6DwJevg9V2vPnzwfgoosuSvk8uUI9wiozf6NzWxpRPG3aNCB5X/hMY5GThmEYAcMe3IZhGAGjyiSZ0vBuPz179gS8iahU+OuvvwD45JNPAK+6hb+aTFXi999/B7zqNkY
wUBPJp59+CnhpDzRA5pFHHgE8E5+fRYsWAVCnTh3AS9969NFHA3DIIYcAsPfee5fZV/8/tMp7kCbm9To1wZh+9/1o8riXXnoJ8ILptDK8VoHPNaa4DcMwAkaVUdyKX3lr4I2mstTEUInwq+gPPvgA8Nx8/IUEqjKaGKuqpjeNRZWopkzQiTgIliukTj7q5Jl+1zVw7Oabbwbiu7eqC59e+6abbppw21jOO+88wHOPCwL6v3zqqacCXqGRCRMmlNn2pJNCTnGqtFu3bg3kT2krprgNwzAChuS75BdASUmJ81eJznVV9EIg9nPI9T3QEGXth7pX5op8Xb+qTFWqxx13XGRdJhLtV4ZM3AO122pIv1aqP+aYY+KeIx56Xt1W7eTPP/98ZBtNqhSbZCodcv0d6NixI+DNhWkwEXgpEDTAbvjwUIxhtkdh/ntQUlJCaWlp3JtgitswDCNgmOIuEPKtuPNNsV8/2D0o9usHU9yGYRhVFntwG4ZhBAx7cBuGYQQMe3AbhmEEDHtwG4ZhBAx7cBuGYQSMggx5LwQXxXxT7Peg2K8f7B4U+/WXhyluwzCMgGEPbsMwjIBhD27DMIyAUZA27mIPdYXiuwfFfv1g96DYrx+St+ub4jYMwwgY9uA2DMMIGPbgNgzDCBj24C5yWrVqRatWrVi/fj3r169njz32iBRCNQyjMLEHt2EYRsAoSK8SI3d06dIF8GaztTzVEUccAcDChQvz0zHDMBJiitswDCNgmOIucpo2bRr1vlmzZgDssssugCluwyhETHEbhmEEDHtwG4ZhBAwzlSSgY8eOANx0001AqOIyeJN4zz77LADLli0D4J577onsO3v27Jz1M1U23XRTwLuuWCZOnJjL7hQMrVu3jiwfc8wxAOy0004ANGnSBIClS5cC8OKLLwLw8ssv57KLhmGK2zAMI2iY4sZTVPfff3+k7bDDDgNg4403Bsomf+nWrVvU+5NPPjmy3Lt3b8BTZIVI165dAYo22Gb77bcH4IwzzgDg1FNPBaBFixaRbbbccksgceKf0047DYBHHnkk0nb11VcDsGLFigz32DA8THEbhmEEjLQVt4hsDJQCPznnOotILWAs0BSYB3RzzhWk/Nhhhx0AeOKJJwDYf//9E25bWloKeLZtpXv37gDsueeekbYhQ4YAha24TzrppLjtsdcXdKpXrw54AUX16tUDYNiwYVHr06Fnz56R5V9//RWAfv36pX3cbNC+ffvI8ptvvgnAiBEjALjiiivy0iej8mRCcf8XmOl73xeY6JxrAUwMvzcMwzAyRFqKW0QaAZ2AQcDl4eZjgbbh5ceB94Cr0zlPpmnTpg0A48ePB6BGjRplttHAk9GjRwNw1113AbBkyZKo7d5++20AJk+eHGmrW7cu4HkhzJ8/P2N9Twf/qEBt+LEMHDgwV93JCQMGDADg4osvBrzk/Ins1j/++GNk+Z133gHghRdeADwPohtuuAHw5gn8qGr98ssvgcIbwfTv3z+yrPfg8MMPB7wRqP8eJEK9b/T/YtWqVQAMHToUgI8//jhDPY5m0KBBQNnPTz8b/3xDItasWRP3GEEiXcU9ArgK2OBrq+ecWwwQfq2b5jkMwzAMH5Lqr46IdAaOcc71FpG2QJ+wjXulc66Gb7sVzrmacfbvBfQCaNKkyb7+X/lslSzaZpttAPjhhx8AqFkzulsfffRRZPmCCy4AYPHixQAsX7683GNPnTo1styyZUsABg8eDMD1119fYd9yUbbJf3066lBUMbVq1QqARYsWZfz85ZHJ69eRDnifi9qyEynuTp06AdFKUe9JLFtssQUA1113HRBtz9bjfvXVVwCcffbZAEyfPr3CfmfzO3DssccC0fMusedbt25d1KvO60ybNg2AvfbaK7LtwQcfDHheV9rXDz74AIC2bdtWuo/JXP8///wTd9vKoKr8jz/+iDrP6tWrI9vcfffdUfv89ddfQMXPgXTxX1dJSQmlpaVxvwTpKO6
DgK4iMg94BjhCRJ4ClopIfYDw67IEHRzlnCtxzpVst912aXTDMAyjuEjZxu2c6wf0A/Ap7tNFZBhwFjA0/PpKBvqZEdRPO1Zp66/p5ZdfHmn75ptvKnVstaECvPvuuwDst99+KfUzH/z3v/8FyirtOnXqAHDggQdG2mbODM1Fz5kzJ0e9S45NNgl9nf0jnK233jpqG1VXqqzVF3vBggVJn+fPP/8EPMWtnioA55xzDgB77703AGeeeSYAV111VdLHzyQ6r/H000+XWae2XlWRjRs3BmDzzTcHoF27doDnkZMMiUYpmUK/c6pM9d7Hfs7loZ+REm8Upv74ikbL6rzYo48+CnhzW3///XfS588E2fDjHgq0F5HZQPvwe8MwDCNDZCRy0jn3HiHvEZxzvwLtMnFcwzAMoyxFEfKuiZROOeWUqHZ17dOh4LfffpvyOXQo5ae8gJ5copNKfndA5bvvvgPgueeei2rXSjgHHHAAEG0OWLlyJQA9evQAYNy4cRnucWroEFj7BWUnsfr06QN4ScEyMcT1TzzGnk8TVeXLVHL00UcDnvlj7dq1kXXqEqrfgdq1awOeO6ua+vSa/AnJ/PcYPHPLeeedl9kLiEHzxCvaJ3Vl9HP66acDnglI0Tm1hg0bJn1e/f7rZLO+ahCT38yq9zObWMi7YRhGwKjSirt58+YAPP7443HX669mOko7CKjCUDc2P6o4dcJNlfbxxx8PwIYNG8rso5O7I0eOBOBf//oXkP9AI+1zPDSY5o477sj4eT///POE6/Kd4ldHk/r5XnPNNZF16rKoqHucTtROmTIlav3OO+8cWY511dMka7l2I1WXRX31o4FTsai76G677ZbwuLVq1QI8Ja2TzbHoiMZ/rA4dOgDZnbw3xW0YhhEwqrTiVvW46667RrU/9NBDgBeuXlXZdtttAe964wUtqOuiJko66qijAE9plxfooHY/TZGaL8VdrVo1wBth+VFXT01dkA38dmM932abbQZEp4nNBxriroUzNECmMmjgmj91sX4vvv/+e8D7HgUB/Z4m83199dVXAW+eSO3mOl+mboh+G7uO7mLruWYSU9yGYRgBo8opbi1iAIltWNlIgBPPY0O9L/KFehI0aNCgzDpViToDrvbqWHRU4lcnZ511FuAFvOQbtT9qhXo/mt7gySefzNr5169fH1nWkOxC4dNPP037GBqMoiMr8K5TFWhVRYOUNF2Evqo3ic6f+dMDN2rUKOv9MsVtGIYRMApDMmUAVZcahgyeXW/u3LmAN2t+2WWXAV4BhUykdzzxxBPLtN1+++1pHzdbqFo+99xz465/+OGHAS/Z1j777BNZp4pb0XB4f2rbfBAvKVG2Epb50dJ34JU7Ux577LGsnz9bqLeQP52DoiH05XnUVGVeeSWUyUPjNzJRkKMymOI2DMMIGFVGcWvRV3+En9rhtEyXRkpOmjQJgI022ihqu1RQO7pGyIGXHlIT0hQiWmIrUbHg2Ag4f+RfrG07Fza9ZIg3cspFsnz/Zx97PvWfDiKaeExHEf7Uy4Vami1X6ChLPbdyjSluwzCMgFFlFHe8QqfqW6plpBRV4OkobbWdqv3PH5WokWfq41qIaEmtSy65pNztdAQTL0WtqvZEHinFwpFHHlmmTQtwBNHGrdGC+n+iowj//1hsCb9iYccddwTgxhtvBLy0x36yVbbNjyluwzCMgGEPbsMwjIBRZUwl9evXL9N22223xd02NrlOKmhlE53E81f+0HWFTEXVeXSory6V/kkYHTprUEu+TULz5s0D4Oeffwa8pGjNMNQAACAASURBVFrZZvjw4UB0jUu9Nxq45K9jWOh07twZKFtJXUO4Nfy7mOnVqxcAp556alS7Pz2wJp7KJqa4DcMwAkbgFbe6s8VLWVqZOoIVoZORDzzwAOClhP3tt9+A6HSh6m6YbzQUW/voV83+GpLxUNUaz51Ole2VV16ZkX6mi6YS1RGAP6m9Jv/RlK/+Kuepovf
R7waoaEBKvMnyQkWTdGlNVh1laUpaHVXGS/FbLJx//vkAdO/eHSj7fzFs2LDIsgb6ZRNT3IZhGAEj8Ipb7YuqEjKNBpfcfffdAHTt2hXwVOyIESMAGDBgQFbOnw5aTqpbt24AjB07NrKuRo0alTrWvffeG1nOd3GAROhIwB/mrsEjardNRXFrwFGbNm0Ar1SbKm//+dTNcsWKFZU+T75Q27bOE+l8jaZs0DmEYkSTx2nAUexcmrbr8yFXmOI2DMMIGIFX3Dprr/Y3DWMHLwF8svgT8fft2xeALl26AJ7NV9XIpZdeCniJqgoZDUTyh7dr//32YPA8bhYuXAh4Cbl++umnyDaZKLCbDbTg8U033RRpiy1oMGPGDMBLkqR23Xho8WFVXbHFptXO6Q+y0SLEQUJHjcp9990HwIMPPpiP7hQEmsJWPWliCwu//PLLgHevcp3awBS3YRhGwJBcJOGpiJKSEucv9plKKk5VlYcddlikTcNyVRHpOTQFrNqv1ePAXxBUlZoqelWi119/PZB5z5HYzyEX6UgLiUxe/wknnBBZVk8T/Tz1uMl87yvaVlPfDhw4MNKWjidTLr4Dfp9zHXWoh5Haslu2bAnkXkXm+3/AP0+mMSAXXnhh1DYvvfQS4I3G/PEbmcB/D0pKSigtLY17E0xxG4ZhBIzA27iV4447DoCvv/460ta4cWPAs1dXxLp16yLLWmR1zJgxQP6jA43keeGFFyLLOuehn2eicnblocnI1JtGk2rpq790WaGjcxYABxxwAFA2EjbIqWjTQed9IPdKu7KY4jYMwwgYVUZxq1+133NiyJAhANStWxfwbNlqn1bPCfXtnTp1amRfU9hVA/U00Ve153bq1AnwRlnt2rUDvOLJftT/feLEidntbA7we04pv//+O1CYsQi5QP30/fNjihb81nmMfCttxRS3YRhGwLAHt2EYRsCoMqYSRYd9UHaCwTA++eSTqFelWM0EAHPmzAHSqwgVZDRBXbx0rFrN3e/0UAiY4jYMwwgYVU5xG4aRHLNmzQK8tK3Fypo1a4DoEXqfPn0A6N27d176VBGmuA3DMAJGWiHvIlIDeAhoBTjgHOBbYCzQFJgHdHPOlZvjMhMh70En3+G++abYrx/sHuT7+v3nU7v32rVrc9qHXIW83wm86ZzbFdgTmAn0BSY651oAE8PvDcMwjAyRso1bRLYBDgXOBnDO/QX8JSLHAm3Dmz0OvAdcXZljF0Liq3xT7Peg2K8f7B4U+/WXRzqKe0fgZ+BREflSRB4Ska2Aes65xQDh17rxdhaRXiJSKiKlWrnEMAzDqJh0HtybAPsA9zvn9gbWUAmziHNulHOuxDlXokUKDMMwjIpJ58G9EFjonPss/P55Qg/ypSJSHyD8uiy9LhqGYRh+UrZxO+eWiMgCEdnFOfct0A6YEf47Cxgafn2lsscuttl0yP+Mer4p9usHuwfFfv2QvF0/3QCci4HRIrIZ8APQg5CKf1ZEegLzgZPSPIdhGIbhI60Ht3PuK6Akzqp26RzXMAzDSIxFThqGYQQMe3AbhmEEDEsyZUShrplao1ErXxdjIqLNN98c8O7BySefDMANN9wAeKlhTznllDz0zihmTHEbhmEEDFPcRU6zZs0AL33l+eefD0C1atWAwksgn23q168fWZ4wYQKQuDK8hWQb+cIUt2EYRsAwxZ0hvv3228iyKrFdd901X92pkMMPPxyAZ555BoBatWpFrddq5/fcc09uO5Yn1J6tKhsSK+1ioUWLFgA88MADALRu3Tqy7s033wTgjDPOyH3HDFPchmEYQaPKKe7rrrsusnz66acD0L59ewAWLFiQ8fOp4mjcuHGkbd68eRk/TzrsvvvuAIwePTrS1qhRIwBq1KgRte3zzz8PwMUXXwzAL7/8kosu5h0tlDt58uRIW7Epbh116WffsmVLwPMo2mOPPSLbXnXVVTnuXW7ZcsstgbKj5pkzZwK5L7AQiyluwzCMgFFlFHe
DBg0AOPvssyNtO+ywAwBXXx2q46Aq4Y8//sjYefWY6utbSKjSfvvttwGoV69eZJ3a4X/88UcAunbtCniKotg8JtavXw/ABRdcEGl79tlnAWjevDkAd911V+47lgPUs+iRRx4B4MUXXwSgW7duUdsV8pxNOlxzzTUA/Pvf/4606ei0b99QpmqNb4hV3Pr+rLPOyk1nw5jiNgzDCBhVRnGrTUpVth+10b366qtAtOdAqpxwwgmAZwctRIV66aWXAp7SXrNmTWTdvffeC3hqwwjx119/RZbfeustALbaaisAunfvDsD++++f+45liD333DOyfP311wPQqlUrAC666CIA3nnnnbj7NmnSJLL8xRdfZKuLWeO4444DYODAgQDssssugJc+1v8/vO+++0a16Tb6/67v99lnn6h2gDZt2mTnAnyY4jYMwwgY9uA2DMMIGFXGVHLooYfm9HwHH3xwTs9XGXTCVIeGy5aFqse1a+elSddJFaNi1HVSJ3uDzJw5cyLLN998MwBz584FYNWqVeXue+6550aWX3rppSz0Ljs88cQTgDf5qGZVNYPMmjUL8EwnfmJNoIne+ydu+/XrB8CQIUPS7nsiTHEbhmEEjCqjuI8//vh8dyFCvtRs9erVATjxxBMBz2WpS5cuee1X0NGEW1tvvXWee5I+/gnqqVOnJrWPBmltv/32kTZ1HSxkdOJdJ5VVHev/hQbPvfzyy0B08JW6/5WUhAp8/frrrwC88cYbAHTo0CHqXPq/BzBgwADAC1578MEHM3I9fkxxG4ZhBIzAK261V+lrPFRlZCLwRgN9VMXGQ5Py5Jodd9wRgL333huAp556Cii+1KyZRlVXLFOmTMlxT/JDp06dAPjtt98ibdlIH5EJ/O6tGjyjSltf9fP0J4aD6BGpLqvSVlSlq+LWeSR99Z9H5wQ0oCn2WOlgitswDCNgBF5xqzfJIYccknCbSZMmAfDhhx+mfb7yAn3yTawyHDlyZJ56UjWoWbMm4CVdikVTCVR1NNjs/vvvz3NPEqOeI2rPhsQ27Vilrfi9QNTTJBZVzU8//TTgBWfFm2PTIB4NXDLFbRiGUcQEXnErGoIaDw39zsb5Ntoo9Ns3ffr0yLpEv9bZQme0L7vssqi+pWPT1+vS2fXy2LBhA5DZuYR84f8e3XnnnYAXEq6o7bSqzx2oF42mRdZkbYVEnTp1AC+uwu9nrcux3iOJSOX/Nl5StlykvzDFbRiGETACr7g1mi3bv3L6y642dT2fqk1/khn15shVNKfa2TR6S/vWsWNHAKZNm1bpY5555plAcj6o69atA2DGjBmA58eqNuA///yz0ufPFvo5brJJ9Fd/s802A7zESwCnnXYa4N1PTUD13nvvZbubBcEll1wCeEp00aJF+exOXLRYitqR/SMm9aOuSGmng86bxRvxl2cFSBdT3IZhGAEj8Iq7V69eFW6jdlr1PU3GI0Rn0lU1165dG/BmimPxR6Q9+eSTFR4/FxxxxBGAl1di9uzZSe9bmbJdW2yxBeD5j6vf6qBBgwC48cYbkz5WpmnYsCEAF154IeAVSoiNgoyX2jMWjTRMZQQTJDbeeGPA800eMWIEEP0dLxQ0v0i8z02/h7nA7wMem/NEfctPOumkjJ3PFLdhGEbAsAe3YRhGwAi8qSQZtH7cBx98AHgTGskMj5PZBrwgH8hOUplU0DSu2jd/WledSEzEHXfcAXhBA37TSefOnQFvskqrgcdy+OGHAzB06NBIWzYnKjUZ1JFHHhlpe+ihhwDP1JUOGuB0zDHHALkdimeK/fbbL7J8+eWXA/DTTz8Bnpuj1k+sW7cukJmKUdlCTZn6f5rNCcHy8Af3qZOATU4ahmEYEQKvuJP5pW3atGnUq6JBJurSFw/dRpPNq3pVp35dn8/E8j///DMA//nPfwDPpU0nYdUF7uOPP47s8/zzzwOe655We1eWLFkCwK233lrmfLVq1QI8N0Cd/NXzqiuhpgPVya5so0o
73mehoy1V/zpZqeq5Mtx3332ANwmVzYT5maJ+/fqAV7kevKIKWmBA07a2bdsW8D5PLcRRiJQ3OZlLygvAGTx4cMbPZ4rbMAwjYKSluEXkMuA/gAOmAT2ALYGxQFNgHtDNObcirV6WQ2zKxsqgSru8fdXup0lsVNlrUitV8f60jo8++mil+5IOeh2PP/444LltqTte8+bNAdhmm20i+5x99tlRr0rPnj0BePPNNxOeTxW1qnYNcddADb1HX375ZVR/Mo0Gzdx+++0AdOvWrcw2n332GQD9+/ePav/Xv/4V95iff/55ZLlPnz6Ad716b3QE89///heAo48+OrKPqtcVK7L2lU8J7Ze/PJkmRtL5jHPOOQfwqtvn+nucCmpb1pB3/8g7l+UM/efNhb09ZcUtIg2BS4AS51wrYGPgFKAvMNE51wKYGH5vGIZhZIh0bdybANVE5G9CSnsR0A9oG17/OPAekLXsNGrfVVWZCvPmzQNg8eLFkbaBAwcCiWfUX3vtNSBxys98okpY01nq+0033TSyTaJRhnphJONNo7a72G31Ndu2UVXNGlSj+D8THSmpmrzlllsAT60rsWXewAuZVq8LLSigtmBV3voa77iFQo8ePQD43//+F2lT9a0jI8XvIVXoqGfM66+/DkQnRYtX/DfT6EhbRzTgff/1+6OvmSRlxe2c+wm4DZgPLAZ+c85NAOo55xaHt1kM1I23v4j0EpFSESnVh69hGIZRMSkrbhGpCRwLNANWAs+JyOnJ7u+cGwWMAigpKUl5SlgTp48bNw6IH6qt6jnRL5+GqP/++++pdqMgURVy7bXXAvE9RLKB+jfffPPNWT3P0qVL47b7y1dpSt/GjRsDZRWxhrFrioN43xH9Xug6f9HcoKCjBk3KBp5tW72RdK5E/eGDwBdffAF46SzU99yPjjjVE0xHV+mgzx2dV/Lbs1Vx60gwG2Xe0vEqORKY65z72Tn3N/Ai8C9gqYjUBwi/Fq4vkWEYRgBJx8Y9HzhARLYE1gLtgFJgDXAWMDT8+kq6nSy3E/PnA7DHHnsAnnIC75cvGxFuOputngX5ithKhuHDhwPRpbbUf1vttumgquebb74B4LbbbgNg9erVaR+7PLQE1dixYwE4+eSTAc9nuTwefvhhwPMaKs8LRBX3+PHjgbKFFfxRqIVaREJt+y+88EKkTecgDjroIACGDRsGJI6ELWR0vkVH1+DZuNX+PHnyZMBTwLH+/v7nhP4/axSk/r+rTVuVdrw5IE04lc3YjpQf3M65z0TkeeALYD3wJSHTR3XgWRHpSejhnrmUWIZhGEZ6XiXOuf5A/5jmdYTUt2EYhpEFJN+hohCanCwtLY28L2SzQyzr168H4I033oi0de3atdLHif0csn0PNBhHh5N+ExN4Yc86AejPN6xo1W/1CkrHTJDO9etkoU6GquufHw0OuummmwB47rnnKt1HTSHw/vvvA54pyJ+8K9GEaTLk+jsQi05ia477ZPLWZ5JMXL/fNbNfv36AV4tVJ19j3Vfjub5WtI2+1+++mgvBmwRNpaq7vw8lJSWUlpbGvQkW8m4YhhEwTHGnSVAVd6FR7NcP+b8HOhGnLpKa3nXMmDE5OX+2rv+oo44CvElKHVGko7hVTWtd19ggplQxxW0YhlFFCXxaV8MwMsMPP/wAwDPPPAOkZqMtRDRthb727t0b8Fz7NGjPH7Yei6YF1rmSfBdLMcVtGIYRMMzGXSDk276Zb4r9+sHuQbFfP5iN2zAMo8piD27DMIyAYQ9uwzCMgGEPbsMwjIBhD27DMIyAYQ9uwzCMgGEPbsMwjIBhD27DMIyAUZAh74UQFJRviv0eFPv1g92DYr/+8jDFbRiGETDswW0YhhEw7MFtGIYRMArSxl3syWWg+O5BsV8/2D0o9uuH5O36prgNwzAChj24DcMwAoY9uA3DMAKGPbgNwzAChj24DcMwAkZBepUYhpE/Fi1aBMD2228PeIVxzzvvvLz1yYj
GFLdhGEbAsAe3YRhGwDBTiWEYUWgQyIYNG4BQtfGqxA477AB41zl//nwA9thjj8g2hx56aNx9TzjhBACGDx8OwGuvvZa1fpaHKW7DMIyAYYo7SWrVqgXAnDlzABg6dCgAt956a976lAk23XRTALp37w7AkCFDAKhbty4Af//9NwB9+vSJ7HP//fcD8M8//+Ssn0b2adeuHQA1atTIc0+yy+jRowFo0qQJAH/88QcQfd21a9eOu6+G4e+zzz4A3HjjjQC88MILkW1UwWcTU9yGYRgBo6gU9yabhC63f//+ANx8882ApyrLQ9XINttsA8BOO+2UjS7mjM033xyAsWPHAtC5c2cAlixZAsAHH3wAePfs4osvjuy73XbbAd59DCI6grjlllsAmDZtGuCpL4AFCxYA8PzzzwMwfvx4AFavXp2zfuaC+vXrA3D88ccDsNlmm0Wtf+KJJ3Lep2zQtWtXAFq3bg3AlltuGbX++++/jywvX7487jE22iikdVWt64i7R48ekW38tvJsYYrbMAwjYFSouEXkEaAzsMw51yrcVgsYCzQF5gHdnHMrwuv6AT2Bf4BLnHNvZaXnKXDNNdcA0K9fPwC23nprAC699NIK9z311FOj3k+cODHDvcstL774IgAdOnQAYP369QD06tULgDfeeCNqe7/Nb9iwYbnoYlbp2LEj4HkWtGrVqsw2+++/P+B5EqjSHjRoEFA17gN4166fvaIjjJEjR+a8T9lA/89jlbay6667Jn2sWbNmAdC8eXMAdtttt8g6/b747d6ZJhnF/RhwdExbX2Cic64FMDH8HhHZHTgFaBne5z4R2ThjvTUMwzAqVtzOuUki0jSm+VigbXj5ceA94Opw+zPOuXXAXBGZA7QBPslMd9OjcePG5b6Px1FHHQXA0UeHfrtmz54NZPfXNFuceeaZkeUjjzwSgD///BOA008/HSirtJVff/01sly9evVsdTHrqG2/adOmUe0jRowAYMyYMZG2Tp06AZ4S69atGwDXXnst4HnXBN3mrT7LsYUL9Luwbt26nPcpk+j1HXLIIXHXv//++5U+pu6jitvPwQcfDORfccejnnNuMUD4tW64vSGwwLfdwnBbGUSkl4iUikjpzz//nGI3DMMwio9Me5XEqzUUtxaPc24UMAqgpKQkuXo9abLzzjtHvU+kLv2oPVxn2tW2pTbhIFCzZk0AbrvttkjbxhuHLFjPPvssAC+//HLSxwtysqFGjRoBXvScqmW1Vy9dujSy7ZQpU6K2VcWtI4569epFHSNobLHFFgAcfvjhgGfvVyGlI4qgorbsRHNYa9asAbzRVmV46aWXADjnnHNS7F16pKq4l4pIfYDw67Jw+0LAb39oBCxKvXuGYRhGLKkq7leBs4Ch4ddXfO1Pi8hwoAHQApicbiczhdq4fv/9dwAmTJhQ4T6qqtT+99FHH2Wpd9njyiuvBKI9Q1RtXHLJJZU+3ooVKzLTsTzg90cHLxLWr7STZccddwSi/X+DhI5A1XtGGTBgQD66k3EaNGgAQJcuXeKunzp1KgDjxo2r9LFLS0sBb1S27777ptLFlEnGHXAMoYnIOiKyEOhP6IH9rIj0BOYDJwE4574RkWeBGcB64ELnnMVFG4ZhZJBkvEpOTbCqXYLtBwGD0umUYRiGkZiiCHnXUF6dfNEkMxrSHI8DDzwQ8NzGdN8gugGeeOKJgHcN4IVxB9nskQoaqqymr2QmmdWUoPtoWoAPP/wwG13MGbvvvnvcdv1uxG43Y8aMrPcplwwcODDlfTXtQ506dcqsO+ywwwDPNOl3pc0UFvJuGIYRMIpKcSu//fZbhftoukZ1A/zss88AWLhwYWY7l0VUXWqQgF9x+9O0xkMTD6nL5MMPPxxZd88992S0n7lEJ2X1Xtx9990V7nPQQQdF7TNz5kwA1q5dm40u5gwNKould+/egOf2Wa1aNSA6AVffvn0BePLJJ7PZxYwQG1hUUXsyqItobCAXwF577QX
AtttuC5jiNgzDMKjiiltTr2rSFyVR4I2/RJOGhKvKuuOOO4DkUsAWCldddVXUe1WbUHEY80UXXQR4KTAT2UODhro/PvXUU0D54c4aaHPWWWdFtX/77bdZ6l1uUcUZqzyvv/56wEthqiXMNCkbwKOPPgp46X/nzZuX1b6mg3+kCd5nPmnSpJSPed1118U9NsBjjz0GZLeggiluwzCMgFGlFbeGdWt5LkW9Sn788ceodi3XFQ9NHKOq47vvvgO8ckevv/56BnqcXb788svIsl99+9EwaPWqUaqKylQvmrfeqjjbsI66VHn/9NNPgFeAI4hoIRDwPuN4qhHgq6++ArzEXLEpIwDOP/98wLN5BwG11acyRxFbjCEeOoeWzbQYprgNwzACRpVW3PqLumrVKsCb5dVEQ/oaj1i7X2yotPLxxx8Dham41U9dr8Wf1vKyyy4DvHukiYY0hWlsYQFNcQpw5513ZqnHhYXOayiafCmV8PhC4YwzzogsN2vWLO42WjRE/bm1FJeGdxcz6qmVqBhDrjDFbRiGETCqtOLWWV1VDOqfqvYptfdpMhr1VwXYfvvtAc/+pwUHYv2A1d+7ELnvvvsArxCw326dqOyWRgW+/fbbgGfn/de//hXZRu+NbluV8BfXUF9d/ayD7L+uqGIsj1QKCxQysaPnVPy3tRjDXXfdBXieNvHIRUStKW7DMIyAUaUVt6K2Xi2KEEvDhqEiPX6vC2X69OmA58ur+RqC4M+ttv0jjjgC8Mqwgefbrukply9fDngFFTRNpSpu9TaBsl46VQl/Un1V2pq29bnnnstLnzKJft6ZIgjFFmK9ZhJ50cQjthiDKu3YY/jzuOQin5EpbsMwjIBhD27DMIyAURSmkorQSUl/SK+iwRZff/11TvuUSdSs43dZrMh9UZNpaWi8BmFA/PtUVWjbtm2ZNq3VmShoKUi89957kWX9bP1mMPAm7zTkvUOHDlHtfoKeaCsRmoBL00YkqhCvz4XYRHbZxhS3YRhGwDDFTeLQePAmJ4sNTQcQT3FrYYYgh37HoiHb/tGE1iZ9/PHH89KnbKBBROAF2HTv3j1qG00ott9++wEwePBgIHpCTlMHBGGSPha9Pg02mzVrFhA9MX366acDXtBeLDoZqUo7mwml4mGK2zAMI2CY4gZ69OgBRNvw0kmyXpVQJeFPquNPf1tVOPfccwHPrgvw2muvARWnwA0qiVKxavCV/g/Ec58755xzgMIufbdy5UoApk2bBniBeFr8IHY07f/sEwXY6LHUTTYbRRKSwRS3YRhGwDDFjRfOHk9ZqNLUNK7FhioMf9KpRDPsQURHD/FKUMUWoqhqaDk6Vc9ari4WVdX33ntvpO2dd97Jcu/S55dffgGgffv2ALz77rtA4qIgfpWdKMBm5MiRQP6UtmKK2zAMI2CY4gbGjBkDeOWI/JRXXKEYiJcMXj0vNtlkk4TbBIVBgwYBnsKaMGFCZF1VTKLlR1NBdOnSBYABAwYA0LFjR8BLNnXDDTcA8NFHH+W6ixlB1bH64z/wwANA+akb1Gd/6tSpgOdlkmvvkUSY4jYMwwgYprgrYOLEifnuQl755JNPgOgE/IomrUpUfLmQUR9ef7paKOw0vdlCo/+0LFdV5cknnwS8z768OQxNKqWFfwsNU9yGYRgBwx7chmEYAcNMJXiO+v5agrNnzwa8is3Fioa+T5o0KdKm1UCWLVuWlz5lgiuuuAKIrnoEVaeavZGYa6+9Nuo1iJjiNgzDCBimuPGUtlbCMTzeeuutqNeqwo477hj1XoM1gpg0ySg+THEbhmEEDFPcRlHSrl27fHfBMFLGFLdhGEbAqPDBLSKPiMgyEZnuaxsmIrNEZKqIvCQiNXzr+onIHBH5VkQ6ZKvjhmEYxUoyivsx4OiYtreBVs65PYDvgH4AIrI7cArQMrzPfSKyccZ6axiGYVRs43bOTRKRpjFtE3xvPwVODC8fCzzjnFsHzBWROUAb4JPKdCpeetV
io9jvQbFfP9g9KPbrL49M2LjPAcaHlxsCC3zrFobbyiAivUSkVERK/XXwDMMwjPJJ68EtItcC64HR2hRns7g/m865Uc65EudcyXbbbZdONwzDMIqKlN0BReQsoDPQznljmoVAY99mjYBFqXfPMAzDiCWlB7eIHA1cDRzmnPvDt+pV4GkRGQ40AFoAk1M4firdCjSx9rxiuwfFfv1g96DYrx+St+tX+OAWkTFAW6COiCwE+hPyItkceDt8cz91zp3vnPtGRJ4FZhAyoVzonPsnpSswDMMw4pKMV8mpcZofLmf7QcCgdDplGIZhJMYiJw3DMAKGPbgNwzAChj24DcMwAoY9uA3DMAKGPbgNwzAChj24DcMwAoYVUqgALWf29NNPA3DQQQcB8PDDIY/I8847Lz8dMwyjaDHFbRiGETBMcSdAlfb9998PwIEHHgjAhg0bAEs5aVRd2rdvD8CNN94IwAEHHJBw2wULQslAb775ZgAee+wxwPs/CSo1a9YEoH79+gA0b94cgKOOOipqu1NOOQWAWrVqRdpinw2LFoXSNbVp0waAJUuWpN0/U9yGYRgBwxR3AnbYYQcAjj46tviPUZXQkZSqoUsuuSSyrmnTpgCsWrUKgIsvvhiAJ598Moc9TI8ePXoAsOeeewIwc+bMyLoHHngAgDp16gBw2WWXAXDFFVcAsG7dOgAWL16c8PhbbrklAKNGjQK8kergwYMB+Oefwk9V1KVLCAN59AAAEH5JREFUl6hXgEMOOQSAFi1aABWPsP3rY7dV1a4q3hS3YRhGEWIPbsMwjIBhphIfTZo0iSyPGzeu3G07d+4MQMeOHQEYP358eZsXLNWqVQPg5JNPBuD4448HoFOnTgC88847AFx++eWRfebPnw94JoQgoSawnXbaCYAnnngCgC+//BLwzCMAc+fOBWD77bcHvIlqHfreeuut2e9wiuy6664AXHfddYB33ZMmTYpso20XXHABAFtvvXXUMfbbbz8Apk2blvA8++67LwAvvfQSAP379wc899nvv/8+javILGqqUHNY165dAWjdujVQfv7vlStXAvD3339Hteuk5Cab5PZRaorbMAwjYJji9uGfdNpmm22AxG5NqsiDqrRVcapqVPWxevVqAEaOHAl4AUd+pfbII48AcOWVV+amsxng0EMPBeCZZ54BoG7duoA3eaajiA4dOkT20SLWu+22GwAPPfQQ4LmAqQIvpJFHjRo1AHj99dcBT1Ureh9ilwGmT58OwC233AKUr7SVKVOmAN49eeuttwCoV68ekF/FvemmmwJw0kknAXDPPfcA3v+2TiL++OOPAMybNy+yr35PVqxYAcD7778PwC+//BJ1jqlTpwKw++67J+yHqvU1a9akeCVlMcVtGIYRMExxFxHHHXdcZFnVsqpJDd2fMGEC4AVW3HnnnQBceOGFkX233Xbb7Hc2Dfy2SnVXu+uuuwAYMWIE4LkB3nvvvYBny4/H119/DcD5558PwMEHHwzAa6+9BsC3334b2fbSSy8FYO3atWleRWpsscUWQFml/fHHHwPR8ziNGjUC4O233wbg2GOPBeCvv/6q9Hn1+N999x3guSFqez7Q0cfjjz8e1a4qWkcJ+vnGqul46P3V61O7eTx0lKrb6qguE5jiNgzDCBimuIE+ffoAsNdee1W4rXof3HDDDVntUyZR9aWqE6B69eoAnHjiiQBMnDgxah+dLVdF7kc9TgoN9Yro27dvpO2MM84APKWZjkdIaWkp4CluDdLQ9+AFbBxxxBGVPn4m0OuNRW20/fr1K7Nu6NChQGpKOwjEeouo50vsd7482rVrB8CgQaFyuiUlJVHr/fdO7fynnhoq1/vnn39WsscVY4rbMAwjYJjiBmrXrg14Ps3loSHwaicLAsOGDQOi7XFq706kOoYMGQLEV45qJy401JPBr7iVww8/HID//e9/lT7umWeeCXgh4Uo8v9/DDjsM8HygP//880qfLx005DyWX3/9FYi
242qo+8KFCzPej2bNmgGw+eabR9o0hD5XqDeHpmA+55xzAM+PW0PPX3zxxYTH0Dme7t27A978TmxYe+/evSPLmmgrm5jiNgzDCBhFrbjV9zLWn7WqoLZotWOrXzJ4ngQ6yth7770BT62WZ8f+8MMPM9/ZDKDX6VdDqjBTUdqKqsdEiYb87XqP1Tc43yxfvhzwvGZ23HHHyDpViepRdNVVV2XsvG3btgU8zw6ApUuXZuz4yaBRjhdddBHg2aU14VZssjC/z/mYMWMA2HnnnQHvM9ZX9d9WO/bs2bMzfwHlYIrbMAwjYBS14tbk6GqPrGpo9NzkyZMBT1UDfPPNNwBsttlmgKcq169fD3gKVRWatoPnWVNoqOL2o0nsU0FtwP77VhFqV122bFnK500HjWRU1HdYoyLVgwpgq622AuCHH35I+7w6F9KyZcu0j5VpVHlrBOV7770HeB5GY8eOrfAY6uutUaXPPfdcprtZKUxxG4ZhBAx7cBuGYQSMojSVaApGDS7ZaKOyv1+xbVo5xG8yCApnn3024AUPxEMnW4YPHw7ArFmzAG9CSd9DdgIK0mGXXXYBvImkeOtS4YMPPgC8oJpE+L8TX331VcrnywQ66azcfvvtUe/9gSK9evUCMlMfUk1u+r+lE3+5npAsDzUJvfLKK4CXwqA8dFsNbPrjjz+y1LvKYYrbMAwjYBSl4tbE8QMHDgTKVxy6Tt3kCimFZ7KoWj7hhBOS3sdfe7HQ0SRPo0ePBuDcc8+NrHv++ecrdSz/BKe/qEJ53HbbbZFlLVyQLzSkXYODYhP/+8lEPUhNp3DWWWdFnffdd99N+9iZRp0R1P23vMIJ6uaX70nIRJjiNgzDCBhFqbiTsW0pGg6b69DlfOMPVYbcBxikwgsvvAB4tluA008/HfCUqCaz17SuWjhBlbY/ZLyiyt5vvvkmkH+V7Sc2UOTII48EKj/ySBadJ1JXOz1vPtO5Kvod1v/3m2++GfDcIMv7fI866ijAFLdhGIaRISpU3CLyCNAZWOacaxWzrg8wDNjOOfdLuK0f0BP4B7jEOfdWxnudIlr0VUsalcdnn30GeEnQ85UYP9+oHfCjjz7Kc08qRm35/nkITV+rXkFq19XyVbH4VVhFilvLuxUSscnPtPScFpTIlFeEho0/8MADUe1aiCNfAUj+kWKXLl2A6DkI8L4DGpimKRz8wUnqiTVq1Cig8EbcySjux4CjYxtFpDHQHpjva9sdOAVoGd7nPhHZOCM9NQzDMIAkFLdzbpKINI2z6g7gKuAVX9uxwDPOuXXAXBGZA7QBPkm/q+mjNq5kvAXUN7dYlbYSm1SnkNH0pOoRAJ5femyyoER88cUXkWUtw3XyySdHbaPeK1q6rJDQgrhaak5t0GrDf+KJJ1I+to5YAV599VWgbBpZnUvIl/eVqmzwEkXFon3U74b6oPsTcB1//PGAV0Q7iIq7DCLSFfjJOfd1zKqGwALf+4XhtnjH6CUipSJS6s9aZxiGYZRPpb1KRGRL4FrgqHir47TFlTjOuVHAKICSkpLyZVCaaGkptfclg5ZzKla0VJOiZbuCwPjx48ssayIxTe0ZG2WpiYf8KWufeuqpqG1+++03oLB93LVggipitdVqEYFUFLfOCWmqViirtNW2rl47uUaLhFx++eUJt7n++usBT2krGk167bXXRtq08IbOcd1///1AeknLMkkq7oDNgWbA1+GJq0bAFyLShpDCbuzbthFQGFdqGIZRRaj0g9s5Nw2oq+9FZB5Q4pz7RUReBZ4WkeFAA6AFMDlDfU0ZzTWRKOeEqgT9dYVo1VZMaCTcPvvsA3j3IUil2uKhNsqKbJVXXnllZNmvMMHLA6LKO0jsscceAPz73/+OtL388stJ7XvppZcCXjk7P/q90NJec+bMSaufqaLzV23atIm0rV69GvCiOjXvSCL8fdccK5qmdqeddgIKR3FXaOMWkTGEJhd3EZGFItIz0bbOuW+AZ4EZwJvAhc6
59ONqDcMwjAjJeJWcWsH6pjHvBwGJ09AZhmEYaVGlQ941+EKDBRIxYMAAIPmhY1VGJ51q1aoFeBM3FbnRVRVat24dWVY3MTWl3XDDDXnpUyrcddddgBfy3qhRIyA6PF+rHCUy/ahrYbzKQp98EvLw1f8dDbzJN/7vqQYbVWQiKe84mUh5mw0s5N0wDCNgVGnFrW5MdevWrWBLQ9EJWg11z1focq6pXbs24LkLgqe67rzzTsALyAkCGjClASQaLLTXXntFtvnvf/8LwIIFodCLGTNmAN5knlZ/jzfaGjFiBFA4SjteoJwmk4odKel3O/a6NLEUeO6iGmPy448/Zq6zGcAUt2EYRsCo0opbXZVUWWiinWOOOSZvfQoKmoinWOz+mmDIX+pMFWhsObAgocpbU92+/vrrkXWxSlRtwtWqVYvbrmHg4JV1KxTU1u6fo1D7vgbeKIkUdzw0CMsUt2EYhpEWVVpxK2qn8gcfGPFp0qQJAMuXLwfgrbcKJitvVvErNUXTtk6aNCnX3ck4OmrYe++9I239+/cHvP8LHZEqRx8dSgqqaQAKOeGaJrW65pprIm1a/ENt11q6LBlUpcemrS0UTHEbhmEEjKJQ3EbFqDeF2gWLLWPj3Llzy7QlWyw4CKg/8rRp0yJt8fyzg86XX34Zd7mqYYrbMAwjYJjiNgCYP39+1Hu/90ExcPHFF0e9GkYhY4rbMAwjYNiD2/h/e3cXIlUZx3H8+2MtSyPUzDJX0kIqk0qRsBciskhN9NZIEOoyyKIoRQi6Lnq56IWwFynRC7MSoVAs6CrTLM1Sc03RNUsjeqEglX5dnGd1GmZsJWaf57D/Dyw755wRvjs75+/MM7O7IYSaiaWSAJz+0faurvjbziGULh5xhxBCzRT5iHuw/ArRMxnst8Fg//ohboPB/vWfSTziDiGEmonBHUIINRODO4QQakYlrCNJOgb8AfyUu6WfRhOtnRCtnRGtndHp1sttX9zqQBGDG0DSVtvT//ua+UVrZ0RrZ0RrZ+RsjaWSEEKomRjcIYRQMyUN7ldzB5yFaO2MaO2MaO2MbK3FrHGHEELon5IecYcQQuiHIga3pFmS9kjqkbQkd08jSeMlfSxpl6SvJS1O+0dJ2ihpb/o8MncrgKQuSV9IWp+2i+wEkDRC0hpJu9Pte1OJvZIeSd/7nZJWSTqvpE5Jr0s6Kmlnw762fZKWpnNtj6S7M3c+nb7/OyS9K2lE7s52rQ3HHpNkSaNztWYf3JK6gBeB2cBk4F5Jk/NW/ctJ4FHb1wAzgAdT3xJgk+1JwKa0XYLFwK6G7VI7AV4APrR9NXA9VXdRvZLGAQ8B021PAbqABZTV+SYwq2lfy750310AXJv+zUvpHMzVuRGYYvs64FtgaQGd0LoVSeOBu4CDDfsGvDX74AZuBHpsf2f7OLAamJ+56RTbR2xvS5d/pxou46gaV6SrrQCy/wl5Sd3APcDyht3FdQJIuhC4DXgNwPZx279QZu8Q4HxJQ4BhwPcU1Gn7E+Dnpt3t+uYDq23/ZXs/0EN1DmbptL3B9sm0+SnQnbuzXWvyHPA40Pji4IC3ljC4xwGHGrZ7077iSJoATAU2A5fYPgLVcAfG5Cs75XmqO9XfDftK7AS4AjgGvJGWdpZLGk5hvbYPA89QPcI6AvxqewOFdbbQrq/k8+1+4IN0ubhOSfOAw7a3Nx0a8NYSBrda7CvurS6SLgDeAR62/VvunmaS5gJHbX+eu6WfhgDTgJdtT6X6lQclLeMAkNaG5wMTgcuA4ZIW5q36X4o83yQto1qWXNm3q8XVsnVKGgYsA55sdbjFvo62ljC4e4HxDdvdVE9FiyHpHKqhvdL22rT7R0lj0/GxwNFcfcktwDxJB6iWm+6Q9DbldfbpBXptb07ba6gGeWm9dwL7bR+zfQJYC9xMeZ3N2vUVd75JWgTMBe7z6fcnl9Z5JdV/3tvTOdYNbJN0KRlaSxjcW4BJkiZKOpdqkX9d5qZTJIl
qHXaX7WcbDq0DFqXLi4D3B7qtke2ltrttT6C6DT+yvZDCOvvY/gE4JOmqtGsm8A3l9R4EZkgalu4LM6le5yits1m7vnXAAklDJU0EJgGfZegDqneUAU8A82z/2XCoqE7bX9keY3tCOsd6gWnpfjzwrbazfwBzqF5R3gcsy93T1HYr1dOeHcCX6WMOcBHVq/V70+dRuVsbmm8H1qfLJXfeAGxNt+17wMgSe4GngN3ATuAtYGhJncAqqvX3E1QD5YEz9VE95d8H7AFmZ+7soVof7ju3Xsnd2a616fgBYHSu1vjJyRBCqJkSlkpCCCGchRjcIYRQMzG4QwihZmJwhxBCzcTgDiGEmonBHUIINRODO4QQaiYGdwgh1Mw/qh1ppwUJm5kAAAAASUVORK5CYII=\\n\",\n      \"text/plain\": [\n       \"<Figure size 432x432 with 1 Axes>\"\n      ]\n     },\n     \"metadata\": {\n      \"needs_background\": \"light\"\n     },\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"batch_size = 128\\n\",\n    \"\\n\",\n    \"# Images are usually in the [0., 1.] or [0, 255] range, Normalize transform will bring them into [-1, 1] range\\n\",\n    \"# It's one of those things somebody figured out experimentally that it works (without special theoretical arguments)\\n\",\n    \"# https://github.com/soumith/ganhacks <- you can find more of those hacks here\\n\",\n    \"transform = transforms.Compose([\\n\",\n    \"        transforms.ToTensor(),\\n\",\n    \"        transforms.Normalize((.5,), (.5,))  \\n\",\n    \"    ])\\n\",\n    \"\\n\",\n    \"# MNIST is a super simple, \\\"hello world\\\" dataset so it's included in PyTorch.\\n\",\n    \"# First time you run this it will download the MNIST dataset and store it in DATA_DIR_PATH\\n\",\n    \"# the 'transform' (defined above) will be applied to every single image\\n\",\n    \"mnist_dataset = datasets.MNIST(root=DATA_DIR_PATH, train=True, download=True, transform=transform)\\n\",\n    \"\\n\",\n    \"# Nice wrapper class helps us load images in batches (suitable for GPUs)\\n\",\n    \"mnist_data_loader = DataLoader(mnist_dataset, batch_size=batch_size, shuffle=True, drop_last=True)\\n\",\n    \"\\n\",\n    \"# Let's answer our questions\\n\",\n    \"\\n\",\n    \"# Q1: How many images do I have?\\n\",\n    \"print(f'Dataset size: {len(mnist_dataset)} 
images.')\\n\",\n    \"\\n\",\n    \"num_imgs_to_visualize = 25  # number of images we'll display\\n\",\n    \"batch = next(iter(mnist_data_loader))  # take a single batch from the dataset\\n\",\n    \"img_batch = batch[0]  # extract only images and ignore the labels (batch[1])\\n\",\n    \"img_batch_subset = img_batch[:num_imgs_to_visualize]  # extract only a subset of images\\n\",\n    \"\\n\",\n    \"# Q2: What's the shape of my image?\\n\",\n    \"# format is (B,C,H,W), B - number of images in batch, C - number of channels, H - height, W - width\\n\",\n    \"print(f'Image shape {img_batch_subset.shape[1:]}')  # we ignore shape[0] - number of imgs in batch.\\n\",\n    \"\\n\",\n    \"# Q3: What do my images look like?\\n\",\n    \"# Creates a 5x5 grid of images, normalize will bring images from [-1, 1] range back into [0, 1] for display\\n\",\n    \"# pad_value is 1. (white) because it's 0. (black) by default but since our background is also black,\\n\",\n    \"# we wouldn't see the grid pattern so I set it to 1.\\n\",\n    \"grid = make_grid(img_batch_subset, nrow=int(np.sqrt(num_imgs_to_visualize)), normalize=True, pad_value=1.)\\n\",\n    \"grid = np.moveaxis(grid.numpy(), 0, 2)  # from CHW -> HWC format, which is what matplotlib expects! Get used to this.\\n\",\n    \"plt.figure(figsize=(6, 6))\\n\",\n    \"plt.title(\\\"Samples from the MNIST dataset\\\")\\n\",\n    \"plt.imshow(grid)\\n\",\n    \"plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Understand your model (neural networks)!\\n\",\n    \"\\n\",\n    \"Let's define the generator and discriminator networks!\\n\",\n    \"\\n\",\n    \"The original paper used the maxout activation and dropout for regularization (you don't need to understand this). 
<br/>\\n\",\n    \"I'm using `LeakyReLU` instead and `batch normalization` which came after the original paper was published.\\n\",\n    \"\\n\",\n    \"Those design decisions are inspired by the DCGAN model which came later than the original GAN.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 4,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"# Size of the generator's input vector. Generator will eventually learn how to map these into meaningful images!\\n\",\n    \"LATENT_SPACE_DIM = 100\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# This one will produce a batch of those vectors\\n\",\n    \"def get_gaussian_latent_batch(batch_size, device):\\n\",\n    \"    return torch.randn((batch_size, LATENT_SPACE_DIM), device=device)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# It's cleaner if you define the block like this - bear with me\\n\",\n    \"def vanilla_block(in_feat, out_feat, normalize=True, activation=None):\\n\",\n    \"    layers = [nn.Linear(in_feat, out_feat)]\\n\",\n    \"    if normalize:\\n\",\n    \"        layers.append(nn.BatchNorm1d(out_feat))\\n\",\n    \"    # 0.2 was used in DCGAN, I experimented with other values like 0.5 didn't notice significant change\\n\",\n    \"    layers.append(nn.LeakyReLU(0.2) if activation is None else activation)\\n\",\n    \"    return layers\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"class GeneratorNet(torch.nn.Module):\\n\",\n    \"    \\\"\\\"\\\"Simple 4-layer MLP generative neural network.\\n\",\n    \"\\n\",\n    \"    By default it works for MNIST size images (28x28).\\n\",\n    \"\\n\",\n    \"    There are many ways you can construct generator to work on MNIST.\\n\",\n    \"    Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc.\\n\",\n    \"\\n\",\n    \"    It's generally an open-research question on how to evaluate GANs i.e. 
quantify that \\\"ok\\\" statement.\\n\",\n    \"\\n\",\n    \"    People tried to automate the task using IS (inception score, often used incorrectly), etc.\\n\",\n    \"    but so far it always ends up with some form of visual inspection (human in the loop).\\n\",\n    \"    \\n\",\n    \"    Fancy way of saying you'll have to take a look at the images from your generator and say hey this looks good!\\n\",\n    \"\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\\n\",\n    \"        super().__init__()\\n\",\n    \"        self.generated_img_shape = img_shape\\n\",\n    \"        num_neurons_per_layer = [LATENT_SPACE_DIM, 256, 512, 1024, img_shape[0] * img_shape[1]]\\n\",\n    \"\\n\",\n    \"        # Now you see why it's nice to define blocks - it's super concise!\\n\",\n    \"        # These are pretty much just linear layers followed by LeakyReLU and batch normalization\\n\",\n    \"        # Except for the last layer where we exclude batch normalization and we add Tanh (maps images into our [-1, 1] range!)\\n\",\n    \"        self.net = nn.Sequential(\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]),\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]),\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]),\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh())\\n\",\n    \"        )\\n\",\n    \"\\n\",\n    \"    def forward(self, latent_vector_batch):\\n\",\n    \"        img_batch_flattened = self.net(latent_vector_batch)\\n\",\n    \"        # just un-flatten using view into (N, 1, 28, 28) shape for MNIST\\n\",\n    \"        return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape)\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# You can interpret the output 
from the discriminator as a probability and the question it should\\n\",\n    \"# give an answer to is \\\"hey, is this image real?\\\". If it outputs 1. it's 100% sure it's real. 0.5 - 50% sure, etc.\\n\",\n    \"class DiscriminatorNet(torch.nn.Module):\\n\",\n    \"    \\\"\\\"\\\"Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes.\\n\",\n    \"\\n\",\n    \"    By default it works for MNIST size images (28x28).\\n\",\n    \"\\n\",\n    \"    Again, there are many ways you can construct a discriminator network that would work on MNIST.\\n\",\n    \"    You could use more or fewer layers, etc. Using normalization as in the DCGAN paper doesn't work well though.\\n\",\n    \"\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"\\n\",\n    \"    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\\n\",\n    \"        super().__init__()\\n\",\n    \"        num_neurons_per_layer = [img_shape[0] * img_shape[1], 512, 256, 1]\\n\",\n    \"\\n\",\n    \"        # The last layer is a Sigmoid function - basically the goal of the discriminator is to output 1.\\n\",\n    \"        # for real images and 0. 
for fake images and sigmoid is bounded between 0 and 1 so it's perfect.\\n\",\n    \"        self.net = nn.Sequential(\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False),\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False),\\n\",\n    \"            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid())\\n\",\n    \"        )\\n\",\n    \"\\n\",\n    \"    def forward(self, img_batch):\\n\",\n    \"        img_batch_flattened = img_batch.view(img_batch.shape[0], -1)  # flatten from (N,1,H,W) into (N, HxW)\\n\",\n    \"        return self.net(img_batch_flattened)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## GAN Training\\n\",\n    \"\\n\",\n    \"**Feel free to skip this entire section** if you just want to use the pre-trained model to generate some new images - which don't exist in the original MNIST dataset and that's the whole magic of GANs!\\n\",\n    \"\\n\",\n    \"Phew, so far we got familiar with the data and our models, awesome work! <br/>\\n\",\n    \"But brace yourselves, as this is arguably the hardest part. How do you actually train your GAN?\\n\",\n    \"\\n\",\n    \"Let's start with understanding the loss function! We'll be using the `BCE` (binary cross-entropy) loss, let's see why. <br/>\\n\",\n    \"\\n\",\n    \"If we input real images into the discriminator we expect it to output 1 (I'm 100% sure that this is a real image). <br/>\\n\",\n    \"The further away it is from 1 and the closer it is to 0 the more we should penalize it, as it is making a wrong prediction. 
<br/>\\n\",\n    \"So this is how the loss should look like in that case (it's basically `-log(x)`):\\n\",\n    \"\\n\",\n    \"<img src=\\\"data/examples/jupyter/cross_entropy_loss.png\\\" alt=\\\"BCE loss when true label = 1.\\\" align=\\\"left\\\"/> <br/>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"BCE loss basically becomes `-log(x)` when it's target (true) label is 1. <br/>\\n\",\n    \"\\n\",\n    \"Similarly for fake images, the target (true) label is 0 (as we want the discriminator to output 0 for fake images) and we want to penalize the generator if it starts outputing values close to 1. So we basically want to mirror the above loss function and that's just: `-log(1-x)`. <br/>\\n\",\n    \"\\n\",\n    \"BCE loss basically becomes `-log(1-x)` when it's target (true) label is 0. That's why it perfectly fits the task! <br/>\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"### Training utility functions\\n\",\n    \"Let's define some useful utility functions:\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"# Tried SGD for the discriminator, had problems tweaking it - Adam simply works nicely but default lr 1e-3 won't work!\\n\",\n    \"# I had to train discriminator more (4 to 1 schedule worked) to get it working with default lr, still got worse results.\\n\",\n    \"# 0.0002 and 0.5, 0.999 are from the DCGAN paper it works here nicely!\\n\",\n    \"def get_optimizers(d_net, g_net):\\n\",\n    \"    d_opt = Adam(d_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\\n\",\n    \"    g_opt = Adam(g_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\\n\",\n    \"    return d_opt, g_opt\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# It's useful to add some metadata when saving your model, it should probably make sense to also add the number of epochs\\n\",\n    \"def get_training_state(generator_net, gan_type_name):\\n\",\n    \"    
training_state = {\\n\",\n    \"        \\\"commit_hash\\\": git.Repo(search_parent_directories=True).head.object.hexsha,\\n\",\n    \"        \\\"state_dict\\\": generator_net.state_dict(),\\n\",\n    \"        \\\"gan_type\\\": gan_type_name\\n\",\n    \"    }\\n\",\n    \"    return training_state\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Useful when you have multiple models\\n\",\n    \"class GANType(enum.Enum):\\n\",\n    \"    VANILLA = 0\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# Feel free to ignore this one, it's not important for GAN training.\\n\",\n    \"# It just figures out a good binary name so as not to overwrite your older models.\\n\",\n    \"def get_available_binary_name(gan_type_enum=GANType.VANILLA):\\n\",\n    \"    def valid_binary_name(binary_name):\\n\",\n    \"        # First time you see a raw f-string? Don't worry, the only trick is to double the brackets.\\n\",\n    \"        pattern = re.compile(rf'{gan_type_enum.name}_[0-9]{{6}}\\\\.pth')\\n\",\n    \"        return re.fullmatch(pattern, binary_name) is not None\\n\",\n    \"\\n\",\n    \"    prefix = gan_type_enum.name\\n\",\n    \"    # Just list the existing binaries so that we don't overwrite them but write to a new one\\n\",\n    \"    valid_binary_names = list(filter(valid_binary_name, os.listdir(BINARIES_PATH)))\\n\",\n    \"    if len(valid_binary_names) > 0:\\n\",\n    \"        last_binary_name = sorted(valid_binary_names)[-1]\\n\",\n    \"        new_suffix = int(last_binary_name.split('.')[0][-6:]) + 1  # increment by 1\\n\",\n    \"        return f'{prefix}_{str(new_suffix).zfill(6)}.pth'\\n\",\n    \"    else:\\n\",\n    \"        return f'{prefix}_000000.pth'\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Tracking your model's progress during training\\n\",\n    \"You can track how your GAN training is progressing through:\\n\",\n    \"1. Console output\\n\",\n    \"2. 
Images dumped to: `data/debug_imagery`\\n\",\n    \"3. Tensorboard, just type `tensorboard --logdir=runs` into your Anaconda console\\n\",\n    \"\\n\",\n    \"Note: to use tensorboard, first navigate to the project root via `cd path_to_root` and open `http://localhost:6006/` in your browser\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"####################### constants #####################\\n\",\n    \"# Hopefully you have some GPU ^^ (device needs to be defined before we sample the reference noise below)\\n\",\n    \"device = torch.device(\\\"cuda\\\" if torch.cuda.is_available() else \\\"cpu\\\")\\n\",\n    \"\\n\",\n    \"# For logging purposes\\n\",\n    \"ref_batch_size = 16\\n\",\n    \"ref_noise_batch = get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during training on fixed noise vectors\\n\",\n    \"\\n\",\n    \"discriminator_loss_values = []\\n\",\n    \"generator_loss_values = []\\n\",\n    \"\\n\",\n    \"img_cnt = 0\\n\",\n    \"\\n\",\n    \"enable_tensorboard = True\\n\",\n    \"console_log_freq = 50\\n\",\n    \"debug_imagery_log_freq = 50\\n\",\n    \"checkpoint_freq = 2\\n\",\n    \"\\n\",\n    \"# For training purposes\\n\",\n    \"num_epochs = 10  # feel free to increase this\\n\",\n    \"\\n\",\n    \"########################################################\\n\",\n    \"\\n\",\n    \"writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default\\n\",\n    \"\\n\",\n    \"# Prepare feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights\\n\",\n    \"discriminator_net = DiscriminatorNet().train().to(device)\\n\",\n    \"generator_net = GeneratorNet().train().to(device)\\n\",\n    \"\\n\",\n    \"discriminator_opt, generator_opt = get_optimizers(discriminator_net, generator_net)\\n\",\n    \"\\n\",\n    \"# 1s (real_images_gt) will configure BCELoss into -log(x) (check out the loss image above, that's -log(x))\\n\",\n    \"# 
whereas 0s (fake_images_gt) will configure it to -log(1-x)\\n\",\n    \"# So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!\\n\",\n    \"adversarial_loss = nn.BCELoss()\\n\",\n    \"real_images_gt = torch.ones((batch_size, 1), device=device)\\n\",\n    \"fake_images_gt = torch.zeros((batch_size, 1), device=device)\\n\",\n    \"\\n\",\n    \"ts = time.time()  # start measuring time\\n\",\n    \"\\n\",\n    \"# GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse!\\n\",\n    \"# A mode collapse, for example, is when your generator learns to only generate a single digit instead of all 10 digits!\\n\",\n    \"for epoch in range(num_epochs):\\n\",\n    \"    for batch_idx, (real_images, _) in enumerate(mnist_data_loader):\\n\",\n    \"\\n\",\n    \"        real_images = real_images.to(device)  # Place imagery on GPU (if present)\\n\",\n    \"\\n\",\n    \"        #\\n\",\n    \"        # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V\\n\",\n    \"        # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images\\n\",\n    \"        #\\n\",\n    \"\\n\",\n    \"        # Zero out .grad variables in discriminator network,\\n\",\n    \"        # otherwise we would have corrupt results - leftover gradients from the previous training iteration\\n\",\n    \"        discriminator_opt.zero_grad()\\n\",\n    \"\\n\",\n    \"        # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible\\n\",\n    \"        real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)\\n\",\n    \"\\n\",\n    \"        # G(z) | G == generator_net and z == get_gaussian_latent_batch(batch_size, device)\\n\",\n    \"        fake_images = generator_net(get_gaussian_latent_batch(batch_size, device))\\n\",\n    \"        # D(G(z)), we call detach() 
so that we don't calculate gradients for the generator during backward()\\n\",\n    \"        fake_images_predictions = discriminator_net(fake_images.detach())\\n\",\n    \"        # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible\\n\",\n    \"        fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)\\n\",\n    \"\\n\",\n    \"        discriminator_loss = real_discriminator_loss + fake_discriminator_loss\\n\",\n    \"        discriminator_loss.backward()  # this will populate .grad vars in the discriminator net\\n\",\n    \"        discriminator_opt.step()  # perform D weights update according to optimizer's strategy\\n\",\n    \"\\n\",\n    \"        #\\n\",\n    \"        # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)\\n\",\n    \"        # The original expression (V1) had problems with diminishing gradients for G when D is too good.\\n\",\n    \"        #\\n\",\n    \"\\n\",\n    \"        # If you want to cause mode collapse, probably the easiest way is to add \\\"for i in range(n)\\\"\\n\",\n    \"        # here (simply train G more frequently than D), n = 10 worked for me, other values will also work - experiment.\\n\",\n    \"\\n\",\n    \"        # Zero out .grad variables in the generator network (otherwise we would have corrupt results)\\n\",\n    \"        generator_opt.zero_grad()\\n\",\n    \"\\n\",\n    \"        # D(G(z)) (see above for explanations)\\n\",\n    \"        generated_images_predictions = discriminator_net(generator_net(get_gaussian_latent_batch(batch_size, device)))\\n\",\n    \"        # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1\\n\",\n    \"        # i.e. 
we're tricking D into thinking that these generated images are real!\\n\",\n    \"        generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)\\n\",\n    \"\\n\",\n    \"        generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)\\n\",\n    \"        generator_opt.step()  # perform G weights update according to optimizer's strategy\\n\",\n    \"\\n\",\n    \"        #\\n\",\n    \"        # Logging and checkpoint creation\\n\",\n    \"        #\\n\",\n    \"\\n\",\n    \"        generator_loss_values.append(generator_loss.item())\\n\",\n    \"        discriminator_loss_values.append(discriminator_loss.item())\\n\",\n    \"        \\n\",\n    \"        if enable_tensorboard:\\n\",\n    \"            global_batch_idx = len(mnist_data_loader) * epoch + batch_idx + 1\\n\",\n    \"            writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, global_batch_idx)\\n\",\n    \"            # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)\\n\",\n    \"            if batch_idx % debug_imagery_log_freq == 0:\\n\",\n    \"                with torch.no_grad():\\n\",\n    \"                    log_generated_images = generator_net(ref_noise_batch)\\n\",\n    \"                    log_generated_images = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)\\n\",\n    \"                    intermediate_imagery_grid = make_grid(log_generated_images, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\\n\",\n    \"                    writer.add_image('intermediate generated imagery', intermediate_imagery_grid, global_batch_idx)\\n\",\n    \"\\n\",\n    \"        if batch_idx % console_log_freq == 0:\\n\",\n    \"            prefix = 'GAN training: time elapsed'\\n\",\n    \"            print(f'{prefix} = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 
1}/{len(mnist_data_loader)}]')\\n\",\n    \"\\n\",\n    \"        # Save intermediate generator images (more convenient like this than through tensorboard)\\n\",\n    \"        if batch_idx % debug_imagery_log_freq == 0:\\n\",\n    \"            with torch.no_grad():\\n\",\n    \"                log_generated_images = generator_net(ref_noise_batch)\\n\",\n    \"                log_generated_images_resized = nn.Upsample(scale_factor=2.5, mode='nearest')(log_generated_images)\\n\",\n    \"                out_path = os.path.join(DEBUG_IMAGERY_PATH, f'{str(img_cnt).zfill(6)}.jpg')\\n\",\n    \"                save_image(log_generated_images_resized, out_path, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\\n\",\n    \"                img_cnt += 1\\n\",\n    \"\\n\",\n    \"        # Save generator checkpoint\\n\",\n    \"        if (epoch + 1) % checkpoint_freq == 0 and batch_idx == 0:\\n\",\n    \"            ckpt_model_name = f\\\"vanilla_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth\\\"\\n\",\n    \"            torch.save(get_training_state(generator_net, GANType.VANILLA.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))\\n\",\n    \"\\n\",\n    \"# Save the latest generator in the binaries directory\\n\",\n    \"torch.save(get_training_state(generator_net, GANType.VANILLA.name), os.path.join(BINARIES_PATH, get_available_binary_name()))\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Generate images with your vanilla GAN\\n\",\n    \"\\n\",\n    \"Nice, finally we can use the generator we trained to generate some MNIST-like imagery!\\n\",\n    \"\\n\",\n    \"Let's define a couple of utility functions which will make things cleaner!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"def postprocess_generated_img(generated_img_tensor):\\n\",\n    \"    assert isinstance(generated_img_tensor, torch.Tensor), 
f'Expected PyTorch tensor but got {type(generated_img_tensor)}.'\\n\",\n    \"\\n\",\n    \"    # Move the tensor from GPU to CPU, convert to numpy array, extract 0th batch, move the image channel\\n\",\n    \"    # from 0th to 2nd position (CHW -> HWC)\\n\",\n    \"    generated_img = np.moveaxis(generated_img_tensor.to('cpu').numpy()[0], 0, 2)\\n\",\n    \"\\n\",\n    \"    # Since MNIST images are grayscale (1-channel only) repeat 3 times to get an RGB image\\n\",\n    \"    generated_img = np.repeat(generated_img, 3, axis=2)\\n\",\n    \"\\n\",\n    \"    # Imagery is in the range [-1, 1] (generator has tanh as the output activation), move it into the [0, 1] range\\n\",\n    \"    generated_img -= np.min(generated_img)\\n\",\n    \"    generated_img /= np.max(generated_img)\\n\",\n    \"\\n\",\n    \"    return generated_img\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# This function will generate a random latent vector, pass it to the generator to generate a new image,\\n\",\n    \"# and finally post-process and return that image\\n\",\n    \"def generate_from_random_latent_vector(generator):\\n\",\n    \"    with torch.no_grad():  # Tells PyTorch not to compute gradients, which would have a huge memory footprint\\n\",\n    \"        \\n\",\n    \"        # Generate a single random (latent) vector\\n\",\n    \"        latent_vector = get_gaussian_latent_batch(1, next(generator.parameters()).device)\\n\",\n    \"        \\n\",\n    \"        # Post-process generator output (as it's in the [-1, 1] range, remember?)\\n\",\n    \"        generated_img = postprocess_generated_img(generator(latent_vector))\\n\",\n    \"\\n\",\n    \"    return generated_img\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"# You don't need to get deep into this one - irrelevant for GANs - it will just figure out a good name for your generated\\n\",\n    \"# images so that you don't overwrite the old ones. 
They'll be stored with xxxxxx.jpg naming scheme.\\n\",\n    \"def get_available_file_name(input_dir): \\n\",\n    \"    def valid_frame_name(str):\\n\",\n    \"        pattern = re.compile(r'[0-9]{6}\\\\.jpg')  # regex, examples it covers: 000000.jpg or 923492.jpg, etc.\\n\",\n    \"        return re.fullmatch(pattern, str) is not None\\n\",\n    \"\\n\",\n    \"    # Filter out only images with xxxxxx.jpg format from the input_dir\\n\",\n    \"    valid_frames = list(filter(valid_frame_name, os.listdir(input_dir)))\\n\",\n    \"    if len(valid_frames) > 0:\\n\",\n    \"        # Images are saved in the <xxxxxx>.jpg format we find the biggest such <xxxxxx> number and increment by 1\\n\",\n    \"        last_img_name = sorted(valid_frames)[-1]\\n\",\n    \"        new_prefix = int(last_img_name.split('.')[0]) + 1  # increment by 1\\n\",\n    \"        return f'{str(new_prefix).zfill(6)}.jpg'\\n\",\n    \"    else:\\n\",\n    \"        return '000000.jpg'\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def save_and_maybe_display_image(dump_dir, dump_img, out_res=(256, 256), should_display=False):\\n\",\n    \"    assert isinstance(dump_img, np.ndarray), f'Expected numpy array got {type(dump_img)}.'\\n\",\n    \"\\n\",\n    \"    # step1: get next valid image name\\n\",\n    \"    dump_img_name = get_available_file_name(dump_dir)\\n\",\n    \"\\n\",\n    \"    # step2: convert to uint8 format <- OpenCV expects it otherwise your image will be completely black. 
Don't ask...\\n\",\n    \"    if dump_img.dtype != np.uint8:\\n\",\n    \"        dump_img = (dump_img*255).astype(np.uint8)\\n\",\n    \"\\n\",\n    \"    # step3: write image to the file system (::-1 because opencv expects BGR (and not RGB) format...)\\n\",\n    \"    cv.imwrite(os.path.join(dump_dir, dump_img_name), cv.resize(dump_img[:, :, ::-1], out_res, interpolation=cv.INTER_NEAREST))  \\n\",\n    \"\\n\",\n    \"    # step4: maybe display part of the function\\n\",\n    \"    if should_display:\\n\",\n    \"        plt.imshow(dump_img)\\n\",\n    \"        plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### We're now ready to generate some new digit images!\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"metadata\": {},\n   \"outputs\": [\n    {\n     \"name\": \"stdout\",\n     \"output_type\": \"stream\",\n     \"text\": [\n      \"Model states contains this data: dict_keys(['commit_hash', 'state_dict', 'gan_type'])\\n\",\n      \"Using VANILLA GAN!\\n\",\n      \"Generating new MNIST-like images!\\n\"\n     ]\n    },\n    {\n     \"data\": {\n      \"image/png\": 
\"iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAQ30lEQVR4nO3db4xV9Z3H8c+XYUZkQIVFcALjgpUHiwTtCoSEZlNsllh9ANXUaMwGskT6oCSt4cEalqTGjQnZbIt9YBqniyndFJsmSsTErEXjnyUxzQCyCIsurGD5Mw4oQgGBgeG7D+awO8U5v99wz733HPi9X8nkztzvPff8OMNnzr33e875mbsLwPVvRNkDANAchB1IBGEHEkHYgUQQdiARI5u5MjMr7aN/MwvW6UrgeuHuQ/5nLxR2M7tf0s8ltUj6V3dfU+T5YlpaWnJr/f39wWVHjRoVrJ89ezZYD/2x4A9FvjK324gR+S9cL126FFx25MhwNC5evFjTmIbz/EWfO0/NL+PNrEXS85K+K2mGpMfMbEa9Bgagvoq8Z58raZ+7f+LufZJ+K2lRfYYFoN6KhH2ypIODfj6U3fdnzGy5mW01s60F1gWgoCLv2Yd6M/a1N2Hu3iWpSyr3AzogdUX27IckdQ76eYqkI8WGA6BRioS9W9J0M5tmZm2SHpW0qT7DAlBvNb+Md/eLZrZC0hsaaL296O676zayIcTaayGx1lqsNXfu3Lma1x0zbty4YP3LL79s2Lpjih6fUGbrLdZeC4m1v4q25hrVXgsp1Gd399clvV6nsQBoIA6XBRJB2IFEEHYgEYQdSARhBxJB2IFEWDNPzyzzcNnQ6Y5SvJ9cpMff6HPpy+xlt7W1Bet9fX0NXX9VLViwIFh/++23G7buvPPZ2bMDiSDsQCIIO5AIwg4kgrADiSDsQCKa3nprVJuoypeKrvLY0BiNvjptCK03IHGEHUgEYQcSQdiBRBB2IBGEHUgEYQcS0dQpmxupzF51a2trsH7hwoUmjQRVETu2ogzs2YFEEHYgEYQdSARhBxJB2IFEEHYgEYQdSEQyl5KOGTNmTLB++vTp3FrsMtWxPnzR38HSpUtrXnds7GfOnAnW9+7dG6yvXLkytzZ//vzgsrFzvvfv3x+sP/vss7m13t7e4LLbtm0L1ov+zm644Ybc2vnz5ws9d9757IUOqjGzA5JOSeqXdNHdZxd5PgCNU48j6Ba4++d1eB4ADcR7diARRcPukn5vZtvMbPlQDzCz5Wa21cy2FlwXgAKKvoyf7+5HzGyipM1m9pG7vzf4Ae7eJalLqvYHdMD1rtCe3d2PZLdHJW2UNLcegwJQfzWH3czazWzs5e8lLZS0q14DA1BfRV7GT5K0MTtvd6SkDe7+73UZVQlCffSYS5cuBeuxvmms1z1x4sRgvbOzM7e2ePHi4LIdHR3Beuz4g9iUzaFtE+tVx64DMHPmzGD90Ucfza1t3LgxuOz27duD9aJ99tD/iTlz5gSX7e7urmmdNYfd3T+RdHetywNoLlpvQCIIO5AIwg4kgrADiSDsQCI4xbUCYu2tJUuWBOurV6/Ord1yyy3BZfv6+oL1mLFjx9a8bH9/f7AeG1tsWuTQ6bmPPPJIcNk333wzWK8ypmwGEkfYgUQQdiARhB1IBGEHEkHYgUQQdiAR182UzVUW6wc//vjjwfqaNWuC9dGjR+fWYqffxi4V3d7eHqzHjtMITV187ty54LInTpwI1m+88cZgfefOnbm1kydPBpeNTblc5hThtWLPDiSCsAOJIOxAIgg7kAjCDiSCsAOJIOxAIuiz10HsUtCxSwOvXbs2WA9N7yuFpzY+ePBgcNnnnnsuWG9paQnWFyxYEKzv2bOnppokzZ0bnnNk4cKFwfqGDRtya7Gppsvso0+YMCFY//zz2uZRZc8OJIKwA4kg7EAiCDuQCMIOJIKwA4kg7EAirpvrxpd5/nHs2unr1q0L1h
966KFC6+/t7c2tLVq0KLhsrNfd2toarH/11VfBeugYgNjxAzNmzAjWe3p6gvXPPvsstxY7z79MseM2YmOv+brxZvaimR01s12D7htvZpvNbG92Oy72PADKNZyX8b+SdP8V9z0l6S13ny7prexnABUWDbu7vyfp+BV3L5K0Pvt+vaTFdR4XgDqr9dj4Se7eI0nu3mNmE/MeaGbLJS2vcT0A6qThJ8K4e5ekLomJHYEy1dp66zWzDknKbo/Wb0gAGqHWsG+SdHke4SWSXq3PcAA0SvRlvJm9JOnbkiaY2SFJP5G0RtLvzGyZpD9K+v5wVxjqIcb6h6FeepnnH997773B+oMPPhisx44RiG2X/fv359ZCfW5JmjlzZrA+e/bsYH3Tpk3B+uHDh3Nr58+fDy770UcfBeuxHn+Vr+1+22235dZCxwcUEQ27uz+WU/pOnccCoIE4XBZIBGEHEkHYgUQQdiARhB1IRFNPcW1vb/e77rort97d3d20sdTTqFGjgvXjx688teDqlo8Jteb6+/uDy8amk4617t5///1gfdmyZbm1UFtOirfmqtxaiwlt99g2j6n5FFcA1wfCDiSCsAOJIOxAIgg7kAjCDiSCsAOJuG4uJT2MdQfrRbZD7Lm3bNkSrM+bN6/Q8xdRdLvE6qdOncqtLV26NLjsa6+9FqzHjiGosiKnesfQZwcSR9iBRBB2IBGEHUgEYQcSQdiBRBB2IBGV6rOXOe1yI82aNStYf+edd4L1m2++OVhvZB++kS5cuBCsx44/+OCDD+o5nOsGfXYgcYQdSARhBxJB2IFEEHYgEYQdSARhBxJRqT57qtra2oL1IteVnzNnTrA+efLkYD3W43/yySeD9dtvvz23FjqnW5LOnDkTrN9xxx3B+rFjx4L1MrW0tOTWip6nX3Of3cxeNLOjZrZr0H1Pm9lhM9uRfT1QaHQAGm44L+N/Jen+Ie5f6+73ZF+v13dYAOotGnZ3f09SeP4iAJVX5AO6FWa2M3uZPy7vQWa23My2mtnWAusCUFCtYf+FpG9IukdSj6Sf5j3Q3bvcfba7z65xXQDqoKawu3uvu/e7+yVJv5Q0t77DAlBvNYXdzDoG/fg9SbvyHgugGqJ9djN7SdK3JU2Q1CvpJ9nP90hySQck/cDde6Iro89+zYmdKz9+/Phgvbu7O7c2derU4LKx66fffffdwfru3buD9TKFtmvRY1/y+uz5M8L//4KPDXH3ukKjAdB0HC4LJIKwA4kg7EAiCDuQCMIOJCL6aXyVjB07NrcWmhoYtYu1gU6cOBGsb9iwIbe2atWq4LKxtt+UKVOC9Sq33kaOzI9e7BTXWqd0Zs8OJIKwA4kg7EAiCDuQCMIOJIKwA4kg7EAirqk+O7306on1fI8cOdKwdZ89e7Zhz91osemqG4E9O5AIwg4kgrADiSDsQCIIO5AIwg4kgrADibim+uzXqth52bF67JzyZk67faXYdNIrVqyo+blj/67YZayrrMilpGtdlj07kAjCDiSCsAOJIOxAIgg7kAjCDiSCsAOJoM9eB62trcH6ww8/HKx3dnYWWv/zzz+fW+vr6wsuG+vxx/5t9913X7Aem5Y55OTJk8H6u+++W/Nzly3UDy963EWe6J7dzDrN7G0z22Nmu83sR9n9481ss5ntzW7H1TQCAE0xnJfxFyWtdPe/kjRP0g/NbIakpyS95e7TJb2V/QygoqJhd/ced9+efX9K0h5JkyUtkrQ+e9h6SYsbNUgAxV3Ve3Yzmyrpm5L+IGmSu/dIA38QzGxizjLLJS0vNkwARQ077GY2RtLLkn7s7n+KfYhwmbt3SerKnqO8MzaAxA2r9WZmrRoI+m/c/ZXs7l4z68jqHZKONmaIAOohume3gV34Okl73P1ng0qbJC2RtCa7fbXoYO68885gfd++fUVX0RDt7e3B+gsvvFBo+VirJdT++vjjj4PLxi71HGsbTps2LVgPTU188eLF4LLPPPNMsH4tX1o89D
s/c+ZMQ9Y5nJfx8yX9naQPzWxHdt8qDYT8d2a2TNIfJX2/ISMEUBfRsLv7Fkl5b9C/U9/hAGgUDpcFEkHYgUQQdiARhB1IBGEHEmHNvAzx9XoEXewU1S1bthRaPiY0bXJsSuURI8J/74uebhla/xdffBFcdt68ecH6p59+GqxXWVtbW24tNp3zMC4tPuQvjT07kAjCDiSCsAOJIOxAIgg7kAjCDiSCsAOJ4FLSdXDw4MFgfc6cOcH6gQMHgvXYtMihXnmsj17UcK9YNJT+/v5gPXYZ7GtZ6N9WZJuGsGcHEkHYgUQQdiARhB1IBGEHEkHYgUQQdiAR18357I2a5rYZbrrppmB9+vTpwfoTTzyRW5s1a1Zw2di1+s+fPx+snzt3Llh/4403cmurV68OLhu7LnysT99ILS0twXqZY+N8diBxhB1IBGEHEkHYgUQQdiARhB1IBGEHEhHts5tZp6RfS7pN0iVJXe7+czN7WtITko5lD13l7q9Hnqu6zW6gjkLzr0vF5mAPHVPi7rl99uFcvOKipJXuvt3MxkraZmabs9pad/+Xqx4tgKYbzvzsPZJ6su9PmdkeSZMbPTAA9XVV79nNbKqkb0r6Q3bXCjPbaWYvmtm4nGWWm9lWM9taaKQAChn2sfFmNkbSu5KedfdXzGySpM8luaR/ktTh7n8feQ7esyMJVXzPPqw9u5m1SnpZ0m/c/ZXsSXvdvd/dL0n6paS5Vz1qAE0TDbsN/BlZJ2mPu/9s0P0dgx72PUm76j88APUynNbbtyT9h6QPNdB6k6RVkh6TdI8GXsYfkPSD7MO80HMFV3brrbcGx3Ls2LFgPbLuYL3Kp8AiLbHLf8em4a659ebuWyQNtXCwpw6gWjiCDkgEYQcSQdiBRBB2IBGEHUgEYQcS0fQpm0P97lh/MbRs0Uv7FunDjxwZ3oyxdRft8ccOnwwp85LIsamoY5epjmlra8utFZ0OevTo0cH62bNng/XQ7yXWR68Ve3YgEYQdSARhBxJB2IFEEHYgEYQdSARhBxLR7Cmbj0n6dNBdEzRwaasqqurYqjouibHVqp5j+0t3H/LCEE0N+9dWbrbV3WeXNoCAqo6tquOSGFutmjU2XsYDiSDsQCLKDntXyesPqerYqjouibHVqiljK/U9O4DmKXvPDqBJCDuQiFLCbmb3m9nHZrbPzJ4qYwx5zOyAmX1oZjvKnp8um0PvqJntGnTfeDPbbGZ7s9sh59graWxPm9nhbNvtMLMHShpbp5m9bWZ7zGy3mf0ou7/UbRcYV1O2W9Pfs5tZi6T/lvS3kg5J6pb0mLv/V1MHksPMDkia7e6lH4BhZn8j6bSkX7v7zOy+f5Z03N3XZH8ox7n7P1RkbE9LOl32NN7ZbEUdg6cZl7RY0lKVuO0C43pETdhuZezZ50ra5+6fuHufpN9KWlTCOCrP3d+TdPyKuxdJWp99v14D/1maLmdsleDuPe6+Pfv+lKTL04yXuu0C42qKMsI+WdLBQT8fUrXme3dJvzezbWa2vOzBDGHS5Wm2stuJJY/nStFpvJvpimnGK7Ptapn+vKgywj7UBdOq1P+b7+5/Lem7kn6YvVzF8PxC0jc0MAdgj6SfljmYbJrxlyX92N3/VOZYBhtiXE3ZbmWE/ZCkzkE/T5F0pIRxDMndj2S3RyVtVPWmou69PINudnu05PH8nypN4z3UNOOqwLYrc/rzMsLeLWm6mU0zszZJj0raVMI4vsbM2rMPTmRm7ZIWqnpTUW+StCT7fomkV0scy5+pyjTeedOMq+RtV/r05+7e9C9JD2jgE/n/kfSPZYwhZ1x3SPrP7Gt32WOT9JIGXtZd0MAromWS/kLSW5L2ZrfjKzS2f9PA1N47NRCsjpLG9i0NvDXcKWlH9vVA2dsuMK6mbDcOlwUSwRF0QCIIO5AIwg4kgrADiSDsQCIIO5AIwg4k4n8BIFvhXuFW7T4AAAAASUVORK5CYII=\\n\",\n      
\"text/plain\": [\n       \"<Figure size 432x288 with 1 Axes>\"\n      ]\n     },\n     \"metadata\": {\n      \"needs_background\": \"light\"\n     },\n     \"output_type\": \"display_data\"\n    }\n   ],\n   \"source\": [\n    \"# VANILLA_000000.pth is the model I pretrained for you, feel free to change it if you trained your own model (last section)!\\n\",\n    \"model_path = os.path.join(BINARIES_PATH, 'VANILLA_000000.pth')\\n\",\n    \"assert os.path.exists(model_path), f'Could not find the model {model_path}. You first need to train your generator.'\\n\",\n    \"\\n\",\n    \"# Hopefully you have some GPU ^^\\n\",\n    \"device = torch.device(\\\"cuda\\\" if torch.cuda.is_available() else \\\"cpu\\\")\\n\",\n    \"\\n\",\n    \"# Let's load the model, this is a dictionary containing model weights but also some metadata\\n\",\n    \"# commit_hash - simply tells me which version of my code generated this model (hey, you have to learn git!)\\n\",\n    \"# gan_type - this one is \\\"VANILLA\\\" but I also have \\\"DCGAN\\\" and \\\"cGAN\\\" models\\n\",\n    \"# state_dict - contains the actual neural network weights\\n\",\n    \"model_state = torch.load(model_path)\\n\",\n    \"print(f'Model states contains this data: {model_state.keys()}')\\n\",\n    \"\\n\",\n    \"gan_type = model_state[\\\"gan_type\\\"]\\n\",\n    \"print(f'Using {gan_type} GAN!')\\n\",\n    \"\\n\",\n    \"# Let's instantiate a generator net and place it on GPU (if you have one)\\n\",\n    \"generator = GeneratorNet().to(device)\\n\",\n    \"# Load the weights, strict=True just makes sure that the architecture corresponds to the weights 100%\\n\",\n    \"generator.load_state_dict(model_state[\\\"state_dict\\\"], strict=True)\\n\",\n    \"generator.eval()  # puts some layers, like batch norm, into a good state so the net is ready for inference\\n\",\n    \"\\n\",\n    \"generated_imgs_path = os.path.join(DATA_DIR_PATH, 'generated_imagery')  # this is where we'll dump 
images\\n\",\n    \"os.makedirs(generated_imgs_path, exist_ok=True)\\n\",\n    \"\\n\",\n    \"#\\n\",\n    \"# This is where the magic happens!\\n\",\n    \"#\\n\",\n    \"\\n\",\n    \"print('Generating new MNIST-like images!')\\n\",\n    \"generated_img = generate_from_random_latent_vector(generator)\\n\",\n    \"save_and_maybe_display_image(generated_imgs_path, generated_img, should_display=True)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# I'd love to hear your feedback\\n\",\n    \"\\n\",\n    \"If you found this notebook useful and would like me to add the same for cGAN and DCGAN please [open an issue](https://github.com/gordicaleksa/pytorch-gans/issues/new). <br/>\\n\",\n    \"\\n\",\n    \"I'm super not aware of how useful people find this, I usually do stuff through my IDE.\\n\",\n    \"\\n\",\n    \"# Connect with me\\n\",\n    \"\\n\",\n    \"I share lots of useful (I hope so at least!) content on LinkedIn, Twitter, YouTube and Medium. <br/>\\n\",\n    \"So feel free to connect with me there:\\n\",\n    \"1. My [LinkedIn](https://www.linkedin.com/in/aleksagordic) and [Twitter](https://twitter.com/gordic_aleksa) profiles\\n\",\n    \"2. My YouTube channel - [The AI Epiphany](https://www.youtube.com/c/TheAiEpiphany)\\n\",\n    \"3. My [Medium](https://gordicaleksa.medium.com/) profile\\n\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.8.3\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "environment.yml",
    "content": "name: pytorch-gans\nchannels:\n  - defaults\n  - pytorch\ndependencies:\n  - python==3.8.3\n  - pip==20.0.2\n  - matplotlib==3.1.3\n  - pytorch==1.5.0\n  - torchvision==0.6.0\n  - pip:\n    - numpy==1.18.4\n    - opencv-python==4.2.0.32\n    - GitPython==3.1.2\n    - tensorboard==2.2.2\n    - imageio==2.9.0\n    - jupyter==1.0.0\n\n"
  },
  {
    "path": "generate_imagery.py",
    "content": "import os\nimport shutil\nimport argparse\n\n\nimport torch\nfrom torch import nn\nfrom torchvision.utils import save_image, make_grid\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport cv2 as cv\n\n\nimport utils.utils as utils\nfrom utils.constants import *\n\n\nclass GenerationMode(enum.Enum):\n    SINGLE_IMAGE = 0,\n    INTERPOLATION = 1,\n    VECTOR_ARITHMETIC = 2\n\n\ndef postprocess_generated_img(generated_img_tensor):\n    assert isinstance(generated_img_tensor, torch.Tensor), f'Expected PyTorch tensor but got {type(generated_img_tensor)}.'\n\n    # Move the tensor from GPU to CPU, convert to numpy array, extract 0th batch, move the image channel\n    # from 0th to 2nd position (CHW -> HWC)\n    generated_img = np.moveaxis(generated_img_tensor.to('cpu').numpy()[0], 0, 2)\n\n    # If grayscale image repeat 3 times to get RGB image (for generators trained on MNIST)\n    if generated_img.shape[2] == 1:\n        generated_img = np.repeat(generated_img,  3, axis=2)\n\n    # Imagery is in the range [-1, 1] (generator has tanh as the output activation) move it into [0, 1] range\n    generated_img -= np.min(generated_img)\n    generated_img /= np.max(generated_img)\n\n    return generated_img\n\n\ndef generate_from_random_latent_vector(generator, cgan_digit=None):\n    with torch.no_grad():\n        latent_vector = utils.get_gaussian_latent_batch(1, next(generator.parameters()).device)\n\n        if cgan_digit is None:\n            generated_img = postprocess_generated_img(generator(latent_vector))\n        else:  # condition and generate the digit specified by cgan_digit\n            ref_label = torch.tensor([cgan_digit], dtype=torch.int64)\n            ref_label_one_hot_encoding = torch.nn.functional.one_hot(ref_label, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(next(generator.parameters()).device)\n            generated_img = postprocess_generated_img(generator(latent_vector, ref_label_one_hot_encoding))\n\n    return generated_img, 
latent_vector.to('cpu').numpy()[0]\n\n\ndef generate_from_specified_numpy_latent_vector(generator, latent_vector):\n    assert isinstance(latent_vector, np.ndarray), f'Expected latent vector to be numpy array but got {type(latent_vector)}.'\n\n    with torch.no_grad():\n        latent_vector_tensor = torch.unsqueeze(torch.tensor(latent_vector, device=next(generator.parameters()).device), dim=0)\n        return postprocess_generated_img(generator(latent_vector_tensor))\n\n\ndef linear_interpolation(t, p0, p1):\n    return p0 + t * (p1 - p0)\n\n\ndef spherical_interpolation(t, p0, p1):\n    \"\"\" Spherical interpolation (slerp) formula: https://en.wikipedia.org/wiki/Slerp\n\n    Found inspiration here: https://github.com/soumith/ganhacks\n    but I didn't get any improvement using it compared to linear interpolation.\n\n    Args:\n        t (float): has [0, 1] range\n        p0 (numpy array): First n-dimensional vector\n        p1 (numpy array): Second n-dimensional vector\n\n    Result:\n        Returns spherically interpolated vector.\n\n    \"\"\"\n    if t <= 0:\n        return p0\n    elif t >= 1:\n        return p1\n    elif np.allclose(p0, p1):\n        return p0\n\n    # Convert p0 and p1 to unit vectors and find the angle between them (omega)\n    omega = np.arccos(np.dot(p0 / np.linalg.norm(p0), p1 / np.linalg.norm(p1)))\n    sin_omega = np.sin(omega)  # syntactic sugar\n    return np.sin((1.0 - t) * omega) / sin_omega * p0 + np.sin(t * omega) / sin_omega * p1\n\n\ndef display_vector_arithmetic_results(imgs_to_display):\n    fig = plt.figure(figsize=(6, 6))\n    title_fontsize = 'x-small'\n    num_display_imgs = 7\n    titles = ['happy women', 'happy woman (avg)', 'neutral women', 'neutral woman (avg)', 'neutral men', 'neutral man (avg)', 'result - happy man']\n    ax = np.zeros(num_display_imgs, dtype=object)\n    assert len(imgs_to_display) == num_display_imgs, f'Expected {num_display_imgs} got {len(imgs_to_display)} images.'\n\n    gs = 
fig.add_gridspec(5, 4, left=0.02, right=0.98, wspace=0.05, hspace=0.3)\n    ax[0] = fig.add_subplot(gs[0, :3])\n    ax[1] = fig.add_subplot(gs[0, 3])\n    ax[2] = fig.add_subplot(gs[1, :3])\n    ax[3] = fig.add_subplot(gs[1, 3])\n    ax[4] = fig.add_subplot(gs[2, :3])\n    ax[5] = fig.add_subplot(gs[2, 3])\n    ax[6] = fig.add_subplot(gs[3:, 1:3])\n\n    for i in range(num_display_imgs):\n        ax[i].imshow(cv.resize(imgs_to_display[i], (0, 0), fx=3, fy=3, interpolation=cv.INTER_NEAREST))\n        ax[i].set_title(titles[i], fontsize=title_fontsize)\n        ax[i].tick_params(which='both', bottom=False, left=False, labelleft=False, labelbottom=False)\n\n    plt.show()\n\n\ndef generate_new_images(model_name, cgan_digit=None, generation_mode=True, slerp=True, a=None, b=None, should_display=True):\n    \"\"\" Generate imagery using pre-trained generator (using vanilla_generator_000000.pth by default)\n\n    Args:\n        model_name (str): model name you want to use (default lookup location is BINARIES_PATH).\n        cgan_digit (int): if specified generate that exact digit.\n        generation_mode (enum):  generate a single image from a random vector, interpolate between the 2 chosen latent\n         vectors, or perform arithmetic over latent vectors (note: not every mode is supported for every model type)\n        slerp (bool): if True use spherical interpolation otherwise use linear interpolation.\n        a, b (numpy arrays): latent vectors, if set to None you'll be prompted to choose images you like,\n         and use corresponding latent vectors instead.\n        should_display (bool): Display the generated images before saving them.\n\n    \"\"\"\n\n    model_path = os.path.join(BINARIES_PATH, model_name)\n    assert os.path.exists(model_path), f'Could not find the model {model_path}. 
You first need to train your generator.'\n\n    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n    # Prepare the correct (vanilla, cGAN, DCGAN, ...) model, load the weights and put the model into evaluation mode\n    model_state = torch.load(model_path)\n    gan_type = model_state[\"gan_type\"]\n    print(f'Found {gan_type} GAN!')\n    _, generator = utils.get_gan(device, gan_type)\n    generator.load_state_dict(model_state[\"state_dict\"], strict=True)\n    generator.eval()\n\n    # Generate a single image, save it and potentially display it\n    if generation_mode == GenerationMode.SINGLE_IMAGE:\n        generated_imgs_path = os.path.join(DATA_DIR_PATH, 'generated_imagery')\n        os.makedirs(generated_imgs_path, exist_ok=True)\n\n        generated_img, _ = generate_from_random_latent_vector(generator, cgan_digit if gan_type == GANType.CGAN.name else None)\n        utils.save_and_maybe_display_image(generated_imgs_path, generated_img, should_display=should_display)\n\n    # Pick 2 images you like between which you'd like to interpolate (by typing 'y' into console)\n    elif generation_mode == GenerationMode.INTERPOLATION:\n        assert gan_type == GANType.VANILLA.name or gan_type ==GANType.DCGAN.name, f'Got {gan_type} but only VANILLA/DCGAN are supported for the interpolation mode.'\n\n        interpolation_name = \"spherical\" if slerp else \"linear\"\n        interpolation_fn = spherical_interpolation if slerp else linear_interpolation\n\n        grid_interpolated_imgs_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery')  # combined results dir\n        decomposed_interpolated_imgs_path = os.path.join(grid_interpolated_imgs_path, f'tmp_{gan_type}_{interpolation_name}_dump')  # dump separate results\n        if os.path.exists(decomposed_interpolated_imgs_path):\n            shutil.rmtree(decomposed_interpolated_imgs_path)\n        os.makedirs(grid_interpolated_imgs_path, exist_ok=True)\n        
os.makedirs(decomposed_interpolated_imgs_path, exist_ok=True)\n\n        latent_vector_a, latent_vector_b = [None, None]\n\n        # If a and b were not specified loop until the user picked the 2 images he/she likes.\n        found_good_vectors_flag = False\n        if a is None or b is None:\n            while not found_good_vectors_flag:\n                generated_img, latent_vector = generate_from_random_latent_vector(generator)\n                plt.imshow(generated_img); plt.title('Do you like this image?'); plt.show()\n                user_input = input(\"Do you like this generated image? [y for yes]:\")\n                if user_input == 'y':\n                    if latent_vector_a is None:\n                        latent_vector_a = latent_vector\n                        print('Saved the first latent vector.')\n                    elif latent_vector_b is None:\n                        latent_vector_b = latent_vector\n                        print('Saved the second latent vector.')\n                        found_good_vectors_flag = True\n                else:\n                    print('Well lets generate a new one!')\n                    continue\n        else:\n            print('Skipping latent vectors selection section and using cached ones.')\n            latent_vector_a, latent_vector_b = [a, b]\n\n        # Cache latent vectors\n        if a is None or b is None:\n            np.save(os.path.join(grid_interpolated_imgs_path, 'a.npy'), latent_vector_a)\n            np.save(os.path.join(grid_interpolated_imgs_path, 'b.npy'), latent_vector_b)\n\n        print(f'Lets do some {interpolation_name} interpolation!')\n        interpolation_resolution = 47  # number of images between the vectors a and b\n        num_interpolated_imgs = interpolation_resolution + 2  # + 2 so that we include a and b\n\n        generated_imgs = []\n        for i in range(num_interpolated_imgs):\n            t = i / (num_interpolated_imgs - 1)  # goes from 0. 
to 1.\n            current_latent_vector = interpolation_fn(t, latent_vector_a, latent_vector_b)\n            generated_img = generate_from_specified_numpy_latent_vector(generator, current_latent_vector)\n\n            print(f'Generated image [{i+1}/{num_interpolated_imgs}].')\n            utils.save_and_maybe_display_image(decomposed_interpolated_imgs_path, generated_img, should_display=should_display)\n\n            # Move from channel-last to channel-first (HWC->CHW), PyTorch's save_image function expects BCHW format\n            generated_imgs.append(torch.tensor(np.moveaxis(generated_img, 2, 0)))\n\n        interpolated_block_img = torch.stack(generated_imgs)\n        interpolated_block_img = nn.Upsample(scale_factor=2.5, mode='nearest')(interpolated_block_img)\n        save_image(interpolated_block_img, os.path.join(grid_interpolated_imgs_path, utils.get_available_file_name(grid_interpolated_imgs_path)), nrow=int(np.sqrt(num_interpolated_imgs)))\n\n    elif generation_mode == GenerationMode.VECTOR_ARITHMETIC:\n        assert gan_type == GANType.DCGAN.name, f'Got {gan_type} but only DCGAN is supported for arithmetic mode.'\n\n        # Generate num_options face images and create a grid image from them\n        num_options = 100\n        generated_imgs = []\n        latent_vectors = []\n        padding = 2\n        for i in range(num_options):\n            generated_img, latent_vector = generate_from_random_latent_vector(generator)\n            generated_imgs.append(torch.tensor(np.moveaxis(generated_img, 2, 0)))  # make_grid expects CHW format\n            latent_vectors.append(latent_vector)\n        stacked_tensor_imgs = torch.stack(generated_imgs)\n        final_tensor_img = make_grid(stacked_tensor_imgs, nrow=int(np.sqrt(num_options)), padding=padding)\n        display_img = np.moveaxis(final_tensor_img.numpy(), 0, 2)\n\n        # For storing latent vectors\n        num_of_vectors_per_category = 3\n        happy_woman_latent_vectors = []\n        
neutral_woman_latent_vectors = []\n        neutral_man_latent_vectors = []\n\n        # Make it easy - by clicking on the plot you pick the image.\n        def onclick(event):\n            if event.dblclick:\n                pass\n            else:  # single click\n                if event.button == 1:  # left click\n                    x_coord = event.xdata\n                    y_coord = event.ydata\n                    column = int(x_coord / (64 + padding))\n                    row = int(y_coord / (64 + padding))\n\n                    # Store latent vector corresponding to the image that the user clicked on.\n                    if len(happy_woman_latent_vectors) < num_of_vectors_per_category:\n                        happy_woman_latent_vectors.append(latent_vectors[10*row + column])\n                        print(f'Picked image row={row}, column={column} as {len(happy_woman_latent_vectors)}. happy woman.')\n                    elif len(neutral_woman_latent_vectors) < num_of_vectors_per_category:\n                        neutral_woman_latent_vectors.append(latent_vectors[10*row + column])\n                        print(f'Picked image row={row}, column={column} as {len(neutral_woman_latent_vectors)}. neutral woman.')\n                    elif len(neutral_man_latent_vectors) < num_of_vectors_per_category:\n                        neutral_man_latent_vectors.append(latent_vectors[10*row + column])\n                        print(f'Picked image row={row}, column={column} as {len(neutral_man_latent_vectors)}. 
neutral man.')\n                    else:\n                        plt.close()\n\n        plt.figure(figsize=(10, 10))\n        plt.imshow(display_img)\n        # This is just an example you could also pick 3 neutral woman images with sunglasses, etc.\n        plt.title('Click on 3 happy women, 3 neutral women and \\n 3 neutral men images (order matters!)')\n        cid = plt.gcf().canvas.mpl_connect('button_press_event', onclick)\n        plt.show()\n        plt.gcf().canvas.mpl_disconnect(cid)\n        print('Done choosing images.')\n\n        # Calculate the average latent vector for every category (happy woman, neutral woman, neutral man)\n        happy_woman_avg_latent_vector = np.mean(np.array(happy_woman_latent_vectors), axis=0)\n        neutral_woman_avg_latent_vector = np.mean(np.array(neutral_woman_latent_vectors), axis=0)\n        neutral_man_avg_latent_vector = np.mean(np.array(neutral_man_latent_vectors), axis=0)\n\n        # By subtracting neutral woman from the happy woman we capture the \"vector of smiling\". Adding that vector\n        # to a neutral man we get a happy man's latent vector! 
Our latent space has amazingly beautiful structure!\n        happy_man_latent_vector = neutral_man_avg_latent_vector + (happy_woman_avg_latent_vector - neutral_woman_avg_latent_vector)\n\n        # Generate images from these latent vectors\n        happy_women_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in happy_woman_latent_vectors])\n        neutral_women_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in neutral_woman_latent_vectors])\n        neutral_men_imgs = np.hstack([generate_from_specified_numpy_latent_vector(generator, v) for v in neutral_man_latent_vectors])\n\n        happy_woman_avg_img = generate_from_specified_numpy_latent_vector(generator, happy_woman_avg_latent_vector)\n        neutral_woman_avg_img = generate_from_specified_numpy_latent_vector(generator, neutral_woman_avg_latent_vector)\n        neutral_man_avg_img = generate_from_specified_numpy_latent_vector(generator, neutral_man_avg_latent_vector)\n\n        happy_man_img = generate_from_specified_numpy_latent_vector(generator, happy_man_latent_vector)\n\n        display_vector_arithmetic_results([happy_women_imgs, happy_woman_avg_img, neutral_women_imgs, neutral_woman_avg_img, neutral_men_imgs, neutral_man_avg_img, happy_man_img])\n    else:\n        raise Exception(f'Generation mode {generation_mode} not yet supported.')\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--model_name\", type=str, help=\"Pre-trained generator model name\", default=r'VANILLA_000000.pth')\n    parser.add_argument(\"--cgan_digit\", type=int, help=\"Used only for cGAN - generate specified digit\", default=3)\n    # Note: argparse's type=bool is a trap - bool(any non-empty string) is True - so we avoid it here\n    parser.add_argument(\"--generation_mode\", help=\"Pick between 3 generation modes\", default=GenerationMode.SINGLE_IMAGE)\n    parser.add_argument(\"--slerp\", action='store_true', help=\"Use spherical instead of linear interpolation\")\n    parser.add_argument(\"--should_display\", 
type=bool, help=\"Display intermediate results\", default=True)\n    args = parser.parse_args()\n\n    # The first time you start generation in the interpolation mode it will cache a and b\n    # which you'll choose the first time you run the it.\n    a_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery', 'a.npy')\n    b_path = os.path.join(DATA_DIR_PATH, 'interpolated_imagery', 'b.npy')\n    latent_vector_a = np.load(a_path) if os.path.exists(a_path) else None\n    latent_vector_b = np.load(b_path) if os.path.exists(b_path) else None\n\n    generate_new_images(\n        args.model_name,\n        args.cgan_digit,\n        generation_mode=args.generation_mode,\n        slerp=args.slerp,\n        a=latent_vector_a,\n        b=latent_vector_b,\n        should_display=args.should_display)\n"
  },
  {
    "path": "models/definitions/conditional_gan.py",
"content": "\"\"\"Conditional GAN (cGAN) implementation.\n\n    It's the same architecture as the vanilla GAN, just with an additional conditioning vector concatenated to the input.\n\n    Note: I could have merged this file with vanilla_gan.py and made the conditioning vector an optional input,\n    but I decided not to, for ease of understanding for beginners. Otherwise it could get a bit confusing.\n\"\"\"\n\n\nimport torch\nfrom torch import nn\n\n\nfrom utils.constants import LATENT_SPACE_DIM, MNIST_IMG_SIZE, MNIST_NUM_CLASSES\nfrom .vanilla_gan import vanilla_block\n\n\nclass ConditionalGeneratorNet(torch.nn.Module):\n    \"\"\"Simple 4-layer MLP generative neural network.\n\n    By default it works for MNIST-size images (28x28).\n\n    There are many ways you can construct a generator that works on MNIST.\n    Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc.\n\n    It's still an open research question how to evaluate GANs, i.e. how to quantify that \"ok\" statement.\n\n    People tried to automate the task using IS (inception score, often used incorrectly), etc.\n    but so far it always ends up with some form of visual inspection (human in the loop).\n\n    \"\"\"\n\n    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n        super().__init__()\n        self.generated_img_shape = img_shape\n        # We're adding the conditioning vector (hence +MNIST_NUM_CLASSES) which will directly control\n        # which MNIST class we should generate. We did not have this control in the original (vanilla) GAN.\n        # If that vector = [1., 0., ..., 0.] we generate 0, if [0., 1., 0., ..., 0.] 
we generate 1, etc.\n        num_neurons_per_layer = [LATENT_SPACE_DIM + MNIST_NUM_CLASSES, 256, 512, 1024, img_shape[0] * img_shape[1]]\n\n        self.net = nn.Sequential(\n            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]),\n            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]),\n            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]),\n            *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh())\n        )\n\n    def forward(self, latent_vector_batch, one_hot_conditioning_vector_batch):\n        img_batch_flattened = self.net(torch.cat((latent_vector_batch, one_hot_conditioning_vector_batch), 1))\n        # just un-flatten using view into (N, 1, 28, 28) shape for MNIST\n        return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape)\n\n\nclass ConditionalDiscriminatorNet(torch.nn.Module):\n    \"\"\"Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes.\n\n    By default it works for MNIST size images (28x28).\n\n    Again there are many ways you can construct discriminator network that would work on MNIST.\n    You could use more or less layers, etc. 
Using normalization as in the DCGAN paper doesn't work well though.\n\n    \"\"\"\n\n    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n        super().__init__()\n        # Same as above using + MNIST_NUM_CLASSES we add support for the conditioning vector\n        num_neurons_per_layer = [img_shape[0] * img_shape[1] + MNIST_NUM_CLASSES, 512, 256, 1]\n\n        self.net = nn.Sequential(\n            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False),\n            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False),\n            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid())\n        )\n\n    def forward(self, img_batch, one_hot_conditioning_vector_batch):\n        img_batch_flattened = img_batch.view(img_batch.shape[0], -1)  # flatten from (N,1,H,W) into (N, HxW)\n        # One hot conditioning vector batch is of shape (N, 10) for MNIST\n        conditioned_input = torch.cat((img_batch_flattened, one_hot_conditioning_vector_batch), 1)\n        return self.net(conditioned_input)\n\n\n\n"
  },
  {
    "path": "models/definitions/dcgan.py",
"content": "\"\"\"DCGAN implementation\n\n    Note1:\n        Many implementations out there, including PyTorch's official one, deviate from the original architecture\n        without clearly explaining why. PyTorch for example uses 512 channels initially instead of 1024.\n\n    Note2:\n        One small modification compared to the original paper: I used kernel size = 4, as I couldn't get a 64x64\n        output spatial dimension with kernel size 5, no matter the padding setting. I noticed others did the same thing.\n\n        Also I'm not doing 0-centered normal weight initialization - it actually gives far worse results.\n        Batch normalization, in general, reduced the need for smart initialization but it obviously still matters.\n\n\"\"\"\n\nimport torch\nfrom torch import nn\nimport numpy as np\n\n\nfrom utils.constants import LATENT_SPACE_DIM\n\n\ndef dcgan_upsample_block(in_channels, out_channels, normalize=True, activation=None):\n    # Bias set to True gives unnatural color casts\n    layers = [nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=2, padding=1, bias=False)]\n    # There were debates as to whether BatchNorm should go before or after the activation function; in my experiments it\n    # did not matter. 
Goodfellow also had a talk where he mentioned that it should not matter.\n    if normalize:\n        layers.append(nn.BatchNorm2d(out_channels))\n    layers.append(nn.ReLU() if activation is None else activation)\n    return layers\n\n\nclass ConvolutionalGenerativeNet(nn.Module):\n\n    def __init__(self):\n        super().__init__()\n\n        # Constants as defined in the DCGAN paper\n        num_channels_per_layer = [1024, 512, 256, 128, 3]\n        self.init_volume_shape = (num_channels_per_layer[0], 4, 4)\n\n        # Both with and without bias gave similar results\n        self.linear = nn.Linear(LATENT_SPACE_DIM, num_channels_per_layer[0] * np.prod(self.init_volume_shape[1:]))\n\n        self.net = nn.Sequential(\n            *dcgan_upsample_block(num_channels_per_layer[0], num_channels_per_layer[1]),\n            *dcgan_upsample_block(num_channels_per_layer[1], num_channels_per_layer[2]),\n            *dcgan_upsample_block(num_channels_per_layer[2], num_channels_per_layer[3]),\n            *dcgan_upsample_block(num_channels_per_layer[3], num_channels_per_layer[4], normalize=False, activation=nn.Tanh())\n        )\n\n    def forward(self, latent_vector_batch):\n        # Project from the space with dimensionality 100 into the space with dimensionality 1024 * 4 * 4\n        # -> basic linear algebra (huh you thought you'll never need math?) 
and reshape into a 3D volume\n        latent_vector_batch_projected = self.linear(latent_vector_batch)\n        latent_vector_batch_projected_reshaped = latent_vector_batch_projected.view(latent_vector_batch_projected.shape[0], *self.init_volume_shape)\n\n        return self.net(latent_vector_batch_projected_reshaped)\n\n\ndef dcgan_downsample_block(in_channels, out_channels, normalize=True, activation=None, padding=1):\n    layers = [nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=4, stride=2, padding=padding, bias=False)]\n    if normalize:\n        layers.append(nn.BatchNorm2d(out_channels))\n    layers.append(nn.LeakyReLU(0.2) if activation is None else activation)\n    return layers\n\n\nclass ConvolutionalDiscriminativeNet(nn.Module):\n\n    def __init__(self):\n        super().__init__()\n\n        num_channels_per_layer = [3, 128, 256, 512, 1024, 1]\n\n        # Since the last volume has shape 1024x4x4 we can apply 1 more block: its 4x4 kernel collapses the spatial\n        # dimension into 1x1, and by setting the channel count to 1 and padding to 0 we get a scalar value\n        # that we can pass into Sigmoid - effectively simulating a fully connected layer.\n        self.net = nn.Sequential(\n            *dcgan_downsample_block(num_channels_per_layer[0], num_channels_per_layer[1], normalize=False),\n            *dcgan_downsample_block(num_channels_per_layer[1], num_channels_per_layer[2]),\n            *dcgan_downsample_block(num_channels_per_layer[2], num_channels_per_layer[3]),\n            *dcgan_downsample_block(num_channels_per_layer[3], num_channels_per_layer[4]),\n            *dcgan_downsample_block(num_channels_per_layer[4], num_channels_per_layer[5], normalize=False, activation=nn.Sigmoid(), padding=0),\n        )\n\n    def forward(self, img_batch):\n        return self.net(img_batch)\n\n\n# Hurts the performance in all my experiments, leaving it here as proof that I tried it and it didn't give good 
results\n# Batch normalization in general reduces the need for smart initialization - that's one of its main advantages.\ndef weights_init_normal(m):\n    classname = m.__class__.__name__\n    if classname.find(\"Conv2d\") != -1:\n        torch.nn.init.normal_(m.weight.data, 0.0, 0.02)\n    elif classname.find(\"BatchNorm2d\") != -1:\n        # It wouldn't make sense to use a 0-centered normal distribution here as it would clamp the outputs to 0,\n        # that's why it's a 1-centered normal distribution with std dev of 0.02, as specified in the paper\n        torch.nn.init.normal_(m.weight.data, 1.0, 0.02)\n        torch.nn.init.constant_(m.bias.data, 0.0)\n\n\n
  },
  {
    "path": "models/definitions/vanilla_gan.py",
"content": "\"\"\"The original (vanilla) GAN implementation with some modifications.\n\n    Modifications:\n        The original paper used the maxout activation and dropout for regularization.\n        I'm using LeakyReLU instead, and batch normalization, which came after the original paper was published.\n\n    Also note that certain architectural design decisions were inspired by the DCGAN paper.\n\"\"\"\n\nimport torch\nfrom torch import nn\n\n\nfrom utils.constants import LATENT_SPACE_DIM, MNIST_IMG_SIZE\n\n\ndef vanilla_block(in_feat, out_feat, normalize=True, activation=None):\n    layers = [nn.Linear(in_feat, out_feat)]\n    if normalize:\n        layers.append(nn.BatchNorm1d(out_feat))\n    # 0.2 was used in DCGAN; I experimented with other values like 0.5 and didn't notice a significant change\n    layers.append(nn.LeakyReLU(0.2) if activation is None else activation)\n    return layers\n\n\nclass GeneratorNet(torch.nn.Module):\n    \"\"\"Simple 4-layer MLP generative neural network.\n\n    By default it works for MNIST-size images (28x28).\n\n    There are many ways you can construct a generator that works on MNIST.\n    Even without normalization layers it will work ok. Even with 5 layers it will work ok, etc.\n\n    It's still an open research question how to evaluate GANs, i.e. 
quantify that \"ok\" statement.\n\n    People tried to automate the task using IS (inception score, often used incorrectly), etc.\n    but so far it always ends up with some form of visual inspection (human in the loop).\n\n    \"\"\"\n\n    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n        super().__init__()\n        self.generated_img_shape = img_shape\n        num_neurons_per_layer = [LATENT_SPACE_DIM, 256, 512, 1024, img_shape[0] * img_shape[1]]\n\n        self.net = nn.Sequential(\n            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1]),\n            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2]),\n            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3]),\n            *vanilla_block(num_neurons_per_layer[3], num_neurons_per_layer[4], normalize=False, activation=nn.Tanh())\n        )\n\n    def forward(self, latent_vector_batch):\n        img_batch_flattened = self.net(latent_vector_batch)\n        # just un-flatten using view into (N, 1, 28, 28) shape for MNIST\n        return img_batch_flattened.view(img_batch_flattened.shape[0], 1, *self.generated_img_shape)\n\n\nclass DiscriminatorNet(torch.nn.Module):\n    \"\"\"Simple 3-layer MLP discriminative neural network. It should output probability 1. for real images and 0. for fakes.\n\n    By default it works for MNIST size images (28x28).\n\n    Again there are many ways you can construct discriminator network that would work on MNIST.\n    You could use more or less layers, etc. 
Using normalization as in the DCGAN paper doesn't work well though.\n\n    \"\"\"\n\n    def __init__(self, img_shape=(MNIST_IMG_SIZE, MNIST_IMG_SIZE)):\n        super().__init__()\n        num_neurons_per_layer = [img_shape[0] * img_shape[1], 512, 256, 1]\n\n        self.net = nn.Sequential(\n            *vanilla_block(num_neurons_per_layer[0], num_neurons_per_layer[1], normalize=False),\n            *vanilla_block(num_neurons_per_layer[1], num_neurons_per_layer[2], normalize=False),\n            *vanilla_block(num_neurons_per_layer[2], num_neurons_per_layer[3], normalize=False, activation=nn.Sigmoid())\n        )\n\n    def forward(self, img_batch):\n        img_batch_flattened = img_batch.view(img_batch.shape[0], -1)  # flatten from (N,1,H,W) into (N, HxW)\n        return self.net(img_batch_flattened)\n\n\n\n"
  },
  {
    "path": "playground.py",
"content": "import os\n\n\nimport torch\nfrom torch import nn\n\n\nfrom utils.video_utils import create_gif\nfrom utils.constants import *\n\n\ndef understand_adversarial_loss():\n    \"\"\"Understand why we can use binary cross-entropy as the adversarial loss.\n\n    It's currently set up to push the discriminator's output close to 1 (we assume real images),\n    but you can create fake_images_gt = torch.tensor(0.) and do a similar thing for fake images.\n\n    How to use it:\n        Read through the comments and analyze the console output.\n\n    \"\"\"\n    adversarial_loss = nn.BCELoss()\n\n    logits = [-10, -3, 0, 3, 10]  # Simulation of discriminator net's outputs before the sigmoid activation\n\n    # This will set up the BCE loss as -log(x) (0. would set it to -log(1-x))\n    real_images_gt = torch.tensor(1.)\n\n    lr = 0.1  # learning rate\n\n    for logit in logits:\n        print('*' * 5)\n\n        # Consider this as discriminator net's last layer's (single neuron) output\n        # just before the sigmoid which converts it to a probability.\n        logit_tensor = torch.tensor(float(logit), requires_grad=True)\n        print(f'logit value before optimization: {logit}')\n\n        # Note: with requires_grad=True we force PyTorch to build the computational graph so that we can push the logit\n        # towards values which will give us probability 1\n\n        # Discriminator's output (probability that the image is real)\n        prediction = nn.Sigmoid()(logit_tensor)\n        print(f'discriminator net\\'s output: {prediction}')\n\n        # The closer the prediction is to 1 the lower the loss will be!\n        # -log(prediction) <- for predictions close to 1 loss will be close to 0,\n        # predictions close to 0 will cause the loss to go to \"+ infinity\".\n        loss = adversarial_loss(prediction, real_images_gt)\n        print(f'adversarial loss output: {loss}')\n\n        loss.backward()  # calculate the gradient (sets the .grad field of the 
logit_tensor)\n        # The closer the discriminator's prediction is to 1 the closer the loss will be to 0,\n        # and the smaller this gradient will be, as there is no need to change logit,\n        # because we accomplished what we wanted - to make prediction as close to 1 as possible.\n        print(f'logit gradient {logit_tensor.grad.data}')\n\n        # Effectively the biggest update will be made for logit -10.\n        # Logit value -10 will cause the discriminator to output probability close to 0, which will give us huge loss\n        # -log(0), which will cause big (negative) grad value which will then push the logit towards \"+ infinity\",\n        # as that forces the discriminator to output the probability of 1. So -10 goes to ~ -9.9 in the first iteration.\n        logit_tensor.data -= lr * logit_tensor.grad.data\n        print(f'logit value after optimization {logit_tensor}')\n\n        print('')\n\n\nif __name__ == \"__main__\":\n    # understand_adversarial_loss()\n\n    create_gif(os.path.join(DATA_DIR_PATH, 'debug_imagery'), os.path.join(DATA_DIR_PATH, 'default.gif'), downsample=10)\n\n\n"
  },
  {
    "path": "train_cgan.py",
    "content": "\"\"\"\n    The main difference between training vanilla GAN and training cGAN is that we are additionally\n    adding this conditioning vector y to discriminators and generators inputs (by just concatenating it to old input).\n\n    y is one hot vector meaning if we want to condition the generator to:\n        generate 0 -> we add [1., 0., ..., 0.] (10 elements)\n        generate 1 -> we add [0., 1., 0., ..., 0.] (10 elements)\n        ...\n\"\"\"\n\nimport os\nimport argparse\nimport time\n\n\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torchvision.utils import save_image, make_grid\nfrom torch.utils.tensorboard import SummaryWriter\n\n\nimport utils.utils as utils\nfrom utils.constants import *\n\n\ndef train_cgan(training_config):\n    writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default\n    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")  # checking whether you have a GPU\n\n    # Prepare MNIST data loader (it will download MNIST the first time you run it)\n    mnist_data_loader = utils.get_mnist_data_loader(training_config['batch_size'])\n\n    # Fetch feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights\n    discriminator_net, generator_net = utils.get_gan(device, GANType.CGAN.name)\n    discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net)\n\n    # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x)\n    # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!\n    adversarial_loss = nn.BCELoss()\n    real_images_gt = torch.ones((training_config['batch_size'], 1), device=device)\n    fake_images_gt = torch.zeros((training_config['batch_size'], 1), device=device)\n\n    # For logging purposes\n    ref_batch_size = MNIST_NUM_CLASSES**2  # We'll create a grid 10x10 where each column is a single digit\n    ref_noise_batch 
= utils.get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during training\n\n    # We'll generate exactly this grid of 10x10 (each digit in a separate column and 10 instances) for easier debugging\n    ref_labels = torch.tensor(np.array([digit for _ in range(MNIST_NUM_CLASSES) for digit in range(MNIST_NUM_CLASSES)]), dtype=torch.int64)\n    ref_labels_one_hot = torch.nn.functional.one_hot(ref_labels, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(device)\n\n    discriminator_loss_values = []\n    generator_loss_values = []\n    img_cnt = 0\n\n    ts = time.time()  # start measuring time\n\n    # cGAN training loop\n    utils.print_training_info_to_console(training_config)\n    for epoch in range(training_config['num_epochs']):\n        for batch_idx, (real_images, labels) in enumerate(mnist_data_loader):\n\n            # Labels [0-9], converted to one hot encoding, are used for conditioning. Basically a fancy word for\n            # if we give you e.g. [1., 0., ..., 0.] we expect a digit from class 0.\n            # I found that using real labels for training both G and D works nice. 
No need for random labels.\n            labels_one_hot = torch.nn.functional.one_hot(labels, MNIST_NUM_CLASSES).type(torch.FloatTensor).to(device)\n            real_images = real_images.to(device)  # Place imagery on GPU (if present)\n\n            #\n            # Train discriminator: maximize V = log(D(x|y)) + log(1-D(G(z|y))) or equivalently minimize -V\n            # Note: D-discriminator, x-real images, G-generator, z-latent vectors, G(z)-fake images, y-conditioning\n            #\n\n            # Zero out .grad variables in discriminator network (otherwise we would have corrupt results)\n            discriminator_opt.zero_grad()\n\n            # -log(D(x|y)) <- we minimize this by making D(x|y) as close to 1 as possible\n            real_discriminator_loss = adversarial_loss(discriminator_net(real_images, labels_one_hot), real_images_gt)\n\n            # G(z|y) | G ~ generator_net and z ~ utils.get_gaussian_latent_batch(batch_size, device), y ~ conditioning\n            fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device), labels_one_hot)\n            # D(G(z|y)), we call detach() so that we don't calculate gradients for the generator during backward()\n            fake_images_predictions = discriminator_net(fake_images.detach(), labels_one_hot)\n            # -log(1 - D(G(z|y))) <- we minimize this by making D(G(z|y)) as close to 0 as possible\n            fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)\n\n            discriminator_loss = real_discriminator_loss + fake_discriminator_loss\n            discriminator_loss.backward()  # this will populate .grad vars in the discriminator net\n            discriminator_opt.step()  # perform D weights update according to optimizer's strategy\n\n            #\n            # Train G: minimize V1 = log(1-D(G(z|y))) or equivalently maximize V2 = log(D(G(z|y))) (or min of -V2)\n            # The original expression (V1) had problems with 
diminishing gradients for G when D is too good.\n            #\n\n            # If you want to cause mode collapse, probably the easiest way would be to add \"for i in range(n)\"\n            # here (simply train G more frequently than D); n = 10 worked for me, other values will also work - experiment.\n\n            # Zero out .grad variables in the generator network (otherwise we would have corrupt results)\n            generator_opt.zero_grad()\n\n            # D(G(z|y)) (see above for explanations)\n            generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device), labels_one_hot), labels_one_hot)\n            # By placing real_images_gt here we minimize -log(D(G(z|y))) which happens when D approaches 1\n            # i.e. we're tricking D into thinking that these generated images are real!\n            generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)\n\n            generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)\n            generator_opt.step()  # perform G weights update according to optimizer's strategy\n\n            #\n            # Logging and checkpoint creation\n            #\n\n            generator_loss_values.append(generator_loss.item())\n            discriminator_loss_values.append(discriminator_loss.item())\n\n            if training_config['enable_tensorboard']:\n                writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(mnist_data_loader) * epoch + batch_idx + 1)\n                # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)\n                if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                    with torch.no_grad():\n                        log_generated_images = 
generator_net(ref_noise_batch, ref_labels_one_hot)\n                        log_generated_images_resized = nn.Upsample(scale_factor=1.5, mode='nearest')(log_generated_images)\n                        intermediate_imagery_grid = make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                        writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(mnist_data_loader) * epoch + batch_idx + 1)\n\n            if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:\n                print(f'GAN training: time elapsed= {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(mnist_data_loader)}]')\n\n            # Save intermediate generator images (more convenient like this than through tensorboard)\n            if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                with torch.no_grad():\n                    log_generated_images = generator_net(ref_noise_batch, ref_labels_one_hot)\n                    log_generated_images_resized = nn.Upsample(scale_factor=1.5, mode='nearest')(log_generated_images)\n                    save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                    img_cnt += 1\n\n            # Save generator checkpoint\n            if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:\n                ckpt_model_name = f\"cgan_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth\"\n                torch.save(utils.get_training_state(generator_net, GANType.CGAN.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))\n\n    # Save the latest generator in the binaries directory\n    torch.save(utils.get_training_state(generator_net, 
GANType.CGAN.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name(GANType.CGAN)))\n\n\nif __name__ == \"__main__\":\n    #\n    # fixed args - don't change these unless you have a good reason\n    #\n    debug_path = os.path.join(DATA_DIR_PATH, 'debug_imagery')\n    os.makedirs(debug_path, exist_ok=True)\n\n    #\n    # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid clutter)\n    #\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--num_epochs\", type=int, help=\"number of training epochs\", default=100)\n    parser.add_argument(\"--batch_size\", type=int, help=\"number of images per training batch\", default=128)\n\n    # logging/debugging/checkpoint related (helps a lot with experimentation)\n    parser.add_argument(\"--enable_tensorboard\", type=bool, help=\"enable tensorboard logging (D and G loss)\", default=True)\n    parser.add_argument(\"--debug_imagery_log_freq\", type=int, help=\"log generator images during training (batch) freq\", default=100)\n    parser.add_argument(\"--console_log_freq\", type=int, help=\"log to output console (batch) freq\", default=100)\n    parser.add_argument(\"--checkpoint_freq\", type=int, help=\"checkpoint model saving (epoch) freq\", default=5)\n    args = parser.parse_args()\n\n    # Wrapping training configuration into a dictionary\n    training_config = dict()\n    for arg in vars(args):\n        training_config[arg] = getattr(args, arg)\n    training_config['debug_path'] = debug_path\n\n    # train GAN model\n    train_cgan(training_config)\n\n"
  },
  {
    "path": "train_dcgan.py",
"content": "\"\"\"\n    Literally nothing changed in the training loop of DCGAN compared to the vanilla GAN.\n\n    Things that did change:\n        * Model architecture - using CNNs compared to fully connected networks\n        * We're now using CelebA dataset loaded via utils.get_celeba_data_loader (MNIST would work, it's just too easy)\n        * Logging parameters and number of epochs (as we have bigger images)\n\n\"\"\"\n\nimport os\nimport argparse\nimport time\n\n\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torchvision.utils import save_image, make_grid\nfrom torch.utils.tensorboard import SummaryWriter\n\n\nimport utils.utils as utils\nfrom utils.constants import *\n\n\ndef train_dcgan(training_config):\n    writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default\n    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")  # checking whether you have a GPU\n\n    # Prepare CelebA data loader (it will download preprocessed CelebA the first time you run it ~240 MB)\n    celeba_data_loader = utils.get_celeba_data_loader(training_config['batch_size'])\n\n    # Fetch convolutional nets (place them on GPU if present) and optimizers which will tweak their weights\n    discriminator_net, generator_net = utils.get_gan(device, GANType.DCGAN.name)\n    discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net)\n\n    # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x)\n    # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!\n    adversarial_loss = nn.BCELoss()\n    real_images_gt = torch.ones((training_config['batch_size'], 1, 1, 1), device=device)\n    fake_images_gt = torch.zeros((training_config['batch_size'], 1, 1, 1), device=device)\n\n    # For logging purposes\n    ref_batch_size = 25\n    ref_noise_batch = utils.get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during 
training\n    discriminator_loss_values = []\n    generator_loss_values = []\n    img_cnt = 0\n\n    ts = time.time()  # start measuring time\n\n    # GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse!\n    utils.print_training_info_to_console(training_config)\n    for epoch in range(training_config['num_epochs']):\n        for batch_idx, (real_images, _) in enumerate(celeba_data_loader):\n\n            real_images = real_images.to(device)  # Place imagery on GPU (if present)\n\n            #\n            # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V\n            # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images\n            #\n\n            # Zero out .grad variables in discriminator network (otherwise we would have corrupt results)\n            discriminator_opt.zero_grad()\n\n            # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible\n            real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)\n\n            # G(z) | G == generator_net and z == utils.get_gaussian_latent_batch(batch_size, device)\n            fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device))\n            # D(G(z)), we call detach() so that we don't calculate gradients for the generator during backward()\n            fake_images_predictions = discriminator_net(fake_images.detach())\n            # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible\n            fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)\n\n            discriminator_loss = real_discriminator_loss + fake_discriminator_loss\n            discriminator_loss.backward()  # this will populate .grad vars in the discriminator net\n            discriminator_opt.step()  # 
perform D weights update according to optimizer's strategy\n\n            #\n            # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)\n            # The original expression (V1) had problems with diminishing gradients for G when D is too good.\n            #\n\n            # if you want to cause mode collapse probably the easiest way to do that would be to add \"for i in range(n)\"\n            # here (simply train G more frequently than D), n = 10 worked for me, other values will also work - experiment.\n\n            # Zero out .grad variables in the generator network (otherwise we would have corrupt results)\n            generator_opt.zero_grad()\n\n            # D(G(z)) (see above for explanations)\n            generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device)))\n            # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1\n            # i.e. 
we're tricking D into thinking that these generated images are real!\n            generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)\n\n            generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)\n            generator_opt.step()  # perform G weights update according to optimizer's strategy\n\n            #\n            # Logging and checkpoint creation\n            #\n\n            generator_loss_values.append(generator_loss.item())\n            discriminator_loss_values.append(discriminator_loss.item())\n\n            if training_config['enable_tensorboard']:\n                writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(celeba_data_loader) * epoch + batch_idx + 1)\n                # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)\n                if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                    with torch.no_grad():\n                        log_generated_images = generator_net(ref_noise_batch)\n                        log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)\n                        intermediate_imagery_grid = make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                        writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(celeba_data_loader) * epoch + batch_idx + 1)\n\n            if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:\n                print(f'GAN training: time elapsed= {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(celeba_data_loader)}]')\n\n            # Save intermediate generator images (more convenient like this than through tensorboard)\n 
           if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                with torch.no_grad():\n                    log_generated_images = generator_net(ref_noise_batch)\n                    log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)\n                    save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                    img_cnt += 1\n\n            # Save generator checkpoint\n            if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:\n                ckpt_model_name = f\"dcgan_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth\"\n                torch.save(utils.get_training_state(generator_net, GANType.DCGAN.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))\n\n    # Save the latest generator in the binaries directory\n    torch.save(utils.get_training_state(generator_net, GANType.DCGAN.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name(GANType.DCGAN)))\n\n\nif __name__ == \"__main__\":\n    #\n    # fixed args - don't change these unless you have a good reason\n    #\n    debug_path = os.path.join(DATA_DIR_PATH, 'debug_imagery')\n    os.makedirs(debug_path, exist_ok=True)\n\n    #\n    # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid clutter)\n    #\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--num_epochs\", type=int, help=\"number of training epochs\", default=8)\n    parser.add_argument(\"--batch_size\", type=int, help=\"number of images per training batch\", default=128)\n\n    # logging/debugging/checkpoint related (helps a lot with experimentation)\n    parser.add_argument(\"--enable_tensorboard\", type=bool, help=\"enable 
tensorboard logging (D and G loss)\", default=True)\n    parser.add_argument(\"--debug_imagery_log_freq\", type=int, help=\"log generator images during training (batch) freq\", default=20)\n    parser.add_argument(\"--console_log_freq\", type=int, help=\"log to output console (batch) freq\", default=20)\n    parser.add_argument(\"--checkpoint_freq\", type=int, help=\"checkpoint model saving (epoch) freq\", default=2)\n    args = parser.parse_args()\n\n    # Wrapping training configuration into a dictionary\n    training_config = dict()\n    for arg in vars(args):\n        training_config[arg] = getattr(args, arg)\n    training_config['debug_path'] = debug_path\n\n    # train GAN model\n    train_dcgan(training_config)\n\n"
  },
  {
    "path": "train_vanilla_gan.py",
    "content": "import os\nimport argparse\nimport time\n\n\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torchvision.utils import save_image, make_grid\nfrom torch.utils.tensorboard import SummaryWriter\n\n\nimport utils.utils as utils\nfrom utils.constants import *\n\n\ndef train_vanilla_gan(training_config):\n    writer = SummaryWriter()  # (tensorboard) writer will output to ./runs/ directory by default\n    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")  # checking whether you have a GPU\n\n    # Prepare MNIST data loader (it will download MNIST the first time you run it)\n    mnist_data_loader = utils.get_mnist_data_loader(training_config['batch_size'])\n\n    # Fetch feed-forward nets (place them on GPU if present) and optimizers which will tweak their weights\n    discriminator_net, generator_net = utils.get_gan(device, GANType.VANILLA.name)\n    discriminator_opt, generator_opt = utils.get_optimizers(discriminator_net, generator_net)\n\n    # 1s will configure BCELoss into -log(x) whereas 0s will configure it to -log(1-x)\n    # So that means we can effectively use binary cross-entropy loss to achieve adversarial loss!\n    adversarial_loss = nn.BCELoss()\n    real_images_gt = torch.ones((training_config['batch_size'], 1), device=device)\n    fake_images_gt = torch.zeros((training_config['batch_size'], 1), device=device)\n\n    # For logging purposes\n    ref_batch_size = 16\n    ref_noise_batch = utils.get_gaussian_latent_batch(ref_batch_size, device)  # Track G's quality during training\n    discriminator_loss_values = []\n    generator_loss_values = []\n    img_cnt = 0\n\n    ts = time.time()  # start measuring time\n\n    # GAN training loop, it's always smart to first train the discriminator so as to avoid mode collapse!\n    utils.print_training_info_to_console(training_config)\n    for epoch in range(training_config['num_epochs']):\n        for batch_idx, (real_images, _) in 
enumerate(mnist_data_loader):\n\n            real_images = real_images.to(device)  # Place imagery on GPU (if present)\n\n            #\n            # Train discriminator: maximize V = log(D(x)) + log(1-D(G(z))) or equivalently minimize -V\n            # Note: D = discriminator, x = real images, G = generator, z = latent Gaussian vectors, G(z) = fake images\n            #\n\n            # Zero out .grad variables in discriminator network (otherwise we would have corrupt results)\n            discriminator_opt.zero_grad()\n\n            # -log(D(x)) <- we minimize this by making D(x)/discriminator_net(real_images) as close to 1 as possible\n            real_discriminator_loss = adversarial_loss(discriminator_net(real_images), real_images_gt)\n\n            # G(z) | G == generator_net and z == utils.get_gaussian_latent_batch(batch_size, device)\n            fake_images = generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device))\n            # D(G(z)), we call detach() so that we don't calculate gradients for the generator during backward()\n            fake_images_predictions = discriminator_net(fake_images.detach())\n            # -log(1 - D(G(z))) <- we minimize this by making D(G(z)) as close to 0 as possible\n            fake_discriminator_loss = adversarial_loss(fake_images_predictions, fake_images_gt)\n\n            discriminator_loss = real_discriminator_loss + fake_discriminator_loss\n            discriminator_loss.backward()  # this will populate .grad vars in the discriminator net\n            discriminator_opt.step()  # perform D weights update according to optimizer's strategy\n\n            #\n            # Train generator: minimize V1 = log(1-D(G(z))) or equivalently maximize V2 = log(D(G(z))) (or min of -V2)\n            # The original expression (V1) had problems with diminishing gradients for G when D is too good.\n            #\n\n            # if you want to cause mode collapse probably the easiest way to do that would 
be to add \"for i in range(n)\"\n            # here (simply train G more frequently than D), n = 10 worked for me, other values will also work - experiment.\n\n            # Zero out .grad variables in the generator network (otherwise we would have corrupt results)\n            generator_opt.zero_grad()\n\n            # D(G(z)) (see above for explanations)\n            generated_images_predictions = discriminator_net(generator_net(utils.get_gaussian_latent_batch(training_config['batch_size'], device)))\n            # By placing real_images_gt here we minimize -log(D(G(z))) which happens when D approaches 1\n            # i.e. we're tricking D into thinking that these generated images are real!\n            generator_loss = adversarial_loss(generated_images_predictions, real_images_gt)\n\n            generator_loss.backward()  # this will populate .grad vars in the G net (also in D but we won't use those)\n            generator_opt.step()  # perform G weights update according to optimizer's strategy\n\n            #\n            # Logging and checkpoint creation\n            #\n\n            generator_loss_values.append(generator_loss.item())\n            discriminator_loss_values.append(discriminator_loss.item())\n\n            if training_config['enable_tensorboard']:\n                writer.add_scalars('losses/g-and-d', {'g': generator_loss.item(), 'd': discriminator_loss.item()}, len(mnist_data_loader) * epoch + batch_idx + 1)\n                # Save debug imagery to tensorboard also (some redundancy but it may be more beginner-friendly)\n                if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                    with torch.no_grad():\n                        log_generated_images = generator_net(ref_noise_batch)\n                        log_generated_images_resized = nn.Upsample(scale_factor=2, mode='nearest')(log_generated_images)\n                        intermediate_imagery_grid = 
make_grid(log_generated_images_resized, nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                        writer.add_image('intermediate generated imagery', intermediate_imagery_grid, len(mnist_data_loader) * epoch + batch_idx + 1)\n\n            if training_config['console_log_freq'] is not None and batch_idx % training_config['console_log_freq'] == 0:\n                print(f'GAN training: time elapsed = {(time.time() - ts):.2f} [s] | epoch={epoch + 1} | batch= [{batch_idx + 1}/{len(mnist_data_loader)}]')\n\n            # Save intermediate generator images (more convenient like this than through tensorboard)\n            if training_config['debug_imagery_log_freq'] is not None and batch_idx % training_config['debug_imagery_log_freq'] == 0:\n                with torch.no_grad():\n                    log_generated_images = generator_net(ref_noise_batch)\n                    log_generated_images_resized = nn.Upsample(scale_factor=2.5, mode='nearest')(log_generated_images)\n                    save_image(log_generated_images_resized, os.path.join(training_config['debug_path'], f'{str(img_cnt).zfill(6)}.jpg'), nrow=int(np.sqrt(ref_batch_size)), normalize=True)\n                    img_cnt += 1\n\n            # Save generator checkpoint\n            if training_config['checkpoint_freq'] is not None and (epoch + 1) % training_config['checkpoint_freq'] == 0 and batch_idx == 0:\n                ckpt_model_name = f\"vanilla_ckpt_epoch_{epoch + 1}_batch_{batch_idx + 1}.pth\"\n                torch.save(utils.get_training_state(generator_net, GANType.VANILLA.name), os.path.join(CHECKPOINTS_PATH, ckpt_model_name))\n\n    # Save the latest generator in the binaries directory\n    torch.save(utils.get_training_state(generator_net, GANType.VANILLA.name), os.path.join(BINARIES_PATH, utils.get_available_binary_name()))\n\n\nif __name__ == \"__main__\":\n    #\n    # fixed args - don't change these unless you have a good reason\n    #\n    debug_path = 
os.path.join(DATA_DIR_PATH, 'debug_imagery')\n    os.makedirs(debug_path, exist_ok=True)\n\n    #\n    # modifiable args - feel free to play with these (only a small subset is exposed by design to avoid clutter)\n    #\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--num_epochs\", type=int, help=\"number of training epochs\", default=100)\n    parser.add_argument(\"--batch_size\", type=int, help=\"number of images per training batch\", default=128)\n\n    # logging/debugging/checkpoint related (helps a lot with experimentation)\n    parser.add_argument(\"--enable_tensorboard\", type=bool, help=\"enable tensorboard logging (D and G loss)\", default=True)\n    parser.add_argument(\"--debug_imagery_log_freq\", type=int, help=\"log generator images during training (batch) freq\", default=100)\n    parser.add_argument(\"--console_log_freq\", type=int, help=\"log to output console (batch) freq\", default=100)\n    parser.add_argument(\"--checkpoint_freq\", type=int, help=\"checkpoint model saving (epoch) freq\", default=5)\n    args = parser.parse_args()\n\n    # Wrapping training configuration into a dictionary\n    training_config = dict()\n    for arg in vars(args):\n        training_config[arg] = getattr(args, arg)\n    training_config['debug_path'] = debug_path\n\n    # train GAN model\n    train_vanilla_gan(training_config)\n\n"
  },
  {
    "path": "utils/constants.py",
"content": "\"\"\"\n    Contains constants shared across the project.\n\"\"\"\n\nimport os\nimport enum\n\n\nBINARIES_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'models', 'binaries')\nCHECKPOINTS_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'models', 'checkpoints')\nDATA_DIR_PATH = os.path.join(os.path.dirname(__file__), os.pardir, 'data')\n\n# Make sure these exist as the rest of the code assumes they do\nos.makedirs(BINARIES_PATH, exist_ok=True)\nos.makedirs(CHECKPOINTS_PATH, exist_ok=True)\nos.makedirs(DATA_DIR_PATH, exist_ok=True)\n\nLATENT_SPACE_DIM = 100  # input random vector size to generator network\nMNIST_IMG_SIZE = 28\nMNIST_NUM_CLASSES = 10\n\n\nclass GANType(enum.Enum):\n    VANILLA = 0  # note: no trailing commas here - they would turn these member values into tuples like (0,)\n    CGAN = 1\n    DCGAN = 2\n"
  },
  {
    "path": "utils/utils.py",
"content": "import os\nimport re\nimport zipfile\nimport shutil\n\n\nimport git\nimport cv2 as cv\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport torch\nfrom torchvision import transforms, datasets\nfrom torchvision.datasets import ImageFolder\nfrom torch.utils.data import DataLoader\nfrom torch.optim import Adam\nfrom torch.hub import download_url_to_file\n\n\nfrom .constants import *\nfrom models.definitions.vanilla_gan import DiscriminatorNet, GeneratorNet\nfrom models.definitions.conditional_gan import ConditionalDiscriminatorNet, ConditionalGeneratorNet\nfrom models.definitions.dcgan import ConvolutionalDiscriminativeNet, ConvolutionalGenerativeNet\n\n\ndef load_image(img_path, target_shape=None):\n    if not os.path.exists(img_path):\n        raise Exception(f'Path does not exist: {img_path}')\n    img = cv.imread(img_path)[:, :, ::-1]  # [:, :, ::-1] converts BGR (opencv format...) into RGB\n\n    if target_shape is not None:  # resize section\n        if isinstance(target_shape, int) and target_shape != -1:  # scalar -> implicitly setting the width\n            current_height, current_width = img.shape[:2]\n            new_width = target_shape\n            new_height = int(current_height * (new_width / current_width))\n            img = cv.resize(img, (new_width, new_height), interpolation=cv.INTER_CUBIC)\n        else:  # set both dimensions to target shape\n            img = cv.resize(img, (target_shape[1], target_shape[0]), interpolation=cv.INTER_CUBIC)\n\n    # this needs to go after resizing - otherwise cv.resize will push values outside of [0,1] range\n    img = img.astype(np.float32)  # convert from uint8 to float32\n    img /= 255.0  # get to [0, 1] range\n    return img\n\n\ndef save_and_maybe_display_image(dump_dir, dump_img, out_res=(256, 256), should_display=False):\n    assert isinstance(dump_img, np.ndarray), f'Expected numpy array, got {type(dump_img)}.'\n\n    # step1: get next valid image name\n    dump_img_name = 
get_available_file_name(dump_dir)\n\n    # step2: convert to uint8 format\n    if dump_img.dtype != np.uint8:\n        dump_img = (dump_img*255).astype(np.uint8)\n\n    # step3: write image to the file system\n    cv.imwrite(os.path.join(dump_dir, dump_img_name), cv.resize(dump_img[:, :, ::-1], out_res, interpolation=cv.INTER_NEAREST))  # ::-1 because opencv expects BGR (and not RGB) format...\n\n    # step4: maybe display part of the function\n    if should_display:\n        plt.imshow(dump_img)\n        plt.show()\n\n\ndef get_available_file_name(input_dir):\n    def valid_frame_name(str):\n        pattern = re.compile(r'[0-9]{6}\\.jpg')  # regex, examples it covers: 000000.jpg or 923492.jpg, etc.\n        return re.fullmatch(pattern, str) is not None\n\n    valid_frames = list(filter(valid_frame_name, os.listdir(input_dir)))\n    if len(valid_frames) > 0:\n        # Images are saved in the <xxxxxx>.jpg format we find the biggest such <xxxxxx> number and increment by 1\n        last_img_name = sorted(valid_frames)[-1]\n        new_prefix = int(last_img_name.split('.')[0]) + 1  # increment by 1\n        return f'{str(new_prefix).zfill(6)}.jpg'\n    else:\n        return '000000.jpg'\n\n\ndef get_available_binary_name(gan_type_enum=GANType.VANILLA):\n    def valid_binary_name(binary_name):\n        # First time you see raw f-string? 
Don't worry, the only trick is to double the brackets.\n        pattern = re.compile(rf'{gan_type_enum.name}_[0-9]{{6}}\\.pth')\n        return re.fullmatch(pattern, binary_name) is not None\n\n    prefix = gan_type_enum.name\n    # Just list the existing binaries so that we don't overwrite them but write to a new one\n    valid_binary_names = list(filter(valid_binary_name, os.listdir(BINARIES_PATH)))\n    if len(valid_binary_names) > 0:\n        last_binary_name = sorted(valid_binary_names)[-1]\n        new_suffix = int(last_binary_name.split('.')[0][-6:]) + 1  # increment by 1\n        return f'{prefix}_{str(new_suffix).zfill(6)}.pth'\n    else:\n        return f'{prefix}_000000.pth'\n\n\ndef get_gan_data_transform():\n    # It's good to normalize the images to [-1, 1] range https://github.com/soumith/ganhacks\n    transform = transforms.Compose([\n        transforms.ToTensor(),\n        transforms.Normalize((.5,), (.5,))\n    ])\n    return transform\n\n\ndef get_mnist_dataset():\n    # This will download MNIST the first time it is called\n    return datasets.MNIST(root=DATA_DIR_PATH, train=True, download=True, transform=get_gan_data_transform())\n\n\ndef get_mnist_data_loader(batch_size):\n    mnist_dataset = get_mnist_dataset()\n    mnist_data_loader = DataLoader(mnist_dataset, batch_size=batch_size, shuffle=True, drop_last=True)\n    return mnist_data_loader\n\n\ndef download_and_prepare_celeba(celeba_path):\n    celeba_url = r'https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip'\n\n    # Step1: Download the resource to local filesystem\n    print('*' * 50)\n    print(f'Downloading {celeba_url}.')\n    print('This may take a while the first time, the zip file is ~240 MB.')\n    print('*' * 50)\n\n    resource_tmp_path = celeba_path + '.zip'\n    download_url_to_file(celeba_url, resource_tmp_path)\n\n    # Step2: Unzip the resource\n    print(f'Started unzipping. 
Go and take a cup of coffee.')\n    with zipfile.ZipFile(resource_tmp_path) as zf:\n        os.makedirs(celeba_path, exist_ok=True)\n        zf.extractall(path=celeba_path)\n    print(f'Unzipping to: {celeba_path} finished.')\n\n    # Step3: Remove the temporary resource file\n    os.remove(resource_tmp_path)\n    print(f'Removed tmp file {resource_tmp_path}.')\n\n    # Step4: Prepare the dataset into a suitable format for PyTorch's ImageFolder\n    # I don't have any control over this zip so it's got a bunch of junk that needs to be cleaned up.\n    # I am also assuming that the directory structure will remain like this.\n    print(f'Preparing the CelebA dataset - this may take a while the first time.')\n    shutil.rmtree(os.path.join(celeba_path, '__MACOSX'))\n    dst_data_directory = os.path.join(celeba_path, 'processed_celeba_small')\n    os.remove(os.path.join(dst_data_directory, '.DS_Store'))\n    data_directory1 = os.path.join(dst_data_directory, 'celeba')\n    data_directory2 = os.path.join(data_directory1, 'New Folder With Items')\n    for element in os.listdir(data_directory1):\n        if os.path.isfile(os.path.join(data_directory1, element)) and element.endswith('.jpg'):\n            shutil.move(os.path.join(data_directory1, element), os.path.join(dst_data_directory, element))\n\n    for element in os.listdir(data_directory2):\n        if os.path.isfile(os.path.join(data_directory2, element)) and element.endswith('.jpg'):\n            shutil.move(os.path.join(data_directory2, element), os.path.join(dst_data_directory, element))\n\n    shutil.rmtree(data_directory1)\n\n\ndef get_celeba_data_loader(batch_size):\n    celeba_path = os.path.join(DATA_DIR_PATH, 'CelebA')\n    if not os.path.exists(celeba_path):  # We'll have to do this only 1 time, I promise.\n        download_and_prepare_celeba(celeba_path)\n\n    celeba_dataset = ImageFolder(celeba_path, transform=get_gan_data_transform())\n    celeba_data_loader = DataLoader(celeba_dataset, 
batch_size=batch_size, shuffle=True, drop_last=True)\n    return celeba_data_loader\n\n\ndef get_gaussian_latent_batch(batch_size, device):\n    return torch.randn((batch_size, LATENT_SPACE_DIM), device=device)\n\n\ndef get_gan(device, gan_type_name):\n    assert gan_type_name in [gan_type.name for gan_type in GANType], f'Unknown GAN type = {gan_type_name}.'\n\n    if gan_type_name == GANType.VANILLA.name:\n        d_net = DiscriminatorNet().train().to(device)\n        g_net = GeneratorNet().train().to(device)\n    elif gan_type_name == GANType.CGAN.name:\n        d_net = ConditionalDiscriminatorNet().train().to(device)\n        g_net = ConditionalGeneratorNet().train().to(device)\n    elif gan_type_name == GANType.DCGAN.name:\n        d_net = ConvolutionalDiscriminativeNet().train().to(device)\n        g_net = ConvolutionalGenerativeNet().train().to(device)\n    else:\n        raise Exception(f'GAN type {gan_type_name} not yet supported.')\n\n    return d_net, g_net\n\n\n# Tried SGD for the discriminator, had problems tweaking it - Adam simply works nicely but default lr 1e-3 won't work!\n# I had to train the discriminator more (a 4 to 1 schedule worked) to get it working with default lr, still got worse results.\n# 0.0002 and 0.5, 0.999 are from the DCGAN paper - they work nicely here!\ndef get_optimizers(d_net, g_net):\n    d_opt = Adam(d_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\n    g_opt = Adam(g_net.parameters(), lr=0.0002, betas=(0.5, 0.999))\n    return d_opt, g_opt\n\n\ndef get_training_state(generator_net, gan_type_name):\n    training_state = {\n        \"commit_hash\": git.Repo(search_parent_directories=True).head.object.hexsha,\n        \"state_dict\": generator_net.state_dict(),\n        \"gan_type\": gan_type_name\n    }\n    return training_state\n\n\ndef print_training_info_to_console(training_config):\n    print(f'Starting the GAN training.')\n    print('*' * 80)\n    print(f'Settings: num_epochs={training_config[\"num_epochs\"]}, 
batch_size={training_config[\"batch_size\"]}')\n    print('*' * 80)\n\n    if training_config[\"console_log_freq\"]:\n        print(f'Logging to console every {training_config[\"console_log_freq\"]} batches.')\n    else:\n        print(f'Console logging disabled. Set console_log_freq if you want to use it.')\n\n    print('')\n\n    if training_config[\"debug_imagery_log_freq\"]:\n        print(f'Saving intermediate generator images to {training_config[\"debug_path\"]} every {training_config[\"debug_imagery_log_freq\"]} batches.')\n    else:\n        print(f'Generator intermediate image saving disabled. Set debug_imagery_log_freq if you want to use it.')\n\n    print('')\n\n    if training_config[\"checkpoint_freq\"]:\n        print(f'Saving checkpoint models to {CHECKPOINTS_PATH} every {training_config[\"checkpoint_freq\"]} epochs.')\n    else:\n        print(f'Checkpoint model saving disabled. Set checkpoint_freq if you want to use it.')\n\n    print('')\n\n    if training_config['enable_tensorboard']:\n        print('Tensorboard enabled. Logging generator and discriminator losses.')\n        print('Run \"tensorboard --logdir=runs\" from your Anaconda (with conda env activated)')\n        print('Open http://localhost:6006/ in your browser and you\\'re ready to use tensorboard!')\n    else:\n        print('Tensorboard logging disabled.')\n    print('*' * 80)\n\n"
  },
  {
    "path": "utils/video_utils.py",
"content": "import os\n\n\nimport cv2 as cv\nimport numpy as np\nimport imageio\n\n\nfrom .utils import load_image\n\n\ndef create_gif(frames_dir, out_path, downsample=1, img_width=None):\n    assert os.path.splitext(out_path)[1].lower() == '.gif', f'Expected .gif, got {os.path.splitext(out_path)[1]}.'\n\n    # sorted() matters here - os.listdir returns entries in arbitrary order and the frames must stay chronological\n    frame_paths = [os.path.join(frames_dir, frame_name) for cnt, frame_name in enumerate(sorted(os.listdir(frames_dir))) if frame_name.endswith('.jpg') and cnt % downsample == 0]\n\n    if img_width is not None:  # overwrites the old frames\n        for frame_path in frame_paths:\n            img = load_image(frame_path, target_shape=img_width)\n            cv.imwrite(frame_path, np.uint8(img[:, :, ::-1] * 255))\n\n    images = [imageio.imread(frame_path) for frame_path in frame_paths]\n    imageio.mimwrite(out_path, images, fps=5)\n    print(f'Saved gif to {out_path}.')"
  }
]