[
  {
    "path": "README.md",
    "content": "# Variational Autoencoder (VAE) + Transfer learning (ResNet + VAE)\r\n\r\nThis repository implements the VAE in PyTorch, using a pretrained ResNet model as its encoder, and a transposed convolutional network as decoder.  \r\n\r\n\r\n## Datasets\r\n\r\n### 1. MNIST\r\n\r\nThe [MNIST](http://yann.lecun.com/exdb/mnist/) database contains 60,000 training images and 10,000 testing images. Each image is saved as a 28x28 matrix.\r\n\r\n<img src=\"./fig/MNIST.png\" width=\"650\">\r\n\r\n\r\n### 2. CIFAR10\r\n\r\nThe [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class.\r\n\r\n<img src=\"./fig/cifar10.png\" width=\"650\">\r\n\r\n\r\n\r\n### 3. Olivetti faces dataset\r\n\r\nThe [Olivetti](https://scikit-learn.org/0.19/datasets/olivetti_faces.html) faces dataset consists of 10 64x64 images for 40 distinct subjects.\r\n\r\n<img src=\"./fig/face.png\" width=\"650\">\r\n\r\n\r\n\r\n\r\n## Model\r\n\r\n\r\nA [VAE](https://arxiv.org/pdf/1312.6114.pdf) model contains a pair of encoder and decoder. An encoder <img src=\"./fig/theta.png\" width=\"8\"> compresses an 2D image *x* into a vector *z* in a lower dimension space, which is normally called the latent space, while the decoder <img src=\"./fig/phi.png\" width=\"10\"> receives the vectors in latent space, and outputs objects in the same space as the inputs of the encoder. The training goal is to make the composition of encoder and decoder to be \"as close to identity as possible\". Precisely, the loss function is:\r\n<img src=\"./fig/loss.png\" width=\"350\">,\r\nwhere <img src=\"./fig/DKL.png\" width=\"30\"> is the Kullback-Leibler divergence, and <img src=\"./fig/normal.png\" width=\"18\"> is the standard normal distribution. The first term measures how good the reconstruction is, and second term measures how close the normal distribution and q are. After training two applications will be granted. 
First, the encoder can perform dimensionality reduction. Second, the decoder can reproduce input images, or even generate new ones. We show the results of our experiments at the end.\r\n\r\n\r\n  - For our **encoder**, we fine-tune (a transfer learning technique) [ResNet-152](https://arxiv.org/abs/1512.03385), a [CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network) pretrained on ImageNet [ILSVRC-2012-CLS](http://www.image-net.org/challenges/LSVRC/2012/). Our **decoder** uses a transposed convolutional network. \r\n  \r\n  \r\n\r\n## Training \r\n\r\n- The input images are resized to **(channels, x-dim, y-dim) = (3, 224, 224)**, as required by the ResNet-152 model. \r\n- We use the Adam optimizer.\r\n   \r\n<img src=\"./fig/training_curve.png\" width=\"750\">\r\n\r\n\r\n## Usage \r\n\r\n### Prerequisites\r\n- [Python 3.6](https://www.python.org/)\r\n- [PyTorch 1.0.0](https://pytorch.org/)\r\n- [Numpy 1.15.0](http://www.numpy.org/)\r\n- [Sklearn 0.19.2](https://scikit-learn.org/stable/)\r\n- [Matplotlib](https://matplotlib.org/)\r\n\r\n\r\n### Model outputs\r\n\r\nWe save the labels (y), the resulting latent codes (z), model weights, and optimizer states.\r\n\r\n\r\n - Run plot_latent.ipynb to see the clustering results\r\n   \r\n - Run ResNetVAE_reconstruction.ipynb to reproduce or generate images\r\n  \r\n - Saved optimizer states make it convenient to resume training. \r\n\r\n\r\n\r\n## Results \r\n\r\n### Clustering\r\n\r\nSince the encoder compresses high-dimensional inputs into a low-dimensional latent space, we can use it to visualize how data points cluster. \r\n\r\n<img src=\"./fig/cluster_MNIST.png\" width=\"850\">\r\n<img src=\"./fig/cluster_cifar10.png\" width=\"850\">\r\n\r\n\r\n### Reproduce and generate images\r\n\r\nThe decoder reproduces the input images from the latent space. 
Moreover, it can generate new images that do not appear in the original datasets.\r\n\r\n<img src=\"./fig/reconstruction_MNIST.png\" width=\"450\">\r\n<img src=\"./fig/reconstruction_face.png\" width=\"450\">\r\n\r\n<img src=\"./fig/generated_MNIST.png\" width=\"550\">\r\n<img src=\"./fig/generated_face.png\" width=\"550\">\r\n"
  },
  {
    "path": "ResNetVAE_FACE.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\nimport torchvision.transforms as transforms\nimport torch.utils.data as data\nimport torchvision\nfrom torch.autograd import Variable\nimport matplotlib.pyplot as plt\nfrom modules import *\nfrom sklearn.datasets import fetch_olivetti_faces\nfrom torch.utils.data import Dataset, DataLoader, TensorDataset\nfrom skimage.transform import resize\nfrom sklearn.model_selection import train_test_split\nimport pickle\n\n# os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"   \n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\n# EncoderCNN architecture\nCNN_fc_hidden1, CNN_fc_hidden2 = 1024, 1024\nCNN_embed_dim = 256     # latent dim extracted by 2D CNN\nres_size = 224        # ResNet image size\ndropout_p = 0.2       # dropout probability\n\n\n# training parameters\nepochs = 100        # training epochs\nbatch_size = 50\nlearning_rate = 1e-3\nlog_interval = 10   # interval for displaying training info\n\n# save model\nsave_model_path = './results_Olivetti_face'\n\ndef check_mkdir(dir_name):\n    if not os.path.exists(dir_name):\n        os.mkdir(dir_name)\n\n\ndef loss_function(recon_x, x, mu, logvar):\n    # MSE = F.mse_loss(recon_x, x, reduction='sum')\n    MSE = F.binary_cross_entropy(recon_x, x, reduction='sum')\n    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())\n    return MSE + KLD\n\n\ndef train(log_interval, model, device, train_loader, optimizer, epoch):\n    # set model as training mode\n    model.train()\n\n    losses = []\n    all_X, all_y, all_z, all_mu, all_logvar = [], [], [], [], []\n    N_count = 0   # counting total trained sample in one epoch\n    for batch_idx, (X, y) in enumerate(train_loader):\n        # distribute data to device\n        X, y = X.to(device), y.to(device).view(-1, )\n        N_count += X.size(0)\n\n        optimizer.zero_grad()\n        X_reconst, z, mu, logvar = model(X)  # VAE\n        loss = 
loss_function(X_reconst, X, mu, logvar)\n        losses.append(loss.item())\n\n        loss.backward()\n        optimizer.step()\n        \n        all_X.extend(X.data.cpu().numpy())\n        all_y.extend(y.data.cpu().numpy())\n        all_z.extend(z.data.cpu().numpy())\n        all_mu.extend(mu.data.cpu().numpy())\n        all_logvar.extend(logvar.data.cpu().numpy())\n\n        # show information\n        if (batch_idx + 1) % log_interval == 0:\n            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch + 1, N_count, len(train_loader.dataset), 100. * (batch_idx + 1) / len(train_loader), loss.item()))\n    \n    all_X = np.stack(all_X, axis=0)\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, axis=0)\n\n    # save Pytorch models of best record\n    torch.save(model.state_dict(), os.path.join(save_model_path, 'model_epoch{}.pth'.format(epoch + 1)))  # save motion_encoder\n    torch.save(optimizer.state_dict(), os.path.join(save_model_path, 'optimizer_epoch{}.pth'.format(epoch + 1)))      # save optimizer\n    print(\"Epoch {} model saved!\".format(epoch + 1))\n\n    return all_X, all_y, all_z, all_mu, all_logvar, losses\n\n\ndef validation(model, device, optimizer, test_loader):\n    # set model as testing mode\n    model.eval()\n\n    test_loss = 0\n    all_X, all_y, all_z, all_mu, all_logvar = [], [], [], [], []\n    with torch.no_grad():\n        for X, y in test_loader:\n            # distribute data to device\n            X, y = X.to(device), y.to(device).view(-1, )\n            X_reconst, z, mu, logvar = model(X)\n\n            loss = loss_function(X_reconst, X, mu, logvar)\n            test_loss += loss.item()  # sum up batch loss\n\n            all_X.extend(X.data.cpu().numpy())\n            all_y.extend(y.data.cpu().numpy())\n            all_z.extend(z.data.cpu().numpy())\n            
all_mu.extend(mu.data.cpu().numpy())\n            all_logvar.extend(logvar.data.cpu().numpy())\n\n    test_loss /= len(test_loader.dataset)\n    all_X = np.stack(all_X, axis=0)\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, axis=0)\n\n    # show information\n    print('\\nTest set ({:d} samples): Average loss: {:.4f}\\n'.format(len(test_loader.dataset), test_loss))\n    return all_X, all_y, all_z, all_mu, all_logvar, test_loss\n\n\n# Detect devices\nuse_cuda = torch.cuda.is_available()                   # check if GPU exists\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")   # use CPU or GPU\n\n# Data loading parameters\nparams = {'batch_size': batch_size, 'shuffle': True, 'num_workers': 2, 'pin_memory': True} if use_cuda else {}\n\n\n# Load the faces datasets\ndata = fetch_olivetti_faces()\nface_img = data.images.reshape((data.images.shape[0], data.images.shape[1], data.images.shape[2]))\nface_img_resized = [np.tile(np.expand_dims(resize(face_img[i, :, :], (res_size, res_size), anti_aliasing=True), axis=0), (3, 1, 1)) for i in range(face_img.shape[0])]\nface_img_resized = np.stack(face_img_resized, axis=0)\nface_img_resized = torch.from_numpy(face_img_resized).float()\nlabels = torch.from_numpy(data.target)\n\nolivetti_data = TensorDataset(face_img_resized, labels)\n# Data loader (input pipeline)\ntrain_loader = torch.utils.data.DataLoader(dataset=olivetti_data, **params)\nvalid_loader = torch.utils.data.DataLoader(dataset=olivetti_data, **params)\n\n# Create model\nresnet_vae = ResNet_VAE(fc_hidden1=CNN_fc_hidden1, fc_hidden2=CNN_fc_hidden2, drop_p=dropout_p, CNN_embed_dim=CNN_embed_dim).to(device)\n\nprint(\"Using\", torch.cuda.device_count(), \"GPU!\")\nmodel_params = list(resnet_vae.parameters())\noptimizer = torch.optim.Adam(model_params, lr=learning_rate)\n\n\n# record training process\nepoch_train_losses = []\nepoch_test_losses = 
[]\ncheck_mkdir(save_model_path)\n\n# start training\nfor epoch in range(epochs):\n    # train, test model\n    X_train, y_train, z_train, mu_train, logvar_train, train_losses = train(log_interval, resnet_vae, device, train_loader, optimizer, epoch)\n    X_test, y_test, z_test, mu_test, logvar_test, epoch_test_loss = validation(resnet_vae, device, optimizer, valid_loader)\n\n    # save results\n    epoch_train_losses.append(train_losses)\n    epoch_test_losses.append(epoch_test_loss)\n\n    # save all train test results\n    A = np.array(epoch_train_losses)\n    C = np.array(epoch_test_losses)\n    \n    np.save(os.path.join(save_model_path, 'ResNet_VAE_training_loss.npy'), A)\n    np.save(os.path.join(save_model_path, 'X_Olivetti_train_epoch{}.npy'.format(epoch + 1)), X_train)\n    np.save(os.path.join(save_model_path, 'y_Olivetti_train_epoch{}.npy'.format(epoch + 1)), y_train)\n    np.save(os.path.join(save_model_path, 'z_Olivetti_train_epoch{}.npy'.format(epoch + 1)), z_train)\n\n"
  },
  {
    "path": "ResNetVAE_MNIST.py",
    "content": "import os\nimport glob\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\nimport torchvision.transforms as transforms\nimport torch.utils.data as data\nimport torchvision\nfrom torch.autograd import Variable\nimport matplotlib.pyplot as plt\nfrom modules import *\nfrom sklearn.model_selection import train_test_split\nimport pickle\n\n# os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"   \n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\n# EncoderCNN architecture\nCNN_fc_hidden1, CNN_fc_hidden2 = 1024, 1024\nCNN_embed_dim = 256     # latent dim extracted by 2D CNN\nres_size = 224        # ResNet image size\ndropout_p = 0.2       # dropout probability\n\n# training parameters\nepochs = 20        # training epochs\nbatch_size = 50\nlearning_rate = 1e-3\nlog_interval = 10   # interval for displaying training info\n\n\n# save model\nsave_model_path = './results_MNIST'\n\n\ndef check_mkdir(dir_name):\n    if not os.path.exists(dir_name):\n        os.mkdir(dir_name)\n\ndef loss_function(recon_x, x, mu, logvar):\n    # MSE = F.mse_loss(recon_x, x, reduction='sum')\n    MSE = F.binary_cross_entropy(recon_x, x, reduction='sum')\n    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())\n    return MSE + KLD\n\n\ndef train(log_interval, model, device, train_loader, optimizer, epoch):\n    # set model as training mode\n    model.train()\n\n    losses = []\n    all_y, all_z, all_mu, all_logvar = [], [], [], []\n    N_count = 0   # counting total trained sample in one epoch\n    for batch_idx, (X, y) in enumerate(train_loader):\n        # distribute data to device\n        X, y = X.to(device), y.to(device).view(-1, )\n        N_count += X.size(0)\n\n        optimizer.zero_grad()\n        X_reconst, z, mu, logvar = model(X)  # VAE\n        loss = loss_function(X_reconst, X, mu, logvar)\n        losses.append(loss.item())\n\n        loss.backward()\n        optimizer.step()\n\n   
     all_y.extend(y.data.cpu().numpy())\n        all_z.extend(z.data.cpu().numpy())\n        all_mu.extend(mu.data.cpu().numpy())\n        all_logvar.extend(logvar.data.cpu().numpy())\n\n        # show information\n        if (batch_idx + 1) % log_interval == 0:\n            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch + 1, N_count, len(train_loader.dataset), 100. * (batch_idx + 1) / len(train_loader), loss.item()))\n\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, axis=0)\n\n    # save Pytorch models of best record\n    torch.save(model.state_dict(), os.path.join(save_model_path, 'model_epoch{}.pth'.format(epoch + 1)))  # save motion_encoder\n    torch.save(optimizer.state_dict(), os.path.join(save_model_path, 'optimizer_epoch{}.pth'.format(epoch + 1)))      # save optimizer\n    print(\"Epoch {} model saved!\".format(epoch + 1))\n\n    return X.data.cpu().numpy(), all_y, all_z, all_mu, all_logvar, losses\n\n\ndef validation(model, device, optimizer, test_loader):\n    # set model as testing mode\n    model.eval()\n\n    test_loss = 0\n    all_y, all_z, all_mu, all_logvar = [], [], [], []\n    with torch.no_grad():\n        for X, y in test_loader:\n            # distribute data to device\n            X, y = X.to(device), y.to(device).view(-1, )\n            X_reconst, z, mu, logvar = model(X)\n\n            loss = loss_function(X_reconst, X, mu, logvar)\n            test_loss += loss.item()  # sum up batch loss\n\n            all_y.extend(y.data.cpu().numpy())\n            all_z.extend(z.data.cpu().numpy())\n            all_mu.extend(mu.data.cpu().numpy())\n            all_logvar.extend(logvar.data.cpu().numpy())\n\n    test_loss /= len(test_loader.dataset)\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, axis=0)\n\n  
  # show information\n    print('\\nTest set ({:d} samples): Average loss: {:.4f}\\n'.format(len(test_loader.dataset), test_loss))\n    return X.data.cpu().numpy(), all_y, all_z, all_mu, all_logvar, test_loss\n\n\n# Detect devices\nuse_cuda = torch.cuda.is_available()                   # check if GPU exists\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")   # use CPU or GPU\n\n# Data loading parameters\nparams = {'batch_size': batch_size, 'shuffle': True, 'num_workers': 4, 'pin_memory': True} if use_cuda else {}\ntransform = transforms.Compose([transforms.Resize([res_size, res_size]),\n                                transforms.ToTensor(),\n                                transforms.Lambda(lambda x: x.repeat(3, 1, 1)),  # gray -> GRB 3 channel (lambda function)\n                                transforms.Normalize(mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0])])  # for grayscale images\n\n# MNIST dataset (images and labels)\nMNIST_train_dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transform, download=True)\nMNIST_test_dataset = torchvision.datasets.MNIST(root='./data', train=False, transform=transform)\n\n# Data loader (input pipeline)\ntrain_loader = torch.utils.data.DataLoader(dataset=MNIST_train_dataset, batch_size=batch_size, shuffle=True)\nvalid_loader = torch.utils.data.DataLoader(dataset=MNIST_test_dataset, batch_size=batch_size, shuffle=False)\n\n# Create model\nresnet_vae = ResNet_VAE(fc_hidden1=CNN_fc_hidden1, fc_hidden2=CNN_fc_hidden2, drop_p=dropout_p, CNN_embed_dim=CNN_embed_dim).to(device)\n\nprint(\"Using\", torch.cuda.device_count(), \"GPU!\")\nmodel_params = list(resnet_vae.parameters())\noptimizer = torch.optim.Adam(model_params, lr=learning_rate)\n\n\n# record training process\nepoch_train_losses = []\nepoch_test_losses = []\ncheck_mkdir(save_model_path)\n\n# start training\nfor epoch in range(epochs):\n\n    # train, test model\n    X_train, y_train, z_train, mu_train, logvar_train, train_losses = 
train(log_interval, resnet_vae, device, train_loader, optimizer, epoch)\n    X_test, y_test, z_test, mu_test, logvar_test, epoch_test_loss = validation(resnet_vae, device, optimizer, valid_loader)\n\n    # save results\n    epoch_train_losses.append(train_losses)\n    epoch_test_losses.append(epoch_test_loss)\n\n    \n    # save all train test results\n    A = np.array(epoch_train_losses)\n    C = np.array(epoch_test_losses)\n    \n    np.save(os.path.join(save_model_path, 'ResNet_VAE_training_loss.npy'), A)\n    np.save(os.path.join(save_model_path, 'X_MNIST_train_epoch{}.npy'.format(epoch + 1)), X_train) #save last batch\n    np.save(os.path.join(save_model_path, 'y_MNIST_train_epoch{}.npy'.format(epoch + 1)), y_train)\n    np.save(os.path.join(save_model_path, 'z_MNIST_train_epoch{}.npy'.format(epoch + 1)), z_train)"
  },
  {
    "path": "ResNetVAE_cifar10.py",
    "content": "import os\nimport glob\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\nimport torchvision.transforms as transforms\nimport torch.utils.data as data\nimport torchvision\nfrom torch.autograd import Variable\nimport matplotlib.pyplot as plt\nfrom modules import *\nfrom sklearn.model_selection import train_test_split\nimport pickle\n\n# os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"   \n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\n# EncoderCNN architecture\nCNN_fc_hidden1, CNN_fc_hidden2 = 1024, 1024\nCNN_embed_dim = 256     # latent dim extracted by 2D CNN\nres_size = 224        # ResNet image size\ndropout_p = 0.2       # dropout probability\n\n\n# training parameters\nepochs = 20        # training epochs\nbatch_size = 50\nlearning_rate = 1e-3\nlog_interval = 10   # interval for displaying training info\n\n# save model\nsave_model_path = './results_cifar10'\n\ndef check_mkdir(dir_name):\n    if not os.path.exists(dir_name):\n        os.mkdir(dir_name)\n\n\ndef loss_function(recon_x, x, mu, logvar):\n    # MSE = F.mse_loss(recon_x, x, reduction='sum')\n    MSE = F.binary_cross_entropy(recon_x, x, reduction='sum')\n    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())\n    return MSE + KLD\n\n\ndef train(log_interval, model, device, train_loader, optimizer, epoch):\n    # set model as training mode\n    model.train()\n\n    losses = []\n    all_y, all_z, all_mu, all_logvar = [], [], [], []\n    N_count = 0   # counting total trained sample in one epoch\n    for batch_idx, (X, y) in enumerate(train_loader):\n        # distribute data to device\n        X, y = X.to(device), y.to(device).view(-1, )\n        N_count += X.size(0)\n\n        optimizer.zero_grad()\n        X_reconst, z, mu, logvar = model(X)  # VAE\n        loss = loss_function(X_reconst, X, mu, logvar)\n        losses.append(loss.item())\n\n        loss.backward()\n        optimizer.step()\n\n 
       all_y.extend(y.data.cpu().numpy())\n        all_z.extend(z.data.cpu().numpy())\n        all_mu.extend(mu.data.cpu().numpy())\n        all_logvar.extend(logvar.data.cpu().numpy())\n\n        # show information\n        if (batch_idx + 1) % log_interval == 0:\n            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch + 1, N_count, len(train_loader.dataset), 100. * (batch_idx + 1) / len(train_loader), loss.item()))\n\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, axis=0)\n\n    # save Pytorch models of best record\n    torch.save(model.state_dict(), os.path.join(save_model_path, 'model_epoch{}.pth'.format(epoch + 1)))  # save motion_encoder\n    torch.save(optimizer.state_dict(), os.path.join(save_model_path, 'optimizer_epoch{}.pth'.format(epoch + 1)))      # save optimizer\n    print(\"Epoch {} model saved!\".format(epoch + 1))\n\n    return X_reconst.data.cpu().numpy(), all_y, all_z, all_mu, all_logvar, losses\n\n\ndef validation(model, device, optimizer, test_loader):\n    # set model as testing mode\n    model.eval()\n\n    test_loss = 0\n    all_y, all_z, all_mu, all_logvar = [], [], [], []\n    with torch.no_grad():\n        for X, y in test_loader:\n            # distribute data to device\n            X, y = X.to(device), y.to(device).view(-1, )\n            X_reconst, z, mu, logvar = model(X)\n\n            loss = loss_function(X_reconst, X, mu, logvar)\n            test_loss += loss.item()  # sum up batch loss\n\n            all_y.extend(y.data.cpu().numpy())\n            all_z.extend(z.data.cpu().numpy())\n            all_mu.extend(mu.data.cpu().numpy())\n            all_logvar.extend(logvar.data.cpu().numpy())\n\n    test_loss /= len(test_loader.dataset)\n    all_y = np.stack(all_y, axis=0)\n    all_z = np.stack(all_z, axis=0)\n    all_mu = np.stack(all_mu, axis=0)\n    all_logvar = np.stack(all_logvar, 
axis=0)\n\n    # show information\n    print('\\nTest set ({:d} samples): Average loss: {:.4f}\\n'.format(len(test_loader.dataset), test_loss))\n    return X_reconst.data.cpu().numpy(), all_y, all_z, all_mu, all_logvar, test_loss\n\n\n# Detect devices\nuse_cuda = torch.cuda.is_available()                   # check if GPU exists\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")   # use CPU or GPU\n\n# Data loading parameters\nparams = {'batch_size': batch_size, 'shuffle': True, 'num_workers': 2, 'pin_memory': True} if use_cuda else {}\n# transform = transforms.Compose([transforms.Resize([res_size, res_size]),\n#                                 transforms.ToTensor(),\n#                                 transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])\n\ntransform = transforms.Compose([transforms.Resize([res_size, res_size]),\n                                transforms.ToTensor(),\n                                transforms.Normalize(mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0])])\n\n# cifar10 dataset (images and labels)\ncifar10_train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)\ncifar10_test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)\n\n\n# classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')\n\n# Data loader (input pipeline)\ntrain_loader = torch.utils.data.DataLoader(dataset=cifar10_train_dataset, batch_size=batch_size, shuffle=True)\nvalid_loader = torch.utils.data.DataLoader(dataset=cifar10_test_dataset, batch_size=batch_size, shuffle=False)\n\n# Create model\nresnet_vae = ResNet_VAE(fc_hidden1=CNN_fc_hidden1, fc_hidden2=CNN_fc_hidden2, drop_p=dropout_p, CNN_embed_dim=CNN_embed_dim).to(device)\n\nprint(\"Using\", torch.cuda.device_count(), \"GPU!\")\nmodel_params = list(resnet_vae.parameters())\noptimizer = torch.optim.Adam(model_params, lr=learning_rate)\n\n\n# record training 
process\nepoch_train_losses = []\nepoch_test_losses = []\ncheck_mkdir(save_model_path)\n\n# start training\nfor epoch in range(epochs):\n    # train, test model\n    X_reconst_train, y_train, z_train, mu_train, logvar_train, train_losses = train(log_interval, resnet_vae, device, train_loader, optimizer, epoch)\n    X_reconst_test, y_test, z_test, mu_test, logvar_test, epoch_test_loss = validation(resnet_vae, device, optimizer, valid_loader)\n\n    # save results\n    epoch_train_losses.append(train_losses)\n    epoch_test_losses.append(epoch_test_loss)\n\n    # save all train test results\n    A = np.array(epoch_train_losses)\n    C = np.array(epoch_test_losses)\n    \n    np.save(os.path.join(save_model_path, 'ResNet_VAE_training_loss.npy'), A)\n    np.save(os.path.join(save_model_path, 'y_cifar10_train_epoch{}.npy'.format(epoch + 1)), y_train)\n    np.save(os.path.join(save_model_path, 'z_cifar10_train_epoch{}.npy'.format(epoch + 1)), z_train)\n"
  },
  {
    "path": "ResNetVAE_reconstruction.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"%matplotlib inline\\n\",\n    \"import matplotlib\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import os\\n\",\n    \"import glob\\n\",\n    \"import numpy as np\\n\",\n    \"import torch\\n\",\n    \"import torch.nn as nn\\n\",\n    \"import torch.nn.functional as F\\n\",\n    \"import torchvision.models as models\\n\",\n    \"import torchvision.transforms as transforms\\n\",\n    \"import torch.utils.data as data\\n\",\n    \"import torchvision\\n\",\n    \"from torch.autograd import Variable\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from modules import *\\n\",\n    \"from sklearn.model_selection import train_test_split\\n\",\n    \"import pickle\\n\",\n    \"from sklearn.datasets import fetch_olivetti_faces\\n\",\n    \"from torch.utils.data import Dataset, DataLoader, TensorDataset\\n\",\n    \"from skimage.transform import resize\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"def decoder(model, device, z):\\n\",\n    \"    model.eval()\\n\",\n    \"    z = Variable(torch.FloatTensor(z)).to(device)\\n\",\n    \"    new_images = model.decode(z).squeeze_().data.cpu().numpy().transpose((1, 2, 0))\\n\",\n    \"    return new_images\\n\",\n    \"\\n\",\n    \"saved_model_path = './results_Olivetti_face'\\n\",\n    \"# saved_model_path = './results_MNIST'\\n\",\n    \"\\n\",\n    \"exp = 'Olivetti'\\n\",\n    \"# exp = 'MNIST'\\n\",\n    \"\\n\",\n    \"# use same ResNet Encoder saved earlier!\\n\",\n    \"CNN_fc_hidden1, CNN_fc_hidden2 = 1024, 1024\\n\",\n    \"CNN_embed_dim = 256\\n\",\n    \"res_size = 224        # ResNet image size\\n\",\n    \"dropout_p = 0.2       # dropout probability\\n\",\n    \"\\n\",\n    \"epoch = 20\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"use_cuda = torch.cuda.is_available()                   # check if GPU exists\\n\",\n    \"device = 
torch.device(\\\"cuda\\\" if use_cuda else \\\"cpu\\\")   # use CPU or GPU\\n\",\n    \"\\n\",\n    \"# reload ResNetVAE model\\n\",\n    \"resnet_vae = ResNet_VAE(fc_hidden1=CNN_fc_hidden1, fc_hidden2=CNN_fc_hidden2, drop_p=dropout_p, CNN_embed_dim=CNN_embed_dim).to(device)\\n\",\n    \"resnet_vae.load_state_dict(torch.load(os.path.join(saved_model_path, 'model_epoch{}.pth'.format(epoch))))\\n\",\n    \"\\n\",\n    \"print('ResNetVAE epoch {} model reloaded!'.format(epoch))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Reconstruction \"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"z_train = np.load(os.path.join(saved_model_path, 'z_{}_train_epoch{}.npy').format(exp, epoch))\\n\",\n    \"X_train = np.load(os.path.join(saved_model_path, 'X_{}_train_epoch{}.npy').format(exp, epoch))\\n\",\n    \"\\n\",\n    \"ind = 1\\n\",\n    \"zz = torch.from_numpy(z_train[ind]).view(1, -1)\\n\",\n    \"X = np.transpose(X_train[ind], (1, 2, 0))\\n\",\n    \"\\n\",\n    \"new_imgs = decoder(resnet_vae, device, zz)\\n\",\n    \"\\n\",\n    \"fig = plt.figure(figsize=(10, 10))\\n\",\n    \"\\n\",\n    \"plt.subplot(1, 2, 1)\\n\",\n    \"plt.imshow(X)\\n\",\n    \"plt.title('original')\\n\",\n    \"plt.axis('off')\\n\",\n    \"\\n\",\n    \"plt.subplot(1, 2, 2)\\n\",\n    \"plt.imshow(new_imgs)\\n\",\n    \"plt.title('reconstructed')\\n\",\n    \"plt.axis('off')\\n\",\n    \"plt.savefig(\\\"./reconstruction_{}.png\\\".format(exp), bbox_inches='tight', dpi=600)\\n\",\n    \"plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Generate new images from latent points\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"scrolled\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# choose two original images\\n\",\n  
  \"sample1, sample2 = 0, 1\\n\",\n    \"w = 0.4 # weight for fusing two images\\n\",\n    \"\\n\",\n    \"X1 = np.transpose(X_train[-sample1], (1, 2, 0))\\n\",\n    \"X2 = np.transpose(X_train[-sample2], (1, 2, 0))\\n\",\n    \"\\n\",\n    \"# generate image using decoder\\n\",\n    \"z_train = np.load(os.path.join(saved_model_path, 'z_{}_train_epoch{}.npy').format(exp, epoch))\\n\",\n    \"z = z_train[-sample1] * w + z_train[-sample2] * (1 - w)\\n\",\n    \"new_imgs = decoder(resnet_vae, device, torch.from_numpy(z).view(1, -1))\\n\",\n    \"\\n\",\n    \"fig = plt.figure(figsize=(15, 15))\\n\",\n    \"\\n\",\n    \"plt.subplot(1, 3, 1)\\n\",\n    \"plt.imshow(X1)\\n\",\n    \"plt.title('original 1')\\n\",\n    \"plt.axis('off')\\n\",\n    \"\\n\",\n    \"plt.subplot(1, 3, 2)\\n\",\n    \"plt.imshow(X2)\\n\",\n    \"plt.title('original 2')\\n\",\n    \"plt.axis('off')\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"plt.subplot(1, 3, 3)\\n\",\n    \"plt.imshow(new_imgs)\\n\",\n    \"plt.title('new image')\\n\",\n    \"plt.axis('off')\\n\",\n    \"plt.savefig(\\\"./generated_{}.png\\\".format(exp), bbox_inches='tight', dpi=600)\\n\",\n    \"plt.show()\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.7.3\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  },
  {
    "path": "modules.py",
    "content": "import os\nimport numpy as np\nfrom PIL import Image\nfrom torch.utils import data\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\nfrom torch.autograd import Variable\nimport torchvision.transforms as transforms\n\n\nclass Dataset(data.Dataset):\n    \"Characterizes a dataset for PyTorch\"\n    def __init__(self, filenames, labels, transform=None):\n        \"Initialization\"\n        self.filenames = filenames\n        self.labels = labels\n        self.transform = transform\n\n    def __len__(self):\n        \"Denotes the total number of samples\"\n        return len(self.filenames)\n\n\n    def __getitem__(self, index):\n        \"Generates one sample of data\"\n        # Select sample\n        filename = self.filenames[index]\n        X = Image.open(filename)\n\n        if self.transform:\n            X = self.transform(X)     # transform\n\n        y = torch.LongTensor([self.labels[index]])\n        return X, y\n\n## ---------------------- end of Dataloaders ---------------------- ##\n\ndef conv2D_output_size(img_size, padding, kernel_size, stride):\n    # compute output shape of conv2D\n    outshape = (np.floor((img_size[0] + 2 * padding[0] - (kernel_size[0] - 1) - 1) / stride[0] + 1).astype(int),\n                np.floor((img_size[1] + 2 * padding[1] - (kernel_size[1] - 1) - 1) / stride[1] + 1).astype(int))\n    return outshape\n\ndef convtrans2D_output_size(img_size, padding, kernel_size, stride):\n    # compute output shape of conv2D\n    outshape = ((img_size[0] - 1) * stride[0] - 2 * padding[0] + kernel_size[0],\n                (img_size[1] - 1) * stride[1] - 2 * padding[1] + kernel_size[1])\n    return outshape\n\n## ---------------------- ResNet VAE ---------------------- ##\n\nclass ResNet_VAE(nn.Module):\n    def __init__(self, fc_hidden1=1024, fc_hidden2=768, drop_p=0.3, CNN_embed_dim=256):\n        super(ResNet_VAE, self).__init__()\n\n        self.fc_hidden1, 
self.fc_hidden2, self.CNN_embed_dim = fc_hidden1, fc_hidden2, CNN_embed_dim\n        self.drop_p = drop_p\n\n        # CNN architectures\n        self.ch1, self.ch2, self.ch3, self.ch4 = 16, 32, 64, 128\n        self.k1, self.k2, self.k3, self.k4 = (5, 5), (3, 3), (3, 3), (3, 3)      # 2d kernel size\n        self.s1, self.s2, self.s3, self.s4 = (2, 2), (2, 2), (2, 2), (2, 2)      # 2d strides\n        self.pd1, self.pd2, self.pd3, self.pd4 = (0, 0), (0, 0), (0, 0), (0, 0)  # 2d padding\n\n        # encoding components\n        resnet = models.resnet152(pretrained=True)\n        modules = list(resnet.children())[:-1]      # delete the last fc layer.\n        self.resnet = nn.Sequential(*modules)\n        self.fc1 = nn.Linear(resnet.fc.in_features, self.fc_hidden1)\n        self.bn1 = nn.BatchNorm1d(self.fc_hidden1, momentum=0.01)\n        self.fc2 = nn.Linear(self.fc_hidden1, self.fc_hidden2)\n        self.bn2 = nn.BatchNorm1d(self.fc_hidden2, momentum=0.01)\n        # Latent vectors mu and sigma\n        self.fc3_mu = nn.Linear(self.fc_hidden2, self.CNN_embed_dim)      # output = CNN embedding latent variables\n        self.fc3_logvar = nn.Linear(self.fc_hidden2, self.CNN_embed_dim)  # output = CNN embedding latent variables\n\n        # Sampling vector\n        self.fc4 = nn.Linear(self.CNN_embed_dim, self.fc_hidden2)\n        self.fc_bn4 = nn.BatchNorm1d(self.fc_hidden2)\n        self.fc5 = nn.Linear(self.fc_hidden2, 64 * 4 * 4)\n        self.fc_bn5 = nn.BatchNorm1d(64 * 4 * 4)\n        self.relu = nn.ReLU(inplace=True)\n\n        # Decoder\n        self.convTrans6 = nn.Sequential(\n            nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=self.k4, stride=self.s4,\n                               padding=self.pd4),\n            nn.BatchNorm2d(32, momentum=0.01),\n            nn.ReLU(inplace=True),\n        )\n        self.convTrans7 = nn.Sequential(\n            nn.ConvTranspose2d(in_channels=32, out_channels=8, kernel_size=self.k3, stride=self.s3,\n                               padding=self.pd3),\n            nn.BatchNorm2d(8, momentum=0.01),\n            nn.ReLU(inplace=True),\n        )\n\n        self.convTrans8 = nn.Sequential(\n            nn.ConvTranspose2d(in_channels=8, out_channels=3, kernel_size=self.k2, stride=self.s2,\n                               padding=self.pd2),\n            nn.BatchNorm2d(3, momentum=0.01),\n            nn.Sigmoid()    # y = (y1, y2, y3) \\in [0, 1]^3\n        )\n\n    def encode(self, x):\n        x = self.resnet(x)  # ResNet\n        x = x.view(x.size(0), -1)  # flatten output of conv\n\n        # FC layers\n        x = self.bn1(self.fc1(x))\n        x = self.relu(x)\n        x = self.bn2(self.fc2(x))\n        x = self.relu(x)\n        # x = F.dropout(x, p=self.drop_p, training=self.training)\n        mu, logvar = self.fc3_mu(x), self.fc3_logvar(x)\n        return mu, logvar\n\n    def reparameterize(self, mu, logvar):\n        if self.training:\n            std = logvar.mul(0.5).exp_()\n            eps = torch.randn_like(std)\n            return eps.mul(std).add_(mu)\n        else:\n            return mu\n\n    def decode(self, z):\n        x = self.relu(self.fc_bn4(self.fc4(z)))\n        x = self.relu(self.fc_bn5(self.fc5(x))).view(-1, 64, 4, 4)\n        x = self.convTrans6(x)\n        x = self.convTrans7(x)\n        x = self.convTrans8(x)\n        x = F.interpolate(x, size=(224, 224), mode='bilinear', align_corners=False)\n        return x\n\n    def forward(self, x):\n        mu, logvar = self.encode(x)\n        z = self.reparameterize(mu, logvar)\n        x_reconst = self.decode(z)\n\n        return x_reconst, z, mu, logvar\n"
  },
  {
    "path": "plot_latent.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"scrolled\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"%matplotlib inline\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"from mpl_toolkits.mplot3d import Axes3D\\n\",\n    \"import matplotlib.cm as cm\\n\",\n    \"from sklearn.manifold import TSNE\\n\",\n    \"import numpy as np\\n\",\n    \"import pickle\\n\",\n    \"\\n\",\n    \"      \\n\",\n    \"epoch = 20\\n\",\n    \"exp = 'cifar10'\\n\",\n    \"# exp = 'MNIST'\\n\",\n    \"\\n\",\n    \"N = 6000 # image number\\n\",\n    \"\\n\",\n    \"y_train = np.load('./results_{}/y_{}_train_epoch{}.npy'.format(exp, exp, epoch))\\n\",\n    \"z_train = np.load('./results_{}/z_{}_train_epoch{}.npy'.format(exp, exp, epoch))\\n\",\n    \"classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']  # cifar10\\n\",\n    \"# classes = np.arange(10) #MNIST\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Direct projection of latent space\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"y_train = y_train[:N]\\n\",\n    \"z_train = z_train[:N]\\n\",\n    \"\\n\",\n    \"fig = plt.figure(figsize=(12, 10))\\n\",\n    \"plots = []\\n\",\n    \"markers = ['o', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']\\n\",\n    \"for i, c in enumerate(classes):\\n\",\n    \"    ind = (y_train == i).tolist() or ([j < N // len(classes) for j in range(len(y_train))])\\n\",\n    \"    color = cm.jet([i / len(classes)] * sum(ind))\\n\",\n    \"    plots.append(plt.scatter(z_train[ind, 1], z_train[ind, 2], marker=markers[i], c=color, s=8, label=i))\\n\",\n    \"\\n\",\n    \"plt.axis('off')\\n\",\n    \"plt.legend(plots, classes, fontsize=14, loc='upper right')\\n\",\n    \"plt.title('{} (direct projection: {}-dim -> 
2-dim)'.format(exp, z_train.shape[1]), fontsize=14)\\n\",\n    \"plt.savefig(\\\"./ResNetVAE_{}_direct_plot.png\\\".format(exp), bbox_inches='tight', dpi=600)\\n\",\n    \"plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Use t-SNE for dimension reduction\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### compressed to 2-dimension\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"z_embed = TSNE(n_components=2, n_iter=12000).fit_transform(z_train[:N])\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"fig = plt.figure(figsize=(12, 10))\\n\",\n    \"plots = []\\n\",\n    \"markers = ['o', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']  # select different markers\\n\",\n    \"for i, c in enumerate(classes):\\n\",\n    \"    ind = (y_train[:N] == i).tolist()\\n\",\n    \"    color = cm.jet([i / len(classes)] * sum(ind))\\n\",\n    \"    # plot each category one at a time \\n\",\n    \"    plots.append(plt.scatter(z_embed[ind, 0], z_embed[ind, 1], c=color, marker=markers[i], s=8, label=i))\\n\",\n    \"\\n\",\n    \"plt.axis('off')\\n\",\n    \"plt.xlim(-150, 150)\\n\",\n    \"plt.ylim(-150, 150)\\n\",\n    \"plt.legend(plots, classes, fontsize=14, loc='upper right')\\n\",\n    \"plt.title('{} (t-SNE: {}-dim -> 2-dim)'.format(exp, z_train.shape[1]), fontsize=14)\\n\",\n    \"plt.savefig(\\\"./ResNetVAE_{}_embedded_plot.png\\\".format(exp), bbox_inches='tight', dpi=600)\\n\",\n    \"plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"### compressed to 3-dimension\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    
\"z_embed3D = TSNE(n_components=3, n_iter=12000).fit_transform(z_train[:N])\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"fig = plt.figure(figsize=(12, 10))\\n\",\n    \"ax = fig.add_subplot(111, projection='3d')\\n\",\n    \"\\n\",\n    \"plots = []\\n\",\n    \"markers = ['o', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']  # select different markers\\n\",\n    \"for i, c in enumerate(classes):\\n\",\n    \"    ind = (y_train[:N] == i).tolist()\\n\",\n    \"    color = cm.jet([i / len(classes)] * sum(ind))\\n\",\n    \"    # plot each category one at a time \\n\",\n    \"    ax.scatter(z_embed3D[ind, 0], z_embed3D[ind, 1], c=color, marker=markers[i], s=8, label=i)\\n\",\n    \"\\n\",\n    \"ax.axis('on')\\n\",\n    \"\\n\",\n    \"r_max = 20\\n\",\n    \"r_min = -r_max\\n\",\n    \"\\n\",\n    \"ax.set_xlim(r_min, r_max)\\n\",\n    \"ax.set_ylim(r_min, r_max)\\n\",\n    \"ax.set_zlim(r_min, r_max)\\n\",\n    \"ax.set_xlabel('z-dim 1')\\n\",\n    \"ax.set_ylabel('z-dim 2')\\n\",\n    \"ax.set_zlabel('z-dim 3')\\n\",\n    \"ax.set_title('{} (t-SNE: {}-dim -> 3-dim)'.format(exp, z_train.shape[1]), fontsize=14)\\n\",\n    \"ax.legend(plots, classes, fontsize=14, loc='upper right')\\n\",\n    \"plt.savefig(\\\"./ResNetVAE_{}_embedded_3Dplot.png\\\".format(exp), bbox_inches='tight', dpi=600)\\n\",\n    \"plt.show()\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python 3\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": 
\"ipython3\",\n   \"version\": \"3.7.3\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  }
]