[
  {
    "path": "FQ-BigGAN/BigGAN.py",
    "content": "import numpy as np\nimport math\nimport functools\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\nfrom vq_layer import Quantize\nimport layers\nfrom sync_batchnorm import SynchronizedBatchNorm2d as SyncBatchNorm2d\n\n\n# Architectures for G\n# Attention is passed in in the format '32_64' to mean applying an attention\n# block at both resolution 32x32 and 64x64. Just '64' will apply at 64x64.\ndef G_arch(ch=64, attention='64', ksize='333333', dilation='111111'):\n\tarch = {}\n\tarch[512] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2, 1]],\n\t             'out_channels' : [ch * item for item in [16,  8, 8, 4, 2, 1, 1]],\n\t             'upsample' : [True] * 7,\n\t             'resolution' : [8, 16, 32, 64, 128, 256, 512],\n\t             'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n\t                            for i in range(3,10)}}\n\tarch[256] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2]],\n\t             'out_channels' : [ch * item for item in [16,  8, 8, 4, 2, 1]],\n\t             'upsample' : [True] * 6,\n\t             'resolution' : [8, 16, 32, 64, 128, 256],\n\t             'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n\t                            for i in range(3,9)}}\n\tarch[128] = {'in_channels' :  [ch * item for item in [16, 16, 8, 4, 2]],\n\t             'out_channels' : [ch * item for item in [16, 8, 4, 2, 1]],\n\t             'upsample' : [True] * 5,\n\t             'resolution' : [8, 16, 32, 64, 128],\n\t             'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n\t                            for i in range(3,8)}}\n\tarch[64]  = {'in_channels' :  [ch * item for item in [16, 16, 8, 4]],\n\t             'out_channels' : [ch * item for item in [16, 8, 4, 2]],\n\t             'upsample' : [True] * 4,\n\t             'resolution' : [8, 16, 32, 64],\n\t             'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n\t                            for i in range(3,7)}}\n\tarch[32]  = {'in_channels' :  [ch * item for item in [4, 4, 4]],\n\t             'out_channels' : [ch * item for item in [4, 4, 4]],\n\t             'upsample' : [True] * 3,\n\t             'resolution' : [8, 16, 32],\n\t             'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n\t                            for i in range(3,6)}}\n\n\treturn arch\n\nclass Generator(nn.Module):\n\tdef __init__(self, G_ch=64, dim_z=128, bottom_width=4, resolution=128,\n\t             G_kernel_size=3, G_attn='64', n_classes=1000,\n\t             num_G_SVs=1, num_G_SV_itrs=1,\n\t             G_shared=True, shared_dim=0, hier=False,\n\t             cross_replica=False, mybn=False,\n\t             G_activation=nn.ReLU(inplace=False),\n\t             G_lr=5e-5, G_B1=0.0, G_B2=0.999, adam_eps=1e-8,\n\t             BN_eps=1e-5, SN_eps=1e-12, G_mixed_precision=False, G_fp16=False,\n\t             G_init='ortho', skip_init=False, no_optim=False,\n\t             G_param='SN', norm_style='bn',\n\t             **kwargs):\n\t\tsuper(Generator, self).__init__()\n\t\t# Channel width mulitplier\n\t\tself.ch = G_ch\n\t\t# Dimensionality of the latent space\n\t\tself.dim_z = dim_z\n\t\t# The initial spatial dimensions\n\t\tself.bottom_width = bottom_width\n\t\t# Resolution of the output\n\t\tself.resolution = resolution\n\t\t# Kernel 
size?\n\t\tself.kernel_size = G_kernel_size\n\t\t# Attention?\n\t\tself.attention = G_attn\n\t\t# number of classes, for use in categorical conditional generation\n\t\tself.n_classes = n_classes\n\t\t# Use shared embeddings?\n\t\tself.G_shared = G_shared\n\t\t# Dimensionality of the shared embedding? Unused if not using G_shared\n\t\tself.shared_dim = shared_dim if shared_dim > 0 else dim_z\n\t\t# Hierarchical latent space?\n\t\tself.hier = hier\n\t\t# Cross replica batchnorm?\n\t\tself.cross_replica = cross_replica\n\t\t# Use my batchnorm?\n\t\tself.mybn = mybn\n\t\t# nonlinearity for residual blocks\n\t\tself.activation = G_activation\n\t\t# Initialization style\n\t\tself.init = G_init\n\t\t# Parameterization style\n\t\tself.G_param = G_param\n\t\t# Normalization style\n\t\tself.norm_style = norm_style\n\t\t# Epsilon for BatchNorm?\n\t\tself.BN_eps = BN_eps\n\t\t# Epsilon for Spectral Norm?\n\t\tself.SN_eps = SN_eps\n\t\t# fp16?\n\t\tself.fp16 = G_fp16\n\t\t# Architecture dict\n\t\tself.arch = G_arch(self.ch, self.attention)[resolution]\n\n\t\t# If using hierarchical latents, adjust z\n\t\tif self.hier:\n\t\t\t# Number of places z slots into\n\t\t\tself.num_slots = len(self.arch['in_channels']) + 1\n\t\t\tself.z_chunk_size = (self.dim_z // self.num_slots)\n\t\t\t# Recalculate latent dimensionality for even splitting into chunks\n\t\t\tself.dim_z = self.z_chunk_size *  self.num_slots\n\t\telse:\n\t\t\tself.num_slots = 1\n\t\t\tself.z_chunk_size = 0\n\n\t\t# Which convs, batchnorms, and linear layers to use\n\t\tif self.G_param == 'SN':\n\t\t\tself.which_conv = functools.partial(layers.SNConv2d,\n\t\t\t                                    kernel_size=3, padding=1,\n\t\t\t                                    num_svs=num_G_SVs, num_itrs=num_G_SV_itrs,\n\t\t\t                                    eps=self.SN_eps)\n\t\t\tself.which_linear = functools.partial(layers.SNLinear,\n\t\t\t                                      num_svs=num_G_SVs, num_itrs=num_G_SV_itrs,\n\t\t\t                                      eps=self.SN_eps)\n\t\telse:\n\t\t\tself.which_conv = functools.partial(nn.Conv2d, kernel_size=3, padding=1)\n\t\t\tself.which_linear = nn.Linear\n\n\t\t# We use a non-spectral-normed embedding here regardless;\n\t\t# For some reason applying SN to G's embedding seems to randomly cripple G\n\t\tself.which_embedding = nn.Embedding\n\t\tbn_linear = (functools.partial(self.which_linear, bias=False) if self.G_shared\n\t\t             else self.which_embedding)\n\t\t##TODO: Modify BN\n\t\tself.which_bn = functools.partial(layers.ccbn,\n\t\t                                  which_linear=bn_linear,\n\t\t                                  cross_replica=self.cross_replica,\n\t\t                                  mybn=self.mybn,\n\t\t                                  input_size=(self.shared_dim + self.z_chunk_size if self.G_shared\n\t\t                                              else self.n_classes),\n\t\t                                  norm_style=self.norm_style,\n\t\t                                  eps=self.BN_eps)\n\t\t# self.which_bn = functools.partial(layers.bn,\n\t\t#                       cross_replica=self.cross_replica,\n\t\t#                       mybn=self.mybn,\n\t\t#                       eps=self.BN_eps)\n\n\t\t# Prepare model\n\t\t# If not using shared embeddings, self.shared is just a passthrough\n\t\tself.shared = (self.which_embedding(n_classes, self.shared_dim) if G_shared\n\t\t               else layers.identity())\n\t\t# First linear layer\n\t\tself.linear = 
self.which_linear(self.dim_z // self.num_slots,\n\t\t                                self.arch['in_channels'][0] * (self.bottom_width **2))\n\n\t\t# self.blocks is a doubly-nested list of modules, the outer loop intended\n\t\t# to be over blocks at a given resolution (resblocks and/or self-attention)\n\t\t# while the inner loop is over a given block\n\t\tself.blocks = []\n\n\t\tfor index in range(len(self.arch['out_channels'])):\n\t\t\tself.blocks += [[layers.GBlock(in_channels=self.arch['in_channels'][index],\n\t\t\t                               out_channels=self.arch['out_channels'][index],\n\t\t\t                               which_conv=self.which_conv,\n\t\t\t                               which_bn=self.which_bn,\n\t\t\t                               activation=self.activation,\n\t\t\t                               upsample=(functools.partial(F.interpolate, scale_factor=2)\n\t\t\t                                         if self.arch['upsample'][index] else None))]]\n\n\t\t\t# If attention on this block, attach it to the end\n\t\t\tif self.arch['attention'][self.arch['resolution'][index]]:\n\t\t\t\tprint('Adding attention layer in G at resolution %d' % self.arch['resolution'][index])\n\t\t\t\tself.blocks[-1] += [layers.Attention(self.arch['out_channels'][index], self.which_conv)]\n\n\t\t# Turn self.blocks into a ModuleList so that it's all properly registered.\n\t\tself.blocks = nn.ModuleList([nn.ModuleList(block) for block in self.blocks])\n\n\t\t# output layer: batchnorm-relu-conv.\n\t\t# Consider using a non-spectral conv here\n\t\tself.output_layer = nn.Sequential(layers.bn(self.arch['out_channels'][-1],\n\t\t                                            cross_replica=self.cross_replica,\n\t\t                                            mybn=self.mybn),\n\t\t                                  self.activation,\n\t\t                                  self.which_conv(self.arch['out_channels'][-1], 3))\n\n\t\t# Initialize weights. 
Optionally skip init for testing.\n\t\tif not skip_init:\n\t\t\tself.init_weights()\n\n\t\t# Set up optimizer\n\t\t# If this is an EMA copy, no need for an optim, so just return now\n\t\tif no_optim:\n\t\t\treturn\n\t\tself.lr, self.B1, self.B2, self.adam_eps = G_lr, G_B1, G_B2, adam_eps\n\t\tif G_mixed_precision:\n\t\t\tprint('Using fp16 adam in G...')\n\t\t\timport utils\n\t\t\tself.optim = utils.Adam16(params=self.parameters(), lr=self.lr,\n\t\t\t                          betas=(self.B1, self.B2), weight_decay=0,\n\t\t\t                          eps=self.adam_eps)\n\t\telse:\n\t\t\tself.optim = optim.Adam(params=self.parameters(), lr=self.lr,\n\t\t\t                        betas=(self.B1, self.B2), weight_decay=0,\n\t\t\t                        eps=self.adam_eps)\n\n\t\t# LR scheduling, left here for forward compatibility\n\t\t# self.lr_sched = {'itr' : 0}# if self.progressive else {}\n\t\t# self.j = 0\n\n\t# Initialize\n\tdef init_weights(self):\n\t\tself.param_count = 0\n\t\tfor module in self.modules():\n\t\t\tif (isinstance(module, nn.Conv2d)\n\t\t\t\t\tor isinstance(module, nn.Linear)\n\t\t\t\t\tor isinstance(module, nn.Embedding)):\n\t\t\t\tif self.init == 'ortho':\n\t\t\t\t\tinit.orthogonal_(module.weight)\n\t\t\t\telif self.init == 'N02':\n\t\t\t\t\tinit.normal_(module.weight, 0, 0.02)\n\t\t\t\telif self.init in ['glorot', 'xavier']:\n\t\t\t\t\tinit.xavier_uniform_(module.weight)\n\t\t\t\telse:\n\t\t\t\t\tprint('Init style not recognized...')\n\t\t\t\tself.param_count += sum([p.data.nelement() for p in module.parameters()])\n\t\tprint('Param count for G''s initialized parameters: %d' % self.param_count)\n\n\t# Note on this forward function: we pass in a y vector which has\n\t# already been passed through G.shared to enable easy class-wise\n\t# interpolation later. 
If we passed in the one-hot and then ran it through\n\t# G.shared in this forward function, it would be harder to handle.\n\tdef forward(self, z, y):\n\t\t# If hierarchical, concatenate zs and ys\n\t\tif self.hier:\n\t\t\tzs = torch.split(z, self.z_chunk_size, 1)\n\t\t\tz = zs[0]\n\t\t\tys = [torch.cat([y, item], 1) for item in zs[1:]]\n\t\telse:\n\t\t\tys = [y] * len(self.blocks)\n\n\t\t# First linear layer\n\t\th = self.linear(z)\n\t\t# Reshape\n\t\th = h.view(h.size(0), -1, self.bottom_width, self.bottom_width)\n\n\t\t# Loop over blocks\n\t\tfor index, blocklist in enumerate(self.blocks):\n\t\t\t# Second inner loop in case block has multiple layers\n\t\t\tfor block in blocklist:\n\t\t\t\th = block(h, ys[index])\n\n\t\t# Apply batchnorm-relu-conv-tanh at output\n\t\treturn torch.tanh(self.output_layer(h))\n\n\n# Discriminator architecture, same paradigm as G's above\ndef D_arch(ch=64, attention='64',ksize='333333', dilation='111111'):\n\tarch = {}\n\tarch[256] = {'in_channels' :  [3] + [ch*item for item in [1, 2, 4, 8, 8, 16]],\n\t             'out_channels' : [item * ch for item in [1, 2, 4, 8, 8, 16, 16]],\n\t             'downsample' : [True] * 6 + [False],\n\t             'resolution' : [128, 64, 32, 16, 8, 4, 4 ],\n\t             'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n\t                            for i in range(2,8)}}\n\tarch[128] = {'in_channels' :  [3] + [ch*item for item in [1, 2, 4, 8, 16]],\n\t             'out_channels' : [item * ch for item in [1, 2, 4, 8, 16, 16]],\n\t             'downsample' : [True] * 5 + [False],\n\t             'resolution' : [64, 32, 16, 8, 4, 4],\n\t             'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n\t                            for i in range(2,8)}}\n\tarch[64]  = {'in_channels' :  [3] + [ch*item for item in [1, 2, 4, 8]],\n\t             'out_channels' : [item * ch for item in [1, 2, 4, 8, 16]],\n\t             'downsample' : [True] * 4 + [False],\n\t             'resolution' : [32, 16, 8, 4, 4],\n\t             'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n\t                            for i in range(2,7)}}\n\tarch[32]  = {'in_channels' :  [3] + [item * ch for item in [4, 4, 4]],\n\t             'out_channels' : [item * ch for item in [4, 4, 4, 4]],\n\t             'downsample' : [True, True, False, False],\n\t             'resolution' : [16, 16, 16, 16],\n\t             'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n\t                            for i in range(2,6)}}\n\treturn arch\n\nclass Discriminator(nn.Module):\n\tdef __init__(self, D_ch=64, D_wide=True, resolution=128,\n\t             D_kernel_size=3, D_attn='64', n_classes=1000,\n\t             num_D_SVs=1, num_D_SV_itrs=1, D_activation=nn.ReLU(inplace=False),\n\t             D_lr=2e-4, D_B1=0.0, D_B2=0.999, adam_eps=1e-8,\n\t             SN_eps=1e-12, output_dim=1, D_mixed_precision=False, D_fp16=False,\n\t             D_init='ortho', skip_init=False, D_param='SN',\n\t             dict_decay=0.8, commitment=0.5, discrete_layer='2', dict_size=10,\n\t             **kwargs):\n\t\tsuper(Discriminator, self).__init__()\n\t\t# Width multiplier\n\t\tself.ch = D_ch\n\t\t# Use Wide D as in BigGAN and SA-GAN or skinny D as in SN-GAN?\n\t\tself.D_wide = D_wide\n\t\t# Resolution\n\t\tself.resolution = resolution\n\t\t# Kernel size\n\t\tself.kernel_size = D_kernel_size\n\t\t# Attention?\n\t\tself.attention = D_attn\n\t\t# Number of classes\n\t\tself.n_classes = 
n_classes\n\t\t# Activation\n\t\tself.activation = D_activation\n\t\t# Initialization style\n\t\tself.init = D_init\n\t\t# Parameterization style\n\t\tself.D_param = D_param\n\t\t# Epsilon for Spectral Norm?\n\t\tself.SN_eps = SN_eps\n\t\t# Fp16?\n\t\tself.fp16 = D_fp16\n\t\t# Architecture\n\t\tself.arch = D_arch(self.ch, self.attention)[resolution]\n\t\t# Which convs, batchnorms, and linear layers to use\n\t\t# No option to turn off SN in D right now\n\t\tif self.D_param == 'SN':\n\t\t\tself.which_conv = functools.partial(layers.SNConv2d,\n\t\t\t                                    kernel_size=3, padding=1,\n\t\t\t                                    num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n\t\t\t                                    eps=self.SN_eps)\n\t\t\tself.which_linear = functools.partial(layers.SNLinear,\n\t\t\t                                      num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n\t\t\t                                      eps=self.SN_eps)\n\t\t\tself.which_embedding = functools.partial(layers.SNEmbedding,\n\t\t\t                                         num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n\t\t\t                                         eps=self.SN_eps)\n\t\t# Prepare model\n\t\t# self.blocks is a doubly-nested list of modules, the outer loop intended\n\t\t# to be over blocks at a given resolution (resblocks and/or self-attention)\n\t\tself.blocks = []\n\t\tself.quant_layer = [int(x) for x in discrete_layer]\n\t\tfor index in range(len(self.arch['out_channels'])):\n\t\t\tself.blocks += [[layers.DBlock(in_channels=self.arch['in_channels'][index],\n\t\t\t                               out_channels=self.arch['out_channels'][index],\n\t\t\t                               which_conv=self.which_conv,\n\t\t\t                               wide=self.D_wide,\n\t\t\t                               activation=self.activation,\n\t\t\t                               preactivation=(index > 0),\n\t\t\t                               downsample=(nn.AvgPool2d(2) if self.arch['downsample'][index] else None))]]\n\n\t\t\tif index in self.quant_layer:\n\t\t\t\tself.blocks[-1] += [Quantize(self.arch['out_channels'][index], 2 ** dict_size,\n\t\t\t\t                             commitment=commitment, decay=dict_decay, )]\n\t\t\t# If attention on this block, attach it to the end\n\t\t\tif self.arch['attention'][self.arch['resolution'][index]]:\n\t\t\t\tprint('Adding attention layer in D at resolution %d' % self.arch['resolution'][index])\n\t\t\t\tself.blocks[-1] += [layers.Attention(self.arch['out_channels'][index],\n\t\t\t\t                                     self.which_conv)]\n\n\t\t# Turn self.blocks into a ModuleList so that it's all properly registered.\n\t\tself.blocks = nn.ModuleList([nn.ModuleList(block) for block in self.blocks])\n\t\t# Linear output layer. The output dimension is typically 1, but may be\n\t\t# larger if we're e.g. 
turning this into a VAE with an inference output\n\t\tself.linear = self.which_linear(self.arch['out_channels'][-1], output_dim)\n\t\t# Embedding for projection discrimination\n\t\tself.embed = self.which_embedding(self.n_classes, self.arch['out_channels'][-1])\n\n\t\t# Initialize weights\n\t\tif not skip_init:\n\t\t\tself.init_weights()\n\n\t\t# Set up optimizer\n\t\tself.lr, self.B1, self.B2, self.adam_eps = D_lr, D_B1, D_B2, adam_eps\n\t\tif D_mixed_precision:\n\t\t\tprint('Using fp16 adam in D...')\n\t\t\timport utils\n\t\t\tself.optim = utils.Adam16(params=self.parameters(), lr=self.lr,\n\t\t\t                          betas=(self.B1, self.B2), weight_decay=0, eps=self.adam_eps)\n\t\telse:\n\t\t\tself.optim = optim.Adam(params=self.parameters(), lr=self.lr,\n\t\t\t                        betas=(self.B1, self.B2), weight_decay=0, eps=self.adam_eps)\n\t\t# LR scheduling, left here for forward compatibility\n\t\t# self.lr_sched = {'itr' : 0}# if self.progressive else {}\n\t\t# self.j = 0\n\n\t# Initialize\n\tdef init_weights(self):\n\t\tself.param_count = 0\n\t\tfor module in self.modules():\n\t\t\tif (isinstance(module, nn.Conv2d)\n\t\t\t\t\tor isinstance(module, nn.Linear)\n\t\t\t\t\tor isinstance(module, nn.Embedding)):\n\t\t\t\tif self.init == 'ortho':\n\t\t\t\t\tinit.orthogonal_(module.weight)\n\t\t\t\telif self.init == 'N02':\n\t\t\t\t\tinit.normal_(module.weight, 0, 0.02)\n\t\t\t\telif self.init in ['glorot', 'xavier']:\n\t\t\t\t\tinit.xavier_uniform_(module.weight)\n\t\t\t\telse:\n\t\t\t\t\tprint('Init style not recognized...')\n\t\t\t\tself.param_count += sum([p.data.nelement() for p in module.parameters()])\n\t\tprint(\"Param count for D's initialized parameters: %d\" % self.param_count)\n\n\tdef forward(self, x, y=None):\n\t\t# Stick x into h for cleaner for loops without flow control\n\t\th = x\n\t\tquant_loss = 0\n\t\t# Loop over blocks\n\t\tfor index, blocklist in enumerate(self.blocks):\n\t\t\tif index in self.quant_layer:\n\t\t\t\th = blocklist[0](h)\n\t\t\t\t# Quantize this block's features: keep the quantized output,\n\t\t\t\t# accumulate the commitment loss, and record codebook perplexity\n\t\t\t\th, diff, ppl = blocklist[1](h)\n\t\t\t\t# Apply attention if this block has an attention layer attached\n\t\t\t\tif len(blocklist) == 3:\n\t\t\t\t\th = blocklist[2](h)\n\t\t\t\tquant_loss += diff\n\t\t\telse:\n\t\t\t\tfor block in blocklist:\n\t\t\t\t\th = block(h)\n\n\t\t# Apply global sum pooling as in SN-GAN\n\t\th = torch.sum(self.activation(h), [2, 3])\n\t\t# Get initial class-unconditional output\n\t\tout = self.linear(h)\n\t\t# Get projection of final featureset onto class vectors and add to evidence\n\t\tout = out + torch.sum(self.embed(y) * h, 1, keepdim=True)\n\t\treturn out, quant_loss, ppl\n\n# Parallelized G_D to minimize cross-gpu communication\n# Without this, Generator outputs would get all-gathered and then rebroadcast.\nclass G_D(nn.Module):\n\tdef __init__(self, G, D):\n\t\tsuper(G_D, self).__init__()\n\t\tself.G = G\n\t\tself.D = D\n\n\tdef forward(self, z, gy, x=None, dy=None, train_G=False, return_G_z=False,\n\t            split_D=False):\n\t\t# If training G, enable grad tape\n\t\twith torch.set_grad_enabled(train_G):\n\t\t\t# Get Generator output given noise\n\t\t\tG_z = self.G(z, self.G.shared(gy))\n\t\t\t# Cast as necessary\n\t\t\tif self.G.fp16 and not self.D.fp16:\n\t\t\t\tG_z = G_z.float()\n\t\t\tif self.D.fp16 and not self.G.fp16:\n\t\t\t\tG_z = G_z.half()\n\t\t# Split_D means to run D once with real data and once with fake,\n\t\t# rather than concatenating along the batch dimension.\n\t\tif split_D:\n\t\t\tD_fake, quant_loss_fake, ppl = self.D(G_z, gy)\n\t\t\tif x is not None:\n\t\t\t\tD_real, quant_loss_real, ppl = self.D(x, dy)\n\t\t\t\treturn D_fake, D_real, quant_loss_fake, quant_loss_real\n\t\t\telse:\n\t\t\t\tif return_G_z:\n\t\t\t\t\treturn D_fake, G_z\n\t\t\t\telse:\n\t\t\t\t\treturn D_fake, quant_loss_fake\n\t\t# If real data is provided, concatenate it with the Generator's output\n\t\t# along the batch dimension for improved efficiency.\n\t\telse:\n\t\t\tD_input = torch.cat([G_z, x], 0) if x is not None else G_z\n\t\t\tD_class = torch.cat([gy, dy], 0) if dy is not None else gy\n\t\t\t# Get Discriminator output\n\t\t\tD_out, quant_loss, ppl = self.D(D_input, D_class)\n\t\t\tif x is not None:\n\t\t\t\t# D_input was [fake; real] along the batch dimension, so the\n\t\t\t\t# first G_z.shape[0] entries of each output are the fake half\n\t\t\t\tD_fake, D_real = torch.split(D_out, [G_z.shape[0], x.shape[0]])\n\t\t\t\tquant_loss_fake, quant_loss_real = torch.split(quant_loss, (G_z.shape[0],\n\t\t\t\t                                                            x.shape[0]), dim=0)\n\t\t\t\treturn D_fake, D_real, quant_loss_fake, quant_loss_real, ppl.view(-1, 1)\n\t\t\telse:\n\t\t\t\tif return_G_z:\n\t\t\t\t\treturn D_out, G_z\n\t\t\t\telse:\n\t\t\t\t\treturn D_out, quant_loss\n"
  },
  {
    "path": "FQ-BigGAN/BigGANdeep.py",
    "content": "import numpy as np\nimport math\nimport functools\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\n\nimport layers\nfrom sync_batchnorm import SynchronizedBatchNorm2d as SyncBatchNorm2d\n\n# BigGAN-deep: uses a different resblock and pattern\n\n\n# Architectures for G\n# Attention is passed in in the format '32_64' to mean applying an attention\n# block at both resolution 32x32 and 64x64. Just '64' will apply at 64x64.\n\n# Channel ratio is the ratio of \nclass GBlock(nn.Module):\n  def __init__(self, in_channels, out_channels,\n               which_conv=nn.Conv2d, which_bn=layers.bn, activation=None,\n               upsample=None, channel_ratio=4):\n    super(GBlock, self).__init__()\n    \n    self.in_channels, self.out_channels = in_channels, out_channels\n    self.hidden_channels = self.in_channels // channel_ratio\n    self.which_conv, self.which_bn = which_conv, which_bn\n    self.activation = activation\n    # Conv layers\n    self.conv1 = self.which_conv(self.in_channels, self.hidden_channels, \n                                 kernel_size=1, padding=0)\n    self.conv2 = self.which_conv(self.hidden_channels, self.hidden_channels)\n    self.conv3 = self.which_conv(self.hidden_channels, self.hidden_channels)\n    self.conv4 = self.which_conv(self.hidden_channels, self.out_channels, \n                                 kernel_size=1, padding=0)\n    # Batchnorm layers\n    self.bn1 = self.which_bn(self.in_channels)\n    self.bn2 = self.which_bn(self.hidden_channels)\n    self.bn3 = self.which_bn(self.hidden_channels)\n    self.bn4 = self.which_bn(self.hidden_channels)\n    # upsample layers\n    self.upsample = upsample\n\n  def forward(self, x, y):\n    # Project down to channel ratio\n    h = self.conv1(self.activation(self.bn1(x, y)))\n    # Apply next BN-ReLU\n    h = self.activation(self.bn2(h, y))\n    # Drop channels in x if necessary\n    if self.in_channels != self.out_channels:\n      x = x[:, :self.out_channels]      \n    # Upsample both h and x at this point  \n    if self.upsample:\n      h = self.upsample(h)\n      x = self.upsample(x)\n    # 3x3 convs\n    h = self.conv2(h)\n    h = self.conv3(self.activation(self.bn3(h, y)))\n    # Final 1x1 conv\n    h = self.conv4(self.activation(self.bn4(h, y)))\n    return h + x\n\ndef G_arch(ch=64, attention='64', ksize='333333', dilation='111111'):\n  arch = {}\n  arch[256] = {'in_channels' :  [ch * item for item in [16, 16, 8, 8, 4, 2]],\n               'out_channels' : [ch * item for item in [16,  8, 8, 4, 2, 1]],\n               'upsample' : [True] * 6,\n               'resolution' : [8, 16, 32, 64, 128, 256],\n               'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n                              for i in range(3,9)}}\n  arch[128] = {'in_channels' :  [ch * item for item in [16, 16, 8, 4, 2]],\n               'out_channels' : [ch * item for item in [16, 8, 4,  2, 1]],\n               'upsample' : [True] * 5,\n               'resolution' : [8, 16, 32, 64, 128],\n               'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n                              for i in range(3,8)}}\n  arch[64]  = {'in_channels' :  [ch * item for item in [16, 16, 8, 4]],\n               'out_channels' : [ch * item for item in [16, 8, 4, 2]],\n               'upsample' : [True] * 4,\n               'resolution' : [8, 16, 32, 64],\n               'attention' : 
{2**i: (2**i in [int(item) for item in attention.split('_')])\n                              for i in range(3,7)}}\n  arch[32]  = {'in_channels' :  [ch * item for item in [4, 4, 4]],\n               'out_channels' : [ch * item for item in [4, 4, 4]],\n               'upsample' : [True] * 3,\n               'resolution' : [8, 16, 32],\n               'attention' : {2**i: (2**i in [int(item) for item in attention.split('_')])\n                              for i in range(3,6)}}\n\n  return arch\n\nclass Generator(nn.Module):\n  def __init__(self, G_ch=64, G_depth=2, dim_z=128, bottom_width=4, resolution=128,\n               G_kernel_size=3, G_attn='64', n_classes=1000,\n               num_G_SVs=1, num_G_SV_itrs=1,\n               G_shared=True, shared_dim=0, hier=False,\n               cross_replica=False, mybn=False,\n               G_activation=nn.ReLU(inplace=False),\n               G_lr=5e-5, G_B1=0.0, G_B2=0.999, adam_eps=1e-8,\n               BN_eps=1e-5, SN_eps=1e-12, G_mixed_precision=False, G_fp16=False,\n               G_init='ortho', skip_init=False, no_optim=False,\n               G_param='SN', norm_style='bn',\n               **kwargs):\n    super(Generator, self).__init__()\n    # Channel width mulitplier\n    self.ch = G_ch\n    # Number of resblocks per stage\n    self.G_depth = G_depth\n    # Dimensionality of the latent space\n    self.dim_z = dim_z\n    # The initial spatial dimensions\n    self.bottom_width = bottom_width\n    # Resolution of the output\n    self.resolution = resolution\n    # Kernel size?\n    self.kernel_size = G_kernel_size\n    # Attention?\n    self.attention = G_attn\n    # number of classes, for use in categorical conditional generation\n    self.n_classes = n_classes\n    # Use shared embeddings?\n    self.G_shared = G_shared\n    # Dimensionality of the shared embedding? 
Unused if not using G_shared\n    self.shared_dim = shared_dim if shared_dim > 0 else dim_z\n    # Hierarchical latent space?\n    self.hier = hier\n    # Cross replica batchnorm?\n    self.cross_replica = cross_replica\n    # Use my batchnorm?\n    self.mybn = mybn\n    # nonlinearity for residual blocks\n    self.activation = G_activation\n    # Initialization style\n    self.init = G_init\n    # Parameterization style\n    self.G_param = G_param\n    # Normalization style\n    self.norm_style = norm_style\n    # Epsilon for BatchNorm?\n    self.BN_eps = BN_eps\n    # Epsilon for Spectral Norm?\n    self.SN_eps = SN_eps\n    # fp16?\n    self.fp16 = G_fp16\n    # Architecture dict\n    self.arch = G_arch(self.ch, self.attention)[resolution]\n\n\n    # Which convs, batchnorms, and linear layers to use\n    if self.G_param == 'SN':\n      self.which_conv = functools.partial(layers.SNConv2d,\n                          kernel_size=3, padding=1,\n                          num_svs=num_G_SVs, num_itrs=num_G_SV_itrs,\n                          eps=self.SN_eps)\n      self.which_linear = functools.partial(layers.SNLinear,\n                          num_svs=num_G_SVs, num_itrs=num_G_SV_itrs,\n                          eps=self.SN_eps)\n    else:\n      self.which_conv = functools.partial(nn.Conv2d, kernel_size=3, padding=1)\n      self.which_linear = nn.Linear\n      \n    # We use a non-spectral-normed embedding here regardless;\n    # For some reason applying SN to G's embedding seems to randomly cripple G\n    self.which_embedding = nn.Embedding\n    bn_linear = (functools.partial(self.which_linear, bias=False) if self.G_shared\n                 else self.which_embedding)\n    self.which_bn = functools.partial(layers.ccbn,\n                          which_linear=bn_linear,\n                          cross_replica=self.cross_replica,\n                          mybn=self.mybn,\n                          input_size=(self.shared_dim + self.dim_z if self.G_shared\n                                      else self.n_classes),\n                          norm_style=self.norm_style,\n                          eps=self.BN_eps)\n\n\n    # Prepare model\n    # If not using shared embeddings, self.shared is just a passthrough\n    self.shared = (self.which_embedding(n_classes, self.shared_dim) if G_shared \n                    else layers.identity())\n    # First linear layer\n    self.linear = self.which_linear(self.dim_z + self.shared_dim, self.arch['in_channels'][0] * (self.bottom_width **2))\n\n    # self.blocks is a doubly-nested list of modules, the outer loop intended\n    # to be over blocks at a given resolution (resblocks and/or self-attention)\n    # while the inner loop is over a given block\n    self.blocks = []\n    for index in range(len(self.arch['out_channels'])):\n      self.blocks += [[GBlock(in_channels=self.arch['in_channels'][index],\n                             out_channels=self.arch['in_channels'][index] if g_index==0 else self.arch['out_channels'][index],\n                             which_conv=self.which_conv,\n                             which_bn=self.which_bn,\n                             activation=self.activation,\n                             upsample=(functools.partial(F.interpolate, scale_factor=2)\n                                       if self.arch['upsample'][index] and g_index == (self.G_depth-1) else None))]\n                       for g_index in range(self.G_depth)]\n\n      # If attention on this block, attach it to the end\n      if 
self.arch['attention'][self.arch['resolution'][index]]:\n        print('Adding attention layer in G at resolution %d' % self.arch['resolution'][index])\n        self.blocks[-1] += [layers.Attention(self.arch['out_channels'][index], self.which_conv)]\n\n    # Turn self.blocks into a ModuleList so that it's all properly registered.\n    self.blocks = nn.ModuleList([nn.ModuleList(block) for block in self.blocks])\n\n    # output layer: batchnorm-relu-conv.\n    # Consider using a non-spectral conv here\n    self.output_layer = nn.Sequential(layers.bn(self.arch['out_channels'][-1],\n                                                cross_replica=self.cross_replica,\n                                                mybn=self.mybn),\n                                    self.activation,\n                                    self.which_conv(self.arch['out_channels'][-1], 3))\n\n    # Initialize weights. Optionally skip init for testing.\n    if not skip_init:\n      self.init_weights()\n\n    # Set up optimizer\n    # If this is an EMA copy, no need for an optim, so just return now\n    if no_optim:\n      return\n    self.lr, self.B1, self.B2, self.adam_eps = G_lr, G_B1, G_B2, adam_eps\n    if G_mixed_precision:\n      print('Using fp16 adam in G...')\n      import utils\n      self.optim = utils.Adam16(params=self.parameters(), lr=self.lr,\n                           betas=(self.B1, self.B2), weight_decay=0,\n                           eps=self.adam_eps)\n    else:\n      self.optim = optim.Adam(params=self.parameters(), lr=self.lr,\n                           betas=(self.B1, self.B2), weight_decay=0,\n                           eps=self.adam_eps)\n\n    # LR scheduling, left here for forward compatibility\n    # self.lr_sched = {'itr' : 0}# if self.progressive else {}\n    # self.j = 0\n\n  # Initialize\n  def init_weights(self):\n    self.param_count = 0\n    for module in self.modules():\n      if (isinstance(module, nn.Conv2d) \n          or isinstance(module, nn.Linear) \n          or isinstance(module, nn.Embedding)):\n        if self.init == 'ortho':\n          init.orthogonal_(module.weight)\n        elif self.init == 'N02':\n          init.normal_(module.weight, 0, 0.02)\n        elif self.init in ['glorot', 'xavier']:\n          init.xavier_uniform_(module.weight)\n        else:\n          print('Init style not recognized...')\n        self.param_count += sum([p.data.nelement() for p in module.parameters()])\n    print('Param count for G''s initialized parameters: %d' % self.param_count)\n\n  # Note on this forward function: we pass in a y vector which has\n  # already been passed through G.shared to enable easy class-wise\n  # interpolation later. 
If we passed in the one-hot and then ran it through\n  # G.shared in this forward function, it would be harder to handle.\n  # NOTE: The z vs y dichotomy here is for compatibility with not-y\n  def forward(self, z, y):\n    # If hierarchical, concatenate zs and ys\n    if self.hier:\n      z = torch.cat([y, z], 1)      \n      y = z\n    # First linear layer\n    h = self.linear(z)\n    # Reshape\n    h = h.view(h.size(0), -1, self.bottom_width, self.bottom_width)    \n    # Loop over blocks\n    for index, blocklist in enumerate(self.blocks):\n      # Second inner loop in case block has multiple layers\n      for block in blocklist:\n        h = block(h, y)\n        \n    # Apply batchnorm-relu-conv-tanh at output\n    return torch.tanh(self.output_layer(h))\n\nclass DBlock(nn.Module):\n  def __init__(self, in_channels, out_channels, which_conv=layers.SNConv2d, wide=True,\n               preactivation=True, activation=None, downsample=None,\n               channel_ratio=4):\n    super(DBlock, self).__init__()\n    self.in_channels, self.out_channels = in_channels, out_channels\n    # If using wide D (as in SA-GAN and BigGAN), change the channel pattern\n    self.hidden_channels = self.out_channels // channel_ratio\n    self.which_conv = which_conv\n    self.preactivation = preactivation\n    self.activation = activation\n    self.downsample = downsample\n        \n    # Conv layers\n    self.conv1 = self.which_conv(self.in_channels, self.hidden_channels, \n                                 kernel_size=1, padding=0)\n    self.conv2 = self.which_conv(self.hidden_channels, self.hidden_channels)\n    self.conv3 = self.which_conv(self.hidden_channels, self.hidden_channels)\n    self.conv4 = self.which_conv(self.hidden_channels, self.out_channels, \n                                 kernel_size=1, padding=0)\n                                 \n    self.learnable_sc = True if (in_channels != out_channels) else False\n    if self.learnable_sc:\n      self.conv_sc = self.which_conv(in_channels, out_channels - in_channels, \n                                     kernel_size=1, padding=0)\n  def shortcut(self, x):\n    if self.downsample:\n      x = self.downsample(x)\n    if self.learnable_sc:\n      x = torch.cat([x, self.conv_sc(x)], 1)    \n    return x\n    \n  def forward(self, x):\n    # 1x1 bottleneck conv\n    h = self.conv1(F.relu(x))\n    # 3x3 convs\n    h = self.conv2(self.activation(h))\n    h = self.conv3(self.activation(h))\n    # relu before downsample\n    h = self.activation(h)\n    # downsample\n    if self.downsample:\n      h = self.downsample(h)     \n    # final 1x1 conv\n    h = self.conv4(h)\n    return h + self.shortcut(x)\n    \n# Discriminator architecture, same paradigm as G's above\ndef D_arch(ch=64, attention='64',ksize='333333', dilation='111111'):\n  arch = {}\n  arch[256] = {'in_channels' :  [item * ch for item in [1, 2, 4, 8, 8, 16]],\n               'out_channels' : [item * ch for item in [2, 4, 8, 8, 16, 16]],\n               'downsample' : [True] * 6 + [False],\n               'resolution' : [128, 64, 32, 16, 8, 4, 4 ],\n               'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n                              for i in range(2,8)}}\n  arch[128] = {'in_channels' :  [item * ch for item in [1, 2, 4,  8, 16]],\n               'out_channels' : [item * ch for item in [2, 4, 8, 16, 16]],\n               'downsample' : [True] * 5 + [False],\n               'resolution' : [64, 32, 16, 8, 4, 4],\n               'attention' : {2**i: 2**i in 
[int(item) for item in attention.split('_')]\n                              for i in range(2,8)}}\n  arch[64]  = {'in_channels' :  [item * ch for item in [1, 2, 4, 8]],\n               'out_channels' : [item * ch for item in [2, 4, 8, 16]],\n               'downsample' : [True] * 4 + [False],\n               'resolution' : [32, 16, 8, 4, 4],\n               'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n                              for i in range(2,7)}}\n  arch[32]  = {'in_channels' :  [item * ch for item in [4, 4, 4]],\n               'out_channels' : [item * ch for item in [4, 4, 4]],\n               'downsample' : [True, True, False, False],\n               'resolution' : [16, 16, 16, 16],\n               'attention' : {2**i: 2**i in [int(item) for item in attention.split('_')]\n                              for i in range(2,6)}}\n  return arch\n\nclass Discriminator(nn.Module):\n\n  def __init__(self, D_ch=64, D_wide=True, D_depth=2, resolution=128,\n               D_kernel_size=3, D_attn='64', n_classes=1000,\n               num_D_SVs=1, num_D_SV_itrs=1, D_activation=nn.ReLU(inplace=False),\n               D_lr=2e-4, D_B1=0.0, D_B2=0.999, adam_eps=1e-8,\n               SN_eps=1e-12, output_dim=1, D_mixed_precision=False, D_fp16=False,\n               D_init='ortho', skip_init=False, D_param='SN', **kwargs):\n    super(Discriminator, self).__init__()\n    # Width multiplier\n    self.ch = D_ch\n    # Use Wide D as in BigGAN and SA-GAN or skinny D as in SN-GAN?\n    self.D_wide = D_wide\n    # How many resblocks per stage?\n    self.D_depth = D_depth\n    # Resolution\n    self.resolution = resolution\n    # Kernel size\n    self.kernel_size = D_kernel_size\n    # Attention?\n    self.attention = D_attn\n    # Number of classes\n    self.n_classes = n_classes\n    # Activation\n    self.activation = D_activation\n    # Initialization style\n    self.init = D_init\n    # Parameterization style\n    self.D_param = D_param\n    # Epsilon for Spectral Norm?\n    self.SN_eps = SN_eps\n    # Fp16?\n    self.fp16 = D_fp16\n    # Architecture\n    self.arch = D_arch(self.ch, self.attention)[resolution]\n\n\n    # Which convs, batchnorms, and linear layers to use\n    # No option to turn off SN in D right now\n    if self.D_param == 'SN':\n      self.which_conv = functools.partial(layers.SNConv2d,\n                          kernel_size=3, padding=1,\n                          num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n                          eps=self.SN_eps)\n      self.which_linear = functools.partial(layers.SNLinear,\n                          num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n                          eps=self.SN_eps)\n      self.which_embedding = functools.partial(layers.SNEmbedding,\n                              num_svs=num_D_SVs, num_itrs=num_D_SV_itrs,\n                              eps=self.SN_eps)\n    \n    \n    # Prepare model\n    # Stem convolution\n    self.input_conv = self.which_conv(3, self.arch['in_channels'][0])\n    # self.blocks is a doubly-nested list of modules, the outer loop intended\n    # to be over blocks at a given resolution (resblocks and/or self-attention)\n    self.blocks = []\n    for index in range(len(self.arch['out_channels'])):\n      self.blocks += [[DBlock(in_channels=self.arch['in_channels'][index] if d_index==0 else self.arch['out_channels'][index],\n                       out_channels=self.arch['out_channels'][index],\n                       which_conv=self.which_conv,\n                       
wide=self.D_wide,\n                       activation=self.activation,\n                       preactivation=True,\n                       downsample=(nn.AvgPool2d(2) if self.arch['downsample'][index] and d_index==0 else None))\n                       for d_index in range(self.D_depth)]]\n      # If attention on this block, attach it to the end\n      if self.arch['attention'][self.arch['resolution'][index]]:\n        print('Adding attention layer in D at resolution %d' % self.arch['resolution'][index])\n        self.blocks[-1] += [layers.Attention(self.arch['out_channels'][index],\n                                             self.which_conv)]\n    # Turn self.blocks into a ModuleList so that it's all properly registered.\n    self.blocks = nn.ModuleList([nn.ModuleList(block) for block in self.blocks])\n    # Linear output layer. The output dimension is typically 1, but may be\n    # larger if we're e.g. turning this into a VAE with an inference output\n    self.linear = self.which_linear(self.arch['out_channels'][-1], output_dim)\n    # Embedding for projection discrimination\n    self.embed = self.which_embedding(self.n_classes, self.arch['out_channels'][-1])\n\n    # Initialize weights\n    if not skip_init:\n      self.init_weights()\n\n    # Set up optimizer\n    self.lr, self.B1, self.B2, self.adam_eps = D_lr, D_B1, D_B2, adam_eps\n    if D_mixed_precision:\n      print('Using fp16 adam in D...')\n      import utils\n      self.optim = utils.Adam16(params=self.parameters(), lr=self.lr,\n                             betas=(self.B1, self.B2), weight_decay=0, eps=self.adam_eps)\n    else:\n      self.optim = optim.Adam(params=self.parameters(), lr=self.lr,\n                             betas=(self.B1, self.B2), weight_decay=0, eps=self.adam_eps)\n    # LR scheduling, left here for forward compatibility\n    # self.lr_sched = {'itr' : 0}# if self.progressive else {}\n    # self.j = 0\n\n  # Initialize\n  def init_weights(self):\n    self.param_count = 0\n    for module in self.modules():\n      if (isinstance(module, nn.Conv2d)\n          or isinstance(module, nn.Linear)\n          or isinstance(module, nn.Embedding)):\n        if self.init == 'ortho':\n          init.orthogonal_(module.weight)\n        elif self.init == 'N02':\n          init.normal_(module.weight, 0, 0.02)\n        elif self.init in ['glorot', 'xavier']:\n          init.xavier_uniform_(module.weight)\n        else:\n          print('Init style not recognized...')\n        self.param_count += sum([p.data.nelement() for p in module.parameters()])\n    print('Param count for D''s initialized parameters: %d' % self.param_count)\n\n  def forward(self, x, y=None):\n    # Run input conv\n    h = self.input_conv(x)\n    # Loop over blocks\n    for index, blocklist in enumerate(self.blocks):\n      for block in blocklist:\n        h = block(h)\n    # Apply global sum pooling as in SN-GAN\n    h = torch.sum(self.activation(h), [2, 3])\n    # Get initial class-unconditional output\n    out = self.linear(h)\n    # Get projection of final featureset onto class vectors and add to evidence\n    out = out + torch.sum(self.embed(y) * h, 1, keepdim=True)\n    return out\n\n# Parallelized G_D to minimize cross-gpu communication\n# Without this, Generator outputs would get all-gathered and then rebroadcast.\nclass G_D(nn.Module):\n  def __init__(self, G, D):\n    super(G_D, self).__init__()\n    self.G = G\n    self.D = D\n\n  def forward(self, z, gy, x=None, dy=None, train_G=False, return_G_z=False,\n              split_D=False):         
     \n    # If training G, enable grad tape\n    with torch.set_grad_enabled(train_G):\n      # Get Generator output given noise\n      G_z = self.G(z, self.G.shared(gy))\n      # Cast as necessary\n      if self.G.fp16 and not self.D.fp16:\n        G_z = G_z.float()\n      if self.D.fp16 and not self.G.fp16:\n        G_z = G_z.half()\n    # Split_D means to run D once with real data and once with fake,\n    # rather than concatenating along the batch dimension.\n    if split_D:\n      D_fake = self.D(G_z, gy)\n      if x is not None:\n        D_real = self.D(x, dy)\n        return D_fake, D_real\n      else:\n        if return_G_z:\n          return D_fake, G_z\n        else:\n          return D_fake\n    # If real data is provided, concatenate it with the Generator's output\n    # along the batch dimension for improved efficiency.\n    else:\n      D_input = torch.cat([G_z, x], 0) if x is not None else G_z\n      D_class = torch.cat([gy, dy], 0) if dy is not None else gy\n      # Get Discriminator output\n      D_out = self.D(D_input, D_class)\n      if x is not None:\n        return torch.split(D_out, [G_z.shape[0], x.shape[0]]) # D_fake, D_real\n      else:\n        if return_G_z:\n          return D_out, G_z\n        else:\n          return D_out\n"
  },
  {
    "path": "FQ-BigGAN/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Andy Brock\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "FQ-BigGAN/TFHub/README.md",
    "content": "# BigGAN-PyTorch TFHub converter\nThis dir contains scripts for taking the [pre-trained generator weights from TFHub](https://tfhub.dev/s?q=biggan) and porting them to BigGAN-Pytorch.\n\nIn addition to the base libraries for BigGAN-PyTorch, to run this code you will need:\n\nTensorFlow\nTFHub\nparse\n\nNote that this code is only presently set up to run the ported models without truncation--you'll need to accumulate standing stats at each truncation level yourself if you wish to employ it.\n\nTo port the 128x128 model from tfhub, produce a pretrained weights .pth file, and generate samples using all your GPUs, run\n\n`python converter.py -r 128 --generate_samples --parallel`"
  },
  {
    "path": "FQ-BigGAN/TFHub/biggan_v1.py",
    "content": "# BigGAN V1:\n# This is now deprecated code used for porting the TFHub modules to pytorch,\n# included here for reference only.\nimport numpy as np\nimport torch\nfrom scipy.stats import truncnorm\nfrom torch import nn\nfrom torch.nn import Parameter\nfrom torch.nn import functional as F\n\n\ndef l2normalize(v, eps=1e-4):\n  return v / (v.norm() + eps)\n\n\ndef truncated_z_sample(batch_size, z_dim, truncation=0.5, seed=None):\n  state = None if seed is None else np.random.RandomState(seed)\n  values = truncnorm.rvs(-2, 2, size=(batch_size, z_dim), random_state=state)\n  return truncation * values\n\n\ndef denorm(x):\n  out = (x + 1) / 2\n  return out.clamp_(0, 1)\n\n\nclass SpectralNorm(nn.Module):\n  def __init__(self, module, name='weight', power_iterations=1):\n    super(SpectralNorm, self).__init__()\n    self.module = module\n    self.name = name\n    self.power_iterations = power_iterations\n    if not self._made_params():\n      self._make_params()\n\n  def _update_u_v(self):\n    u = getattr(self.module, self.name + \"_u\")\n    v = getattr(self.module, self.name + \"_v\")\n    w = getattr(self.module, self.name + \"_bar\")\n\n    height = w.data.shape[0]\n    _w = w.view(height, -1)\n    for _ in range(self.power_iterations):\n      v = l2normalize(torch.matmul(_w.t(), u))\n      u = l2normalize(torch.matmul(_w, v))\n\n    sigma = u.dot((_w).mv(v))\n    setattr(self.module, self.name, w / sigma.expand_as(w))\n\n  def _made_params(self):\n    try:\n      getattr(self.module, self.name + \"_u\")\n      getattr(self.module, self.name + \"_v\")\n      getattr(self.module, self.name + \"_bar\")\n      return True\n    except AttributeError:\n      return False\n\n  def _make_params(self):\n    w = getattr(self.module, self.name)\n\n    height = w.data.shape[0]\n    width = w.view(height, -1).data.shape[1]\n\n    u = Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)\n    v = Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)\n    u.data = l2normalize(u.data)\n    v.data = l2normalize(v.data)\n    w_bar = Parameter(w.data)\n\n    del self.module._parameters[self.name]\n    self.module.register_parameter(self.name + \"_u\", u)\n    self.module.register_parameter(self.name + \"_v\", v)\n    self.module.register_parameter(self.name + \"_bar\", w_bar)\n\n  def forward(self, *args):\n    self._update_u_v()\n    return self.module.forward(*args)\n\n\nclass SelfAttention(nn.Module):\n  \"\"\" Self Attention Layer\"\"\"\n\n  def __init__(self, in_dim, activation=F.relu):\n    super().__init__()\n    self.chanel_in = in_dim\n    self.activation = activation\n\n    self.theta = SpectralNorm(nn.Conv2d(in_channels=in_dim, out_channels=in_dim // 8, kernel_size=1, bias=False))\n    self.phi = SpectralNorm(nn.Conv2d(in_channels=in_dim, out_channels=in_dim // 8, kernel_size=1, bias=False))\n    self.pool = nn.MaxPool2d(2, 2)\n    self.g = SpectralNorm(nn.Conv2d(in_channels=in_dim, out_channels=in_dim // 2, kernel_size=1, bias=False))\n    self.o_conv = SpectralNorm(nn.Conv2d(in_channels=in_dim // 2, out_channels=in_dim, kernel_size=1, bias=False))\n    self.gamma = nn.Parameter(torch.zeros(1))\n\n    self.softmax = nn.Softmax(dim=-1)\n\n  def forward(self, x):\n    m_batchsize, C, width, height = x.size()\n    N = height * width\n\n    theta = self.theta(x)\n    phi = self.phi(x)\n    phi = self.pool(phi)\n    phi = phi.view(m_batchsize, -1, N // 4)\n    theta = theta.view(m_batchsize, -1, N)\n    theta = theta.permute(0, 2, 1)\n    attention = 
self.softmax(torch.bmm(theta, phi))\n    g = self.pool(self.g(x)).view(m_batchsize, -1, N // 4)\n    attn_g = torch.bmm(g, attention.permute(0, 2, 1)).view(m_batchsize, -1, width, height)\n    out = self.o_conv(attn_g)\n    return self.gamma * out + x\n\n\nclass ConditionalBatchNorm2d(nn.Module):\n  def __init__(self, num_features, num_classes, eps=1e-4, momentum=0.1):\n    super().__init__()\n    self.num_features = num_features\n    self.bn = nn.BatchNorm2d(num_features, affine=False, eps=eps, momentum=momentum)\n    self.gamma_embed = SpectralNorm(nn.Linear(num_classes, num_features, bias=False))\n    self.beta_embed = SpectralNorm(nn.Linear(num_classes, num_features, bias=False))\n\n  def forward(self, x, y):\n    out = self.bn(x)\n    gamma = self.gamma_embed(y) + 1\n    beta = self.beta_embed(y)\n    out = gamma.view(-1, self.num_features, 1, 1) * out + beta.view(-1, self.num_features, 1, 1)\n    return out\n\n\nclass GBlock(nn.Module):\n  def __init__(\n    self,\n    in_channel,\n    out_channel,\n    kernel_size=[3, 3],\n    padding=1,\n    stride=1,\n    n_class=None,\n    bn=True,\n    activation=F.relu,\n    upsample=True,\n    downsample=False,\n    z_dim=148,\n  ):\n    super().__init__()\n\n    self.conv0 = SpectralNorm(\n      nn.Conv2d(in_channel, out_channel, kernel_size, stride, padding, bias=True if bn else True)\n    )\n    self.conv1 = SpectralNorm(\n      nn.Conv2d(out_channel, out_channel, kernel_size, stride, padding, bias=True if bn else True)\n    )\n\n    self.skip_proj = False\n    if in_channel != out_channel or upsample or downsample:\n      self.conv_sc = SpectralNorm(nn.Conv2d(in_channel, out_channel, 1, 1, 0))\n      self.skip_proj = True\n\n    self.upsample = upsample\n    self.downsample = downsample\n    self.activation = activation\n    self.bn = bn\n    if bn:\n      self.HyperBN = ConditionalBatchNorm2d(in_channel, z_dim)\n      self.HyperBN_1 = ConditionalBatchNorm2d(out_channel, z_dim)\n\n  def forward(self, input, condition=None):\n    out = input\n\n    if self.bn:\n      out = self.HyperBN(out, condition)\n    out = self.activation(out)\n    if self.upsample:\n      out = F.interpolate(out, scale_factor=2)\n    out = self.conv0(out)\n    if self.bn:\n      out = self.HyperBN_1(out, condition)\n    out = self.activation(out)\n    out = self.conv1(out)\n\n    if self.downsample:\n      out = F.avg_pool2d(out, 2)\n\n    if self.skip_proj:\n      skip = input\n      if self.upsample:\n        skip = F.interpolate(skip, scale_factor=2)\n      skip = self.conv_sc(skip)\n      if self.downsample:\n        skip = F.avg_pool2d(skip, 2)\n    else:\n      skip = input\n    return out + skip\n\n\nclass Generator128(nn.Module):\n  def __init__(self, code_dim=120, n_class=1000, chn=96, debug=False):\n    super().__init__()\n\n    self.linear = nn.Linear(n_class, 128, bias=False)\n\n    if debug:\n      chn = 8\n\n    self.first_view = 16 * chn\n\n    self.G_linear = SpectralNorm(nn.Linear(20, 4 * 4 * 16 * chn))\n\n    z_dim = code_dim + 28\n\n    self.GBlock = nn.ModuleList([\n      GBlock(16 * chn, 16 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(16 * chn, 8 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(8 * chn, 4 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(4 * chn, 2 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(2 * chn, 1 * chn, n_class=n_class, z_dim=z_dim),\n    ])\n\n    self.sa_id = 4\n    self.num_split = len(self.GBlock) + 1\n    self.attention = SelfAttention(2 * chn)\n    self.ScaledCrossReplicaBN = nn.BatchNorm2d(1 * 
chn, eps=1e-4)\n    self.colorize = SpectralNorm(nn.Conv2d(1 * chn, 3, [3, 3], padding=1))\n\n  def forward(self, input, class_id):\n    codes = torch.chunk(input, self.num_split, 1)\n    class_emb = self.linear(class_id)  # 128\n\n    out = self.G_linear(codes[0])\n    out = out.view(-1, 4, 4, self.first_view).permute(0, 3, 1, 2)\n    for i, (code, GBlock) in enumerate(zip(codes[1:], self.GBlock)):\n      if i == self.sa_id:\n        out = self.attention(out)\n      condition = torch.cat([code, class_emb], 1)\n      out = GBlock(out, condition)\n\n    out = self.ScaledCrossReplicaBN(out)\n    out = F.relu(out)\n    out = self.colorize(out)\n    return torch.tanh(out)\n\n\nclass Generator256(nn.Module):\n  def __init__(self, code_dim=140, n_class=1000, chn=96, debug=False):\n    super().__init__()\n\n    self.linear = nn.Linear(n_class, 128, bias=False)\n\n    if debug:\n      chn = 8\n\n    self.first_view = 16 * chn\n\n    self.G_linear = SpectralNorm(nn.Linear(20, 4 * 4 * 16 * chn))\n\n    self.GBlock = nn.ModuleList([\n      GBlock(16 * chn, 16 * chn, n_class=n_class),\n      GBlock(16 * chn, 8 * chn, n_class=n_class),\n      GBlock(8 * chn, 8 * chn, n_class=n_class),\n      GBlock(8 * chn, 4 * chn, n_class=n_class),\n      GBlock(4 * chn, 2 * chn, n_class=n_class),\n      GBlock(2 * chn, 1 * chn, n_class=n_class),\n    ])\n\n    self.sa_id = 5\n    self.num_split = len(self.GBlock) + 1\n    self.attention = SelfAttention(2 * chn)\n    self.ScaledCrossReplicaBN = nn.BatchNorm2d(1 * chn, eps=1e-4)\n    self.colorize = SpectralNorm(nn.Conv2d(1 * chn, 3, [3, 3], padding=1))\n\n  def forward(self, input, class_id):\n    codes = torch.chunk(input, self.num_split, 1)\n    class_emb = self.linear(class_id)  # 128\n\n    out = self.G_linear(codes[0])\n    out = out.view(-1, 4, 4, self.first_view).permute(0, 3, 1, 2)\n    for i, (code, GBlock) in enumerate(zip(codes[1:], self.GBlock)):\n      if i == self.sa_id:\n        out = self.attention(out)\n      condition = torch.cat([code, class_emb], 1)\n      out = GBlock(out, condition)\n\n    out = self.ScaledCrossReplicaBN(out)\n    out = F.relu(out)\n    out = self.colorize(out)\n    return torch.tanh(out)\n\n\nclass Generator512(nn.Module):\n  def __init__(self, code_dim=128, n_class=1000, chn=96, debug=False):\n    super().__init__()\n\n    self.linear = nn.Linear(n_class, 128, bias=False)\n\n    if debug:\n      chn = 8\n\n    self.first_view = 16 * chn\n\n    self.G_linear = SpectralNorm(nn.Linear(16, 4 * 4 * 16 * chn))\n\n    z_dim = code_dim + 16\n\n    self.GBlock = nn.ModuleList([\n      GBlock(16 * chn, 16 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(16 * chn, 8 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(8 * chn, 8 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(8 * chn, 4 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(4 * chn, 2 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(2 * chn, 1 * chn, n_class=n_class, z_dim=z_dim),\n      GBlock(1 * chn, 1 * chn, n_class=n_class, z_dim=z_dim),\n    ])\n\n    self.sa_id = 4\n    self.num_split = len(self.GBlock) + 1\n    self.attention = SelfAttention(4 * chn)\n    self.ScaledCrossReplicaBN = nn.BatchNorm2d(1 * chn)\n    self.colorize = SpectralNorm(nn.Conv2d(1 * chn, 3, [3, 3], padding=1))\n\n  def forward(self, input, class_id):\n    codes = torch.chunk(input, self.num_split, 1)\n    class_emb = self.linear(class_id)  # 128\n\n    out = self.G_linear(codes[0])\n    out = out.view(-1, 4, 4, self.first_view).permute(0, 3, 1, 2)\n    for i, (code, GBlock) in 
enumerate(zip(codes[1:], self.GBlock)):\n      if i == self.sa_id:\n        out = self.attention(out)\n      condition = torch.cat([code, class_emb], 1)\n      out = GBlock(out, condition)\n\n    out = self.ScaledCrossReplicaBN(out)\n    out = F.relu(out)\n    out = self.colorize(out)\n    return torch.tanh(out)\n\n\nclass Discriminator(nn.Module):\n  def __init__(self, n_class=1000, chn=96, debug=False):\n    super().__init__()\n\n    def conv(in_channel, out_channel, downsample=True):\n      return GBlock(in_channel, out_channel, bn=False, upsample=False, downsample=downsample)\n\n    if debug:\n      chn = 8\n    self.debug = debug\n\n    self.pre_conv = nn.Sequential(\n      SpectralNorm(nn.Conv2d(3, 1 * chn, 3, padding=1)),\n      nn.ReLU(),\n      SpectralNorm(nn.Conv2d(1 * chn, 1 * chn, 3, padding=1)),\n      nn.AvgPool2d(2),\n    )\n    self.pre_skip = SpectralNorm(nn.Conv2d(3, 1 * chn, 1))\n\n    self.conv = nn.Sequential(\n      conv(1 * chn, 1 * chn, downsample=True),\n      conv(1 * chn, 2 * chn, downsample=True),\n      SelfAttention(2 * chn),\n      conv(2 * chn, 2 * chn, downsample=True),\n      conv(2 * chn, 4 * chn, downsample=True),\n      conv(4 * chn, 8 * chn, downsample=True),\n      conv(8 * chn, 8 * chn, downsample=True),\n      conv(8 * chn, 16 * chn, downsample=True),\n      conv(16 * chn, 16 * chn, downsample=False),\n    )\n\n    self.linear = SpectralNorm(nn.Linear(16 * chn, 1))\n\n    self.embed = nn.Embedding(n_class, 16 * chn)\n    self.embed.weight.data.uniform_(-0.1, 0.1)\n    self.embed = SpectralNorm(self.embed)\n\n  def forward(self, input, class_id):\n\n    out = self.pre_conv(input)\n    out += self.pre_skip(F.avg_pool2d(input, 2))\n    out = self.conv(out)\n    out = F.relu(out)\n    out = out.view(out.size(0), out.size(1), -1)\n    out = out.sum(2)\n    out_linear = self.linear(out).squeeze(1)\n    embed = self.embed(class_id)\n\n    prod = (out * embed).sum(1)\n\n    return out_linear + prod"
  },
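A quick way to sanity-check the v1-style generator classes above (the reference implementation that converter.py below imports as `biggan_v1` from its own folder) is a debug-sized forward pass. This is a minimal sketch, assuming the file is importable as `biggan_v1`; note that `class_id` is a float one-hot (or soft) vector feeding an `nn.Linear`, not an integer class index.

```python
import torch
import torch.nn.functional as F
import biggan_v1  # assumed module name, matching converter.py's import

G = biggan_v1.Generator128(debug=True)  # debug=True shrinks chn from 96 to 8
z = torch.randn(2, 120)                 # code_dim=120, chunked into 6 codes of 20
y = F.one_hot(torch.randint(0, 1000, (2,)), num_classes=1000).float()
with torch.no_grad():
    img = G(z, y)                       # tanh output in [-1, 1]
print(img.shape)                        # torch.Size([2, 3, 128, 128])
```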
  {
    "path": "FQ-BigGAN/TFHub/converter.py",
    "content": "\"\"\"Utilities for converting TFHub BigGAN generator weights to PyTorch.\n\nRecommended usage:\n\nTo convert all BigGAN variants and generate test samples, use:\n\n```bash\nCUDA_VISIBLE_DEVICES=0 python converter.py --generate_samples\n```\n\nSee `parse_args` for additional options.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport h5py\nimport torch\nimport torch.nn as nn\nfrom torchvision.utils import save_image\nimport tensorflow as tf\nimport tensorflow_hub as hub\nimport parse\n\n# import reference biggan from this folder\nimport biggan_v1 as biggan_for_conversion\n\n# Import model from main folder\nsys.path.append('..')\nimport BigGAN\n\n\n\n\nDEVICE = 'cuda'\nHDF5_TMPL = 'biggan-{}.h5'\nPTH_TMPL = 'biggan-{}.pth'\nMODULE_PATH_TMPL = 'https://tfhub.dev/deepmind/biggan-{}/2'\nZ_DIMS = {\n  128: 120,\n  256: 140,\n  512: 128}\nRESOLUTIONS = list(Z_DIMS)\n\n\ndef dump_tfhub_to_hdf5(module_path, hdf5_path, redownload=False):\n  \"\"\"Loads TFHub weights and saves them to intermediate HDF5 file.\n\n  Args:\n    module_path ([Path-like]): Path to TFHub module.\n    hdf5_path ([Path-like]): Path to output HDF5 file.\n\n  Returns:\n    [h5py.File]: Loaded hdf5 file containing module weights.\n  \"\"\"\n  if os.path.exists(hdf5_path) and (not redownload):\n    print('Loading BigGAN hdf5 file from:', hdf5_path)\n    return h5py.File(hdf5_path, 'r')\n\n  print('Loading BigGAN module from:', module_path)\n  tf.reset_default_graph()\n  hub.Module(module_path)\n  print('Loaded BigGAN module from:', module_path)\n\n  initializer = tf.global_variables_initializer()\n  sess = tf.Session()\n  sess.run(initializer)\n\n  print('Saving BigGAN weights to :', hdf5_path)\n  h5f = h5py.File(hdf5_path, 'w')\n  for var in tf.global_variables():\n    val = sess.run(var)\n    h5f.create_dataset(var.name, data=val)\n    print(f'Saving {var.name} with shape {val.shape}')\n  h5f.close()\n  return h5py.File(hdf5_path, 'r')\n\n\nclass TFHub2Pytorch(object):\n\n  TF_ROOT = 'module'\n\n  NUM_GBLOCK = {\n    128: 5,\n    256: 6,\n    512: 7\n  }\n\n  w = 'w'\n  b = 'b'\n  u = 'u0'\n  v = 'u1'\n  gamma = 'gamma'\n  beta = 'beta'\n\n  def __init__(self, state_dict, tf_weights, resolution=256, load_ema=True, verbose=False):\n    self.state_dict = state_dict\n    self.tf_weights = tf_weights\n    self.resolution = resolution\n    self.verbose = verbose\n    if load_ema:\n      for name in ['w', 'b', 'gamma', 'beta']:\n        setattr(self, name, getattr(self, name) + '/ema_b999900')\n\n  def load(self):\n    self.load_generator()\n    return self.state_dict\n\n  def load_generator(self):\n    GENERATOR_ROOT = os.path.join(self.TF_ROOT, 'Generator')\n\n    for i in range(self.NUM_GBLOCK[self.resolution]):\n      name_tf = os.path.join(GENERATOR_ROOT, 'GBlock')\n      name_tf += f'_{i}' if i != 0 else ''\n      self.load_GBlock(f'GBlock.{i}.', name_tf)\n\n    self.load_attention('attention.', os.path.join(GENERATOR_ROOT, 'attention'))\n    self.load_linear('linear', os.path.join(self.TF_ROOT, 'linear'), bias=False)\n    self.load_snlinear('G_linear', os.path.join(GENERATOR_ROOT, 'G_Z', 'G_linear'))\n    self.load_colorize('colorize', os.path.join(GENERATOR_ROOT, 'conv_2d'))\n    self.load_ScaledCrossReplicaBNs('ScaledCrossReplicaBN',\n                    os.path.join(GENERATOR_ROOT, 'ScaledCrossReplicaBN'))\n\n  def load_linear(self, name_pth, name_tf, bias=True):\n    self.state_dict[name_pth + '.weight'] = self.load_tf_tensor(name_tf, self.w).permute(1, 0)\n    if bias:\n      
self.state_dict[name_pth + '.bias'] = self.load_tf_tensor(name_tf, self.b)\n\n  def load_snlinear(self, name_pth, name_tf, bias=True):\n    self.state_dict[name_pth + '.module.weight_u'] = self.load_tf_tensor(name_tf, self.u).squeeze()\n    self.state_dict[name_pth + '.module.weight_v'] = self.load_tf_tensor(name_tf, self.v).squeeze()\n    self.state_dict[name_pth + '.module.weight_bar'] = self.load_tf_tensor(name_tf, self.w).permute(1, 0)\n    if bias:\n      self.state_dict[name_pth + '.module.bias'] = self.load_tf_tensor(name_tf, self.b)\n\n  def load_colorize(self, name_pth, name_tf):\n    self.load_snconv(name_pth, name_tf)\n\n  def load_GBlock(self, name_pth, name_tf):\n    self.load_convs(name_pth, name_tf)\n    self.load_HyperBNs(name_pth, name_tf)\n\n  def load_convs(self, name_pth, name_tf):\n    self.load_snconv(name_pth + 'conv0', os.path.join(name_tf, 'conv0'))\n    self.load_snconv(name_pth + 'conv1', os.path.join(name_tf, 'conv1'))\n    self.load_snconv(name_pth + 'conv_sc', os.path.join(name_tf, 'conv_sc'))\n\n  def load_snconv(self, name_pth, name_tf, bias=True):\n    if self.verbose:\n      print(f'loading: {name_pth} from {name_tf}')\n    self.state_dict[name_pth + '.module.weight_u'] = self.load_tf_tensor(name_tf, self.u).squeeze()\n    self.state_dict[name_pth + '.module.weight_v'] = self.load_tf_tensor(name_tf, self.v).squeeze()\n    self.state_dict[name_pth + '.module.weight_bar'] = self.load_tf_tensor(name_tf, self.w).permute(3, 2, 0, 1)\n    if bias:\n      self.state_dict[name_pth + '.module.bias'] = self.load_tf_tensor(name_tf, self.b).squeeze()\n\n  def load_conv(self, name_pth, name_tf, bias=True):\n\n    self.state_dict[name_pth + '.weight_u'] = self.load_tf_tensor(name_tf, self.u).squeeze()\n    self.state_dict[name_pth + '.weight_v'] = self.load_tf_tensor(name_tf, self.v).squeeze()\n    self.state_dict[name_pth + '.weight_bar'] = self.load_tf_tensor(name_tf, self.w).permute(3, 2, 0, 1)\n    if bias:\n      self.state_dict[name_pth + '.bias'] = self.load_tf_tensor(name_tf, self.b)\n\n  def load_HyperBNs(self, name_pth, name_tf):\n    self.load_HyperBN(name_pth + 'HyperBN', os.path.join(name_tf, 'HyperBN'))\n    self.load_HyperBN(name_pth + 'HyperBN_1', os.path.join(name_tf, 'HyperBN_1'))\n\n  def load_ScaledCrossReplicaBNs(self, name_pth, name_tf):\n    self.state_dict[name_pth + '.bias'] = self.load_tf_tensor(name_tf, self.beta).squeeze()\n    self.state_dict[name_pth + '.weight'] = self.load_tf_tensor(name_tf, self.gamma).squeeze()\n    self.state_dict[name_pth + '.running_mean'] = self.load_tf_tensor(name_tf + 'bn', 'accumulated_mean')\n    self.state_dict[name_pth + '.running_var'] = self.load_tf_tensor(name_tf + 'bn', 'accumulated_var')\n    self.state_dict[name_pth + '.num_batches_tracked'] = torch.tensor(\n      self.tf_weights[os.path.join(name_tf + 'bn', 'accumulation_counter:0')][()], dtype=torch.float32)\n\n  def load_HyperBN(self, name_pth, name_tf):\n    if self.verbose:\n      print(f'loading: {name_pth} from {name_tf}')\n    beta = name_pth + '.beta_embed.module'\n    gamma = name_pth + '.gamma_embed.module'\n    self.state_dict[beta + '.weight_u'] = self.load_tf_tensor(os.path.join(name_tf, 'beta'), self.u).squeeze()\n    self.state_dict[gamma + '.weight_u'] = self.load_tf_tensor(os.path.join(name_tf, 'gamma'), self.u).squeeze()\n    self.state_dict[beta + '.weight_v'] = self.load_tf_tensor(os.path.join(name_tf, 'beta'), self.v).squeeze()\n    self.state_dict[gamma + '.weight_v'] = self.load_tf_tensor(os.path.join(name_tf, 'gamma'), 
self.v).squeeze()\n    self.state_dict[beta + '.weight_bar'] = self.load_tf_tensor(os.path.join(name_tf, 'beta'), self.w).permute(1, 0)\n    self.state_dict[gamma + '.weight_bar'] = self.load_tf_tensor(os.path.join(name_tf, 'gamma'), self.w).permute(1, 0)\n\n    cr_bn_name = name_tf.replace('HyperBN', 'CrossReplicaBN')\n    self.state_dict[name_pth + '.bn.running_mean'] = self.load_tf_tensor(cr_bn_name, 'accumulated_mean')\n    self.state_dict[name_pth + '.bn.running_var'] = self.load_tf_tensor(cr_bn_name, 'accumulated_var')\n    self.state_dict[name_pth + '.bn.num_batches_tracked'] = torch.tensor(\n      self.tf_weights[os.path.join(cr_bn_name, 'accumulation_counter:0')][()], dtype=torch.float32)\n\n  def load_attention(self, name_pth, name_tf):\n    self.load_snconv(name_pth + 'theta', os.path.join(name_tf, 'theta'), bias=False)\n    self.load_snconv(name_pth + 'phi', os.path.join(name_tf, 'phi'), bias=False)\n    self.load_snconv(name_pth + 'g', os.path.join(name_tf, 'g'), bias=False)\n    self.load_snconv(name_pth + 'o_conv', os.path.join(name_tf, 'o_conv'), bias=False)\n    self.state_dict[name_pth + 'gamma'] = self.load_tf_tensor(name_tf, self.gamma)\n\n  def load_tf_tensor(self, prefix, var, device='0'):\n    name = os.path.join(prefix, var) + f':{device}'\n    return torch.from_numpy(self.tf_weights[name][:])\n\n# Convert from v1: this function maps the converted v1 state dict onto the\n# parameter names expected by the main BigGAN.Generator in this repo.\ndef convert_from_v1(hub_dict, resolution=128):\n  weightname_dict = {'weight_u': 'u0', 'weight_bar': 'weight', 'bias': 'bias'}\n  convnum_dict = {'conv0': 'conv1', 'conv1': 'conv2', 'conv_sc': 'conv_sc'}\n  attention_blocknum = {128: 3, 256: 4, 512: 3}[resolution]\n  hub2me = {'linear.weight': 'shared.weight', # This is actually the shared weight\n          # Linear stuff\n          'G_linear.module.weight_bar': 'linear.weight',\n          'G_linear.module.bias': 'linear.bias',\n          'G_linear.module.weight_u': 'linear.u0',\n          # output layer stuff\n          'ScaledCrossReplicaBN.weight': 'output_layer.0.gain',\n          'ScaledCrossReplicaBN.bias': 'output_layer.0.bias',\n          'ScaledCrossReplicaBN.running_mean': 'output_layer.0.stored_mean',\n          'ScaledCrossReplicaBN.running_var': 'output_layer.0.stored_var',\n          'colorize.module.weight_bar': 'output_layer.2.weight',\n          'colorize.module.bias': 'output_layer.2.bias',\n          'colorize.module.weight_u': 'output_layer.2.u0',\n          # Attention stuff\n          'attention.gamma': 'blocks.%d.1.gamma' % attention_blocknum,\n          'attention.theta.module.weight_u': 'blocks.%d.1.theta.u0' % attention_blocknum,\n          'attention.theta.module.weight_bar': 'blocks.%d.1.theta.weight' % attention_blocknum,\n          'attention.phi.module.weight_u': 'blocks.%d.1.phi.u0' % attention_blocknum,\n          'attention.phi.module.weight_bar': 'blocks.%d.1.phi.weight' % attention_blocknum,\n          'attention.g.module.weight_u': 'blocks.%d.1.g.u0' % attention_blocknum,\n          'attention.g.module.weight_bar': 'blocks.%d.1.g.weight' % attention_blocknum,\n          'attention.o_conv.module.weight_u': 'blocks.%d.1.o.u0' % attention_blocknum,\n          'attention.o_conv.module.weight_bar': 'blocks.%d.1.o.weight' % attention_blocknum,\n          }\n\n  # Loop over the hub dict and build the hub2me map\n  for name in hub_dict.keys():\n    if 'GBlock' in name:\n      if 'HyperBN' not in name: # it's a conv\n        out = parse.parse('GBlock.{:d}.{}.module.{}', name)\n        blocknum, convnum, weightname = out\n        if 
weightname not in weightname_dict:\n          continue # skip weight_v\n        out_name = 'blocks.%d.0.%s.%s' % (blocknum, convnum_dict[convnum], weightname_dict[weightname]) # Increment conv number by 1\n      else: # hyperbn not conv\n        BNnum = 2 if 'HyperBN_1' in name else 1\n        if 'embed' in name:\n          out = parse.parse('GBlock.{:d}.{}.module.{}', name)\n          blocknum, gamma_or_beta, weightname = out\n          if weightname not in weightname_dict: # Ignore weight_v\n            continue\n          out_name = 'blocks.%d.0.bn%d.%s.%s' % (blocknum, BNnum, 'gain' if 'gamma' in gamma_or_beta else 'bias', weightname_dict[weightname])\n        else:\n          out = parse.parse('GBlock.{:d}.{}.bn.{}', name)\n          blocknum, dummy, mean_or_var = out\n          if 'num_batches_tracked' in mean_or_var:\n            continue\n          out_name = 'blocks.%d.0.bn%d.%s' % (blocknum, BNnum, 'stored_mean' if 'mean' in mean_or_var else 'stored_var')\n      hub2me[name] = out_name\n\n  # Invert the hub2me map\n  me2hub = {hub2me[item]: item for item in hub2me}\n  new_dict = {}\n  dimz_dict = {128: 20, 256: 20, 512: 16}\n  for item in me2hub:\n    # Swap input dim ordering on batchnorm bois to account for my arbitrary change of ordering when concatenating Ys and Zs\n    if ('bn' in item and 'weight' in item) and ('gain' in item or 'bias' in item) and ('output_layer' not in item):\n      new_dict[item] = torch.cat([hub_dict[me2hub[item]][:, -128:], hub_dict[me2hub[item]][:, :dimz_dict[resolution]]], 1)\n    # Reshape the first linear weight, bias, and u0\n    elif item == 'linear.weight':\n      new_dict[item] = hub_dict[me2hub[item]].contiguous().view(4, 4, 96 * 16, -1).permute(2, 0, 1, 3).contiguous().view(-1, dimz_dict[resolution])\n    elif item == 'linear.bias':\n      new_dict[item] = hub_dict[me2hub[item]].view(4, 4, 96 * 16).permute(2, 0, 1).contiguous().view(-1)\n    elif item == 'linear.u0':\n      new_dict[item] = hub_dict[me2hub[item]].view(4, 4, 96 * 16).permute(2, 0, 1).contiguous().view(1, -1)\n    elif me2hub[item] == 'linear.weight': # THIS IS THE SHARED WEIGHT NOT THE FIRST LINEAR LAYER\n      # Transpose shared weight so that it's an embedding\n      new_dict[item] = hub_dict[me2hub[item]].t()\n    elif 'weight_u' in me2hub[item]: # Unsqueeze u0s\n      new_dict[item] = hub_dict[me2hub[item]].unsqueeze(0)\n    else:\n      new_dict[item] = hub_dict[me2hub[item]]\n  return new_dict\n\ndef get_config(resolution):\n  attn_dict = {128: '64', 256: '128', 512: '64'}\n  dim_z_dict = {128: 120, 256: 140, 512: 128}\n  config = {'G_param': 'SN', 'D_param': 'SN',\n           'G_ch': 96, 'D_ch': 96,\n           'D_wide': True, 'G_shared': True,\n           'shared_dim': 128, 'dim_z': dim_z_dict[resolution],\n           'hier': True, 'cross_replica': False,\n           'mybn': False, 'G_activation': nn.ReLU(inplace=True),\n           'G_attn': attn_dict[resolution],\n           'norm_style': 'bn',\n           'G_init': 'ortho', 'skip_init': True, 'no_optim': True,\n           'G_fp16': False, 'G_mixed_precision': False,\n           'accumulate_stats': False, 'num_standing_accumulations': 16,\n           'G_eval_mode': True,\n           'BN_eps': 1e-04, 'SN_eps': 1e-04,\n           'num_G_SVs': 1, 'num_G_SV_itrs': 1, 'resolution': resolution,\n           'n_classes': 1000}\n  return config\n\n\ndef convert_biggan(resolution, weight_dir, redownload=False, no_ema=False, verbose=False):\n  module_path = MODULE_PATH_TMPL.format(resolution)\n  
hdf5_path = os.path.join(weight_dir, HDF5_TMPL.format(resolution))\n  pth_path = os.path.join(weight_dir, PTH_TMPL.format(resolution))\n\n  tf_weights = dump_tfhub_to_hdf5(module_path, hdf5_path, redownload=redownload)\n  G_temp = getattr(biggan_for_conversion, f'Generator{resolution}')()\n  state_dict_temp = G_temp.state_dict()\n\n  converter = TFHub2Pytorch(state_dict_temp, tf_weights, resolution=resolution,\n                load_ema=(not no_ema), verbose=verbose)\n  state_dict_v1 = converter.load()\n  state_dict = convert_from_v1(state_dict_v1, resolution)\n  # Get the config, build the model\n  config = get_config(resolution)\n  G = BigGAN.Generator(**config)\n  G.load_state_dict(state_dict, strict=False) # Ignore missing sv0 entries\n  torch.save(state_dict, pth_path)\n  return G\n\n\ndef generate_sample(G, z_dim, batch_size, filename, parallel=False):\n  # Note: z_dim is unused; the latent size is read from G.dim_z below.\n  G.eval()\n  G.to(DEVICE)\n  with torch.no_grad():\n    z = torch.randn(batch_size, G.dim_z).to(DEVICE)\n    y = torch.randint(low=0, high=1000, size=(batch_size,),\n        device=DEVICE, dtype=torch.int64, requires_grad=False)\n    if parallel:\n      images = nn.parallel.data_parallel(G, (z, G.shared(y)))\n    else:\n      images = G(z, G.shared(y))\n  save_image(images, filename, scale_each=True, normalize=True)\n\ndef parse_args():\n  usage = 'Parser for conversion script.'\n  parser = argparse.ArgumentParser(description=usage)\n  parser.add_argument(\n    '--resolution', '-r', type=int, default=None, choices=[128, 256, 512],\n    help='Resolution of TFHub module to convert. Converts all resolutions if None.')\n  parser.add_argument(\n    '--redownload', action='store_true', default=False,\n    help='Redownload weights and overwrite current hdf5 file, if present.')\n  parser.add_argument(\n    '--weights_dir', type=str, default='pretrained_weights')\n  parser.add_argument(\n    '--samples_dir', type=str, default='pretrained_samples')\n  parser.add_argument(\n    '--no_ema', action='store_true', default=False,\n    help='Do not load ema weights.')\n  parser.add_argument(\n    '--verbose', action='store_true', default=False,\n    help='Additional logging.')\n  parser.add_argument(\n    '--generate_samples', action='store_true', default=False,\n    help='Generate test sample with pretrained model.')\n  parser.add_argument(\n    '--batch_size', type=int, default=64,\n    help='Batch size used for test sample.')\n  parser.add_argument(\n    '--parallel', action='store_true', default=False,\n    help='Parallelize G?')\n  args = parser.parse_args()\n  return args\n\n\nif __name__ == '__main__':\n\n  args = parse_args()\n  os.makedirs(args.weights_dir, exist_ok=True)\n  os.makedirs(args.samples_dir, exist_ok=True)\n\n  if args.resolution is not None:\n    G = convert_biggan(args.resolution, args.weights_dir,\n               redownload=args.redownload,\n               no_ema=args.no_ema, verbose=args.verbose)\n    if args.generate_samples:\n      filename = os.path.join(args.samples_dir, f'biggan{args.resolution}_samples.jpg')\n      print('Generating samples...')\n      generate_sample(G, Z_DIMS[args.resolution], args.batch_size, filename, args.parallel)\n  else:\n    for res in RESOLUTIONS:\n      G = convert_biggan(res, args.weights_dir,\n                 redownload=args.redownload,\n                 no_ema=args.no_ema, verbose=args.verbose)\n      if args.generate_samples:\n        filename = os.path.join(args.samples_dir, 
f'biggan{res}_samples.jpg')\n        print('Generating samples...')\n        generate_sample(G, Z_DIMS[res], args.batch_size, filename, args.parallel)"
  },
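converter.py's pipeline is: fetch the TFHub module, dump its raw TF variables to HDF5 (`dump_tfhub_to_hdf5`), load them into the v1 reference generator's state dict (`TFHub2Pytorch`), remap that onto the main BigGAN.Generator layout (`convert_from_v1`), and save the result. Besides the CLI shown in its docstring, it can be driven programmatically; a minimal sketch, assuming a TF1-compatible environment with `tensorflow_hub`, `h5py`, and `parse` installed and a CUDA device available (DEVICE is hardcoded to 'cuda'):

```python
from converter import convert_biggan, generate_sample

# Download + convert the 128x128 module; writes biggan-128.h5 / biggan-128.pth
G = convert_biggan(128, 'pretrained_weights')

# Sample a quick grid to verify the weights survived the round trip
generate_sample(G, 120, batch_size=16, filename='check_128.jpg')
```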
  {
    "path": "FQ-BigGAN/animal_hash.py",
    "content": "c = ['Aardvark', 'Abyssinian', 'Affenpinscher', 'Akbash', 'Akita', 'Albatross',\n     'Alligator', 'Alpaca', 'Angelfish', 'Ant', 'Anteater', 'Antelope', 'Ape',\n     'Armadillo', 'Ass', 'Avocet', 'Axolotl', 'Baboon', 'Badger', 'Balinese',\n     'Bandicoot', 'Barb', 'Barnacle', 'Barracuda', 'Bat', 'Beagle', 'Bear',\n     'Beaver', 'Bee', 'Beetle', 'Binturong', 'Bird', 'Birman', 'Bison',\n     'Bloodhound', 'Boar', 'Bobcat', 'Bombay', 'Bongo', 'Bonobo', 'Booby',\n     'Budgerigar', 'Buffalo', 'Bulldog', 'Bullfrog', 'Burmese', 'Butterfly',\n     'Caiman', 'Camel', 'Capybara', 'Caracal', 'Caribou', 'Cassowary', 'Cat',\n     'Caterpillar', 'Catfish', 'Cattle', 'Centipede', 'Chameleon', 'Chamois',\n     'Cheetah', 'Chicken', 'Chihuahua', 'Chimpanzee', 'Chinchilla', 'Chinook', \n     'Chipmunk', 'Chough', 'Cichlid', 'Clam', 'Coati', 'Cobra', 'Cockroach',\n     'Cod', 'Collie', 'Coral', 'Cormorant', 'Cougar', 'Cow', 'Coyote', \n     'Crab', 'Crane', 'Crocodile', 'Crow', 'Curlew', 'Cuscus', 'Cuttlefish',\n     'Dachshund', 'Dalmatian', 'Deer', 'Dhole', 'Dingo', 'Dinosaur', 'Discus',\n     'Dodo', 'Dog', 'Dogball', 'Dogfish', 'Dolphin', 'Donkey', 'Dormouse',\n     'Dove', 'Dragonfly', 'Drever', 'Duck', 'Dugong', 'Dunker', 'Dunlin', \n     'Eagle', 'Earwig', 'Echidna', 'Eel', 'Eland', 'Elephant', 'ElephantSeal',\n     'Elk', 'Emu', 'Falcon', 'Ferret', 'Finch', 'Fish', 'Flamingo', 'Flounder',\n     'Fly', 'Fossa', 'Fox', 'Frigatebird', 'Frog', 'Galago', 'Gar', 'Gaur', \n     'Gazelle', 'Gecko', 'Gerbil', 'Gharial', 'GiantPanda', 'Gibbon', 'Giraffe',\n     'Gnat', 'Gnu', 'Goat', 'Goldfinch', 'Goldfish', 'Goose', 'Gopher',\n     'Gorilla', 'Goshawk', 'Grasshopper', 'Greyhound', 'Grouse', 'Guanaco', \n     'GuineaFowl', 'GuineaPig', 'Gull', 'Guppy', 'Hamster', 'Hare', 'Harrier', \n     'Havanese', 'Hawk', 'Hedgehog', 'Heron', 'Herring', 'Himalayan', \n     'Hippopotamus', 'Hornet', 'Horse', 'Human', 'Hummingbird', 'Hyena', \n     'Ibis', 'Iguana', 'Impala', 'Indri', 'Insect', 'Jackal', 'Jaguar', \n     'Javanese', 'Jay', 'Jellyfish', 'Kakapo', 'Kangaroo', 'Kingfisher', \n     'Kiwi', 'Koala', 'KomodoDragon', 'Kouprey', 'Kudu', 'Labradoodle', \n     'Ladybird', 'Lapwing', 'Lark', 'Lemming', 'Lemur', 'Leopard', 'Liger',\n     'Lion', 'Lionfish', 'Lizard', 'Llama', 'Lobster', 'Locust', 'Loris', \n     'Louse', 'Lynx', 'Lyrebird', 'Macaw', 'Magpie', 'Mallard', 'Maltese',\n     'Manatee', 'Mandrill', 'Markhor', 'Marten', 'Mastiff', 'Mayfly', 'Meerkat',\n     'Millipede', 'Mink', 'Mole', 'Molly', 'Mongoose', 'Mongrel', 'Monkey',\n     'Moorhen', 'Moose', 'Mosquito', 'Moth', 'Mouse', 'Mule', 'Narwhal',\n     'Neanderthal', 'Newfoundland', 'Newt', 'Nightingale', 'Numbat', 'Ocelot',\n     'Octopus', 'Okapi', 'Olm', 'Opossum', 'Orang-utan', 'Oryx', 'Ostrich', \n     'Otter', 'Owl', 'Ox', 'Oyster', 'Pademelon', 'Panther', 'Parrot',\n     'Partridge', 'Peacock', 'Peafowl', 'Pekingese', 'Pelican', 'Penguin', \n     'Persian', 'Pheasant', 'Pig', 'Pigeon', 'Pika', 'Pike', 'Piranha', \n     'Platypus', 'Pointer', 'Pony', 'Poodle', 'Porcupine', 'Porpoise',\n     'Possum', 'PrairieDog', 'Prawn', 'Puffin', 'Pug', 'Puma', 'Quail', \n     'Quelea', 'Quetzal', 'Quokka', 'Quoll', 'Rabbit', 'Raccoon', 'Ragdoll', \n     'Rail', 'Ram', 'Rat', 'Rattlesnake', 'Raven', 'RedDeer', 'RedPanda',\n     'Reindeer', 'Rhinoceros', 'Robin', 'Rook', 'Rottweiler', 'Ruff',\n     'Salamander', 'Salmon', 'SandDollar', 'Sandpiper', 'Saola', \n     'Sardine', 'Scorpion', 'SeaLion', 'SeaUrchin', 'Seahorse',\n     'Seal', 'Serval', 
'Shark', 'Sheep', 'Shrew', 'Shrimp', 'Siamese',\n     'Siberian', 'Skunk', 'Sloth', 'Snail', 'Snake', 'Snowshoe', 'Somali', \n     'Sparrow', 'Spider', 'Sponge', 'Squid', 'Squirrel', 'Starfish', 'Starling',\n     'Stingray', 'Stinkbug', 'Stoat', 'Stork', 'Swallow', 'Swan', 'Tang', \n     'Tapir', 'Tarsier', 'Termite', 'Tetra', 'Tiffany', 'Tiger', 'Toad', \n     'Tortoise', 'Toucan', 'Tropicbird', 'Trout', 'Tuatara', 'Turkey', \n     'Turtle', 'Uakari', 'Uguisu', 'Umbrellabird', 'Viper', 'Vulture',\n     'Wallaby', 'Walrus', 'Warthog', 'Wasp', 'WaterBuffalo', 'Weasel',\n     'Whale', 'Whippet', 'Wildebeest', 'Wolf', 'Wolverine', 'Wombat', \n     'Woodcock', 'Woodlouse', 'Woodpecker', 'Worm', 'Wrasse', 'Wren', \n     'Yak', 'Zebra', 'Zebu', 'Zonkey']\na = ['able', 'above', 'absent', 'absolute', 'abstract', 'abundant', 'academic',\n     'acceptable', 'accepted', 'accessible', 'accurate', 'accused', 'active', \n     'actual', 'acute', 'added', 'additional', 'adequate', 'adjacent', \n     'administrative', 'adorable', 'advanced', 'adverse', 'advisory', \n     'aesthetic', 'afraid', 'african', 'aggregate', 'aggressive', 'agreeable', \n     'agreed', 'agricultural', 'alert', 'alive', 'alleged', 'allied', 'alone', \n     'alright', 'alternative', 'amateur', 'amazing', 'ambitious', 'american', \n     'amused', 'ancient', 'angry', 'annoyed', 'annual', 'anonymous', 'anxious', \n     'appalling', 'apparent', 'applicable', 'appropriate', 'arab', 'arbitrary',\n     'architectural', 'armed', 'arrogant', 'artificial', 'artistic', 'ashamed', \n     'asian', 'asleep', 'assistant', 'associated', 'atomic', 'attractive',\n     'australian', 'automatic', 'autonomous', 'available', 'average',\n     'awake', 'aware', 'awful', 'awkward', 'back', 'bad', 'balanced', 'bare', \n     'basic', 'beautiful', 'beneficial', 'better', 'bewildered', 'big', \n     'binding', 'biological', 'bitter', 'bizarre', 'black', 'blank', 'blind', \n     'blonde', 'bloody', 'blue', 'blushing', 'boiling', 'bold', 'bored', \n     'boring', 'bottom', 'brainy', 'brave', 'breakable', 'breezy', 'brief', \n     'bright', 'brilliant', 'british', 'broad', 'broken', 'brown', 'bumpy', \n     'burning', 'busy', 'calm', 'canadian', 'capable', 'capitalist', 'careful',\n     'casual', 'catholic', 'causal', 'cautious', 'central', 'certain', \n     'changing', 'characteristic', 'charming', 'cheap', 'cheerful', 'chemical', \n     'chief', 'chilly', 'chinese', 'chosen', 'christian', 'chronic', 'chubby', \n     'circular', 'civic', 'civil', 'civilian', 'classic', 'classical', 'clean',\n     'clear', 'clever', 'clinical', 'close', 'closed', 'cloudy', 'clumsy', \n     'coastal', 'cognitive', 'coherent', 'cold', 'collective', 'colonial', \n     'colorful', 'colossal', 'coloured', 'colourful', 'combative', 'combined',\n     'comfortable', 'coming', 'commercial', 'common', 'communist', 'compact', \n     'comparable', 'comparative', 'compatible', 'competent', 'competitive', \n     'complete', 'complex', 'complicated', 'comprehensive', 'compulsory',\n     'conceptual', 'concerned', 'concrete', 'condemned', 'confident', \n     'confidential', 'confused', 'conscious', 'conservation', 'conservative',\n     'considerable', 'consistent', 'constant', 'constitutional', \n     'contemporary', 'content', 'continental', 'continued', 'continuing', \n     'continuous', 'controlled', 'controversial', 'convenient', 'conventional',\n     'convinced', 'convincing', 'cooing', 'cool', 'cooperative', 'corporate',\n     'correct', 'corresponding', 'costly', 'courageous', 'crazy', 
'creative', \n     'creepy', 'criminal', 'critical', 'crooked', 'crowded', 'crucial', \n     'crude', 'cruel', 'cuddly', 'cultural', 'curious', 'curly', 'current', \n     'curved', 'cute', 'daily', 'damaged', 'damp', 'dangerous', 'dark', 'dead',\n     'deaf', 'deafening', 'dear', 'decent', 'decisive', 'deep', 'defeated', \n     'defensive', 'defiant', 'definite', 'deliberate', 'delicate', 'delicious',\n     'delighted', 'delightful', 'democratic', 'dependent', 'depressed', \n     'desirable', 'desperate', 'detailed', 'determined', 'developed', \n     'developing', 'devoted', 'different', 'difficult', 'digital', 'diplomatic', \n     'direct', 'dirty', 'disabled', 'disappointed', 'disastrous', \n     'disciplinary', 'disgusted', 'distant', 'distinct', 'distinctive',\n     'distinguished', 'disturbed', 'disturbing', 'diverse', 'divine', 'dizzy', \n     'domestic', 'dominant', 'double', 'doubtful', 'drab', 'dramatic',\n     'dreadful', 'driving', 'drunk', 'dry', 'dual', 'due', 'dull', 'dusty',\n     'dutch', 'dying', 'dynamic', 'eager', 'early', 'eastern', 'easy', \n     'economic', 'educational', 'eerie', 'effective', 'efficient', \n     'elaborate', 'elated', 'elderly', 'eldest', 'electoral', 'electric',\n     'electrical', 'electronic', 'elegant', 'eligible', 'embarrassed',\n     'embarrassing', 'emotional', 'empirical', 'empty', 'enchanting',\n     'encouraging', 'endless', 'energetic', 'english', 'enormous', \n     'enthusiastic', 'entire', 'entitled', 'envious', 'environmental', 'equal', \n     'equivalent', 'essential', 'established', 'estimated', 'ethical', \n     'ethnic', 'european', 'eventual', 'everyday', 'evident', 'evil', \n     'evolutionary', 'exact', 'excellent', 'exceptional', 'excess', \n     'excessive', 'excited', 'exciting', 'exclusive', 'existing', 'exotic', \n     'expected', 'expensive', 'experienced', 'experimental', 'explicit',\n     'extended', 'extensive', 'external', 'extra', 'extraordinary', 'extreme', \n     'exuberant', 'faint', 'fair', 'faithful', 'familiar', 'famous', 'fancy',\n     'fantastic', 'far', 'fascinating', 'fashionable', 'fast', 'fat', 'fatal', \n     'favourable', 'favourite', 'federal', 'fellow', 'female', 'feminist', \n     'few', 'fierce', 'filthy', 'final', 'financial', 'fine', 'firm', 'fiscal', \n     'fit', 'fixed', 'flaky', 'flat', 'flexible', 'fluffy', 'fluttering', \n     'flying', 'following', 'fond', 'foolish', 'foreign', 'formal', \n     'formidable', 'forthcoming', 'fortunate', 'forward', 'fragile', \n     'frail', 'frantic', 'free', 'french', 'frequent', 'fresh', 'friendly', \n     'frightened', 'front', 'frozen', 'fucking', 'full', 'full-time', 'fun',\n     'functional', 'fundamental', 'funny', 'furious', 'future', 'fuzzy',\n     'gastric', 'gay', 'general', 'generous', 'genetic', 'gentle', 'genuine',\n     'geographical', 'german', 'giant', 'gigantic', 'given', 'glad',\n     'glamorous', 'gleaming', 'global', 'glorious', 'golden', 'good', \n     'gorgeous', 'gothic', 'governing', 'graceful', 'gradual', 'grand', \n     'grateful', 'greasy', 'great', 'greek', 'green', 'grey', 'grieving',\n     'grim', 'gross', 'grotesque', 'growing', 'grubby', 'grumpy', 'guilty',\n     'handicapped', 'handsome', 'happy', 'hard', 'harsh', 'head', 'healthy', \n     'heavy', 'helpful', 'helpless', 'hidden', 'high', 'high-pitched',\n     'hilarious', 'hissing', 'historic', 'historical', 'hollow', 'holy',\n     'homeless', 'homely', 'hon', 'honest', 'horizontal', 'horrible', \n     'hostile', 'hot', 'huge', 'human', 'hungry', 'hurt', 'hushed', 'husky',\n  
   'icy', 'ideal', 'identical', 'ideological', 'ill', 'illegal', \n     'imaginative', 'immediate', 'immense', 'imperial', 'implicit', \n     'important', 'impossible', 'impressed', 'impressive', 'improved', \n     'inadequate', 'inappropriate', 'inc', 'inclined', 'increased', \n     'increasing', 'incredible', 'independent', 'indian', 'indirect', \n     'individual', 'industrial', 'inevitable', 'influential', 'informal',\n     'inherent', 'initial', 'injured', 'inland', 'inner', 'innocent', \n     'innovative', 'inquisitive', 'instant', 'institutional', 'insufficient',\n     'intact', 'integral', 'integrated', 'intellectual', 'intelligent', \n     'intense', 'intensive', 'interested', 'interesting', 'interim', \n     'interior', 'intermediate', 'internal', 'international', 'intimate',\n     'invisible', 'involved', 'iraqi', 'irish', 'irrelevant', 'islamic',\n     'isolated', 'israeli', 'italian', 'itchy', 'japanese', 'jealous', \n     'jewish', 'jittery', 'joint', 'jolly', 'joyous', 'judicial', 'juicy', \n     'junior', 'just', 'keen', 'key', 'kind', 'known', 'korean', 'labour', \n     'large', 'large-scale', 'late', 'latin', 'lazy', 'leading', 'left', \n     'legal', 'legislative', 'legitimate', 'lengthy', 'lesser', 'level', \n     'lexical', 'liable', 'liberal', 'light', 'like', 'likely', 'limited', \n     'linear', 'linguistic', 'liquid', 'literary', 'little', 'live', 'lively', \n     'living', 'local', 'logical', 'lonely', 'long', 'long-term', 'loose', \n     'lost', 'loud', 'lovely', 'low', 'loyal', 'ltd', 'lucky', 'mad',\n     'magenta', 'magic', 'magnetic', 'magnificent', 'main', 'major', 'male',\n     'mammoth', 'managerial', 'managing', 'manual', 'many', 'marginal', \n     'marine', 'marked', 'married', 'marvellous', 'marxist', 'mass', 'massive', \n     'mathematical', 'mature', 'maximum', 'mean', 'meaningful', 'mechanical',\n     'medical', 'medieval', 'melodic', 'melted', 'mental', 'mere', \n     'metropolitan', 'mid', 'middle', 'middle-class', 'mighty', 'mild',\n     'military', 'miniature', 'minimal', 'minimum', 'ministerial', 'minor', \n     'miserable', 'misleading', 'missing', 'misty', 'mixed', 'moaning', \n     'mobile', 'moderate', 'modern', 'modest', 'molecular', 'monetary', \n     'monthly', 'moral', 'motionless', 'muddy', 'multiple', 'mushy', \n     'musical', 'mute', 'mutual', 'mysterious', 'naked', 'narrow', 'nasty',\n     'national', 'native', 'natural', 'naughty', 'naval', 'near', 'nearby', \n     'neat', 'necessary', 'negative', 'neighbouring', 'nervous', 'net', \n     'neutral', 'new', 'nice', 'nineteenth-century', 'noble', 'noisy', \n     'normal', 'northern', 'nosy', 'notable', 'novel', 'nuclear', 'numerous',\n     'nursing', 'nutritious', 'nutty', 'obedient', 'objective', 'obliged', \n     'obnoxious', 'obvious', 'occasional', 'occupational', 'odd', 'official',\n     'ok', 'okay', 'old', 'old-fashioned', 'olympic', 'only', 'open', \n     'operational', 'opposite', 'optimistic', 'oral', 'orange', 'ordinary', \n     'organic', 'organisational', 'original', 'orthodox', 'other', 'outdoor', \n     'outer', 'outrageous', 'outside', 'outstanding', 'overall', 'overseas',\n     'overwhelming', 'painful', 'pale', 'palestinian', 'panicky', 'parallel', \n     'parental', 'parliamentary', 'part-time', 'partial', 'particular', \n     'passing', 'passive', 'past', 'patient', 'payable', 'peaceful', \n     'peculiar', 'perfect', 'permanent', 'persistent', 'personal', 'petite',\n     'philosophical', 'physical', 'pink', 'plain', 'planned', 'plastic',\n     'pleasant', 
'pleased', 'poised', 'polish', 'polite', 'political', 'poor', \n     'popular', 'positive', 'possible', 'post-war', 'potential', 'powerful',\n     'practical', 'precious', 'precise', 'preferred', 'pregnant', \n     'preliminary', 'premier', 'prepared', 'present', 'presidential', \n     'pretty', 'previous', 'prickly', 'primary', 'prime', 'primitive', \n     'principal', 'printed', 'prior', 'private', 'probable', 'productive',\n     'professional', 'profitable', 'profound', 'progressive', 'prominent', \n     'promising', 'proper', 'proposed', 'prospective', 'protective', \n     'protestant', 'proud', 'provincial', 'psychiatric', 'psychological',\n     'public', 'puny', 'pure', 'purple', 'purring', 'puzzled', 'quaint', \n     'qualified', 'quick', 'quickest', 'quiet', 'racial', 'radical', 'rainy',\n     'random', 'rapid', 'rare', 'raspy', 'rational', 'ratty', 'raw', 'ready', \n     'real', 'realistic', 'rear', 'reasonable', 'recent', 'red', 'reduced',\n     'redundant', 'regional', 'registered', 'regular', 'regulatory', 'related', \n     'relative', 'relaxed', 'relevant', 'reliable', 'relieved', 'religious',\n     'reluctant', 'remaining', 'remarkable', 'remote', 'renewed',\n     'representative', 'repulsive', 'required', 'resident', 'residential',\n     'resonant', 'respectable', 'respective', 'responsible', 'resulting',\n     'retail', 'retired', 'revolutionary', 'rich', 'ridiculous', 'right',\n     'rigid', 'ripe', 'rising', 'rival', 'roasted', 'robust', 'rolling', \n     'roman', 'romantic', 'rotten', 'rough', 'round', 'royal', 'rubber',\n     'rude', 'ruling', 'running', 'rural', 'russian', 'sacred', 'sad', 'safe',\n     'salty', 'satisfactory', 'satisfied', 'scared', 'scary', 'scattered',\n     'scientific', 'scornful', 'scottish', 'scrawny', 'screeching', \n     'secondary', 'secret', 'secure', 'select', 'selected', 'selective', \n     'selfish', 'semantic', 'senior', 'sensible', 'sensitive', 'separate',\n     'serious', 'severe', 'sexual', 'shaggy', 'shaky', 'shallow', 'shared', \n     'sharp', 'sheer', 'shiny', 'shivering', 'shocked', 'short', 'short-term', \n     'shrill', 'shy', 'sick', 'significant', 'silent', 'silky', 'silly', \n     'similar', 'simple', 'single', 'skilled', 'skinny', 'sleepy', 'slight',\n     'slim', 'slimy', 'slippery', 'slow', 'small', 'smart', 'smiling', \n     'smoggy', 'smooth', 'so-called', 'social', 'socialist', 'soft', 'solar',\n     'sole', 'solid', 'sophisticated', 'sore', 'sorry', 'sound', 'sour', \n     'southern', 'soviet', 'spanish', 'spare', 'sparkling', 'spatial', \n     'special', 'specific', 'specified', 'spectacular', 'spicy', 'spiritual',\n     'splendid', 'spontaneous', 'sporting', 'spotless', 'spotty', 'square', \n     'squealing', 'stable', 'stale', 'standard', 'static', 'statistical', \n     'statutory', 'steady', 'steep', 'sticky', 'stiff', 'still', 'stingy',\n     'stormy', 'straight', 'straightforward', 'strange', 'strategic',\n     'strict', 'striking', 'striped', 'strong', 'structural', 'stuck', \n     'stupid', 'subjective', 'subsequent', 'substantial', 'subtle', \n     'successful', 'successive', 'sudden', 'sufficient', 'suitable',\n     'sunny', 'super', 'superb', 'superior', 'supporting', 'supposed',\n     'supreme', 'sure', 'surprised', 'surprising', 'surrounding', \n     'surviving', 'suspicious', 'sweet', 'swift', 'swiss', 'symbolic',\n     'sympathetic', 'systematic', 'tall', 'tame', 'tan', 'tart',\n     'tasteless', 'tasty', 'technical', 'technological', 'teenage', \n     'temporary', 'tender', 'tense', 'terrible', 
'territorial', 'testy',\n     'then', 'theoretical', 'thick', 'thin', 'thirsty', 'thorough', \n     'thoughtful', 'thoughtless', 'thundering', 'tight', 'tiny', 'tired',\n     'top', 'tory', 'total', 'tough', 'toxic', 'traditional', 'tragic', \n     'tremendous', 'tricky', 'tropical', 'troubled', 'turkish', 'typical', \n     'ugliest', 'ugly', 'ultimate', 'unable', 'unacceptable', 'unaware', \n     'uncertain', 'unchanged', 'uncomfortable', 'unconscious', 'underground',\n     'underlying', 'unemployed', 'uneven', 'unexpected', 'unfair', \n     'unfortunate', 'unhappy', 'uniform', 'uninterested', 'unique', 'united',\n     'universal', 'unknown', 'unlikely', 'unnecessary', 'unpleasant', \n     'unsightly', 'unusual', 'unwilling', 'upper', 'upset', 'uptight', \n     'urban', 'urgent', 'used', 'useful', 'useless', 'usual', 'vague', \n     'valid', 'valuable', 'variable', 'varied', 'various', 'varying', 'vast',\n     'verbal', 'vertical', 'very', 'victorian', 'victorious', 'video-taped', \n     'violent', 'visible', 'visiting', 'visual', 'vital', 'vivacious', \n     'vivid', 'vocational', 'voiceless', 'voluntary', 'vulnerable', \n     'wandering', 'warm', 'wasteful', 'watery', 'weak', 'wealthy', 'weary', \n     'wee', 'weekly', 'weird', 'welcome', 'well', 'well-known', 'welsh', \n     'western', 'wet', 'whispering', 'white', 'whole', 'wicked', 'wide',\n     'wide-eyed', 'widespread', 'wild', 'willing', 'wise', 'witty', \n     'wonderful', 'wooden', 'working', 'working-class', 'worldwide',\n     'worried', 'worrying', 'worthwhile', 'worthy', 'written', 'wrong',\n     'yellow', 'young', 'yummy', 'zany', 'zealous']\nb = ['abiding', 'accelerating', 'accepting', 'accomplishing', 'achieving', \n'acquiring', 'acteding', 'activating', 'adapting', 'adding', 'addressing', \n'administering', 'admiring', 'admiting', 'adopting', 'advising', 'affording', \n'agreeing', 'alerting', 'alighting', 'allowing', 'altereding', 'amusing', \n'analyzing', 'announcing', 'annoying', 'answering', 'anticipating', \n'apologizing', 'appearing', 'applauding', 'applieding', 'appointing',\n 'appraising', 'appreciating', 'approving', 'arbitrating', 'arguing', \n 'arising', 'arranging', 'arresting', 'arriving', 'ascertaining', 'asking', \n 'assembling', 'assessing', 'assisting', 'assuring', 'attaching', 'attacking', \n 'attaining', 'attempting', 'attending', 'attracting', 'auditeding', 'avoiding',\n 'awaking', 'backing', 'baking', 'balancing', 'baning', 'banging', 'baring', \n 'bating', 'bathing', 'battling', 'bing', 'beaming', 'bearing', 'beating', \n 'becoming', 'beging', 'begining', 'behaving', 'beholding', 'belonging', \n 'bending', 'beseting', 'beting', 'biding', 'binding', 'biting', 'bleaching',\n 'bleeding', 'blessing', 'blinding', 'blinking', 'bloting', 'blowing', \n 'blushing', 'boasting', 'boiling', 'bolting', 'bombing', 'booking', \n 'boring', 'borrowing', 'bouncing', 'bowing', 'boxing', 'braking', \n 'branching', 'breaking', 'breathing', 'breeding', 'briefing', 'bringing',\n 'broadcasting', 'bruising', 'brushing', 'bubbling', 'budgeting', 'building', \n 'bumping', 'burning', 'bursting', 'burying', 'busting', 'buying', 'buzing', \n 'calculating', 'calling', 'camping', 'caring', 'carrying', 'carving', \n 'casting', 'cataloging', 'catching', 'causing', 'challenging', 'changing',\n 'charging', 'charting', 'chasing', 'cheating', 'checking', 'cheering', \n 'chewing', 'choking', 'choosing', 'choping', 'claiming', 'claping', \n 'clarifying', 'classifying', 'cleaning', 'clearing', 'clinging', 'cliping',\n 'closing', 'clothing', 
'coaching', 'coiling', 'collecting', 'coloring', \n 'combing', 'coming', 'commanding', 'communicating', 'comparing', 'competing',\n 'compiling', 'complaining', 'completing', 'composing', 'computing',\n 'conceiving', 'concentrating', 'conceptualizing', 'concerning', 'concluding',\n 'conducting', 'confessing', 'confronting', 'confusing', 'connecting', \n 'conserving', 'considering', 'consisting', 'consolidating', 'constructing',\n 'consulting', 'containing', 'continuing', 'contracting', 'controling', \n 'converting', 'coordinating', 'copying', 'correcting', 'correlating',\n 'costing', 'coughing', 'counseling', 'counting', 'covering', 'cracking',\n 'crashing', 'crawling', 'creating', 'creeping', 'critiquing', 'crossing', \n 'crushing', 'crying', 'curing', 'curling', 'curving', 'cuting', 'cycling',\n 'daming', 'damaging', 'dancing', 'daring', 'dealing', 'decaying', 'deceiving',\n 'deciding', 'decorating', 'defining', 'delaying', 'delegating', 'delighting',\n 'delivering', 'demonstrating', 'depending', 'describing', 'deserting', \n 'deserving', 'designing', 'destroying', 'detailing', 'detecting', \n 'determining', 'developing', 'devising', 'diagnosing', 'diging', \n 'directing', 'disagreing', 'disappearing', 'disapproving', 'disarming', \n 'discovering', 'disliking', 'dispensing', 'displaying', 'disproving',\n 'dissecting', 'distributing', 'diving', 'diverting', 'dividing', 'doing',\n 'doubling', 'doubting', 'drafting', 'draging', 'draining', 'dramatizing', \n 'drawing', 'dreaming', 'dressing', 'drinking', 'driping', 'driving', \n 'dropping', 'drowning', 'druming', 'drying', 'dusting', 'dwelling',\n 'earning', 'eating', 'editeding', 'educating', 'eliminating',\n 'embarrassing', 'employing', 'emptying', 'enacteding', 'encouraging',\n 'ending', 'enduring', 'enforcing', 'engineering', 'enhancing',\n 'enjoying', 'enlisting', 'ensuring', 'entering', 'entertaining',\n 'escaping', 'establishing', 'estimating', 'evaluating', 'examining',\n 'exceeding', 'exciting', 'excusing', 'executing', 'exercising', 'exhibiting',\n 'existing', 'expanding', 'expecting', 'expediting', 'experimenting', \n 'explaining', 'exploding', 'expressing', 'extending', 'extracting', \n 'facing', 'facilitating', 'fading', 'failing', 'fancying', 'fastening', \n 'faxing', 'fearing', 'feeding', 'feeling', 'fencing', 'fetching', 'fighting', \n 'filing', 'filling', 'filming', 'finalizing', 'financing', 'finding',\n 'firing', 'fiting', 'fixing', 'flaping', 'flashing', 'fleing', 'flinging',\n 'floating', 'flooding', 'flowing', 'flowering', 'flying', 'folding', \n 'following', 'fooling', 'forbiding', 'forcing', 'forecasting', 'foregoing', \n 'foreseing', 'foretelling', 'forgeting', 'forgiving', 'forming', \n 'formulating', 'forsaking', 'framing', 'freezing', 'frightening', 'frying',\n 'gathering', 'gazing', 'generating', 'geting', 'giving', 'glowing', 'gluing', \n 'going', 'governing', 'grabing', 'graduating', 'grating', 'greasing', 'greeting',\n 'grinning', 'grinding', 'griping', 'groaning', 'growing', 'guaranteeing',\n 'guarding', 'guessing', 'guiding', 'hammering', 'handing', 'handling', \n 'handwriting', 'hanging', 'happening', 'harassing', 'harming', 'hating',\n 'haunting', 'heading', 'healing', 'heaping', 'hearing', 'heating', 'helping', \n 'hiding', 'hitting', 'holding', 'hooking', 'hoping', 'hopping', 'hovering',\n 'hugging', 'hmuming', 'hunting', 'hurrying', 'hurting', 'hypothesizing', \n 'identifying', 'ignoring', 'illustrating', 'imagining', 'implementing', \n 'impressing', 'improving', 'improvising', 'including', 'increasing', 
\n 'inducing', 'influencing', 'informing', 'initiating', 'injecting', \n 'injuring', 'inlaying', 'innovating', 'inputing', 'inspecting', \n 'inspiring', 'installing', 'instituting', 'instructing', 'insuring', \n 'integrating', 'intending', 'intensifying', 'interesting', \n 'interfering', 'interlaying', 'interpreting', 'interrupting', \n 'interviewing', 'introducing', 'inventing', 'inventorying', \n 'investigating', 'inviting', 'irritating', 'itching', 'jailing', \n 'jamming', 'jogging', 'joining', 'joking', 'judging', 'juggling', 'jumping',\n 'justifying', 'keeping', 'kepting', 'kicking', 'killing', 'kissing', 'kneeling',\n 'kniting', 'knocking', 'knotting', 'knowing', 'labeling', 'landing', 'lasting',\n 'laughing', 'launching', 'laying', 'leading', 'leaning', 'leaping', 'learning', \n 'leaving', 'lecturing', 'leding', 'lending', 'leting', 'leveling', \n 'licensing', 'licking', 'lying', 'lifteding', 'lighting', 'lightening',\n 'liking', 'listing', 'listening', 'living', 'loading', 'locating', \n 'locking', 'loging', 'longing', 'looking', 'losing', 'loving', \n 'maintaining', 'making', 'maning', 'managing', 'manipulating', \n 'manufacturing', 'mapping', 'marching', 'marking', 'marketing',\n 'marrying', 'matching', 'mating', 'mattering', 'meaning', 'measuring',\n 'meddling', 'mediating', 'meeting', 'melting', 'melting', 'memorizing',\n 'mending', 'mentoring', 'milking', 'mining', 'misleading', 'missing',\n 'misspelling', 'mistaking', 'misunderstanding', 'mixing', 'moaning', \n 'modeling', 'modifying', 'monitoring', 'mooring', 'motivating',\n 'mourning', 'moving', 'mowing', 'muddling', 'muging', 'multiplying', \n 'murdering', 'nailing', 'naming', 'navigating', 'needing', 'negotiating', \n 'nesting', 'noding', 'nominating', 'normalizing', 'noting', 'noticing', \n 'numbering', 'obeying', 'objecting', 'observing', 'obtaining', 'occuring', \n 'offending', 'offering', 'officiating', 'opening', 'operating', 'ordering', \n 'organizing', 'orienteding', 'originating', 'overcoming', 'overdoing', \n 'overdrawing', 'overflowing', 'overhearing', 'overtaking', 'overthrowing',\n 'owing', 'owning', 'packing', 'paddling', 'painting', 'parking', 'parting', \n 'participating', 'passing', 'pasting', 'pating', 'pausing', 'paying',\n 'pecking', 'pedaling', 'peeling', 'peeping', 'perceiving', 'perfecting', \n 'performing', 'permiting', 'persuading', 'phoning', 'photographing',\n 'picking', 'piloting', 'pinching', 'pining', 'pinpointing', 'pioneering',\n 'placing', 'planing', 'planting', 'playing', 'pleading', 'pleasing',\n 'plugging', 'pointing', 'poking', 'polishing', 'poping', 'possessing',\n 'posting', 'pouring', 'practicing', 'praiseding', 'praying', 'preaching', \n 'preceding', 'predicting', 'prefering', 'preparing', 'prescribing', \n 'presenting', 'preserving', 'preseting', 'presiding', 'pressing', \n 'pretending', 'preventing', 'pricking', 'printing', 'processing', \n 'procuring', 'producing', 'professing', 'programing', 'progressing', \n 'projecting', 'promising', 'promoting', 'proofreading', 'proposing', \n 'protecting', 'proving', 'providing', 'publicizing', 'pulling', 'pumping',\n 'punching', 'puncturing', 'punishing', 'purchasing', 'pushing', 'puting',\n 'qualifying', 'questioning', 'queuing', 'quiting', 'racing', 'radiating',\n 'raining', 'raising', 'ranking', 'rating', 'reaching', 'reading', \n 'realigning', 'realizing', 'reasoning', 'receiving', 'recognizing', \n 'recommending', 'reconciling', 'recording', 'recruiting', 'reducing', \n 'referring', 'reflecting', 'refusing', 'regreting', 'regulating', 
\n 'rehabilitating', 'reigning', 'reinforcing', 'rejecting', 'rejoicing',\n 'relating', 'relaxing', 'releasing', 'relying', 'remaining', 'remembering',\n 'reminding', 'removing', 'rendering', 'reorganizing', 'repairing',\n 'repeating', 'replacing', 'replying', 'reporting', 'representing',\n 'reproducing', 'requesting', 'rescuing', 'researching', 'resolving', \n 'responding', 'restoreding', 'restructuring', 'retiring', 'retrieving',\n 'returning', 'reviewing', 'revising', 'rhyming', 'riding', 'riding', \n 'ringing', 'rinsing', 'rising', 'risking', 'robing', 'rocking', 'rolling',\n 'roting', 'rubing', 'ruining', 'ruling', 'runing', 'rushing', 'sacking',\n 'sailing', 'satisfying', 'saving', 'sawing', 'saying', 'scaring', \n 'scattering', 'scheduling', 'scolding', 'scorching', 'scraping', \n 'scratching', 'screaming', 'screwing', 'scribbling', 'scrubing', \n 'sealing', 'searching', 'securing', 'seing', 'seeking', 'selecting', \n 'selling', 'sending', 'sensing', 'separating', 'serving', 'servicing', \n 'seting', 'settling', 'sewing', 'shading', 'shaking', 'shaping', \n 'sharing', 'shaving', 'shearing', 'sheding', 'sheltering', 'shining', \n 'shivering', 'shocking', 'shoing', 'shooting', 'shoping', 'showing', \n 'shrinking', 'shruging', 'shuting', 'sighing', 'signing', 'signaling',\n 'simplifying', 'sining', 'singing', 'sinking', 'siping', 'siting',\n 'sketching', 'skiing', 'skiping', 'slaping', 'slaying', 'sleeping',\n 'sliding', 'slinging', 'slinking', 'sliping', 'sliting', 'slowing',\n 'smashing', 'smelling', 'smiling', 'smiting', 'smoking', 'snatching',\n 'sneaking', 'sneezing', 'sniffing', 'snoring', 'snowing', 'soaking', \n 'solving', 'soothing', 'soothsaying', 'sorting', 'sounding', 'sowing', \n 'sparing', 'sparking', 'sparkling', 'speaking', 'specifying', 'speeding',\n 'spelling', 'spending', 'spilling', 'spining', 'spiting', 'spliting',\n 'spoiling', 'spoting', 'spraying', 'spreading', 'springing', 'sprouting', \n 'squashing', 'squeaking', 'squealing', 'squeezing', 'staining', 'stamping',\n 'standing', 'staring', 'starting', 'staying', 'stealing', 'steering', \n 'stepping', 'sticking', 'stimulating', 'stinging', 'stinking', 'stirring', \n 'stitching', 'stoping', 'storing', 'straping', 'streamlining', \n 'strengthening', 'stretching', 'striding', 'striking', 'stringing', \n 'stripping', 'striving', 'stroking', 'structuring', 'studying', \n 'stuffing', 'subleting', 'subtracting', 'succeeding', 'sucking', \n 'suffering', 'suggesting', 'suiting', 'summarizing', 'supervising',\n 'supplying', 'supporting', 'supposing', 'surprising', 'surrounding', \n 'suspecting', 'suspending', 'swearing', 'sweating', 'sweeping', 'swelling', \n 'swimming', 'swinging', 'switching', 'symbolizing', 'synthesizing',\n 'systemizing', 'tabulating', 'taking', 'talking', 'taming', 'taping', \n 'targeting', 'tasting', 'teaching', 'tearing', 'teasing', 'telephoning', \n 'telling', 'tempting', 'terrifying', 'testing', 'thanking', 'thawing', \n 'thinking', 'thriving', 'throwing', 'thrusting', 'ticking', 'tickling', \n 'tying', 'timing', 'tiping', 'tiring', 'touching', 'touring', 'towing',\n 'tracing', 'trading', 'training', 'transcribing', 'transfering',\n 'transforming', 'translating', 'transporting', 'traping', 'traveling',\n 'treading', 'treating', 'trembling', 'tricking', 'triping', 'troting', \n 'troubling', 'troubleshooting', 'trusting', 'trying', 'tuging', 'tumbling',\n 'turning', 'tutoring', 'twisting', 'typing', 'undergoing', 'understanding',\n 'undertaking', 'undressing', 'unfastening', 'unifying', 'uniting', \n 
'unlocking', 'unpacking', 'untidying', 'updating', 'upgrading', \n 'upholding', 'upseting', 'using', 'utilizing', 'vanishing', 'verbalizing',\n 'verifying', 'vexing', 'visiting', 'wailing', 'waiting', 'waking', \n 'walking', 'wandering', 'wanting', 'warming', 'warning', 'washing', \n 'wasting', 'watching', 'watering', 'waving', 'wearing', 'weaving', \n 'wedding', 'weeping', 'weighing', 'welcoming', 'wending', 'weting', \n 'whining', 'whiping', 'whirling', 'whispering', 'whistling', 'wining', \n 'winding', 'winking', 'wiping', 'wishing', 'withdrawing', 'withholding',\n 'withstanding', 'wobbling', 'wondering', 'working', 'worrying', 'wrapping', \n 'wrecking', 'wrestling', 'wriggling', 'wringing', 'writing', 'x-raying',\n 'yawning', 'yelling', 'zipping', 'zooming']"
  },
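The three lists in animal_hash.py (c: animals, a: adjectives, b: gerunds) are word pools for turning long experiment configurations into short, memorable run names; the odd spellings in the lists are left as-is, since the words serve only as name material. A minimal sketch of how such pools can be combined (the repo's actual naming helper lives elsewhere, e.g. in utils.py, and may differ in detail):

```python
import animal_hash

def hashname(name):
  # Only deterministic across runs if PYTHONHASHSEED is fixed,
  # since Python salts str hashing per process.
  h = abs(hash(name))
  adj = animal_hash.a[h % len(animal_hash.a)]
  h //= len(animal_hash.a)
  verb = animal_hash.b[h % len(animal_hash.b)]
  h //= len(animal_hash.b)
  beast = animal_hash.c[h % len(animal_hash.c)]
  return adj + verb + beast

print(hashname('BigGAN_I128_hdf5_seed0'))  # e.g. 'cuddlywhisperingCapybara'
```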
  {
    "path": "FQ-BigGAN/calculate_inception_moments.py",
    "content": "''' Calculate Inception Moments\n This script iterates over the dataset and calculates the moments of the \n activations of the Inception net (needed for FID), and also returns\n the Inception Score of the training data.\n \n Note that if you don't shuffle the data, the IS of true data will be under-\n estimated as it is label-ordered. By default, the data is not shuffled\n so as to reduce non-determinism. '''\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport utils\nimport inception_utils\nfrom tqdm import tqdm, trange\nfrom argparse import ArgumentParser\n\ndef prepare_parser():\n  usage = 'Calculate and store inception metrics.'\n  parser = ArgumentParser(description=usage)\n  parser.add_argument(\n    '--dataset', type=str, default='I128_hdf5',\n    help='Which Dataset to train on, out of I128, I256, C10, C100...'\n         'Append _hdf5 to use the hdf5 version of the dataset. (default: %(default)s)')\n  parser.add_argument(\n    '--data_root', type=str, default='data',\n    help='Default location where data is stored (default: %(default)s)') \n  parser.add_argument(\n    '--batch_size', type=int, default=64,\n    help='Default overall batchsize (default: %(default)s)')\n  parser.add_argument(\n    '--parallel', action='store_true', default=False,\n    help='Train with multiple GPUs (default: %(default)s)')\n  parser.add_argument(\n    '--augment', action='store_true', default=False,\n    help='Augment with random crops and flips (default: %(default)s)')\n  parser.add_argument(\n    '--num_workers', type=int, default=8,\n    help='Number of dataloader workers (default: %(default)s)')\n  parser.add_argument(\n    '--shuffle', action='store_true', default=False,\n    help='Shuffle the data? (default: %(default)s)') \n  parser.add_argument(\n    '--seed', type=int, default=0,\n    help='Random seed to use.')\n  return parser\n\ndef run(config):\n  # Get loader\n  config['drop_last'] = False\n  loaders = utils.get_data_loaders(**config)\n\n  # Load inception net\n  net = inception_utils.load_inception_net(parallel=config['parallel'])\n  pool, logits, labels = [], [], []\n  device = 'cuda'\n  for i, (x, y) in enumerate(tqdm(loaders[0])):\n    x = x.to(device)\n    with torch.no_grad():\n      pool_val, logits_val = net(x)\n      pool += [np.asarray(pool_val.cpu())]\n      logits += [np.asarray(F.softmax(logits_val, 1).cpu())]\n      labels += [np.asarray(y.cpu())]\n\n  pool, logits, labels = [np.concatenate(item, 0) for item in [pool, logits, labels]]\n  # uncomment to save pool, logits, and labels to disk\n  # print('Saving pool, logits, and labels to disk...')\n  # np.savez(config['dataset']+'_inception_activations.npz',\n  #           {'pool': pool, 'logits': logits, 'labels': labels})\n  # Calculate inception metrics and report them\n  print('Calculating inception metrics...')\n  IS_mean, IS_std = inception_utils.calculate_inception_score(logits)\n  print('Training data from dataset %s has IS of %5.5f +/- %5.5f' % (config['dataset'], IS_mean, IS_std))\n  # Prepare mu and sigma, save to disk. 
Remove \"hdf5\" by default \n  # (the FID code also knows to strip \"hdf5\")\n  print('Calculating means and covariances...')\n  mu, sigma = np.mean(pool, axis=0), np.cov(pool, rowvar=False)\n  print('Saving calculated means and covariances to disk...')\n  np.savez(config['dataset'].strip('_hdf5')+'_inception_moments.npz', **{'mu' : mu, 'sigma' : sigma})\n\ndef main():\n  # parse command line    \n  parser = prepare_parser()\n  config = vars(parser.parse_args())\n  print(config)\n  run(config)\n\n\nif __name__ == '__main__':    \n    main()"
  },
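The `mu`/`sigma` saved by calculate_inception_moments.py are the reference Gaussian moments that FID later compares generated samples against: for Gaussians (m1, S1) and (m2, S2), the Fréchet distance is ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)). A minimal NumPy/SciPy sketch of that formula (inception_utils has its own implementation; this is only for reference):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
  diff = mu1 - mu2
  covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
  if np.iscomplexobj(covmean):
    covmean = covmean.real  # discard numerical-noise imaginary parts
  return diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)

# The reference moments are loaded back from the npz written above:
ref = np.load('I128_inception_moments.npz')  # keys 'mu' and 'sigma'
```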
  {
    "path": "FQ-BigGAN/datasets.py",
    "content": "''' Datasets\n    This file contains definitions for our CIFAR, ImageFolder, and HDF5 datasets\n'''\nimport os\nimport os.path\nimport sys\nfrom PIL import Image\nimport numpy as np\nfrom tqdm import tqdm, trange\n\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nfrom torchvision.datasets.utils import download_url, check_integrity\nimport torch.utils.data as data\nfrom torch.utils.data import DataLoader\n         \nIMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']\n\n\ndef is_image_file(filename):\n    \"\"\"Checks if a file is an image.\n\n    Args:\n        filename (string): path to a file\n\n    Returns:\n        bool: True if the filename ends with a known image extension\n    \"\"\"\n    filename_lower = filename.lower()\n    return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS)\n\n\ndef find_classes(dir):\n    classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]\n    classes.sort()\n    class_to_idx = {classes[i]: i for i in range(len(classes))}\n    return classes, class_to_idx\n\n\ndef make_dataset(dir, class_to_idx):\n  images = []\n  dir = os.path.expanduser(dir)\n  for target in tqdm(sorted(os.listdir(dir))):\n    d = os.path.join(dir, target)\n    if not os.path.isdir(d):\n      continue\n\n    for root, _, fnames in sorted(os.walk(d)):\n      for fname in sorted(fnames):\n        if is_image_file(fname):\n          path = os.path.join(root, fname)\n          item = (path, class_to_idx[target])\n          images.append(item)\n\n  return images\n\n\ndef pil_loader(path):\n    # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)\n  with open(path, 'rb') as f:\n    img = Image.open(f)\n    return img.convert('RGB')\n\n\ndef accimage_loader(path):\n  import accimage\n  try:\n    return accimage.Image(path)\n  except IOError:\n    # Potentially a decoding problem, fall back to PIL.Image\n    return pil_loader(path)\n\n\ndef default_loader(path):\n  from torchvision import get_image_backend\n  if get_image_backend() == 'accimage':\n    return accimage_loader(path)\n  else:\n    return pil_loader(path)\n\n\nclass ImageFolder(data.Dataset):\n  \"\"\"A generic data loader where the images are arranged in this way: ::\n\n      root/dogball/xxx.png\n      root/dogball/xxy.png\n      root/dogball/xxz.png\n\n      root/cat/123.png\n      root/cat/nsdf3.png\n      root/cat/asd932_.png\n\n  Args:\n      root (string): Root directory path.\n      transform (callable, optional): A function/transform that  takes in an PIL image\n          and returns a transformed version. E.g, ``transforms.RandomCrop``\n      target_transform (callable, optional): A function/transform that takes in the\n          target and transforms it.\n      loader (callable, optional): A function to load an image given its path.\n\n   Attributes:\n      classes (list): List of the class names.\n      class_to_idx (dict): Dict with items (class_name, class_index).\n      imgs (list): List of (image path, class_index) tuples\n  \"\"\"\n\n  def __init__(self, root, transform=None, target_transform=None,\n               loader=default_loader, load_in_mem=False, \n               index_filename='imagenet_imgs.npz', **kwargs):\n    classes, class_to_idx = find_classes(root)\n    # Load pre-computed image directory walk\n    if os.path.exists(index_filename):\n      print('Loading pre-saved Index file %s...' 
class ImageFolder(data.Dataset):\n  \"\"\"A generic data loader where the images are arranged in this way: ::\n\n      root/dogball/xxx.png\n      root/dogball/xxy.png\n      root/dogball/xxz.png\n\n      root/cat/123.png\n      root/cat/nsdf3.png\n      root/cat/asd932_.png\n\n  Args:\n      root (string): Root directory path.\n      transform (callable, optional): A function/transform that takes in a PIL image\n          and returns a transformed version. E.g., ``transforms.RandomCrop``\n      target_transform (callable, optional): A function/transform that takes in the\n          target and transforms it.\n      loader (callable, optional): A function to load an image given its path.\n\n  Attributes:\n      classes (list): List of the class names.\n      class_to_idx (dict): Dict with items (class_name, class_index).\n      imgs (list): List of (image path, class_index) tuples\n  \"\"\"\n\n  def __init__(self, root, transform=None, target_transform=None,\n               loader=default_loader, load_in_mem=False,\n               index_filename='imagenet_imgs.npz', **kwargs):\n    classes, class_to_idx = find_classes(root)\n    # Load pre-computed image directory walk\n    if os.path.exists(index_filename):\n      print('Loading pre-saved Index file %s...' % index_filename)\n      imgs = np.load(index_filename)['imgs']\n    # If first time, walk the folder directory and save the\n    # results to a pre-computed file.\n    else:\n      print('Generating Index file %s...' % index_filename)\n      imgs = make_dataset(root, class_to_idx)\n      np.savez_compressed(index_filename, **{'imgs' : imgs})\n    if len(imgs) == 0:\n      raise(RuntimeError(\"Found 0 images in subfolders of: \" + root + \"\\n\"\n                         \"Supported image extensions are: \" + \",\".join(IMG_EXTENSIONS)))\n\n    self.root = root\n    self.imgs = imgs\n    self.classes = classes\n    self.class_to_idx = class_to_idx\n    self.transform = transform\n    self.target_transform = target_transform\n    self.loader = loader\n    self.load_in_mem = load_in_mem\n\n    if self.load_in_mem:\n      print('Loading all images into memory...')\n      self.data, self.labels = [], []\n      for index in tqdm(range(len(self.imgs))):\n        path, target = imgs[index][0], imgs[index][1]\n        self.data.append(self.transform(self.loader(path)))\n        self.labels.append(target)\n\n  def __getitem__(self, index):\n    \"\"\"\n    Args:\n        index (int): Index\n\n    Returns:\n        tuple: (image, target) where target is class_index of the target class.\n    \"\"\"\n    if self.load_in_mem:\n      img = self.data[index]\n      target = self.labels[index]\n    else:\n      path, target = self.imgs[index]\n      img = self.loader(str(path))\n      if self.transform is not None:\n        img = self.transform(img)\n\n    if self.target_transform is not None:\n      target = self.target_transform(target)\n\n    # print(img.size(), target)\n    return img, int(target)\n\n  def __len__(self):\n    return len(self.imgs)\n\n  def __repr__(self):\n    fmt_str = 'Dataset ' + self.__class__.__name__ + '\\n'\n    fmt_str += '    Number of datapoints: {}\\n'.format(self.__len__())\n    fmt_str += '    Root Location: {}\\n'.format(self.root)\n    tmp = '    Transforms (if any): '\n    fmt_str += '{0}{1}\\n'.format(tmp, self.transform.__repr__().replace('\\n', '\\n' + ' ' * len(tmp)))\n    tmp = '    Target Transforms (if any): '\n    fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\\n', '\\n' + ' ' * len(tmp)))\n    return fmt_str\n
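\n\n# Illustrative usage (not part of the original file): build the walk index once,\n# then train from it; 'data/imagenet' and the transform are hypothetical.\n#   ds = ImageFolder('data/imagenet', transform=transforms.ToTensor())\n#   loader = DataLoader(ds, batch_size=64, shuffle=True)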
\n\n''' ILSVRC_HDF5: A dataset to support I/O from an HDF5 to avoid\n    having to load individual images all the time. '''\nimport h5py as h5\nimport torch\nclass ILSVRC_HDF5(data.Dataset):\n  def __init__(self, root, transform=None, target_transform=None,\n               load_in_mem=False, train=True, download=False, validate_seed=0,\n               val_split=0, **kwargs): # last four are dummies\n\n    self.root = root\n    self.num_imgs = len(h5.File(root, 'r')['labels'])\n\n    # self.transform = transform\n    self.target_transform = target_transform\n\n    # Set the transform here\n    self.transform = transform\n\n    # load the entire dataset into memory?\n    self.load_in_mem = load_in_mem\n\n    # If loading into memory, do so now\n    if self.load_in_mem:\n      print('Loading %s into memory...' % root)\n      with h5.File(root, 'r') as f:\n        self.data = f['imgs'][:]\n        self.labels = f['labels'][:]\n\n  def __getitem__(self, index):\n    \"\"\"\n    Args:\n        index (int): Index\n\n    Returns:\n        tuple: (image, target) where target is class_index of the target class.\n    \"\"\"\n    # If loaded the entire dataset in RAM, get image from memory\n    if self.load_in_mem:\n      img = self.data[index]\n      target = self.labels[index]\n\n    # Else load it from disk\n    else:\n      with h5.File(self.root, 'r') as f:\n        img = f['imgs'][index]\n        target = f['labels'][index]\n\n    # if self.transform is not None:\n    #   img = self.transform(img)\n    # Apply my own transform\n    img = ((torch.from_numpy(img).float() / 255) - 0.5) * 2\n\n    if self.target_transform is not None:\n      target = self.target_transform(target)\n\n    return img, int(target)\n\n  def __len__(self):\n    return self.num_imgs\n    # return len(self.f['imgs'])\n\nimport pickle\nclass CIFAR10(dset.CIFAR10):\n\n  def __init__(self, root, train=True,\n           transform=None, target_transform=None,\n           download=True, validate_seed=0,\n           val_split=0, load_in_mem=True, **kwargs):\n    self.root = os.path.expanduser(root)\n    self.transform = transform\n    self.target_transform = target_transform\n    self.train = train  # training set or test set\n    self.val_split = val_split\n\n    if download:\n      self.download()\n\n    if not self._check_integrity():\n      raise RuntimeError('Dataset not found or corrupted.' +\n                         ' You can use download=True to download it')\n\n    # now load the pickled numpy arrays\n    self.data = []\n    self.labels = []\n    for fentry in self.train_list:\n      f = fentry[0]\n      file = os.path.join(self.root, self.base_folder, f)\n      fo = open(file, 'rb')\n      if sys.version_info[0] == 2:\n        entry = pickle.load(fo)\n      else:\n        entry = pickle.load(fo, encoding='latin1')\n      self.data.append(entry['data'])\n      if 'labels' in entry:\n        self.labels += entry['labels']\n      else:\n        self.labels += entry['fine_labels']\n      fo.close()\n\n    self.data = np.concatenate(self.data)\n    # Randomly select indices for validation\n    if self.val_split > 0:\n      label_indices = [[] for _ in range(max(self.labels)+1)]\n      for i, l in enumerate(self.labels):\n        label_indices[l] += [i]\n      label_indices = np.asarray(label_indices)\n\n      # randomly grab val_split's worth of elements of each class\n      np.random.seed(validate_seed)\n      self.val_indices = []\n      for l_i in label_indices:\n        self.val_indices += list(l_i[np.random.choice(len(l_i), int(len(self.data) * val_split) // (max(self.labels) + 1), replace=False)])\n\n    if self.train == 'validate':\n      self.data = self.data[self.val_indices]\n      self.labels = list(np.asarray(self.labels)[self.val_indices])\n\n      self.data = self.data.reshape((int(50e3 * self.val_split), 3, 32, 32))\n      self.data = self.data.transpose((0, 2, 3, 1))  # convert to HWC\n\n    elif self.train:\n      print(np.shape(self.data))\n      if self.val_split > 0:\n        self.data = np.delete(self.data, self.val_indices, axis=0)\n        self.labels = list(np.delete(np.asarray(self.labels), self.val_indices, axis=0))\n\n      self.data = self.data.reshape((int(50e3 * (1.-self.val_split)), 3, 32, 32))\n      self.data = self.data.transpose((0,
2, 3, 1))  # convert to HWC\n    else:\n      f = self.test_list[0][0]\n      file = os.path.join(self.root, self.base_folder, f)\n      fo = open(file, 'rb')\n      if sys.version_info[0] == 2:\n        entry = pickle.load(fo)\n      else:\n        entry = pickle.load(fo, encoding='latin1')\n      self.data = entry['data']\n      if 'labels' in entry:\n        self.labels = entry['labels']\n      else:\n        self.labels = entry['fine_labels']\n      fo.close()\n      self.data = self.data.reshape((10000, 3, 32, 32))\n      self.data = self.data.transpose((0, 2, 3, 1))  # convert to HWC\n      \n  def __getitem__(self, index):\n    \"\"\"\n    Args:\n        index (int): Index\n    Returns:\n        tuple: (image, target) where target is index of the target class.\n    \"\"\"\n    img, target = self.data[index], self.labels[index]\n\n    # doing this so that it is consistent with all other datasets\n    # to return a PIL Image\n    img = Image.fromarray(img)\n\n    if self.transform is not None:\n      img = self.transform(img)\n\n    if self.target_transform is not None:\n      target = self.target_transform(target)\n\n    return img, target\n      \n  def __len__(self):\n      return len(self.data)\n\n\nclass CIFAR100(CIFAR10):\n    base_folder = 'cifar-100-python'\n    url = \"http://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz\"\n    filename = \"cifar-100-python.tar.gz\"\n    tgz_md5 = 'eb9058c3a382ffc7106e4002c42a8d85'\n    train_list = [\n        ['train', '16019d7e3df5f24257cddd939b257f8d'],\n    ]\n\n    test_list = [\n        ['test', 'f0ef6b0ae62326f3e7ffdfab6717acfc'],\n    ]\n"
  },
  {
    "path": "FQ-BigGAN/inception_tf13.py",
    "content": "''' Tensorflow inception score code\nDerived from https://github.com/openai/improved-gan\nCode derived from tensorflow/tensorflow/models/image/imagenet/classify_image.py\nTHIS CODE REQUIRES TENSORFLOW 1.3 or EARLIER to run in PARALLEL BATCH MODE \n\nTo use this code, run sample.py on your model with --sample_npz, and then \npass the experiment name in the --experiment_name.\nThis code also saves pool3 stats to an npz file for FID calculation\n'''\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport tarfile\nimport math\nfrom tqdm import tqdm, trange\nfrom argparse import ArgumentParser\n\nimport numpy as np\nfrom six.moves import urllib\nimport tensorflow as tf\n\nMODEL_DIR = ''\nDATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'\nsoftmax = None\n\ndef prepare_parser():\n  usage = 'Parser for TF1.3- Inception Score scripts.'\n  parser = ArgumentParser(description=usage)\n  parser.add_argument(\n    '--experiment_name', type=str, default='',\n    help='Which experiment''s samples.npz file to pull and evaluate')\n  parser.add_argument(\n    '--experiment_root', type=str, default='samples',\n    help='Default location where samples are stored (default: %(default)s)')\n  parser.add_argument(\n    '--batch_size', type=int, default=500,\n    help='Default overall batchsize (default: %(default)s)')\n  return parser\n\n\ndef run(config):\n  # Inception with TF1.3 or earlier.\n  # Call this function with list of images. Each of elements should be a \n  # numpy array with values ranging from 0 to 255.\n  def get_inception_score(images, splits=10):\n    assert(type(images) == list)\n    assert(type(images[0]) == np.ndarray)\n    assert(len(images[0].shape) == 3)\n    assert(np.max(images[0]) > 10)\n    assert(np.min(images[0]) >= 0.0)\n    inps = []\n    for img in images:\n      img = img.astype(np.float32)\n      inps.append(np.expand_dims(img, 0))\n    bs = config['batch_size']\n    with tf.Session() as sess:\n      preds, pools = [], []\n      n_batches = int(math.ceil(float(len(inps)) / float(bs)))\n      for i in trange(n_batches):\n        inp = inps[(i * bs):min((i + 1) * bs, len(inps))]\n        inp = np.concatenate(inp, 0)\n        pred, pool = sess.run([softmax, pool3], {'ExpandDims:0': inp})\n        preds.append(pred)\n        pools.append(pool)\n      preds = np.concatenate(preds, 0)\n      scores = []\n      for i in range(splits):\n        part = preds[(i * preds.shape[0] // splits):((i + 1) * preds.shape[0] // splits), :]\n        kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))\n        kl = np.mean(np.sum(kl, 1))\n        scores.append(np.exp(kl))\n      return np.mean(scores), np.std(scores), np.squeeze(np.concatenate(pools, 0))\n  # Init inception\n  def _init_inception():\n    global softmax, pool3\n    if not os.path.exists(MODEL_DIR):\n      os.makedirs(MODEL_DIR)\n    filename = DATA_URL.split('/')[-1]\n    filepath = os.path.join(MODEL_DIR, filename)\n    if not os.path.exists(filepath):\n      def _progress(count, block_size, total_size):\n        sys.stdout.write('\\r>> Downloading %s %.1f%%' % (\n            filename, float(count * block_size) / float(total_size) * 100.0))\n        sys.stdout.flush()\n      filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)\n      print()\n      statinfo = os.stat(filepath)\n      print('Succesfully downloaded', filename, statinfo.st_size, 
  # Init inception\n  def _init_inception():\n    global softmax, pool3\n    if not os.path.exists(MODEL_DIR):\n      os.makedirs(MODEL_DIR)\n    filename = DATA_URL.split('/')[-1]\n    filepath = os.path.join(MODEL_DIR, filename)\n    if not os.path.exists(filepath):\n      def _progress(count, block_size, total_size):\n        sys.stdout.write('\\r>> Downloading %s %.1f%%' % (\n            filename, float(count * block_size) / float(total_size) * 100.0))\n        sys.stdout.flush()\n      filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)\n      print()\n      statinfo = os.stat(filepath)\n      print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')\n    tarfile.open(filepath, 'r:gz').extractall(MODEL_DIR)\n    with tf.gfile.FastGFile(os.path.join(\n        MODEL_DIR, 'classify_image_graph_def.pb'), 'rb') as f:\n      graph_def = tf.GraphDef()\n      graph_def.ParseFromString(f.read())\n      _ = tf.import_graph_def(graph_def, name='')\n    # Works with an arbitrary minibatch size.\n    with tf.Session() as sess:\n      pool3 = sess.graph.get_tensor_by_name('pool_3:0')\n      ops = pool3.graph.get_operations()\n      for op_idx, op in enumerate(ops):\n        for o in op.outputs:\n          shape = o.get_shape()\n          shape = [s.value for s in shape]\n          new_shape = []\n          for j, s in enumerate(shape):\n            if s == 1 and j == 0:\n              new_shape.append(None)\n            else:\n              new_shape.append(s)\n          o._shape = tf.TensorShape(new_shape)\n      w = sess.graph.get_operation_by_name(\"softmax/logits/MatMul\").inputs[1]\n      logits = tf.matmul(tf.squeeze(pool3), w)\n      softmax = tf.nn.softmax(logits)\n\n  # if softmax is None: # No need to functionalize like this.\n  _init_inception()\n\n  fname = '%s/%s/samples.npz' % (config['experiment_root'], config['experiment_name'])\n  print('loading %s...' % fname)\n  ims = np.load(fname)['x']\n  import time\n  t0 = time.time()\n  inc_mean, inc_std, pool_activations = get_inception_score(list(ims.swapaxes(1, 2).swapaxes(2, 3)), splits=10)\n  t1 = time.time()\n  print('Saving pool to numpy file for FID calculations...')\n  np.savez('%s/%s/TF_pool.npz' % (config['experiment_root'], config['experiment_name']), **{'pool_mean': np.mean(pool_activations, axis=0), 'pool_var': np.cov(pool_activations, rowvar=False)})\n  print('Inception took %.3f seconds, score of %.3f +/- %.3f.' % (t1-t0, inc_mean, inc_std))\n\n\ndef main():\n  # parse command line and run\n  parser = prepare_parser()\n  config = vars(parser.parse_args())\n  print(config)\n  run(config)\n\nif __name__ == '__main__':\n  main()"
  },
  {
    "path": "FQ-BigGAN/inception_utils.py",
    "content": "''' Inception utilities\n    This file contains methods for calculating IS and FID, using either\n    the original numpy code or an accelerated fully-pytorch version that \n    uses a fast newton-schulz approximation for the matrix sqrt. There are also\n    methods for acquiring a desired number of samples from the Generator,\n    and parallelizing the inbuilt PyTorch inception network.\n    \n    NOTE that Inception Scores and FIDs calculated using these methods will \n    *not* be directly comparable to values calculated using the original TF\n    IS/FID code. You *must* use the TF model if you wish to report and compare\n    numbers. This code tends to produce IS values that are 5-10% lower than\n    those obtained through TF. \n'''    \nimport numpy as np\nfrom scipy import linalg # For numpy FID\nimport time\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\nfrom torchvision.models.inception import inception_v3\n\n\n# Module that wraps the inception network to enable use with dataparallel and\n# returning pool features and logits.\nclass WrapInception(nn.Module):\n  def __init__(self, net):\n    super(WrapInception,self).__init__()\n    self.net = net\n    self.mean = P(torch.tensor([0.485, 0.456, 0.406]).view(1, -1, 1, 1),\n                  requires_grad=False)\n    self.std = P(torch.tensor([0.229, 0.224, 0.225]).view(1, -1, 1, 1),\n                 requires_grad=False)\n  def forward(self, x):\n    # Normalize x\n    x = (x + 1.) / 2.0\n    x = (x - self.mean) / self.std\n    # Upsample if necessary\n    if x.shape[2] != 299 or x.shape[3] != 299:\n      x = F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=True)\n    # 299 x 299 x 3\n    x = self.net.Conv2d_1a_3x3(x)\n    # 149 x 149 x 32\n    x = self.net.Conv2d_2a_3x3(x)\n    # 147 x 147 x 32\n    x = self.net.Conv2d_2b_3x3(x)\n    # 147 x 147 x 64\n    x = F.max_pool2d(x, kernel_size=3, stride=2)\n    # 73 x 73 x 64\n    x = self.net.Conv2d_3b_1x1(x)\n    # 73 x 73 x 80\n    x = self.net.Conv2d_4a_3x3(x)\n    # 71 x 71 x 192\n    x = F.max_pool2d(x, kernel_size=3, stride=2)\n    # 35 x 35 x 192\n    x = self.net.Mixed_5b(x)\n    # 35 x 35 x 256\n    x = self.net.Mixed_5c(x)\n    # 35 x 35 x 288\n    x = self.net.Mixed_5d(x)\n    # 35 x 35 x 288\n    x = self.net.Mixed_6a(x)\n    # 17 x 17 x 768\n    x = self.net.Mixed_6b(x)\n    # 17 x 17 x 768\n    x = self.net.Mixed_6c(x)\n    # 17 x 17 x 768\n    x = self.net.Mixed_6d(x)\n    # 17 x 17 x 768\n    x = self.net.Mixed_6e(x)\n    # 17 x 17 x 768\n    # 17 x 17 x 768\n    x = self.net.Mixed_7a(x)\n    # 8 x 8 x 1280\n    x = self.net.Mixed_7b(x)\n    # 8 x 8 x 2048\n    x = self.net.Mixed_7c(x)\n    # 8 x 8 x 2048\n    pool = torch.mean(x.view(x.size(0), x.size(1), -1), 2)\n    # 1 x 1 x 2048\n    logits = self.net.fc(F.dropout(pool, training=False).view(pool.size(0), -1))\n    # 1000 (num_classes)\n    return pool, logits\n\n\n# A pytorch implementation of cov, from Modar M. Alfadly\n# https://discuss.pytorch.org/t/covariance-and-gradient-support/16217/2\ndef torch_cov(m, rowvar=False):\n    '''Estimate a covariance matrix given data.\n\n    Covariance indicates the level to which two variables vary together.\n    If we examine N-dimensional samples, `X = [x_1, x_2, ... x_N]^T`,\n    then the covariance matrix element `C_{ij}` is the covariance of\n    `x_i` and `x_j`. 
The element `C_{ii}` is the variance of `x_i`.\n\n    Args:\n        m: A 1-D or 2-D array containing multiple variables and observations.\n            Each row of `m` represents a variable, and each column a single\n            observation of all those variables.\n        rowvar: If `rowvar` is True, then each row represents a\n            variable, with observations in the columns. Otherwise, the\n            relationship is transposed: each column represents a variable,\n            while the rows contain observations.\n\n    Returns:\n        The covariance matrix of the variables.\n    '''\n    if m.dim() > 2:\n        raise ValueError('m has more than 2 dimensions')\n    if m.dim() < 2:\n        m = m.view(1, -1)\n    if not rowvar and m.size(0) != 1:\n        m = m.t()\n    # m = m.type(torch.double)  # uncomment this line if desired\n    fact = 1.0 / (m.size(1) - 1)\n    # Subtract the mean out-of-place so the caller's tensor is not mutated\n    m = m - torch.mean(m, dim=1, keepdim=True)\n    mt = m.t()  # if complex: mt = m.t().conj()\n    return fact * m.matmul(mt).squeeze()\n\n\n# Pytorch implementation of matrix sqrt, from Tsung-Yu Lin, and Subhransu Maji\n# https://github.com/msubhransu/matrix-sqrt\ndef sqrt_newton_schulz(A, numIters, dtype=None):\n  with torch.no_grad():\n    if dtype is None:\n      dtype = A.type()\n    batchSize = A.shape[0]\n    dim = A.shape[1]\n    normA = A.mul(A).sum(dim=1).sum(dim=1).sqrt()\n    Y = A.div(normA.view(batchSize, 1, 1).expand_as(A))\n    I = torch.eye(dim, dim).view(1, dim, dim).repeat(batchSize, 1, 1).type(dtype)\n    Z = torch.eye(dim, dim).view(1, dim, dim).repeat(batchSize, 1, 1).type(dtype)\n    for i in range(numIters):\n      T = 0.5 * (3.0 * I - Z.bmm(Y))\n      Y = Y.bmm(T)\n      Z = T.bmm(Z)\n    sA = Y * torch.sqrt(normA).view(batchSize, 1, 1).expand_as(A)\n  return sA\n
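\n# Illustrative check (not part of the original file): the iteration should\n# converge so that sA.bmm(sA) ~= A for a well-conditioned SPD input, e.g.\n#   A = 2.0 * torch.eye(4).unsqueeze(0)\n#   sA = sqrt_newton_schulz(A, 50)\n#   print((sA.bmm(sA) - A).abs().max())  # ~0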
\n\n# FID calculator from TTUR--consider replacing this with GPU-accelerated cov\n# calculations using torch?\ndef numpy_calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):\n  \"\"\"Numpy implementation of the Frechet Distance.\n  Taken from https://github.com/bioinf-jku/TTUR\n  The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)\n  and X_2 ~ N(mu_2, C_2) is\n          d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).\n  Stable version by Dougal J. Sutherland.\n  Params:\n  -- mu1   : Numpy array containing the activations of a layer of the\n             inception net (like returned by the function 'get_predictions')\n             for generated samples.\n  -- mu2   : The sample mean over activations, precalculated on a\n             representative data set.\n  -- sigma1: The covariance matrix over activations for generated samples.\n  -- sigma2: The covariance matrix over activations, precalculated on a\n             representative data set.\n  Returns:\n  --   : The Frechet Distance.\n  \"\"\"\n\n  mu1 = np.atleast_1d(mu1)\n  mu2 = np.atleast_1d(mu2)\n\n  sigma1 = np.atleast_2d(sigma1)\n  sigma2 = np.atleast_2d(sigma2)\n\n  assert mu1.shape == mu2.shape, \\\n    'Training and test mean vectors have different lengths'\n  assert sigma1.shape == sigma2.shape, \\\n    'Training and test covariances have different dimensions'\n\n  diff = mu1 - mu2\n\n  # Product might be almost singular\n  covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)\n  if not np.isfinite(covmean).all():\n    msg = ('fid calculation produces singular product; '\n           'adding %s to diagonal of cov estimates') % eps\n    print(msg)\n    offset = np.eye(sigma1.shape[0]) * eps\n    covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))\n\n  # Numerical error might give slight imaginary component\n  if np.iscomplexobj(covmean):\n    print('warning: covmean is complex; taking its real part')\n    if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):\n      m = np.max(np.abs(covmean.imag))\n      raise ValueError('Imaginary component {}'.format(m))\n    covmean = covmean.real\n\n  tr_covmean = np.trace(covmean)\n\n  out = diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean\n  return out\n
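\n# Illustrative usage (not part of the original file): identical moments give ~0;\n# the shapes follow the 2048-d Inception pool features used in this file.\n#   mu, sigma = np.zeros(2048), np.eye(2048)\n#   print(numpy_calculate_frechet_distance(mu, sigma, mu, sigma))  # ~0.0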
\n\ndef torch_calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):\n  \"\"\"Pytorch implementation of the Frechet Distance.\n  Taken from https://github.com/bioinf-jku/TTUR\n  The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)\n  and X_2 ~ N(mu_2, C_2) is\n          d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).\n  Stable version by Dougal J. Sutherland.\n  Params:\n  -- mu1   : Numpy array containing the activations of a layer of the\n             inception net (like returned by the function 'get_predictions')\n             for generated samples.\n  -- mu2   : The sample mean over activations, precalculated on a\n             representative data set.\n  -- sigma1: The covariance matrix over activations for generated samples.\n  -- sigma2: The covariance matrix over activations, precalculated on a\n             representative data set.\n  Returns:\n  --   : The Frechet Distance.\n  \"\"\"\n\n  assert mu1.shape == mu2.shape, \\\n    'Training and test mean vectors have different lengths'\n  assert sigma1.shape == sigma2.shape, \\\n    'Training and test covariances have different dimensions'\n\n  diff = mu1 - mu2\n  # Run 50 itrs of newton-schulz to get the matrix sqrt of sigma1 dot sigma2\n  covmean = sqrt_newton_schulz(sigma1.mm(sigma2).unsqueeze(0), 50).squeeze()\n  out = (diff.dot(diff) + torch.trace(sigma1) + torch.trace(sigma2)\n         - 2 * torch.trace(covmean))\n  return out\n\n\n# Calculate Inception Score mean + std given softmax'd logits and number of splits\ndef calculate_inception_score(pred, num_splits=10):\n  scores = []\n  for index in range(num_splits):\n    pred_chunk = pred[index * (pred.shape[0] // num_splits): (index + 1) * (pred.shape[0] // num_splits), :]\n    kl_inception = pred_chunk * (np.log(pred_chunk) - np.log(np.expand_dims(np.mean(pred_chunk, 0), 0)))\n    kl_inception = np.mean(np.sum(kl_inception, 1))\n    scores.append(np.exp(kl_inception))\n  return np.mean(scores), np.std(scores)\n\n\n# Loop and run the sampler and the net until it accumulates num_inception_images\n# activations. Return the pool, the logits, and the labels (if one wants\n# Inception Accuracy the labels of the generated class will be needed)\ndef accumulate_inception_activations(sample, net, num_inception_images=50000):\n  pool, logits, labels = [], [], []\n  while (torch.cat(logits, 0).shape[0] if len(logits) else 0) < num_inception_images:\n    with torch.no_grad():\n      images, labels_val = sample()\n      pool_val, logits_val = net(images.float())\n      pool += [pool_val]\n      logits += [F.softmax(logits_val, 1)]\n      labels += [labels_val]\n  return torch.cat(pool, 0), torch.cat(logits, 0), torch.cat(labels, 0)\n\n\n# Load and wrap the Inception model\ndef load_inception_net(parallel=False):\n  inception_model = inception_v3(pretrained=True, transform_input=False)\n  inception_model = WrapInception(inception_model.eval()).cuda()\n  if parallel:\n    print('Parallelizing Inception module...')\n    inception_model = nn.DataParallel(inception_model)\n  return inception_model\n\n\n# This produces a function which takes in an iterator which returns a set number of samples\n# and iterates until it accumulates config['num_inception_images'] images.\n# The iterator can return samples with a different batch size than used in\n# training, using the setting config['inception_batchsize']\ndef prepare_inception_metrics(dataset, parallel, no_fid=False):\n  # Load metrics; this is intentionally not in a try-except block so that\n  # the script will crash here if it cannot find the Inception moments.\n  # By default, remove the \"_hdf5\" suffix from the dataset name\n  dataset = dataset.replace('_hdf5', '')\n  data_mu = np.load(dataset+'_inception_moments.npz')['mu']\n  data_sigma = np.load(dataset+'_inception_moments.npz')['sigma']\n  # Load network\n  net = load_inception_net(parallel)\n  def get_inception_metrics(sample, num_inception_images,
num_splits=10, \n                            prints=True, use_torch=False):\n    if prints:\n      print('Gathering activations...')\n    pool, logits, labels = accumulate_inception_activations(sample, net, num_inception_images)\n    if prints:  \n      print('Calculating Inception Score...')\n    IS_mean, IS_std = calculate_inception_score(logits.cpu().numpy(), num_splits)\n    if no_fid:\n      FID = 9999.0\n    else:\n      if prints:\n        print('Calculating means and covariances...')\n      if use_torch:\n        mu, sigma = torch.mean(pool, 0), torch_cov(pool, rowvar=False)\n      else:\n        mu, sigma = np.mean(pool.cpu().numpy(), axis=0), np.cov(pool.cpu().numpy(), rowvar=False)\n      if prints:\n        print('Covariances calculated, getting FID...')\n      if use_torch:\n        FID = torch_calculate_frechet_distance(mu, sigma, torch.tensor(data_mu).float().cuda(), torch.tensor(data_sigma).float().cuda())\n        FID = float(FID.cpu().numpy())\n      else:\n        FID = numpy_calculate_frechet_distance(mu, sigma, data_mu, data_sigma)\n    # Delete mu, sigma, pool, logits, and labels, just in case\n    del mu, sigma, pool, logits, labels\n    return IS_mean, IS_std, FID\n  return get_inception_metrics"
  },
  {
    "path": "FQ-BigGAN/layers.py",
    "content": "''' Layers\n    This file contains various layers for the BigGAN models.\n'''\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\n\nfrom sync_batchnorm import SynchronizedBatchNorm2d as SyncBN2d\n\n\n# Projection of x onto y\ndef proj(x, y):\n  return torch.mm(y, x.t()) * y / torch.mm(y, y.t())\n\n\n# Orthogonalize x wrt list of vectors ys\ndef gram_schmidt(x, ys):\n  for y in ys:\n    x = x - proj(x, y)\n  return x\n\n\n# Apply num_itrs steps of the power method to estimate top N singular values.\ndef power_iteration(W, u_, update=True, eps=1e-12):\n  # Lists holding singular vectors and values\n  us, vs, svs = [], [], []\n  for i, u in enumerate(u_):\n    # Run one step of the power iteration\n    with torch.no_grad():\n      v = torch.matmul(u, W)\n      # Run Gram-Schmidt to subtract components of all other singular vectors\n      v = F.normalize(gram_schmidt(v, vs), eps=eps)\n      # Add to the list\n      vs += [v]\n      # Update the other singular vector\n      u = torch.matmul(v, W.t())\n      # Run Gram-Schmidt to subtract components of all other singular vectors\n      u = F.normalize(gram_schmidt(u, us), eps=eps)\n      # Add to the list\n      us += [u]\n      if update:\n        u_[i][:] = u\n    # Compute this singular value and add it to the list\n    svs += [torch.squeeze(torch.matmul(torch.matmul(v, W.t()), u.t()))]\n    #svs += [torch.sum(F.linear(u, W.transpose(0, 1)) * v)]\n  return svs, us, vs\n\n\n# Convenience passthrough function\nclass identity(nn.Module):\n  def forward(self, input):\n    return input\n \n\n# Spectral normalization base class \nclass SN(object):\n  def __init__(self, num_svs, num_itrs, num_outputs, transpose=False, eps=1e-12):\n    # Number of power iterations per step\n    self.num_itrs = num_itrs\n    # Number of singular values\n    self.num_svs = num_svs\n    # Transposed?\n    self.transpose = transpose\n    # Epsilon value for avoiding divide-by-0\n    self.eps = eps\n    # Register a singular vector for each sv\n    for i in range(self.num_svs):\n      self.register_buffer('u%d' % i, torch.randn(1, num_outputs))\n      self.register_buffer('sv%d' % i, torch.ones(1))\n  \n  # Singular vectors (u side)\n  @property\n  def u(self):\n    return [getattr(self, 'u%d' % i) for i in range(self.num_svs)]\n\n  # Singular values; \n  # note that these buffers are just for logging and are not used in training. 
\n  @property\n  def sv(self):\n    return [getattr(self, 'sv%d' % i) for i in range(self.num_svs)]\n\n  # Compute the spectrally-normalized weight\n  def W_(self):\n    W_mat = self.weight.view(self.weight.size(0), -1)\n    if self.transpose:\n      W_mat = W_mat.t()\n    # Apply num_itrs power iterations\n    for _ in range(self.num_itrs):\n      svs, us, vs = power_iteration(W_mat, self.u, update=self.training, eps=self.eps)\n    # Update the svs\n    if self.training:\n      with torch.no_grad(): # Make sure to do this in a no_grad() context or you'll get memory leaks!\n        for i, sv in enumerate(svs):\n          self.sv[i][:] = sv\n    return self.weight / svs[0]\n\n\n# 2D Conv layer with spectral norm\nclass SNConv2d(nn.Conv2d, SN):\n  def __init__(self, in_channels, out_channels, kernel_size, stride=1,\n             padding=0, dilation=1, groups=1, bias=True,\n             num_svs=1, num_itrs=1, eps=1e-12):\n    nn.Conv2d.__init__(self, in_channels, out_channels, kernel_size, stride,\n                     padding, dilation, groups, bias)\n    SN.__init__(self, num_svs, num_itrs, out_channels, eps=eps)\n  def forward(self, x):\n    return F.conv2d(x, self.W_(), self.bias, self.stride,\n                    self.padding, self.dilation, self.groups)\n\n\n# Linear layer with spectral norm\nclass SNLinear(nn.Linear, SN):\n  def __init__(self, in_features, out_features, bias=True,\n               num_svs=1, num_itrs=1, eps=1e-12):\n    nn.Linear.__init__(self, in_features, out_features, bias)\n    SN.__init__(self, num_svs, num_itrs, out_features, eps=eps)\n  def forward(self, x):\n    return F.linear(x, self.W_(), self.bias)\n\n\n# Embedding layer with spectral norm\n# We use num_embeddings as the dim instead of embedding_dim here\n# for convenience's sake\nclass SNEmbedding(nn.Embedding, SN):\n  def __init__(self, num_embeddings, embedding_dim, padding_idx=None,\n               max_norm=None, norm_type=2, scale_grad_by_freq=False,\n               sparse=False, _weight=None,\n               num_svs=1, num_itrs=1, eps=1e-12):\n    nn.Embedding.__init__(self, num_embeddings, embedding_dim, padding_idx,\n                          max_norm, norm_type, scale_grad_by_freq,\n                          sparse, _weight)\n    SN.__init__(self, num_svs, num_itrs, num_embeddings, eps=eps)\n  def forward(self, x):\n    return F.embedding(x, self.W_())\n
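\n# Illustrative usage (not part of the original file): SNConv2d is a drop-in\n# spectrally-normalized Conv2d; num_itrs power-iteration steps run inside W_() per call.\n#   conv = SNConv2d(64, 128, kernel_size=3, padding=1)\n#   out = conv(torch.randn(2, 64, 32, 32))  # -> (2, 128, 32, 32)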
\n\n# A non-local block as used in SA-GAN\n# Note that the implementation as described in the paper is largely incorrect;\n# refer to the released code for the actual implementation.\nclass Attention(nn.Module):\n  def __init__(self, ch, which_conv=SNConv2d, name='attention'):\n    super(Attention, self).__init__()\n    # Channel multiplier\n    self.ch = ch\n    self.which_conv = which_conv\n    self.theta = self.which_conv(self.ch, self.ch // 8, kernel_size=1, padding=0, bias=False)\n    self.phi = self.which_conv(self.ch, self.ch // 8, kernel_size=1, padding=0, bias=False)\n    self.g = self.which_conv(self.ch, self.ch // 2, kernel_size=1, padding=0, bias=False)\n    self.o = self.which_conv(self.ch // 2, self.ch, kernel_size=1, padding=0, bias=False)\n    # Learnable gain parameter\n    self.gamma = P(torch.tensor(0.), requires_grad=True)\n  def forward(self, x, y=None):\n    # Apply convs\n    theta = self.theta(x)\n    phi = F.max_pool2d(self.phi(x), [2, 2])\n    g = F.max_pool2d(self.g(x), [2, 2])\n    # Perform reshapes\n    theta = theta.view(-1, self.ch // 8, x.shape[2] * x.shape[3])\n    phi = phi.view(-1, self.ch // 8, x.shape[2] * x.shape[3] // 4)\n    g = g.view(-1, self.ch // 2, x.shape[2] * x.shape[3] // 4)\n    # Matmul and softmax to get attention maps\n    beta = F.softmax(torch.bmm(theta.transpose(1, 2), phi), -1)\n    # Attention map times g path\n    o = self.o(torch.bmm(g, beta.transpose(1, 2)).view(-1, self.ch // 2, x.shape[2], x.shape[3]))\n    return self.gamma * o + x\n\n\n# Fused batchnorm op\ndef fused_bn(x, mean, var, gain=None, bias=None, eps=1e-5):\n  # Apply scale and shift--if gain and bias are provided, fuse them here\n  # Prepare scale\n  scale = torch.rsqrt(var + eps)\n  # If a gain is provided, use it\n  if gain is not None:\n    scale = scale * gain\n  # Prepare shift\n  shift = mean * scale\n  # If bias is provided, use it\n  if bias is not None:\n    shift = shift - bias\n  return x * scale - shift\n  # return ((x - mean) / ((var + eps) ** 0.5)) * gain + bias # The unfused way.\n\n\n# Manual BN\n# Calculate means and variances using mean-of-squares minus mean-squared\ndef manual_bn(x, gain=None, bias=None, return_mean_var=False, eps=1e-5):\n  # Cast x to float32 if necessary\n  float_x = x.float()\n  # Calculate expected value of x (m) and expected value of x**2 (m2)\n  # Mean of x\n  m = torch.mean(float_x, [0, 2, 3], keepdim=True)\n  # Mean of x squared\n  m2 = torch.mean(float_x ** 2, [0, 2, 3], keepdim=True)\n  # Calculate variance as mean of squared minus mean squared.\n  var = (m2 - m ** 2)\n  # Cast back to float 16 if necessary\n  var = var.type(x.type())\n  m = m.type(x.type())\n  # Return mean and variance for updating stored mean/var if requested\n  if return_mean_var:\n    return fused_bn(x, m, var, gain, bias, eps), m.squeeze(), var.squeeze()\n  else:\n    return fused_bn(x, m, var, gain, bias, eps)\n\n\n# My batchnorm, supports standing stats\nclass myBN(nn.Module):\n  def __init__(self, num_channels, eps=1e-5, momentum=0.1):\n    super(myBN, self).__init__()\n    # momentum for updating running stats\n    self.momentum = momentum\n    # epsilon to avoid dividing by 0\n    self.eps = eps\n    # Register buffers\n    self.register_buffer('stored_mean', torch.zeros(num_channels))\n    self.register_buffer('stored_var',  torch.ones(num_channels))\n    self.register_buffer('accumulation_counter', torch.zeros(1))\n    # Accumulate running means and vars\n    self.accumulate_standing = False\n\n  # reset standing stats\n  def reset_stats(self):\n    self.stored_mean[:] = 0\n    self.stored_var[:] = 0\n    self.accumulation_counter[:] = 0\n\n  def forward(self, x, gain, bias):\n    if self.training:\n      out, mean, var = manual_bn(x, gain, bias, return_mean_var=True, eps=self.eps)\n      # If accumulating standing stats, increment them\n      if self.accumulate_standing:\n        self.stored_mean[:] = self.stored_mean + mean.data\n        self.stored_var[:] = self.stored_var + var.data\n        self.accumulation_counter += 1.0\n      # If not accumulating standing stats, take running averages\n      else:\n        self.stored_mean[:] = self.stored_mean * (1 - self.momentum) + mean * self.momentum\n        self.stored_var[:] = self.stored_var * (1 - self.momentum) + var * self.momentum\n      return out\n    # If not in training mode, use the stored statistics\n    else:\n      mean = self.stored_mean.view(1, -1, 1, 1)\n      var = self.stored_var.view(1, -1, 1, 1)\n      # If using standing stats, divide them by the accumulation counter\n      if self.accumulate_standing:\n        mean = mean / self.accumulation_counter\n        var = var / self.accumulation_counter\n      return fused_bn(x, mean, var, gain, bias, self.eps)\n
\n\n# Simple function to handle groupnorm norm stylization\ndef groupnorm(x, norm_style):\n  # If number of channels specified in norm_style:\n  if 'ch' in norm_style:\n    ch = int(norm_style.split('_')[-1])\n    groups = max(int(x.shape[1]) // ch, 1)\n  # If number of groups specified in norm style\n  elif 'grp' in norm_style:\n    groups = int(norm_style.split('_')[-1])\n  # If neither, default to groups = 16\n  else:\n    groups = 16\n  return F.group_norm(x, groups)\n\n\n# Class-conditional bn\n# output size is the number of channels, input size is for the linear layers\n# Andy's Note: this class feels messy but I'm not really sure how to clean it up\n# Suggestions welcome! (By which I mean, refactor this and make a pull request\n# if you want to make this more readable/usable).\nclass ccbn(nn.Module):\n  def __init__(self, output_size, input_size, which_linear, eps=1e-5, momentum=0.1,\n               cross_replica=False, mybn=False, norm_style='bn',):\n    super(ccbn, self).__init__()\n    self.output_size, self.input_size = output_size, input_size\n    # Prepare gain and bias layers\n    self.gain = which_linear(input_size, output_size)\n    self.bias = which_linear(input_size, output_size)\n    # epsilon to avoid dividing by 0\n    self.eps = eps\n    # Momentum\n    self.momentum = momentum\n    # Use cross-replica batchnorm?\n    self.cross_replica = cross_replica\n    # Use my batchnorm?\n    self.mybn = mybn\n    # Norm style?\n    self.norm_style = norm_style\n\n    if self.cross_replica:\n      self.bn = SyncBN2d(output_size, eps=self.eps, momentum=self.momentum, affine=False)\n    elif self.mybn:\n      self.bn = myBN(output_size, self.eps, self.momentum)\n    elif self.norm_style in ['bn', 'in']:\n      self.register_buffer('stored_mean', torch.zeros(output_size))\n      self.register_buffer('stored_var',  torch.ones(output_size))\n\n  def forward(self, x, y):\n    # Calculate class-conditional gains and biases\n    gain = (1 + self.gain(y)).view(y.size(0), -1, 1, 1)\n    bias = self.bias(y).view(y.size(0), -1, 1, 1)\n    # If using my batchnorm\n    if self.mybn or self.cross_replica:\n      return self.bn(x, gain=gain, bias=bias)\n    else:\n      if self.norm_style == 'bn':\n        out = F.batch_norm(x, self.stored_mean, self.stored_var, None, None,\n                          self.training, 0.1, self.eps)\n      elif self.norm_style == 'in':\n        out = F.instance_norm(x, self.stored_mean, self.stored_var, None, None,\n                          self.training, 0.1, self.eps)\n      elif self.norm_style == 'gn':\n        out = groupnorm(x, self.norm_style)\n      elif self.norm_style == 'nonorm':\n        out = x\n      return out * gain + bias\n\n  def extra_repr(self):\n    s = 'out: {output_size}, in: {input_size},'\n    s += ' cross_replica={cross_replica}'\n    return s.format(**self.__dict__)\n\n\n# Normal, non-class-conditional BN\nclass bn(nn.Module):\n  def __init__(self, output_size, eps=1e-5, momentum=0.1,\n                cross_replica=False, mybn=False):\n    super(bn, self).__init__()\n    self.output_size = output_size\n    # Prepare gain and bias layers\n    self.gain = P(torch.ones(output_size), requires_grad=True)\n    self.bias = P(torch.zeros(output_size), requires_grad=True)\n    # epsilon to avoid dividing by 0\n    self.eps = eps\n    # Momentum\n    self.momentum = momentum\n    # Use cross-replica batchnorm?\n    self.cross_replica = cross_replica\n    # Use my batchnorm?\n    self.mybn = mybn\n\n    if self.cross_replica:\n      self.bn = SyncBN2d(output_size, eps=self.eps, momentum=self.momentum, affine=False)\n    elif mybn:\n      self.bn = myBN(output_size, self.eps, self.momentum)\n    # Register buffers if neither of the above\n    else:\n      self.register_buffer('stored_mean', torch.zeros(output_size))\n      self.register_buffer('stored_var',  torch.ones(output_size))\n\n  def forward(self, x, y=None):\n    if self.cross_replica or self.mybn:\n      gain = self.gain.view(1, -1, 1, 1)\n      bias = self.bias.view(1, -1, 1, 1)\n      return self.bn(x, gain=gain, bias=bias)\n    else:\n      return F.batch_norm(x, self.stored_mean, self.stored_var, self.gain,\n                          self.bias, self.training, self.momentum, self.eps)\n\n\n# Generator blocks\n# Note that this class assumes the kernel size and padding (and any other\n# settings) have been selected in the main generator module and passed in\n# through the which_conv arg. Similar rules apply with which_bn (the input\n# size [which is actually the number of channels of the conditional info] must\n# be preselected)\nclass GBlock(nn.Module):\n  def __init__(self, in_channels, out_channels,\n               which_conv=nn.Conv2d, which_bn=bn, activation=None,\n               upsample=None):\n    super(GBlock, self).__init__()\n\n    self.in_channels, self.out_channels = in_channels, out_channels\n    self.which_conv, self.which_bn = which_conv, which_bn\n    self.activation = activation\n    # Conv layers\n    self.conv1 = self.which_conv(self.in_channels, self.out_channels)\n    self.conv2 = self.which_conv(self.out_channels, self.out_channels)\n    self.learnable_sc = in_channels != out_channels or upsample\n    if self.learnable_sc:\n      self.conv_sc = self.which_conv(in_channels, out_channels,\n                                     kernel_size=1, padding=0)\n    # Batchnorm layers\n    self.bn1 = self.which_bn(in_channels)\n    self.bn2 = self.which_bn(out_channels)\n    # Upsample layer\n    self.upsample = upsample\n\n  def forward(self, x, y):\n    h = self.activation(self.bn1(x, y))\n    if self.upsample:\n      h = self.upsample(h)\n      x = self.upsample(x)\n    h = self.conv1(h)\n    h = self.activation(self.bn2(h, y))\n    h = self.conv2(h)\n    if self.learnable_sc:\n      x = self.conv_sc(x)\n    return h + x\n\n\n# Residual block for the discriminator\nclass DBlock(nn.Module):\n  def __init__(self, in_channels, out_channels, which_conv=SNConv2d, wide=True,\n               preactivation=False, activation=None, downsample=None,):\n    super(DBlock, self).__init__()\n    self.in_channels, self.out_channels = in_channels, out_channels\n    # If using wide D (as in SA-GAN and BigGAN), change the channel pattern\n    self.hidden_channels = self.out_channels if wide else self.in_channels\n    self.which_conv = which_conv\n    self.preactivation = preactivation\n    self.activation = activation\n    self.downsample = downsample\n\n    # Conv layers\n    self.conv1 = self.which_conv(self.in_channels, self.hidden_channels)\n    self.conv2 = self.which_conv(self.hidden_channels, self.out_channels)\n    self.learnable_sc = True if (in_channels != out_channels) or downsample else False\n    if self.learnable_sc:\n      self.conv_sc = self.which_conv(in_channels,
out_channels, \n                                     kernel_size=1, padding=0)\n  def shortcut(self, x):\n    if self.preactivation:\n      if self.learnable_sc:\n        x = self.conv_sc(x)\n      if self.downsample:\n        x = self.downsample(x)\n    else:\n      if self.downsample:\n        x = self.downsample(x)\n      if self.learnable_sc:\n        x = self.conv_sc(x)\n    return x\n    \n  def forward(self, x):\n    if self.preactivation:\n      # h = self.activation(x) # NOT TODAY SATAN\n      # Andy's note: This line *must* be an out-of-place ReLU or it \n      #              will negatively affect the shortcut connection.\n      h = F.relu(x)\n    else:\n      h = x    \n    h = self.conv1(h)\n    h = self.conv2(self.activation(h))\n    if self.downsample:\n      h = self.downsample(h)     \n        \n    return h + self.shortcut(x)\n    \n# dogball"
  },
  {
    "path": "FQ-BigGAN/losses.py",
    "content": "import torch\nimport torch.nn.functional as F\n\n# DCGAN loss\ndef loss_dcgan_dis(dis_fake, dis_real):\n  L1 = torch.mean(F.softplus(-dis_real))\n  L2 = torch.mean(F.softplus(dis_fake))\n  return L1, L2\n\n\ndef loss_dcgan_gen(dis_fake):\n  loss = torch.mean(F.softplus(-dis_fake))\n  return loss\n\n\n# Hinge Loss\ndef loss_hinge_dis(dis_fake, dis_real):\n  loss_real = torch.mean(F.relu(1. - dis_real))\n  loss_fake = torch.mean(F.relu(1. + dis_fake))\n  return loss_real, loss_fake\n# def loss_hinge_dis(dis_fake, dis_real): # This version returns a single loss\n  # loss = torch.mean(F.relu(1. - dis_real))\n  # loss += torch.mean(F.relu(1. + dis_fake))\n  # return loss\n\n\ndef loss_hinge_gen(dis_fake):\n  loss = -torch.mean(dis_fake)\n  return loss\n\n# Default to hinge loss\ngenerator_loss = loss_hinge_gen\ndiscriminator_loss = loss_hinge_dis"
  },
  {
    "path": "FQ-BigGAN/make_hdf5.py",
    "content": "\"\"\" Convert dataset to HDF5\n    This script preprocesses a dataset and saves it (images and labels) to \n    an HDF5 file for improved I/O. \"\"\"\nimport os\nimport sys\nfrom argparse import ArgumentParser\nfrom tqdm import tqdm, trange\nimport h5py as h5\n\nimport numpy as np\nimport torch\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nfrom torchvision.utils import save_image\nimport torchvision.transforms as transforms\nfrom torch.utils.data import DataLoader\n\nimport utils\n\ndef prepare_parser():\n  usage = 'Parser for ImageNet HDF5 scripts.'\n  parser = ArgumentParser(description=usage)\n  parser.add_argument(\n    '--dataset', type=str, default='I128',\n    help='Which Dataset to train on, out of I128, I256, C10, C100;'\n         'Append \"_hdf5\" to use the hdf5 version for ISLVRC (default: %(default)s)')\n  parser.add_argument(\n    '--data_root', type=str, default='data',\n    help='Default location where data is stored (default: %(default)s)')\n  parser.add_argument(\n    '--batch_size', type=int, default=256,\n    help='Default overall batchsize (default: %(default)s)')\n  parser.add_argument(\n    '--num_workers', type=int, default=16,\n    help='Number of dataloader workers (default: %(default)s)')\n  parser.add_argument(\n    '--chunk_size', type=int, default=500,\n    help='Default overall batchsize (default: %(default)s)')\n  parser.add_argument(\n    '--compression', action='store_true', default=False,\n    help='Use LZF compression? (default: %(default)s)')\n  return parser\n\n\ndef run(config):\n  if 'hdf5' in config['dataset']:\n    raise ValueError('Reading from an HDF5 file which you will probably be '\n                     'about to overwrite! Override this error only if you know '\n                     'what you''re doing!')\n  # Get image size\n  config['image_size'] = utils.imsize_dict[config['dataset']]\n\n  # Update compression entry\n  config['compression'] = 'lzf' if config['compression'] else None #No compression; can also use 'lzf' \n\n  # Get dataset\n  kwargs = {'num_workers': config['num_workers'], 'pin_memory': False, 'drop_last': False}\n  train_loader = utils.get_data_loaders(dataset=config['dataset'],\n                                        batch_size=config['batch_size'],\n                                        shuffle=False,\n                                        data_root=config['data_root'],\n                                        use_multiepoch_sampler=False,\n                                        **kwargs)[0]     \n\n  # HDF5 supports chunking and compression. You may want to experiment \n  # with different chunk sizes to see how it runs on your machines.\n  # Chunk Size/compression     Read speed @ 256x256   Read speed @ 128x128  Filesize @ 128x128    Time to write @128x128\n  # 1 / None                   20/s\n  # 500 / None                 ramps up to 77/s       102/s                 61GB                  23min\n  # 500 / LZF                                         8/s                   56GB                  23min\n  # 1000 / None                78/s\n  # 5000 / None                81/s\n  # auto:(125,1,16,32) / None                         11/s                  61GB        \n\n  print('Starting to load %s into an HDF5 file with chunk size %i and compression %s...' 
% (config['dataset'], config['chunk_size'], config['compression']))\n  # Loop over train loader\n  for i,(x,y) in enumerate(tqdm(train_loader)):\n    # Stick X into the range [0, 255] since it's coming from the train loader\n    x = (255 * ((x + 1) / 2.0)).byte().numpy()\n    # Numpyify y\n    y = y.numpy()\n    # If we're on the first batch, prepare the hdf5\n    if i==0:\n      with h5.File(config['data_root'] + '/ILSVRC%i.hdf5' % config['image_size'], 'w') as f:\n        print('Producing dataset of len %d' % len(train_loader.dataset))\n        imgs_dset = f.create_dataset('imgs', x.shape,dtype='uint8', maxshape=(len(train_loader.dataset), 3, config['image_size'], config['image_size']),\n                                     chunks=(config['chunk_size'], 3, config['image_size'], config['image_size']), compression=config['compression']) \n        print('Image chunks chosen as ' + str(imgs_dset.chunks))\n        imgs_dset[...] = x\n        labels_dset = f.create_dataset('labels', y.shape, dtype='int64', maxshape=(len(train_loader.dataset),), chunks=(config['chunk_size'],), compression=config['compression'])\n        print('Label chunks chosen as ' + str(labels_dset.chunks))\n        labels_dset[...] = y\n    # Else append to the hdf5\n    else:\n      with h5.File(config['data_root'] + '/ILSVRC%i.hdf5' % config['image_size'], 'a') as f:\n        f['imgs'].resize(f['imgs'].shape[0] + x.shape[0], axis=0)\n        f['imgs'][-x.shape[0]:] = x\n        f['labels'].resize(f['labels'].shape[0] + y.shape[0], axis=0)\n        f['labels'][-y.shape[0]:] = y\n\n\ndef main():\n  # parse command line and run    \n  parser = prepare_parser()\n  config = vars(parser.parse_args())\n  print(config)\n  run(config)\n\nif __name__ == '__main__':    \n  main()"
  },
  {
    "path": "FQ-BigGAN/sample.py",
    "content": "''' Sample\n   This script loads a pretrained net and a weightsfile and sample '''\nimport functools\nimport math\nimport numpy as np\nfrom tqdm import tqdm, trange\n\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\nimport torchvision\n\n# Import my stuff\nimport inception_utils\nimport utils\nimport losses\n\n\n\ndef run(config):\n  # Prepare state dict, which holds things like epoch # and itr #\n  state_dict = {'itr': 0, 'epoch': 0, 'save_num': 0, 'save_best_num': 0,\n                'best_IS': 0, 'best_FID': 999999, 'config': config}\n                \n  # Optionally, get the configuration from the state dict. This allows for\n  # recovery of the config provided only a state dict and experiment name,\n  # and can be convenient for writing less verbose sample shell scripts.\n  if config['config_from_name']:\n    utils.load_weights(None, None, state_dict, config['weights_root'], \n                       config['experiment_name'], config['load_weights'], None,\n                       strict=False, load_optim=False)\n    # Ignore items which we might want to overwrite from the command line\n    for item in state_dict['config']:\n      if item not in ['z_var', 'base_root', 'batch_size', 'G_batch_size', 'use_ema', 'G_eval_mode']:\n        config[item] = state_dict['config'][item]\n  \n  # update config (see train.py for explanation)\n  config['resolution'] = utils.imsize_dict[config['dataset']]\n  config['n_classes'] = utils.nclass_dict[config['dataset']]\n  config['G_activation'] = utils.activation_dict[config['G_nl']]\n  config['D_activation'] = utils.activation_dict[config['D_nl']]\n  config = utils.update_config_roots(config)\n  config['skip_init'] = True\n  config['no_optim'] = True\n  device = 'cuda'\n  \n  # Seed RNG\n  utils.seed_rng(config['seed'])\n   \n  # Setup cudnn.benchmark for free speed\n  torch.backends.cudnn.benchmark = True\n  \n  # Import the model--this line allows us to dynamically select different files.\n  model = __import__(config['model'])\n  experiment_name = (config['experiment_name'] if config['experiment_name']\n                       else utils.name_from_config(config))\n  print('Experiment name is %s' % experiment_name)\n  \n  G = model.Generator(**config).cuda()\n  utils.count_parameters(G)\n  \n  # Load weights\n  print('Loading weights...')\n  # Here is where we deal with the ema--load ema weights or load normal weights\n  utils.load_weights(G if not (config['use_ema']) else None, None, state_dict, \n                     config['weights_root'], experiment_name, config['load_weights'],\n                     G if config['ema'] and config['use_ema'] else None,\n                     strict=False, load_optim=False)\n  # Update batch size setting used for G\n  G_batch_size = max(config['G_batch_size'], config['batch_size']) \n  z_, y_ = utils.prepare_z_y(G_batch_size, G.dim_z, config['n_classes'],\n                             device=device, fp16=config['G_fp16'], \n                             z_var=config['z_var'])\n  \n  if config['G_eval_mode']:\n    print('Putting G in eval mode..')\n    G.eval()\n  else:\n    print('G is in %s mode...' % ('training' if G.training else 'eval'))\n    \n  #Sample function\n  sample = functools.partial(utils.sample, G=G, z_=z_, y_=y_, config=config)  \n  if config['accumulate_stats']:\n    print('Accumulating standing stats across %d accumulations...' 
% config['num_standing_accumulations'])\n    utils.accumulate_standing_stats(G, z_, y_, config['n_classes'],\n                                    config['num_standing_accumulations'])\n    \n  \n  # Sample a number of images and save them to an NPZ, for use with TF-Inception\n  if config['sample_npz']:\n    # Lists to hold images and labels for images\n    x, y = [], []\n    print('Sampling %d images and saving them to npz...' % config['sample_num_npz'])\n    for i in trange(int(np.ceil(config['sample_num_npz'] / float(G_batch_size)))):\n      with torch.no_grad():\n        images, labels = sample()\n      x += [np.uint8(255 * (images.cpu().numpy() + 1) / 2.)]\n      y += [labels.cpu().numpy()]\n    x = np.concatenate(x, 0)[:config['sample_num_npz']]\n    y = np.concatenate(y, 0)[:config['sample_num_npz']]    \n    print('Images shape: %s, Labels shape: %s' % (x.shape, y.shape))\n    npz_filename = '%s/%s/samples.npz' % (config['samples_root'], experiment_name)\n    print('Saving npz to %s...' % npz_filename)\n    np.savez(npz_filename, **{'x' : x, 'y' : y})\n  \n  # Prepare sample sheets\n  if config['sample_sheets']:\n    print('Preparing conditional sample sheets...')\n    utils.sample_sheet(G, classes_per_sheet=utils.classes_per_sheet_dict[config['dataset']], \n                         num_classes=config['n_classes'], \n                         samples_per_class=10, parallel=config['parallel'],\n                         samples_root=config['samples_root'], \n                         experiment_name=experiment_name,\n                         folder_number=config['sample_sheet_folder_num'],\n                         z_=z_,)\n  # Sample interp sheets\n  if config['sample_interps']:\n    print('Preparing interp sheets...')\n    for fix_z, fix_y in zip([False, False, True], [False, True, False]):\n      utils.interp_sheet(G, num_per_sheet=16, num_midpoints=8,\n                         num_classes=config['n_classes'], \n                         parallel=config['parallel'], \n                         samples_root=config['samples_root'], \n                         experiment_name=experiment_name,\n                         folder_number=config['sample_sheet_folder_num'], \n                         sheet_number=0,\n                         fix_z=fix_z, fix_y=fix_y, device='cuda')\n  # Sample random sheet\n  if config['sample_random']:\n    print('Preparing random sample sheet...')\n    images, labels = sample()    \n    torchvision.utils.save_image(images.float(),\n                                 '%s/%s/random_samples.jpg' % (config['samples_root'], experiment_name),\n                                 nrow=int(G_batch_size**0.5),\n                                 normalize=True)\n\n  # Get Inception Score and FID\n  get_inception_metrics = inception_utils.prepare_inception_metrics(config['dataset'], config['parallel'], config['no_fid'])\n  # Prepare a simple function get metrics that we use for trunc curves\n  def get_metrics():\n    sample = functools.partial(utils.sample, G=G, z_=z_, y_=y_, config=config)    \n    IS_mean, IS_std, FID = get_inception_metrics(sample, config['num_inception_images'], num_splits=10, prints=False)\n    # Prepare output string\n    outstring = 'Using %s weights ' % ('ema' if config['use_ema'] else 'non-ema')\n    outstring += 'in %s mode, ' % ('eval' if config['G_eval_mode'] else 'training')\n    outstring += 'with noise variance %3.3f, ' % z_.var\n    outstring += 'over %d images, ' % config['num_inception_images']\n    if config['accumulate_stats'] or not 
config['G_eval_mode']:\n      outstring += 'with batch size %d, ' % G_batch_size\n    if config['accumulate_stats']:\n      outstring += 'using %d standing stat accumulations, ' % config['num_standing_accumulations']\n    outstring += 'Itr %d: PYTORCH UNOFFICIAL Inception Score is %3.3f +/- %3.3f, PYTORCH UNOFFICIAL FID is %5.4f' % (state_dict['itr'], IS_mean, IS_std, FID)\n    print(outstring)\n  if config['sample_inception_metrics']:\n    print('Calculating Inception metrics...')\n    get_metrics()\n    \n  # Sample truncation curve stuff. This is basically the same as the inception metrics code\n  if config['sample_trunc_curves']:\n    start, step, end = [float(item) for item in config['sample_trunc_curves'].split('_')]\n    print('Getting truncation values for variance in range (%3.3f:%3.3f:%3.3f)...' % (start, step, end))\n    for var in np.arange(start, end + step, step):\n      z_.var = var\n      # Optionally comment this out if you want to run with standing stats\n      # accumulated at one z variance setting\n      if config['accumulate_stats']:\n        utils.accumulate_standing_stats(G, z_, y_, config['n_classes'],\n                                        config['num_standing_accumulations'])\n      get_metrics()\n\n\ndef main():\n  # parse command line and run\n  parser = utils.prepare_parser()\n  parser = utils.add_sample_parser(parser)\n  config = vars(parser.parse_args())\n  print(config)\n  run(config)\n\nif __name__ == '__main__':\n  main()"
  },
  {
    "path": "FQ-BigGAN/scripts/launch_C10.sh",
    "content": "#!/bin/bash\n#export CUDA_VISIBLE_DEVICES=0,1\npython3 train.py --shuffle --batch_size 64 --parallel \\\n--num_G_accumulations 1 --num_D_accumulations 1 --num_epochs 500 \\\n--num_D_steps 4 --G_lr 2e-4 \\\n--D_lr 2e-4 --dataset C10 --G_ortho 0.0 \\\n--G_attn 0 --D_attn 0 --G_init N02 --D_init N02 \\\n--ema --use_ema --ema_start 1000 \\\n--test_every 1000 --save_every 1000 \\\n--num_best_copies 5 --num_save_copies 2 --seed 0 \\\n--discrete_layer 0123 --commitment 1.0 --dict_size 10 --dict_decay 0.8 \\\n--name_suffix quant\n"
  },
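  {
    "path": "FQ-BigGAN/notes/fq_quantize_sketch.py",
    "content": "''' Illustrative sketch, not part of the original repo.\nA minimal, self-contained sketch of the feature-quantization idea that the\n--discrete_layer / --commitment / --dict_size / --dict_decay flags in the\nlaunch scripts configure: activations are snapped to their nearest entry in a\nlearned dictionary, a straight-through estimator carries gradients past the\nsnap, and a commitment term pulls features toward their assigned codes. The\nrepo's actual implementation is vq_layer.Quantize (which also maintains the\ndictionary by EMA with dict_decay); the names and the 2**dict_size codebook\nsize here are assumptions for illustration. '''\nimport torch\nimport torch.nn.functional as F\n\n\ndef quantize_sketch(feat, codebook, commitment=0.5):\n    # feat: (B, C, H, W) activations; codebook: (K, C) dictionary entries.\n    B, C, H, W = feat.shape\n    flat = feat.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)\n    # Nearest dictionary entry by squared Euclidean distance.\n    dist = (flat.pow(2).sum(1, keepdim=True)\n            - 2 * flat @ codebook.t()\n            + codebook.pow(2).sum(1))  # (B*H*W, K)\n    idx = dist.argmin(1)\n    quant = codebook[idx].reshape(B, H, W, C).permute(0, 3, 1, 2)\n    # Straight-through estimator: the forward pass uses the quantized\n    # features, the backward pass copies gradients to feat unchanged.\n    st_quant = feat + (quant - feat).detach()\n    # Commitment loss keeps the encoder close to its assigned codes.\n    commit_loss = commitment * F.mse_loss(feat, quant.detach())\n    return st_quant, commit_loss, idx\n\n\nif __name__ == '__main__':\n    torch.manual_seed(0)\n    feat = torch.randn(2, 8, 4, 4, requires_grad=True)\n    codebook = torch.randn(2 ** 10, 8)  # dict_size 10 -> 1024 entries (assumed)\n    out, loss, idx = quantize_sketch(feat, codebook)\n    print(out.shape, float(loss), idx.shape)\n"
  },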
  {
    "path": "FQ-BigGAN/scripts/launch_C100.sh",
    "content": "#!/bin/bash\n#export CUDA_VISIBLE_DEVICES=2\npython3 train.py --shuffle --batch_size 64 --parallel \\\n--num_G_accumulations 1 --num_D_accumulations 1 --num_epochs 500 \\\n--num_D_steps 4 --G_lr 2e-4 \\\n--D_lr 2e-4 --dataset C100 --G_ortho 0.0 \\\n--G_attn 0 --D_attn 0 --G_init N02 --D_init N02 \\\n--ema --use_ema --ema_start 1000 \\\n--test_every 2000 --save_every 1000 \\\n--num_best_copies 5 --num_save_copies 2 --seed 0 \\\n--discrete_layer 0123 --commitment 10.0 --dict_size 6 --dict_decay 0.9 \\\n--name_suffix quant\n"
  },
  {
    "path": "FQ-BigGAN/scripts/launch_I128_bs256x4.sh",
    "content": "#!/bin/bash\n# export CUDA_VISIBLE_DEVICES=1,2\npython train.py \\\n--dataset I128_hdf5 --parallel --shuffle  --num_workers 8 --batch_size 256 --load_in_mem  \\\n--num_G_accumulations 4 --num_D_accumulations 4 \\\n--num_D_steps 1 --G_lr 1e-4 --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 \\\n--G_attn 64 --D_attn 64 \\\n--G_nl inplace_relu --D_nl inplace_relu \\\n--SN_eps 1e-6 --BN_eps 1e-5 --adam_eps 1e-6 \\\n--G_ortho 0.0 \\\n--hier --dim_z 120 \\\n--G_eval_mode \\\n--G_ch 64 --D_ch 64 \\\n--ema --use_ema --ema_start 20000 \\\n--test_every 1000 --save_every 1000 --num_best_copies 5 --num_save_copies 2 --seed 0 \\\n--discrete_layer 0123 --commitment 15.0 --dict_size 10 --dict_decay 0.8 \\\n--use_multiepoch_sampler --name_suffix quant"
  },
  {
    "path": "FQ-BigGAN/scripts/launch_I64_bs128x4.sh",
    "content": "#!/bin/bash\nexport CUDA_VISIBLE_DEVICES=1,2\npython train.py \\\n--dataset I64_hdf5 --parallel --shuffle  --num_workers 8 --batch_size 128 --load_in_mem  \\\n--num_G_accumulations 4 --num_D_accumulations 4 \\\n--num_D_steps 1 --G_lr 1e-4 --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 \\\n--G_attn 32 --D_attn 32 \\\n--G_nl inplace_relu --D_nl inplace_relu \\\n--SN_eps 1e-6 --BN_eps 1e-5 --adam_eps 1e-6 \\\n--G_ortho 0.0 \\\n--G_shared \\\n--G_init ortho --D_init ortho \\\n--hier --dim_z 120 --shared_dim 128 \\\n--G_eval_mode \\\n--G_ch 64 --D_ch 64 \\\n--ema --use_ema --ema_start 20000 \\\n--test_every 1000 --save_every 1000 --num_best_copies 5 --num_save_copies 2 --seed 0 \\\n--discrete_layer 2 --commitment 0.5 --dict_size 10 --dict_decay 0.7 \\\n--use_multiepoch_sampler --name_suffix test"
  },
  {
    "path": "FQ-BigGAN/scripts/utils/duplicate.sh",
    "content": "#duplicate.sh\nsource=BigGAN_I128_hdf5_seed0_Gch64_Dch64_bs256_Glr1.0e-04_Dlr4.0e-04_Gnlinplace_relu_Dnlinplace_relu_Ginitxavier_Dinitxavier_Gshared_alex0\ntarget=BigGAN_I128_hdf5_seed0_Gch64_Dch64_bs256_Glr1.0e-04_Dlr4.0e-04_Gnlinplace_relu_Dnlinplace_relu_Ginitxavier_Dinitxavier_Gshared_alex0A\nlogs_root=logs\nweights_root=weights\necho \"copying ${source} to ${target}\"\ncp -r ${logs_root}/${source} ${logs_root}/${target}\ncp ${logs_root}/${source}_log.jsonl ${logs_root}/${target}_log.jsonl\ncp ${weights_root}/${source}_G.pth ${weights_root}/${target}_G.pth\ncp ${weights_root}/${source}_G_ema.pth ${weights_root}/${target}_G_ema.pth\ncp ${weights_root}/${source}_D.pth ${weights_root}/${target}_D.pth\ncp ${weights_root}/${source}_G_optim.pth ${weights_root}/${target}_G_optim.pth\ncp ${weights_root}/${source}_D_optim.pth ${weights_root}/${target}_D_optim.pth\ncp ${weights_root}/${source}_state_dict.pth ${weights_root}/${target}_state_dict.pth"
  },
  {
    "path": "FQ-BigGAN/scripts/utils/prepare_data.sh",
    "content": "#!/bin/bash\n# export CUDA_VISIBLE_DEVICES=3\npython make_hdf5.py --dataset C100 --batch_size 256 --data_root data\npython calculate_inception_moments.py --dataset C100 --data_root data --batch_size 128\n"
  },
  {
    "path": "FQ-BigGAN/scripts/utils/trans.py",
    "content": "filename = 'prepare_data.sh'\nfileCont = open(filename, 'r').read()\nf = open(filename, 'w', newline='\\n')\nf.write(fileCont)\nf.close()"
  },
  {
    "path": "FQ-BigGAN/sync_batchnorm/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : __init__.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n# \n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nfrom .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d\nfrom .replicate import DataParallelWithCallback, patch_replication_callback\n"
  },
  {
    "path": "FQ-BigGAN/sync_batchnorm/batchnorm.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : batchnorm.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport collections\n\nimport torch\nimport torch.nn.functional as F\n\nfrom torch.nn.modules.batchnorm import _BatchNorm\nfrom torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast\n\nfrom .comm import SyncMaster\n\n__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']\n\n\ndef _sum_ft(tensor):\n    \"\"\"sum over the first and last dimention\"\"\"\n    return tensor.sum(dim=0).sum(dim=-1)\n\n\ndef _unsqueeze_ft(tensor):\n    \"\"\"add new dementions at the front and the tail\"\"\"\n    return tensor.unsqueeze(0).unsqueeze(-1)\n\n\n_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])\n_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])\n# _MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'ssum', 'sum_size'])\n\nclass _SynchronizedBatchNorm(_BatchNorm):\n    def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):\n        super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)\n\n        self._sync_master = SyncMaster(self._data_parallel_master)\n\n        self._is_parallel = False\n        self._parallel_id = None\n        self._slave_pipe = None\n\n    def forward(self, input, gain=None, bias=None):\n        # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.\n        if not (self._is_parallel and self.training):\n            out = F.batch_norm(\n                input, self.running_mean, self.running_var, self.weight, self.bias,\n                self.training, self.momentum, self.eps)\n            if gain is not None:\n              out = out + gain\n            if bias is not None:\n              out = out + bias\n            return out\n\n        # Resize the input to (B, C, -1).\n        input_shape = input.size()\n        # print(input_shape)\n        input = input.view(input.size(0), input.size(1), -1)\n\n        # Compute the sum and square-sum.\n        sum_size = input.size(0) * input.size(2)\n        input_sum = _sum_ft(input)\n        input_ssum = _sum_ft(input ** 2)\n        # Reduce-and-broadcast the statistics.\n        # print('it begins')\n        if self._parallel_id == 0:\n            mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))\n        else:\n            mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))\n        # if self._parallel_id == 0:\n            # # print('here')\n            # sum, ssum, num = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))\n        # else:\n            # # print('there')\n            # sum, ssum, num = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))\n        \n        # print('how2')\n        # num = sum_size\n        # print('Sum: %f, ssum: %f, sumsize: %f, insum: %f' %(float(sum.sum().cpu()), float(ssum.sum().cpu()), float(sum_size), float(input_sum.sum().cpu()))) \n        # Fix the graph\n        # sum = (sum.detach() - input_sum.detach()) + input_sum\n        # ssum = (ssum.detach() - input_ssum.detach()) + input_ssum\n        \n        # mean = sum / num\n        # var = ssum / 
num - mean ** 2\n        # # var = (ssum - mean * sum) / num\n        # inv_std = torch.rsqrt(var + self.eps)\n        \n        # Compute the output.\n        if gain is not None:\n          # print('gaining')\n          # scale = _unsqueeze_ft(inv_std) * gain.squeeze(-1)\n          # shift = _unsqueeze_ft(mean) * scale - bias.squeeze(-1)\n          # output = input * scale - shift\n          output = (input - _unsqueeze_ft(mean)) * (_unsqueeze_ft(inv_std) * gain.squeeze(-1)) + bias.squeeze(-1)\n        elif self.affine:\n            # MJY:: Fuse the multiplication for speed.\n            output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)        \n        else:\n            output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)\n\n        # Reshape it.\n        return output.view(input_shape)\n\n    def __data_parallel_replicate__(self, ctx, copy_id):\n        self._is_parallel = True\n        self._parallel_id = copy_id\n\n        # parallel_id == 0 means master device.\n        if self._parallel_id == 0:\n            ctx.sync_master = self._sync_master\n        else:\n            self._slave_pipe = ctx.sync_master.register_slave(copy_id)\n\n    def _data_parallel_master(self, intermediates):\n        \"\"\"Reduce the sum and square-sum, compute the statistics, and broadcast it.\"\"\"\n\n        # Always using same \"device order\" makes the ReduceAdd operation faster.\n        # Thanks to:: Tete Xiao (http://tetexiao.com/)\n        intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())\n\n        to_reduce = [i[1][:2] for i in intermediates]\n        to_reduce = [j for i in to_reduce for j in i]  # flatten\n        target_gpus = [i[1].sum.get_device() for i in intermediates]\n\n        sum_size = sum([i[1].sum_size for i in intermediates])\n        sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)\n        mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)\n\n        broadcasted = Broadcast.apply(target_gpus, mean, inv_std)\n        # print('a')\n        # print(type(sum_), type(ssum), type(sum_size), sum_.shape, ssum.shape, sum_size)\n        # broadcasted = Broadcast.apply(target_gpus, sum_, ssum, torch.tensor(sum_size).float().to(sum_.device))\n        # print('b')\n        outputs = []\n        for i, rec in enumerate(intermediates):\n            outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))\n            # outputs.append((rec[0], _MasterMessage(*broadcasted[i*3:i*3+3])))\n\n        return outputs\n\n    def _compute_mean_std(self, sum_, ssum, size):\n        \"\"\"Compute the mean and standard-deviation with sum and square-sum. This method\n        also maintains the moving average on the master device.\"\"\"\n        assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'\n        mean = sum_ / size\n        sumvar = ssum - sum_ * mean\n        unbias_var = sumvar / (size - 1)\n        bias_var = sumvar / size\n\n        self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data\n        self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data\n        return mean, torch.rsqrt(bias_var + self.eps)\n        # return mean, bias_var.clamp(self.eps) ** -0.5\n\n\nclass SynchronizedBatchNorm1d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a\n    mini-batch.\n\n    .. 
math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm1d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, in the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of size\n            `batch_size x num_features [x width]`\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. Default: ``True``\n\n    Shape:\n        - Input: :math:`(N, C)` or :math:`(N, C, L)`\n        - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm1d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm1d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 2 and input.dim() != 3:\n            raise ValueError('expected 2D or 3D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm1d, self)._check_input_dim(input)\n\n\nclass SynchronizedBatchNorm2d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Batch Normalization over a 4d input that is seen as a mini-batch\n    of 3d inputs\n\n    .. 
math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm2d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, in the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of\n            size batch_size x num_features x height x width\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. Default: ``True``\n\n    Shape:\n        - Input: :math:`(N, C, H, W)`\n        - Output: :math:`(N, C, H, W)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm2d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm2d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 4:\n            raise ValueError('expected 4D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm2d, self)._check_input_dim(input)\n\n\nclass SynchronizedBatchNorm3d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Batch Normalization over a 5d input that is seen as a mini-batch\n    of 4d inputs\n\n    .. 
math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm3d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, in the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm\n    or Spatio-temporal BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of\n            size batch_size x num_features x depth x height x width\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. Default: ``True``\n\n    Shape:\n        - Input: :math:`(N, C, D, H, W)`\n        - Output: :math:`(N, C, D, H, W)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm3d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm3d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 5:\n            raise ValueError('expected 5D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm3d, self)._check_input_dim(input)"
  },
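  {
    "path": "FQ-BigGAN/notes/syncbn_stats_sketch.py",
    "content": "''' Illustrative sketch, not part of the original repo: checks the\narithmetic behind _SynchronizedBatchNorm, where each replica ships only\nper-channel sums and squared sums and the master recovers the global mean and\ninverse std from them, as _compute_mean_std does (biased variance for\nnormalization, unbiased for the running estimate). '''\nimport torch\n\neps = 1e-5\nx = torch.randn(4, 3, 8, 8)              # a pretend (B, C, H, W) batch\nflat = x.transpose(0, 1).reshape(3, -1)  # per-channel view, (C, N)\nsize = flat.shape[1]\n\n# What each replica would send: per-channel sum and sum of squares.\nsum_ = flat.sum(1)\nssum = flat.pow(2).sum(1)\n\n# Master-side reconstruction (mirrors _compute_mean_std).\nmean = sum_ / size\nsumvar = ssum - sum_ * mean              # equals sum((x - mean)**2)\nbias_var = sumvar / size\ninv_std = torch.rsqrt(bias_var + eps)\n\n# Direct computation for comparison.\nassert torch.allclose(mean, flat.mean(1), atol=1e-5)\nassert torch.allclose(bias_var, flat.var(1, unbiased=False), atol=1e-5)\nprint('sum/ssum reconstruction matches the direct statistics')\n"
  },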
  {
    "path": "FQ-BigGAN/sync_batchnorm/batchnorm_reimpl.py",
    "content": "#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\n# File   : batchnorm_reimpl.py\n# Author : acgtyrant\n# Date   : 11/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.init as init\n\n__all__ = ['BatchNormReimpl']\n\n\nclass BatchNorm2dReimpl(nn.Module):\n    \"\"\"\n    A re-implementation of batch normalization, used for testing the numerical\n    stability.\n\n    Author: acgtyrant\n    See also:\n    https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues/14\n    \"\"\"\n    def __init__(self, num_features, eps=1e-5, momentum=0.1):\n        super().__init__()\n\n        self.num_features = num_features\n        self.eps = eps\n        self.momentum = momentum\n        self.weight = nn.Parameter(torch.empty(num_features))\n        self.bias = nn.Parameter(torch.empty(num_features))\n        self.register_buffer('running_mean', torch.zeros(num_features))\n        self.register_buffer('running_var', torch.ones(num_features))\n        self.reset_parameters()\n\n    def reset_running_stats(self):\n        self.running_mean.zero_()\n        self.running_var.fill_(1)\n\n    def reset_parameters(self):\n        self.reset_running_stats()\n        init.uniform_(self.weight)\n        init.zeros_(self.bias)\n\n    def forward(self, input_):\n        batchsize, channels, height, width = input_.size()\n        numel = batchsize * height * width\n        input_ = input_.permute(1, 0, 2, 3).contiguous().view(channels, numel)\n        sum_ = input_.sum(1)\n        sum_of_square = input_.pow(2).sum(1)\n        mean = sum_ / numel\n        sumvar = sum_of_square - sum_ * mean\n\n        self.running_mean = (\n                (1 - self.momentum) * self.running_mean\n                + self.momentum * mean.detach()\n        )\n        unbias_var = sumvar / (numel - 1)\n        self.running_var = (\n                (1 - self.momentum) * self.running_var\n                + self.momentum * unbias_var.detach()\n        )\n\n        bias_var = sumvar / numel\n        inv_std = 1 / (bias_var + self.eps).pow(0.5)\n        output = (\n                (input_ - mean.unsqueeze(1)) * inv_std.unsqueeze(1) *\n                self.weight.unsqueeze(1) + self.bias.unsqueeze(1))\n\n        return output.view(channels, batchsize, height, width).permute(1, 0, 2, 3).contiguous()\n\n"
  },
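  {
    "path": "FQ-BigGAN/notes/bn_reimpl_check.py",
    "content": "''' Illustrative sketch, not part of the original repo: a quick numerical\ncomparison of BatchNorm2dReimpl against torch.nn.BatchNorm2d, in the spirit of\nthe issue that batchnorm_reimpl.py cites. The affine parameters are copied\nacross, so in training mode the two modules should agree up to floating-point\nerror. Run from the repo root so the import resolves. '''\nimport torch\nimport torch.nn as nn\n\nfrom sync_batchnorm.batchnorm_reimpl import BatchNorm2dReimpl\n\ntorch.manual_seed(0)\nref = nn.BatchNorm2d(16)\nreimpl = BatchNorm2dReimpl(16)\n# Copy the affine parameters so both modules compute the same function.\nwith torch.no_grad():\n    reimpl.weight.copy_(ref.weight)\n    reimpl.bias.copy_(ref.bias)\n\nx = torch.randn(8, 16, 4, 4)\nprint('max abs diff:', float((ref(x) - reimpl(x)).abs().max()))  # ~1e-6\n"
  },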
  {
    "path": "FQ-BigGAN/sync_batchnorm/comm.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : comm.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n# \n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport queue\nimport collections\nimport threading\n\n__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']\n\n\nclass FutureResult(object):\n    \"\"\"A thread-safe future implementation. Used only as one-to-one pipe.\"\"\"\n\n    def __init__(self):\n        self._result = None\n        self._lock = threading.Lock()\n        self._cond = threading.Condition(self._lock)\n\n    def put(self, result):\n        with self._lock:\n            assert self._result is None, 'Previous result has\\'t been fetched.'\n            self._result = result\n            self._cond.notify()\n\n    def get(self):\n        with self._lock:\n            if self._result is None:\n                self._cond.wait()\n\n            res = self._result\n            self._result = None\n            return res\n\n\n_MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])\n_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])\n\n\nclass SlavePipe(_SlavePipeBase):\n    \"\"\"Pipe for master-slave communication.\"\"\"\n\n    def run_slave(self, msg):\n        self.queue.put((self.identifier, msg))\n        ret = self.result.get()\n        self.queue.put(True)\n        return ret\n\n\nclass SyncMaster(object):\n    \"\"\"An abstract `SyncMaster` object.\n\n    - During the replication, as the data parallel will trigger an callback of each module, all slave devices should\n    call `register(id)` and obtain an `SlavePipe` to communicate with the master.\n    - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected,\n    and passed to a registered callback.\n    - After receiving the messages, the master device should gather the information and determine to message passed\n    back to each slave devices.\n    \"\"\"\n\n    def __init__(self, master_callback):\n        \"\"\"\n\n        Args:\n            master_callback: a callback to be invoked after having collected messages from slave devices.\n        \"\"\"\n        self._master_callback = master_callback\n        self._queue = queue.Queue()\n        self._registry = collections.OrderedDict()\n        self._activated = False\n\n    def __getstate__(self):\n        return {'master_callback': self._master_callback}\n\n    def __setstate__(self, state):\n        self.__init__(state['master_callback'])\n\n    def register_slave(self, identifier):\n        \"\"\"\n        Register an slave device.\n\n        Args:\n            identifier: an identifier, usually is the device id.\n\n        Returns: a `SlavePipe` object which can be used to communicate with the master device.\n\n        \"\"\"\n        if self._activated:\n            assert self._queue.empty(), 'Queue is not clean before next initialization.'\n            self._activated = False\n            self._registry.clear()\n        future = FutureResult()\n        self._registry[identifier] = _MasterRegistry(future)\n        return SlavePipe(identifier, self._queue, future)\n\n    def run_master(self, master_msg):\n        \"\"\"\n        Main entry for the master device in each forward pass.\n        The messages were first collected from each devices (including the master device), and then\n        an callback will 
be invoked to compute the message to be sent back to each devices\n        (including the master device).\n\n        Args:\n            master_msg: the message that the master want to send to itself. This will be placed as the first\n            message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.\n\n        Returns: the message to be sent back to the master device.\n\n        \"\"\"\n        self._activated = True\n\n        intermediates = [(0, master_msg)]\n        for i in range(self.nr_slaves):\n            intermediates.append(self._queue.get())\n\n        results = self._master_callback(intermediates)\n        assert results[0][0] == 0, 'The first result should belongs to the master.'\n\n        for i, res in results:\n            if i == 0:\n                continue\n            self._registry[i].result.put(res)\n\n        for i in range(self.nr_slaves):\n            assert self._queue.get() is True\n\n        return results[0][1]\n\n    @property\n    def nr_slaves(self):\n        return len(self._registry)\n"
  },
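  {
    "path": "FQ-BigGAN/notes/syncmaster_demo.py",
    "content": "''' Illustrative sketch, not part of the original repo: a minimal\ntwo-participant demonstration of the SyncMaster / SlavePipe protocol from\nsync_batchnorm/comm.py. The slave submits its message from another thread, the\nmaster collects all messages, runs the callback, and each participant receives\nits share of the result. Run from the repo root so the import resolves. '''\nimport threading\n\nfrom sync_batchnorm.comm import SyncMaster\n\n\ndef callback(intermediates):\n    # intermediates is a list of (id, message) pairs; id 0 is the master.\n    total = sum(msg for _, msg in intermediates)\n    # Return one (id, result) pair per participant, master first.\n    return [(i, total) for i, _ in intermediates]\n\n\nmaster = SyncMaster(callback)\npipe = master.register_slave(1)  # one slave, as replicate() would register\n\nresults = {}\nslave = threading.Thread(target=lambda: results.update(slave=pipe.run_slave(2)))\nslave.start()\nresults['master'] = master.run_master(1)  # blocks until the slave reports\nslave.join()\nprint(results)  # both participants see the reduced value 3\n"
  },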
  {
    "path": "FQ-BigGAN/sync_batchnorm/replicate.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : replicate.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n# \n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport functools\n\nfrom torch.nn.parallel.data_parallel import DataParallel\n\n__all__ = [\n    'CallbackContext',\n    'execute_replication_callbacks',\n    'DataParallelWithCallback',\n    'patch_replication_callback'\n]\n\n\nclass CallbackContext(object):\n    pass\n\n\ndef execute_replication_callbacks(modules):\n    \"\"\"\n    Execute an replication callback `__data_parallel_replicate__` on each module created by original replication.\n\n    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`\n\n    Note that, as all modules are isomorphism, we assign each sub-module with a context\n    (shared among multiple copies of this module on different devices).\n    Through this context, different copies can share some information.\n\n    We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback\n    of any slave copies.\n    \"\"\"\n    master_copy = modules[0]\n    nr_modules = len(list(master_copy.modules()))\n    ctxs = [CallbackContext() for _ in range(nr_modules)]\n\n    for i, module in enumerate(modules):\n        for j, m in enumerate(module.modules()):\n            if hasattr(m, '__data_parallel_replicate__'):\n                m.__data_parallel_replicate__(ctxs[j], i)\n\n\nclass DataParallelWithCallback(DataParallel):\n    \"\"\"\n    Data Parallel with a replication callback.\n\n    An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by\n    original `replicate` function.\n    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`\n\n    Examples:\n        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])\n        # sync_bn.__data_parallel_replicate__ will be invoked.\n    \"\"\"\n\n    def replicate(self, module, device_ids):\n        modules = super(DataParallelWithCallback, self).replicate(module, device_ids)\n        execute_replication_callbacks(modules)\n        return modules\n\n\ndef patch_replication_callback(data_parallel):\n    \"\"\"\n    Monkey-patch an existing `DataParallel` object. Add the replication callback.\n    Useful when you have customized `DataParallel` implementation.\n\n    Examples:\n        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])\n        > patch_replication_callback(sync_bn)\n        # this is equivalent to\n        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])\n    \"\"\"\n\n    assert isinstance(data_parallel, DataParallel)\n\n    old_replicate = data_parallel.replicate\n\n    @functools.wraps(old_replicate)\n    def new_replicate(module, device_ids):\n        modules = old_replicate(module, device_ids)\n        execute_replication_callbacks(modules)\n        return modules\n\n    data_parallel.replicate = new_replicate\n"
  },
  {
    "path": "FQ-BigGAN/sync_batchnorm/unittest.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : unittest.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport unittest\nimport torch\n\n\nclass TorchTestCase(unittest.TestCase):\n    def assertTensorClose(self, x, y):\n        adiff = float((x - y).abs().max())\n        if (y == 0).all():\n            rdiff = 'NaN'\n        else:\n            rdiff = float((adiff / y).abs().max())\n\n        message = (\n            'Tensor close check failed\\n'\n            'adiff={}\\n'\n            'rdiff={}\\n'\n        ).format(adiff, rdiff)\n        self.assertTrue(torch.allclose(x, y), message)\n\n"
  },
  {
    "path": "FQ-BigGAN/train.py",
    "content": "\"\"\" BigGAN: The Authorized Unofficial PyTorch release\n    Code by A. Brock and A. Andonian\n    This code is an unofficial reimplementation of\n    \"Large-Scale GAN Training for High Fidelity Natural Image Synthesis,\"\n    by A. Brock, J. Donahue, and K. Simonyan (arXiv 1809.11096).\n\n    Let's go.\n\"\"\"\n\nimport os\nimport functools\nimport math\nimport numpy as np\nfrom tqdm import tqdm, trange\n\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.nn import Parameter as P\nimport torchvision\n\n# Import my stuff\nimport inception_utils\nimport utils\nimport losses\nimport train_fns\nfrom sync_batchnorm import patch_replication_callback\n\n# The main training file. Config is a dictionary specifying the configuration\n# of this training run.\ndef run(config):\n\n  # Update the config dict as necessary\n  # This is for convenience, to add settings derived from the user-specified\n  # configuration into the config-dict (e.g. inferring the number of classes\n  # and size of the images from the dataset, passing in a pytorch object\n  # for the activation specified as a string)\n  config['resolution'] = utils.imsize_dict[config['dataset']]\n  config['n_classes'] = utils.nclass_dict[config['dataset']]\n  config['G_activation'] = utils.activation_dict[config['G_nl']]\n  config['D_activation'] = utils.activation_dict[config['D_nl']]\n  # By default, skip init if resuming training.\n  if config['resume']:\n    print('Skipping initialization for training resumption...')\n    config['skip_init'] = True\n  config = utils.update_config_roots(config)\n  device = 'cuda'\n  \n  # Seed RNG\n  utils.seed_rng(config['seed'])\n\n  # Prepare root folders if necessary\n  utils.prepare_root(config)\n\n  # Setup cudnn.benchmark for free speed\n  torch.backends.cudnn.benchmark = True\n\n  # Import the model--this line allows us to dynamically select different files.\n  model = __import__(config['model'])\n  experiment_name = (config['experiment_name'] if config['experiment_name']\n                       else utils.name_from_config(config))\n  print('Experiment name is %s' % experiment_name)\n\n  # Next, build the model\n  G = model.Generator(**config).to(device)\n  D = model.Discriminator(**config).to(device)\n  \n   # If using EMA, prepare it\n  if config['ema']:\n    print('Preparing EMA for G with decay of {}'.format(config['ema_decay']))\n    G_ema = model.Generator(**{**config, 'skip_init':True, \n                               'no_optim': True}).to(device)\n    ema = utils.ema(G, G_ema, config['ema_decay'], config['ema_start'])\n  else:\n    G_ema, ema = None, None\n  \n  # FP16?\n  if config['G_fp16']:\n    print('Casting G to float16...')\n    G = G.half()\n    if config['ema']:\n      G_ema = G_ema.half()\n  if config['D_fp16']:\n    print('Casting D to fp16...')\n    D = D.half()\n    # Consider automatically reducing SN_eps?\n  GD = model.G_D(G, D)\n  print(G)\n  print(D)\n  print('Number of params in G: {} D: {}'.format(\n    *[sum([p.data.nelement() for p in net.parameters()]) for net in [G,D]]))\n  # Prepare state dict, which holds things like epoch # and itr #\n  state_dict = {'itr': 0, 'epoch': 0, 'save_num': 0, 'save_best_num': 0,\n                'best_IS': 0, 'best_FID': 999999, 'config': config}\n\n  # If loading from a pre-trained model, load weights\n  if config['resume']:\n    print('Loading weights...')\n    utils.load_weights(G, D, state_dict,\n                       
config['weights_root'], experiment_name, \n                       config['load_weights'] if config['load_weights'] else None,\n                       G_ema if config['ema'] else None)\n\n  # If parallel, parallelize the GD module\n  if config['parallel']:\n    GD = nn.DataParallel(GD)\n    if config['cross_replica']:\n      patch_replication_callback(GD)\n\n  # Prepare loggers for stats; metrics holds test metrics,\n  # lmetrics holds any desired training metrics.\n  test_metrics_fname = '%s/%s_log.jsonl' % (config['logs_root'],\n                                            experiment_name)\n  train_metrics_fname = '%s/%s' % (config['logs_root'], experiment_name)\n  print('Inception Metrics will be saved to {}'.format(test_metrics_fname))\n  test_log = utils.MetricsLogger(test_metrics_fname, \n                                 reinitialize=(not config['resume']))\n  print('Training Metrics will be saved to {}'.format(train_metrics_fname))\n  train_log = utils.MyLogger(train_metrics_fname, \n                             reinitialize=(not config['resume']),\n                             logstyle=config['logstyle'])\n  # Write metadata\n  utils.write_metadata(config['logs_root'], experiment_name, config, state_dict)\n  # Prepare data; the Discriminator's batch size is all that needs to be passed\n  # to the dataloader, as G doesn't require dataloading.\n  # Note that at every loader iteration we pass in enough data to complete\n  # a full D iteration (regardless of number of D steps and accumulations)\n  D_batch_size = (config['batch_size'] * config['num_D_steps']\n                  * config['num_D_accumulations'])\n  loaders = utils.get_data_loaders(**{**config, 'batch_size': D_batch_size,\n                                      'start_itr': state_dict['itr']})\n\n  # Prepare inception metrics: FID and IS\n  get_inception_metrics = inception_utils.prepare_inception_metrics(config['dataset'], config['parallel'], config['no_fid'])\n\n  # Prepare noise and randomly sampled label arrays\n  # Allow for different batch sizes in G\n  G_batch_size = max(config['G_batch_size'], config['batch_size'])\n  z_, y_ = utils.prepare_z_y(G_batch_size, G.dim_z, config['n_classes'],\n                             device=device, fp16=config['G_fp16'])\n  # Prepare a fixed z & y to see individual sample evolution throughout training\n  fixed_z, fixed_y = utils.prepare_z_y(G_batch_size, G.dim_z,\n                                       config['n_classes'], device=device,\n                                       fp16=config['G_fp16'])\n  fixed_z.sample_()\n  fixed_y.sample_()\n  # Loaders are loaded, prepare the training function\n  if config['which_train_fn'] == 'GAN':\n    train = train_fns.GAN_training_function(G, D, GD, z_, y_, \n                                            ema, state_dict, config)\n  # Else, assume debugging and use the dummy train fn\n  else:\n    train = train_fns.dummy_training_function()\n  # Prepare Sample function for use with inception metrics\n  sample = functools.partial(utils.sample,\n                              G=(G_ema if config['ema'] and config['use_ema']\n                                 else G),\n                              z_=z_, y_=y_, config=config)\n\n  print('Beginning training at epoch %d...' % state_dict['epoch'])\n  # Train for specified number of epochs, although we mostly track G iterations.\n  for epoch in range(state_dict['epoch'], config['num_epochs']):    \n    # Which progressbar to use? 
TQDM or my own?\n    if config['pbar'] == 'mine':\n      pbar = utils.progress(loaders[0], displaytype='s1k' if config['use_multiepoch_sampler'] else 'eta')\n    else:\n      pbar = tqdm(loaders[0])\n    for i, (x, y) in enumerate(pbar):\n      # Increment the iteration counter\n      state_dict['itr'] += 1\n      # Make sure G and D are in training mode, just in case they got set to eval\n      # For D, which typically doesn't have BN, this shouldn't matter much.\n      G.train()\n      D.train()\n      if config['ema']:\n        G_ema.train()\n      if config['D_fp16']:\n        x, y = x.to(device).half(), y.to(device)\n      else:\n        x, y = x.to(device), y.to(device)\n      metrics = train(x, y)\n      train_log.log(itr=int(state_dict['itr']), **metrics)\n      \n      # Every sv_log_interval, log singular values\n      if (config['sv_log_interval'] > 0) and (not (state_dict['itr'] % config['sv_log_interval'])):\n        train_log.log(itr=int(state_dict['itr']), \n                      **{**utils.get_SVs(G, 'G'), **utils.get_SVs(D, 'D')})\n\n      # If using my progbar, print metrics.\n      if config['pbar'] == 'mine':\n          print(', '.join(['itr: %d' % state_dict['itr']] \n                           + ['%s : %+4.3f' % (key, metrics[key])\n                           for key in metrics]), end=' ')\n\n      # Save weights and copies as configured at specified interval\n      if not (state_dict['itr'] % config['save_every']):\n        if config['G_eval_mode']:\n          print('Switching G to eval mode...')\n          G.eval()\n          if config['ema']:\n            G_ema.eval()\n        train_fns.save_and_sample(G, D, G_ema, z_, y_, fixed_z, fixed_y, \n                                  state_dict, config, experiment_name)\n\n      # Test every specified interval\n      if not (state_dict['itr'] % config['test_every']):\n        if config['G_eval_mode']:\n          print('Switching G to eval mode...')\n          G.eval()\n        train_fns.test(G, D, G_ema, z_, y_, state_dict, config, sample,\n                       get_inception_metrics, experiment_name, test_log)\n    # Increment epoch counter at end of epoch\n    state_dict['epoch'] += 1\n\n\ndef main():\n  # parse command line and run\n  parser = utils.prepare_parser()\n  config = vars(parser.parse_args())\n  print(config)\n  run(config)\n\nif __name__ == '__main__':\n  main()"
  },
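  {
    "path": "FQ-BigGAN/notes/batch_size_arithmetic.py",
    "content": "''' Illustrative note, not part of the original repo: worked arithmetic for\nthe dataloader sizing in train.py. Each loader iteration must feed a full\ndiscriminator cycle, so the loader batch is\nbatch_size * num_D_steps * num_D_accumulations, while G's effective batch per\nupdate is batch_size * num_G_accumulations. The values below mirror\nscripts/launch_I128_bs256x4.sh. '''\nbatch_size = 256\nnum_D_steps, num_D_accumulations, num_G_accumulations = 1, 4, 4\n\nD_batch_size = batch_size * num_D_steps * num_D_accumulations  # 1024 per loader iteration\nG_effective = batch_size * num_G_accumulations                 # 1024 per G update\nprint(D_batch_size, G_effective)\n"
  },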
  {
    "path": "FQ-BigGAN/train_fns.py",
    "content": "''' train_fns.py\nFunctions for the main loop of training different conditional image models\n'''\nimport torch\nimport torch.nn as nn\nimport torchvision\nimport os\n\nimport utils\nimport losses\n\n\n# Dummy training function for debugging\ndef dummy_training_function():\n  def train(x, y):\n    return {}\n  return train\n\ndef GAN_training_function(G, D, GD, z_, y_, ema, state_dict, config):\n  def train(x, y):\n    G.optim.zero_grad()\n    D.optim.zero_grad()\n    # How many chunks to split x and y into?\n    x = torch.split(x, config['batch_size'])\n    y = torch.split(y, config['batch_size'])\n    # print('chunks', len(x), len(y))\n    counter = 0\n    \n    # Optionally toggle D and G's \"require_grad\"\n    if config['toggle_grads']:\n      utils.toggle_grad(D, True)\n      utils.toggle_grad(G, False)\n\n\n    for step_index in range(config['num_D_steps']):\n      # If accumulating gradients, loop multiple times before an optimizer step\n      D.optim.zero_grad()\n      for accumulation_index in range(config['num_D_accumulations']):\n        z_.sample_()\n        y_.sample_()\n\n        D_fake, D_real, quant_loss_real, quant_loss_fake, ppl = GD(z_[:config['batch_size']],\n                                                               y_[:config[\n            'batch_size']],\n                            x[counter], y[counter], train_G=False, split_D=config['split_D'])\n         \n        # Compute components of D's loss, average them, and divide by \n        # the number of gradient accumulations\n        D_loss_real, D_loss_fake = losses.discriminator_loss(D_fake, D_real)\n        D_loss_real += quant_loss_real.mean()\n        D_loss_fake += quant_loss_fake.mean()\n        D_loss = (D_loss_real + D_loss_fake) / float(config['num_D_accumulations'])\n        D_loss.backward()\n        counter += 1\n\n      # Optionally apply ortho reg in D\n      if config['D_ortho'] > 0.0:\n        # Debug print to indicate we're using ortho reg in D.\n        print('using modified ortho reg in D')\n        utils.ortho(D, config['D_ortho'])\n      \n      D.optim.step()\n    \n    # Optionally toggle \"requires_grad\"\n    if config['toggle_grads']:\n      utils.toggle_grad(D, False)\n      utils.toggle_grad(G, True)\n      \n    # Zero G's gradients by default before training G, for safety\n    G.optim.zero_grad()\n    \n    # If accumulating gradients, loop multiple times\n    for accumulation_index in range(config['num_G_accumulations']):    \n      z_.sample_()\n      y_.sample_()\n      D_fake, quant_loss_G = GD(z_, y_, train_G=True, split_D=config['split_D'])\n      G_loss = (losses.generator_loss(D_fake) + quant_loss_G.mean()) / float(config['num_G_accumulations'])\n      G_loss.backward()\n    \n    # Optionally apply modified ortho reg in G\n    if config['G_ortho'] > 0.0:\n      print('using modified ortho reg in G') # Debug print to indicate we're using ortho reg in G\n      # Don't ortho reg shared, it makes no sense. 
Really we should blacklist any embeddings for this\n      utils.ortho(G, config['G_ortho'], \n                  blacklist=[param for param in G.shared.parameters()])\n    G.optim.step()\n    \n    # If we have an ema, update it, regardless of if we test with it or not\n    if config['ema']:\n      ema.update(state_dict['itr'])\n    \n    out = {'G_loss': float(G_loss.item()), \n            'D_loss_real': float(D_loss_real.item()),\n            'D_loss_fake': float(D_loss_fake.item()),\n           'Quant_loss': float(quant_loss_G.mean().item()),\n           'Perplexity': float(ppl.mean().item())\n           }\n    # Return G's loss and the components of D's loss.\n    return out\n  return train\n\n''' This function takes in the model, saves the weights (multiple copies if \n    requested), and prepares sample sheets: one consisting of samples given\n    a fixed noise seed (to show how the model evolves throughout training),\n    a set of full conditional sample sheets, and a set of interp sheets. '''\ndef save_and_sample(G, D, G_ema, z_, y_, fixed_z, fixed_y, \n                    state_dict, config, experiment_name):\n  utils.save_weights(G, D, state_dict, config['weights_root'],\n                     experiment_name, None, G_ema if config['ema'] else None)\n  # Save an additional copy to mitigate accidental corruption if process\n  # is killed during a save (it's happened to me before -.-)\n  if config['num_save_copies'] > 0:\n    utils.save_weights(G, D, state_dict, config['weights_root'],\n                       experiment_name,\n                       'copy%d' %  state_dict['save_num'],\n                       G_ema if config['ema'] else None)\n    state_dict['save_num'] = (state_dict['save_num'] + 1 ) % config['num_save_copies']\n    \n  # Use EMA G for samples or non-EMA?\n  which_G = G_ema if config['ema'] and config['use_ema'] else G\n  \n  # Accumulate standing statistics?\n  if config['accumulate_stats']:\n    utils.accumulate_standing_stats(G_ema if config['ema'] and config['use_ema'] else G,\n                           z_, y_, config['n_classes'],\n                           config['num_standing_accumulations'])\n  \n  # Save a random sample sheet with fixed z and y      \n  with torch.no_grad():\n    if config['parallel']:\n      fixed_Gz =  nn.parallel.data_parallel(which_G, (fixed_z, which_G.shared(fixed_y)))\n    else:\n      fixed_Gz = which_G(fixed_z, which_G.shared(fixed_y))\n  if not os.path.isdir('%s/%s' % (config['samples_root'], experiment_name)):\n    os.mkdir('%s/%s' % (config['samples_root'], experiment_name))\n  image_filename = '%s/%s/fixed_samples%d.jpg' % (config['samples_root'], \n                                                  experiment_name,\n                                                  state_dict['itr'])\n  torchvision.utils.save_image(fixed_Gz.float().cpu(), image_filename,\n                             nrow=int(fixed_Gz.shape[0] **0.5), normalize=True)\n  # For now, every time we save, also save sample sheets\n  utils.sample_sheet(which_G,\n                     classes_per_sheet=utils.classes_per_sheet_dict[config['dataset']],\n                     num_classes=config['n_classes'],\n                     samples_per_class=10, parallel=config['parallel'],\n                     samples_root=config['samples_root'],\n                     experiment_name=experiment_name,\n                     folder_number=state_dict['itr'],\n                     z_=z_)\n  # Also save interp sheets\n  for fix_z, fix_y in zip([False, False, True], [False, True, 
False]):\n    utils.interp_sheet(which_G,\n                       num_per_sheet=16,\n                       num_midpoints=8,\n                       num_classes=config['n_classes'],\n                       parallel=config['parallel'],\n                       samples_root=config['samples_root'],\n                       experiment_name=experiment_name,\n                       folder_number=state_dict['itr'],\n                       sheet_number=0,\n                       fix_z=fix_z, fix_y=fix_y, device='cuda')\n\n\n''' This function runs the inception metrics code, checks if the results\n    are an improvement over the previous best (either in IS or FID, \n    user-specified), logs the results, and saves a best_ copy if it's an \n    improvement. '''\ndef test(G, D, G_ema, z_, y_, state_dict, config, sample, get_inception_metrics,\n         experiment_name, test_log):\n  print('Gathering inception metrics...')\n  if config['accumulate_stats']:\n    utils.accumulate_standing_stats(G_ema if config['ema'] and config['use_ema'] else G,\n                           z_, y_, config['n_classes'],\n                           config['num_standing_accumulations'])\n  IS_mean, IS_std, FID = get_inception_metrics(sample, \n                                               config['num_inception_images'],\n                                               num_splits=10)\n  print('Itr %d: PYTORCH UNOFFICIAL Inception Score is %3.3f +/- %3.3f, PYTORCH UNOFFICIAL FID is %5.4f' % (state_dict['itr'], IS_mean, IS_std, FID))\n  # If improved over previous best metric, save appropriate copy\n  if ((config['which_best'] == 'IS' and IS_mean > state_dict['best_IS'])\n    or (config['which_best'] == 'FID' and FID < state_dict['best_FID'])):\n    print('%s improved over previous best, saving checkpoint...' % config['which_best'])\n    utils.save_weights(G, D, state_dict, config['weights_root'],\n                       experiment_name, 'best%d' % state_dict['save_best_num'],\n                       G_ema if config['ema'] else None)\n    state_dict['save_best_num'] = (state_dict['save_best_num'] + 1) % config['num_best_copies']\n  state_dict['best_IS'] = max(state_dict['best_IS'], IS_mean)\n  state_dict['best_FID'] = min(state_dict['best_FID'], FID)\n  # Log results to file\n  test_log.log(itr=int(state_dict['itr']), IS_mean=float(IS_mean),\n               IS_std=float(IS_std), FID=float(FID))"
  },
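  {
    "path": "FQ-BigGAN/notes/ema_sketch.py",
    "content": "''' Illustrative sketch, not part of the original repo: the exponential\nmoving average that train_fns.py applies to G after every generator step via\nema.update(state_dict['itr']). The shadow copy G_ema tracks G with decay\nema_decay, and before ema_start the weights are copied verbatim so early,\nnoisy iterates are not averaged in. The repo's real implementation is\nutils.ema; the names below are assumptions for illustration. '''\nimport torch\n\n\ndef ema_update(source, target, decay=0.9999, itr=0, start=2000):\n    # Before `start`, copy weights verbatim (decay 0); afterwards, blend.\n    d = decay if itr >= start else 0.0\n    with torch.no_grad():\n        for p_t, p_s in zip(target.parameters(), source.parameters()):\n            p_t.copy_(d * p_t + (1.0 - d) * p_s)\n\n\nif __name__ == '__main__':\n    g, g_ema = torch.nn.Linear(4, 4), torch.nn.Linear(4, 4)\n    ema_update(g, g_ema, itr=0)  # still in the verbatim-copy phase\n    assert torch.allclose(g.weight, g_ema.weight)\n"
  },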
  {
    "path": "FQ-BigGAN/utility/extract_imagenet.py",
    "content": "import os\nimport sys\nimport shutil\n\ndef create_imagenet_ext(src_path, tgt_path, num_class=20):\n\tif not os.path.exists(tgt_path):\n\t\tos.mkdir(tgt_path)\n\telse:\n\t\tshutil.rmtree(tgt_path)\n\n\tfor i, img_dir in enumerate(os.listdir(src_path)):\n\t\tshutil.copytree(os.path.join(src_path, img_dir), os.path.join(tgt_path, img_dir))\n\t\tif i == num_class-1:\n\t\t\tbreak\n\nsrc_path = '/media/cchen/StorageDisk/imagenet/raw-data/train'\ntgt_path = '/media/cchen/StorageDisk/yzhao/GAN/BigGAN-PyTorch/data/Ext'\ncreate_imagenet_ext(src_path, tgt_path)"
  },
  {
    "path": "FQ-BigGAN/utility/untar.py",
    "content": "import tarfile\n\nsrc_path = '/media/cchen/StorageDisk/yzhao/datasets/images/ImageNet/'\nfor fname in src_path:\n\tif (fname.endswith(\"tar.gz\")):\n\t\ttar = tarfile.open(fname, \"r:gz\")\n\t\ttar.extractall()\n\t\ttar.close()\n\telif (fname.endswith(\"tar\")):\n\t\ttar = tarfile.open(fname, \"r:\")\n\t\ttar.extractall()\n\t\ttar.close()"
  },
  {
    "path": "FQ-BigGAN/utils.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n''' Utilities file\nThis file contains utility functions for bookkeeping, logging, and data loading.\nMethods which directly affect training should either go in layers, the model,\nor train_fns.py.\n'''\n\nfrom __future__ import print_function\nimport sys\nimport os\nimport numpy as np\nimport time\nimport datetime\nimport json\nimport pickle\nfrom argparse import ArgumentParser\nimport animal_hash\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nimport torchvision.transforms as transforms\nfrom torch.utils.data import DataLoader\n\nimport datasets as dset\n\ndef prepare_parser():\n  usage = 'Parser for all scripts.'\n  parser = ArgumentParser(description=usage)\n  \n  ### Dataset/Dataloader stuff ###\n  parser.add_argument(\n    '--dataset', type=str, default='I128_hdf5',\n    help='Which Dataset to train on, out of I128, I256, C10, C100;'\n         'Append \"_hdf5\" to use the hdf5 version for ISLVRC '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--augment', action='store_true', default=False,\n    help='Augment with random crops and flips (default: %(default)s)')\n  parser.add_argument(\n    '--num_workers', type=int, default=8,\n    help='Number of dataloader workers; consider using less for HDF5 '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--no_pin_memory', action='store_false', dest='pin_memory', default=True,\n    help='Pin data into memory through dataloader? (default: %(default)s)') \n  parser.add_argument(\n    '--shuffle', action='store_true', default=False,\n    help='Shuffle the data (strongly recommended)? (default: %(default)s)')\n  parser.add_argument(\n    '--load_in_mem', action='store_true', default=False,\n    help='Load all data into memory? (default: %(default)s)')\n  parser.add_argument(\n    '--use_multiepoch_sampler', action='store_true', default=False,\n    help='Use the multi-epoch sampler for dataloader? (default: %(default)s)')\n  \n  ### Quantization layer stuff\n  parser.add_argument(\n    '--dict_decay', type=float, default=0.8,\n    help='discrete dict learning decay')\n  parser.add_argument(\n    '--commitment', type=float, default=0.5,\n    help='regularizer coefficient')\n  parser.add_argument(\n    '--discrete_layer', type=str, default='2',\n    help='which layer to add the discretization')\n  parser.add_argument(\n    '--dict_size', type=int, default=10,\n    help='number of keys in dict')\n\n  ### Model stuff ###\n  parser.add_argument(\n    '--model', type=str, default='BigGAN',\n    help='Name of the model module (default: %(default)s)')\n  parser.add_argument(\n    '--G_param', type=str, default='SN',\n    help='Parameterization style to use for G, spectral norm (SN) or SVD (SVD)'\n          ' or None (default: %(default)s)')\n  parser.add_argument(\n    '--D_param', type=str, default='SN',\n    help='Parameterization style to use for D, spectral norm (SN) or SVD (SVD)'\n         ' or None (default: %(default)s)')    \n  parser.add_argument(\n    '--G_ch', type=int, default=64,\n    help='Channel multiplier for G (default: %(default)s)')\n  parser.add_argument(\n    '--D_ch', type=int, default=64,\n    help='Channel multiplier for D (default: %(default)s)')\n  parser.add_argument(\n    '--G_depth', type=int, default=1,\n    help='Number of resblocks per stage in G? (default: %(default)s)')\n  parser.add_argument(\n    '--D_depth', type=int, default=1,\n    help='Number of resblocks per stage in D? 
(default: %(default)s)')\n  parser.add_argument(\n    '--D_thin', action='store_false', dest='D_wide', default=True,\n    help='Use the SN-GAN channel pattern for D? (default: %(default)s)')\n  parser.add_argument(\n    '--G_shared', action='store_true', default=False,\n    help='Use shared embeddings in G? (default: %(default)s)')\n  parser.add_argument(\n    '--shared_dim', type=int, default=0,\n    help='G\'s shared embedding dimensionality; if 0, will be equal to dim_z. '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--dim_z', type=int, default=128,\n    help='Noise dimensionality (default: %(default)s)')\n  parser.add_argument(\n    '--z_var', type=float, default=1.0,\n    help='Noise variance (default: %(default)s)')    \n  parser.add_argument(\n    '--hier', action='store_true', default=False,\n    help='Use hierarchical z in G? (default: %(default)s)')\n  parser.add_argument(\n    '--cross_replica', action='store_true', default=False,\n    help='Cross_replica batchnorm in G? (default: %(default)s)')\n  parser.add_argument(\n    '--mybn', action='store_true', default=False,\n    help='Use my batchnorm, which supports standing stats? (default: %(default)s)')\n  parser.add_argument(\n    '--G_nl', type=str, default='relu',\n    help='Activation function for G (default: %(default)s)')\n  parser.add_argument(\n    '--D_nl', type=str, default='relu',\n    help='Activation function for D (default: %(default)s)')\n  parser.add_argument(\n    '--G_attn', type=str, default='64',\n    help='What resolutions to use attention on for G (underscore separated) '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--D_attn', type=str, default='64',\n    help='What resolutions to use attention on for D (underscore separated) '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--norm_style', type=str, default='bn',\n    help='Normalizer style for G, one of bn [batchnorm], in [instancenorm], '\n         'ln [layernorm], gn [groupnorm] (default: %(default)s)')\n         \n  ### Model init stuff ###\n  parser.add_argument(\n    '--seed', type=int, default=0,\n    help='Random seed to use; affects both initialization and'\n         ' dataloading. 
(default: %(default)s)')\n  parser.add_argument(\n    '--G_init', type=str, default='ortho',\n    help='Init style to use for G (default: %(default)s)')\n  parser.add_argument(\n    '--D_init', type=str, default='ortho',\n    help='Init style to use for D (default: %(default)s)')\n  parser.add_argument(\n    '--skip_init', action='store_true', default=False,\n    help='Skip initialization, ideal for testing when ortho init was used '\n          '(default: %(default)s)')\n  \n  ### Optimizer stuff ###\n  parser.add_argument(\n    '--G_lr', type=float, default=5e-5,\n    help='Learning rate to use for Generator (default: %(default)s)')\n  parser.add_argument(\n    '--D_lr', type=float, default=2e-4,\n    help='Learning rate to use for Discriminator (default: %(default)s)')\n  parser.add_argument(\n    '--G_B1', type=float, default=0.0,\n    help='Beta1 to use for Generator (default: %(default)s)')\n  parser.add_argument(\n    '--D_B1', type=float, default=0.0,\n    help='Beta1 to use for Discriminator (default: %(default)s)')\n  parser.add_argument(\n    '--G_B2', type=float, default=0.999,\n    help='Beta2 to use for Generator (default: %(default)s)')\n  parser.add_argument(\n    '--D_B2', type=float, default=0.999,\n    help='Beta2 to use for Discriminator (default: %(default)s)')\n    \n  ### Batch size, parallel, and precision stuff ###\n  parser.add_argument(\n    '--batch_size', type=int, default=64,\n    help='Default overall batchsize (default: %(default)s)')\n  parser.add_argument(\n    '--G_batch_size', type=int, default=0,\n    help='Batch size to use for G; if 0, same as D (default: %(default)s)')\n  parser.add_argument(\n    '--num_G_accumulations', type=int, default=1,\n    help=\"Number of passes to accumulate G's gradients over \"\n         '(default: %(default)s)')  \n  parser.add_argument(\n    '--num_D_steps', type=int, default=2,\n    help='Number of D steps per G step (default: %(default)s)')\n  parser.add_argument(\n    '--num_D_accumulations', type=int, default=1,\n    help=\"Number of passes to accumulate D's gradients over \"\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--split_D', action='store_true', default=False,\n    help='Run D twice rather than concatenating inputs? (default: %(default)s)')\n  parser.add_argument(\n    '--num_epochs', type=int, default=100,\n    help='Number of epochs to train for (default: %(default)s)')\n  parser.add_argument(\n    '--parallel', action='store_true', default=False,\n    help='Train with multiple GPUs (default: %(default)s)')\n  parser.add_argument(\n    '--G_fp16', action='store_true', default=False,\n    help='Train with half-precision in G? (default: %(default)s)')\n  parser.add_argument(\n    '--D_fp16', action='store_true', default=False,\n    help='Train with half-precision in D? (default: %(default)s)')\n  parser.add_argument(\n    '--D_mixed_precision', action='store_true', default=False,\n    help='Train with half-precision activations but fp32 params in D? '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--G_mixed_precision', action='store_true', default=False,\n    help='Train with half-precision activations but fp32 params in G? '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--accumulate_stats', action='store_true', default=False,\n    help='Accumulate \"standing\" batchnorm stats? 
(default: %(default)s)')\n  parser.add_argument(\n    '--num_standing_accumulations', type=int, default=16,\n    help='Number of forward passes to use in accumulating standing stats? '\n         '(default: %(default)s)')        \n    \n  ### Bookkeeping stuff ###  \n  parser.add_argument(\n    '--G_eval_mode', action='store_true', default=False,\n    help='Run G in eval mode (running/standing stats?) at sample/test time? '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--save_every', type=int, default=2000,\n    help='Save every X iterations (default: %(default)s)')\n  parser.add_argument(\n    '--num_save_copies', type=int, default=2,\n    help='How many copies to save (default: %(default)s)')\n  parser.add_argument(\n    '--num_best_copies', type=int, default=2,\n    help='How many previous best checkpoints to save (default: %(default)s)')\n  parser.add_argument(\n    '--which_best', type=str, default='FID',\n    help='Which metric to use to determine when to save new \"best\" '\n         'checkpoints, one of IS or FID (default: %(default)s)')\n  parser.add_argument(\n    '--no_fid', action='store_true', default=False,\n    help='Calculate IS only, not FID? (default: %(default)s)')\n  parser.add_argument(\n    '--test_every', type=int, default=5000,\n    help='Test every X iterations (default: %(default)s)')\n  parser.add_argument(\n    '--num_inception_images', type=int, default=50000,\n    help='Number of samples to compute inception metrics with '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--hashname', action='store_true', default=False,\n    help='Use a hash of the experiment name instead of the full config '\n         '(default: %(default)s)') \n  parser.add_argument(\n    '--base_root', type=str, default='',\n    help='Default location to store all weights, samples, data, and logs '\n           '(default: %(default)s)')\n  parser.add_argument(\n    '--data_root', type=str, default='data',\n    help='Default location where data is stored (default: %(default)s)')\n  parser.add_argument(\n    '--weights_root', type=str, default='weights',\n    help='Default location to store weights (default: %(default)s)')\n  parser.add_argument(\n    '--logs_root', type=str, default='logs',\n    help='Default location to store logs (default: %(default)s)')\n  parser.add_argument(\n    '--samples_root', type=str, default='samples',\n    help='Default location to store samples (default: %(default)s)')  \n  parser.add_argument(\n    '--pbar', type=str, default='mine',\n    help='Type of progressbar to use; one of \"mine\" or \"tqdm\" '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--name_suffix', type=str, default='',\n    help='Suffix for experiment name for loading weights for sampling '\n         '(consider \"best0\") (default: %(default)s)')\n  parser.add_argument(\n    '--experiment_name', type=str, default='',\n    help='Optionally override the automatic experiment naming with this arg. '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--config_from_name', action='store_true', default=False,\n    help='Derive the config from the experiment name (default: %(default)s)')\n         \n  ### EMA Stuff ###\n  parser.add_argument(\n    '--ema', action='store_true', default=False,\n    help=\"Keep an ema of G's weights? 
(default: %(default)s)\")\n  parser.add_argument(\n    '--ema_decay', type=float, default=0.9999,\n    help='EMA decay rate (default: %(default)s)')\n  parser.add_argument(\n    '--use_ema', action='store_true', default=False,\n    help='Use the EMA parameters of G for evaluation? (default: %(default)s)')\n  parser.add_argument(\n    '--ema_start', type=int, default=0,\n    help='When to start updating the EMA weights (default: %(default)s)')\n  \n  ### Numerical precision and SV stuff ### \n  parser.add_argument(\n    '--adam_eps', type=float, default=1e-8,\n    help='epsilon value to use for Adam (default: %(default)s)')\n  parser.add_argument(\n    '--BN_eps', type=float, default=1e-5,\n    help='epsilon value to use for BatchNorm (default: %(default)s)')\n  parser.add_argument(\n    '--SN_eps', type=float, default=1e-8,\n    help='epsilon value to use for Spectral Norm (default: %(default)s)')\n  parser.add_argument(\n    '--num_G_SVs', type=int, default=1,\n    help='Number of SVs to track in G (default: %(default)s)')\n  parser.add_argument(\n    '--num_D_SVs', type=int, default=1,\n    help='Number of SVs to track in D (default: %(default)s)')\n  parser.add_argument(\n    '--num_G_SV_itrs', type=int, default=1,\n    help='Number of SV itrs in G (default: %(default)s)')\n  parser.add_argument(\n    '--num_D_SV_itrs', type=int, default=1,\n    help='Number of SV itrs in D (default: %(default)s)')\n  \n  ### Ortho reg stuff ### \n  parser.add_argument(\n    '--G_ortho', type=float, default=0.0, # 1e-4 is default for BigGAN\n    help='Modified ortho reg coefficient in G (default: %(default)s)')\n  parser.add_argument(\n    '--D_ortho', type=float, default=0.0,\n    help='Modified ortho reg coefficient in D (default: %(default)s)')\n  parser.add_argument(\n    '--toggle_grads', action='store_true', default=True,\n    help=\"Toggle D and G's 'requires_grad' settings when not training them? \"\n         '(default: %(default)s)')\n  \n  ### Which train function ###\n  parser.add_argument(\n    '--which_train_fn', type=str, default='GAN',\n    help='How2trainyourbois (default: %(default)s)')  \n  \n  ### Resume training stuff\n  parser.add_argument(\n    '--load_weights', type=str, default='',\n    help='Suffix for which weights to load (e.g. best0, copy0) '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--resume', action='store_true', default=False,\n    help='Resume training? (default: %(default)s)')\n  \n  ### Log stuff ###\n  parser.add_argument(\n    '--logstyle', type=str, default='%3.3e',\n    help='What style to use when logging training metrics? '\n         'One of: %#.#f/ %#.#e (float/exp, text), '\n         'pickle (python pickle), '\n         'npz (numpy zip), '\n         'mat (MATLAB .mat file) (default: %(default)s)')\n  parser.add_argument(\n    '--log_G_spectra', action='store_true', default=False,\n    help='Log the top 3 singular values in each SN layer in G? '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--log_D_spectra', action='store_true', default=False,\n    help='Log the top 3 singular values in each SN layer in D? 
'\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--sv_log_interval', type=int, default=10,\n    help='Iteration interval for logging singular values '\n         '(default: %(default)s)') \n   \n  return parser\n\n# Arguments for sample.py; not presently used in train.py\ndef add_sample_parser(parser):\n  parser.add_argument(\n    '--sample_npz', action='store_true', default=False,\n    help='Sample \"sample_num_npz\" images and save to npz? '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--sample_num_npz', type=int, default=50000,\n    help='Number of images to sample when sampling NPZs '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--sample_sheets', action='store_true', default=False,\n    help='Produce class-conditional sample sheets and stick them in '\n         'the samples root? (default: %(default)s)')\n  parser.add_argument(\n    '--sample_interps', action='store_true', default=False,\n    help='Produce interpolation sheets and stick them in '\n         'the samples root? (default: %(default)s)')         \n  parser.add_argument(\n    '--sample_sheet_folder_num', type=int, default=-1,\n    help='Number to use for the folder for these sample sheets '\n         '(default: %(default)s)')\n  parser.add_argument(\n    '--sample_random', action='store_true', default=False,\n    help='Produce a single random sheet? (default: %(default)s)')\n  parser.add_argument(\n    '--sample_trunc_curves', type=str, default='',\n    help='Get inception metrics with a range of variances? '\n         'To use this, specify a startpoint, step, and endpoint, e.g. '\n         '--sample_trunc_curves 0.2_0.1_1.0 for a startpoint of 0.2, '\n         'endpoint of 1.0, and stepsize of 0.1.  Note that this is '\n         'not exactly identical to using tf.truncated_normal, but should '\n         'have approximately the same effect. (default: %(default)s)')\n  parser.add_argument(\n    '--sample_inception_metrics', action='store_true', default=False,\n    help='Calculate Inception metrics with sample.py? 
(default: %(default)s)')  \n  return parser\n\n# Convenience dicts\ndset_dict = {'I32': dset.ImageFolder, 'I64': dset.ImageFolder, \n             'I128': dset.ImageFolder, 'I256': dset.ImageFolder,\n             'I32_hdf5': dset.ILSVRC_HDF5, 'I64_hdf5': dset.ILSVRC_HDF5, \n             'I128_hdf5': dset.ILSVRC_HDF5, 'I256_hdf5': dset.ILSVRC_HDF5,\n             'C10': dset.CIFAR10, 'C100': dset.CIFAR100,\n             'I64ext': dset.ImageFolder, 'I64ext_hdf5': dset.ILSVRC_HDF5,\n             'I128ext': dset.ImageFolder, 'I128ext_hdf5': dset.ILSVRC_HDF5}\nimsize_dict = {'I32': 32, 'I32_hdf5': 32,\n               'I64': 64, 'I64_hdf5': 64,\n               'I128': 128, 'I128_hdf5': 128,\n               'I256': 256, 'I256_hdf5': 256,\n               'C10': 32, 'C100': 32,\n               'I64ext': 64, 'I64ext_hdf5': 64,\n               'I128ext': 128, 'I128ext_hdf5': 128}\nroot_dict = {'I32': 'ImageNet', 'I32_hdf5': 'ILSVRC32.hdf5',\n             'I64': 'ImageNet', 'I64_hdf5': 'ILSVRC64.hdf5',\n             'I128': 'ImageNet', 'I128_hdf5': 'ILSVRC128.hdf5',\n             'I256': 'ImageNet', 'I256_hdf5': 'ILSVRC256.hdf5',\n             'C10': 'cifar', 'C100': 'cifar',\n             'I64ext': 'Ext', 'I64ext_hdf5': 'I64Ext.hdf5',\n             'I128ext': 'Ext', 'I128ext_hdf5': 'I128Ext.hdf5',}\nnclass_dict = {'I32': 1000, 'I32_hdf5': 1000,\n               'I64': 1000, 'I64_hdf5': 1000,\n               'I128': 1000, 'I128_hdf5': 1000,\n               'I256': 1000, 'I256_hdf5': 1000,\n               'C10': 10, 'C100': 100,\n               'I64ext': 20, 'I64ext_hdf5': 20,\n               'I128ext': 10, 'I128ext_hdf5': 10}\n# Number of classes to put per sample sheet               \nclasses_per_sheet_dict = {'I32': 50, 'I32_hdf5': 50,\n                          'I64': 50, 'I64_hdf5': 50,\n                          'I128': 20, 'I128_hdf5': 20,\n                          'I256': 20, 'I256_hdf5': 20,\n                          'C10': 10, 'C100': 100,\n                          'I64ext': 20, 'I64ext_hdf5': 20,\n                          'I128ext': 20, 'I128ext_hdf5': 20}\nactivation_dict = {'inplace_relu': nn.ReLU(inplace=True),\n                   'relu': nn.ReLU(inplace=False),\n                   'ir': nn.ReLU(inplace=True),}\n\nclass CenterCropLongEdge(object):\n  \"\"\"Crops the given PIL Image on the long edge.\n  Args:\n      size (sequence or int): Desired output size of the crop. If size is an\n          int instead of sequence like (h, w), a square crop (size, size) is\n          made.\n  \"\"\"\n  def __call__(self, img):\n    \"\"\"\n    Args:\n        img (PIL Image): Image to be cropped.\n    Returns:\n        PIL Image: Cropped image.\n    \"\"\"\n    return transforms.functional.center_crop(img, min(img.size))\n\n  def __repr__(self):\n    return self.__class__.__name__\n\nclass RandomCropLongEdge(object):\n  \"\"\"Crops the given PIL Image on the long edge with a random start point.\n  Args:\n      size (sequence or int): Desired output size of the crop. 
If size is an\n          int instead of sequence like (h, w), a square crop (size, size) is\n          made.\n  \"\"\"\n  def __call__(self, img):\n    \"\"\"\n    Args:\n        img (PIL Image): Image to be cropped.\n    Returns:\n        PIL Image: Cropped image.\n    \"\"\"\n    size = (min(img.size), min(img.size))\n    # Only step forward along this edge if it's the long edge\n    i = (0 if size[0] == img.size[0] \n          else np.random.randint(low=0, high=img.size[0] - size[0]))\n    j = (0 if size[1] == img.size[1]\n          else np.random.randint(low=0, high=img.size[1] - size[1]))\n    return transforms.functional.crop(img, i, j, size[0], size[1])\n\n  def __repr__(self):\n    return self.__class__.__name__\n\n    \n# multi-epoch Dataset sampler to avoid memory leakage and enable resumption of\n# training from the same sample regardless of whether we stop mid-epoch\nclass MultiEpochSampler(torch.utils.data.Sampler):\n  r\"\"\"Samples elements randomly over multiple epochs\n\n  Arguments:\n      data_source (Dataset): dataset to sample from\n      num_epochs (int) : Number of times to loop over the dataset\n      start_itr (int) : which iteration to begin from\n  \"\"\"\n\n  def __init__(self, data_source, num_epochs, start_itr=0, batch_size=128):\n    self.data_source = data_source\n    self.num_samples = len(self.data_source)\n    self.num_epochs = num_epochs\n    self.start_itr = start_itr\n    self.batch_size = batch_size\n\n    if not isinstance(self.num_samples, int) or self.num_samples <= 0:\n      raise ValueError(\"num_samples should be a positive integer \"\n                       \"value, but got num_samples={}\".format(self.num_samples))\n\n  def __iter__(self):\n    n = len(self.data_source)\n    # Determine the number of epochs still to run given start_itr\n    num_epochs = int(np.ceil((n * self.num_epochs \n                              - (self.start_itr * self.batch_size)) / float(n)))\n    # Sample all the indices, and then grab the last num_epochs index sets;\n    # This ensures if we're starting at epoch 4, we're still grabbing epoch 4's\n    # indices\n    out = [torch.randperm(n) for epoch in range(self.num_epochs)][-num_epochs:]\n    # Ignore the first (start_itr * batch_size) % n indices of the first epoch\n    out[0] = out[0][(self.start_itr * self.batch_size % n):]\n    # if self.replacement:\n      # return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())\n    # return iter(.tolist())\n    output = torch.cat(out).tolist()\n    print('Length dataset output is %d' % len(output))\n    return iter(output)\n\n  def __len__(self):\n    return len(self.data_source) * self.num_epochs - self.start_itr * self.batch_size\n\n\n# Convenience function to centralize all data loaders\ndef get_data_loaders(dataset, data_root=None, augment=False, batch_size=64, \n                     num_workers=8, shuffle=True, load_in_mem=False, hdf5=False,\n                     pin_memory=True, drop_last=True, start_itr=0,\n                     num_epochs=500, use_multiepoch_sampler=False,\n                     **kwargs):\n\n  # Append /FILENAME.hdf5 to root if using hdf5\n  data_root += '/%s' % root_dict[dataset]\n  print('Using dataset root location %s' % data_root)\n\n  which_dataset = dset_dict[dataset]\n  norm_mean = [0.5,0.5,0.5]\n  norm_std = [0.5,0.5,0.5]\n  image_size = imsize_dict[dataset]\n  # For image folder datasets, name of the file where we store the precomputed\n  # image locations to avoid having to walk the dirs every time we load.\n  dataset_kwargs = {'index_filename': '%s_imgs.npz' 
% dataset}\n  \n  # HDF5 datasets have their own inbuilt transform, no need to train_transform  \n  if 'hdf5' in dataset:\n    train_transform = None\n  else:\n    if augment:\n      print('Data will be augmented...')\n      if dataset in ['C10', 'C100']:\n        train_transform = [transforms.RandomCrop(32, padding=4),\n                           transforms.RandomHorizontalFlip()]\n      else:\n        train_transform = [RandomCropLongEdge(),\n                         transforms.Resize(image_size),\n                         transforms.RandomHorizontalFlip()]\n    else:\n      print('Data will not be augmented...')\n      if dataset in ['C10', 'C100']:\n        train_transform = []\n      else:\n        train_transform = [CenterCropLongEdge(), transforms.Resize(image_size)]\n      # train_transform = [transforms.Resize(image_size), transforms.CenterCrop]\n    train_transform = transforms.Compose(train_transform + [\n                     transforms.ToTensor(),\n                     transforms.Normalize(norm_mean, norm_std)])\n  train_set = which_dataset(root=data_root, transform=train_transform,\n                            load_in_mem=load_in_mem, **dataset_kwargs)\n\n  # Prepare loader; the loaders list is for forward compatibility with\n  # using validation / test splits.\n  loaders = []   \n  if use_multiepoch_sampler:\n    print('Using multiepoch sampler from start_itr %d...' % start_itr)\n    loader_kwargs = {'num_workers': num_workers, 'pin_memory': pin_memory}\n    sampler = MultiEpochSampler(train_set, num_epochs, start_itr, batch_size)\n    train_loader = DataLoader(train_set, batch_size=batch_size,\n                              sampler=sampler, **loader_kwargs)\n  else:\n    loader_kwargs = {'num_workers': num_workers, 'pin_memory': pin_memory,\n                     'drop_last': drop_last} # Default, drop last incomplete batch\n    train_loader = DataLoader(train_set, batch_size=batch_size,\n                              shuffle=shuffle, **loader_kwargs)\n  loaders.append(train_loader)\n  return loaders\n\n\n# Utility file to seed rngs\ndef seed_rng(seed):\n  torch.manual_seed(seed)\n  torch.cuda.manual_seed(seed)\n  np.random.seed(seed)\n\n\n# Utility to peg all roots to a base root\n# If a base root folder is provided, peg all other root folders to it.\ndef update_config_roots(config):\n  if config['base_root']:\n    print('Pegging all root folders to base root %s' % config['base_root'])\n    for key in ['data', 'weights', 'logs', 'samples']:\n      config['%s_root' % key] = '%s/%s' % (config['base_root'], key)\n  return config\n\n\n# Utility to prepare root folders if they don't exist; parent folder must exist\ndef prepare_root(config):\n  for key in ['weights_root', 'logs_root', 'samples_root']:\n    if not os.path.exists(config[key]):\n      print('Making directory %s for %s...' % (config[key], key))\n      os.mkdir(config[key])\n\n\n# Simple wrapper that applies EMA to a model. 
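Typical use (a sketch; the names here are illustrative, not fixed by this file): create G_ema as a copy of G, build ema_ = ema(G, G_ema, decay=0.9999, start_itr=config['ema_start']), and call ema_.update(itr) after each G step.\n# 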
Could be better done in 1.0 using\n# the parameters() and buffers() module functions, but for now this works\n# with state_dicts using .copy_\nclass ema(object):\n  def __init__(self, source, target, decay=0.9999, start_itr=0):\n    self.source = source\n    self.target = target\n    self.decay = decay\n    # Optional parameter indicating what iteration to start the decay at\n    self.start_itr = start_itr\n    # Initialize target's params to be source's\n    self.source_dict = self.source.state_dict()\n    self.target_dict = self.target.state_dict()\n    print('Initializing EMA parameters to be source parameters...')\n    with torch.no_grad():\n      for key in self.source_dict:\n        self.target_dict[key].data.copy_(self.source_dict[key].data)\n        # target_dict[key].data = source_dict[key].data # Doesn't work!\n\n  def update(self, itr=None):\n    # If an iteration counter is provided and itr is less than the start itr,\n    # peg the ema weights to the underlying weights.\n    if itr is not None and itr < self.start_itr:\n      decay = 0.0\n    else:\n      decay = self.decay\n    with torch.no_grad():\n      for key in self.source_dict:\n        self.target_dict[key].data.copy_(self.target_dict[key].data * decay \n                                     + self.source_dict[key].data * (1 - decay))\n\n\n# Apply modified ortho reg to a model\n# This function is an optimized version that directly computes the gradient,\n# instead of computing and then differentiating the loss.\ndef ortho(model, strength=1e-4, blacklist=[]):\n  with torch.no_grad():\n    for param in model.parameters():\n      # Only apply this to parameters with at least 2 axes, and not in the blacklist\n      if len(param.shape) < 2 or any([param is item for item in blacklist]):\n        continue\n      w = param.view(param.shape[0], -1)\n      grad = (2 * torch.mm(torch.mm(w, w.t()) \n              * (1. - torch.eye(w.shape[0], device=w.device)), w))\n      param.grad.data += strength * grad.view(param.shape)\n\n\n# Default ortho reg\n# This function is an optimized version that directly computes the gradient,\n# instead of computing and then differentiating the loss.\ndef default_ortho(model, strength=1e-4, blacklist=[]):\n  with torch.no_grad():\n    for param in model.parameters():\n      # Only apply this to parameters with at least 2 axes & not in blacklist\n      if len(param.shape) < 2 or param in blacklist:\n        continue\n      w = param.view(param.shape[0], -1)\n      grad = (2 * torch.mm(torch.mm(w, w.t()) \n               - torch.eye(w.shape[0], device=w.device), w))\n      param.grad.data += strength * grad.view(param.shape)\n\n\n# Convenience utility to switch off requires_grad\ndef toggle_grad(model, on_or_off):\n  for param in model.parameters():\n    param.requires_grad = on_or_off\n\n\n# Function to join strings or ignore them\n# Base string is the string to link \"strings,\" while strings\n# is a list of strings or Nones.\ndef join_strings(base_string, strings):\n  return base_string.join([item for item in strings if item])\n\n\n# Save a model's weights, optimizer, and the state_dict\ndef save_weights(G, D, state_dict, weights_root, experiment_name, \n                 name_suffix=None, G_ema=None):\n  root = '/'.join([weights_root, experiment_name])\n  if not os.path.exists(root):\n    os.mkdir(root)\n  if name_suffix:\n    print('Saving weights to %s/%s...' % (root, name_suffix))\n  else:\n    print('Saving weights to %s...' 
% root)\n  torch.save(G.state_dict(), \n              '%s/%s.pth' % (root, join_strings('_', ['G', name_suffix])))\n  torch.save(G.optim.state_dict(), \n              '%s/%s.pth' % (root, join_strings('_', ['G_optim', name_suffix])))\n  torch.save(D.state_dict(), \n              '%s/%s.pth' % (root, join_strings('_', ['D', name_suffix])))\n  torch.save(D.optim.state_dict(),\n              '%s/%s.pth' % (root, join_strings('_', ['D_optim', name_suffix])))\n  torch.save(state_dict,\n              '%s/%s.pth' % (root, join_strings('_', ['state_dict', name_suffix])))\n  if G_ema is not None:\n    torch.save(G_ema.state_dict(), \n                '%s/%s.pth' % (root, join_strings('_', ['G_ema', name_suffix])))\n\n\n# Load a model's weights, optimizer, and the state_dict\ndef load_weights(G, D, state_dict, weights_root, experiment_name, \n                 name_suffix=None, G_ema=None, strict=True, load_optim=True):\n  root = '/'.join([weights_root, experiment_name])\n  if name_suffix:\n    print('Loading %s weights from %s...' % (name_suffix, root))\n  else:\n    print('Loading weights from %s...' % root)\n  if G is not None:\n    G.load_state_dict(\n      torch.load('%s/%s.pth' % (root, join_strings('_', ['G', name_suffix]))),\n      strict=strict)\n    if load_optim:\n      G.optim.load_state_dict(\n        torch.load('%s/%s.pth' % (root, join_strings('_', ['G_optim', name_suffix]))))\n  if D is not None:\n    D.load_state_dict(\n      torch.load('%s/%s.pth' % (root, join_strings('_', ['D', name_suffix]))),\n      strict=strict)\n    if load_optim:\n      D.optim.load_state_dict(\n        torch.load('%s/%s.pth' % (root, join_strings('_', ['D_optim', name_suffix]))))\n  # Load state dict\n  for item in state_dict:\n    state_dict[item] = torch.load('%s/%s.pth' % (root, join_strings('_', ['state_dict', name_suffix])))[item]\n  if G_ema is not None:\n    G_ema.load_state_dict(\n      torch.load('%s/%s.pth' % (root, join_strings('_', ['G_ema', name_suffix]))),\n      strict=strict)\n\n\n''' MetricsLogger originally stolen from VoxNet source code.\n    Used for logging inception metrics'''\nclass MetricsLogger(object):\n  def __init__(self, fname, reinitialize=False):\n    self.fname = fname\n    self.reinitialize = reinitialize\n    if os.path.exists(self.fname):\n      if self.reinitialize:\n        print('{} exists, deleting...'.format(self.fname))\n        os.remove(self.fname)\n\n  def log(self, record=None, **kwargs):\n    \"\"\"\n    Assumption: no newlines in the input.\n    \"\"\"\n    if record is None:\n      record = {}\n    record.update(kwargs)\n    record['_stamp'] = time.time()\n    with open(self.fname, 'a') as f:\n      f.write(json.dumps(record, ensure_ascii=True) + '\\n')\n\n\n# Logstyle is either:\n# '%#.#f' for floating point representation in text\n# '%#.#e' for exponent representation in text\n# 'npz' for output to npz # NOT YET SUPPORTED\n# 'pickle' for output to a python pickle # NOT YET SUPPORTED\n# 'mat' for output to a MATLAB .mat file # NOT YET SUPPORTED\nclass MyLogger(object):\n  def __init__(self, fname, reinitialize=False, logstyle='%3.3f'):\n    self.root = fname\n    if not os.path.exists(self.root):\n      os.mkdir(self.root)\n    self.reinitialize = reinitialize\n    self.metrics = []\n    self.logstyle = logstyle # One of '%3.3f' or like '%3.3e'\n\n  # Delete log if re-starting and log already exists\n  def reinit(self, item):\n    if os.path.exists('%s/%s.log' % (self.root, item)):\n      if self.reinitialize:\n        # Only print the removal mess\n        if 
'sv' in item:\n          if not any('sv' in item for item in self.metrics):\n            print('Deleting singular value logs...')\n        else:\n          print('{} exists, deleting...'.format('%s/%s.log' % (self.root, item)))\n        os.remove('%s/%s.log' % (self.root, item))\n  \n  # Log in plaintext; this is designed for being read in MATLAB (sorry not sorry)\n  def log(self, itr, **kwargs):\n    for arg in kwargs:\n      if arg not in self.metrics:\n        if self.reinitialize:\n          self.reinit(arg)\n        self.metrics += [arg]\n      if self.logstyle == 'pickle':\n        print('Pickle not currently supported...')\n         # with open('%s/%s.log' % (self.root, arg), 'a') as f:\n          # pickle.dump(kwargs[arg], f)\n      elif self.logstyle == 'mat':\n        print('.mat logstyle not currently supported...')\n      else:\n        with open('%s/%s.log' % (self.root, arg), 'a') as f:\n          f.write('%d: %s\\n' % (itr, self.logstyle % kwargs[arg]))\n\n\n# Write some metadata to the logs directory\ndef write_metadata(logs_root, experiment_name, config, state_dict):\n  with open(('%s/%s/metalog.txt' % \n             (logs_root, experiment_name)), 'w') as writefile:\n    writefile.write('datetime: %s\\n' % str(datetime.datetime.now()))\n    writefile.write('config: %s\\n' % str(config))\n    writefile.write('state: %s\\n' % str(state_dict))\n\n\n\"\"\"\nVery basic progress indicator to wrap an iterable in.\n\nAuthor: Jan Schlüter\nAndy's adds: time elapsed in addition to ETA, makes it possible to add\nestimated time to 1k iters instead of estimated time to completion.\n\"\"\"\ndef progress(items, desc='', total=None, min_delay=0.1, displaytype='s1k'):\n  \"\"\"\n  Returns a generator over `items`, printing the number and percentage of\n  items processed and the estimated remaining processing time before yielding\n  the next item. `total` gives the total number of items (required if `items`\n  has no length), and `min_delay` gives the minimum time in seconds between\n  subsequent prints. 
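A minimal use (illustrative): `for x in progress(range(100), desc='itr '): step(x)`. 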
`desc` gives an optional prefix text (end with a space).\n  \"\"\"\n  total = total or len(items)\n  t_start = time.time()\n  t_last = 0\n  for n, item in enumerate(items):\n    t_now = time.time()\n    if t_now - t_last > min_delay:\n      print(\"\\r%s%d/%d (%6.2f%%)\" % (\n              desc, n+1, total, n / float(total) * 100), end=\" \")\n      if n > 0:\n        \n        if displaytype == 's1k': # minutes/seconds for 1000 iters\n          next_1000 = n + (1000 - n%1000)\n          t_done = t_now - t_start\n          t_1k = t_done / n * next_1000\n          outlist = list(divmod(t_done, 60)) + list(divmod(t_1k - t_done, 60))\n          print(\"(TE/ET1k: %d:%02d / %d:%02d)\" % tuple(outlist), end=\" \")\n        else:# displaytype == 'eta':\n          t_done = t_now - t_start\n          t_total = t_done / n * total\n          outlist = list(divmod(t_done, 60)) + list(divmod(t_total - t_done, 60))\n          print(\"(TE/ETA: %d:%02d / %d:%02d)\" % tuple(outlist), end=\" \")\n          \n      sys.stdout.flush()\n      t_last = t_now\n    yield item\n  t_total = time.time() - t_start\n  print(\"\\r%s%d/%d (100.00%%) (took %d:%02d)\" % ((desc, total, total) +\n                                                   divmod(t_total, 60)))\n\n\n# Sample function for use with inception metrics\ndef sample(G, z_, y_, config):\n  with torch.no_grad():\n    z_.sample_()\n    y_.sample_()\n    if config['parallel']:\n      G_z =  nn.parallel.data_parallel(G, (z_, G.shared(y_)))\n    else:\n      G_z = G(z_, G.shared(y_))\n    return G_z, y_\n\n\n# Sample function for sample sheets\ndef sample_sheet(G, classes_per_sheet, num_classes, samples_per_class, parallel,\n                 samples_root, experiment_name, folder_number, z_=None):\n  # Prepare sample directory\n  if not os.path.isdir('%s/%s' % (samples_root, experiment_name)):\n    os.mkdir('%s/%s' % (samples_root, experiment_name))\n  if not os.path.isdir('%s/%s/%d' % (samples_root, experiment_name, folder_number)):\n    os.mkdir('%s/%s/%d' % (samples_root, experiment_name, folder_number))\n  # loop over total number of sheets\n  for i in range(num_classes // classes_per_sheet):\n    ims = []\n    y = torch.arange(i * classes_per_sheet, (i + 1) * classes_per_sheet, device='cuda')\n    for j in range(samples_per_class):\n      if (z_ is not None) and hasattr(z_, 'sample_') and classes_per_sheet <= z_.size(0):\n        z_.sample_()\n      else:\n        z_ = torch.randn(classes_per_sheet, G.dim_z, device='cuda')        \n      with torch.no_grad():\n        if parallel:\n          o = nn.parallel.data_parallel(G, (z_[:classes_per_sheet], G.shared(y)))\n        else:\n          o = G(z_[:classes_per_sheet], G.shared(y))\n\n      ims += [o.data.cpu()]\n    # This line should properly unroll the images\n    out_ims = torch.stack(ims, 1).view(-1, ims[0].shape[1], ims[0].shape[2], \n                                       ims[0].shape[3]).data.float().cpu()\n    # The path for the samples\n    image_filename = '%s/%s/%d/samples%d.jpg' % (samples_root, experiment_name, \n                                                 folder_number, i)\n    torchvision.utils.save_image(out_ims, image_filename,\n                                 nrow=samples_per_class, normalize=True)\n\n\n# Interp function; expects x0 and x1 to be of shape (shape0, 1, rest_of_shape..)\ndef interp(x0, x1, num_midpoints):\n  lerp = torch.linspace(0, 1.0, num_midpoints + 2, device='cuda').to(x0.dtype)\n  return ((x0 * (1 - lerp.view(1, -1, 1))) + (x1 * lerp.view(1, -1, 1)))\n\n\n# interp 
sheet function\n# Supports full, class-wise and intra-class interpolation\ndef interp_sheet(G, num_per_sheet, num_midpoints, num_classes, parallel,\n                 samples_root, experiment_name, folder_number, sheet_number=0,\n                 fix_z=False, fix_y=False, device='cuda'):\n  # Prepare zs and ys\n  if fix_z: # If fix Z, only sample 1 z per row\n    zs = torch.randn(num_per_sheet, 1, G.dim_z, device=device)\n    zs = zs.repeat(1, num_midpoints + 2, 1).view(-1, G.dim_z)\n  else:\n    zs = interp(torch.randn(num_per_sheet, 1, G.dim_z, device=device),\n                torch.randn(num_per_sheet, 1, G.dim_z, device=device),\n                num_midpoints).view(-1, G.dim_z)\n  if fix_y: # If fix y, only sample 1 y per row\n    ys = sample_1hot(num_per_sheet, num_classes)\n    ys = G.shared(ys).view(num_per_sheet, 1, -1)\n    ys = ys.repeat(1, num_midpoints + 2, 1).view(num_per_sheet * (num_midpoints + 2), -1)\n  else:\n    ys = interp(G.shared(sample_1hot(num_per_sheet, num_classes)).view(num_per_sheet, 1, -1),\n                G.shared(sample_1hot(num_per_sheet, num_classes)).view(num_per_sheet, 1, -1),\n                num_midpoints).view(num_per_sheet * (num_midpoints + 2), -1)\n  # Run the net--note that we've already passed y through G.shared.\n  if G.fp16:\n    zs = zs.half()\n  with torch.no_grad():\n    if parallel:\n      out_ims = nn.parallel.data_parallel(G, (zs, ys)).data.cpu()\n    else:\n      out_ims = G(zs, ys).data.cpu()\n  interp_style = '' + ('Z' if not fix_z else '') + ('Y' if not fix_y else '')\n  image_filename = '%s/%s/%d/interp%s%d.jpg' % (samples_root, experiment_name,\n                                                folder_number, interp_style,\n                                                sheet_number)\n  torchvision.utils.save_image(out_ims, image_filename,\n                               nrow=num_midpoints + 2, normalize=True)\n\n\n# Convenience debugging function to print out gradnorms and shape from each layer\n# May need to rewrite this so we can actually see which parameter is which\ndef print_grad_norms(net):\n  gradsums = [[float(torch.norm(param.grad).item()),\n               float(torch.norm(param).item()), param.shape]\n              for param in net.parameters()]\n  order = np.argsort([item[0] for item in gradsums])\n  print(['%3.3e,%3.3e, %s' % (gradsums[item_index][0],\n                              gradsums[item_index][1],\n                              str(gradsums[item_index][2])) \n                            for item_index in order])\n\n\n# Get singular values to log. 
This will use the state dict to find them\n# and substitute underscores for dots.\ndef get_SVs(net, prefix):\n  d = net.state_dict()\n  return {('%s_%s' % (prefix, key)).replace('.', '_') :\n            float(d[key].item())\n            for key in d if 'sv' in key}\n\n\n# Name an experiment based on its config\ndef name_from_config(config):\n  name = '_'.join([\n  item for item in [\n  'Big%s' % config['which_train_fn'],\n  config['dataset'],\n  config['model'] if config['model'] != 'BigGAN' else None,\n  'seed%d' % config['seed'],\n  'Gch%d' % config['G_ch'],\n  'Dch%d' % config['D_ch'],\n  # 'Gd%d' % config['G_depth'] if config['G_depth'] > 1 else None,\n  # 'Dd%d' % config['D_depth'] if config['D_depth'] > 1 else None,\n  'bs%d' % config['batch_size'],\n  # 'Gfp16' if config['G_fp16'] else None,\n  # 'Dfp16' if config['D_fp16'] else None,\n  # 'nDs%d' % config['num_D_steps'] if config['num_D_steps'] > 1 else None,\n  'nDa%d' % config['num_D_accumulations'] if config['num_D_accumulations'] > 1 else None,\n  'nGa%d' % config['num_G_accumulations'] if config['num_G_accumulations'] > 1 else None,\n  # 'Glr%2.1e' % config['G_lr'],\n  # 'Dlr%2.1e' % config['D_lr'],\n  # 'GB%3.3f' % config['G_B1'] if config['G_B1'] !=0.0 else None,\n  # 'GBB%3.3f' % config['G_B2'] if config['G_B2'] !=0.999 else None,\n  # 'DB%3.3f' % config['D_B1'] if config['D_B1'] !=0.0 else None,\n  # 'DBB%3.3f' % config['D_B2'] if config['D_B2'] !=0.999 else None,\n  # 'Gnl%s' % config['G_nl'],\n  # 'Dnl%s' % config['D_nl'],\n  # 'Ginit%s' % config['G_init'],\n  # 'Dinit%s' % config['D_init'],\n  # 'G%s' % config['G_param'] if config['G_param'] != 'SN' else None,\n  # 'D%s' % config['D_param'] if config['D_param'] != 'SN' else None,\n  'Gattn%s' % config['G_attn'] if config['G_attn'] != '0' else None,\n  'Dattn%s' % config['D_attn'] if config['D_attn'] != '0' else None,\n  # 'Gortho%2.1e' % config['G_ortho'] if config['G_ortho'] > 0.0 else None,\n  # 'Dortho%2.1e' % config['D_ortho'] if config['D_ortho'] > 0.0 else None,\n  # config['norm_style'] if config['norm_style'] != 'bn' else None,\n  # 'cr' if config['cross_replica'] else None,\n  # 'Gshared' if config['G_shared'] else None,\n  # 'hier' if config['hier'] else None,\n  # 'ema' if config['ema'] else None,\n  'Commit%3.2f' % config['commitment'] if config['commitment'] else None,\n  'Layer%s' % config['discrete_layer'] if config['discrete_layer'] else None,\n  'Dicsz%d' % config['dict_size'] if config['dict_size'] else None,\n  'Dicdecay%3.2f' % config['dict_decay'] if config['dict_decay'] else None,\n  config['name_suffix'] if config['name_suffix'] else None,\n  ]\n  if item is not None])\n  # dogball\n  if config['hashname']:\n    return hashname(name)\n  else:\n    return name\n\n\n# A simple function to produce a unique experiment name from the animal hashes.\ndef hashname(name):\n  h = hash(name)\n  a = h % len(animal_hash.a)\n  h = h // len(animal_hash.a)\n  b = h % len(animal_hash.b)\n  h = h // len(animal_hash.b)\n  c = h % len(animal_hash.c)\n  return animal_hash.a[a] + animal_hash.b[b] + animal_hash.c[c]\n\n\n# Get GPU memory, -i is the index\ndef query_gpu(indices):\n  os.system('nvidia-smi -i %s --query-gpu=memory.free --format=csv' % indices)\n\n\n# Convenience function to count the number of parameters in a module\ndef count_parameters(module):\n  print('Number of parameters: {}'.format(\n    sum([p.data.nelement() for p in module.parameters()])))\n\n   \n# Convenience function to sample an index, not actually a 1-hot\ndef sample_1hot(batch_size, num_classes, 
device='cuda'):\n  return torch.randint(low=0, high=num_classes, size=(batch_size,),\n          device=device, dtype=torch.int64, requires_grad=False)\n\n\n# A highly simplified convenience class for sampling from distributions\n# One could also use PyTorch's inbuilt distributions package.\n# Note that this class requires initialization to proceed as\n# x = Distribution(torch.randn(size))\n# x.init_distribution(dist_type, **dist_kwargs)\n# x = x.to(device,dtype)\n# This is partially based on https://discuss.pytorch.org/t/subclassing-torch-tensor/23754/2\nclass Distribution(torch.Tensor):\n  # Init the params of the distribution\n  def init_distribution(self, dist_type, **kwargs):    \n    self.dist_type = dist_type\n    self.dist_kwargs = kwargs\n    if self.dist_type == 'normal':\n      self.mean, self.var = kwargs['mean'], kwargs['var']\n    elif self.dist_type == 'categorical':\n      self.num_categories = kwargs['num_categories']\n\n  def sample_(self):\n    if self.dist_type == 'normal':\n      self.normal_(self.mean, self.var)\n    elif self.dist_type == 'categorical':\n      self.random_(0, self.num_categories)    \n    # return self.variable\n    \n  # Silly hack: overwrite the to() method to wrap the new object\n  # in a distribution as well\n  def to(self, *args, **kwargs):\n    new_obj = Distribution(self)\n    new_obj.init_distribution(self.dist_type, **self.dist_kwargs)\n    new_obj.data = super().to(*args, **kwargs)    \n    return new_obj\n\n\n# Convenience function to prepare a z and y vector\ndef prepare_z_y(G_batch_size, dim_z, nclasses, device='cuda', \n                fp16=False,z_var=1.0):\n  z_ = Distribution(torch.randn(G_batch_size, dim_z, requires_grad=False))\n  z_.init_distribution('normal', mean=0, var=z_var)\n  z_ = z_.to(device,torch.float16 if fp16 else torch.float32)   \n  \n  if fp16:\n    z_ = z_.half()\n\n  y_ = Distribution(torch.zeros(G_batch_size, requires_grad=False))\n  y_.init_distribution('categorical',num_categories=nclasses)\n  y_ = y_.to(device, torch.int64)\n  return z_, y_\n\n\ndef initiate_standing_stats(net):\n  for module in net.modules():\n    if hasattr(module, 'accumulate_standing'):\n      module.reset_stats()\n      module.accumulate_standing = True\n\n\ndef accumulate_standing_stats(net, z, y, nclasses, num_accumulations=16):\n  initiate_standing_stats(net)\n  net.train()\n  for i in range(num_accumulations):\n    with torch.no_grad():\n      z.normal_()\n      y.random_(0, nclasses)\n      x = net(z, net.shared(y)) # No need to parallelize here unless using syncbn\n  # Set to eval mode\n  net.eval() \n\n\n# This version of Adam keeps an fp32 copy of the parameters and\n# does all of the parameter updates in fp32, while still doing the\n# forwards and backwards passes using fp16 (i.e. 
fp16 copies of the\n# parameters and fp16 activations).\n#\n# Note that this calls .float().cuda() on the params.\nimport math\nfrom torch.optim.optimizer import Optimizer\nclass Adam16(Optimizer):\n  def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,weight_decay=0):\n    defaults = dict(lr=lr, betas=betas, eps=eps,\n            weight_decay=weight_decay)\n    params = list(params)\n    super(Adam16, self).__init__(params, defaults)\n      \n  # Safety modification to make sure we floatify our state\n  def load_state_dict(self, state_dict):\n    super(Adam16, self).load_state_dict(state_dict)\n    for group in self.param_groups:\n      for p in group['params']:\n        self.state[p]['exp_avg'] = self.state[p]['exp_avg'].float()\n        self.state[p]['exp_avg_sq'] = self.state[p]['exp_avg_sq'].float()\n        self.state[p]['fp32_p'] = self.state[p]['fp32_p'].float()\n\n  def step(self, closure=None):\n    \"\"\"Performs a single optimization step.\n    Arguments:\n      closure (callable, optional): A closure that reevaluates the model\n        and returns the loss.\n    \"\"\"\n    loss = None\n    if closure is not None:\n      loss = closure()\n\n    for group in self.param_groups:\n      for p in group['params']:\n        if p.grad is None:\n          continue\n          \n        grad = p.grad.data.float()\n        state = self.state[p]\n\n        # State initialization\n        if len(state) == 0:\n          state['step'] = 0\n          # Exponential moving average of gradient values\n          state['exp_avg'] = grad.new().resize_as_(grad).zero_()\n          # Exponential moving average of squared gradient values\n          state['exp_avg_sq'] = grad.new().resize_as_(grad).zero_()\n          # Fp32 copy of the weights\n          state['fp32_p'] = p.data.float()\n\n        exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']\n        beta1, beta2 = group['betas']\n\n        state['step'] += 1\n\n        if group['weight_decay'] != 0:\n          grad = grad.add(group['weight_decay'], state['fp32_p'])\n\n        # Decay the first and second moment running average coefficient\n        exp_avg.mul_(beta1).add_(1 - beta1, grad)\n        exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)\n\n        denom = exp_avg_sq.sqrt().add_(group['eps'])\n\n        bias_correction1 = 1 - beta1 ** state['step']\n        bias_correction2 = 1 - beta2 ** state['step']\n        step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1\n      \n        state['fp32_p'].addcdiv_(-step_size, exp_avg, denom)\n        p.data = state['fp32_p'].half()\n\n    return loss\n"
  },
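  {
    "path": "FQ-BigGAN/utils_usage_example.py",
    "content": "# NOTE: illustrative sketch, not part of the original repo. It shows one\n# plausible way to drive the helpers in utils.py (parser -> roots -> RNG ->\n# loaders); the file name and the argument values here are assumptions.\nimport utils\n\n# Parse a toy CIFAR-10 config from an explicit argv list.\nparser = utils.prepare_parser()\nconfig = vars(parser.parse_args(['--dataset', 'C10', '--batch_size', '32',\n                                 '--shuffle', '--augment']))\n# Peg all root folders to base_root (a no-op while base_root is ''),\n# seed the RNGs, and build the training loader; extra config keys are\n# swallowed by get_data_loaders' **kwargs.\nconfig = utils.update_config_roots(config)\nutils.seed_rng(config['seed'])\n[train_loader] = utils.get_data_loaders(**config)\nprint('Experiment name:', utils.name_from_config(config))\n"
  },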
  {
    "path": "FQ-BigGAN/vq_layer.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Quantize(nn.Module):\n\tdef __init__(self, dim, n_embed, commitment=1.0, decay=0.8, eps=1e-5):\n\t\tsuper().__init__()\n\n\t\tself.dim = dim\n\t\tself.n_embed = n_embed\n\t\tself.decay = decay\n\t\tself.eps = eps\n\t\tself.commitment = commitment\n\n\t\tembed = torch.randn(dim, n_embed)\n\t\tself.register_buffer('embed', embed)\n\t\tself.register_buffer('cluster_size', torch.zeros(n_embed))\n\t\tself.register_buffer('embed_avg', embed.clone())\n\n\tdef forward(self, x, y=None):\n\t\tx = x.permute(0, 2, 3, 1).contiguous()\n\t\tinput_shape = x.shape\n\t\tflatten = x.reshape(-1, self.dim)\n\t\tdist = (\n\t\t    flatten.pow(2).sum(1, keepdim=True)\n\t\t    - 2 * flatten @ self.embed\n\t\t    + self.embed.pow(2).sum(0, keepdim=True)\n\t\t)\n\t\t_, embed_ind = (-dist).max(1)\n\t\tembed_onehot = F.one_hot(embed_ind, self.n_embed).type(flatten.dtype)\n\t\tembed_ind = embed_ind.view(*x.shape[:-1])\n\t\tquantize = self.embed_code(embed_ind).view(input_shape)\n\n\t\tif self.training:\n\t\t\tself.cluster_size.data.mul_(self.decay).add_(\n\t\t\t    1 - self.decay, embed_onehot.sum(0)\n\t\t\t)\n\t\t\tembed_sum = flatten.transpose(0, 1) @ embed_onehot\n\t\t\tself.embed_avg.data.mul_(self.decay).add_(1 - self.decay, embed_sum)\n\t\t\tn = self.cluster_size.sum()\n\t\t\tcluster_size = (\n\t\t\t    (self.cluster_size + self.eps) / (n + self.n_embed * self.eps) * n\n\t\t\t)\n\t\t\tembed_normalized = self.embed_avg / cluster_size.unsqueeze(0)\n\t\t\tself.embed.data.copy_(embed_normalized)\n\n\t\tdiff = self.commitment*torch.mean(torch.mean((quantize.detach() - x).pow(2), dim=(1,2)),\n\t\t                                  dim=(1,), keepdim=True)\n\t\tquantize = x + (quantize - x).detach()\n\t\tavg_probs = torch.mean(embed_onehot, 0)\n\t\tperplexity = torch.exp(- torch.sum(avg_probs * torch.log(avg_probs + 1e-10)))\n\n\t\treturn quantize.permute(0, 3, 1, 2).contiguous(), diff, perplexity\n\n\tdef embed_code(self, embed_id):\n\t\treturn F.embedding(embed_id, self.embed.transpose(0, 1))\n\n\n# class VectorQuantizerEMA(nn.Module):\n# \tdef __init__(self, num_embeddings, embedding_dim, commitment_cost, decay, epsilon=1e-5):\n# \t\tsuper(VectorQuantizerEMA, self).__init__()\n\n# \t\tself._embedding_dim = embedding_dim\n# \t\tself._num_embeddings = num_embeddings\n\n# \t\tself._embedding = nn.Embedding(self._num_embeddings, self._embedding_dim)\n# \t\tself._embedding.weight.data.normal_()\n# \t\tself._commitment_cost = commitment_cost\n\n# \t\tself.register_buffer('_ema_cluster_size', torch.zeros(num_embeddings))\n# \t\tself._ema_w = nn.Parameter(torch.Tensor(num_embeddings, self._embedding_dim))\n# \t\tself._ema_w.data.normal_()\n\n# \t\tself._decay = decay\n# \t\tself._epsilon = epsilon\n\n\n# \tdef forward(self, inputs):\n# \t\t# convert inputs from BCHW -> BHWC\n# \t\tinputs = inputs.permute(0, 2, 3, 1).contiguous()\n# \t\tinput_shape = inputs.shape\n\n# \t\t# Flatten input\n# \t\tflat_input = inputs.view(-1, self._embedding_dim)\n\n# \t\t# Calculate distances\n# \t\tdistances = (torch.sum(flat_input ** 2, dim=1, keepdim=True)\n# \t\t             + torch.sum(self._embedding.weight ** 2, dim=1)\n# \t\t             - 2 * torch.matmul(flat_input, self._embedding.weight.t()))\n\n# \t\t# Encoding\n# \t\tencoding_indices = torch.argmin(distances, dim=1).unsqueeze(1)\n# \t\tencodings = torch.zeros(encoding_indices.shape[0], self._num_embeddings, device=inputs.device)\n# \t\tencodings.scatter_(1, encoding_indices, 1)\n\n# \t\t# 
Quantize and unflatten\n# \t\tquantized = torch.matmul(encodings, self._embedding.weight).view(input_shape)\n\n# \t\t# Use EMA to update the embedding vectors\n# \t\tif self.training:\n# \t\t\tself._ema_cluster_size = self._ema_cluster_size * self._decay + \\\n# \t\t\t                         (1 - self._decay) * torch.sum(encodings, 0)\n\n# \t\t\t# Laplace smoothing of the cluster size\n# \t\t\tn = torch.sum(self._ema_cluster_size.data)\n# \t\t\tself._ema_cluster_size = (\n# \t\t\t\t\t(self._ema_cluster_size + self._epsilon)\n# \t\t\t\t\t/ (n + self._num_embeddings * self._epsilon) * n)\n\n# \t\t\tdw = torch.matmul(encodings.t(), flat_input)\n# \t\t\tself._ema_w = nn.Parameter(self._ema_w * self._decay + (1 - self._decay) * dw)\n\n# \t\t\tself._embedding.weight = nn.Parameter(self._ema_w / self._ema_cluster_size.unsqueeze(1))\n\n# \t\t# Loss\n# \t\te_latent_loss = F.mse_loss(quantized.detach(), inputs)\n# \t\tloss = self._commitment_cost * e_latent_loss\n\n# \t\t# Straight Through Estimator\n# \t\tquantized = inputs + (quantized - inputs).detach()\n# \t\tavg_probs = torch.mean(encodings, dim=0)\n# \t\tperplexity = torch.exp(-torch.sum(avg_probs * torch.log(avg_probs + 1e-10)))\n\n# \t\t# convert quantized from BHWC -> BCHW\n# \t\treturn loss, quantized.permute(0, 3, 1, 2).contiguous(), perplexity, encodings\n"
  },
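  {
    "path": "FQ-BigGAN/vq_layer_example.py",
    "content": "# NOTE: illustrative sketch, not part of the original repo, written against\n# the pre-1.5 torch API the repo uses. It exercises the Quantize layer from\n# vq_layer.py on a dummy feature map; the file name, shapes, and\n# hyperparameters are assumptions, not tuned values.\nimport torch\nfrom vq_layer import Quantize\n\n# Quantize 64-channel features against a 10-entry dictionary.\nquant = Quantize(dim=64, n_embed=10, commitment=0.5, decay=0.8)\nx = torch.randn(8, 64, 16, 16)  # (B, C, H, W), the layout forward() expects\nquantized, diff, perplexity = quant(x)\n# quantized keeps x's shape, with gradients passed straight through to x;\n# diff is the per-sample commitment penalty to add to the GAN loss.\nassert quantized.shape == x.shape\nloss = diff.mean()\nprint(loss.item(), perplexity.item())\n"
  },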
  {
    "path": "FQ-StyleGAN/LICENSE.txt",
    "content": "Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n\n\nNvidia Source Code License-NC\n\n=======================================================================\n\n1. Definitions\n\n\"Licensor\" means any person or entity that distributes its Work.\n\n\"Software\" means the original work of authorship made available under\nthis License.\n\n\"Work\" means the Software and any additions to or derivative works of\nthe Software that are made available under this License.\n\n\"Nvidia Processors\" means any central processing unit (CPU), graphics\nprocessing unit (GPU), field-programmable gate array (FPGA),\napplication-specific integrated circuit (ASIC) or any combination\nthereof designed, made, sold, or provided by Nvidia or its affiliates.\n\nThe terms \"reproduce,\" \"reproduction,\" \"derivative works,\" and\n\"distribution\" have the meaning as provided under U.S. copyright law;\nprovided, however, that for the purposes of this License, derivative\nworks shall not include works that remain separable from, or merely\nlink (or bind by name) to the interfaces of, the Work.\n\nWorks, including the Software, are \"made available\" under this License\nby including in or with the Work either (a) a copyright notice\nreferencing the applicability of this License to the Work, or (b) a\ncopy of this License.\n\n2. License Grants\n\n    2.1 Copyright Grant. Subject to the terms and conditions of this\n    License, each Licensor grants to you a perpetual, worldwide,\n    non-exclusive, royalty-free, copyright license to reproduce,\n    prepare derivative works of, publicly display, publicly perform,\n    sublicense and distribute its Work and any resulting derivative\n    works in any form.\n\n3. Limitations\n\n    3.1 Redistribution. You may reproduce or distribute the Work only\n    if (a) you do so under this License, (b) you include a complete\n    copy of this License with your distribution, and (c) you retain\n    without modification any copyright, patent, trademark, or\n    attribution notices that are present in the Work.\n\n    3.2 Derivative Works. You may specify that additional or different\n    terms apply to the use, reproduction, and distribution of your\n    derivative works of the Work (\"Your Terms\") only if (a) Your Terms\n    provide that the use limitation in Section 3.3 applies to your\n    derivative works, and (b) you identify the specific derivative\n    works that are subject to Your Terms. Notwithstanding Your Terms,\n    this License (including the redistribution requirements in Section\n    3.1) will continue to apply to the Work itself.\n\n    3.3 Use Limitation. The Work and any derivative works thereof only\n    may be used or intended for use non-commercially. The Work or\n    derivative works thereof may be used or intended for use by Nvidia\n    or its affiliates commercially or non-commercially. As used herein,\n    \"non-commercially\" means for research or evaluation purposes only.\n\n    3.4 Patent Claims. If you bring or threaten to bring a patent claim\n    against any Licensor (including any claim, cross-claim or\n    counterclaim in a lawsuit) to enforce any patents that you allege\n    are infringed by any Work, then your rights under this License from\n    such Licensor (including the grants in Sections 2.1 and 2.2) will\n    terminate immediately.\n\n    3.5 Trademarks. 
This License does not grant any rights to use any\n    Licensor's or its affiliates' names, logos, or trademarks, except\n    as necessary to reproduce the notices described in this License.\n\n    3.6 Termination. If you violate any term of this License, then your\n    rights under this License (including the grants in Sections 2.1 and\n    2.2) will terminate immediately.\n\n4. Disclaimer of Warranty.\n\nTHE WORK IS PROVIDED \"AS IS\" WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR\nNON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER\nTHIS LICENSE. \n\n5. Limitation of Liability.\n\nEXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL\nTHEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE\nSHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT,\nINDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF\nOR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK\n(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION,\nLOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER\nCOMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF\nTHE POSSIBILITY OF SUCH DAMAGES.\n\n=======================================================================\n"
  },
  {
    "path": "FQ-StyleGAN/dataset_tool.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Tool for creating multi-resolution TFRecords datasets.\"\"\"\n\n# pylint: disable=too-many-lines\nimport os\nimport sys\nimport glob\nimport argparse\nimport threading\nimport six.moves.queue as Queue # pylint: disable=import-error\nimport traceback\nimport numpy as np\nimport tensorflow as tf\nimport PIL.Image\nimport dnnlib.tflib as tflib\n\nfrom training import dataset\n\n#----------------------------------------------------------------------------\n\ndef error(msg):\n    print('Error: ' + msg)\n    exit(1)\n\n#----------------------------------------------------------------------------\n\nclass TFRecordExporter:\n    def __init__(self, tfrecord_dir, expected_images, print_progress=True, progress_interval=10):\n        self.tfrecord_dir       = tfrecord_dir\n        self.tfr_prefix         = os.path.join(self.tfrecord_dir, os.path.basename(self.tfrecord_dir))\n        self.expected_images    = expected_images\n        self.cur_images         = 0\n        self.shape              = None\n        self.resolution_log2    = None\n        self.tfr_writers        = []\n        self.print_progress     = print_progress\n        self.progress_interval  = progress_interval\n\n        if self.print_progress:\n            print('Creating dataset \"%s\"' % tfrecord_dir)\n        if not os.path.isdir(self.tfrecord_dir):\n            os.makedirs(self.tfrecord_dir)\n        assert os.path.isdir(self.tfrecord_dir)\n\n    def close(self):\n        if self.print_progress:\n            print('%-40s\\r' % 'Flushing data...', end='', flush=True)\n        for tfr_writer in self.tfr_writers:\n            tfr_writer.close()\n        self.tfr_writers = []\n        if self.print_progress:\n            print('%-40s\\r' % '', end='', flush=True)\n            print('Added %d images.' 
% self.cur_images)\n\n    def choose_shuffled_order(self): # Note: Images and labels must be added in shuffled order.\n        order = np.arange(self.expected_images)\n        np.random.RandomState(123).shuffle(order)\n        return order\n\n    def add_image(self, img):\n        if self.print_progress and self.cur_images % self.progress_interval == 0:\n            print('%d / %d\\r' % (self.cur_images, self.expected_images), end='', flush=True)\n        if self.shape is None:\n            self.shape = img.shape\n            self.resolution_log2 = int(np.log2(self.shape[1]))\n            assert self.shape[0] in [1, 3]\n            assert self.shape[1] == self.shape[2]\n            assert self.shape[1] == 2**self.resolution_log2\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for lod in range(self.resolution_log2 - 1):\n                tfr_file = self.tfr_prefix + '-r%02d.tfrecords' % (self.resolution_log2 - lod)\n                self.tfr_writers.append(tf.python_io.TFRecordWriter(tfr_file, tfr_opt))\n        assert img.shape == self.shape\n        for lod, tfr_writer in enumerate(self.tfr_writers):\n            if lod:\n                img = img.astype(np.float32)\n                img = (img[:, 0::2, 0::2] + img[:, 0::2, 1::2] + img[:, 1::2, 0::2] + img[:, 1::2, 1::2]) * 0.25\n            quant = np.rint(img).clip(0, 255).astype(np.uint8)\n            ex = tf.train.Example(features=tf.train.Features(feature={\n                'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=quant.shape)),\n                'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[quant.tostring()]))}))\n            tfr_writer.write(ex.SerializeToString())\n        self.cur_images += 1\n\n    def add_labels(self, labels):\n        if self.print_progress:\n            print('%-40s\\r' % 'Saving labels...', end='', flush=True)\n        assert labels.shape[0] == self.cur_images\n        with open(self.tfr_prefix + '-rxx.labels', 'wb') as f:\n            np.save(f, labels.astype(np.float32))\n\n    def __enter__(self):\n        return self\n\n    def __exit__(self, *args):\n        self.close()\n\n#----------------------------------------------------------------------------\n\nclass ExceptionInfo(object):\n    def __init__(self):\n        self.value = sys.exc_info()[1]\n        self.traceback = traceback.format_exc()\n\n#----------------------------------------------------------------------------\n\nclass WorkerThread(threading.Thread):\n    def __init__(self, task_queue):\n        threading.Thread.__init__(self)\n        self.task_queue = task_queue\n\n    def run(self):\n        while True:\n            func, args, result_queue = self.task_queue.get()\n            if func is None:\n                break\n            try:\n                result = func(*args)\n            except:\n                result = ExceptionInfo()\n            result_queue.put((result, args))\n\n#----------------------------------------------------------------------------\n\nclass ThreadPool(object):\n    def __init__(self, num_threads):\n        assert num_threads >= 1\n        self.task_queue = Queue.Queue()\n        self.result_queues = dict()\n        self.num_threads = num_threads\n        for _idx in range(self.num_threads):\n            thread = WorkerThread(self.task_queue)\n            thread.daemon = True\n            thread.start()\n\n    def add_task(self, func, args=()):\n        assert hasattr(func, '__call__') # must be a function\n        if 
func not in self.result_queues:\n            self.result_queues[func] = Queue.Queue()\n        self.task_queue.put((func, args, self.result_queues[func]))\n\n    def get_result(self, func): # returns (result, args)\n        result, args = self.result_queues[func].get()\n        if isinstance(result, ExceptionInfo):\n            print('\\n\\nWorker thread caught an exception:\\n' + result.traceback)\n            raise result.value\n        return result, args\n\n    def finish(self):\n        for _idx in range(self.num_threads):\n            self.task_queue.put((None, (), None))\n\n    def __enter__(self): # for 'with' statement\n        return self\n\n    def __exit__(self, *excinfo):\n        self.finish()\n\n    def process_items_concurrently(self, item_iterator, process_func=lambda x: x, pre_func=lambda x: x, post_func=lambda x: x, max_items_in_flight=None):\n        if max_items_in_flight is None: max_items_in_flight = self.num_threads * 4\n        assert max_items_in_flight >= 1\n        results = []\n        retire_idx = [0]\n\n        def task_func(prepared, _idx):\n            return process_func(prepared)\n\n        def retire_result():\n            processed, (_prepared, idx) = self.get_result(task_func)\n            results[idx] = processed\n            while retire_idx[0] < len(results) and results[retire_idx[0]] is not None:\n                yield post_func(results[retire_idx[0]])\n                results[retire_idx[0]] = None\n                retire_idx[0] += 1\n\n        for idx, item in enumerate(item_iterator):\n            prepared = pre_func(item)\n            results.append(None)\n            self.add_task(func=task_func, args=(prepared, idx))\n            while retire_idx[0] < idx - max_items_in_flight + 2:\n                for res in retire_result(): yield res\n        while retire_idx[0] < len(results):\n            for res in retire_result(): yield res\n\n#----------------------------------------------------------------------------\n\ndef display(tfrecord_dir):\n    print('Loading dataset \"%s\"' % tfrecord_dir)\n    tflib.init_tf({'gpu_options.allow_growth': True})\n    dset = dataset.TFRecordDataset(tfrecord_dir, max_label_size='full', repeat=False, shuffle_mb=0)\n    tflib.init_uninitialized_vars()\n    import cv2  # pip install opencv-python\n\n    idx = 0\n    while True:\n        try:\n            images, labels = dset.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            break\n        if idx == 0:\n            print('Displaying images')\n            cv2.namedWindow('dataset_tool')\n            print('Press SPACE or ENTER to advance, ESC to exit')\n        print('\\nidx = %-8d\\nlabel = %s' % (idx, labels[0].tolist()))\n        cv2.imshow('dataset_tool', images[0].transpose(1, 2, 0)[:, :, ::-1]) # CHW => HWC, RGB => BGR\n        idx += 1\n        if cv2.waitKey() == 27:\n            break\n    print('\\nDisplayed %d images.' 
% idx)\n\n#----------------------------------------------------------------------------\n\ndef extract(tfrecord_dir, output_dir):\n    print('Loading dataset \"%s\"' % tfrecord_dir)\n    tflib.init_tf({'gpu_options.allow_growth': True})\n    dset = dataset.TFRecordDataset(tfrecord_dir, max_label_size=0, repeat=False, shuffle_mb=0)\n    tflib.init_uninitialized_vars()\n\n    print('Extracting images to \"%s\"' % output_dir)\n    if not os.path.isdir(output_dir):\n        os.makedirs(output_dir)\n    idx = 0\n    while True:\n        if idx % 10 == 0:\n            print('%d\\r' % idx, end='', flush=True)\n        try:\n            images, _labels = dset.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            break\n        if images.shape[1] == 1:\n            img = PIL.Image.fromarray(images[0][0], 'L')\n        else:\n            img = PIL.Image.fromarray(images[0].transpose(1, 2, 0), 'RGB')\n        img.save(os.path.join(output_dir, 'img%08d.png' % idx))\n        idx += 1\n    print('Extracted %d images.' % idx)\n\n#----------------------------------------------------------------------------\n\ndef compare(tfrecord_dir_a, tfrecord_dir_b, ignore_labels):\n    max_label_size = 0 if ignore_labels else 'full'\n    print('Loading dataset \"%s\"' % tfrecord_dir_a)\n    tflib.init_tf({'gpu_options.allow_growth': True})\n    dset_a = dataset.TFRecordDataset(tfrecord_dir_a, max_label_size=max_label_size, repeat=False, shuffle_mb=0)\n    print('Loading dataset \"%s\"' % tfrecord_dir_b)\n    dset_b = dataset.TFRecordDataset(tfrecord_dir_b, max_label_size=max_label_size, repeat=False, shuffle_mb=0)\n    tflib.init_uninitialized_vars()\n\n    print('Comparing datasets')\n    idx = 0\n    identical_images = 0\n    identical_labels = 0\n    while True:\n        if idx % 100 == 0:\n            print('%d\\r' % idx, end='', flush=True)\n        try:\n            images_a, labels_a = dset_a.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            images_a, labels_a = None, None\n        try:\n            images_b, labels_b = dset_b.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            images_b, labels_b = None, None\n        if images_a is None or images_b is None:\n            if images_a is not None or images_b is not None:\n                print('Datasets contain different number of images')\n            break\n        if images_a.shape == images_b.shape and np.all(images_a == images_b):\n            identical_images += 1\n        else:\n            print('Image %d is different' % idx)\n        if labels_a.shape == labels_b.shape and np.all(labels_a == labels_b):\n            identical_labels += 1\n        else:\n            print('Label %d is different' % idx)\n        idx += 1\n    print('Identical images: %d / %d' % (identical_images, idx))\n    if not ignore_labels:\n        print('Identical labels: %d / %d' % (identical_labels, idx))\n\n#----------------------------------------------------------------------------\n\ndef create_mnist(tfrecord_dir, mnist_dir):\n    print('Loading MNIST from \"%s\"' % mnist_dir)\n    import gzip\n    with gzip.open(os.path.join(mnist_dir, 'train-images-idx3-ubyte.gz'), 'rb') as file:\n        images = np.frombuffer(file.read(), np.uint8, offset=16)\n    with gzip.open(os.path.join(mnist_dir, 'train-labels-idx1-ubyte.gz'), 'rb') as file:\n        labels = np.frombuffer(file.read(), np.uint8, offset=8)\n    images = images.reshape(-1, 1, 28, 28)\n    images = np.pad(images, [(0,0), (0,0), (2,2), (2,2)], 
'constant', constant_values=0)\n    assert images.shape == (60000, 1, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (60000,) and labels.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_mnistrgb(tfrecord_dir, mnist_dir, num_images=1000000, random_seed=123):\n    print('Loading MNIST from \"%s\"' % mnist_dir)\n    import gzip\n    with gzip.open(os.path.join(mnist_dir, 'train-images-idx3-ubyte.gz'), 'rb') as file:\n        images = np.frombuffer(file.read(), np.uint8, offset=16)\n    images = images.reshape(-1, 28, 28)\n    images = np.pad(images, [(0,0), (2,2), (2,2)], 'constant', constant_values=0)\n    assert images.shape == (60000, 32, 32) and images.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n\n    with TFRecordExporter(tfrecord_dir, num_images) as tfr:\n        rnd = np.random.RandomState(random_seed)\n        for _idx in range(num_images):\n            tfr.add_image(images[rnd.randint(images.shape[0], size=3)])\n\n#----------------------------------------------------------------------------\n\ndef create_cifar10(tfrecord_dir, cifar10_dir):\n    print('Loading CIFAR-10 from \"%s\"' % cifar10_dir)\n    import pickle\n    images = []\n    labels = []\n    for batch in range(1, 6):\n        with open(os.path.join(cifar10_dir, 'data_batch_%d' % batch), 'rb') as file:\n            data = pickle.load(file, encoding='latin1')\n        images.append(data['data'].reshape(-1, 3, 32, 32))\n        labels.append(data['labels'])\n    images = np.concatenate(images)\n    labels = np.concatenate(labels)\n    assert images.shape == (50000, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (50000,) and labels.dtype == np.int32\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_cifar100(tfrecord_dir, cifar100_dir):\n    print('Loading CIFAR-100 from \"%s\"' % cifar100_dir)\n    import pickle\n    with open(os.path.join(cifar100_dir, 'train'), 'rb') as file:\n        data = pickle.load(file, encoding='latin1')\n    images = data['data'].reshape(-1, 3, 32, 32)\n    labels = np.array(data['fine_labels'])\n    assert images.shape == (50000, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (50000,) and labels.dtype == np.int32\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 99\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with 
TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_svhn(tfrecord_dir, svhn_dir):\n    print('Loading SVHN from \"%s\"' % svhn_dir)\n    import pickle\n    images = []\n    labels = []\n    for batch in range(1, 4):\n        with open(os.path.join(svhn_dir, 'train_%d.pkl' % batch), 'rb') as file:\n            data = pickle.load(file, encoding='latin1')\n        images.append(data[0])\n        labels.append(data[1])\n    images = np.concatenate(images)\n    labels = np.concatenate(labels)\n    assert images.shape == (73257, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (73257,) and labels.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_lsun(tfrecord_dir, lmdb_dir, resolution=256, max_images=None):\n    print('Loading LSUN dataset from \"%s\"' % lmdb_dir)\n    import lmdb # pip install lmdb # pylint: disable=import-error\n    import cv2 # pip install opencv-python\n    import io\n    with lmdb.open(lmdb_dir, readonly=True).begin(write=False) as txn:\n        total_images = txn.stat()['entries'] # pylint: disable=no-value-for-parameter\n        if max_images is None:\n            max_images = total_images\n        with TFRecordExporter(tfrecord_dir, max_images) as tfr:\n            for _idx, (_key, value) in enumerate(txn.cursor()):\n                try:\n                    try:\n                        img = cv2.imdecode(np.fromstring(value, dtype=np.uint8), 1)\n                        if img is None:\n                            raise IOError('cv2.imdecode failed')\n                        img = img[:, :, ::-1] # BGR => RGB\n                    except IOError:\n                        img = np.asarray(PIL.Image.open(io.BytesIO(value)))\n                    crop = np.min(img.shape[:2])\n                    img = img[(img.shape[0] - crop) // 2 : (img.shape[0] + crop) // 2, (img.shape[1] - crop) // 2 : (img.shape[1] + crop) // 2]\n                    img = PIL.Image.fromarray(img, 'RGB')\n                    img = img.resize((resolution, resolution), PIL.Image.ANTIALIAS)\n                    img = np.asarray(img)\n                    img = img.transpose([2, 0, 1]) # HWC => CHW\n                    tfr.add_image(img)\n                except:\n                    print(sys.exc_info()[1])\n                if tfr.cur_images == max_images:\n                    break\n\n#----------------------------------------------------------------------------\n\ndef create_lsun_wide(tfrecord_dir, lmdb_dir, width=512, height=384, max_images=None):\n    assert width == 2 ** int(np.round(np.log2(width)))\n    assert height <= width\n    print('Loading LSUN dataset from \"%s\"' % lmdb_dir)\n    import lmdb # pip install lmdb # pylint: disable=import-error\n    import cv2 # pip install 
opencv-python\n    import io\n    with lmdb.open(lmdb_dir, readonly=True).begin(write=False) as txn:\n        total_images = txn.stat()['entries'] # pylint: disable=no-value-for-parameter\n        if max_images is None:\n            max_images = total_images\n        with TFRecordExporter(tfrecord_dir, max_images, print_progress=False) as tfr:\n            for idx, (_key, value) in enumerate(txn.cursor()):\n                try:\n                    try:\n                        img = cv2.imdecode(np.fromstring(value, dtype=np.uint8), 1)\n                        if img is None:\n                            raise IOError('cv2.imdecode failed')\n                        img = img[:, :, ::-1] # BGR => RGB\n                    except IOError:\n                        img = np.asarray(PIL.Image.open(io.BytesIO(value)))\n\n                    ch = int(np.round(width * img.shape[0] / img.shape[1]))\n                    if img.shape[1] < width or ch < height:\n                        continue\n\n                    img = img[(img.shape[0] - ch) // 2 : (img.shape[0] + ch) // 2]\n                    img = PIL.Image.fromarray(img, 'RGB')\n                    img = img.resize((width, height), PIL.Image.ANTIALIAS)\n                    img = np.asarray(img)\n                    img = img.transpose([2, 0, 1]) # HWC => CHW\n\n                    canvas = np.zeros([3, width, width], dtype=np.uint8)\n                    canvas[:, (width - height) // 2 : (width + height) // 2] = img\n                    tfr.add_image(canvas)\n                    print('\\r%d / %d => %d ' % (idx + 1, total_images, tfr.cur_images), end='')\n\n                except:\n                    print(sys.exc_info()[1])\n                if tfr.cur_images == max_images:\n                    break\n    print()\n\n#----------------------------------------------------------------------------\n\ndef create_celeba(tfrecord_dir, celeba_dir, cx=89, cy=121):\n    print('Loading CelebA from \"%s\"' % celeba_dir)\n    glob_pattern = os.path.join(celeba_dir, 'img_align_celeba_png', '*.png')\n    image_filenames = sorted(glob.glob(glob_pattern))\n    expected_images = 202599\n    if len(image_filenames) != expected_images:\n        error('Expected to find %d images' % expected_images)\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            img = np.asarray(PIL.Image.open(image_filenames[order[idx]]))\n            assert img.shape == (218, 178, 3)\n            img = img[cy - 64 : cy + 64, cx - 64 : cx + 64]\n            img = img.transpose(2, 0, 1) # HWC => CHW\n            tfr.add_image(img)\n\n#----------------------------------------------------------------------------\n\ndef create_from_images(tfrecord_dir, image_dir, shuffle):\n    print('Loading images from \"%s\"' % image_dir)\n    image_filenames = sorted(glob.glob(os.path.join(image_dir, '*')))\n    if len(image_filenames) == 0:\n        error('No input images found')\n\n    img = np.asarray(PIL.Image.open(image_filenames[0]))\n    resolution = img.shape[0]\n    channels = img.shape[2] if img.ndim == 3 else 1\n    if img.shape[1] != resolution:\n        error('Input images must have the same width and height')\n    if resolution != 2 ** int(np.floor(np.log2(resolution))):\n        error('Input image resolution must be a power-of-two')\n    if channels not in [1, 3]:\n        error('Input images must be stored as RGB or grayscale')\n\n    with TFRecordExporter(tfrecord_dir, 
len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order() if shuffle else np.arange(len(image_filenames))\n        for idx in range(order.size):\n            img = np.asarray(PIL.Image.open(image_filenames[order[idx]]))\n            if channels == 1:\n                img = img[np.newaxis, :, :] # HW => CHW\n            else:\n                img = img.transpose([2, 0, 1]) # HWC => CHW\n            tfr.add_image(img)\n\n#----------------------------------------------------------------------------\n\ndef create_from_hdf5(tfrecord_dir, hdf5_filename, shuffle):\n    print('Loading HDF5 archive from \"%s\"' % hdf5_filename)\n    import h5py # conda install h5py\n    with h5py.File(hdf5_filename, 'r') as hdf5_file:\n        hdf5_data = max([value for key, value in hdf5_file.items() if key.startswith('data')], key=lambda lod: lod.shape[3])\n        with TFRecordExporter(tfrecord_dir, hdf5_data.shape[0]) as tfr:\n            order = tfr.choose_shuffled_order() if shuffle else np.arange(hdf5_data.shape[0])\n            for idx in range(order.size):\n                tfr.add_image(hdf5_data[order[idx]])\n            npy_filename = os.path.splitext(hdf5_filename)[0] + '-labels.npy'\n            if os.path.isfile(npy_filename):\n                tfr.add_labels(np.load(npy_filename)[order])\n\n#----------------------------------------------------------------------------\n\ndef execute_cmdline(argv):\n    prog = argv[0]\n    parser = argparse.ArgumentParser(\n        prog        = prog,\n        description = 'Tool for creating multi-resolution TFRecords datasets for StyleGAN and ProGAN.',\n        epilog      = 'Type \"%s <command> -h\" for more information.' % prog)\n\n    subparsers = parser.add_subparsers(dest='command')\n    subparsers.required = True\n    def add_command(cmd, desc, example=None):\n        epilog = 'Example: %s %s' % (prog, example) if example is not None else None\n        return subparsers.add_parser(cmd, description=desc, help=desc, epilog=epilog)\n\n    p = add_command(    'display',          'Display images in dataset.',\n                                            'display datasets/mnist')\n    p.add_argument(     'tfrecord_dir',     help='Directory containing dataset')\n\n    p = add_command(    'extract',          'Extract images from dataset.',\n                                            'extract datasets/mnist mnist-images')\n    p.add_argument(     'tfrecord_dir',     help='Directory containing dataset')\n    p.add_argument(     'output_dir',       help='Directory to extract the images into')\n\n    p = add_command(    'compare',          'Compare two datasets.',\n                                            'compare datasets/mydataset datasets/mnist')\n    p.add_argument(     'tfrecord_dir_a',   help='Directory containing first dataset')\n    p.add_argument(     'tfrecord_dir_b',   help='Directory containing second dataset')\n    p.add_argument(     '--ignore_labels',  help='Ignore labels (default: 0)', type=int, default=0)\n\n    p = add_command(    'create_mnist',     'Create dataset for MNIST.',\n                                            'create_mnist datasets/mnist ~/downloads/mnist')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'mnist_dir',        help='Directory containing MNIST')\n\n    p = add_command(    'create_mnistrgb',  'Create dataset for MNIST-RGB.',\n                                            'create_mnistrgb datasets/mnistrgb ~/downloads/mnist')\n    p.add_argument(   
  'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'mnist_dir',        help='Directory containing MNIST')\n    p.add_argument(     '--num_images',     help='Number of composite images to create (default: 1000000)', type=int, default=1000000)\n    p.add_argument(     '--random_seed',    help='Random seed (default: 123)', type=int, default=123)\n\n    p = add_command(    'create_cifar10',   'Create dataset for CIFAR-10.',\n                                            'create_cifar10 datasets/cifar10 ~/downloads/cifar10')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'cifar10_dir',      help='Directory containing CIFAR-10')\n\n    p = add_command(    'create_cifar100',  'Create dataset for CIFAR-100.',\n                                            'create_cifar100 datasets/cifar100 ~/downloads/cifar100')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'cifar100_dir',     help='Directory containing CIFAR-100')\n\n    p = add_command(    'create_svhn',      'Create dataset for SVHN.',\n                                            'create_svhn datasets/svhn ~/downloads/svhn')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'svhn_dir',         help='Directory containing SVHN')\n\n    p = add_command(    'create_lsun',      'Create dataset for single LSUN category.',\n                                            'create_lsun datasets/lsun-car-100k ~/downloads/lsun/car_lmdb --resolution 256 --max_images 100000')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'lmdb_dir',         help='Directory containing LMDB database')\n    p.add_argument(     '--resolution',     help='Output resolution (default: 256)', type=int, default=256)\n    p.add_argument(     '--max_images',     help='Maximum number of images (default: none)', type=int, default=None)\n\n    p = add_command(    'create_lsun_wide', 'Create LSUN dataset with non-square aspect ratio.',\n                                            'create_lsun_wide datasets/lsun-car-512x384 ~/downloads/lsun/car_lmdb --width 512 --height 384')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'lmdb_dir',         help='Directory containing LMDB database')\n    p.add_argument(     '--width',          help='Output width (default: 512)', type=int, default=512)\n    p.add_argument(     '--height',         help='Output height (default: 384)', type=int, default=384)\n    p.add_argument(     '--max_images',     help='Maximum number of images (default: none)', type=int, default=None)\n\n    p = add_command(    'create_celeba',    'Create dataset for CelebA.',\n                                            'create_celeba datasets/celeba ~/downloads/celeba')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'celeba_dir',       help='Directory containing CelebA')\n    p.add_argument(     '--cx',             help='Center X coordinate (default: 89)', type=int, default=89)\n    p.add_argument(     '--cy',             help='Center Y coordinate (default: 121)', type=int, default=121)\n\n    p = add_command(    'create_from_images', 'Create dataset from a directory full of images.',\n                                            'create_from_images 
datasets/mydataset myimagedir')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'image_dir',        help='Directory containing the images')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    p = add_command(    'create_from_hdf5', 'Create dataset from legacy HDF5 archive.',\n                                            'create_from_hdf5 datasets/celebahq ~/downloads/celeba-hq-1024x1024.h5')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'hdf5_filename',    help='HDF5 archive containing the images')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    args = parser.parse_args(argv[1:] if len(argv) > 1 else ['-h'])\n    func = globals()[args.command]\n    del args.command\n    func(**vars(args))\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    execute_cmdline(sys.argv)\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nfrom . import submission\n\nfrom .submission.run_context import RunContext\n\nfrom .submission.submit import SubmitTarget\nfrom .submission.submit import PathType\nfrom .submission.submit import SubmitConfig\nfrom .submission.submit import submit_run\nfrom .submission.submit import get_path_from_template\nfrom .submission.submit import convert_path\nfrom .submission.submit import make_run_dir_path\n\nfrom .util import EasyDict\n\nsubmit_config: SubmitConfig = None # Package level variable for SubmitConfig which is only valid when inside the run function.\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/submission/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nfrom . import run_context\nfrom . import submit\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/submission/internal/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nfrom . import local\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/submission/internal/local.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nclass TargetOptions():\n    def __init__(self):\n        self.do_not_copy_source_files = False\n\nclass Target():\n    def __init__(self):\n        pass\n\n    def finalize_submit_config(self, submit_config, host_run_dir):\n        print ('Local submit ', end='', flush=True)\n        submit_config.run_dir = host_run_dir\n\n    def submit(self, submit_config, host_run_dir):\n        from ..submit import run_wrapper, convert_path\n        print('- run_dir: %s' % convert_path(submit_config.run_dir), flush=True)\n        return run_wrapper(submit_config)\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/submission/run_context.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Helpers for managing the run/training loop.\"\"\"\n\nimport datetime\nimport json\nimport os\nimport pprint\nimport time\nimport types\n\nfrom typing import Any\n\nfrom . import submit\n\n# Singleton RunContext\n_run_context = None\n\nclass RunContext(object):\n    \"\"\"Helper class for managing the run/training loop.\n\n    The context will hide the implementation details of a basic run/training loop.\n    It will set things up properly, tell if run should be stopped, and then cleans up.\n    User should call update periodically and use should_stop to determine if run should be stopped.\n\n    Args:\n        submit_config: The SubmitConfig that is used for the current run.\n        config_module: (deprecated) The whole config module that is used for the current run.\n    \"\"\"\n\n    def __init__(self, submit_config: submit.SubmitConfig, config_module: types.ModuleType = None):\n        global _run_context\n        # Only a single RunContext can be alive\n        assert _run_context is None\n        _run_context = self\n        self.submit_config = submit_config\n        self.should_stop_flag = False\n        self.has_closed = False\n        self.start_time = time.time()\n        self.last_update_time = time.time()\n        self.last_update_interval = 0.0\n        self.progress_monitor_file_path = None\n\n        # vestigial config_module support just prints a warning\n        if config_module is not None:\n            print(\"RunContext.config_module parameter support has been removed.\")\n\n        # write out details about the run to a text file\n        self.run_txt_data = {\"task_name\": submit_config.task_name, \"host_name\": submit_config.host_name, \"start_time\": datetime.datetime.now().isoformat(sep=\" \")}\n        with open(os.path.join(submit_config.run_dir, \"run.txt\"), \"w\") as f:\n            pprint.pprint(self.run_txt_data, stream=f, indent=4, width=200, compact=False)\n\n    def __enter__(self) -> \"RunContext\":\n        return self\n\n    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:\n        self.close()\n\n    def update(self, loss: Any = 0, cur_epoch: Any = 0, max_epoch: Any = None) -> None:\n        \"\"\"Do general housekeeping and keep the state of the context up-to-date.\n        Should be called often enough but not in a tight loop.\"\"\"\n        assert not self.has_closed\n\n        self.last_update_interval = time.time() - self.last_update_time\n        self.last_update_time = time.time()\n\n        if os.path.exists(os.path.join(self.submit_config.run_dir, \"abort.txt\")):\n            self.should_stop_flag = True\n\n    def should_stop(self) -> bool:\n        \"\"\"Tell whether a stopping condition has been triggered one way or another.\"\"\"\n        return self.should_stop_flag\n\n    def get_time_since_start(self) -> float:\n        \"\"\"How much time has passed since the creation of the context.\"\"\"\n        return time.time() - self.start_time\n\n    def get_time_since_last_update(self) -> float:\n        \"\"\"How much time has passed since the last call to update.\"\"\"\n        return time.time() - self.last_update_time\n\n    def get_last_update_interval(self) -> float:\n        \"\"\"How much time passed between the previous two calls to update.\"\"\"\n        
return self.last_update_interval\n\n    def close(self) -> None:\n        \"\"\"Close the context and clean up.\n        Should only be called once.\"\"\"\n        if not self.has_closed:\n            # update the run.txt with stopping time\n            self.run_txt_data[\"stop_time\"] = datetime.datetime.now().isoformat(sep=\" \")\n            with open(os.path.join(self.submit_config.run_dir, \"run.txt\"), \"w\") as f:\n                pprint.pprint(self.run_txt_data, stream=f, indent=4, width=200, compact=False)\n            self.has_closed = True\n\n            # detach the global singleton\n            global _run_context\n            if _run_context is self:\n                _run_context = None\n\n    @staticmethod\n    def get():\n        import dnnlib\n        if _run_context is not None:\n            return _run_context\n        return RunContext(dnnlib.submit_config)\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/submission/submit.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Submit a function to be run either locally or in a computing cluster.\"\"\"\n\nimport copy\nimport inspect\nimport os\nimport pathlib\nimport pickle\nimport platform\nimport pprint\nimport re\nimport shutil\nimport sys\nimport time\nimport traceback\n\nfrom enum import Enum\n\nfrom .. import util\nfrom ..util import EasyDict\n\nfrom . import internal\n\nclass SubmitTarget(Enum):\n    \"\"\"The target where the function should be run.\n\n    LOCAL: Run it locally.\n    \"\"\"\n    LOCAL = 1\n\n\nclass PathType(Enum):\n    \"\"\"Determines in which format should a path be formatted.\n\n    WINDOWS: Format with Windows style.\n    LINUX: Format with Linux/Posix style.\n    AUTO: Use current OS type to select either WINDOWS or LINUX.\n    \"\"\"\n    WINDOWS = 1\n    LINUX = 2\n    AUTO = 3\n\n\nclass PlatformExtras:\n    \"\"\"A mixed bag of values used by dnnlib heuristics.\n\n    Attributes:\n\n        data_reader_buffer_size: Used by DataReader to size internal shared memory buffers.\n        data_reader_process_count: Number of worker processes to spawn (zero for single thread operation)\n    \"\"\"\n    def __init__(self):\n        self.data_reader_buffer_size = 1<<30    # 1 GB\n        self.data_reader_process_count = 0      # single threaded default\n\n\n_user_name_override = None\n\nclass SubmitConfig(util.EasyDict):\n    \"\"\"Strongly typed config dict needed to submit runs.\n\n    Attributes:\n        run_dir_root: Path to the run dir root. Can be optionally templated with tags. Needs to always be run through get_path_from_template.\n        run_desc: Description of the run. Will be used in the run dir and task name.\n        run_dir_ignore: List of file patterns used to ignore files when copying files to the run dir.\n        run_dir_extra_files: List of (abs_path, rel_path) tuples of file paths. rel_path root will be the src directory inside the run dir.\n        submit_target: Submit target enum value. Used to select where the run is actually launched.\n        num_gpus: Number of GPUs used/requested for the run.\n        print_info: Whether to print debug information when submitting.\n        local.do_not_copy_source_files: Do not copy source files from the working directory to the run dir.\n        run_id: Automatically populated value during submit.\n        run_name: Automatically populated value during submit.\n        run_dir: Automatically populated value during submit.\n        run_func_name: Automatically populated value during submit.\n        run_func_kwargs: Automatically populated value during submit.\n        user_name: Automatically populated value during submit. Can be set by the user which will then override the automatic value.\n        task_name: Automatically populated value during submit.\n        host_name: Automatically populated value during submit.\n        platform_extras: Automatically populated values during submit.  
Used by various dnnlib libraries such as the DataReader class.\n    \"\"\"\n\n    def __init__(self):\n        super().__init__()\n\n        # run (set these)\n        self.run_dir_root = \"\"  # should always be passed through get_path_from_template\n        self.run_desc = \"\"\n        self.run_dir_ignore = [\"__pycache__\", \"*.pyproj\", \"*.sln\", \"*.suo\", \".cache\", \".idea\", \".vs\", \".vscode\", \"_cudacache\"]\n        self.run_dir_extra_files = []\n\n        # submit (set these)\n        self.submit_target = SubmitTarget.LOCAL\n        self.num_gpus = 1\n        self.print_info = False\n        self.nvprof = False\n        self.local = internal.local.TargetOptions()\n        self.datasets = []\n\n        # (automatically populated)\n        self.run_id = None\n        self.run_name = None\n        self.run_dir = None\n        self.run_func_name = None\n        self.run_func_kwargs = None\n        self.user_name = None\n        self.task_name = None\n        self.host_name = \"localhost\"\n        self.platform_extras = PlatformExtras()\n\n\ndef get_path_from_template(path_template: str, path_type: PathType = PathType.AUTO) -> str:\n    \"\"\"Replace tags in the given path template and return either Windows or Linux formatted path.\"\"\"\n    # automatically select path type depending on running OS\n    if path_type == PathType.AUTO:\n        if platform.system() == \"Windows\":\n            path_type = PathType.WINDOWS\n        elif platform.system() == \"Linux\":\n            path_type = PathType.LINUX\n        else:\n            raise RuntimeError(\"Unknown platform\")\n\n    path_template = path_template.replace(\"<USERNAME>\", get_user_name())\n\n    # return correctly formatted path\n    if path_type == PathType.WINDOWS:\n        return str(pathlib.PureWindowsPath(path_template))\n    elif path_type == PathType.LINUX:\n        return str(pathlib.PurePosixPath(path_template))\n    else:\n        raise RuntimeError(\"Unknown platform\")\n\n\ndef get_template_from_path(path: str) -> str:\n    \"\"\"Convert a normal path back to its template representation.\"\"\"\n    path = path.replace(\"\\\\\", \"/\")\n    return path\n\n\ndef convert_path(path: str, path_type: PathType = PathType.AUTO) -> str:\n    \"\"\"Convert a normal path to template and the convert it back to a normal path with given path type.\"\"\"\n    path_template = get_template_from_path(path)\n    path = get_path_from_template(path_template, path_type)\n    return path\n\n\ndef set_user_name_override(name: str) -> None:\n    \"\"\"Set the global username override value.\"\"\"\n    global _user_name_override\n    _user_name_override = name\n\n\ndef get_user_name():\n    \"\"\"Get the current user name.\"\"\"\n    if _user_name_override is not None:\n        return _user_name_override\n    elif platform.system() == \"Windows\":\n        return os.getlogin()\n    elif platform.system() == \"Linux\":\n        try:\n            import pwd\n            return pwd.getpwuid(os.geteuid()).pw_name\n        except:\n            return \"unknown\"\n    else:\n        raise RuntimeError(\"Unknown platform\")\n\n\ndef make_run_dir_path(*paths):\n    \"\"\"Make a path/filename that resides under the current submit run_dir.\n\n    Args:\n        *paths: Path components to be passed to os.path.join\n\n    Returns:\n        A file/dirname rooted at submit_config.run_dir.  
If there's no\n        submit_config or run_dir, the base directory is the current\n        working directory.\n\n    E.g., `os.path.join(dnnlib.submit_config.run_dir, \"output.txt\"))`\n    \"\"\"\n    import dnnlib\n    if (dnnlib.submit_config is None) or (dnnlib.submit_config.run_dir is None):\n        return os.path.join(os.getcwd(), *paths)\n    return os.path.join(dnnlib.submit_config.run_dir, *paths)\n\n\ndef _create_run_dir_local(submit_config: SubmitConfig) -> str:\n    \"\"\"Create a new run dir with increasing ID number at the start.\"\"\"\n    run_dir_root = get_path_from_template(submit_config.run_dir_root, PathType.AUTO)\n\n    if not os.path.exists(run_dir_root):\n        os.makedirs(run_dir_root)\n\n    submit_config.run_id = _get_next_run_id_local(run_dir_root)\n    submit_config.run_name = \"{0:05d}-{1}\".format(submit_config.run_id, submit_config.run_desc)\n    run_dir = os.path.join(run_dir_root, submit_config.run_name)\n\n    if os.path.exists(run_dir):\n        raise RuntimeError(\"The run dir already exists! ({0})\".format(run_dir))\n\n    os.makedirs(run_dir)\n\n    return run_dir\n\n\ndef _get_next_run_id_local(run_dir_root: str) -> int:\n    \"\"\"Reads all directory names in a given directory (non-recursive) and returns the next (increasing) run id. Assumes IDs are numbers at the start of the directory names.\"\"\"\n    dir_names = [d for d in os.listdir(run_dir_root) if os.path.isdir(os.path.join(run_dir_root, d))]\n    r = re.compile(\"^\\\\d+\")  # match one or more digits at the start of the string\n    run_id = 0\n\n    for dir_name in dir_names:\n        m = r.match(dir_name)\n\n        if m is not None:\n            i = int(m.group())\n            run_id = max(run_id, i + 1)\n\n    return run_id\n\n\ndef _populate_run_dir(submit_config: SubmitConfig, run_dir: str) -> None:\n    \"\"\"Copy all necessary files into the run dir. Assumes that the dir exists, is local, and is writable.\"\"\"\n    pickle.dump(submit_config, open(os.path.join(run_dir, \"submit_config.pkl\"), \"wb\"))\n    with open(os.path.join(run_dir, \"submit_config.txt\"), \"w\") as f:\n        pprint.pprint(submit_config, stream=f, indent=4, width=200, compact=False)\n\n    if (submit_config.submit_target == SubmitTarget.LOCAL) and submit_config.local.do_not_copy_source_files:\n        return\n\n    files = []\n\n    run_func_module_dir_path = util.get_module_dir_by_obj_name(submit_config.run_func_name)\n    assert '.' 
in submit_config.run_func_name\n    for _idx in range(submit_config.run_func_name.count('.') - 1):\n        run_func_module_dir_path = os.path.dirname(run_func_module_dir_path)\n    files += util.list_dir_recursively_with_ignore(run_func_module_dir_path, ignores=submit_config.run_dir_ignore, add_base_to_relative=False)\n\n    dnnlib_module_dir_path = util.get_module_dir_by_obj_name(\"dnnlib\")\n    files += util.list_dir_recursively_with_ignore(dnnlib_module_dir_path, ignores=submit_config.run_dir_ignore, add_base_to_relative=True)\n\n    files += submit_config.run_dir_extra_files\n\n    files = [(f[0], os.path.join(run_dir, \"src\", f[1])) for f in files]\n    files += [(os.path.join(dnnlib_module_dir_path, \"submission\", \"internal\", \"run.py\"), os.path.join(run_dir, \"run.py\"))]\n\n    util.copy_files_and_create_dirs(files)\n\n\n\ndef run_wrapper(submit_config: SubmitConfig) -> None:\n    \"\"\"Wrap the actual run function call for handling logging, exceptions, typing, etc.\"\"\"\n    is_local = submit_config.submit_target == SubmitTarget.LOCAL\n\n    # when running locally, redirect stderr to stdout, log stdout to a file, and force flushing\n    if is_local:\n        logger = util.Logger(file_name=os.path.join(submit_config.run_dir, \"log.txt\"), file_mode=\"w\", should_flush=True)\n    else:  # when running in a cluster, redirect stderr to stdout, and just force flushing (log writing is handled by run.sh)\n        logger = util.Logger(file_name=None, should_flush=True)\n\n    import dnnlib\n    dnnlib.submit_config = submit_config\n\n    exit_with_errcode = False\n    try:\n        print(\"dnnlib: Running {0}() on {1}...\".format(submit_config.run_func_name, submit_config.host_name))\n        start_time = time.time()\n\n        run_func_obj = util.get_obj_by_name(submit_config.run_func_name)\n        assert callable(run_func_obj)\n        sig = inspect.signature(run_func_obj)\n        if 'submit_config' in sig.parameters:\n            run_func_obj(submit_config=submit_config, **submit_config.run_func_kwargs)\n        else:\n            run_func_obj(**submit_config.run_func_kwargs)\n\n        print(\"dnnlib: Finished {0}() in {1}.\".format(submit_config.run_func_name, util.format_time(time.time() - start_time)))\n    except:\n        if is_local:\n            raise\n        else:\n            traceback.print_exc()\n\n            log_src = os.path.join(submit_config.run_dir, \"log.txt\")\n            log_dst = os.path.join(get_path_from_template(submit_config.run_dir_root), \"{0}-error.txt\".format(submit_config.run_name))\n            shutil.copyfile(log_src, log_dst)\n\n            # Defer sys.exit(1) to happen after we close the logs and create a _finished.txt\n            exit_with_errcode = True\n    finally:\n        open(os.path.join(submit_config.run_dir, \"_finished.txt\"), \"w\").close()\n\n    dnnlib.RunContext.get().close()\n    dnnlib.submit_config = None\n    logger.close()\n\n    # If we hit an error, get out of the script now and signal the error\n    # to whatever process that started this script.\n    if exit_with_errcode:\n        sys.exit(1)\n\n    return submit_config\n\n\ndef submit_run(submit_config: SubmitConfig, run_func_name: str, **run_func_kwargs) -> None:\n    \"\"\"Create a run dir, gather files related to the run, copy files to the run dir, and launch the run in appropriate place.\"\"\"\n    submit_config = copy.deepcopy(submit_config)\n\n    submit_target = submit_config.submit_target\n    farm = None\n    if submit_target == SubmitTarget.LOCAL:\n     
   farm = internal.local.Target()\n    assert farm is not None # unknown target\n\n    # Disallow submitting jobs with zero num_gpus.\n    if (submit_config.num_gpus is None) or (submit_config.num_gpus == 0):\n        raise RuntimeError(\"submit_config.num_gpus must be set to a non-zero value\")\n\n    if submit_config.user_name is None:\n        submit_config.user_name = get_user_name()\n\n    submit_config.run_func_name = run_func_name\n    submit_config.run_func_kwargs = run_func_kwargs\n\n    #--------------------------------------------------------------------\n    # Prepare submission by populating the run dir\n    #--------------------------------------------------------------------\n    host_run_dir = _create_run_dir_local(submit_config)\n\n    submit_config.task_name = \"{0}-{1:05d}-{2}\".format(submit_config.user_name, submit_config.run_id, submit_config.run_desc)\n    docker_valid_name_regex = \"^[a-zA-Z0-9][a-zA-Z0-9_.-]+$\"\n    if not re.match(docker_valid_name_regex, submit_config.task_name):\n        raise RuntimeError(\"Invalid task name.  Probable reason: unacceptable characters in your submit_config.run_desc.  Task name must be accepted by the following regex: \" + docker_valid_name_regex + \", got \" + submit_config.task_name)\n\n    # Farm specific preparations for a submit\n    farm.finalize_submit_config(submit_config, host_run_dir)\n    _populate_run_dir(submit_config, host_run_dir)\n    return farm.submit(submit_config, host_run_dir)\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nfrom . import autosummary\nfrom . import network\nfrom . import optimizer\nfrom . import tfutil\nfrom . import custom_ops\n\nfrom .tfutil import *\nfrom .network import Network\n\nfrom .optimizer import Optimizer\n\nfrom .custom_ops import get_plugin\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/autosummary.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Helper for adding automatically tracked values to Tensorboard.\n\nAutosummary creates an identity op that internally keeps track of the input\nvalues and automatically shows up in TensorBoard. The reported value\nrepresents an average over input components. The average is accumulated\nconstantly over time and flushed when save_summaries() is called.\n\nNotes:\n- The output tensor must be used as an input for something else in the\n  graph. Otherwise, the autosummary op will not get executed, and the average\n  value will not get accumulated.\n- It is perfectly fine to include autosummaries with the same name in\n  several places throughout the graph, even if they are executed concurrently.\n- It is ok to also pass in a python scalar or numpy array. In this case, it\n  is added to the average immediately.\n\"\"\"\n\nfrom collections import OrderedDict\nimport numpy as np\nimport tensorflow as tf\nfrom tensorboard import summary as summary_lib\nfrom tensorboard.plugins.custom_scalar import layout_pb2\n\nfrom . import tfutil\nfrom .tfutil import TfExpression\nfrom .tfutil import TfExpressionEx\n\n# Enable \"Custom scalars\" tab in TensorBoard for advanced formatting.\n# Disabled by default to reduce tfevents file size.\nenable_custom_scalars = False\n\n_dtype = tf.float64\n_vars = OrderedDict()  # name => [var, ...]\n_immediate = OrderedDict()  # name => update_op, update_value\n_finalized = False\n_merge_op = None\n\n\ndef _create_var(name: str, value_expr: TfExpression) -> TfExpression:\n    \"\"\"Internal helper for creating autosummary accumulators.\"\"\"\n    assert not _finalized\n    name_id = name.replace(\"/\", \"_\")\n    v = tf.cast(value_expr, _dtype)\n\n    if v.shape.is_fully_defined():\n        size = np.prod(v.shape.as_list())\n        size_expr = tf.constant(size, dtype=_dtype)\n    else:\n        size = None\n        size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype))\n\n    if size == 1:\n        if v.shape.ndims != 0:\n            v = tf.reshape(v, [])\n        v = [size_expr, v, tf.square(v)]\n    else:\n        v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))]\n    v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack(v), lambda: tf.zeros(3, dtype=_dtype))\n\n    with tfutil.absolute_name_scope(\"Autosummary/\" + name_id), tf.control_dependencies(None):\n        var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False)  # [sum(1), sum(x), sum(x**2)]\n    update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v))\n\n    if name in _vars:\n        _vars[name].append(var)\n    else:\n        _vars[name] = [var]\n    return update_op\n\n\ndef autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None, condition: TfExpressionEx = True) -> TfExpressionEx:\n    \"\"\"Create a new autosummary.\n\n    Args:\n        name:     Name to use in TensorBoard\n        value:    TensorFlow expression or python value to track\n        passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node.\n\n    Example use of the passthru mechanism:\n\n    n = autosummary('l2loss', loss, passthru=n)\n\n    This is a shorthand for the following code:\n\n    with 
tf.control_dependencies([autosummary('l2loss', loss)]):\n        n = tf.identity(n)\n    \"\"\"\n    tfutil.assert_tf_initialized()\n    name_id = name.replace(\"/\", \"_\")\n\n    if tfutil.is_tf_expression(value):\n        with tf.name_scope(\"summary_\" + name_id), tf.device(value.device):\n            condition = tf.convert_to_tensor(condition, name='condition')\n            update_op = tf.cond(condition, lambda: tf.group(_create_var(name, value)), tf.no_op)\n            with tf.control_dependencies([update_op]):\n                return tf.identity(value if passthru is None else passthru)\n\n    else:  # python scalar or numpy array\n        assert not tfutil.is_tf_expression(passthru)\n        assert not tfutil.is_tf_expression(condition)\n        if condition:\n            if name not in _immediate:\n                with tfutil.absolute_name_scope(\"Autosummary/\" + name_id), tf.device(None), tf.control_dependencies(None):\n                    update_value = tf.placeholder(_dtype)\n                    update_op = _create_var(name, update_value)\n                    _immediate[name] = update_op, update_value\n            update_op, update_value = _immediate[name]\n            tfutil.run(update_op, {update_value: value})\n        return value if passthru is None else passthru\n\n\ndef finalize_autosummaries() -> None:\n    \"\"\"Create the necessary ops to include autosummaries in TensorBoard report.\n    Note: This should be done only once per graph.\n    \"\"\"\n    global _finalized\n    tfutil.assert_tf_initialized()\n\n    if _finalized:\n        return None\n\n    _finalized = True\n    tfutil.init_uninitialized_vars([var for vars_list in _vars.values() for var in vars_list])\n\n    # Create summary ops.\n    with tf.device(None), tf.control_dependencies(None):\n        for name, vars_list in _vars.items():\n            name_id = name.replace(\"/\", \"_\")\n            with tfutil.absolute_name_scope(\"Autosummary/\" + name_id):\n                moments = tf.add_n(vars_list)\n                moments /= moments[0]\n                with tf.control_dependencies([moments]):  # read before resetting\n                    reset_ops = [tf.assign(var, tf.zeros(3, dtype=_dtype)) for var in vars_list]\n                    with tf.name_scope(None), tf.control_dependencies(reset_ops):  # reset before reporting\n                        mean = moments[1]\n                        std = tf.sqrt(moments[2] - tf.square(moments[1]))\n                        tf.summary.scalar(name, mean)\n                        if enable_custom_scalars:\n                            tf.summary.scalar(\"xCustomScalars/\" + name + \"/margin_lo\", mean - std)\n                            tf.summary.scalar(\"xCustomScalars/\" + name + \"/margin_hi\", mean + std)\n\n    # Setup layout for custom scalars.\n    layout = None\n    if enable_custom_scalars:\n        cat_dict = OrderedDict()\n        for series_name in sorted(_vars.keys()):\n            p = series_name.split(\"/\")\n            cat = p[0] if len(p) >= 2 else \"\"\n            chart = \"/\".join(p[1:-1]) if len(p) >= 3 else p[-1]\n            if cat not in cat_dict:\n                cat_dict[cat] = OrderedDict()\n            if chart not in cat_dict[cat]:\n                cat_dict[cat][chart] = []\n            cat_dict[cat][chart].append(series_name)\n        categories = []\n        for cat_name, chart_dict in cat_dict.items():\n            charts = []\n            for chart_name, series_names in chart_dict.items():\n                series = []\n               
 for series_name in series_names:\n                    series.append(layout_pb2.MarginChartContent.Series(\n                        value=series_name,\n                        lower=\"xCustomScalars/\" + series_name + \"/margin_lo\",\n                        upper=\"xCustomScalars/\" + series_name + \"/margin_hi\"))\n                margin = layout_pb2.MarginChartContent(series=series)\n                charts.append(layout_pb2.Chart(title=chart_name, margin=margin))\n            categories.append(layout_pb2.Category(title=cat_name, chart=charts))\n        layout = summary_lib.custom_scalar_pb(layout_pb2.Layout(category=categories))\n    return layout\n\ndef save_summaries(file_writer, global_step=None):\n    \"\"\"Call FileWriter.add_summary() with all summaries in the default graph,\n    automatically finalizing and merging them on the first call.\n    \"\"\"\n    global _merge_op\n    tfutil.assert_tf_initialized()\n\n    if _merge_op is None:\n        layout = finalize_autosummaries()\n        if layout is not None:\n            file_writer.add_summary(layout)\n        with tf.device(None), tf.control_dependencies(None):\n            _merge_op = tf.summary.merge_all()\n\n    file_writer.add_summary(_merge_op.eval(), global_step)\n"
  },
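The autosummary module above is easiest to understand from its call pattern: `autosummary()` is dropped inline into the training graph (or fed Python scalars between steps), and `save_summaries()` lazily finalizes and merges everything on its first call. Below is a minimal, hypothetical usage sketch, assuming TensorFlow 1.x and that `dnnlib.tflib` from this repository is on the path; the loss tensor and log directory are illustrative stand-ins.

```python
# Minimal sketch (not from the source): logging a training scalar with autosummary.
# Assumes TF 1.x and the dnnlib package from this repo on PYTHONPATH.
import tensorflow as tf
import dnnlib.tflib as tflib
from dnnlib.tflib import autosummary

tflib.init_tf()                                            # create default graph + session
loss = tf.reduce_mean(tf.square(tf.random_normal([64])))   # stand-in for a real loss
loss = autosummary.autosummary('Loss/l2', loss)            # record moments, pass value through

writer = tf.summary.FileWriter('logs', tf.get_default_graph())
for step in range(10):
    tflib.run(loss)                                        # normally: run the training op
    autosummary.save_summaries(writer, step)               # finalizes + merges on first call
writer.close()
```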
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/custom_ops.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"TensorFlow custom ops builder.\n\"\"\"\n\nimport os\nimport re\nimport uuid\nimport hashlib\nimport tempfile\nimport shutil\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib # pylint: disable=no-name-in-module\n\n#----------------------------------------------------------------------------\n# Global options.\n\ncuda_cache_path = os.path.join(os.path.dirname(__file__), '_cudacache')\ncuda_cache_version_tag = 'v1'\ndo_not_hash_included_headers = False # Speed up compilation by assuming that headers included by the CUDA code never change. Unsafe!\nverbose = True # Print status messages to stdout.\n\ncompiler_bindir_search_path = [\n    'C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx64/x64',\n    'C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.23.28105/bin/Hostx64/x64',\n    'C:/Program Files (x86)/Microsoft Visual Studio 14.0/vc/bin',\n]\n\n#----------------------------------------------------------------------------\n# Internal helper funcs.\n\ndef _find_compiler_bindir():\n    for compiler_path in compiler_bindir_search_path:\n        if os.path.isdir(compiler_path):\n            return compiler_path\n    return None\n\ndef _get_compute_cap(device):\n    caps_str = device.physical_device_desc\n    m = re.search('compute capability: (\\\\d+).(\\\\d+)', caps_str)\n    major = m.group(1)\n    minor = m.group(2)\n    return (major, minor)\n\ndef _get_cuda_gpu_arch_string():\n    gpus = [x for x in device_lib.list_local_devices() if x.device_type == 'GPU']\n    if len(gpus) == 0:\n        raise RuntimeError('No GPU devices found')\n    (major, minor) = _get_compute_cap(gpus[0])\n    return 'sm_%s%s' % (major, minor)\n\ndef _run_cmd(cmd):\n    with os.popen(cmd) as pipe:\n        output = pipe.read()\n        status = pipe.close()\n    if status is not None:\n        raise RuntimeError('NVCC returned an error. See below for full command line and output log:\\n\\n%s\\n\\n%s' % (cmd, output))\n\ndef _prepare_nvcc_cli(opts):\n    cmd = 'nvcc --std=c++11 -DNDEBUG ' + opts.strip()\n    cmd += ' --disable-warnings'\n    cmd += ' --include-path \"%s\"' % tf.sysconfig.get_include()\n    cmd += ' --include-path \"%s\"' % os.path.join(tf.sysconfig.get_include(), 'external', 'protobuf_archive', 'src')\n    cmd += ' --include-path \"%s\"' % os.path.join(tf.sysconfig.get_include(), 'external', 'com_google_absl')\n    cmd += ' --include-path \"%s\"' % os.path.join(tf.sysconfig.get_include(), 'external', 'eigen_archive')\n\n    compiler_bindir = _find_compiler_bindir()\n    if compiler_bindir is None:\n        # Require that _find_compiler_bindir succeeds on Windows.  Allow\n        # nvcc to use whatever is the default on Linux.\n        if os.name == 'nt':\n            raise RuntimeError('Could not find MSVC/GCC/CLANG installation on this computer. Check compiler_bindir_search_path list in \"%s\".' 
% __file__)\n    else:\n        cmd += ' --compiler-bindir \"%s\"' % compiler_bindir\n    cmd += ' 2>&1'\n    return cmd\n\n#----------------------------------------------------------------------------\n# Main entry point.\n\n_plugin_cache = dict()\n\ndef get_plugin(cuda_file):\n    cuda_file_base = os.path.basename(cuda_file)\n    cuda_file_name, cuda_file_ext = os.path.splitext(cuda_file_base)\n\n    # Already in cache?\n    if cuda_file in _plugin_cache:\n        return _plugin_cache[cuda_file]\n\n    # Setup plugin.\n    if verbose:\n        print('Setting up TensorFlow plugin \"%s\": ' % cuda_file_base, end='', flush=True)\n    try:\n        # Hash CUDA source.\n        md5 = hashlib.md5()\n        with open(cuda_file, 'rb') as f:\n            md5.update(f.read())\n        md5.update(b'\\n')\n\n        # Hash headers included by the CUDA code by running it through the preprocessor.\n        if not do_not_hash_included_headers:\n            if verbose:\n                print('Preprocessing... ', end='', flush=True)\n            with tempfile.TemporaryDirectory() as tmp_dir:\n                tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + cuda_file_ext)\n                _run_cmd(_prepare_nvcc_cli('\"%s\" --preprocess -o \"%s\" --keep --keep-dir \"%s\"' % (cuda_file, tmp_file, tmp_dir)))\n                with open(tmp_file, 'rb') as f:\n                    bad_file_str = ('\"' + cuda_file.replace('\\\\', '/') + '\"').encode('utf-8') # __FILE__ in error check macros\n                    good_file_str = ('\"' + cuda_file_base + '\"').encode('utf-8')\n                    for ln in f:\n                        if not ln.startswith(b'# ') and not ln.startswith(b'#line '): # ignore line number pragmas\n                            ln = ln.replace(bad_file_str, good_file_str)\n                            md5.update(ln)\n                    md5.update(b'\\n')\n\n        # Select compiler options.\n        compile_opts = ''\n        if os.name == 'nt':\n            compile_opts += '\"%s\"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.lib')\n        elif os.name == 'posix':\n            compile_opts += '\"%s\"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.so')\n            compile_opts += ' --compiler-options \\'-fPIC -D_GLIBCXX_USE_CXX11_ABI=1\\''\n        else:\n            assert False # not Windows or Linux, w00t?\n        compile_opts += ' --gpu-architecture=%s' % _get_cuda_gpu_arch_string()\n        compile_opts += ' --use_fast_math'\n        nvcc_cmd = _prepare_nvcc_cli(compile_opts)\n\n        # Hash build configuration.\n        md5.update(('nvcc_cmd: ' + nvcc_cmd).encode('utf-8') + b'\\n')\n        md5.update(('tf.VERSION: ' + tf.VERSION).encode('utf-8') + b'\\n')\n        md5.update(('cuda_cache_version_tag: ' + cuda_cache_version_tag).encode('utf-8') + b'\\n')\n\n        # Compile if not already compiled.\n        bin_file_ext = '.dll' if os.name == 'nt' else '.so'\n        bin_file = os.path.join(cuda_cache_path, cuda_file_name + '_' + md5.hexdigest() + bin_file_ext)\n        if not os.path.isfile(bin_file):\n            if verbose:\n                print('Compiling... 
', end='', flush=True)\n            with tempfile.TemporaryDirectory() as tmp_dir:\n                tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + bin_file_ext)\n                _run_cmd(nvcc_cmd + ' \"%s\" --shared -o \"%s\" --keep --keep-dir \"%s\"' % (cuda_file, tmp_file, tmp_dir))\n                os.makedirs(cuda_cache_path, exist_ok=True)\n                intermediate_file = os.path.join(cuda_cache_path, cuda_file_name + '_' + uuid.uuid4().hex + '_tmp' + bin_file_ext)\n                shutil.copyfile(tmp_file, intermediate_file)\n                os.rename(intermediate_file, bin_file) # atomic\n\n        # Load.\n        if verbose:\n            print('Loading... ', end='', flush=True)\n        plugin = tf.load_op_library(bin_file)\n\n        # Add to cache.\n        _plugin_cache[cuda_file] = plugin\n        if verbose:\n            print('Done.', flush=True)\n        return plugin\n\n    except:\n        if verbose:\n            print('Failed!', flush=True)\n        raise\n\n#----------------------------------------------------------------------------\n"
  },
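For orientation, `get_plugin()` is the only public entry point of custom_ops: given a `.cu` source file, it hashes the source, the included headers, and the build configuration, compiles with nvcc on a cache miss, and returns the loaded op library. A hypothetical call, assuming a CUDA-capable machine with nvcc installed and the repository layout above, looks like this:

```python
# Hypothetical sketch (requires nvcc + a GPU): compile and load a custom TF op.
import os
import dnnlib.tflib.custom_ops as custom_ops

cu_file = os.path.join(os.path.dirname(custom_ops.__file__),
                       'ops', 'fused_bias_act.cu')
plugin = custom_ops.get_plugin(cu_file)  # nvcc-compiles once, then hits _plugin_cache
print(plugin.fused_bias_act)             # op registered via REGISTER_OP in the .cu file
```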
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/network.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Helper for managing networks.\"\"\"\n\nimport types\nimport inspect\nimport re\nimport uuid\nimport sys\nimport numpy as np\nimport tensorflow as tf\n\nfrom collections import OrderedDict\nfrom typing import Any, List, Tuple, Union\n\nfrom . import tfutil\nfrom .. import util\n\nfrom .tfutil import TfExpression, TfExpressionEx\n\n_import_handlers = []  # Custom import handlers for dealing with legacy data in pickle import.\n_import_module_src = dict()  # Source code for temporary modules created during pickle import.\n\n\ndef import_handler(handler_func):\n    \"\"\"Function decorator for declaring custom import handlers.\"\"\"\n    _import_handlers.append(handler_func)\n    return handler_func\n\n\nclass Network:\n    \"\"\"Generic network abstraction.\n\n    Acts as a convenience wrapper for a parameterized network construction\n    function, providing several utility methods and convenient access to\n    the inputs/outputs/weights.\n\n    Network objects can be safely pickled and unpickled for long-term\n    archival purposes. The pickling works reliably as long as the underlying\n    network construction function is defined in a standalone Python module\n    that has no side effects or application-specific imports.\n\n    Args:\n        name: Network name. Used to select TensorFlow name and variable scopes.\n        func_name: Fully qualified name of the underlying network construction function, or a top-level function object.\n        static_kwargs: Keyword arguments to be passed in to the network construction function.\n\n    Attributes:\n        name: User-specified name, defaults to build func name if None.\n        scope: Unique TensorFlow scope containing template graph and variables, derived from the user-specified name.\n        static_kwargs: Arguments passed to the user-supplied build func.\n        components: Container for sub-networks. 
Passed to the build func, and retained between calls.\n        num_inputs: Number of input tensors.\n        num_outputs: Number of output tensors.\n        input_shapes: Input tensor shapes (NC or NCHW), including minibatch dimension.\n        output_shapes: Output tensor shapes (NC or NCHW), including minibatch dimension.\n        input_shape: Short-hand for input_shapes[0].\n        output_shape: Short-hand for output_shapes[0].\n        input_templates: Input placeholders in the template graph.\n        output_templates: Output tensors in the template graph.\n        input_names: Name string for each input.\n        output_names: Name string for each output.\n        own_vars: Variables defined by this network (local_name => var), excluding sub-networks.\n        vars: All variables (local_name => var).\n        trainables: All trainable variables (local_name => var).\n        var_global_to_local: Mapping from variable global names to local names.\n    \"\"\"\n\n    def __init__(self, name: str = None, func_name: Any = None, **static_kwargs):\n        tfutil.assert_tf_initialized()\n        assert isinstance(name, str) or name is None\n        assert func_name is not None\n        assert isinstance(func_name, str) or util.is_top_level_function(func_name)\n        assert util.is_pickleable(static_kwargs)\n\n        self._init_fields()\n        self.name = name\n        self.static_kwargs = util.EasyDict(static_kwargs)\n\n        # Locate the user-specified network build function.\n        if util.is_top_level_function(func_name):\n            func_name = util.get_top_level_function_name(func_name)\n        module, self._build_func_name = util.get_module_from_obj_name(func_name)\n        self._build_func = util.get_obj_from_module(module, self._build_func_name)\n        assert callable(self._build_func)\n\n        # Dig up source code for the module containing the build function.\n        self._build_module_src = _import_module_src.get(module, None)\n        if self._build_module_src is None:\n            self._build_module_src = inspect.getsource(module)\n\n        # Init TensorFlow graph.\n        self._init_graph()\n        self.reset_own_vars()\n\n    def _init_fields(self) -> None:\n        self.name = None\n        self.scope = None\n        self.static_kwargs = util.EasyDict()\n        self.components = util.EasyDict()\n        self.num_inputs = 0\n        self.num_outputs = 0\n        self.input_shapes = [[]]\n        self.output_shapes = [[]]\n        self.input_shape = []\n        self.output_shape = []\n        self.input_templates = []\n        self.output_templates = []\n        self.input_names = []\n        self.output_names = []\n        self.own_vars = OrderedDict()\n        self.vars = OrderedDict()\n        self.trainables = OrderedDict()\n        self.var_global_to_local = OrderedDict()\n\n        self._build_func = None  # User-supplied build function that constructs the network.\n        self._build_func_name = None  # Name of the build function.\n        self._build_module_src = None  # Full source code of the module containing the build function.\n        self._run_cache = dict()  # Cached graph data for Network.run().\n\n    def _init_graph(self) -> None:\n        # Collect inputs.\n        self.input_names = []\n\n        for param in inspect.signature(self._build_func).parameters.values():\n            if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty:\n                self.input_names.append(param.name)\n\n        self.num_inputs = 
len(self.input_names)\n        assert self.num_inputs >= 1\n\n        # Choose name and scope.\n        if self.name is None:\n            self.name = self._build_func_name\n        assert re.match(\"^[A-Za-z0-9_.\\\\-]*$\", self.name)\n        with tf.name_scope(None):\n            self.scope = tf.get_default_graph().unique_name(self.name, mark_as_used=True)\n\n        # Finalize build func kwargs.\n        build_kwargs = dict(self.static_kwargs)\n        build_kwargs[\"is_template_graph\"] = True\n        build_kwargs[\"components\"] = self.components\n\n        # Build template graph.\n        with tfutil.absolute_variable_scope(self.scope, reuse=False), tfutil.absolute_name_scope(self.scope):  # ignore surrounding scopes\n            assert tf.get_variable_scope().name == self.scope\n            assert tf.get_default_graph().get_name_scope() == self.scope\n            with tf.control_dependencies(None):  # ignore surrounding control dependencies\n                self.input_templates = [tf.placeholder(tf.float32, name=name) for name in self.input_names]\n                out_expr = self._build_func(*self.input_templates, **build_kwargs)\n\n        # Collect outputs.\n        assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple)\n        self.output_templates = [out_expr] if tfutil.is_tf_expression(out_expr) else list(out_expr)\n        self.num_outputs = len(self.output_templates)\n        assert self.num_outputs >= 1\n        assert all(tfutil.is_tf_expression(t) for t in self.output_templates)\n\n        # Perform sanity checks.\n        if any(t.shape.ndims is None for t in self.input_templates):\n            raise ValueError(\"Network input shapes not defined. Please call x.set_shape() for each input.\")\n        if any(t.shape.ndims is None for t in self.output_templates):\n            raise ValueError(\"Network output shapes not defined. 
Please call x.set_shape() where applicable.\")\n        if any(not isinstance(comp, Network) for comp in self.components.values()):\n            raise ValueError(\"Components of a Network must be Networks themselves.\")\n        if len(self.components) != len(set(comp.name for comp in self.components.values())):\n            raise ValueError(\"Components of a Network must have unique names.\")\n\n        # List inputs and outputs.\n        self.input_shapes = [t.shape.as_list() for t in self.input_templates]\n        self.output_shapes = [t.shape.as_list() for t in self.output_templates]\n        self.input_shape = self.input_shapes[0]\n        self.output_shape = self.output_shapes[0]\n        self.output_names = [t.name.split(\"/\")[-1].split(\":\")[0] for t in self.output_templates]\n\n        # List variables.\n        self.own_vars = OrderedDict((var.name[len(self.scope) + 1:].split(\":\")[0], var) for var in tf.global_variables(self.scope + \"/\"))\n        self.vars = OrderedDict(self.own_vars)\n        self.vars.update((comp.name + \"/\" + name, var) for comp in self.components.values() for name, var in comp.vars.items())\n        self.trainables = OrderedDict((name, var) for name, var in self.vars.items() if var.trainable)\n        self.var_global_to_local = OrderedDict((var.name.split(\":\")[0], name) for name, var in self.vars.items())\n\n    def reset_own_vars(self) -> None:\n        \"\"\"Re-initialize all variables of this network, excluding sub-networks.\"\"\"\n        tfutil.run([var.initializer for var in self.own_vars.values()])\n\n    def reset_vars(self) -> None:\n        \"\"\"Re-initialize all variables of this network, including sub-networks.\"\"\"\n        tfutil.run([var.initializer for var in self.vars.values()])\n\n    def reset_trainables(self) -> None:\n        \"\"\"Re-initialize all trainable variables of this network, including sub-networks.\"\"\"\n        tfutil.run([var.initializer for var in self.trainables.values()])\n\n    def get_output_for(self, *in_expr: TfExpression, return_as_list: bool = False, **dynamic_kwargs) -> Union[TfExpression, List[TfExpression]]:\n        \"\"\"Construct TensorFlow expression(s) for the output(s) of this network, given the input expression(s).\"\"\"\n        assert len(in_expr) == self.num_inputs\n        assert not all(expr is None for expr in in_expr)\n\n        # Finalize build func kwargs.\n        build_kwargs = dict(self.static_kwargs)\n        build_kwargs.update(dynamic_kwargs)\n        build_kwargs[\"is_template_graph\"] = False\n        build_kwargs[\"components\"] = self.components\n\n        # Build TensorFlow graph to evaluate the network.\n        with tfutil.absolute_variable_scope(self.scope, reuse=True), tf.name_scope(self.name):\n            assert tf.get_variable_scope().name == self.scope\n            valid_inputs = [expr for expr in in_expr if expr is not None]\n            final_inputs = []\n            for expr, name, shape in zip(in_expr, self.input_names, self.input_shapes):\n                if expr is not None:\n                    expr = tf.identity(expr, name=name)\n                else:\n                    expr = tf.zeros([tf.shape(valid_inputs[0])[0]] + shape[1:], name=name)\n                final_inputs.append(expr)\n            out_expr = self._build_func(*final_inputs, **build_kwargs)\n\n        # Propagate input shapes back to the user-specified expressions.\n        for expr, final in zip(in_expr, final_inputs):\n            if isinstance(expr, tf.Tensor):\n                
expr.set_shape(final.shape)\n\n        # Express outputs in the desired format.\n        assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple)\n        if return_as_list:\n            out_expr = [out_expr] if tfutil.is_tf_expression(out_expr) else list(out_expr)\n        return out_expr\n\n    def get_var_local_name(self, var_or_global_name: Union[TfExpression, str]) -> str:\n        \"\"\"Get the local name of a given variable, without any surrounding name scopes.\"\"\"\n        assert tfutil.is_tf_expression(var_or_global_name) or isinstance(var_or_global_name, str)\n        global_name = var_or_global_name if isinstance(var_or_global_name, str) else var_or_global_name.name\n        return self.var_global_to_local[global_name]\n\n    def find_var(self, var_or_local_name: Union[TfExpression, str]) -> TfExpression:\n        \"\"\"Find variable by local or global name.\"\"\"\n        assert tfutil.is_tf_expression(var_or_local_name) or isinstance(var_or_local_name, str)\n        return self.vars[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name\n\n    def get_var(self, var_or_local_name: Union[TfExpression, str]) -> np.ndarray:\n        \"\"\"Get the value of a given variable as NumPy array.\n        Note: This method is very inefficient -- prefer to use tflib.run(list_of_vars) whenever possible.\"\"\"\n        return self.find_var(var_or_local_name).eval()\n\n    def set_var(self, var_or_local_name: Union[TfExpression, str], new_value: Union[int, float, np.ndarray]) -> None:\n        \"\"\"Set the value of a given variable based on the given NumPy array.\n        Note: This method is very inefficient -- prefer to use tflib.set_vars() whenever possible.\"\"\"\n        tfutil.set_vars({self.find_var(var_or_local_name): new_value})\n\n    def __getstate__(self) -> dict:\n        \"\"\"Pickle export.\"\"\"\n        state = dict()\n        state[\"version\"]            = 4\n        state[\"name\"]               = self.name\n        state[\"static_kwargs\"]      = dict(self.static_kwargs)\n        state[\"components\"]         = dict(self.components)\n        state[\"build_module_src\"]   = self._build_module_src\n        state[\"build_func_name\"]    = self._build_func_name\n        state[\"variables\"]          = list(zip(self.own_vars.keys(), tfutil.run(list(self.own_vars.values()))))\n        return state\n\n    def __setstate__(self, state: dict) -> None:\n        \"\"\"Pickle import.\"\"\"\n        # pylint: disable=attribute-defined-outside-init\n        tfutil.assert_tf_initialized()\n        self._init_fields()\n\n        # Execute custom import handlers.\n        for handler in _import_handlers:\n            state = handler(state)\n\n        # Set basic fields.\n        assert state[\"version\"] in [2, 3, 4]\n        self.name = state[\"name\"]\n        self.static_kwargs = util.EasyDict(state[\"static_kwargs\"])\n        self.components = util.EasyDict(state.get(\"components\", {}))\n        self._build_module_src = state[\"build_module_src\"]\n        self._build_func_name = state[\"build_func_name\"]\n\n        # Create temporary module from the imported source code.\n        module_name = \"_tflib_network_import_\" + uuid.uuid4().hex\n        module = types.ModuleType(module_name)\n        sys.modules[module_name] = module\n        _import_module_src[module] = self._build_module_src\n        exec(self._build_module_src, module.__dict__) # pylint: disable=exec-used\n\n        # Locate network build function in the temporary module.\n   
     self._build_func = util.get_obj_from_module(module, self._build_func_name)\n        assert callable(self._build_func)\n\n        # Init TensorFlow graph.\n        self._init_graph()\n        self.reset_own_vars()\n        tfutil.set_vars({self.find_var(name): value for name, value in state[\"variables\"]})\n\n    def clone(self, name: str = None, **new_static_kwargs) -> \"Network\":\n        \"\"\"Create a clone of this network with its own copy of the variables.\"\"\"\n        # pylint: disable=protected-access\n        net = object.__new__(Network)\n        net._init_fields()\n        net.name = name if name is not None else self.name\n        net.static_kwargs = util.EasyDict(self.static_kwargs)\n        net.static_kwargs.update(new_static_kwargs)\n        net._build_module_src = self._build_module_src\n        net._build_func_name = self._build_func_name\n        net._build_func = self._build_func\n        net._init_graph()\n        net.copy_vars_from(self)\n        return net\n\n    def copy_own_vars_from(self, src_net: \"Network\") -> None:\n        \"\"\"Copy the values of all variables from the given network, excluding sub-networks.\"\"\"\n        names = [name for name in self.own_vars.keys() if name in src_net.own_vars]\n        tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names}))\n\n    def copy_vars_from(self, src_net: \"Network\") -> None:\n        \"\"\"Copy the values of all variables from the given network, including sub-networks.\"\"\"\n        names = [name for name in self.vars.keys() if name in src_net.vars]\n        tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names}))\n\n    def copy_trainables_from(self, src_net: \"Network\") -> None:\n        \"\"\"Copy the values of all trainable variables from the given network, including sub-networks.\"\"\"\n        names = [name for name in self.trainables.keys() if name in src_net.trainables]\n        tfutil.set_vars(tfutil.run({self.vars[name]: src_net.vars[name] for name in names}))\n\n    def convert(self, new_func_name: str, new_name: str = None, **new_static_kwargs) -> \"Network\":\n        \"\"\"Create new network with the given parameters, and copy all variables from this network.\"\"\"\n        if new_name is None:\n            new_name = self.name\n        static_kwargs = dict(self.static_kwargs)\n        static_kwargs.update(new_static_kwargs)\n        net = Network(name=new_name, func_name=new_func_name, **static_kwargs)\n        net.copy_vars_from(self)\n        return net\n\n    def setup_as_moving_average_of(self, src_net: \"Network\", beta: TfExpressionEx = 0.99, beta_nontrainable: TfExpressionEx = 0.0) -> tf.Operation:\n        \"\"\"Construct a TensorFlow op that updates the variables of this network\n        to be slightly closer to those of the given network.\"\"\"\n        with tfutil.absolute_name_scope(self.scope + \"/_MovingAvg\"):\n            ops = []\n            for name, var in self.vars.items():\n                if name in src_net.vars:\n                    cur_beta = beta if name in self.trainables else beta_nontrainable\n                    new_value = tfutil.lerp(src_net.vars[name], var, cur_beta)\n                    ops.append(var.assign(new_value))\n            return tf.group(*ops)\n\n    def run(self,\n            *in_arrays: Tuple[Union[np.ndarray, None], ...],\n            input_transform: dict = None,\n            output_transform: dict = None,\n            return_as_list: bool = False,\n            print_progress: bool = 
False,\n            minibatch_size: int = None,\n            num_gpus: int = 1,\n            assume_frozen: bool = False,\n            **dynamic_kwargs) -> Union[np.ndarray, Tuple[np.ndarray, ...], List[np.ndarray]]:\n        \"\"\"Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s).\n\n        Args:\n            input_transform:    A dict specifying a custom transformation to be applied to the input tensor(s) before evaluating the network.\n                                The dict must contain a 'func' field that points to a top-level function. The function is called with the input\n                                TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs.\n            output_transform:   A dict specifying a custom transformation to be applied to the output tensor(s) after evaluating the network.\n                                The dict must contain a 'func' field that points to a top-level function. The function is called with the output\n                                TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs.\n            return_as_list:     True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs.\n            print_progress:     Print progress to the console? Useful for very large input arrays.\n            minibatch_size:     Maximum minibatch size to use, None = disable batching.\n            num_gpus:           Number of GPUs to use.\n            assume_frozen:      Improve multi-GPU performance by assuming that the trainable parameters will remain unchanged between calls.\n            dynamic_kwargs:     Additional keyword arguments to be passed into the network build function.\n        \"\"\"\n        assert len(in_arrays) == self.num_inputs\n        assert not all(arr is None for arr in in_arrays)\n        assert input_transform is None or util.is_top_level_function(input_transform[\"func\"])\n        assert output_transform is None or util.is_top_level_function(output_transform[\"func\"])\n        output_transform, dynamic_kwargs = _handle_legacy_output_transforms(output_transform, dynamic_kwargs)\n        num_items = in_arrays[0].shape[0]\n        if minibatch_size is None:\n            minibatch_size = num_items\n\n        # Construct unique hash key from all arguments that affect the TensorFlow graph.\n        key = dict(input_transform=input_transform, output_transform=output_transform, num_gpus=num_gpus, assume_frozen=assume_frozen, dynamic_kwargs=dynamic_kwargs)\n        def unwind_key(obj):\n            if isinstance(obj, dict):\n                return [(key, unwind_key(value)) for key, value in sorted(obj.items())]\n            if callable(obj):\n                return util.get_top_level_function_name(obj)\n            return obj\n        key = repr(unwind_key(key))\n\n        # Build graph.\n        if key not in self._run_cache:\n            with tfutil.absolute_name_scope(self.scope + \"/_Run\"), tf.control_dependencies(None):\n                with tf.device(\"/cpu:0\"):\n                    in_expr = [tf.placeholder(tf.float32, name=name) for name in self.input_names]\n                    in_split = list(zip(*[tf.split(x, num_gpus) for x in in_expr]))\n\n                out_split = []\n                for gpu in range(num_gpus):\n                    with tf.device(\"/gpu:%d\" % gpu):\n                        net_gpu = self.clone() if 
assume_frozen else self\n                        in_gpu = in_split[gpu]\n\n                        if input_transform is not None:\n                            in_kwargs = dict(input_transform)\n                            in_gpu = in_kwargs.pop(\"func\")(*in_gpu, **in_kwargs)\n                            in_gpu = [in_gpu] if tfutil.is_tf_expression(in_gpu) else list(in_gpu)\n\n                        assert len(in_gpu) == self.num_inputs\n                        out_gpu = net_gpu.get_output_for(*in_gpu, return_as_list=True, **dynamic_kwargs)\n\n                        if output_transform is not None:\n                            out_kwargs = dict(output_transform)\n                            out_gpu = out_kwargs.pop(\"func\")(*out_gpu, **out_kwargs)\n                            out_gpu = [out_gpu] if tfutil.is_tf_expression(out_gpu) else list(out_gpu)\n\n                        assert len(out_gpu) == self.num_outputs\n                        out_split.append(out_gpu)\n\n                with tf.device(\"/cpu:0\"):\n                    out_expr = [tf.concat(outputs, axis=0) for outputs in zip(*out_split)]\n                    self._run_cache[key] = in_expr, out_expr\n\n        # Run minibatches.\n        in_expr, out_expr = self._run_cache[key]\n        out_arrays = [np.empty([num_items] + expr.shape.as_list()[1:], expr.dtype.name) for expr in out_expr]\n\n        for mb_begin in range(0, num_items, minibatch_size):\n            if print_progress:\n                print(\"\\r%d / %d\" % (mb_begin, num_items), end=\"\")\n\n            mb_end = min(mb_begin + minibatch_size, num_items)\n            mb_num = mb_end - mb_begin\n            mb_in = [src[mb_begin : mb_end] if src is not None else np.zeros([mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)]\n            mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in)))\n\n            for dst, src in zip(out_arrays, mb_out):\n                dst[mb_begin: mb_end] = src\n\n        # Done.\n        if print_progress:\n            print(\"\\r%d / %d\" % (num_items, num_items))\n\n        if not return_as_list:\n            out_arrays = out_arrays[0] if len(out_arrays) == 1 else tuple(out_arrays)\n        return out_arrays\n\n    def list_ops(self) -> List[TfExpression]:\n        include_prefix = self.scope + \"/\"\n        exclude_prefix = include_prefix + \"_\"\n        ops = tf.get_default_graph().get_operations()\n        ops = [op for op in ops if op.name.startswith(include_prefix)]\n        ops = [op for op in ops if not op.name.startswith(exclude_prefix)]\n        return ops\n\n    def list_layers(self) -> List[Tuple[str, TfExpression, List[TfExpression]]]:\n        \"\"\"Returns a list of (layer_name, output_expr, trainable_vars) tuples corresponding to\n        individual layers of the network. 
Mainly intended to be used for reporting.\"\"\"\n        layers = []\n\n        def recurse(scope, parent_ops, parent_vars, level):\n            # Ignore specific patterns.\n            if any(p in scope for p in [\"/Shape\", \"/strided_slice\", \"/Cast\", \"/concat\", \"/Assign\"]):\n                return\n\n            # Filter ops and vars by scope.\n            global_prefix = scope + \"/\"\n            local_prefix = global_prefix[len(self.scope) + 1:]\n            cur_ops = [op for op in parent_ops if op.name.startswith(global_prefix) or op.name == global_prefix[:-1]]\n            cur_vars = [(name, var) for name, var in parent_vars if name.startswith(local_prefix) or name == local_prefix[:-1]]\n            if not cur_ops and not cur_vars:\n                return\n\n            # Filter out all ops related to variables.\n            for var in [op for op in cur_ops if op.type.startswith(\"Variable\")]:\n                var_prefix = var.name + \"/\"\n                cur_ops = [op for op in cur_ops if not op.name.startswith(var_prefix)]\n\n            # Scope does not contain ops as immediate children => recurse deeper.\n            contains_direct_ops = any(\"/\" not in op.name[len(global_prefix):] and op.type not in [\"Identity\", \"Cast\", \"Transpose\"] for op in cur_ops)\n            if (level == 0 or not contains_direct_ops) and (len(cur_ops) + len(cur_vars)) > 1:\n                visited = set()\n                for rel_name in [op.name[len(global_prefix):] for op in cur_ops] + [name[len(local_prefix):] for name, _var in cur_vars]:\n                    token = rel_name.split(\"/\")[0]\n                    if token not in visited:\n                        recurse(global_prefix + token, cur_ops, cur_vars, level + 1)\n                        visited.add(token)\n                return\n\n            # Report layer.\n            layer_name = scope[len(self.scope) + 1:]\n            layer_output = cur_ops[-1].outputs[0] if cur_ops else cur_vars[-1][1]\n            layer_trainables = [var for _name, var in cur_vars if var.trainable]\n            layers.append((layer_name, layer_output, layer_trainables))\n\n        recurse(self.scope, self.list_ops(), list(self.vars.items()), 0)\n        return layers\n\n    def print_layers(self, title: str = None, hide_layers_with_no_params: bool = False) -> None:\n        \"\"\"Print a summary table of the network structure.\"\"\"\n        rows = [[title if title is not None else self.name, \"Params\", \"OutputShape\", \"WeightShape\"]]\n        rows += [[\"---\"] * 4]\n        total_params = 0\n\n        for layer_name, layer_output, layer_trainables in self.list_layers():\n            num_params = sum(int(np.prod(var.shape.as_list())) for var in layer_trainables)\n            weights = [var for var in layer_trainables if var.name.endswith(\"/weight:0\")]\n            weights.sort(key=lambda x: len(x.name))\n            if len(weights) == 0 and len(layer_trainables) == 1:\n                weights = layer_trainables\n            total_params += num_params\n\n            if not hide_layers_with_no_params or num_params != 0:\n                num_params_str = str(num_params) if num_params > 0 else \"-\"\n                output_shape_str = str(layer_output.shape)\n                weight_shape_str = str(weights[0].shape) if len(weights) >= 1 else \"-\"\n                rows += [[layer_name, num_params_str, output_shape_str, weight_shape_str]]\n\n        rows += [[\"---\"] * 4]\n        rows += [[\"Total\", str(total_params), \"\", \"\"]]\n\n        
widths = [max(len(cell) for cell in column) for column in zip(*rows)]\n        print()\n        for row in rows:\n            print(\"  \".join(cell + \" \" * (width - len(cell)) for cell, width in zip(row, widths)))\n        print()\n\n    def setup_weight_histograms(self, title: str = None) -> None:\n        \"\"\"Construct summary ops to include histograms of all trainable parameters in TensorBoard.\"\"\"\n        if title is None:\n            title = self.name\n\n        with tf.name_scope(None), tf.device(None), tf.control_dependencies(None):\n            for local_name, var in self.trainables.items():\n                if \"/\" in local_name:\n                    p = local_name.split(\"/\")\n                    name = title + \"_\" + p[-1] + \"/\" + \"_\".join(p[:-1])\n                else:\n                    name = title + \"_toplevel/\" + local_name\n\n                tf.summary.histogram(name, var)\n\n#----------------------------------------------------------------------------\n# Backwards-compatible emulation of legacy output transformation in Network.run().\n\n_print_legacy_warning = True\n\ndef _handle_legacy_output_transforms(output_transform, dynamic_kwargs):\n    global _print_legacy_warning\n    legacy_kwargs = [\"out_mul\", \"out_add\", \"out_shrink\", \"out_dtype\"]\n    if not any(kwarg in dynamic_kwargs for kwarg in legacy_kwargs):\n        return output_transform, dynamic_kwargs\n\n    if _print_legacy_warning:\n        _print_legacy_warning = False\n        print()\n        print(\"WARNING: Old-style output transformations in Network.run() are deprecated.\")\n        print(\"Consider using 'output_transform=dict(func=tflib.convert_images_to_uint8)'\")\n        print(\"instead of 'out_mul=127.5, out_add=127.5, out_dtype=np.uint8'.\")\n        print()\n    assert output_transform is None\n\n    new_kwargs = dict(dynamic_kwargs)\n    new_transform = {kwarg: new_kwargs.pop(kwarg) for kwarg in legacy_kwargs if kwarg in dynamic_kwargs}\n    new_transform[\"func\"] = _legacy_output_transform_func\n    return new_transform, new_kwargs\n\ndef _legacy_output_transform_func(*expr, out_mul=1.0, out_add=0.0, out_shrink=1, out_dtype=None):\n    if out_mul != 1.0:\n        expr = [x * out_mul for x in expr]\n\n    if out_add != 0.0:\n        expr = [x + out_add for x in expr]\n\n    if out_shrink > 1:\n        ksize = [1, 1, out_shrink, out_shrink]\n        expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding=\"VALID\", data_format=\"NCHW\") for x in expr]\n\n    if out_dtype is not None:\n        if tf.as_dtype(out_dtype).is_integer:\n            expr = [tf.round(x) for x in expr]\n        expr = [tf.saturate_cast(x, out_dtype) for x in expr]\n    return expr\n"
  },
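To make the Network abstraction concrete, the sketch below wraps a trivial build function and evaluates it with `run()`. It is hypothetical: `my_net` is an illustrative stand-in (real callers pass e.g. a StyleGAN generator's fully qualified build-function name), the build function must live at module top level, and `run()` places work on `/gpu:0`, so a GPU-enabled TF 1.x setup is assumed.

```python
# Hypothetical sketch: wrapping a trivial build function in tflib.Network.
# 'my_net' is illustrative only; run this as a script so inspect.getsource works.
import numpy as np
import tensorflow as tf
import dnnlib.tflib as tflib
from dnnlib.tflib.network import Network

def my_net(x, out_dim=8, **_kwargs):          # one required input => num_inputs == 1
    x.set_shape([None, 16])                   # shapes must be defined (see sanity checks)
    w = tf.get_variable('weight', shape=[16, out_dim])
    return tf.matmul(x, w)

tflib.init_tf()
net = Network('MyNet', func_name=my_net, out_dim=8)  # static kwarg baked into the graph
net.print_layers()
out = net.run(np.random.randn(4, 16).astype(np.float32))
print(out.shape)                              # (4, 8)
```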
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/ops/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n# empty\n"
  },
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/ops/fused_bias_act.cu",
    "content": "// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#define EIGEN_USE_GPU\n#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__\n#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include \"tensorflow/core/framework/shape_inference.h\"\n#include <stdio.h>\n\nusing namespace tensorflow;\nusing namespace tensorflow::shape_inference;\n\n#define OP_CHECK_CUDA_ERROR(CTX, CUDA_CALL) do { cudaError_t err = CUDA_CALL; OP_REQUIRES(CTX, err == cudaSuccess, errors::Internal(cudaGetErrorName(err))); } while (false)\n\n//------------------------------------------------------------------------\n// CUDA kernel.\n\ntemplate <class T>\nstruct FusedBiasActKernelParams\n{\n    const T*    x;      // [sizeX]\n    const T*    b;      // [sizeB] or NULL\n    const T*    ref;    // [sizeX] or NULL\n    T*          y;      // [sizeX]\n\n    int         grad;\n    int         axis;\n    int         act;\n    float       alpha;\n    float       gain;\n\n    int         sizeX;\n    int         sizeB;\n    int         stepB;\n    int         loopX;\n};\n\ntemplate <class T>\nstatic __global__ void FusedBiasActKernel(const FusedBiasActKernelParams<T> p)\n{\n    const float expRange        = 80.0f;\n    const float halfExpRange    = 40.0f;\n    const float seluScale       = 1.0507009873554804934193349852946f;\n    const float seluAlpha       = 1.6732632423543772848170429916717f;\n\n    // Loop over elements.\n    int xi = blockIdx.x * p.loopX * blockDim.x + threadIdx.x;\n    for (int loopIdx = 0; loopIdx < p.loopX && xi < p.sizeX; loopIdx++, xi += blockDim.x)\n    {\n        // Load and apply bias.\n        float x = (float)p.x[xi];\n        if (p.b)\n            x += (float)p.b[(xi / p.stepB) % p.sizeB];\n        float ref = (p.ref) ? (float)p.ref[xi] : 0.0f;\n        if (p.gain != 0.0f & p.act != 9)\n            ref /= p.gain;\n\n        // Evaluate activation func.\n        float y;\n        switch (p.act * 10 + p.grad)\n        {\n            // linear\n            default:\n            case 10: y = x; break;\n            case 11: y = x; break;\n            case 12: y = 0.0f; break;\n\n            // relu\n            case 20: y = (x > 0.0f) ? x : 0.0f; break;\n            case 21: y = (ref > 0.0f) ? x : 0.0f; break;\n            case 22: y = 0.0f; break;\n\n            // lrelu\n            case 30: y = (x > 0.0f) ? x : x * p.alpha; break;\n            case 31: y = (ref > 0.0f) ? x : x * p.alpha; break;\n            case 32: y = 0.0f; break;\n\n            // tanh\n            case 40: { float c = expf(x); float d = 1.0f / c; y = (x < -expRange) ? -1.0f : (x > expRange) ? 1.0f : (c - d) / (c + d); } break;\n            case 41: y = x * (1.0f - ref * ref); break;\n            case 42: y = x * (1.0f - ref * ref) * (-2.0f * ref); break;\n\n            // sigmoid\n            case 50: y = (x < -expRange) ? 0.0f : 1.0f / (expf(-x) + 1.0f); break;\n            case 51: y = x * ref * (1.0f - ref); break;\n            case 52: y = x * ref * (1.0f - ref) * (1.0f - 2.0f * ref); break;\n\n            // elu\n            case 60: y = (x >= 0.0f) ? x : expf(x) - 1.0f; break;\n            case 61: y = (ref >= 0.0f) ? x : x * (ref + 1.0f); break;\n            case 62: y = (ref >= 0.0f) ? 0.0f : x * (ref + 1.0f); break;\n\n            // selu\n            case 70: y = (x >= 0.0f) ? 
seluScale * x : (seluScale * seluAlpha) * (expf(x) - 1.0f); break;\n            case 71: y = (ref >= 0.0f) ? x * seluScale : x * (ref + seluScale * seluAlpha); break;\n            case 72: y = (ref >= 0.0f) ? 0.0f : x * (ref + seluScale * seluAlpha); break;\n\n            // softplus\n            case 80: y = (x > expRange) ? x : logf(expf(x) + 1.0f); break;\n            case 81: y = x * (1.0f - expf(-ref)); break;\n            case 82: { float c = expf(-ref); y = x * c * (1.0f - c); } break;\n\n            // swish\n            case 90: y = (x < -expRange) ? 0.0f : x / (expf(-x) + 1.0f); break;\n            case 91: { float c = expf(ref); float d = c + 1.0f; y = (ref > halfExpRange) ? x : x * c * (ref + d) / (d * d); } break;\n            case 92: { float c = expf(ref); float d = c + 1.0f; y = (ref > halfExpRange) ? 0.0f : x * c * (ref * (2.0f - d) + 2.0f * d) / (d * d * d); } break;\n        }\n\n        // Apply gain and store.\n        p.y[xi] = (T)(y * p.gain);\n    }\n}\n\n//------------------------------------------------------------------------\n// TensorFlow op.\n\ntemplate <class T>\nstruct FusedBiasActOp : public OpKernel\n{\n    FusedBiasActKernelParams<T> m_attribs;\n\n    FusedBiasActOp(OpKernelConstruction* ctx) : OpKernel(ctx)\n    {\n        memset(&m_attribs, 0, sizeof(m_attribs));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"grad\", &m_attribs.grad));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"axis\", &m_attribs.axis));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"act\", &m_attribs.act));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"alpha\", &m_attribs.alpha));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"gain\", &m_attribs.gain));\n        OP_REQUIRES(ctx, m_attribs.grad >= 0, errors::InvalidArgument(\"grad must be non-negative\"));\n        OP_REQUIRES(ctx, m_attribs.axis >= 0, errors::InvalidArgument(\"axis must be non-negative\"));\n        OP_REQUIRES(ctx, m_attribs.act >= 0, errors::InvalidArgument(\"act must be non-negative\"));\n    }\n\n    void Compute(OpKernelContext* ctx)\n    {\n        FusedBiasActKernelParams<T> p = m_attribs;\n        cudaStream_t stream = ctx->eigen_device<Eigen::GpuDevice>().stream();\n\n        const Tensor& x     = ctx->input(0); // [...]\n        const Tensor& b     = ctx->input(1); // [sizeB] or [0]\n        const Tensor& ref   = ctx->input(2); // x.shape or [0]\n        p.x = x.flat<T>().data();\n        p.b = (b.NumElements()) ? b.flat<T>().data() : NULL;\n        p.ref = (ref.NumElements()) ? ref.flat<T>().data() : NULL;\n        OP_REQUIRES(ctx, b.NumElements() == 0 || m_attribs.axis < x.dims(), errors::InvalidArgument(\"axis out of bounds\"));\n        OP_REQUIRES(ctx, b.dims() == 1, errors::InvalidArgument(\"b must have rank 1\"));\n        OP_REQUIRES(ctx, b.NumElements() == 0 || b.NumElements() == x.dim_size(m_attribs.axis), errors::InvalidArgument(\"b has wrong number of elements\"));\n        OP_REQUIRES(ctx, ref.NumElements() == ((p.grad == 0) ? 
0 : x.NumElements()), errors::InvalidArgument(\"ref has wrong number of elements\"));\n        OP_REQUIRES(ctx, x.NumElements() <= kint32max, errors::InvalidArgument(\"x is too large\"));\n\n        p.sizeX = (int)x.NumElements();\n        p.sizeB = (int)b.NumElements();\n        p.stepB = 1;\n        for (int i = m_attribs.axis + 1; i < x.dims(); i++)\n            p.stepB *= (int)x.dim_size(i);\n\n        Tensor* y = NULL; // x.shape\n        OP_REQUIRES_OK(ctx, ctx->allocate_output(0, x.shape(), &y));\n        p.y = y->flat<T>().data();\n\n        p.loopX = 4;\n        int blockSize = 4 * 32;\n        int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1;\n        void* args[] = {&p};\n        OP_CHECK_CUDA_ERROR(ctx, cudaLaunchKernel((void*)FusedBiasActKernel<T>, gridSize, blockSize, args, 0, stream));\n    }\n};\n\nREGISTER_OP(\"FusedBiasAct\")\n    .Input      (\"x: T\")\n    .Input      (\"b: T\")\n    .Input      (\"ref: T\")\n    .Output     (\"y: T\")\n    .Attr       (\"T: {float, half}\")\n    .Attr       (\"grad: int = 0\")\n    .Attr       (\"axis: int = 1\")\n    .Attr       (\"act: int = 0\")\n    .Attr       (\"alpha: float = 0.0\")\n    .Attr       (\"gain: float = 1.0\");\nREGISTER_KERNEL_BUILDER(Name(\"FusedBiasAct\").Device(DEVICE_GPU).TypeConstraint<float>(\"T\"), FusedBiasActOp<float>);\nREGISTER_KERNEL_BUILDER(Name(\"FusedBiasAct\").Device(DEVICE_GPU).TypeConstraint<Eigen::half>(\"T\"), FusedBiasActOp<Eigen::half>);\n\n//------------------------------------------------------------------------\n"
  },
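The kernel's `switch (p.act * 10 + p.grad)` packs the activation index and the gradient order into a single dispatch value: lrelu is `act == 3`, so case 30 is its forward pass, case 31 its first derivative (with `x` carrying the incoming gradient), and case 32 its second derivative, which is zero for piecewise-linear functions. For illustration only, here is a NumPy transcription of just the lrelu branch, mirroring the kernel's rescaling of `ref` and the final multiply by `gain`:

```python
# Illustrative NumPy transcription of the lrelu branch (act == 3) of
# FusedBiasActKernel's (act * 10 + grad) dispatch; not part of the repo.
import numpy as np

def lrelu_case(x, ref, grad, alpha=0.2, gain=1.0):
    # The kernel first rescales ref by 1/gain (for act != 9, i.e. non-swish),
    # then dispatches on act * 10 + grad, and finally stores y * gain.
    ref = ref / gain if gain != 0.0 else ref
    case = 3 * 10 + grad                      # act == 3 is lrelu
    if case == 30:                            # forward
        y = np.where(x > 0.0, x, x * alpha)
    elif case == 31:                          # 1st gradient (x carries incoming dy)
        y = np.where(ref > 0.0, x, x * alpha)
    elif case == 32:                          # 2nd gradient: piecewise linear => 0
        y = np.zeros_like(x)
    else:
        raise ValueError(case)
    return y * gain
```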
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/ops/fused_bias_act.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Custom TensorFlow ops for efficient bias and activation.\"\"\"\n\nimport os\nimport numpy as np\nimport tensorflow as tf\nfrom .. import custom_ops\nfrom ...util import EasyDict\n\ndef _get_plugin():\n    return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')\n\n#----------------------------------------------------------------------------\n\nactivation_funcs = {\n    'linear':   EasyDict(func=lambda x, **_:        x,                          def_alpha=None, def_gain=1.0,           cuda_idx=1, ref='y', zero_2nd_grad=True),\n    'relu':     EasyDict(func=lambda x, **_:        tf.nn.relu(x),              def_alpha=None, def_gain=np.sqrt(2),    cuda_idx=2, ref='y', zero_2nd_grad=True),\n    'lrelu':    EasyDict(func=lambda x, alpha, **_: tf.nn.leaky_relu(x, alpha), def_alpha=0.2,  def_gain=np.sqrt(2),    cuda_idx=3, ref='y', zero_2nd_grad=True),\n    'tanh':     EasyDict(func=lambda x, **_:        tf.nn.tanh(x),              def_alpha=None, def_gain=1.0,           cuda_idx=4, ref='y', zero_2nd_grad=False),\n    'sigmoid':  EasyDict(func=lambda x, **_:        tf.nn.sigmoid(x),           def_alpha=None, def_gain=1.0,           cuda_idx=5, ref='y', zero_2nd_grad=False),\n    'elu':      EasyDict(func=lambda x, **_:        tf.nn.elu(x),               def_alpha=None, def_gain=1.0,           cuda_idx=6, ref='y', zero_2nd_grad=False),\n    'selu':     EasyDict(func=lambda x, **_:        tf.nn.selu(x),              def_alpha=None, def_gain=1.0,           cuda_idx=7, ref='y', zero_2nd_grad=False),\n    'softplus': EasyDict(func=lambda x, **_:        tf.nn.softplus(x),          def_alpha=None, def_gain=1.0,           cuda_idx=8, ref='y', zero_2nd_grad=False),\n    'swish':    EasyDict(func=lambda x, **_:        tf.nn.sigmoid(x) * x,       def_alpha=None, def_gain=np.sqrt(2),    cuda_idx=9, ref='x', zero_2nd_grad=False),\n}\n\n#----------------------------------------------------------------------------\n\ndef fused_bias_act(x, b=None, axis=1, act='linear', alpha=None, gain=None, impl='cuda'):\n    r\"\"\"Fused bias and activation function.\n\n    Adds bias `b` to activation tensor `x`, evaluates activation function `act`,\n    and scales the result by `gain`. Each of the steps is optional. In most cases,\n    the fused op is considerably more efficient than performing the same calculation\n    using standard TensorFlow ops. It supports first and second order gradients,\n    but not third order gradients.\n\n    Args:\n        x:      Input activation tensor. Can have any shape, but if `b` is defined, the\n                dimension corresponding to `axis`, as well as the rank, must be known.\n        b:      Bias vector, or `None` to disable. Must be a 1D tensor of the same type\n                as `x`. The shape must be known, and it must match the dimension of `x`\n                corresponding to `axis`.\n        axis:   The dimension in `x` corresponding to the elements of `b`.\n                The value of `axis` is ignored if `b` is not specified.\n        act:    Name of the activation function to evaluate, or `\"linear\"` to disable.\n                Can be e.g. `\"relu\"`, `\"lrelu\"`, `\"tanh\"`, `\"sigmoid\"`, `\"swish\"`, etc.\n                See `activation_funcs` for a full list. 
`None` is not allowed.\n        alpha:  Shape parameter for the activation function, or `None` to use the default.\n        gain:   Scaling factor for the output tensor, or `None` to use default.\n                See `activation_funcs` for the default scaling of each activation function.\n                If unsure, consider specifying `1.0`.\n        impl:   Name of the implementation to use. Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the same shape and datatype as `x`.\n    \"\"\"\n\n    impl_dict = {\n        'ref':  _fused_bias_act_ref,\n        'cuda': _fused_bias_act_cuda,\n    }\n    return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain)\n\n#----------------------------------------------------------------------------\n\ndef _fused_bias_act_ref(x, b, axis, act, alpha, gain):\n    \"\"\"Slow reference implementation of `fused_bias_act()` using standard TensorFlow ops.\"\"\"\n\n    # Validate arguments.\n    x = tf.convert_to_tensor(x)\n    b = tf.convert_to_tensor(b) if b is not None else tf.constant([], dtype=x.dtype)\n    act_spec = activation_funcs[act]\n    assert b.shape.rank == 1 and (b.shape[0] == 0 or b.shape[0] == x.shape[axis])\n    assert b.shape[0] == 0 or 0 <= axis < x.shape.rank\n    if alpha is None:\n        alpha = act_spec.def_alpha\n    if gain is None:\n        gain = act_spec.def_gain\n\n    # Add bias.\n    if b.shape[0] != 0:\n        x += tf.reshape(b, [-1 if i == axis else 1 for i in range(x.shape.rank)])\n\n    # Evaluate activation function.\n    x = act_spec.func(x, alpha=alpha)\n\n    # Scale by gain.\n    if gain != 1:\n        x *= gain\n    return x\n\n#----------------------------------------------------------------------------\n\ndef _fused_bias_act_cuda(x, b, axis, act, alpha, gain):\n    \"\"\"Fast CUDA implementation of `fused_bias_act()` using custom ops.\"\"\"\n\n    # Validate arguments.\n    x = tf.convert_to_tensor(x)\n    empty_tensor = tf.constant([], dtype=x.dtype)\n    b = tf.convert_to_tensor(b) if b is not None else empty_tensor\n    act_spec = activation_funcs[act]\n    assert b.shape.rank == 1 and (b.shape[0] == 0 or b.shape[0] == x.shape[axis])\n    assert b.shape[0] == 0 or 0 <= axis < x.shape.rank\n    if alpha is None:\n        alpha = act_spec.def_alpha\n    if gain is None:\n        gain = act_spec.def_gain\n\n    # Special cases.\n    if act == 'linear' and b is None and gain == 1.0:\n        return x\n    if act_spec.cuda_idx is None:\n        return _fused_bias_act_ref(x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain)\n\n    # CUDA kernel.\n    cuda_kernel = _get_plugin().fused_bias_act\n    cuda_kwargs = dict(axis=axis, act=act_spec.cuda_idx, alpha=alpha, gain=gain)\n\n    # Forward pass: y = func(x, b).\n    def func_y(x, b):\n        y = cuda_kernel(x=x, b=b, ref=empty_tensor, grad=0, **cuda_kwargs)\n        y.set_shape(x.shape)\n        return y\n\n    # Backward pass: dx, db = grad(dy, x, y)\n    def grad_dx(dy, x, y):\n        ref = {'x': x, 'y': y}[act_spec.ref]\n        dx = cuda_kernel(x=dy, b=empty_tensor, ref=ref, grad=1, **cuda_kwargs)\n        dx.set_shape(x.shape)\n        return dx\n    def grad_db(dx):\n        if b.shape[0] == 0:\n            return empty_tensor\n        db = dx\n        if axis < x.shape.rank - 1:\n            db = tf.reduce_sum(db, list(range(axis + 1, x.shape.rank)))\n        if axis > 0:\n            db = tf.reduce_sum(db, list(range(axis)))\n        db.set_shape(b.shape)\n        return db\n\n    # Second order gradients: d_dy, 
d_x = grad2(d_dx, d_db, x, y)\n    def grad2_d_dy(d_dx, d_db, x, y):\n        ref = {'x': x, 'y': y}[act_spec.ref]\n        d_dy = cuda_kernel(x=d_dx, b=d_db, ref=ref, grad=1, **cuda_kwargs)\n        d_dy.set_shape(x.shape)\n        return d_dy\n    def grad2_d_x(d_dx, d_db, x, y):\n        ref = {'x': x, 'y': y}[act_spec.ref]\n        d_x = cuda_kernel(x=d_dx, b=d_db, ref=ref, grad=2, **cuda_kwargs)\n        d_x.set_shape(x.shape)\n        return d_x\n\n    # Fast version for piecewise-linear activation funcs.\n    @tf.custom_gradient\n    def func_zero_2nd_grad(x, b):\n        y = func_y(x, b)\n        @tf.custom_gradient\n        def grad(dy):\n            dx = grad_dx(dy, x, y)\n            db = grad_db(dx)\n            def grad2(d_dx, d_db):\n                d_dy = grad2_d_dy(d_dx, d_db, x, y)\n                return d_dy\n            return (dx, db), grad2\n        return y, grad\n\n    # Slow version for general activation funcs.\n    @tf.custom_gradient\n    def func_nonzero_2nd_grad(x, b):\n        y = func_y(x, b)\n        def grad_wrap(dy):\n            @tf.custom_gradient\n            def grad_impl(dy, x):\n                dx = grad_dx(dy, x, y)\n                db = grad_db(dx)\n                def grad2(d_dx, d_db):\n                    d_dy = grad2_d_dy(d_dx, d_db, x, y)\n                    d_x = grad2_d_x(d_dx, d_db, x, y)\n                    return d_dy, d_x\n                return (dx, db), grad2\n            return grad_impl(dy, x)\n        return y, grad_wrap\n\n    # Which version to use?\n    if act_spec.zero_2nd_grad:\n        return func_zero_2nd_grad(x, b)\n    return func_nonzero_2nd_grad(x, b)\n\n#----------------------------------------------------------------------------\n"
  },
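The file above defers to the CUDA plugin for speed, but the underlying computation is simple. Below is a minimal NumPy sketch of the `'lrelu'` path, for sanity-checking against `_fused_bias_act_ref()`: bias add along `axis`, leaky ReLU, then gain. The helper name `np_fused_bias_act` is invented here, and the defaults `alpha=0.2`, `gain=sqrt(2)` are assumptions standing in for `activation_funcs['lrelu'].def_alpha` / `.def_gain` rather than imports from the repo.

```python
import numpy as np

def np_fused_bias_act(x, b=None, axis=1, alpha=0.2, gain=np.sqrt(2)):
    """Bias add along `axis`, leaky ReLU, then gain: the 'lrelu' path of
    _fused_bias_act_ref(), with def_alpha/def_gain hard-coded as assumptions."""
    if b is not None:
        # Broadcast the 1-D bias across all dims except `axis`.
        x = x + b.reshape([-1 if i == axis else 1 for i in range(x.ndim)])
    x = np.where(x >= 0.0, x, alpha * x)   # leaky ReLU
    return x * gain                        # scale by gain

x = np.random.randn(4, 8, 16, 16).astype(np.float32)  # NCHW activations
b = np.random.randn(8).astype(np.float32)              # per-channel bias
assert np_fused_bias_act(x, b).shape == x.shape
```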
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/ops/upfirdn_2d.cu",
    "content": "// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#define EIGEN_USE_GPU\n#define __CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__\n#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include \"tensorflow/core/framework/shape_inference.h\"\n#include <stdio.h>\n\nusing namespace tensorflow;\nusing namespace tensorflow::shape_inference;\n\n//------------------------------------------------------------------------\n// Helpers.\n\n#define OP_CHECK_CUDA_ERROR(CTX, CUDA_CALL) do { cudaError_t err = CUDA_CALL; OP_REQUIRES(CTX, err == cudaSuccess, errors::Internal(cudaGetErrorName(err))); } while (false)\n\nstatic __host__ __device__ __forceinline__ int floorDiv(int a, int b)\n{\n    int c = a / b;\n    if (c * b > a)\n        c--;\n    return c;\n}\n\n//------------------------------------------------------------------------\n// CUDA kernel params.\n\ntemplate <class T>\nstruct UpFirDn2DKernelParams\n{\n    const T*    x;          // [majorDim, inH, inW, minorDim]\n    const T*    k;          // [kernelH, kernelW]\n    T*          y;          // [majorDim, outH, outW, minorDim]\n\n    int         upx;\n    int         upy;\n    int         downx;\n    int         downy;\n    int         padx0;\n    int         padx1;\n    int         pady0;\n    int         pady1;\n\n    int         majorDim;\n    int         inH;\n    int         inW;\n    int         minorDim;\n    int         kernelH;\n    int         kernelW;\n    int         outH;\n    int         outW;\n    int         loopMajor;\n    int         loopX;\n};\n\n//------------------------------------------------------------------------\n// General CUDA implementation for large filter kernels.\n\ntemplate <class T>\nstatic __global__ void UpFirDn2DKernel_large(const UpFirDn2DKernelParams<T> p)\n{\n    // Calculate thread index.\n    int minorIdx = blockIdx.x * blockDim.x + threadIdx.x;\n    int outY = minorIdx / p.minorDim;\n    minorIdx -= outY * p.minorDim;\n    int outXBase = blockIdx.y * p.loopX * blockDim.y + threadIdx.y;\n    int majorIdxBase = blockIdx.z * p.loopMajor;\n    if (outXBase >= p.outW || outY >= p.outH || majorIdxBase >= p.majorDim)\n        return;\n\n    // Setup Y receptive field.\n    int midY = outY * p.downy + p.upy - 1 - p.pady0;\n    int inY = min(max(floorDiv(midY, p.upy), 0), p.inH);\n    int h = min(max(floorDiv(midY + p.kernelH, p.upy), 0), p.inH) - inY;\n    int kernelY = midY + p.kernelH - (inY + 1) * p.upy;\n\n    // Loop over majorDim and outX.\n    for (int loopMajor = 0, majorIdx = majorIdxBase; loopMajor < p.loopMajor && majorIdx < p.majorDim; loopMajor++, majorIdx++)\n    for (int loopX = 0, outX = outXBase; loopX < p.loopX && outX < p.outW; loopX++, outX += blockDim.y)\n    {\n        // Setup X receptive field.\n        int midX = outX * p.downx + p.upx - 1 - p.padx0;\n        int inX = min(max(floorDiv(midX, p.upx), 0), p.inW);\n        int w = min(max(floorDiv(midX + p.kernelW, p.upx), 0), p.inW) - inX;\n        int kernelX = midX + p.kernelW - (inX + 1) * p.upx;\n\n        // Initialize pointers.\n        const T* xp = &p.x[((majorIdx * p.inH + inY) * p.inW + inX) * p.minorDim + minorIdx];\n        const T* kp = &p.k[kernelY * p.kernelW + kernelX];\n        int xpx = p.minorDim;\n        int kpx = -p.upx;\n        int xpy = p.inW * p.minorDim;\n        int kpy = 
-p.upy * p.kernelW;\n\n        // Inner loop.\n        float v = 0.0f;\n        for (int y = 0; y < h; y++)\n        {\n            for (int x = 0; x < w; x++)\n            {\n                v += (float)(*xp) * (float)(*kp);\n                xp += xpx;\n                kp += kpx;\n            }\n            xp += xpy - w * xpx;\n            kp += kpy - w * kpx;\n        }\n\n        // Store result.\n        p.y[((majorIdx * p.outH + outY) * p.outW + outX) * p.minorDim + minorIdx] = (T)v;\n    }\n}\n\n//------------------------------------------------------------------------\n// Specialized CUDA implementation for small filter kernels.\n\ntemplate <class T, int upx, int upy, int downx, int downy, int kernelW, int kernelH, int tileOutW, int tileOutH>\nstatic __global__ void UpFirDn2DKernel_small(const UpFirDn2DKernelParams<T> p)\n{\n    //assert(kernelW % upx == 0);\n    //assert(kernelH % upy == 0);\n    const int tileInW = ((tileOutW - 1) * downx + kernelW - 1) / upx + 1;\n    const int tileInH = ((tileOutH - 1) * downy + kernelH - 1) / upy + 1;\n    __shared__ volatile float sk[kernelH][kernelW];\n    __shared__ volatile float sx[tileInH][tileInW];\n\n    // Calculate tile index.\n    int minorIdx = blockIdx.x;\n    int tileOutY = minorIdx / p.minorDim;\n    minorIdx -= tileOutY * p.minorDim;\n    tileOutY *= tileOutH;\n    int tileOutXBase = blockIdx.y * p.loopX * tileOutW;\n    int majorIdxBase = blockIdx.z * p.loopMajor;\n    if (tileOutXBase >= p.outW | tileOutY >= p.outH | majorIdxBase >= p.majorDim)\n        return;\n\n    // Load filter kernel (flipped).\n    for (int tapIdx = threadIdx.x; tapIdx < kernelH * kernelW; tapIdx += blockDim.x)\n    {\n        int ky = tapIdx / kernelW;\n        int kx = tapIdx - ky * kernelW;\n        float v = 0.0f;\n        if (kx < p.kernelW & ky < p.kernelH)\n            v = (float)p.k[(p.kernelH - 1 - ky) * p.kernelW + (p.kernelW - 1 - kx)];\n        sk[ky][kx] = v;\n    }\n\n    // Loop over majorDim and outX.\n    for (int loopMajor = 0, majorIdx = majorIdxBase; loopMajor < p.loopMajor & majorIdx < p.majorDim; loopMajor++, majorIdx++)\n    for (int loopX = 0, tileOutX = tileOutXBase; loopX < p.loopX & tileOutX < p.outW; loopX++, tileOutX += tileOutW)\n    {\n        // Load input pixels.\n        int tileMidX = tileOutX * downx + upx - 1 - p.padx0;\n        int tileMidY = tileOutY * downy + upy - 1 - p.pady0;\n        int tileInX = floorDiv(tileMidX, upx);\n        int tileInY = floorDiv(tileMidY, upy);\n        __syncthreads();\n        for (int inIdx = threadIdx.x; inIdx < tileInH * tileInW; inIdx += blockDim.x)\n        {\n            int relInY = inIdx / tileInW;\n            int relInX = inIdx - relInY * tileInW;\n            int inX = relInX + tileInX;\n            int inY = relInY + tileInY;\n            float v = 0.0f;\n            if (inX >= 0 & inY >= 0 & inX < p.inW & inY < p.inH)\n                v = (float)p.x[((majorIdx * p.inH + inY) * p.inW + inX) * p.minorDim + minorIdx];\n            sx[relInY][relInX] = v;\n        }\n\n        // Loop over output pixels.\n        __syncthreads();\n        for (int outIdx = threadIdx.x; outIdx < tileOutH * tileOutW; outIdx += blockDim.x)\n        {\n            int relOutY = outIdx / tileOutW;\n            int relOutX = outIdx - relOutY * tileOutW;\n            int outX = relOutX + tileOutX;\n            int outY = relOutY + tileOutY;\n\n            // Setup receptive field.\n            int midX = tileMidX + relOutX * downx;\n            int midY = tileMidY + relOutY * downy;\n            
int inX = floorDiv(midX, upx);\n            int inY = floorDiv(midY, upy);\n            int relInX = inX - tileInX;\n            int relInY = inY - tileInY;\n            int kernelX = (inX + 1) * upx - midX - 1; // flipped\n            int kernelY = (inY + 1) * upy - midY - 1; // flipped\n\n            // Inner loop.\n            float v = 0.0f;\n            #pragma unroll\n            for (int y = 0; y < kernelH / upy; y++)\n                #pragma unroll\n                for (int x = 0; x < kernelW / upx; x++)\n                    v += sx[relInY + y][relInX + x] * sk[kernelY + y * upy][kernelX + x * upx];\n\n            // Store result.\n            if (outX < p.outW & outY < p.outH)\n                p.y[((majorIdx * p.outH + outY) * p.outW + outX) * p.minorDim + minorIdx] = (T)v;\n        }\n    }\n}\n\n//------------------------------------------------------------------------\n// TensorFlow op.\n\ntemplate <class T>\nstruct UpFirDn2DOp : public OpKernel\n{\n    UpFirDn2DKernelParams<T> m_attribs;\n\n    UpFirDn2DOp(OpKernelConstruction* ctx) : OpKernel(ctx)\n    {\n        memset(&m_attribs, 0, sizeof(m_attribs));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"upx\", &m_attribs.upx));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"upy\", &m_attribs.upy));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"downx\", &m_attribs.downx));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"downy\", &m_attribs.downy));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"padx0\", &m_attribs.padx0));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"padx1\", &m_attribs.padx1));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"pady0\", &m_attribs.pady0));\n        OP_REQUIRES_OK(ctx, ctx->GetAttr(\"pady1\", &m_attribs.pady1));\n        OP_REQUIRES(ctx, m_attribs.upx >= 1 && m_attribs.upy >= 1, errors::InvalidArgument(\"upx and upy must be at least 1x1\"));\n        OP_REQUIRES(ctx, m_attribs.downx >= 1 && m_attribs.downy >= 1, errors::InvalidArgument(\"downx and downy must be at least 1x1\"));\n    }\n\n    void Compute(OpKernelContext* ctx)\n    {\n        UpFirDn2DKernelParams<T> p = m_attribs;\n        cudaStream_t stream = ctx->eigen_device<Eigen::GpuDevice>().stream();\n\n        const Tensor& x = ctx->input(0); // [majorDim, inH, inW, minorDim]\n        const Tensor& k = ctx->input(1); // [kernelH, kernelW]\n        p.x = x.flat<T>().data();\n        p.k = k.flat<T>().data();\n        OP_REQUIRES(ctx, x.dims() == 4, errors::InvalidArgument(\"input must have rank 4\"));\n        OP_REQUIRES(ctx, k.dims() == 2, errors::InvalidArgument(\"kernel must have rank 2\"));\n        OP_REQUIRES(ctx, x.NumElements() <= kint32max, errors::InvalidArgument(\"input too large\"));\n        OP_REQUIRES(ctx, k.NumElements() <= kint32max, errors::InvalidArgument(\"kernel too large\"));\n\n        p.majorDim  = (int)x.dim_size(0);\n        p.inH       = (int)x.dim_size(1);\n        p.inW       = (int)x.dim_size(2);\n        p.minorDim  = (int)x.dim_size(3);\n        p.kernelH   = (int)k.dim_size(0);\n        p.kernelW   = (int)k.dim_size(1);\n        OP_REQUIRES(ctx, p.kernelW >= 1 && p.kernelH >= 1, errors::InvalidArgument(\"kernel must be at least 1x1\"));\n\n        p.outW = (p.inW * p.upx + p.padx0 + p.padx1 - p.kernelW + p.downx) / p.downx;\n        p.outH = (p.inH * p.upy + p.pady0 + p.pady1 - p.kernelH + p.downy) / p.downy;\n        OP_REQUIRES(ctx, p.outW >= 1 && p.outH >= 1, errors::InvalidArgument(\"output must be at least 1x1\"));\n\n        Tensor* y = NULL; // [majorDim, outH, outW, minorDim]\n        TensorShape ys;\n        
ys.AddDim(p.majorDim);\n        ys.AddDim(p.outH);\n        ys.AddDim(p.outW);\n        ys.AddDim(p.minorDim);\n        OP_REQUIRES_OK(ctx, ctx->allocate_output(0, ys, &y));\n        p.y = y->flat<T>().data();\n        OP_REQUIRES(ctx, y->NumElements() <= kint32max, errors::InvalidArgument(\"output too large\"));\n\n        // Choose CUDA kernel to use.\n        void* cudaKernel = (void*)UpFirDn2DKernel_large<T>;\n        int tileOutW = -1;\n        int tileOutH = -1;\n        if (p.upx == 1 && p.upy == 1 && p.downx == 1 && p.downy == 1 && p.kernelW <= 7 && p.kernelH <= 7) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 1,1, 7,7, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 1 && p.downy == 1 && p.kernelW <= 6 && p.kernelH <= 6) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 1,1, 6,6, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 1 && p.downy == 1 && p.kernelW <= 5 && p.kernelH <= 5) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 1,1, 5,5, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 1 && p.downy == 1 && p.kernelW <= 4 && p.kernelH <= 4) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 1,1, 4,4, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 1 && p.downy == 1 && p.kernelW <= 3 && p.kernelH <= 3) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 1,1, 3,3, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 2 && p.upy == 2 && p.downx == 1 && p.downy == 1 && p.kernelW <= 8 && p.kernelH <= 8) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 2,2, 1,1, 8,8, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 2 && p.upy == 2 && p.downx == 1 && p.downy == 1 && p.kernelW <= 6 && p.kernelH <= 6) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 2,2, 1,1, 6,6, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 2 && p.upy == 2 && p.downx == 1 && p.downy == 1 && p.kernelW <= 4 && p.kernelH <= 4) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 2,2, 1,1, 4,4, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 2 && p.upy == 2 && p.downx == 1 && p.downy == 1 && p.kernelW <= 2 && p.kernelH <= 2) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 2,2, 1,1, 2,2, 64,16>; tileOutW = 64; tileOutH = 16; }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 2 && p.downy == 2 && p.kernelW <= 8 && p.kernelH <= 8) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 2,2, 8,8, 32,8>;  tileOutW = 32; tileOutH = 8;  }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 2 && p.downy == 2 && p.kernelW <= 6 && p.kernelH <= 6) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 2,2, 6,6, 32,8>;  tileOutW = 32; tileOutH = 8;  }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 2 && p.downy == 2 && p.kernelW <= 4 && p.kernelH <= 4) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 2,2, 4,4, 32,8>;  tileOutW = 32; tileOutH = 8;  }\n        if (p.upx == 1 && p.upy == 1 && p.downx == 2 && p.downy == 2 && p.kernelW <= 2 && p.kernelH <= 2) { cudaKernel = (void*)UpFirDn2DKernel_small<T, 1,1, 2,2, 2,2, 32,8>;  tileOutW = 32; tileOutH = 8;  }\n\n        // Choose launch params.\n        dim3 blockSize;\n        dim3 gridSize;\n        if (tileOutW > 0 && tileOutH > 0) // small\n        {\n            p.loopMajor = (p.majorDim - 1) / 16384 + 1;\n            p.loopX = 1;\n            blockSize = dim3(32 * 8, 1, 1);\n            gridSize = dim3(((p.outH - 1) / tileOutH + 1) * p.minorDim, (p.outW - 1) / 
(p.loopX * tileOutW) + 1, (p.majorDim - 1) / p.loopMajor + 1);\n        }\n        else // large\n        {\n            p.loopMajor = (p.majorDim - 1) / 16384 + 1;\n            p.loopX = 4;\n            blockSize = dim3(4, 32, 1);\n            gridSize = dim3((p.outH * p.minorDim - 1) / blockSize.x + 1, (p.outW - 1) / (p.loopX * blockSize.y) + 1, (p.majorDim - 1) / p.loopMajor + 1);\n        }\n\n        // Launch CUDA kernel.\n        void* args[] = {&p};\n        OP_CHECK_CUDA_ERROR(ctx, cudaLaunchKernel(cudaKernel, gridSize, blockSize, args, 0, stream));\n    }\n};\n\nREGISTER_OP(\"UpFirDn2D\")\n    .Input      (\"x: T\")\n    .Input      (\"k: T\")\n    .Output     (\"y: T\")\n    .Attr       (\"T: {float, half}\")\n    .Attr       (\"upx: int = 1\")\n    .Attr       (\"upy: int = 1\")\n    .Attr       (\"downx: int = 1\")\n    .Attr       (\"downy: int = 1\")\n    .Attr       (\"padx0: int = 0\")\n    .Attr       (\"padx1: int = 0\")\n    .Attr       (\"pady0: int = 0\")\n    .Attr       (\"pady1: int = 0\");\nREGISTER_KERNEL_BUILDER(Name(\"UpFirDn2D\").Device(DEVICE_GPU).TypeConstraint<float>(\"T\"), UpFirDn2DOp<float>);\nREGISTER_KERNEL_BUILDER(Name(\"UpFirDn2D\").Device(DEVICE_GPU).TypeConstraint<Eigen::half>(\"T\"), UpFirDn2DOp<Eigen::half>);\n\n//------------------------------------------------------------------------\n"
  },
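Both CUDA kernels above implement the same four-step pipeline that `upfirdn_2d.py` documents next: zero-insert upsampling, zero padding, FIR filtering, and strided downsampling. As an illustrative cross-check, here is a single-channel NumPy reference; the name `np_upfirdn_2d` is made up, padding is simplified to the same `pad0`/`pad1` on both axes, and negative (cropping) pads are not handled.

```python
import numpy as np

def np_upfirdn_2d(img, k, up=1, down=1, pad0=0, pad1=0):
    """Single-channel reference: upsample by zero insertion, zero-pad,
    convolve with `k` (correlate with the flipped kernel), then subsample."""
    h, w = img.shape
    ups = np.zeros((h * up, w * up), img.dtype)
    ups[::up, ::up] = img                             # insert up-1 zeros per pixel
    ups = np.pad(ups, ((pad0, pad1), (pad0, pad1)))   # zero padding only
    kh, kw = k.shape
    kf = k[::-1, ::-1]                                # flip => true convolution
    out = np.empty((ups.shape[0] - kh + 1, ups.shape[1] - kw + 1), img.dtype)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(ups[y:y + kh, x:x + kw] * kf)
    return out[::down, ::down]                        # throw away pixels

img = np.arange(9, dtype=np.float32).reshape(3, 3)
k = np.ones((2, 2), np.float32)                       # un-normalized box filter
y = np_upfirdn_2d(img, k, up=2, pad0=1, pad1=0)
print(y.shape)  # (6, 6): each pixel replicated 2x2, i.e. nearest-neighbor
```

The output width matches the op's shape formula `outW = (inW * upx + padx0 + padx1 - kernelW) // downx + 1`.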
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/ops/upfirdn_2d.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Custom TensorFlow ops for efficient resampling of 2D images.\"\"\"\n\nimport os\nimport numpy as np\nimport tensorflow as tf\nfrom .. import custom_ops\n\ndef _get_plugin():\n    return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')\n\n#----------------------------------------------------------------------------\n\ndef upfirdn_2d(x, k, upx=1, upy=1, downx=1, downy=1, padx0=0, padx1=0, pady0=0, pady1=0, impl='cuda'):\n    r\"\"\"Pad, upsample, FIR filter, and downsample a batch of 2D images.\n\n    Accepts a batch of 2D images of the shape `[majorDim, inH, inW, minorDim]`\n    and performs the following operations for each image, batched across\n    `majorDim` and `minorDim`:\n\n    1. Pad the image with zeros by the specified number of pixels on each side\n       (`padx0`, `padx1`, `pady0`, `pady1`). Specifying a negative value\n       corresponds to cropping the image.\n\n    2. Upsample the image by inserting the zeros after each pixel (`upx`, `upy`).\n\n    3. Convolve the image with the specified 2D FIR filter (`k`), shrinking the\n       image so that the footprint of all output pixels lies within the input image.\n\n    4. Downsample the image by throwing away pixels (`downx`, `downy`).\n\n    This sequence of operations bears close resemblance to scipy.signal.upfirdn().\n    The fused op is considerably more efficient than performing the same calculation\n    using standard TensorFlow ops. It supports gradients of arbitrary order.\n\n    Args:\n        x:      Input tensor of the shape `[majorDim, inH, inW, minorDim]`.\n        k:      2D FIR filter of the shape `[firH, firW]`.\n        upx:    Integer upsampling factor along the X-axis (default: 1).\n        upy:    Integer upsampling factor along the Y-axis (default: 1).\n        downx:  Integer downsampling factor along the X-axis (default: 1).\n        downy:  Integer downsampling factor along the Y-axis (default: 1).\n        padx0:  Number of pixels to pad on the left side (default: 0).\n        padx1:  Number of pixels to pad on the right side (default: 0).\n        pady0:  Number of pixels to pad on the top side (default: 0).\n        pady1:  Number of pixels to pad on the bottom side (default: 0).\n        impl:   Name of the implementation to use. 
Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the shape `[majorDim, outH, outW, minorDim]`, and same datatype as `x`.\n    \"\"\"\n\n    impl_dict = {\n        'ref':  _upfirdn_2d_ref,\n        'cuda': _upfirdn_2d_cuda,\n    }\n    return impl_dict[impl](x=x, k=k, upx=upx, upy=upy, downx=downx, downy=downy, padx0=padx0, padx1=padx1, pady0=pady0, pady1=pady1)\n\n#----------------------------------------------------------------------------\n\ndef _upfirdn_2d_ref(x, k, upx, upy, downx, downy, padx0, padx1, pady0, pady1):\n    \"\"\"Slow reference implementation of `upfirdn_2d()` using standard TensorFlow ops.\"\"\"\n\n    x = tf.convert_to_tensor(x)\n    k = np.asarray(k, dtype=np.float32)\n    assert x.shape.rank == 4\n    inH = x.shape[1].value\n    inW = x.shape[2].value\n    minorDim = _shape(x, 3)\n    kernelH, kernelW = k.shape\n    assert inW >= 1 and inH >= 1\n    assert kernelW >= 1 and kernelH >= 1\n    assert isinstance(upx, int) and isinstance(upy, int)\n    assert isinstance(downx, int) and isinstance(downy, int)\n    assert isinstance(padx0, int) and isinstance(padx1, int)\n    assert isinstance(pady0, int) and isinstance(pady1, int)\n\n    # Upsample (insert zeros).\n    x = tf.reshape(x, [-1, inH, 1, inW, 1, minorDim])\n    x = tf.pad(x, [[0, 0], [0, 0], [0, upy - 1], [0, 0], [0, upx - 1], [0, 0]])\n    x = tf.reshape(x, [-1, inH * upy, inW * upx, minorDim])\n\n    # Pad (crop if negative).\n    x = tf.pad(x, [[0, 0], [max(pady0, 0), max(pady1, 0)], [max(padx0, 0), max(padx1, 0)], [0, 0]])\n    x = x[:, max(-pady0, 0) : x.shape[1].value - max(-pady1, 0), max(-padx0, 0) : x.shape[2].value - max(-padx1, 0), :]\n\n    # Convolve with filter.\n    x = tf.transpose(x, [0, 3, 1, 2])\n    x = tf.reshape(x, [-1, 1, inH * upy + pady0 + pady1, inW * upx + padx0 + padx1])\n    w = tf.constant(k[::-1, ::-1, np.newaxis, np.newaxis], dtype=x.dtype)\n    x = tf.nn.conv2d(x, w, strides=[1,1,1,1], padding='VALID', data_format='NCHW')\n    x = tf.reshape(x, [-1, minorDim, inH * upy + pady0 + pady1 - kernelH + 1, inW * upx + padx0 + padx1 - kernelW + 1])\n    x = tf.transpose(x, [0, 2, 3, 1])\n\n    # Downsample (throw away pixels).\n    return x[:, ::downy, ::downx, :]\n\n#----------------------------------------------------------------------------\n\ndef _upfirdn_2d_cuda(x, k, upx, upy, downx, downy, padx0, padx1, pady0, pady1):\n    \"\"\"Fast CUDA implementation of `upfirdn_2d()` using custom ops.\"\"\"\n\n    x = tf.convert_to_tensor(x)\n    k = np.asarray(k, dtype=np.float32)\n    majorDim, inH, inW, minorDim = x.shape.as_list()\n    kernelH, kernelW = k.shape\n    assert inW >= 1 and inH >= 1\n    assert kernelW >= 1 and kernelH >= 1\n    assert isinstance(upx, int) and isinstance(upy, int)\n    assert isinstance(downx, int) and isinstance(downy, int)\n    assert isinstance(padx0, int) and isinstance(padx1, int)\n    assert isinstance(pady0, int) and isinstance(pady1, int)\n\n    outW = (inW * upx + padx0 + padx1 - kernelW) // downx + 1\n    outH = (inH * upy + pady0 + pady1 - kernelH) // downy + 1\n    assert outW >= 1 and outH >= 1\n\n    kc = tf.constant(k, dtype=x.dtype)\n    gkc = tf.constant(k[::-1, ::-1], dtype=x.dtype)\n    gpadx0 = kernelW - padx0 - 1\n    gpady0 = kernelH - pady0 - 1\n    gpadx1 = inW * upx - outW * downx + padx0 - upx + 1\n    gpady1 = inH * upy - outH * downy + pady0 - upy + 1\n\n    @tf.custom_gradient\n    def func(x):\n        y = _get_plugin().up_fir_dn2d(x=x, k=kc, upx=upx, upy=upy, downx=downx, downy=downy, padx0=padx0, 
padx1=padx1, pady0=pady0, pady1=pady1)\n        y.set_shape([majorDim, outH, outW, minorDim])\n        @tf.custom_gradient\n        def grad(dy):\n            dx = _get_plugin().up_fir_dn2d(x=dy, k=gkc, upx=downx, upy=downy, downx=upx, downy=upy, padx0=gpadx0, padx1=gpadx1, pady0=gpady0, pady1=gpady1)\n            dx.set_shape([majorDim, inH, inW, minorDim])\n            return dx, func\n        return y, grad\n    return func(x)\n\n#----------------------------------------------------------------------------\n\ndef filter_2d(x, k, gain=1, data_format='NCHW', impl='cuda'):\n    r\"\"\"Filter a batch of 2D images with the given FIR filter.\n\n    Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]`\n    and filters each image with the given filter. The filter is normalized so that\n    if the input pixels are constant, they will be scaled by the specified `gain`.\n    Pixels outside the image are assumed to be zero.\n\n    Args:\n        x:            Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n        k:            FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n        gain:         Scaling factor for signal magnitude (default: 1.0).\n        data_format:  `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n        impl:         Name of the implementation to use. Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the same shape and datatype as `x`.\n    \"\"\"\n\n    k = _setup_kernel(k) * gain\n    p = k.shape[0] - 1\n    return _simple_upfirdn_2d(x, k, pad0=(p+1)//2, pad1=p//2, data_format=data_format, impl=impl)\n\n#----------------------------------------------------------------------------\n\ndef upsample_2d(x, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'):\n    r\"\"\"Upsample a batch of 2D images with the given filter.\n\n    Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]`\n    and upsamples each image with the given filter. The filter is normalized so that\n    if the input pixels are constant, they will be scaled by the specified `gain`.\n    Pixels outside the image are assumed to be zero, and the filter is padded with\n    zeros so that its shape is a multiple of the upsampling factor.\n\n    Args:\n        x:            Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n        k:            FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n                      The default is `[1] * factor`, which corresponds to nearest-neighbor\n                      upsampling.\n        factor:       Integer upsampling factor (default: 2).\n        gain:         Scaling factor for signal magnitude (default: 1.0).\n        data_format:  `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n        impl:         Name of the implementation to use. 
Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the shape `[N, C, H * factor, W * factor]` or\n        `[N, H * factor, W * factor, C]`, and same datatype as `x`.\n    \"\"\"\n\n    assert isinstance(factor, int) and factor >= 1\n    if k is None:\n        k = [1] * factor\n    k = _setup_kernel(k) * (gain * (factor ** 2))\n    p = k.shape[0] - factor\n    return _simple_upfirdn_2d(x, k, up=factor, pad0=(p+1)//2+factor-1, pad1=p//2, data_format=data_format, impl=impl)\n\n#----------------------------------------------------------------------------\n\ndef downsample_2d(x, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'):\n    r\"\"\"Downsample a batch of 2D images with the given filter.\n\n    Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]`\n    and downsamples each image with the given filter. The filter is normalized so that\n    if the input pixels are constant, they will be scaled by the specified `gain`.\n    Pixels outside the image are assumed to be zero, and the filter is padded with\n    zeros so that its shape is a multiple of the downsampling factor.\n\n    Args:\n        x:            Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n        k:            FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n                      The default is `[1] * factor`, which corresponds to average pooling.\n        factor:       Integer downsampling factor (default: 2).\n        gain:         Scaling factor for signal magnitude (default: 1.0).\n        data_format:  `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n        impl:         Name of the implementation to use. Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the shape `[N, C, H // factor, W // factor]` or\n        `[N, H // factor, W // factor, C]`, and same datatype as `x`.\n    \"\"\"\n\n    assert isinstance(factor, int) and factor >= 1\n    if k is None:\n        k = [1] * factor\n    k = _setup_kernel(k) * gain\n    p = k.shape[0] - factor\n    return _simple_upfirdn_2d(x, k, down=factor, pad0=(p+1)//2, pad1=p//2, data_format=data_format, impl=impl)\n\n#----------------------------------------------------------------------------\n\ndef upsample_conv_2d(x, w, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'):\n    r\"\"\"Fused `upsample_2d()` followed by `tf.nn.conv2d()`.\n\n    Padding is performed only once at the beginning, not between the operations.\n    The fused op is considerably more efficient than performing the same calculation\n    using standard TensorFlow ops. It supports gradients of arbitrary order.\n\n    Args:\n        x:            Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n        w:            Weight tensor of the shape `[filterH, filterW, inChannels, outChannels]`.\n                      Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`.\n        k:            FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n                      The default is `[1] * factor`, which corresponds to nearest-neighbor\n                      upsampling.\n        factor:       Integer upsampling factor (default: 2).\n        gain:         Scaling factor for signal magnitude (default: 1.0).\n        data_format:  `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n        impl:         Name of the implementation to use. 
Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the shape `[N, C, H * factor, W * factor]` or\n        `[N, H * factor, W * factor, C]`, and same datatype as `x`.\n    \"\"\"\n\n    assert isinstance(factor, int) and factor >= 1\n\n    # Check weight shape.\n    w = tf.convert_to_tensor(w)\n    assert w.shape.rank == 4\n    convH = w.shape[0].value\n    convW = w.shape[1].value\n    inC = _shape(w, 2)\n    outC = _shape(w, 3)\n    assert convW == convH\n\n    # Setup filter kernel.\n    if k is None:\n        k = [1] * factor\n    k = _setup_kernel(k) * (gain * (factor ** 2))\n    p = (k.shape[0] - factor) - (convW - 1)\n\n    # Determine data dimensions.\n    if data_format == 'NCHW':\n        stride = [1, 1, factor, factor]\n        output_shape = [_shape(x, 0), outC, (_shape(x, 2) - 1) * factor + convH, (_shape(x, 3) - 1) * factor + convW]\n        num_groups = _shape(x, 1) // inC\n    else:\n        stride = [1, factor, factor, 1]\n        output_shape = [_shape(x, 0), (_shape(x, 1) - 1) * factor + convH, (_shape(x, 2) - 1) * factor + convW, outC]\n        num_groups = _shape(x, 3) // inC\n\n    # Transpose weights.\n    w = tf.reshape(w, [convH, convW, inC, num_groups, -1])\n    w = tf.transpose(w[::-1, ::-1], [0, 1, 4, 3, 2])\n    w = tf.reshape(w, [convH, convW, -1, num_groups * inC])\n\n    # Execute.\n    x = tf.nn.conv2d_transpose(x, w, output_shape=output_shape, strides=stride, padding='VALID', data_format=data_format)\n    return _simple_upfirdn_2d(x, k, pad0=(p+1)//2+factor-1, pad1=p//2+1, data_format=data_format, impl=impl)\n\n#----------------------------------------------------------------------------\n\ndef conv_downsample_2d(x, w, k=None, factor=2, gain=1, data_format='NCHW', impl='cuda'):\n    r\"\"\"Fused `tf.nn.conv2d()` followed by `downsample_2d()`.\n\n    Padding is performed only once at the beginning, not between the operations.\n    The fused op is considerably more efficient than performing the same calculation\n    using standard TensorFlow ops. It supports gradients of arbitrary order.\n\n    Args:\n        x:            Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.\n        w:            Weight tensor of the shape `[filterH, filterW, inChannels, outChannels]`.\n                      Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`.\n        k:            FIR filter of the shape `[firH, firW]` or `[firN]` (separable).\n                      The default is `[1] * factor`, which corresponds to average pooling.\n        factor:       Integer downsampling factor (default: 2).\n        gain:         Scaling factor for signal magnitude (default: 1.0).\n        data_format:  `'NCHW'` or `'NHWC'` (default: `'NCHW'`).\n        impl:         Name of the implementation to use. 
Can be `\"ref\"` or `\"cuda\"` (default).\n\n    Returns:\n        Tensor of the shape `[N, C, H // factor, W // factor]` or\n        `[N, H // factor, W // factor, C]`, and same datatype as `x`.\n    \"\"\"\n\n    assert isinstance(factor, int) and factor >= 1\n    w = tf.convert_to_tensor(w)\n    convH, convW, _inC, _outC = w.shape.as_list()\n    assert convW == convH\n    if k is None:\n        k = [1] * factor\n    k = _setup_kernel(k) * gain\n    p = (k.shape[0] - factor) + (convW - 1)\n    if data_format == 'NCHW':\n        s = [1, 1, factor, factor]\n    else:\n        s = [1, factor, factor, 1]\n    x = _simple_upfirdn_2d(x, k, pad0=(p+1)//2, pad1=p//2, data_format=data_format, impl=impl)\n    return tf.nn.conv2d(x, w, strides=s, padding='VALID', data_format=data_format)\n\n#----------------------------------------------------------------------------\n# Internal helper funcs.\n\ndef _shape(tf_expr, dim_idx):\n    if tf_expr.shape.rank is not None:\n        dim = tf_expr.shape[dim_idx].value\n        if dim is not None:\n            return dim\n    return tf.shape(tf_expr)[dim_idx]\n\ndef _setup_kernel(k):\n    k = np.asarray(k, dtype=np.float32)\n    if k.ndim == 1:\n        k = np.outer(k, k)\n    k /= np.sum(k)\n    assert k.ndim == 2\n    assert k.shape[0] == k.shape[1]\n    return k\n\ndef _simple_upfirdn_2d(x, k, up=1, down=1, pad0=0, pad1=0, data_format='NCHW', impl='cuda'):\n    assert data_format in ['NCHW', 'NHWC']\n    assert x.shape.rank == 4\n    y = x\n    if data_format == 'NCHW':\n        y = tf.reshape(y, [-1, _shape(y, 2), _shape(y, 3), 1])\n    y = upfirdn_2d(y, k, upx=up, upy=up, downx=down, downy=down, padx0=pad0, padx1=pad1, pady0=pad0, pady1=pad1, impl=impl)\n    if data_format == 'NCHW':\n        y = tf.reshape(y, [-1, _shape(x, 1), _shape(y, 1), _shape(y, 2)])\n    return y\n\n#----------------------------------------------------------------------------\n"
  },
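A quick graph-construction sketch of the high-level wrappers above, assuming a TensorFlow 1.x environment with the FQ-StyleGAN root on `sys.path`. Using `impl='ref'` avoids building the CUDA plugin, and `[1, 3, 3, 1]` is the separable resampling filter commonly used in StyleGAN2 configs; only static shapes are inspected, so no session or GPU is needed.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, as the rest of tflib does
from dnnlib.tflib.ops.upfirdn_2d import upsample_2d, downsample_2d

x = tf.constant(np.random.randn(1, 3, 32, 32), dtype=tf.float32)  # NCHW
hi = upsample_2d(x, k=[1, 3, 3, 1], factor=2, impl='ref')
lo = downsample_2d(hi, k=[1, 3, 3, 1], factor=2, impl='ref')
print(hi.shape)  # (1, 3, 64, 64)
print(lo.shape)  # (1, 3, 32, 32)
```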
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/optimizer.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Helper wrapper for a Tensorflow optimizer.\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom collections import OrderedDict\nfrom typing import List, Union\n\nfrom . import autosummary\nfrom . import tfutil\nfrom .. import util\n\nfrom .tfutil import TfExpression, TfExpressionEx\n\ntry:\n    # TensorFlow 1.13\n    from tensorflow.python.ops import nccl_ops\nexcept:\n    # Older TensorFlow versions\n    import tensorflow.contrib.nccl as nccl_ops\n\nclass Optimizer:\n    \"\"\"A Wrapper for tf.train.Optimizer.\n\n    Automatically takes care of:\n    - Gradient averaging for multi-GPU training.\n    - Gradient accumulation for arbitrarily large minibatches.\n    - Dynamic loss scaling and typecasts for FP16 training.\n    - Ignoring corrupted gradients that contain NaNs/Infs.\n    - Reporting statistics.\n    - Well-chosen default settings.\n    \"\"\"\n\n    def __init__(self,\n        name:                   str             = \"Train\",                  # Name string that will appear in TensorFlow graph.\n        tf_optimizer:           str             = \"tf.train.AdamOptimizer\", # Underlying optimizer class.\n        learning_rate:          TfExpressionEx  = 0.001,                    # Learning rate. Can vary over time.\n        minibatch_multiplier:   TfExpressionEx  = None,                     # Treat N consecutive minibatches as one by accumulating gradients.\n        share:                  \"Optimizer\"     = None,                     # Share internal state with a previously created optimizer?\n        use_loss_scaling:       bool            = False,                    # Enable dynamic loss scaling for robust mixed-precision training?\n        loss_scaling_init:      float           = 64.0,                     # Log2 of initial loss scaling factor.\n        loss_scaling_inc:       float           = 0.0005,                   # Log2 of per-minibatch loss scaling increment when there is no overflow.\n        loss_scaling_dec:       float           = 1.0,                      # Log2 of per-minibatch loss scaling decrement when there is an overflow.\n        report_mem_usage:       bool            = False,                    # Report fine-grained memory usage statistics in TensorBoard?\n        **kwargs):\n\n        # Public fields.\n        self.name                   = name\n        self.learning_rate          = learning_rate\n        self.minibatch_multiplier   = minibatch_multiplier\n        self.id                     = self.name.replace(\"/\", \".\")\n        self.scope                  = tf.get_default_graph().unique_name(self.id)\n        self.optimizer_class        = util.get_obj_by_name(tf_optimizer)\n        self.optimizer_kwargs       = dict(kwargs)\n        self.use_loss_scaling       = use_loss_scaling\n        self.loss_scaling_init      = loss_scaling_init\n        self.loss_scaling_inc       = loss_scaling_inc\n        self.loss_scaling_dec       = loss_scaling_dec\n\n        # Private fields.\n        self._updates_applied       = False\n        self._devices               = OrderedDict() # device_name => EasyDict()\n        self._shared_optimizers     = OrderedDict() # device_name => optimizer_class\n        self._gradient_shapes       = None          # [shape, ...]\n        self._report_mem_usage      = 
report_mem_usage\n\n        # Validate arguments.\n        assert callable(self.optimizer_class)\n\n        # Share internal state if requested.\n        if share is not None:\n            assert isinstance(share, Optimizer)\n            assert self.optimizer_class is share.optimizer_class\n            assert self.learning_rate is share.learning_rate\n            assert self.optimizer_kwargs == share.optimizer_kwargs\n            self._shared_optimizers = share._shared_optimizers # pylint: disable=protected-access\n\n    def _get_device(self, device_name: str):\n        \"\"\"Get internal state for the given TensorFlow device.\"\"\"\n        tfutil.assert_tf_initialized()\n        if device_name in self._devices:\n            return self._devices[device_name]\n\n        # Initialize fields.\n        device = util.EasyDict()\n        device.name             = device_name\n        device.optimizer        = None          # Underlying optimizer:     optimizer_class\n        device.loss_scaling_var = None          # Log2 of loss scaling:     tf.Variable\n        device.grad_raw         = OrderedDict() # Raw gradients:            var => [grad, ...]\n        device.grad_clean       = OrderedDict() # Clean gradients:          var => grad\n        device.grad_acc_vars    = OrderedDict() # Accumulation sums:        var => tf.Variable\n        device.grad_acc_count   = None          # Accumulation counter:     tf.Variable\n        device.grad_acc         = OrderedDict() # Accumulated gradients:    var => grad\n\n        # Setup TensorFlow objects.\n        with tfutil.absolute_name_scope(self.scope + \"/Devices\"), tf.device(device_name), tf.control_dependencies(None):\n            if device_name not in self._shared_optimizers:\n                optimizer_name = self.scope.replace(\"/\", \"_\") + \"_opt%d\" % len(self._shared_optimizers)\n                self._shared_optimizers[device_name] = self.optimizer_class(name=optimizer_name, learning_rate=self.learning_rate, **self.optimizer_kwargs)\n            device.optimizer = self._shared_optimizers[device_name]\n            if self.use_loss_scaling:\n                device.loss_scaling_var = tf.Variable(np.float32(self.loss_scaling_init), trainable=False, name=\"loss_scaling_var\")\n\n        # Register device.\n        self._devices[device_name] = device\n        return device\n\n    def register_gradients(self, loss: TfExpression, trainable_vars: Union[List, dict]) -> None:\n        \"\"\"Register the gradients of the given loss function with respect to the given variables.\n        Intended to be called once per GPU.\"\"\"\n        tfutil.assert_tf_initialized()\n        assert not self._updates_applied\n        device = self._get_device(loss.device)\n\n        # Validate trainables.\n        if isinstance(trainable_vars, dict):\n            trainable_vars = list(trainable_vars.values())  # allow passing in Network.trainables as vars\n        assert isinstance(trainable_vars, list) and len(trainable_vars) >= 1\n        assert all(tfutil.is_tf_expression(expr) for expr in trainable_vars + [loss])\n        assert all(var.device == device.name for var in trainable_vars)\n\n        # Validate shapes.\n        if self._gradient_shapes is None:\n            self._gradient_shapes = [var.shape.as_list() for var in trainable_vars]\n        assert len(trainable_vars) == len(self._gradient_shapes)\n        assert all(var.shape.as_list() == var_shape for var, var_shape in zip(trainable_vars, self._gradient_shapes))\n\n        # Report memory usage if 
requested.\n        deps = []\n        if self._report_mem_usage:\n            self._report_mem_usage = False\n            try:\n                with tf.name_scope(self.id + '_mem'), tf.device(device.name), tf.control_dependencies([loss]):\n                    deps.append(autosummary.autosummary(self.id + \"/mem_usage_gb\", tf.contrib.memory_stats.BytesInUse() / 2**30))\n            except tf.errors.NotFoundError:\n                pass\n\n        # Compute gradients.\n        with tf.name_scope(self.id + \"_grad\"), tf.device(device.name), tf.control_dependencies(deps):\n            loss = self.apply_loss_scaling(tf.cast(loss, tf.float32))\n            gate = tf.train.Optimizer.GATE_NONE  # disable gating to reduce memory usage\n            grad_list = device.optimizer.compute_gradients(loss=loss, var_list=trainable_vars, gate_gradients=gate)\n\n        # Register gradients.\n        for grad, var in grad_list:\n            if var not in device.grad_raw:\n                device.grad_raw[var] = []\n            device.grad_raw[var].append(grad)\n\n    def apply_updates(self, allow_no_op: bool = False) -> tf.Operation:\n        \"\"\"Construct training op to update the registered variables based on their gradients.\"\"\"\n        tfutil.assert_tf_initialized()\n        assert not self._updates_applied\n        self._updates_applied = True\n        all_ops = []\n\n        # Check for no-op.\n        if allow_no_op and len(self._devices) == 0:\n            with tfutil.absolute_name_scope(self.scope):\n                return tf.no_op(name='TrainingOp')\n\n        # Clean up gradients.\n        for device_idx, device in enumerate(self._devices.values()):\n            with tfutil.absolute_name_scope(self.scope + \"/Clean%d\" % device_idx), tf.device(device.name):\n                for var, grad in device.grad_raw.items():\n\n                    # Filter out disconnected gradients and convert to float32.\n                    grad = [g for g in grad if g is not None]\n                    grad = [tf.cast(g, tf.float32) for g in grad]\n\n                    # Sum within the device.\n                    if len(grad) == 0:\n                        grad = tf.zeros(var.shape)  # No gradients => zero.\n                    elif len(grad) == 1:\n                        grad = grad[0]              # Single gradient => use as is.\n                    else:\n                        grad = tf.add_n(grad)       # Multiple gradients => sum.\n\n                    # Scale as needed.\n                    scale = 1.0 / len(device.grad_raw[var]) / len(self._devices)\n                    scale = tf.constant(scale, dtype=tf.float32, name=\"scale\")\n                    if self.minibatch_multiplier is not None:\n                        scale /= tf.cast(self.minibatch_multiplier, tf.float32)\n                    scale = self.undo_loss_scaling(scale)\n                    device.grad_clean[var] = grad * scale\n\n        # Sum gradients across devices.\n        if len(self._devices) > 1:\n            with tfutil.absolute_name_scope(self.scope + \"/Broadcast\"), tf.device(None):\n                for all_vars in zip(*[device.grad_clean.keys() for device in self._devices.values()]):\n                    if len(all_vars) > 0 and all(dim > 0 for dim in all_vars[0].shape.as_list()): # NCCL does not support zero-sized tensors.\n                        all_grads = [device.grad_clean[var] for device, var in zip(self._devices.values(), all_vars)]\n                        all_grads = nccl_ops.all_sum(all_grads)\n                        
for device, var, grad in zip(self._devices.values(), all_vars, all_grads):\n                            device.grad_clean[var] = grad\n\n        # Apply updates separately on each device.\n        for device_idx, device in enumerate(self._devices.values()):\n            with tfutil.absolute_name_scope(self.scope + \"/Apply%d\" % device_idx), tf.device(device.name):\n                # pylint: disable=cell-var-from-loop\n\n                # Accumulate gradients over time.\n                if self.minibatch_multiplier is None:\n                    acc_ok = tf.constant(True, name='acc_ok')\n                    device.grad_acc = OrderedDict(device.grad_clean)\n                else:\n                    # Create variables.\n                    with tf.control_dependencies(None):\n                        for var in device.grad_clean.keys():\n                            device.grad_acc_vars[var] = tf.Variable(tf.zeros(var.shape), trainable=False, name=\"grad_acc_var\")\n                        device.grad_acc_count = tf.Variable(tf.zeros([]), trainable=False, name=\"grad_acc_count\")\n\n                    # Track counter.\n                    count_cur = device.grad_acc_count + 1.0\n                    count_inc_op = lambda: tf.assign(device.grad_acc_count, count_cur)\n                    count_reset_op = lambda: tf.assign(device.grad_acc_count, tf.zeros([]))\n                    acc_ok = (count_cur >= tf.cast(self.minibatch_multiplier, tf.float32))\n                    all_ops.append(tf.cond(acc_ok, count_reset_op, count_inc_op))\n\n                    # Track gradients.\n                    for var, grad in device.grad_clean.items():\n                        acc_var = device.grad_acc_vars[var]\n                        acc_cur = acc_var + grad\n                        device.grad_acc[var] = acc_cur\n                        with tf.control_dependencies([acc_cur]):\n                            acc_inc_op = lambda: tf.assign(acc_var, acc_cur)\n                            acc_reset_op = lambda: tf.assign(acc_var, tf.zeros(var.shape))\n                            all_ops.append(tf.cond(acc_ok, acc_reset_op, acc_inc_op))\n\n                # No overflow => apply gradients.\n                all_ok = tf.reduce_all(tf.stack([acc_ok] + [tf.reduce_all(tf.is_finite(g)) for g in device.grad_acc.values()]))\n                apply_op = lambda: device.optimizer.apply_gradients([(tf.cast(grad, var.dtype), var) for var, grad in device.grad_acc.items()])\n                all_ops.append(tf.cond(all_ok, apply_op, tf.no_op))\n\n                # Adjust loss scaling.\n                if self.use_loss_scaling:\n                    ls_inc_op = lambda: tf.assign_add(device.loss_scaling_var, self.loss_scaling_inc)\n                    ls_dec_op = lambda: tf.assign_sub(device.loss_scaling_var, self.loss_scaling_dec)\n                    ls_update_op = lambda: tf.group(tf.cond(all_ok, ls_inc_op, ls_dec_op))\n                    all_ops.append(tf.cond(acc_ok, ls_update_op, tf.no_op))\n\n                # Last device => report statistics.\n                if device_idx == len(self._devices) - 1:\n                    all_ops.append(autosummary.autosummary(self.id + \"/learning_rate\", self.learning_rate))\n                    all_ops.append(autosummary.autosummary(self.id + \"/overflow_frequency\", tf.where(all_ok, 0, 1), condition=acc_ok))\n                    if self.use_loss_scaling:\n                        all_ops.append(autosummary.autosummary(self.id + \"/loss_scaling_log2\", device.loss_scaling_var))\n\n        # 
Initialize variables.\n        self.reset_optimizer_state()\n        if self.use_loss_scaling:\n            tfutil.init_uninitialized_vars([device.loss_scaling_var for device in self._devices.values()])\n        if self.minibatch_multiplier is not None:\n            tfutil.run([var.initializer for device in self._devices.values() for var in list(device.grad_acc_vars.values()) + [device.grad_acc_count]])\n\n        # Group everything into a single op.\n        with tfutil.absolute_name_scope(self.scope):\n            return tf.group(*all_ops, name=\"TrainingOp\")\n\n    def reset_optimizer_state(self) -> None:\n        \"\"\"Reset internal state of the underlying optimizer.\"\"\"\n        tfutil.assert_tf_initialized()\n        tfutil.run([var.initializer for device in self._devices.values() for var in device.optimizer.variables()])\n\n    def get_loss_scaling_var(self, device: str) -> Union[tf.Variable, None]:\n        \"\"\"Get or create variable representing log2 of the current dynamic loss scaling factor.\"\"\"\n        return self._get_device(device).loss_scaling_var\n\n    def apply_loss_scaling(self, value: TfExpression) -> TfExpression:\n        \"\"\"Apply dynamic loss scaling for the given expression.\"\"\"\n        assert tfutil.is_tf_expression(value)\n        if not self.use_loss_scaling:\n            return value\n        return value * tfutil.exp2(self.get_loss_scaling_var(value.device))\n\n    def undo_loss_scaling(self, value: TfExpression) -> TfExpression:\n        \"\"\"Undo the effect of dynamic loss scaling for the given expression.\"\"\"\n        assert tfutil.is_tf_expression(value)\n        if not self.use_loss_scaling:\n            return value\n        return value * tfutil.exp2(-self.get_loss_scaling_var(value.device)) # pylint: disable=invalid-unary-operand-type\n\n\nclass SimpleAdam:\n    \"\"\"Simplified version of tf.train.AdamOptimizer that behaves identically when used with dnnlib.tflib.Optimizer.\"\"\"\n\n    def __init__(self, name=\"Adam\", learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):\n        self.name = name\n        self.learning_rate = learning_rate\n        self.beta1 = beta1\n        self.beta2 = beta2\n        self.epsilon = epsilon\n        self.all_state_vars = []\n\n    def variables(self):\n        return self.all_state_vars\n\n    def compute_gradients(self, loss, var_list, gate_gradients=tf.train.Optimizer.GATE_NONE):\n        assert gate_gradients == tf.train.Optimizer.GATE_NONE\n        return list(zip(tf.gradients(loss, var_list), var_list))\n\n    def apply_gradients(self, grads_and_vars):\n        with tf.name_scope(self.name):\n            state_vars = []\n            update_ops = []\n\n            # Adjust learning rate to deal with startup bias.\n            with tf.control_dependencies(None):\n                b1pow_var = tf.Variable(dtype=tf.float32, initial_value=1, trainable=False)\n                b2pow_var = tf.Variable(dtype=tf.float32, initial_value=1, trainable=False)\n                state_vars += [b1pow_var, b2pow_var]\n            b1pow_new = b1pow_var * self.beta1\n            b2pow_new = b2pow_var * self.beta2\n            update_ops += [tf.assign(b1pow_var, b1pow_new), tf.assign(b2pow_var, b2pow_new)]\n            lr_new = self.learning_rate * tf.sqrt(1 - b2pow_new) / (1 - b1pow_new)\n\n            # Construct ops to update each variable.\n            for grad, var in grads_and_vars:\n                with tf.control_dependencies(None):\n                    m_var = tf.Variable(dtype=tf.float32, 
initial_value=tf.zeros_like(var), trainable=False)\n                    v_var = tf.Variable(dtype=tf.float32, initial_value=tf.zeros_like(var), trainable=False)\n                    state_vars += [m_var, v_var]\n                m_new = self.beta1 * m_var + (1 - self.beta1) * grad\n                v_new = self.beta2 * v_var + (1 - self.beta2) * tf.square(grad)\n                var_delta = lr_new * m_new / (tf.sqrt(v_new) + self.epsilon)\n                update_ops += [tf.assign(m_var, m_new), tf.assign(v_var, v_new), tf.assign_sub(var, var_delta)]\n\n            # Group everything together.\n            self.all_state_vars += state_vars\n            return tf.group(*update_ops)\n"
  },
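The dynamic loss-scaling policy in `Optimizer` above (scale the loss by `2**s`, unscale the gradients, skip the update and shrink `s` fast on overflow, otherwise grow `s` slowly) can be summarized framework-free. In this sketch `grad_fn`, `params`, and `sgd_step` are hypothetical stand-ins, not repo APIs; only the policy mirrors the class.

```python
import numpy as np

scale_log2 = 64.0          # loss_scaling_init: log2 of the scale factor
INC, DEC = 0.0005, 1.0     # loss_scaling_inc / loss_scaling_dec (log2 units)

def sgd_step(params, grad_fn, lr=1e-3):
    """One update with dynamic loss scaling (illustrative only)."""
    global scale_log2
    # Gradients are computed on the loss multiplied by 2**scale_log2 ...
    grads = grad_fn(params, loss_scale=2.0 ** scale_log2)
    # ... and unscaled afterwards (cf. apply_loss_scaling / undo_loss_scaling).
    grads = [g * 2.0 ** -scale_log2 for g in grads]
    if all(np.isfinite(g).all() for g in grads):
        scale_log2 += INC  # no overflow: nudge the scale up slowly
        return [p - lr * g for p, g in zip(params, grads)]
    scale_log2 -= DEC      # overflow: halve the scale, skip this update
    return params
```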
  {
    "path": "FQ-StyleGAN/dnnlib/tflib/tfutil.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Miscellaneous helper utils for Tensorflow.\"\"\"\n\nimport os\nimport numpy as np\nimport tensorflow as tf\n\n# Silence deprecation warnings from TensorFlow 1.13 onwards\nimport logging\nlogging.getLogger('tensorflow').setLevel(logging.ERROR)\nimport tensorflow.contrib   # requires TensorFlow 1.x!\ntf.contrib = tensorflow.contrib\n\nfrom typing import Any, Iterable, List, Union\n\nTfExpression = Union[tf.Tensor, tf.Variable, tf.Operation]\n\"\"\"A type that represents a valid Tensorflow expression.\"\"\"\n\nTfExpressionEx = Union[TfExpression, int, float, np.ndarray]\n\"\"\"A type that can be converted to a valid Tensorflow expression.\"\"\"\n\n\ndef run(*args, **kwargs) -> Any:\n    \"\"\"Run the specified ops in the default session.\"\"\"\n    assert_tf_initialized()\n    return tf.get_default_session().run(*args, **kwargs)\n\n\ndef is_tf_expression(x: Any) -> bool:\n    \"\"\"Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation.\"\"\"\n    return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation))\n\n\ndef shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:\n    \"\"\"Convert a Tensorflow shape to a list of ints. Retained for backwards compatibility -- use TensorShape.as_list() in new code.\"\"\"\n    return [dim.value for dim in shape]\n\n\ndef flatten(x: TfExpressionEx) -> TfExpression:\n    \"\"\"Shortcut function for flattening a tensor.\"\"\"\n    with tf.name_scope(\"Flatten\"):\n        return tf.reshape(x, [-1])\n\n\ndef log2(x: TfExpressionEx) -> TfExpression:\n    \"\"\"Logarithm in base 2.\"\"\"\n    with tf.name_scope(\"Log2\"):\n        return tf.log(x) * np.float32(1.0 / np.log(2.0))\n\n\ndef exp2(x: TfExpressionEx) -> TfExpression:\n    \"\"\"Exponent in base 2.\"\"\"\n    with tf.name_scope(\"Exp2\"):\n        return tf.exp(x * np.float32(np.log(2.0)))\n\n\ndef lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx:\n    \"\"\"Linear interpolation.\"\"\"\n    with tf.name_scope(\"Lerp\"):\n        return a + (b - a) * t\n\n\ndef lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression:\n    \"\"\"Linear interpolation with clip.\"\"\"\n    with tf.name_scope(\"LerpClip\"):\n        return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)\n\n\ndef absolute_name_scope(scope: str) -> tf.name_scope:\n    \"\"\"Forcefully enter the specified name scope, ignoring any surrounding scopes.\"\"\"\n    return tf.name_scope(scope + \"/\")\n\n\ndef absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope:\n    \"\"\"Forcefully enter the specified variable scope, ignoring any surrounding scopes.\"\"\"\n    return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False)\n\n\ndef _sanitize_tf_config(config_dict: dict = None) -> dict:\n    # Defaults.\n    cfg = dict()\n    cfg[\"rnd.np_random_seed\"]               = None      # Random seed for NumPy. None = keep as is.\n    cfg[\"rnd.tf_random_seed\"]               = \"auto\"    # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is.\n    cfg[\"env.TF_CPP_MIN_LOG_LEVEL\"]         = \"1\"       # 0 = Print all available debug info from TensorFlow. 
1 = Print warnings and errors, but disable debug info.\n    cfg[\"graph_options.place_pruned_graph\"] = True      # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.\n    cfg[\"gpu_options.allow_growth\"]         = True      # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.\n\n    # Remove defaults for environment variables that are already set.\n    for key in list(cfg):\n        fields = key.split(\".\")\n        if fields[0] == \"env\":\n            assert len(fields) == 2\n            if fields[1] in os.environ:\n                del cfg[key]\n\n    # User overrides.\n    if config_dict is not None:\n        cfg.update(config_dict)\n    return cfg\n\n\ndef init_tf(config_dict: dict = None) -> None:\n    \"\"\"Initialize TensorFlow session using good default settings.\"\"\"\n    # Skip if already initialized.\n    if tf.get_default_session() is not None:\n        return\n\n    # Setup config dict and random seeds.\n    cfg = _sanitize_tf_config(config_dict)\n    np_random_seed = cfg[\"rnd.np_random_seed\"]\n    if np_random_seed is not None:\n        np.random.seed(np_random_seed)\n    tf_random_seed = cfg[\"rnd.tf_random_seed\"]\n    if tf_random_seed == \"auto\":\n        tf_random_seed = np.random.randint(1 << 31)\n    if tf_random_seed is not None:\n        tf.set_random_seed(tf_random_seed)\n\n    # Setup environment variables.\n    for key, value in cfg.items():\n        fields = key.split(\".\")\n        if fields[0] == \"env\":\n            assert len(fields) == 2\n            os.environ[fields[1]] = str(value)\n\n    # Create default TensorFlow session.\n    create_session(cfg, force_as_default=True)\n\n\ndef assert_tf_initialized():\n    \"\"\"Check that TensorFlow session has been initialized.\"\"\"\n    if tf.get_default_session() is None:\n        raise RuntimeError(\"No default TensorFlow session found. 
Please call dnnlib.tflib.init_tf().\")\n\n\ndef create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session:\n    \"\"\"Create tf.Session based on config dict.\"\"\"\n    # Setup TensorFlow config proto.\n    cfg = _sanitize_tf_config(config_dict)\n    config_proto = tf.ConfigProto()\n    for key, value in cfg.items():\n        fields = key.split(\".\")\n        if fields[0] not in [\"rnd\", \"env\"]:\n            obj = config_proto\n            for field in fields[:-1]:\n                obj = getattr(obj, field)\n            setattr(obj, fields[-1], value)\n\n    # Create session.\n    session = tf.Session(config=config_proto)\n    if force_as_default:\n        # pylint: disable=protected-access\n        session._default_session = session.as_default()\n        session._default_session.enforce_nesting = False\n        session._default_session.__enter__()\n    return session\n\n\ndef init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None:\n    \"\"\"Initialize all tf.Variables that have not already been initialized.\n\n    Equivalent to the following, but more efficient and does not bloat the tf graph:\n    tf.variables_initializer(tf.report_uninitialized_variables()).run()\n    \"\"\"\n    assert_tf_initialized()\n    if target_vars is None:\n        target_vars = tf.global_variables()\n\n    test_vars = []\n    test_ops = []\n\n    with tf.control_dependencies(None):  # ignore surrounding control_dependencies\n        for var in target_vars:\n            assert is_tf_expression(var)\n\n            try:\n                tf.get_default_graph().get_tensor_by_name(var.name.replace(\":0\", \"/IsVariableInitialized:0\"))\n            except KeyError:\n                # Op does not exist => variable may be uninitialized.\n                test_vars.append(var)\n\n                with absolute_name_scope(var.name.split(\":\")[0]):\n                    test_ops.append(tf.is_variable_initialized(var))\n\n    init_vars = [var for var, inited in zip(test_vars, run(test_ops)) if not inited]\n    run([var.initializer for var in init_vars])\n\n\ndef set_vars(var_to_value_dict: dict) -> None:\n    \"\"\"Set the values of given tf.Variables.\n\n    Equivalent to the following, but more efficient and does not bloat the tf graph:\n    tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()]\n    \"\"\"\n    assert_tf_initialized()\n    ops = []\n    feed_dict = {}\n\n    for var, value in var_to_value_dict.items():\n        assert is_tf_expression(var)\n\n        try:\n            setter = tf.get_default_graph().get_tensor_by_name(var.name.replace(\":0\", \"/setter:0\"))  # look for existing op\n        except KeyError:\n            with absolute_name_scope(var.name.split(\":\")[0]):\n                with tf.control_dependencies(None):  # ignore surrounding control_dependencies\n                    setter = tf.assign(var, tf.placeholder(var.dtype, var.shape, \"new_value\"), name=\"setter\")  # create new setter\n\n        ops.append(setter)\n        feed_dict[setter.op.inputs[1]] = value\n\n    run(ops, feed_dict)\n\n\ndef create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs):\n    \"\"\"Create tf.Variable with large initial value without bloating the tf graph.\"\"\"\n    assert_tf_initialized()\n    assert isinstance(initial_value, np.ndarray)\n    zeros = tf.zeros(initial_value.shape, initial_value.dtype)\n    var = tf.Variable(zeros, *args, **kwargs)\n    set_vars({var: initial_value})\n    return var\n\n\ndef 
convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False):\n    \"\"\"Convert a minibatch of images from uint8 to float32 with configurable dynamic range.\n    Can be used as an input transformation for Network.run().\n    \"\"\"\n    images = tf.cast(images, tf.float32)\n    if nhwc_to_nchw:\n        images = tf.transpose(images, [0, 3, 1, 2])\n    return images * ((drange[1] - drange[0]) / 255) + drange[0]\n\n\ndef convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False, shrink=1):\n    \"\"\"Convert a minibatch of images from float32 to uint8 with configurable dynamic range.\n    Can be used as an output transformation for Network.run().\n    \"\"\"\n    images = tf.cast(images, tf.float32)\n    if shrink > 1:\n        # Average-pool over the spatial dimensions (NCHW layout) to shrink resolution.\n        ksize = [1, 1, shrink, shrink]\n        images = tf.nn.avg_pool(images, ksize=ksize, strides=ksize, padding=\"VALID\", data_format=\"NCHW\")\n    if nchw_to_nhwc:\n        images = tf.transpose(images, [0, 2, 3, 1])\n    scale = 255 / (drange[1] - drange[0])\n    images = images * scale + (0.5 - drange[0] * scale)\n    return tf.saturate_cast(images, tf.uint8)\n
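\n# Example (hypothetical usage sketch, not part of the original file): round-trip a\n# generator-style minibatch through the two helpers above. Assumes a default session\n# has been initialized via dnnlib.tflib.init_tf().\n#\n#   import tensorflow as tf\n#   import dnnlib.tflib as tflib\n#\n#   tflib.init_tf()\n#   fakes = tf.random_normal([4, 3, 64, 64])                          # NCHW floats in ~[-1, 1]\n#   fakes_u8 = convert_images_to_uint8(fakes, nchw_to_nhwc=True)      # -> NHWC uint8 in [0, 255]\n#   fakes_f32 = convert_images_from_uint8(fakes_u8, nhwc_to_nchw=True)\n#   print(tflib.run(fakes_u8).shape)                                  # (4, 64, 64, 3)\n"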
  },
  {
    "path": "FQ-StyleGAN/dnnlib/util.py",
    "content": "﻿# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Miscellaneous utility classes and functions.\"\"\"\n\nimport ctypes\nimport fnmatch\nimport importlib\nimport inspect\nimport numpy as np\nimport os\nimport shutil\nimport sys\nimport types\nimport io\nimport pickle\nimport re\nimport requests\nimport html\nimport hashlib\nimport glob\nimport uuid\n\nfrom distutils.util import strtobool\nfrom typing import Any, List, Tuple, Union\n\n\n# Util classes\n# ------------------------------------------------------------------------------------------\n\n\nclass EasyDict(dict):\n    \"\"\"Convenience class that behaves like a dict but allows access with the attribute syntax.\"\"\"\n\n    def __getattr__(self, name: str) -> Any:\n        try:\n            return self[name]\n        except KeyError:\n            raise AttributeError(name)\n\n    def __setattr__(self, name: str, value: Any) -> None:\n        self[name] = value\n\n    def __delattr__(self, name: str) -> None:\n        del self[name]\n\n\nclass Logger(object):\n    \"\"\"Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.\"\"\"\n\n    def __init__(self, file_name: str = None, file_mode: str = \"w\", should_flush: bool = True):\n        self.file = None\n\n        if file_name is not None:\n            self.file = open(file_name, file_mode)\n\n        self.should_flush = should_flush\n        self.stdout = sys.stdout\n        self.stderr = sys.stderr\n\n        sys.stdout = self\n        sys.stderr = self\n\n    def __enter__(self) -> \"Logger\":\n        return self\n\n    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:\n        self.close()\n\n    def write(self, text: str) -> None:\n        \"\"\"Write text to stdout (and a file) and optionally flush.\"\"\"\n        if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash\n            return\n\n        if self.file is not None:\n            self.file.write(text)\n\n        self.stdout.write(text)\n\n        if self.should_flush:\n            self.flush()\n\n    def flush(self) -> None:\n        \"\"\"Flush written text to both stdout and a file, if open.\"\"\"\n        if self.file is not None:\n            self.file.flush()\n\n        self.stdout.flush()\n\n    def close(self) -> None:\n        \"\"\"Flush, close possible files, and remove stdout/stderr mirroring.\"\"\"\n        self.flush()\n\n        # if using multiple loggers, prevent closing in wrong order\n        if sys.stdout is self:\n            sys.stdout = self.stdout\n        if sys.stderr is self:\n            sys.stderr = self.stderr\n\n        if self.file is not None:\n            self.file.close()\n\n\n# Small util functions\n# ------------------------------------------------------------------------------------------\n\n\ndef format_time(seconds: Union[int, float]) -> str:\n    \"\"\"Convert the seconds to human readable string with days, hours, minutes and seconds.\"\"\"\n    s = int(np.rint(seconds))\n\n    if s < 60:\n        return \"{0}s\".format(s)\n    elif s < 60 * 60:\n        return \"{0}m {1:02}s\".format(s // 60, s % 60)\n    elif s < 24 * 60 * 60:\n        return \"{0}h {1:02}m {2:02}s\".format(s // (60 * 60), (s // 60) % 60, s % 60)\n    else:\n        
return \"{0}d {1:02}h {2:02}m\".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)\n\n\ndef ask_yes_no(question: str) -> bool:\n    \"\"\"Ask the user the question until the user inputs a valid answer.\"\"\"\n    while True:\n        try:\n            print(\"{0} [y/n]\".format(question))\n            return strtobool(input().lower())\n        except ValueError:\n            pass\n\n\ndef tuple_product(t: Tuple) -> Any:\n    \"\"\"Calculate the product of the tuple elements.\"\"\"\n    result = 1\n\n    for v in t:\n        result *= v\n\n    return result\n\n\n_str_to_ctype = {\n    \"uint8\": ctypes.c_ubyte,\n    \"uint16\": ctypes.c_uint16,\n    \"uint32\": ctypes.c_uint32,\n    \"uint64\": ctypes.c_uint64,\n    \"int8\": ctypes.c_byte,\n    \"int16\": ctypes.c_int16,\n    \"int32\": ctypes.c_int32,\n    \"int64\": ctypes.c_int64,\n    \"float32\": ctypes.c_float,\n    \"float64\": ctypes.c_double\n}\n\n\ndef get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]:\n    \"\"\"Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.\"\"\"\n    type_str = None\n\n    if isinstance(type_obj, str):\n        type_str = type_obj\n    elif hasattr(type_obj, \"__name__\"):\n        type_str = type_obj.__name__\n    elif hasattr(type_obj, \"name\"):\n        type_str = type_obj.name\n    else:\n        raise RuntimeError(\"Cannot infer type name from input\")\n\n    assert type_str in _str_to_ctype.keys()\n\n    my_dtype = np.dtype(type_str)\n    my_ctype = _str_to_ctype[type_str]\n\n    assert my_dtype.itemsize == ctypes.sizeof(my_ctype)\n\n    return my_dtype, my_ctype\n\n\ndef is_pickleable(obj: Any) -> bool:\n    try:\n        with io.BytesIO() as stream:\n            pickle.dump(obj, stream)\n        return True\n    except:\n        return False\n\n\n# Functionality to import modules/objects by name, and call functions by name\n# ------------------------------------------------------------------------------------------\n\ndef get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]:\n    \"\"\"Searches for the underlying module behind the name to some python object.\n    Returns the module and the object name (original name with module part removed).\"\"\"\n\n    # allow convenience shorthands, substitute them by full names\n    obj_name = re.sub(\"^np.\", \"numpy.\", obj_name)\n    obj_name = re.sub(\"^tf.\", \"tensorflow.\", obj_name)\n\n    # list alternatives for (module_name, local_obj_name)\n    parts = obj_name.split(\".\")\n    name_pairs = [(\".\".join(parts[:i]), \".\".join(parts[i:])) for i in range(len(parts), 0, -1)]\n\n    # try each alternative in turn\n    for module_name, local_obj_name in name_pairs:\n        try:\n            module = importlib.import_module(module_name) # may raise ImportError\n            get_obj_from_module(module, local_obj_name) # may raise AttributeError\n            return module, local_obj_name\n        except:\n            pass\n\n    # maybe some of the modules themselves contain errors?\n    for module_name, _local_obj_name in name_pairs:\n        try:\n            importlib.import_module(module_name) # may raise ImportError\n        except ImportError:\n            if not str(sys.exc_info()[1]).startswith(\"No module named '\" + module_name + \"'\"):\n                raise\n\n    # maybe the requested attribute is missing?\n    for module_name, local_obj_name in name_pairs:\n        try:\n            module = 
importlib.import_module(module_name) # may raise ImportError\n            get_obj_from_module(module, local_obj_name) # may raise AttributeError\n        except ImportError:\n            pass\n\n    # we are out of luck, but we have no idea why\n    raise ImportError(obj_name)\n\n\ndef get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any:\n    \"\"\"Traverses the object name and returns the last (rightmost) python object.\"\"\"\n    if obj_name == '':\n        return module\n    obj = module\n    for part in obj_name.split(\".\"):\n        obj = getattr(obj, part)\n    return obj\n\n\ndef get_obj_by_name(name: str) -> Any:\n    \"\"\"Finds the python object with the given name.\"\"\"\n    module, obj_name = get_module_from_obj_name(name)\n    return get_obj_from_module(module, obj_name)\n\n\ndef call_func_by_name(*args, func_name: str = None, **kwargs) -> Any:\n    \"\"\"Finds the python object with the given name and calls it as a function.\"\"\"\n    assert func_name is not None\n    func_obj = get_obj_by_name(func_name)\n    assert callable(func_obj)\n    return func_obj(*args, **kwargs)\n\n\ndef get_module_dir_by_obj_name(obj_name: str) -> str:\n    \"\"\"Get the directory path of the module containing the given object name.\"\"\"\n    module, _ = get_module_from_obj_name(obj_name)\n    return os.path.dirname(inspect.getfile(module))\n\n\ndef is_top_level_function(obj: Any) -> bool:\n    \"\"\"Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.\"\"\"\n    return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__\n\n\ndef get_top_level_function_name(obj: Any) -> str:\n    \"\"\"Return the fully-qualified name of a top-level function.\"\"\"\n    assert is_top_level_function(obj)\n    return obj.__module__ + \".\" + obj.__name__\n\n\n# File system helpers\n# ------------------------------------------------------------------------------------------\n\ndef list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]:\n    \"\"\"List all files recursively in a given directory while ignoring given file and directory names.\n    Returns list of tuples containing both absolute and relative paths.\"\"\"\n    assert os.path.isdir(dir_path)\n    base_name = os.path.basename(os.path.normpath(dir_path))\n\n    if ignores is None:\n        ignores = []\n\n    result = []\n\n    for root, dirs, files in os.walk(dir_path, topdown=True):\n        for ignore_ in ignores:\n            dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)]\n\n            # dirs need to be edited in-place\n            for d in dirs_to_remove:\n                dirs.remove(d)\n\n            files = [f for f in files if not fnmatch.fnmatch(f, ignore_)]\n\n        absolute_paths = [os.path.join(root, f) for f in files]\n        relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths]\n\n        if add_base_to_relative:\n            relative_paths = [os.path.join(base_name, p) for p in relative_paths]\n\n        assert len(absolute_paths) == len(relative_paths)\n        result += zip(absolute_paths, relative_paths)\n\n    return result\n\n\ndef copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None:\n    \"\"\"Takes in a list of tuples of (src, dst) paths and copies files.\n    Will create all necessary directories.\"\"\"\n    for file in files:\n        target_dir_name = os.path.dirname(file[1])\n\n        # will create all 
intermediate-level directories\n        if not os.path.exists(target_dir_name):\n            os.makedirs(target_dir_name)\n\n        shutil.copyfile(file[0], file[1])\n\n\n# URL helpers\n# ------------------------------------------------------------------------------------------\n\ndef is_url(obj: Any, allow_file_urls: bool = False) -> bool:\n    \"\"\"Determine whether the given object is a valid URL string.\"\"\"\n    if not isinstance(obj, str) or not \"://\" in obj:\n        return False\n    if allow_file_urls and obj.startswith('file:///'):\n        return True\n    try:\n        res = requests.compat.urlparse(obj)\n        if not res.scheme or not res.netloc or not \".\" in res.netloc:\n            return False\n        res = requests.compat.urlparse(requests.compat.urljoin(obj, \"/\"))\n        if not res.scheme or not res.netloc or not \".\" in res.netloc:\n            return False\n    except:\n        return False\n    return True\n\n\ndef open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True) -> Any:\n    \"\"\"Download the given URL and return a binary-mode file object to access the data.\"\"\"\n    assert is_url(url, allow_file_urls=True)\n    assert num_attempts >= 1\n\n    # Handle file URLs.\n    if url.startswith('file:///'):\n        return open(url[len('file:///'):], \"rb\")\n\n    # Lookup from cache.\n    url_md5 = hashlib.md5(url.encode(\"utf-8\")).hexdigest()\n    if cache_dir is not None:\n        cache_files = glob.glob(os.path.join(cache_dir, url_md5 + \"_*\"))\n        if len(cache_files) == 1:\n            return open(cache_files[0], \"rb\")\n\n    # Download.\n    url_name = None\n    url_data = None\n    with requests.Session() as session:\n        if verbose:\n            print(\"Downloading %s ...\" % url, end=\"\", flush=True)\n        for attempts_left in reversed(range(num_attempts)):\n            try:\n                with session.get(url) as res:\n                    res.raise_for_status()\n                    if len(res.content) == 0:\n                        raise IOError(\"No data received\")\n\n                    if len(res.content) < 8192:\n                        content_str = res.content.decode(\"utf-8\")\n                        if \"download_warning\" in res.headers.get(\"Set-Cookie\", \"\"):\n                            links = [html.unescape(link) for link in content_str.split('\"') if \"export=download\" in link]\n                            if len(links) == 1:\n                                url = requests.compat.urljoin(url, links[0])\n                                raise IOError(\"Google Drive virus checker nag\")\n                        if \"Google Drive - Quota exceeded\" in content_str:\n                            raise IOError(\"Google Drive download quota exceeded -- please try again later\")\n\n                    match = re.search(r'filename=\"([^\"]*)\"', res.headers.get(\"Content-Disposition\", \"\"))\n                    url_name = match[1] if match else url\n                    url_data = res.content\n                    if verbose:\n                        print(\" done\")\n                    break\n            except:\n                if not attempts_left:\n                    if verbose:\n                        print(\" failed\")\n                    raise\n                if verbose:\n                    print(\".\", end=\"\", flush=True)\n\n    # Save to cache.\n    if cache_dir is not None:\n        safe_name = re.sub(r\"[^0-9a-zA-Z-._]\", \"_\", url_name)\n        cache_file = 
os.path.join(cache_dir, url_md5 + \"_\" + safe_name)\n        temp_file = os.path.join(cache_dir, \"tmp_\" + uuid.uuid4().hex + \"_\" + url_md5 + \"_\" + safe_name)\n        os.makedirs(cache_dir, exist_ok=True)\n        with open(temp_file, \"wb\") as f:\n            f.write(url_data)\n        os.replace(temp_file, cache_file) # atomic rename, so concurrent readers never see a partial file\n\n    # Return data as file object.\n    return io.BytesIO(url_data)\n
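\n# Example (hypothetical usage sketch): the dotted-name helpers above resolve callables\n# from config-style strings; the 'np.' and 'tf.' shorthands expand to 'numpy.' and\n# 'tensorflow.'.\n#\n#   mean_fn = get_obj_by_name('np.mean')        # resolves to numpy.mean\n#   print(mean_fn([1.0, 2.0, 3.0]))             # 2.0\n#   print(call_func_by_name(func_name='numpy.clip', a=5, a_min=0, a_max=3))  # 3\n"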
  },
  {
    "path": "FQ-StyleGAN/metrics/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n# empty\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/frechet_inception_distance.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Frechet Inception Distance (FID).\"\"\"\n\nimport os\nimport numpy as np\nimport scipy\nimport tensorflow as tf\nimport dnnlib.tflib as tflib\n\nfrom metrics import metric_base\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\nclass FID(metric_base.MetricBase):\n    def __init__(self, num_images, minibatch_per_gpu, **kwargs):\n        super().__init__(**kwargs)\n        self.num_images = num_images\n        self.minibatch_per_gpu = minibatch_per_gpu\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        minibatch_size = num_gpus * self.minibatch_per_gpu\n        inception = misc.load_pkl('http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/inception_v3_features.pkl')\n        activations = np.empty([self.num_images, inception.output_shape[1]], dtype=np.float32)\n\n        # Calculate statistics for reals.\n        cache_file = self._get_cache_file_for_reals(num_images=self.num_images)\n        os.makedirs(os.path.dirname(cache_file), exist_ok=True)\n        if os.path.isfile(cache_file):\n            mu_real, sigma_real = misc.load_pkl(cache_file)\n        else:\n            for idx, images in enumerate(self._iterate_reals(minibatch_size=minibatch_size)):\n                begin = idx * minibatch_size\n                end = min(begin + minibatch_size, self.num_images)\n                activations[begin:end] = inception.run(images[:end-begin], num_gpus=num_gpus, assume_frozen=True)\n                if end == self.num_images:\n                    break\n            mu_real = np.mean(activations, axis=0)\n            sigma_real = np.cov(activations, rowvar=False)\n            misc.save_pkl((mu_real, sigma_real), cache_file)\n\n        # Construct TensorFlow graph.\n        result_expr = []\n        for gpu_idx in range(num_gpus):\n            with tf.device('/gpu:%d' % gpu_idx):\n                Gs_clone = Gs.clone()\n                inception_clone = inception.clone()\n                latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:])\n                labels = self._get_random_labels_tf(self.minibatch_per_gpu)\n                images = Gs_clone.get_output_for(latents, labels, **Gs_kwargs)\n                images = tflib.convert_images_to_uint8(images)\n                result_expr.append(inception_clone.get_output_for(images))\n\n        # Calculate statistics for fakes.\n        for begin in range(0, self.num_images, minibatch_size):\n            self._report_progress(begin, self.num_images)\n            end = min(begin + minibatch_size, self.num_images)\n            activations[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin]\n        mu_fake = np.mean(activations, axis=0)\n        sigma_fake = np.cov(activations, rowvar=False)\n\n        # Calculate FID.\n        m = np.square(mu_fake - mu_real).sum()\n        s, _ = scipy.linalg.sqrtm(np.dot(sigma_fake, sigma_real), disp=False) # pylint: disable=no-member\n        dist = m + np.trace(sigma_fake + sigma_real - 2*s)\n        self._report_result(np.real(dist))\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/inception_score.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Inception Score (IS).\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib.tflib as tflib\n\nfrom metrics import metric_base\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\nclass IS(metric_base.MetricBase):\n    def __init__(self, num_images, num_splits, minibatch_per_gpu, **kwargs):\n        super().__init__(**kwargs)\n        self.num_images = num_images\n        self.num_splits = num_splits\n        self.minibatch_per_gpu = minibatch_per_gpu\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        minibatch_size = num_gpus * self.minibatch_per_gpu\n        inception = misc.load_pkl('http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/inception_v3_softmax.pkl')\n        activations = np.empty([self.num_images, inception.output_shape[1]], dtype=np.float32)\n\n        # Construct TensorFlow graph.\n        result_expr = []\n        for gpu_idx in range(num_gpus):\n            with tf.device('/gpu:%d' % gpu_idx):\n                Gs_clone = Gs.clone()\n                inception_clone = inception.clone()\n                latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:])\n                labels = self._get_random_labels_tf(self.minibatch_per_gpu)\n                images = Gs_clone.get_output_for(latents, labels, **Gs_kwargs)\n                images = tflib.convert_images_to_uint8(images)\n                result_expr.append(inception_clone.get_output_for(images))\n\n        # Calculate activations for fakes.\n        for begin in range(0, self.num_images, minibatch_size):\n            self._report_progress(begin, self.num_images)\n            end = min(begin + minibatch_size, self.num_images)\n            activations[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin]\n\n        # Calculate IS.\n        scores = []\n        for i in range(self.num_splits):\n            part = activations[i * self.num_images // self.num_splits : (i + 1) * self.num_images // self.num_splits]\n            kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))\n            kl = np.mean(np.sum(kl, 1))\n            scores.append(np.exp(kl))\n        self._report_result(np.mean(scores), suffix='_mean')\n        self._report_result(np.std(scores), suffix='_std')\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/linear_separability.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Linear Separability (LS).\"\"\"\n\nfrom collections import defaultdict\nimport numpy as np\nimport sklearn.svm\nimport tensorflow as tf\nimport dnnlib.tflib as tflib\n\nfrom metrics import metric_base\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\nclassifier_urls = [\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-00-male.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-01-smiling.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-02-attractive.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-03-wavy-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-04-young.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-05-5-o-clock-shadow.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-06-arched-eyebrows.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-07-bags-under-eyes.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-08-bald.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-09-bangs.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-10-big-lips.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-11-big-nose.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-12-black-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-13-blond-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-14-blurry.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-15-brown-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-16-bushy-eyebrows.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-17-chubby.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-18-double-chin.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-19-eyeglasses.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-20-goatee.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-21-gray-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-22-heavy-makeup.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-23-high-cheekbones.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-24-mouth-slightly-open.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-25-mustache.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-26-narrow-eyes.pkl',\n    
'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-27-no-beard.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-28-oval-face.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-29-pale-skin.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-30-pointy-nose.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-31-receding-hairline.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-32-rosy-cheeks.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-33-sideburns.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-34-straight-hair.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-35-wearing-earrings.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-36-wearing-hat.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-37-wearing-lipstick.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-38-wearing-necklace.pkl',\n    'http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/celebahq-classifier-39-wearing-necktie.pkl',\n]\n\n#----------------------------------------------------------------------------\n\ndef prob_normalize(p):\n    p = np.asarray(p).astype(np.float32)\n    assert len(p.shape) == 2\n    return p / np.sum(p)\n\ndef mutual_information(p):\n    p = prob_normalize(p)\n    px = np.sum(p, axis=1)\n    py = np.sum(p, axis=0)\n    result = 0.0\n    for x in range(p.shape[0]):\n        p_x = px[x]\n        for y in range(p.shape[1]):\n            p_xy = p[x][y]\n            p_y = py[y]\n            if p_xy > 0.0:\n                result += p_xy * np.log2(p_xy / (p_x * p_y)) # get bits as output\n    return result\n\ndef entropy(p):\n    p = prob_normalize(p)\n    result = 0.0\n    for x in range(p.shape[0]):\n        for y in range(p.shape[1]):\n            p_xy = p[x][y]\n            if p_xy > 0.0:\n                result -= p_xy * np.log2(p_xy)\n    return result\n\ndef conditional_entropy(p):\n    # H(Y|X) where X corresponds to axis 0, Y to axis 1\n    # i.e., how many bits of additional information are needed to determine where we are on axis 1 if we know where we are on axis 0?\n    p = prob_normalize(p)\n    y = np.sum(p, axis=0, keepdims=True) # marginalize to calculate H(Y)\n    return max(0.0, entropy(y) - mutual_information(p)) # can slip just below 0 due to FP inaccuracies, clean those up.\n\n#----------------------------------------------------------------------------\n\nclass LS(metric_base.MetricBase):\n    def __init__(self, num_samples, num_keep, attrib_indices, minibatch_per_gpu, **kwargs):\n        assert num_keep <= num_samples\n        super().__init__(**kwargs)\n        self.num_samples = num_samples\n        self.num_keep = num_keep\n        self.attrib_indices = attrib_indices\n        self.minibatch_per_gpu = minibatch_per_gpu\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        minibatch_size = num_gpus * self.minibatch_per_gpu\n\n        # Construct TensorFlow graph for each GPU.\n        result_expr = []\n        for gpu_idx in range(num_gpus):\n            with tf.device('/gpu:%d' % gpu_idx):\n                Gs_clone = Gs.clone()\n\n         
       # Generate images.\n                latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:])\n                labels = self._get_random_labels_tf(self.minibatch_per_gpu)\n                dlatents = Gs_clone.components.mapping.get_output_for(latents, labels, **Gs_kwargs)\n                images = Gs_clone.get_output_for(latents, None, **Gs_kwargs)\n\n                # Downsample to 256x256. The attribute classifiers were built for 256x256.\n                if images.shape[2] > 256:\n                    factor = images.shape[2] // 256\n                    images = tf.reshape(images, [-1, images.shape[1], images.shape[2] // factor, factor, images.shape[3] // factor, factor])\n                    images = tf.reduce_mean(images, axis=[3, 5])\n\n                # Run classifier for each attribute.\n                result_dict = dict(latents=latents, dlatents=dlatents[:,-1])\n                for attrib_idx in self.attrib_indices:\n                    classifier = misc.load_pkl(classifier_urls[attrib_idx])\n                    logits = classifier.get_output_for(images, None)\n                    predictions = tf.nn.softmax(tf.concat([logits, -logits], axis=1))\n                    result_dict[attrib_idx] = predictions\n                result_expr.append(result_dict)\n\n        # Sampling loop.\n        results = []\n        for begin in range(0, self.num_samples, minibatch_size):\n            self._report_progress(begin, self.num_samples)\n            results += tflib.run(result_expr)\n        results = {key: np.concatenate([value[key] for value in results], axis=0) for key in results[0].keys()}\n\n        # Calculate conditional entropy for each attribute.\n        conditional_entropies = defaultdict(list)\n        for attrib_idx in self.attrib_indices:\n            # Prune the least confident samples.\n            pruned_indices = list(range(self.num_samples))\n            pruned_indices = sorted(pruned_indices, key=lambda i: -np.max(results[attrib_idx][i]))\n            pruned_indices = pruned_indices[:self.num_keep]\n\n            # Fit SVM to the remaining samples.\n            svm_targets = np.argmax(results[attrib_idx][pruned_indices], axis=1)\n            for space in ['latents', 'dlatents']:\n                svm_inputs = results[space][pruned_indices]\n                try:\n                    svm = sklearn.svm.LinearSVC()\n                    svm.fit(svm_inputs, svm_targets)\n                    svm.score(svm_inputs, svm_targets)\n                    svm_outputs = svm.predict(svm_inputs)\n                except:\n                    svm_outputs = svm_targets # assume perfect prediction\n\n                # Calculate conditional entropy.\n                p = [[np.mean([case == (row, col) for case in zip(svm_outputs, svm_targets)]) for col in (0, 1)] for row in (0, 1)]\n                conditional_entropies[space].append(conditional_entropy(p))\n\n        # Calculate separability scores.\n        scores = {key: 2**np.sum(values) for key, values in conditional_entropies.items()}\n        self._report_result(scores['latents'], suffix='_z')\n        self._report_result(scores['dlatents'], suffix='_w')\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/metric_base.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Common definitions for GAN metrics.\"\"\"\n\nimport os\nimport time\nimport hashlib\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\n\nfrom training import misc\nfrom training import dataset\n\n#----------------------------------------------------------------------------\n# Base class for metrics.\n\nclass MetricBase:\n    def __init__(self, name):\n        self.name = name\n        self._dataset_obj = None\n        self._progress_lo = None\n        self._progress_hi = None\n        self._progress_max = None\n        self._progress_sec = None\n        self._progress_time = None\n        self._reset()\n\n    def close(self):\n        self._reset()\n\n    def _reset(self, network_pkl=None, run_dir=None, data_dir=None, dataset_args=None, mirror_augment=None):\n        if self._dataset_obj is not None:\n            self._dataset_obj.close()\n\n        self._network_pkl = network_pkl\n        self._data_dir = data_dir\n        self._dataset_args = dataset_args\n        self._dataset_obj = None\n        self._mirror_augment = mirror_augment\n        self._eval_time = 0\n        self._results = []\n\n        if (dataset_args is None or mirror_augment is None) and run_dir is not None:\n            run_config = misc.parse_config_for_previous_run(run_dir)\n            self._dataset_args = dict(run_config['dataset'])\n            self._dataset_args['shuffle_mb'] = 0\n            self._mirror_augment = run_config['train'].get('mirror_augment', False)\n\n    def configure_progress_reports(self, plo, phi, pmax, psec=15):\n        self._progress_lo = plo\n        self._progress_hi = phi\n        self._progress_max = pmax\n        self._progress_sec = psec\n\n    def run(self, network_pkl, run_dir=None, data_dir=None, dataset_args=None, mirror_augment=None, num_gpus=1, tf_config=None, log_results=True, Gs_kwargs=dict(is_validation=True)):\n        self._reset(network_pkl=network_pkl, run_dir=run_dir, data_dir=data_dir, dataset_args=dataset_args, mirror_augment=mirror_augment)\n        time_begin = time.time()\n        with tf.Graph().as_default(), tflib.create_session(tf_config).as_default(): # pylint: disable=not-context-manager\n            self._report_progress(0, 1)\n            _G, _D, Gs = misc.load_pkl(self._network_pkl)\n            self._evaluate(Gs, Gs_kwargs=Gs_kwargs, num_gpus=num_gpus)\n            self._report_progress(1, 1)\n        self._eval_time = time.time() - time_begin # pylint: disable=attribute-defined-outside-init\n\n        if log_results:\n            if run_dir is not None:\n                log_file = os.path.join(run_dir, 'metric-%s.txt' % self.name)\n                with dnnlib.util.Logger(log_file, 'a'):\n                    print(self.get_result_str().strip())\n            else:\n                print(self.get_result_str().strip())\n\n    def get_result_str(self):\n        network_name = os.path.splitext(os.path.basename(self._network_pkl))[0]\n        if len(network_name) > 29:\n            network_name = '...' 
+ network_name[-26:]\n        result_str = '%-30s' % network_name\n        result_str += ' time %-12s' % dnnlib.util.format_time(self._eval_time)\n        for res in self._results:\n            result_str += ' ' + self.name + res.suffix + ' '\n            result_str += res.fmt % res.value\n        return result_str\n\n    def update_autosummaries(self):\n        for res in self._results:\n            tflib.autosummary.autosummary('Metrics/' + self.name + res.suffix, res.value)\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        raise NotImplementedError # to be overridden by subclasses\n\n    def _report_result(self, value, suffix='', fmt='%-10.4f'):\n        self._results += [dnnlib.EasyDict(value=value, suffix=suffix, fmt=fmt)]\n\n    def _report_progress(self, pcur, pmax, status_str=''):\n        if self._progress_lo is None or self._progress_hi is None or self._progress_max is None:\n            return\n        t = time.time()\n        if self._progress_sec is not None and self._progress_time is not None and t < self._progress_time + self._progress_sec:\n            return\n        self._progress_time = t\n        val = self._progress_lo + (pcur / pmax) * (self._progress_hi - self._progress_lo)\n        dnnlib.RunContext.get().update(status_str, int(val), self._progress_max)\n\n    def _get_cache_file_for_reals(self, extension='pkl', **kwargs):\n        all_args = dnnlib.EasyDict(metric_name=self.name, mirror_augment=self._mirror_augment)\n        all_args.update(self._dataset_args)\n        all_args.update(kwargs)\n        md5 = hashlib.md5(repr(sorted(all_args.items())).encode('utf-8'))\n        dataset_name = self._dataset_args.get('tfrecord_dir', None) or self._dataset_args.get('h5_file', None)\n        dataset_name = os.path.splitext(os.path.basename(dataset_name))[0]\n        return os.path.join('.stylegan2-cache', '%s-%s-%s.%s' % (md5.hexdigest(), self.name, dataset_name, extension))\n\n    def _get_dataset_obj(self):\n        if self._dataset_obj is None:\n            self._dataset_obj = dataset.load_dataset(data_dir=self._data_dir, **self._dataset_args)\n        return self._dataset_obj\n\n    def _iterate_reals(self, minibatch_size):\n        dataset_obj = self._get_dataset_obj()\n        while True:\n            images, _labels = dataset_obj.get_minibatch_np(minibatch_size)\n            if self._mirror_augment:\n                images = misc.apply_mirror_augment(images)\n            yield images\n\n    def _iterate_fakes(self, Gs, minibatch_size, num_gpus):\n        while True:\n            latents = np.random.randn(minibatch_size, *Gs.input_shape[1:])\n            fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n            images = Gs.run(latents, None, output_transform=fmt, is_validation=True, num_gpus=num_gpus, assume_frozen=True)\n            yield images\n\n    def _get_random_labels_tf(self, minibatch_size):\n        return self._get_dataset_obj().get_random_labels_tf(minibatch_size)\n\n#----------------------------------------------------------------------------\n# Group of multiple metrics.\n\nclass MetricGroup:\n    def __init__(self, metric_kwarg_list):\n        self.metrics = [dnnlib.util.call_func_by_name(**kwargs) for kwargs in metric_kwarg_list]\n\n    def run(self, *args, **kwargs):\n        for metric in self.metrics:\n            metric.run(*args, **kwargs)\n\n    def get_result_str(self):\n        return ' '.join(metric.get_result_str() for metric in self.metrics)\n\n    def update_autosummaries(self):\n        for metric in 
self.metrics:\n            metric.update_autosummaries()\n\n#----------------------------------------------------------------------------\n# Dummy metric for debugging purposes.\n\nclass DummyMetric(MetricBase):\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        _ = Gs, Gs_kwargs, num_gpus\n        self._report_result(0.0)\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/metric_defaults.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Default metric definitions.\"\"\"\n\nfrom dnnlib import EasyDict\n\n#----------------------------------------------------------------------------\n\nmetric_defaults = EasyDict([(args.name, args) for args in [\n    EasyDict(name='fid50k',    func_name='metrics.frechet_inception_distance.FID', num_images=50000, minibatch_per_gpu=8),\n    EasyDict(name='is50k',     func_name='metrics.inception_score.IS',             num_images=50000, num_splits=10, minibatch_per_gpu=8),\n    EasyDict(name='ppl_zfull', func_name='metrics.perceptual_path_length.PPL',     num_samples=50000, epsilon=1e-4, space='z', sampling='full', crop=True, minibatch_per_gpu=4, Gs_overrides=dict(dtype='float32', mapping_dtype='float32')),\n    EasyDict(name='ppl_wfull', func_name='metrics.perceptual_path_length.PPL',     num_samples=50000, epsilon=1e-4, space='w', sampling='full', crop=True, minibatch_per_gpu=4, Gs_overrides=dict(dtype='float32', mapping_dtype='float32')),\n    EasyDict(name='ppl_zend',  func_name='metrics.perceptual_path_length.PPL',     num_samples=50000, epsilon=1e-4, space='z', sampling='end', crop=True, minibatch_per_gpu=4, Gs_overrides=dict(dtype='float32', mapping_dtype='float32')),\n    EasyDict(name='ppl_wend',  func_name='metrics.perceptual_path_length.PPL',     num_samples=50000, epsilon=1e-4, space='w', sampling='end', crop=True, minibatch_per_gpu=4, Gs_overrides=dict(dtype='float32', mapping_dtype='float32')),\n    EasyDict(name='ppl2_wend', func_name='metrics.perceptual_path_length.PPL',     num_samples=50000, epsilon=1e-4, space='w', sampling='end', crop=False, minibatch_per_gpu=4, Gs_overrides=dict(dtype='float32', mapping_dtype='float32')),\n    EasyDict(name='ls',        func_name='metrics.linear_separability.LS',         num_samples=200000, num_keep=100000, attrib_indices=range(40), minibatch_per_gpu=4),\n    EasyDict(name='pr50k3',    func_name='metrics.precision_recall.PR',            num_images=50000, nhood_size=3, minibatch_per_gpu=8, row_batch_size=10000, col_batch_size=10000),\n]])\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/perceptual_path_length.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Perceptual Path Length (PPL).\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib.tflib as tflib\n\nfrom metrics import metric_base\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\n# Normalize batch of vectors.\ndef normalize(v):\n    return v / tf.sqrt(tf.reduce_sum(tf.square(v), axis=-1, keepdims=True))\n\n# Spherical interpolation of a batch of vectors.\ndef slerp(a, b, t):\n    a = normalize(a)\n    b = normalize(b)\n    d = tf.reduce_sum(a * b, axis=-1, keepdims=True)\n    p = t * tf.math.acos(d)\n    c = normalize(b - d * a)\n    d = a * tf.math.cos(p) + c * tf.math.sin(p)\n    return normalize(d)\n\n#----------------------------------------------------------------------------\n\nclass PPL(metric_base.MetricBase):\n    def __init__(self, num_samples, epsilon, space, sampling, crop, minibatch_per_gpu, Gs_overrides, **kwargs):\n        assert space in ['z', 'w']\n        assert sampling in ['full', 'end']\n        super().__init__(**kwargs)\n        self.num_samples = num_samples\n        self.epsilon = epsilon\n        self.space = space\n        self.sampling = sampling\n        self.crop = crop\n        self.minibatch_per_gpu = minibatch_per_gpu\n        self.Gs_overrides = Gs_overrides\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        Gs_kwargs = dict(Gs_kwargs)\n        Gs_kwargs.update(self.Gs_overrides)\n        minibatch_size = num_gpus * self.minibatch_per_gpu\n\n        # Construct TensorFlow graph.\n        distance_expr = []\n        for gpu_idx in range(num_gpus):\n            with tf.device('/gpu:%d' % gpu_idx):\n                Gs_clone = Gs.clone()\n                noise_vars = [var for name, var in Gs_clone.components.synthesis.vars.items() if name.startswith('noise')]\n\n                # Generate random latents and interpolation t-values.\n                lat_t01 = tf.random_normal([self.minibatch_per_gpu * 2] + Gs_clone.input_shape[1:])\n                lerp_t = tf.random_uniform([self.minibatch_per_gpu], 0.0, 1.0 if self.sampling == 'full' else 0.0)\n                labels = tf.reshape(tf.tile(self._get_random_labels_tf(self.minibatch_per_gpu), [1, 2]), [self.minibatch_per_gpu * 2, -1])\n\n                # Interpolate in W or Z.\n                if self.space == 'w':\n                    dlat_t01 = Gs_clone.components.mapping.get_output_for(lat_t01, labels, **Gs_kwargs)\n                    dlat_t01 = tf.cast(dlat_t01, tf.float32)\n                    dlat_t0, dlat_t1 = dlat_t01[0::2], dlat_t01[1::2]\n                    dlat_e0 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis])\n                    dlat_e1 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis] + self.epsilon)\n                    dlat_e01 = tf.reshape(tf.stack([dlat_e0, dlat_e1], axis=1), dlat_t01.shape)\n                else: # space == 'z'\n                    lat_t0, lat_t1 = lat_t01[0::2], lat_t01[1::2]\n                    lat_e0 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis])\n                    lat_e1 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis] + self.epsilon)\n                    lat_e01 = tf.reshape(tf.stack([lat_e0, lat_e1], axis=1), lat_t01.shape)\n                    dlat_e01 = 
Gs_clone.components.mapping.get_output_for(lat_e01, labels, **Gs_kwargs)\n\n                # Synthesize images.\n                with tf.control_dependencies([var.initializer for var in noise_vars]): # use same noise inputs for the entire minibatch\n                    images = Gs_clone.components.synthesis.get_output_for(dlat_e01, randomize_noise=False, **Gs_kwargs)\n                    images = tf.cast(images, tf.float32)\n\n                # Crop only the face region.\n                if self.crop:\n                    c = int(images.shape[2] // 8)\n                    images = images[:, :, c*3 : c*7, c*2 : c*6]\n\n                # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images.\n                factor = images.shape[2] // 256\n                if factor > 1:\n                    images = tf.reshape(images, [-1, images.shape[1], images.shape[2] // factor, factor, images.shape[3] // factor, factor])\n                    images = tf.reduce_mean(images, axis=[3,5])\n\n                # Scale dynamic range from [-1,1] to [0,255] for VGG.\n                images = (images + 1) * (255 / 2)\n\n                # Evaluate perceptual distance.\n                img_e0, img_e1 = images[0::2], images[1::2]\n                distance_measure = misc.load_pkl('http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/vgg16_zhang_perceptual.pkl')\n                distance_expr.append(distance_measure.get_output_for(img_e0, img_e1) * (1 / self.epsilon**2))\n\n        # Sampling loop.\n        all_distances = []\n        for begin in range(0, self.num_samples, minibatch_size):\n            self._report_progress(begin, self.num_samples)\n            all_distances += tflib.run(distance_expr)\n        all_distances = np.concatenate(all_distances, axis=0)\n\n        # Reject outliers.\n        lo = np.percentile(all_distances, 1, interpolation='lower')\n        hi = np.percentile(all_distances, 99, interpolation='higher')\n        filtered_distances = np.extract(np.logical_and(lo <= all_distances, all_distances <= hi), all_distances)\n        self._report_result(np.mean(filtered_distances))\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/metrics/precision_recall.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Precision/Recall (PR).\"\"\"\n\nimport os\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\n\nfrom metrics import metric_base\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\ndef batch_pairwise_distances(U, V):\n    \"\"\" Compute pairwise distances between two batches of feature vectors.\"\"\"\n    with tf.variable_scope('pairwise_dist_block'):\n        # Squared norms of each row in U and V.\n        norm_u = tf.reduce_sum(tf.square(U), 1)\n        norm_v = tf.reduce_sum(tf.square(V), 1)\n\n        # norm_u as a row and norm_v as a column vectors.\n        norm_u = tf.reshape(norm_u, [-1, 1])\n        norm_v = tf.reshape(norm_v, [1, -1])\n\n        # Pairwise squared Euclidean distances.\n        D = tf.maximum(norm_u - 2*tf.matmul(U, V, False, True) + norm_v, 0.0)\n\n    return D\n\n#----------------------------------------------------------------------------\n\nclass DistanceBlock():\n    \"\"\"Distance block.\"\"\"\n    def __init__(self, num_features, num_gpus):\n        self.num_features = num_features\n        self.num_gpus = num_gpus\n\n        # Initialize TF graph to calculate pairwise distances.\n        with tf.device('/cpu:0'):\n            self._features_batch1 = tf.placeholder(tf.float16, shape=[None, self.num_features])\n            self._features_batch2 = tf.placeholder(tf.float16, shape=[None, self.num_features])\n            features_split2 = tf.split(self._features_batch2, self.num_gpus, axis=0)\n            distances_split = []\n            for gpu_idx in range(self.num_gpus):\n                with tf.device('/gpu:%d' % gpu_idx):\n                    distances_split.append(batch_pairwise_distances(self._features_batch1, features_split2[gpu_idx]))\n            self._distance_block = tf.concat(distances_split, axis=1)\n\n    def pairwise_distances(self, U, V):\n        \"\"\"Evaluate pairwise distances between two batches of feature vectors.\"\"\"\n        return self._distance_block.eval(feed_dict={self._features_batch1: U, self._features_batch2: V})\n\n#----------------------------------------------------------------------------\n\nclass ManifoldEstimator():\n    \"\"\"Finds an estimate for the manifold of given feature vectors.\"\"\"\n    def __init__(self, distance_block, features, row_batch_size, col_batch_size, nhood_sizes, clamp_to_percentile=None):\n        \"\"\"Find an estimate of the manifold of given feature vectors.\"\"\"\n        num_images = features.shape[0]\n        self.nhood_sizes = nhood_sizes\n        self.num_nhoods = len(nhood_sizes)\n        self.row_batch_size = row_batch_size\n        self.col_batch_size = col_batch_size\n        self._ref_features = features\n        self._distance_block = distance_block\n\n        # Estimate manifold of features by calculating distances to kth nearest neighbor of each sample.\n        self.D = np.zeros([num_images, self.num_nhoods], dtype=np.float16)\n        distance_batch = np.zeros([row_batch_size, num_images], dtype=np.float16)\n        seq = np.arange(max(self.nhood_sizes) + 1, dtype=np.int32)\n\n        for begin1 in range(0, num_images, row_batch_size):\n            end1 = min(begin1 + row_batch_size, num_images)\n            row_batch = 
features[begin1:end1]\n\n            for begin2 in range(0, num_images, col_batch_size):\n                end2 = min(begin2 + col_batch_size, num_images)\n                col_batch = features[begin2:end2]\n\n                # Compute distances between batches.\n                distance_batch[0:end1-begin1, begin2:end2] = self._distance_block.pairwise_distances(row_batch, col_batch)\n\n            # Find the kth nearest neighbor from the current batch.\n            self.D[begin1:end1, :] = np.partition(distance_batch[0:end1-begin1, :], seq, axis=1)[:, self.nhood_sizes]\n\n        if clamp_to_percentile is not None:\n            max_distances = np.percentile(self.D, clamp_to_percentile, axis=0)\n            self.D[self.D > max_distances] = 0  #max_distances  # 0\n\n    def evaluate(self, eval_features, return_realism=False, return_neighbors=False):\n        \"\"\"Evaluate if new feature vectors are in the estimated manifold.\"\"\"\n        num_eval_images = eval_features.shape[0]\n        num_ref_images = self.D.shape[0]\n        distance_batch = np.zeros([self.row_batch_size, num_ref_images], dtype=np.float16)\n        batch_predictions = np.zeros([num_eval_images, self.num_nhoods], dtype=np.int32)\n        #max_realism_score = np.zeros([num_eval_images,], dtype=np.float32)\n        realism_score = np.zeros([num_eval_images,], dtype=np.float32)\n        nearest_indices = np.zeros([num_eval_images,], dtype=np.int32)\n\n        for begin1 in range(0, num_eval_images, self.row_batch_size):\n            end1 = min(begin1 + self.row_batch_size, num_eval_images)\n            feature_batch = eval_features[begin1:end1]\n\n            for begin2 in range(0, num_ref_images, self.col_batch_size):\n                end2 = min(begin2 + self.col_batch_size, num_ref_images)\n                ref_batch = self._ref_features[begin2:end2]\n\n                distance_batch[0:end1-begin1, begin2:end2] = self._distance_block.pairwise_distances(feature_batch, ref_batch)\n\n            # From the minibatch of new feature vectors, determine if they are in the estimated manifold.\n            # If a feature vector is inside a hypersphere of some reference sample, then the new sample lies on the estimated manifold.\n            # The radii of the hyperspheres are determined from distances of neighborhood size k.\n            samples_in_manifold = distance_batch[0:end1-begin1, :, None] <= self.D\n            batch_predictions[begin1:end1] = np.any(samples_in_manifold, axis=1).astype(np.int32)\n\n            #max_realism_score[begin1:end1] = np.max(self.D[:, 0] / (distance_batch[0:end1-begin1, :] + 1e-18), axis=1)\n            #nearest_indices[begin1:end1] = np.argmax(self.D[:, 0] / (distance_batch[0:end1-begin1, :] + 1e-18), axis=1)\n            nearest_indices[begin1:end1] = np.argmin(distance_batch[0:end1-begin1, :], axis=1)\n            realism_score[begin1:end1] = self.D[nearest_indices[begin1:end1], 0] / np.min(distance_batch[0:end1-begin1, :], axis=1)\n\n        if return_realism and return_neighbors:\n            return batch_predictions, realism_score, nearest_indices\n        elif return_realism:\n            return batch_predictions, realism_score\n        elif return_neighbors:\n            return batch_predictions, nearest_indices\n\n        return batch_predictions\n\n#----------------------------------------------------------------------------\n\ndef knn_precision_recall_features(ref_features, eval_features, feature_net, nhood_sizes,\n                                  row_batch_size, col_batch_size, 
num_gpus):\n    \"\"\"Calculates k-NN precision and recall for two sets of feature vectors.\"\"\"\n    state = dnnlib.EasyDict()\n    #num_images = ref_features.shape[0]\n    num_features = feature_net.output_shape[1]\n    state.ref_features = ref_features\n    state.eval_features = eval_features\n\n    # Initialize DistanceBlock and ManifoldEstimators.\n    distance_block = DistanceBlock(num_features, num_gpus)\n    state.ref_manifold = ManifoldEstimator(distance_block, state.ref_features, row_batch_size, col_batch_size, nhood_sizes)\n    state.eval_manifold = ManifoldEstimator(distance_block, state.eval_features, row_batch_size, col_batch_size, nhood_sizes)\n\n    # Evaluate precision and recall using k-nearest neighbors.\n    #print('Evaluating k-NN precision and recall with %i samples...' % num_images)\n    #start = time.time()\n\n    # Precision: How many points from eval_features are in ref_features manifold.\n    state.precision, state.realism_scores, state.nearest_neighbors = state.ref_manifold.evaluate(state.eval_features, return_realism=True, return_neighbors=True)\n    state.knn_precision = state.precision.mean(axis=0)\n\n    # Recall: How many points from ref_features are in eval_features manifold.\n    state.recall = state.eval_manifold.evaluate(state.ref_features)\n    state.knn_recall = state.recall.mean(axis=0)\n\n    #elapsed_time = time.time() - start\n    #print('Done evaluation in: %gs' % elapsed_time)\n\n    return state\n\n#----------------------------------------------------------------------------\n\nclass PR(metric_base.MetricBase):\n    def __init__(self, num_images, nhood_size, minibatch_per_gpu, row_batch_size, col_batch_size, **kwargs):\n        super().__init__(**kwargs)\n        self.num_images = num_images\n        self.nhood_size = nhood_size\n        self.minibatch_per_gpu = minibatch_per_gpu\n        self.row_batch_size = row_batch_size\n        self.col_batch_size = col_batch_size\n\n    def _evaluate(self, Gs, Gs_kwargs, num_gpus):\n        minibatch_size = num_gpus * self.minibatch_per_gpu\n        feature_net = misc.load_pkl('http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/vgg16.pkl')\n\n        # Calculate features for reals.\n        cache_file = self._get_cache_file_for_reals(num_images=self.num_images)\n        os.makedirs(os.path.dirname(cache_file), exist_ok=True)\n        if os.path.isfile(cache_file):\n            ref_features = misc.load_pkl(cache_file)\n        else:\n            ref_features = np.empty([self.num_images, feature_net.output_shape[1]], dtype=np.float32)\n            for idx, images in enumerate(self._iterate_reals(minibatch_size=minibatch_size)):\n                begin = idx * minibatch_size\n                end = min(begin + minibatch_size, self.num_images)\n                ref_features[begin:end] = feature_net.run(images[:end-begin], num_gpus=num_gpus, assume_frozen=True)\n                if end == self.num_images:\n                    break\n            misc.save_pkl(ref_features, cache_file)\n\n        # Construct TensorFlow graph.\n        result_expr = []\n        for gpu_idx in range(num_gpus):\n            with tf.device('/gpu:%d' % gpu_idx):\n                Gs_clone = Gs.clone()\n                feature_net_clone = feature_net.clone()\n                latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:])\n                labels = self._get_random_labels_tf(self.minibatch_per_gpu)\n                images = Gs_clone.get_output_for(latents, labels, **Gs_kwargs)\n                
images = tflib.convert_images_to_uint8(images)\n                result_expr.append(feature_net_clone.get_output_for(images))\n\n        # Calculate features for fakes.\n        eval_features = np.empty([self.num_images, feature_net.output_shape[1]], dtype=np.float32)\n        for begin in range(0, self.num_images, minibatch_size):\n            self._report_progress(begin, self.num_images)\n            end = min(begin + minibatch_size, self.num_images)\n            eval_features[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin]\n\n        # Calculate precision and recall.\n        state = knn_precision_recall_features(ref_features=ref_features, eval_features=eval_features, feature_net=feature_net,\n            nhood_sizes=[self.nhood_size], row_batch_size=self.row_batch_size, col_batch_size=self.col_batch_size, num_gpus=num_gpus)\n        self._report_result(state.knn_precision[0], suffix='_precision')\n        self._report_result(state.knn_recall[0], suffix='_recall')\n\n#----------------------------------------------------------------------------\n"
  },
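  {
    "path": "FQ-StyleGAN/metrics/precision_recall_sketch.py",
    "content": "\"\"\"Minimal NumPy sketch of the k-NN precision/recall rule that DistanceBlock and\nManifoldEstimator implement above (Kynkaanniemi et al. 2019). The random feature\narrays below are placeholders for real VGG16 features; batching and multi-GPU\ndistance computation are omitted for clarity.\"\"\"\n\nimport numpy as np\n\ndef pairwise_sq_dists(a, b):\n    # ||a_i - b_j||^2 via the expansion a.a - 2*a.b + b.b\n    return np.sum(a**2, axis=1)[:, None] - 2 * a @ b.T + np.sum(b**2, axis=1)[None, :]\n\ndef knn_radii(features, k):\n    # Radius of each point's manifold patch = squared distance to its k-th nearest\n    # neighbor within the same set (the self-distance 0 occupies column 0).\n    return np.sort(pairwise_sq_dists(features, features), axis=1)[:, k]\n\ndef fraction_in_manifold(queries, refs, radii):\n    # A query counts as covered if it falls inside any reference point's patch.\n    return np.mean(np.any(pairwise_sq_dists(queries, refs) <= radii[None, :], axis=1))\n\nif __name__ == '__main__':\n    rng = np.random.RandomState(0)\n    ref_features = rng.randn(1000, 64)   # stand-in for real-image features\n    eval_features = rng.randn(1000, 64)  # stand-in for generated-image features\n    k = 3                                # nhood_size in the metric above\n    precision = fraction_in_manifold(eval_features, ref_features, knn_radii(ref_features, k))\n    recall = fraction_in_manifold(ref_features, eval_features, knn_radii(eval_features, k))\n    print('precision = %.3f, recall = %.3f' % (precision, recall))\n"
  },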
  {
    "path": "FQ-StyleGAN/pretrained_networks.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"List of pre-trained StyleGAN2 networks located on Google Drive.\"\"\"\n\nimport pickle\nimport dnnlib\nimport dnnlib.tflib as tflib\n\n#----------------------------------------------------------------------------\n# StyleGAN2 Google Drive root: https://drive.google.com/open?id=1QHc-yF5C3DChRwSdZKcx1w6K8JvSxQi7\n\ngdrive_urls = {\n    'gdrive:networks/stylegan2-car-config-a.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-a.pkl',\n    'gdrive:networks/stylegan2-car-config-b.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-b.pkl',\n    'gdrive:networks/stylegan2-car-config-c.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-c.pkl',\n    'gdrive:networks/stylegan2-car-config-d.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-d.pkl',\n    'gdrive:networks/stylegan2-car-config-e.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-e.pkl',\n    'gdrive:networks/stylegan2-car-config-f.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-car-config-f.pkl',\n    'gdrive:networks/stylegan2-cat-config-a.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-cat-config-a.pkl',\n    'gdrive:networks/stylegan2-cat-config-f.pkl':                           'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-cat-config-f.pkl',\n    'gdrive:networks/stylegan2-church-config-a.pkl':                        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-church-config-a.pkl',\n    'gdrive:networks/stylegan2-church-config-f.pkl':                        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-church-config-f.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-a.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-a.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-b.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-b.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-c.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-c.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-d.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-d.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-e.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-e.pkl',\n    'gdrive:networks/stylegan2-ffhq-config-f.pkl':                          'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-ffhq-config-f.pkl',\n    'gdrive:networks/stylegan2-horse-config-a.pkl':                         'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-horse-config-a.pkl',\n    'gdrive:networks/stylegan2-horse-config-f.pkl':                         'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/stylegan2-horse-config-f.pkl',\n    
'gdrive:networks/table2/stylegan2-car-config-e-Gorig-Dorig.pkl':        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gorig-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gorig-Dresnet.pkl':      'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gorig-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gorig-Dskip.pkl':        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gorig-Dskip.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gresnet-Dorig.pkl':      'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gresnet-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gresnet-Dresnet.pkl':    'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gresnet-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gresnet-Dskip.pkl':      'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gresnet-Dskip.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gskip-Dorig.pkl':        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gskip-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gskip-Dresnet.pkl':      'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gskip-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-car-config-e-Gskip-Dskip.pkl':        'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-car-config-e-Gskip-Dskip.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gorig-Dorig.pkl':       'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gorig-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gorig-Dresnet.pkl':     'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gorig-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gorig-Dskip.pkl':       'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gorig-Dskip.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gresnet-Dorig.pkl':     'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gresnet-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gresnet-Dresnet.pkl':   'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gresnet-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gresnet-Dskip.pkl':     'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gresnet-Dskip.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gskip-Dorig.pkl':       'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gskip-Dorig.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gskip-Dresnet.pkl':     'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gskip-Dresnet.pkl',\n    'gdrive:networks/table2/stylegan2-ffhq-config-e-Gskip-Dskip.pkl':       'http://d36zk2xti64re0.cloudfront.net/stylegan2/networks/table2/stylegan2-ffhq-config-e-Gskip-Dskip.pkl',\n}\n\n#----------------------------------------------------------------------------\n\ndef get_path_or_url(path_or_gdrive_path):\n    return gdrive_urls.get(path_or_gdrive_path, 
path_or_gdrive_path)\n\n#----------------------------------------------------------------------------\n\n_cached_networks = dict()\n\ndef load_networks(path_or_gdrive_path):\n    path_or_url = get_path_or_url(path_or_gdrive_path)\n    if path_or_url in _cached_networks:\n        return _cached_networks[path_or_url]\n\n    if dnnlib.util.is_url(path_or_url):\n        stream = dnnlib.util.open_url(path_or_url, cache_dir='.stylegan2-cache')\n    else:\n        stream = open(path_or_url, 'rb')\n\n    tflib.init_tf()\n    with stream:\n        G, D, Gs = pickle.load(stream, encoding='latin1')\n    _cached_networks[path_or_url] = G, D, Gs\n    return G, D, Gs\n\n#----------------------------------------------------------------------------\n"
  },
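  {
    "path": "FQ-StyleGAN/pretrained_networks_example.py",
    "content": "\"\"\"Minimal usage sketch for pretrained_networks.load_networks(). The gdrive:\nalias below is one of the entries in gdrive_urls; resolving it downloads the\npickle (network access required) and the result is cached in-process, so a\nsecond call with the same alias is free.\"\"\"\n\nimport pretrained_networks\n\n# Returns the instantaneous G and D plus the moving-average generator Gs,\n# initializing TensorFlow as a side effect.\nG, D, Gs = pretrained_networks.load_networks('gdrive:networks/stylegan2-ffhq-config-f.pkl')\nprint('Gs input shape: ', Gs.input_shape)\nprint('Gs output shape:', Gs.output_shape)\n"
  },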
  {
    "path": "FQ-StyleGAN/projector.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\n\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\nclass Projector:\n    def __init__(self):\n        self.num_steps                  = 1000\n        self.dlatent_avg_samples        = 10000\n        self.initial_learning_rate      = 0.1\n        self.initial_noise_factor       = 0.05\n        self.lr_rampdown_length         = 0.25\n        self.lr_rampup_length           = 0.05\n        self.noise_ramp_length          = 0.75\n        self.regularize_noise_weight    = 1e5\n        self.verbose                    = False\n        self.clone_net                  = True\n\n        self._Gs                    = None\n        self._minibatch_size        = None\n        self._dlatent_avg           = None\n        self._dlatent_std           = None\n        self._noise_vars            = None\n        self._noise_init_op         = None\n        self._noise_normalize_op    = None\n        self._dlatents_var          = None\n        self._noise_in              = None\n        self._dlatents_expr         = None\n        self._images_expr           = None\n        self._target_images_var     = None\n        self._lpips                 = None\n        self._dist                  = None\n        self._loss                  = None\n        self._reg_sizes             = None\n        self._lrate_in              = None\n        self._opt                   = None\n        self._opt_step              = None\n        self._cur_step              = None\n\n    def _info(self, *args):\n        if self.verbose:\n            print('Projector:', *args)\n\n    def set_network(self, Gs, minibatch_size=1):\n        assert minibatch_size == 1\n        self._Gs = Gs\n        self._minibatch_size = minibatch_size\n        if self._Gs is None:\n            return\n        if self.clone_net:\n            self._Gs = self._Gs.clone()\n\n        # Find dlatent stats.\n        self._info('Finding W midpoint and stddev using %d samples...' 
% self.dlatent_avg_samples)\n        latent_samples = np.random.RandomState(123).randn(self.dlatent_avg_samples, *self._Gs.input_shapes[0][1:])\n        dlatent_samples = self._Gs.components.mapping.run(latent_samples, None)[:, :1, :] # [N, 1, 512]\n        self._dlatent_avg = np.mean(dlatent_samples, axis=0, keepdims=True) # [1, 1, 512]\n        self._dlatent_std = (np.sum((dlatent_samples - self._dlatent_avg) ** 2) / self.dlatent_avg_samples) ** 0.5\n        self._info('std = %g' % self._dlatent_std)\n\n        # Find noise inputs.\n        self._info('Setting up noise inputs...')\n        self._noise_vars = []\n        noise_init_ops = []\n        noise_normalize_ops = []\n        while True:\n            n = 'G_synthesis/noise%d' % len(self._noise_vars)\n            if not n in self._Gs.vars:\n                break\n            v = self._Gs.vars[n]\n            self._noise_vars.append(v)\n            noise_init_ops.append(tf.assign(v, tf.random_normal(tf.shape(v), dtype=tf.float32)))\n            noise_mean = tf.reduce_mean(v)\n            noise_std = tf.reduce_mean((v - noise_mean)**2)**0.5\n            noise_normalize_ops.append(tf.assign(v, (v - noise_mean) / noise_std))\n            self._info(n, v)\n        self._noise_init_op = tf.group(*noise_init_ops)\n        self._noise_normalize_op = tf.group(*noise_normalize_ops)\n\n        # Image output graph.\n        self._info('Building image output graph...')\n        self._dlatents_var = tf.Variable(tf.zeros([self._minibatch_size] + list(self._dlatent_avg.shape[1:])), name='dlatents_var')\n        self._noise_in = tf.placeholder(tf.float32, [], name='noise_in')\n        dlatents_noise = tf.random.normal(shape=self._dlatents_var.shape) * self._noise_in\n        self._dlatents_expr = tf.tile(self._dlatents_var + dlatents_noise, [1, self._Gs.components.synthesis.input_shape[1], 1])\n        self._images_expr = self._Gs.components.synthesis.get_output_for(self._dlatents_expr, randomize_noise=False)\n\n        # Downsample image to 256x256 if it's larger than that. 
VGG was built for 224x224 images.\n        proc_images_expr = (self._images_expr + 1) * (255 / 2)\n        sh = proc_images_expr.shape.as_list()\n        if sh[2] > 256:\n            factor = sh[2] // 256\n            proc_images_expr = tf.reduce_mean(tf.reshape(proc_images_expr, [-1, sh[1], sh[2] // factor, factor, sh[2] // factor, factor]), axis=[3,5])\n\n        # Loss graph.\n        self._info('Building loss graph...')\n        self._target_images_var = tf.Variable(tf.zeros(proc_images_expr.shape), name='target_images_var')\n        if self._lpips is None:\n            self._lpips = misc.load_pkl('http://d36zk2xti64re0.cloudfront.net/stylegan1/networks/metrics/vgg16_zhang_perceptual.pkl')\n        self._dist = self._lpips.get_output_for(proc_images_expr, self._target_images_var)\n        self._loss = tf.reduce_sum(self._dist)\n\n        # Noise regularization graph.\n        self._info('Building noise regularization graph...')\n        reg_loss = 0.0\n        for v in self._noise_vars:\n            sz = v.shape[2]\n            while True:\n                reg_loss += tf.reduce_mean(v * tf.roll(v, shift=1, axis=3))**2 + tf.reduce_mean(v * tf.roll(v, shift=1, axis=2))**2\n                if sz <= 8:\n                    break # Small enough already\n                v = tf.reshape(v, [1, 1, sz//2, 2, sz//2, 2]) # Downscale\n                v = tf.reduce_mean(v, axis=[3, 5])\n                sz = sz // 2\n        self._loss += reg_loss * self.regularize_noise_weight\n\n        # Optimizer.\n        self._info('Setting up optimizer...')\n        self._lrate_in = tf.placeholder(tf.float32, [], name='lrate_in')\n        self._opt = dnnlib.tflib.Optimizer(learning_rate=self._lrate_in)\n        self._opt.register_gradients(self._loss, [self._dlatents_var] + self._noise_vars)\n        self._opt_step = self._opt.apply_updates()\n\n    def run(self, target_images):\n        # Run to completion.\n        self.start(target_images)\n        while self._cur_step < self.num_steps:\n            self.step()\n\n        # Collect results.\n        pres = dnnlib.EasyDict()\n        pres.dlatents = self.get_dlatents()\n        pres.noises = self.get_noises()\n        pres.images = self.get_images()\n        return pres\n\n    def start(self, target_images):\n        assert self._Gs is not None\n\n        # Prepare target images.\n        self._info('Preparing target images...')\n        target_images = np.asarray(target_images, dtype='float32')\n        target_images = (target_images + 1) * (255 / 2)\n        sh = target_images.shape\n        assert sh[0] == self._minibatch_size\n        if sh[2] > self._target_images_var.shape[2]:\n            factor = sh[2] // self._target_images_var.shape[2]\n            target_images = np.reshape(target_images, [-1, sh[1], sh[2] // factor, factor, sh[3] // factor, factor]).mean((3, 5))\n\n        # Initialize optimization state.\n        self._info('Initializing optimization state...')\n        tflib.set_vars({self._target_images_var: target_images, self._dlatents_var: np.tile(self._dlatent_avg, [self._minibatch_size, 1, 1])})\n        tflib.run(self._noise_init_op)\n        self._opt.reset_optimizer_state()\n        self._cur_step = 0\n\n    def step(self):\n        assert self._cur_step is not None\n        if self._cur_step >= self.num_steps:\n            return\n        if self._cur_step == 0:\n            self._info('Running...')\n\n        # Hyperparameters.\n        t = self._cur_step / self.num_steps\n        noise_strength = self._dlatent_std * 
self.initial_noise_factor * max(0.0, 1.0 - t / self.noise_ramp_length) ** 2\n        lr_ramp = min(1.0, (1.0 - t) / self.lr_rampdown_length)\n        lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi)\n        lr_ramp = lr_ramp * min(1.0, t / self.lr_rampup_length)\n        learning_rate = self.initial_learning_rate * lr_ramp\n\n        # Train.\n        feed_dict = {self._noise_in: noise_strength, self._lrate_in: learning_rate}\n        _, dist_value, loss_value = tflib.run([self._opt_step, self._dist, self._loss], feed_dict)\n        tflib.run(self._noise_normalize_op)\n\n        # Print status.\n        self._cur_step += 1\n        if self._cur_step == self.num_steps or self._cur_step % 10 == 0:\n            self._info('%-8d%-12g%-12g' % (self._cur_step, dist_value, loss_value))\n        if self._cur_step == self.num_steps:\n            self._info('Done.')\n\n    def get_cur_step(self):\n        return self._cur_step\n\n    def get_dlatents(self):\n        return tflib.run(self._dlatents_expr, {self._noise_in: 0})\n\n    def get_noises(self):\n        return tflib.run(self._noise_vars)\n\n    def get_images(self):\n        return tflib.run(self._images_expr, {self._noise_in: 0})\n\n#----------------------------------------------------------------------------\n"
  },
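  {
    "path": "FQ-StyleGAN/projector_schedule_sketch.py",
    "content": "\"\"\"Standalone NumPy sketch of the learning-rate and noise-strength schedules\nused in Projector.step() above, with the class's default hyperparameters.\nHandy for inspecting the ramps without building the TensorFlow graph.\"\"\"\n\nimport numpy as np\n\ndef projector_schedule(step, num_steps=1000, initial_learning_rate=0.1,\n                       initial_noise_factor=0.05, lr_rampdown_length=0.25,\n                       lr_rampup_length=0.05, noise_ramp_length=0.75,\n                       dlatent_std=1.0):\n    t = step / num_steps\n    # Noise added to the dlatents decays quadratically to zero over the first\n    # noise_ramp_length fraction of the run.\n    noise_strength = dlatent_std * initial_noise_factor * max(0.0, 1.0 - t / noise_ramp_length) ** 2\n    # Cosine ramp-down over the last lr_rampdown_length fraction, linear\n    # ramp-up over the first lr_rampup_length fraction.\n    lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length)\n    lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi)\n    lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length)\n    return initial_learning_rate * lr_ramp, noise_strength\n\nif __name__ == '__main__':\n    for step in [0, 50, 250, 500, 750, 999]:\n        lr, noise = projector_schedule(step)\n        print('step %4d: lr = %.4f, noise_strength = %.4f' % (step, lr, noise))\n"
  },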
  {
    "path": "FQ-StyleGAN/run_generator.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nimport argparse\nimport numpy as np\nimport PIL.Image\nimport dnnlib\nimport dnnlib.tflib as tflib\nimport re\nimport sys\n\nimport pretrained_networks\n\n#----------------------------------------------------------------------------\n\ndef generate_images(network_pkl, seeds, truncation_psi):\n    print('Loading networks from \"%s\"...' % network_pkl)\n    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)\n    noise_vars = [var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]\n\n    Gs_kwargs = dnnlib.EasyDict()\n    Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n    Gs_kwargs.randomize_noise = False\n    if truncation_psi is not None:\n        Gs_kwargs.truncation_psi = truncation_psi\n\n    for seed_idx, seed in enumerate(seeds):\n        print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))\n        rnd = np.random.RandomState(seed)\n        z = rnd.randn(1, *Gs.input_shape[1:]) # [minibatch, component]\n        tflib.set_vars({var: rnd.randn(*var.shape.as_list()) for var in noise_vars}) # [height, width]\n        images = Gs.run(z, None, **Gs_kwargs) # [minibatch, height, width, channel]\n        PIL.Image.fromarray(images[0], 'RGB').save(dnnlib.make_run_dir_path('seed%04d.png' % seed))\n\n#----------------------------------------------------------------------------\n\ndef style_mixing_example(network_pkl, row_seeds, col_seeds, truncation_psi, col_styles, minibatch_size=4):\n    print('Loading networks from \"%s\"...' 
% network_pkl)\n    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)\n    w_avg = Gs.get_var('dlatent_avg') # [component]\n\n    Gs_syn_kwargs = dnnlib.EasyDict()\n    Gs_syn_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)\n    Gs_syn_kwargs.randomize_noise = False\n    Gs_syn_kwargs.minibatch_size = minibatch_size\n\n    print('Generating W vectors...')\n    all_seeds = list(set(row_seeds + col_seeds))\n    all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) # [minibatch, component]\n    all_w = Gs.components.mapping.run(all_z, None) # [minibatch, layer, component]\n    all_w = w_avg + (all_w - w_avg) * truncation_psi # [minibatch, layer, component]\n    w_dict = {seed: w for seed, w in zip(all_seeds, list(all_w))} # [layer, component]\n\n    print('Generating images...')\n    all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) # [minibatch, height, width, channel]\n    image_dict = {(seed, seed): image for seed, image in zip(all_seeds, list(all_images))}\n\n    print('Generating style-mixed images...')\n    for row_seed in row_seeds:\n        for col_seed in col_seeds:\n            w = w_dict[row_seed].copy()\n            w[col_styles] = w_dict[col_seed][col_styles]\n            image = Gs.components.synthesis.run(w[np.newaxis], **Gs_syn_kwargs)[0]\n            image_dict[(row_seed, col_seed)] = image\n\n    print('Saving images...')\n    for (row_seed, col_seed), image in image_dict.items():\n        PIL.Image.fromarray(image, 'RGB').save(dnnlib.make_run_dir_path('%d-%d.png' % (row_seed, col_seed)))\n\n    print('Saving image grid...')\n    _N, _C, H, W = Gs.output_shape\n    canvas = PIL.Image.new('RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black')\n    for row_idx, row_seed in enumerate([None] + row_seeds):\n        for col_idx, col_seed in enumerate([None] + col_seeds):\n            if row_seed is None and col_seed is None:\n                continue\n            key = (row_seed, col_seed)\n            if row_seed is None:\n                key = (col_seed, col_seed)\n            if col_seed is None:\n                key = (row_seed, row_seed)\n            canvas.paste(PIL.Image.fromarray(image_dict[key], 'RGB'), (W * col_idx, H * row_idx))\n    canvas.save(dnnlib.make_run_dir_path('grid.png'))\n\n#----------------------------------------------------------------------------\n\ndef _parse_num_range(s):\n    '''Accept either a comma separated list of numbers 'a,b,c' or a range 'a-c' and return as a list of ints.'''\n\n    range_re = re.compile(r'^(\\d+)-(\\d+)$')\n    m = range_re.match(s)\n    if m:\n        return range(int(m.group(1)), int(m.group(2))+1)\n    vals = s.split(',')\n    return [int(x) for x in vals]\n\n#----------------------------------------------------------------------------\n\n_examples = '''examples:\n\n  # Generate ffhq uncurated images (matches paper Figure 12)\n  python %(prog)s generate-images --network=gdrive:networks/stylegan2-ffhq-config-f.pkl --seeds=6600-6625 --truncation-psi=0.5\n\n  # Generate ffhq curated images (matches paper Figure 11)\n  python %(prog)s generate-images --network=gdrive:networks/stylegan2-ffhq-config-f.pkl --seeds=66,230,389,1518 --truncation-psi=1.0\n\n  # Generate uncurated car images (matches paper Figure 12)\n  python %(prog)s generate-images --network=gdrive:networks/stylegan2-car-config-f.pkl --seeds=6000-6025 --truncation-psi=0.5\n\n  # Generate style mixing example (matches style mixing video clip)\n  
python %(prog)s style-mixing-example --network=gdrive:networks/stylegan2-ffhq-config-f.pkl --row-seeds=85,100,75,458,1500 --col-seeds=55,821,1789,293 --truncation-psi=1.0\n'''\n\n#----------------------------------------------------------------------------\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description='''StyleGAN2 generator.\n\nRun 'python %(prog)s <subcommand> --help' for subcommand help.''',\n        epilog=_examples,\n        formatter_class=argparse.RawDescriptionHelpFormatter\n    )\n\n    subparsers = parser.add_subparsers(help='Sub-commands', dest='command')\n\n    parser_generate_images = subparsers.add_parser('generate-images', help='Generate images')\n    parser_generate_images.add_argument('--network', help='Network pickle filename', dest='network_pkl', required=True)\n    parser_generate_images.add_argument('--seeds', type=_parse_num_range, help='List of random seeds', required=True)\n    parser_generate_images.add_argument('--truncation-psi', type=float, help='Truncation psi (default: %(default)s)', default=0.5)\n    parser_generate_images.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n\n    parser_style_mixing_example = subparsers.add_parser('style-mixing-example', help='Generate style mixing video')\n    parser_style_mixing_example.add_argument('--network', help='Network pickle filename', dest='network_pkl', required=True)\n    parser_style_mixing_example.add_argument('--row-seeds', type=_parse_num_range, help='Random seeds to use for image rows', required=True)\n    parser_style_mixing_example.add_argument('--col-seeds', type=_parse_num_range, help='Random seeds to use for image columns', required=True)\n    parser_style_mixing_example.add_argument('--col-styles', type=_parse_num_range, help='Style layer range (default: %(default)s)', default='0-6')\n    parser_style_mixing_example.add_argument('--truncation-psi', type=float, help='Truncation psi (default: %(default)s)', default=0.5)\n    parser_style_mixing_example.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n\n    args = parser.parse_args()\n    kwargs = vars(args)\n    subcmd = kwargs.pop('command')\n\n    if subcmd is None:\n        print ('Error: missing subcommand.  Re-run with --help for usage.')\n        sys.exit(1)\n\n    sc = dnnlib.SubmitConfig()\n    sc.num_gpus = 1\n    sc.submit_target = dnnlib.SubmitTarget.LOCAL\n    sc.local.do_not_copy_source_files = True\n    sc.run_dir_root = kwargs.pop('result_dir')\n    sc.run_desc = subcmd\n\n    func_name_map = {\n        'generate-images': 'run_generator.generate_images',\n        'style-mixing-example': 'run_generator.style_mixing_example'\n    }\n    dnnlib.submit_run(sc, func_name_map[subcmd], **kwargs)\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    main()\n\n#----------------------------------------------------------------------------\n"
  },
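  {
    "path": "FQ-StyleGAN/run_generator_truncation_sketch.py",
    "content": "\"\"\"NumPy sketch of the truncation trick applied in style_mixing_example()\nabove: dlatents are pulled toward the mapping network's running mean w_avg by\na factor psi. The arrays below are random stand-ins for real W vectors.\"\"\"\n\nimport numpy as np\n\ndef truncate(w, w_avg, psi):\n    # psi = 1 leaves w unchanged; psi = 0 collapses every sample onto w_avg;\n    # values in between trade sample diversity for fidelity.\n    return w_avg + (w - w_avg) * psi\n\nif __name__ == '__main__':\n    rng = np.random.RandomState(0)\n    w_avg = rng.randn(512)     # stand-in for Gs.get_var('dlatent_avg')\n    w = rng.randn(4, 18, 512)  # stand-in for Gs.components.mapping.run() output\n    for psi in [1.0, 0.7, 0.0]:\n        w_t = truncate(w, w_avg, psi)\n        print('psi = %.1f -> mean |w - w_avg| = %.3f' % (psi, np.abs(w_t - w_avg).mean()))\n"
  },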
  {
    "path": "FQ-StyleGAN/run_metrics.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nimport argparse\nimport os\nimport sys\n\nimport dnnlib\nimport dnnlib.tflib as tflib\n\nimport pretrained_networks\nfrom metrics import metric_base\nfrom metrics.metric_defaults import metric_defaults\n\n#----------------------------------------------------------------------------\n\ndef run(network_pkl, metrics, dataset, data_dir, mirror_augment):\n    print('Evaluating metrics \"%s\" for \"%s\"...' % (','.join(metrics), network_pkl))\n    tflib.init_tf()\n    network_pkl = pretrained_networks.get_path_or_url(network_pkl)\n    dataset_args = dnnlib.EasyDict(tfrecord_dir=dataset, shuffle_mb=0)\n    num_gpus = dnnlib.submit_config.num_gpus\n    metric_group = metric_base.MetricGroup([metric_defaults[metric] for metric in metrics])\n    metric_group.run(network_pkl, data_dir=data_dir, dataset_args=dataset_args, mirror_augment=mirror_augment, num_gpus=num_gpus)\n\n#----------------------------------------------------------------------------\n\ndef _str_to_bool(v):\n    if isinstance(v, bool):\n        return v\n    if v.lower() in ('yes', 'true', 't', 'y', '1'):\n        return True\n    elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n        return False\n    else:\n        raise argparse.ArgumentTypeError('Boolean value expected.')\n\n#----------------------------------------------------------------------------\n\n_examples = '''examples:\n\n  python %(prog)s --data-dir=~/datasets --network=gdrive:networks/stylegan2-ffhq-config-f.pkl --metrics=fid50k,ppl_wend --dataset=ffhq --mirror-augment=true\n\nvalid metrics:\n\n  ''' + ', '.join(sorted([x for x in metric_defaults.keys()])) + '''\n'''\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description='Run StyleGAN2 metrics.',\n        epilog=_examples,\n        formatter_class=argparse.RawDescriptionHelpFormatter\n    )\n    parser.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n    parser.add_argument('--network', help='Network pickle filename', dest='network_pkl', required=True)\n    parser.add_argument('--metrics', help='Metrics to compute (default: %(default)s)', default='fid50k', type=lambda x: x.split(','))\n    parser.add_argument('--dataset', help='Training dataset', required=True)\n    parser.add_argument('--data-dir', help='Dataset root directory', required=True)\n    parser.add_argument('--mirror-augment', help='Mirror augment (default: %(default)s)', default=False, type=_str_to_bool, metavar='BOOL')\n    parser.add_argument('--num-gpus', help='Number of GPUs to use', type=int, default=1, metavar='N')\n\n    args = parser.parse_args()\n\n    if not os.path.exists(args.data_dir):\n        print ('Error: dataset root directory does not exist.')\n        sys.exit(1)\n\n    kwargs = vars(args)\n    sc = dnnlib.SubmitConfig()\n    sc.num_gpus = kwargs.pop('num_gpus')\n    sc.submit_target = dnnlib.SubmitTarget.LOCAL\n    sc.local.do_not_copy_source_files = True\n    sc.run_dir_root = kwargs.pop('result_dir')\n    sc.run_desc = 'run-metrics'\n    dnnlib.submit_run(sc, 'run_metrics.run', **kwargs)\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    
main()\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-StyleGAN/run_projector.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nimport argparse\nimport numpy as np\nimport dnnlib\nimport dnnlib.tflib as tflib\nimport re\nimport sys\n\nimport projector\nimport pretrained_networks\nfrom training import dataset\nfrom training import misc\n\n#----------------------------------------------------------------------------\n\ndef project_image(proj, targets, png_prefix, num_snapshots):\n    snapshot_steps = set(proj.num_steps - np.linspace(0, proj.num_steps, num_snapshots, endpoint=False, dtype=int))\n    misc.save_image_grid(targets, png_prefix + 'target.png', drange=[-1,1])\n    proj.start(targets)\n    while proj.get_cur_step() < proj.num_steps:\n        print('\\r%d / %d ... ' % (proj.get_cur_step(), proj.num_steps), end='', flush=True)\n        proj.step()\n        if proj.get_cur_step() in snapshot_steps:\n            misc.save_image_grid(proj.get_images(), png_prefix + 'step%04d.png' % proj.get_cur_step(), drange=[-1,1])\n    print('\\r%-30s\\r' % '', end='', flush=True)\n\n#----------------------------------------------------------------------------\n\ndef project_generated_images(network_pkl, seeds, num_snapshots, truncation_psi):\n    print('Loading networks from \"%s\"...' % network_pkl)\n    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)\n    proj = projector.Projector()\n    proj.set_network(Gs)\n    noise_vars = [var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]\n\n    Gs_kwargs = dnnlib.EasyDict()\n    Gs_kwargs.randomize_noise = False\n    Gs_kwargs.truncation_psi = truncation_psi\n\n    for seed_idx, seed in enumerate(seeds):\n        print('Projecting seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))\n        rnd = np.random.RandomState(seed)\n        z = rnd.randn(1, *Gs.input_shape[1:])\n        tflib.set_vars({var: rnd.randn(*var.shape.as_list()) for var in noise_vars})\n        images = Gs.run(z, None, **Gs_kwargs)\n        project_image(proj, targets=images, png_prefix=dnnlib.make_run_dir_path('seed%04d-' % seed), num_snapshots=num_snapshots)\n\n#----------------------------------------------------------------------------\n\ndef project_real_images(network_pkl, dataset_name, data_dir, num_images, num_snapshots):\n    print('Loading networks from \"%s\"...' % network_pkl)\n    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)\n    proj = projector.Projector()\n    proj.set_network(Gs)\n\n    print('Loading images from \"%s\"...' % dataset_name)\n    dataset_obj = dataset.load_dataset(data_dir=data_dir, tfrecord_dir=dataset_name, max_label_size=0, repeat=False, shuffle_mb=0)\n    assert dataset_obj.shape == Gs.output_shape[1:]\n\n    for image_idx in range(num_images):\n        print('Projecting image %d/%d ...' 
% (image_idx, num_images))\n        images, _labels = dataset_obj.get_minibatch_np(1)\n        images = misc.adjust_dynamic_range(images, [0, 255], [-1, 1])\n        project_image(proj, targets=images, png_prefix=dnnlib.make_run_dir_path('image%04d-' % image_idx), num_snapshots=num_snapshots)\n\n#----------------------------------------------------------------------------\n\ndef _parse_num_range(s):\n    '''Accept either a comma separated list of numbers 'a,b,c' or a range 'a-c' and return as a list of ints.'''\n\n    range_re = re.compile(r'^(\\d+)-(\\d+)$')\n    m = range_re.match(s)\n    if m:\n        return range(int(m.group(1)), int(m.group(2))+1)\n    vals = s.split(',')\n    return [int(x) for x in vals]\n\n#----------------------------------------------------------------------------\n\n_examples = '''examples:\n\n  # Project generated images\n  python %(prog)s project-generated-images --network=gdrive:networks/stylegan2-car-config-f.pkl --seeds=0,1,5\n\n  # Project real images\n  python %(prog)s project-real-images --network=gdrive:networks/stylegan2-car-config-f.pkl --dataset=car --data-dir=~/datasets\n\n'''\n\n#----------------------------------------------------------------------------\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description='''StyleGAN2 projector.\n\nRun 'python %(prog)s <subcommand> --help' for subcommand help.''',\n        epilog=_examples,\n        formatter_class=argparse.RawDescriptionHelpFormatter\n    )\n\n    subparsers = parser.add_subparsers(help='Sub-commands', dest='command')\n\n    project_generated_images_parser = subparsers.add_parser('project-generated-images', help='Project generated images')\n    project_generated_images_parser.add_argument('--network', help='Network pickle filename', dest='network_pkl', required=True)\n    project_generated_images_parser.add_argument('--seeds', type=_parse_num_range, help='List of random seeds', default=range(3))\n    project_generated_images_parser.add_argument('--num-snapshots', type=int, help='Number of snapshots (default: %(default)s)', default=5)\n    project_generated_images_parser.add_argument('--truncation-psi', type=float, help='Truncation psi (default: %(default)s)', default=1.0)\n    project_generated_images_parser.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n\n    project_real_images_parser = subparsers.add_parser('project-real-images', help='Project real images')\n    project_real_images_parser.add_argument('--network', help='Network pickle filename', dest='network_pkl', required=True)\n    project_real_images_parser.add_argument('--data-dir', help='Dataset root directory', required=True)\n    project_real_images_parser.add_argument('--dataset', help='Training dataset', dest='dataset_name', required=True)\n    project_real_images_parser.add_argument('--num-snapshots', type=int, help='Number of snapshots (default: %(default)s)', default=5)\n    project_real_images_parser.add_argument('--num-images', type=int, help='Number of images to project (default: %(default)s)', default=3)\n    project_real_images_parser.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n\n    args = parser.parse_args()\n    subcmd = args.command\n    if subcmd is None:\n        print ('Error: missing subcommand.  
Re-run with --help for usage.')\n        sys.exit(1)\n\n    kwargs = vars(args)\n    sc = dnnlib.SubmitConfig()\n    sc.num_gpus = 1\n    sc.submit_target = dnnlib.SubmitTarget.LOCAL\n    sc.local.do_not_copy_source_files = True\n    sc.run_dir_root = kwargs.pop('result_dir')\n    sc.run_desc = kwargs.pop('command')\n\n    func_name_map = {\n        'project-generated-images': 'run_projector.project_generated_images',\n        'project-real-images': 'run_projector.project_real_images'\n    }\n    dnnlib.submit_run(sc, func_name_map[subcmd], **kwargs)\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    main()\n\n#----------------------------------------------------------------------------\n"
  },
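  {
    "path": "FQ-StyleGAN/run_projector_snapshots_sketch.py",
    "content": "\"\"\"Quick check of the snapshot-step selection used in project_image() above:\nsnapshot steps are spaced evenly counting back from num_steps, so the final\nstep is always among them.\"\"\"\n\nimport numpy as np\n\nnum_steps, num_snapshots = 1000, 5\nsnapshot_steps = set(num_steps - np.linspace(0, num_steps, num_snapshots, endpoint=False, dtype=int))\nprint(sorted(snapshot_steps))  # [200, 400, 600, 800, 1000]\n"
  },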
  {
    "path": "FQ-StyleGAN/run_training.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\nimport argparse\nimport copy\nimport os\nimport sys\n\nimport dnnlib\nfrom dnnlib import EasyDict\n\nfrom metrics.metric_defaults import metric_defaults\n\n#----------------------------------------------------------------------------\n\n_valid_configs = [\n    # Table 1\n    'config-a', # Baseline StyleGAN\n    'config-b', # + Weight demodulation\n    'config-c', # + Lazy regularization\n    'config-d', # + Path length regularization\n    'config-e', # + No growing, new G & D arch.\n    'config-f', # + Large networks (default)\n\n    # Table 2\n    'config-e-Gorig-Dorig',   'config-e-Gorig-Dresnet',   'config-e-Gorig-Dskip',\n    'config-e-Gresnet-Dorig', 'config-e-Gresnet-Dresnet', 'config-e-Gresnet-Dskip',\n    'config-e-Gskip-Dorig',   'config-e-Gskip-Dresnet',   'config-e-Gskip-Dskip',\n]\n\n#----------------------------------------------------------------------------\n\ndef run(dataset, data_dir, result_dir, config_id, num_gpus, total_kimg, gamma, mirror_augment,\n        metrics,\n        commitment_cost, discrete_layer, decay, D_type):\n    train     = EasyDict(run_func_name='training.training_loop.training_loop') # Options for training loop.\n    G         = EasyDict(func_name='training.networks_stylegan2.G_main')       # Options for generator network.\n    if D_type == 1:\n        D         = EasyDict(func_name='training.networks_stylegan2.D_stylegan2_quant')  # Options for\n    else:\n        D         = EasyDict(func_name='training.networks_stylegan2.D_stylegan2')  # Options\n    # for\n    # discriminator network.\n    G_opt     = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8)                  # Options for generator optimizer.\n    D_opt     = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8)                  # Options for discriminator optimizer.\n    G_loss    = EasyDict(func_name='training.loss.G_logistic_ns_pathreg')      # Options for generator loss.\n    D_loss    = EasyDict(func_name='training.loss.D_logistic_r1')              # Options for discriminator loss.\n    sched     = EasyDict()                                                     # Options for TrainingSchedule.\n    grid      = EasyDict(size='8k', layout='random')                           # Options for setup_snapshot_image_grid().\n    sc        = dnnlib.SubmitConfig()                                          # Options for dnnlib.submit_run().\n    tf_config = {'rnd.np_random_seed': 1000}                                   # Options for tflib.init_tf().\n    D.commitment_cost = commitment_cost\n    D.discrete_layer = discrete_layer\n    D.decay = decay\n    train.data_dir = data_dir\n    train.total_kimg = total_kimg\n\n    train.mirror_augment = mirror_augment\n    train.image_snapshot_ticks = train.network_snapshot_ticks = 10\n    sched.G_lrate_base = sched.D_lrate_base = 0.002\n    sched.minibatch_size_base = 32\n    sched.minibatch_gpu_base = 4\n    D_loss.gamma = 10\n    metrics = [metric_defaults[x] for x in metrics]\n    desc = 'stylegan2'\n\n    desc += '-' + dataset\n    dataset_args = EasyDict(tfrecord_dir=dataset)\n\n    assert num_gpus in [1, 2, 4, 8]\n    sc.num_gpus = num_gpus\n    desc += '-%dgpu' % num_gpus\n\n    assert config_id in _valid_configs\n    desc += '-' + config_id\n\n    # Configs A-E: Shrink networks to match original StyleGAN.\n    if 
config_id != 'config-f':\n        G.fmap_base = D.fmap_base = 8 << 10\n\n    # Config E: Set gamma to 100 and override G & D architecture.\n    if config_id.startswith('config-e'):\n        D_loss.gamma = 100\n        if 'Gorig'   in config_id: G.architecture = 'orig'\n        if 'Gskip'   in config_id: G.architecture = 'skip' # (default)\n        if 'Gresnet' in config_id: G.architecture = 'resnet'\n        if 'Dorig'   in config_id: D.architecture = 'orig'\n        if 'Dskip'   in config_id: D.architecture = 'skip'\n        if 'Dresnet' in config_id: D.architecture = 'resnet' # (default)\n\n    # Configs A-D: Enable progressive growing and switch to networks that support it.\n    if config_id in ['config-a', 'config-b', 'config-c', 'config-d']:\n        sched.lod_initial_resolution = 8\n        sched.G_lrate_base = sched.D_lrate_base = 0.001\n        sched.G_lrate_dict = sched.D_lrate_dict = {128: 0.0015, 256: 0.002, 512: 0.003, 1024: 0.003}\n        sched.minibatch_size_base = 32 # (default)\n        sched.minibatch_size_dict = {8: 256, 16: 128, 32: 64, 64: 32}\n        sched.minibatch_gpu_base = 4 # (default)\n        sched.minibatch_gpu_dict = {8: 32, 16: 16, 32: 8, 64: 4}\n        G.synthesis_func = 'G_synthesis_stylegan_revised'\n        D.func_name = 'training.networks_stylegan2.D_stylegan'\n\n    # Configs A-C: Disable path length regularization.\n    if config_id in ['config-a', 'config-b', 'config-c']:\n        G_loss = EasyDict(func_name='training.loss.G_logistic_ns')\n\n    # Configs A-B: Disable lazy regularization.\n    if config_id in ['config-a', 'config-b']:\n        train.lazy_regularization = False\n\n    # Config A: Switch to original StyleGAN networks.\n    if config_id == 'config-a':\n        G = EasyDict(func_name='training.networks_stylegan.G_style')\n        D = EasyDict(func_name='training.networks_stylegan.D_basic')\n\n    if gamma is not None:\n        D_loss.gamma = gamma\n\n    sc.submit_target = dnnlib.SubmitTarget.LOCAL\n    sc.local.do_not_copy_source_files = True\n    kwargs = EasyDict(train)\n    kwargs.update(G_args=G, D_args=D, G_opt_args=G_opt, D_opt_args=D_opt, G_loss_args=G_loss, D_loss_args=D_loss)\n    kwargs.update(dataset_args=dataset_args, sched_args=sched, grid_args=grid, metric_arg_list=metrics, tf_config=tf_config)\n    kwargs.submit_config = copy.deepcopy(sc)\n    kwargs.submit_config.run_dir_root = result_dir\n    kwargs.submit_config.run_desc = desc\n    dnnlib.submit_run(**kwargs)\n\n#----------------------------------------------------------------------------\n\ndef _str_to_bool(v):\n    if isinstance(v, bool):\n        return v\n    if v.lower() in ('yes', 'true', 't', 'y', '1'):\n        return True\n    elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n        return False\n    else:\n        raise argparse.ArgumentTypeError('Boolean value expected.')\n\ndef _parse_comma_sep(s):\n    if s is None or s.lower() == 'none' or s == '':\n        return []\n    return s.split(',')\n\n#----------------------------------------------------------------------------\n\n_examples = '''examples:\n\n  # Train StyleGAN2 using the FFHQ dataset\n  python %(prog)s --num-gpus=8 --data-dir=~/datasets --config=config-f --dataset=ffhq --mirror-augment=true\n\nvalid configs:\n\n  ''' + ', '.join(_valid_configs) + '''\n\nvalid metrics:\n\n  ''' + ', '.join(sorted([x for x in metric_defaults.keys()])) + '''\n\n'''\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description='Train StyleGAN2.',\n        epilog=_examples,\n        
formatter_class=argparse.RawDescriptionHelpFormatter\n    )\n    \n    parser.add_argument('--result-dir', help='Root directory for run results (default: %(default)s)', default='results', metavar='DIR')\n    parser.add_argument('--data-dir', help='Dataset root directory', required=True)\n    parser.add_argument('--dataset', help='Training dataset', required=True)\n    parser.add_argument('--config', help='Training config (default: %(default)s)', default='config-f', required=True, dest='config_id', metavar='CONFIG')\n    parser.add_argument('--num-gpus', help='Number of GPUs (default: %(default)s)', default=1, type=int, metavar='N')\n    parser.add_argument('--total-kimg', help='Training length in thousands of images (default: %(default)s)', metavar='KIMG', default=25000, type=int)\n    parser.add_argument('--gamma', help='R1 regularization weight (default is config dependent)', default=None, type=float)\n    parser.add_argument('--mirror-augment', help='Mirror augment (default: %(default)s)', default=False, metavar='BOOL', type=_str_to_bool)\n    parser.add_argument('--metrics', help='Comma-separated list of metrics or \"none\" (default: %(default)s)', default='fid50k', type=_parse_comma_sep)\n    parser.add_argument('--discrete_layer', default='45',type=str)\n    parser.add_argument('--commitment_cost', default=0.25,type=float)\n    parser.add_argument('--decay', default=0.8, type=float)\n    parser.add_argument('--D_type', default=1, type=int)\n\n    args = parser.parse_args()\n\n    if not os.path.exists(args.data_dir):\n        print ('Error: dataset root directory does not exist.')\n        sys.exit(1)\n\n    if args.config_id not in _valid_configs:\n        print ('Error: --config value must be one of: ', ', '.join(_valid_configs))\n        sys.exit(1)\n\n    for metric in args.metrics:\n        if metric not in metric_defaults:\n            print ('Error: unknown metric \\'%s\\'' % metric)\n            sys.exit(1)\n\n    run(**vars(args))\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    main()\n\n#----------------------------------------------------------------------------\n\n"
  },
  {
    "path": "FQ-StyleGAN/test_nvcc.cu",
    "content": "// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#include <cstdio>\n\nvoid checkCudaError(cudaError_t err)\n{\n    if (err != cudaSuccess)\n    {\n        printf(\"%s: %s\\n\", cudaGetErrorName(err), cudaGetErrorString(err));\n        exit(1);\n    }\n}\n\n__global__ void cudaKernel(void)\n{\n    printf(\"GPU says hello.\\n\");\n}\n\nint main(void)\n{\n    printf(\"CPU says hello.\\n\");\n    checkCudaError(cudaLaunchKernel((void*)cudaKernel, 1, 1, NULL, 0, NULL));\n    checkCudaError(cudaDeviceSynchronize());\n    return 0;\n}\n"
  },
  {
    "path": "FQ-StyleGAN/training/__init__.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n# empty\n"
  },
  {
    "path": "FQ-StyleGAN/training/dataset.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Multi-resolution input data pipeline.\"\"\"\n\nimport os\nimport glob\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\n\n#----------------------------------------------------------------------------\n# Dataset class that loads data from tfrecords files.\n\nclass TFRecordDataset:\n    def __init__(self,\n        tfrecord_dir,               # Directory containing a collection of tfrecords files.\n        resolution      = None,     # Dataset resolution, None = autodetect.\n        label_file      = None,     # Relative path of the labels file, None = autodetect.\n        max_label_size  = 0,        # 0 = no labels, 'full' = full labels, <int> = N first label components.\n        max_images      = None,     # Maximum number of images to use, None = use all images.\n        repeat          = True,     # Repeat dataset indefinitely?\n        shuffle_mb      = 4096,     # Shuffle data within specified window (megabytes), 0 = disable shuffling.\n        prefetch_mb     = 2048,     # Amount of data to prefetch (megabytes), 0 = disable prefetching.\n        buffer_mb       = 256,      # Read buffer size (megabytes).\n        num_threads     = 2):       # Number of concurrent threads.\n\n        self.tfrecord_dir       = tfrecord_dir\n        self.resolution         = None\n        self.resolution_log2    = None\n        self.shape              = []        # [channels, height, width]\n        self.dtype              = 'uint8'\n        self.dynamic_range      = [0, 255]\n        self.label_file         = label_file\n        self.label_size         = None      # components\n        self.label_dtype        = None\n        self._np_labels         = None\n        self._tf_minibatch_in   = None\n        self._tf_labels_var     = None\n        self._tf_labels_dataset = None\n        self._tf_datasets       = dict()\n        self._tf_iterator       = None\n        self._tf_init_ops       = dict()\n        self._tf_minibatch_np   = None\n        self._cur_minibatch     = -1\n        self._cur_lod           = -1\n\n        # List tfrecords files and inspect their shapes.\n        assert os.path.isdir(self.tfrecord_dir)\n        tfr_files = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.tfrecords')))\n        assert len(tfr_files) >= 1\n        tfr_shapes = []\n        for tfr_file in tfr_files:\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for record in tf.python_io.tf_record_iterator(tfr_file, tfr_opt):\n                tfr_shapes.append(self.parse_tfrecord_np(record).shape)\n                break\n\n        # Autodetect label filename.\n        if self.label_file is None:\n            guess = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.labels')))\n            if len(guess):\n                self.label_file = guess[0]\n        elif not os.path.isfile(self.label_file):\n            guess = os.path.join(self.tfrecord_dir, self.label_file)\n            if os.path.isfile(guess):\n                self.label_file = guess\n\n        # Determine shape and resolution.\n        max_shape = max(tfr_shapes, key=np.prod)\n        self.resolution = resolution if resolution is not None else max_shape[1]\n        self.resolution_log2 = 
int(np.log2(self.resolution))\n        self.shape = [max_shape[0], self.resolution, self.resolution]\n        tfr_lods = [self.resolution_log2 - int(np.log2(shape[1])) for shape in tfr_shapes]\n        assert all(shape[0] == max_shape[0] for shape in tfr_shapes)\n        assert all(shape[1] == shape[2] for shape in tfr_shapes)\n        assert all(shape[1] == self.resolution // (2**lod) for shape, lod in zip(tfr_shapes, tfr_lods))\n        assert all(lod in tfr_lods for lod in range(self.resolution_log2 - 1))\n\n        # Load labels.\n        assert max_label_size == 'full' or max_label_size >= 0\n        self._np_labels = np.zeros([1<<30, 0], dtype=np.float32)\n        if self.label_file is not None and max_label_size != 0:\n            self._np_labels = np.load(self.label_file)\n            assert self._np_labels.ndim == 2\n        if max_label_size != 'full' and self._np_labels.shape[1] > max_label_size:\n            self._np_labels = self._np_labels[:, :max_label_size]\n        if max_images is not None and self._np_labels.shape[0] > max_images:\n            self._np_labels = self._np_labels[:max_images]\n        self.label_size = self._np_labels.shape[1]\n        self.label_dtype = self._np_labels.dtype.name\n\n        # Build TF expressions.\n        with tf.name_scope('Dataset'), tf.device('/cpu:0'):\n            self._tf_minibatch_in = tf.placeholder(tf.int64, name='minibatch_in', shape=[])\n            self._tf_labels_var = tflib.create_var_with_large_initial_value(self._np_labels, name='labels_var')\n            self._tf_labels_dataset = tf.data.Dataset.from_tensor_slices(self._tf_labels_var)\n            for tfr_file, tfr_shape, tfr_lod in zip(tfr_files, tfr_shapes, tfr_lods):\n                if tfr_lod < 0:\n                    continue\n                dset = tf.data.TFRecordDataset(tfr_file, compression_type='', buffer_size=buffer_mb<<20)\n                if max_images is not None:\n                    dset = dset.take(max_images)\n                dset = dset.map(self.parse_tfrecord_tf, num_parallel_calls=num_threads)\n                dset = tf.data.Dataset.zip((dset, self._tf_labels_dataset))\n                bytes_per_item = np.prod(tfr_shape) * np.dtype(self.dtype).itemsize\n                if shuffle_mb > 0:\n                    dset = dset.shuffle(((shuffle_mb << 20) - 1) // bytes_per_item + 1)\n                if repeat:\n                    dset = dset.repeat()\n                if prefetch_mb > 0:\n                    dset = dset.prefetch(((prefetch_mb << 20) - 1) // bytes_per_item + 1)\n                dset = dset.batch(self._tf_minibatch_in)\n                self._tf_datasets[tfr_lod] = dset\n            self._tf_iterator = tf.data.Iterator.from_structure(self._tf_datasets[0].output_types, self._tf_datasets[0].output_shapes)\n            self._tf_init_ops = {lod: self._tf_iterator.make_initializer(dset) for lod, dset in self._tf_datasets.items()}\n\n    def close(self):\n        pass\n\n    # Use the given minibatch size and level-of-detail for the data returned by get_minibatch_tf().\n    def configure(self, minibatch_size, lod=0):\n        lod = int(np.floor(lod))\n        assert minibatch_size >= 1 and lod in self._tf_datasets\n        if self._cur_minibatch != minibatch_size or self._cur_lod != lod:\n            self._tf_init_ops[lod].run({self._tf_minibatch_in: minibatch_size})\n            self._cur_minibatch = minibatch_size\n            self._cur_lod = lod\n\n    # Get next minibatch as TensorFlow expressions.\n    def get_minibatch_tf(self): # => images, 
labels\n        return self._tf_iterator.get_next()\n\n    # Get next minibatch as NumPy arrays.\n    def get_minibatch_np(self, minibatch_size, lod=0): # => images, labels\n        self.configure(minibatch_size, lod)\n        with tf.name_scope('Dataset'):\n            if self._tf_minibatch_np is None:\n                self._tf_minibatch_np = self.get_minibatch_tf()\n            return tflib.run(self._tf_minibatch_np)\n\n    # Get random labels as TensorFlow expression.\n    def get_random_labels_tf(self, minibatch_size): # => labels\n        with tf.name_scope('Dataset'):\n            if self.label_size > 0:\n                with tf.device('/cpu:0'):\n                    return tf.gather(self._tf_labels_var, tf.random_uniform([minibatch_size], 0, self._np_labels.shape[0], dtype=tf.int32))\n            return tf.zeros([minibatch_size, 0], self.label_dtype)\n\n    # Get random labels as NumPy array.\n    def get_random_labels_np(self, minibatch_size): # => labels\n        if self.label_size > 0:\n            return self._np_labels[np.random.randint(self._np_labels.shape[0], size=[minibatch_size])]\n        return np.zeros([minibatch_size, 0], self.label_dtype)\n\n    # Parse individual image from a tfrecords file into TensorFlow expression.\n    @staticmethod\n    def parse_tfrecord_tf(record):\n        features = tf.parse_single_example(record, features={\n            'shape': tf.FixedLenFeature([3], tf.int64),\n            'data': tf.FixedLenFeature([], tf.string)})\n        data = tf.decode_raw(features['data'], tf.uint8)\n        return tf.reshape(data, features['shape'])\n\n    # Parse individual image from a tfrecords file into NumPy array.\n    @staticmethod\n    def parse_tfrecord_np(record):\n        ex = tf.train.Example()\n        ex.ParseFromString(record)\n        shape = ex.features.feature['shape'].int64_list.value # pylint: disable=no-member\n        data = ex.features.feature['data'].bytes_list.value[0] # pylint: disable=no-member\n        return np.frombuffer(data, np.uint8).reshape(shape)\n\n#----------------------------------------------------------------------------\n# Helper func for constructing a dataset object using the given options.\n\ndef load_dataset(class_name=None, data_dir=None, verbose=False, **kwargs):\n    kwargs = dict(kwargs)\n    if 'tfrecord_dir' in kwargs:\n        if class_name is None:\n            class_name = __name__ + '.TFRecordDataset'\n        if data_dir is not None:\n            kwargs['tfrecord_dir'] = os.path.join(data_dir, kwargs['tfrecord_dir'])\n\n    assert class_name is not None\n    if verbose:\n        print('Streaming data using %s...' % class_name)\n    dataset = dnnlib.util.get_obj_by_name(class_name)(**kwargs)\n    if verbose:\n        print('Dataset shape =', np.int32(dataset.shape).tolist())\n        print('Dynamic range =', dataset.dynamic_range)\n        print('Label size    =', dataset.label_size)\n    return dataset\n\n#----------------------------------------------------------------------------\n"
  },
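For reference, a minimal usage sketch of the TFRecordDataset pipeline above. This is not part of the repository: the directory names are placeholders, and it assumes `dnnlib.tflib.init_tf()` has created a default session the way the official training scripts do.

```python
# Hypothetical usage of training/dataset.py (paths are placeholders).
import dnnlib.tflib as tflib
from training import dataset

tflib.init_tf()

# load_dataset() resolves tfrecord_dir against data_dir and instantiates
# TFRecordDataset with the remaining kwargs.
training_set = dataset.load_dataset(data_dir='datasets', tfrecord_dir='ffhq',
                                    max_label_size='full', verbose=True)

# lod=0 selects the full-resolution records; higher lods pick the
# progressively downscaled ones (resolution // 2**lod, per the asserts above).
images, labels = training_set.get_minibatch_np(minibatch_size=8, lod=0)
print(images.shape)  # (8, C, H, W), uint8
```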
  {
    "path": "FQ-StyleGAN/training/loss.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Loss functions.\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib.tflib as tflib\nfrom dnnlib.tflib.autosummary import autosummary\n\n#----------------------------------------------------------------------------\n# Logistic loss from the paper\n# \"Generative Adversarial Nets\", Goodfellow et al. 2014\n\ndef G_logistic(G, D, opt, training_set, minibatch_size):\n    _ = opt\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    labels = training_set.get_random_labels_tf(minibatch_size)\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    loss = -tf.nn.softplus(fake_scores_out) # log(1-sigmoid(fake_scores_out)) # pylint: disable=invalid-unary-operand-type\n    return loss, None\n\ndef G_logistic_ns(G, D, opt, training_set, minibatch_size):\n    _ = opt\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    labels = training_set.get_random_labels_tf(minibatch_size)\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))\n    return loss, None\n\ndef D_logistic(G, D, opt, training_set, minibatch_size, reals, labels):\n    _ = opt, training_set\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out = D.get_output_for(reals, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n    fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n    loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))\n    loss += tf.nn.softplus(-real_scores_out) # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type\n    return loss, None\n\n#----------------------------------------------------------------------------\n# R1 and R2 regularizers from the paper\n# \"Which Training Methods for GANs do actually Converge?\", Mescheder et al. 
2018\n\ndef D_logistic_r1(G, D, opt, training_set, minibatch_size, reals, labels, gamma=10.0):\n    _ = opt, training_set\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out = D.get_output_for(reals, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    ppl_real, ppl_fake = None, None\n    if isinstance(real_scores_out, tuple):\n        real_scores_out, real_quant_loss, ppl_real = real_scores_out[0], real_scores_out[1], real_scores_out[2]\n        fake_scores_out, fake_quant_loss, ppl_fake = fake_scores_out[0], fake_scores_out[1], fake_scores_out[2]\n        real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n        fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n        loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))\n        loss += tf.nn.softplus(-real_scores_out) + real_quant_loss + fake_quant_loss # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type\n    else:\n        real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n        fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n        loss = tf.nn.softplus(fake_scores_out)  # -log(1-sigmoid(fake_scores_out))\n        loss += tf.nn.softplus(-real_scores_out)  # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type\n\n    with tf.name_scope('GradientPenalty'):\n        real_grads = tf.gradients(tf.reduce_sum(real_scores_out), [reals])[0]\n        gradient_penalty = tf.reduce_sum(tf.square(real_grads), axis=[1,2,3])\n        gradient_penalty = autosummary('Loss/gradient_penalty', gradient_penalty)\n        reg = gradient_penalty * (gamma * 0.5)\n    if ppl_fake is not None:\n        ppl = (ppl_fake + ppl_real) / 2\n    else:\n        ppl = tf.zeros(1)\n    return loss, reg, ppl\n\ndef D_logistic_r2(G, D, opt, training_set, minibatch_size, reals, labels, gamma=10.0):\n    _ = opt, training_set\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out = D.get_output_for(reals, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n    fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n    loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))\n    loss += tf.nn.softplus(-real_scores_out) # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type\n\n    with tf.name_scope('GradientPenalty'):\n        fake_grads = tf.gradients(tf.reduce_sum(fake_scores_out), [fake_images_out])[0]\n        gradient_penalty = tf.reduce_sum(tf.square(fake_grads), axis=[1,2,3])\n        gradient_penalty = autosummary('Loss/gradient_penalty', gradient_penalty)\n        reg = gradient_penalty * (gamma * 0.5)\n    return loss, reg\n\n#----------------------------------------------------------------------------\n# WGAN loss from the paper\n# \"Wasserstein Generative Adversarial Networks\", Arjovsky et al. 
2017\n\ndef G_wgan(G, D, opt, training_set, minibatch_size):\n    _ = opt\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    labels = training_set.get_random_labels_tf(minibatch_size)\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    loss = -fake_scores_out\n    return loss, None\n\ndef D_wgan(G, D, opt, training_set, minibatch_size, reals, labels, wgan_epsilon=0.001):\n    _ = opt, training_set\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out = D.get_output_for(reals, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n    fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n    loss = fake_scores_out - real_scores_out\n    with tf.name_scope('EpsilonPenalty'):\n        epsilon_penalty = autosummary('Loss/epsilon_penalty', tf.square(real_scores_out))\n        loss += epsilon_penalty * wgan_epsilon\n    return loss, None\n\n#----------------------------------------------------------------------------\n# WGAN-GP loss from the paper\n# \"Improved Training of Wasserstein GANs\", Gulrajani et al. 2017\n\ndef D_wgan_gp(G, D, opt, training_set, minibatch_size, reals, labels, wgan_lambda=10.0, wgan_epsilon=0.001, wgan_target=1.0):\n    _ = opt, training_set\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out = D.get_output_for(reals, labels, is_training=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    real_scores_out = autosummary('Loss/scores/real', real_scores_out)\n    fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)\n    loss = fake_scores_out - real_scores_out\n    with tf.name_scope('EpsilonPenalty'):\n        epsilon_penalty = autosummary('Loss/epsilon_penalty', tf.square(real_scores_out))\n    loss += epsilon_penalty * wgan_epsilon\n\n    with tf.name_scope('GradientPenalty'):\n        mixing_factors = tf.random_uniform([minibatch_size, 1, 1, 1], 0.0, 1.0, dtype=fake_images_out.dtype)\n        mixed_images_out = tflib.lerp(tf.cast(reals, fake_images_out.dtype), fake_images_out, mixing_factors)\n        mixed_scores_out = D.get_output_for(mixed_images_out, labels, is_training=True)\n        mixed_scores_out = autosummary('Loss/scores/mixed', mixed_scores_out)\n        mixed_grads = tf.gradients(tf.reduce_sum(mixed_scores_out), [mixed_images_out])[0]\n        mixed_norms = tf.sqrt(tf.reduce_sum(tf.square(mixed_grads), axis=[1,2,3]))\n        mixed_norms = autosummary('Loss/mixed_norms', mixed_norms)\n        gradient_penalty = tf.square(mixed_norms - wgan_target)\n        reg = gradient_penalty * (wgan_lambda / (wgan_target**2))\n    return loss, reg\n\n#----------------------------------------------------------------------------\n# Non-saturating logistic loss with path length regularizer from the paper\n# \"Analyzing and Improving the Image Quality of StyleGAN\", Karras et al. 
2019\n\ndef G_logistic_ns_pathreg(G, D, opt, training_set, minibatch_size, pl_minibatch_shrink=2, pl_decay=0.01, pl_weight=2.0):\n    _ = opt\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    labels = training_set.get_random_labels_tf(minibatch_size)\n    fake_images_out, fake_dlatents_out = G.get_output_for(latents, labels, is_training=True, return_dlatents=True)\n    fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)\n    if isinstance(fake_scores_out, tuple):\n        fake_scores_out, quant_loss = fake_scores_out[0], fake_scores_out[1]\n        loss = tf.nn.softplus(-fake_scores_out) + quant_loss # -log(logistic(fake_scores_out))\n    else:\n        loss = tf.nn.softplus(-fake_scores_out)  # -log(logistic(fake_scores_out))\n\n    # Path length regularization.\n    with tf.name_scope('PathReg'):\n\n        # Evaluate the regularization term using a smaller minibatch to conserve memory.\n        if pl_minibatch_shrink > 1:\n            pl_minibatch = minibatch_size // pl_minibatch_shrink\n            pl_latents = tf.random_normal([pl_minibatch] + G.input_shapes[0][1:])\n            pl_labels = training_set.get_random_labels_tf(pl_minibatch)\n            fake_images_out, fake_dlatents_out = G.get_output_for(pl_latents, pl_labels, is_training=True, return_dlatents=True)\n\n        # Compute |J*y|.\n        pl_noise = tf.random_normal(tf.shape(fake_images_out)) / np.sqrt(np.prod(G.output_shape[2:]))\n        pl_grads = tf.gradients(tf.reduce_sum(fake_images_out * pl_noise), [fake_dlatents_out])[0]\n        pl_lengths = tf.sqrt(tf.reduce_mean(tf.reduce_sum(tf.square(pl_grads), axis=2), axis=1))\n        pl_lengths = autosummary('Loss/pl_lengths', pl_lengths)\n\n        # Track exponential moving average of |J*y|.\n        with tf.control_dependencies(None):\n            pl_mean_var = tf.Variable(name='pl_mean', trainable=False, initial_value=0.0, dtype=tf.float32)\n        pl_mean = pl_mean_var + pl_decay * (tf.reduce_mean(pl_lengths) - pl_mean_var)\n        pl_update = tf.assign(pl_mean_var, pl_mean)\n\n        # Calculate (|J*y|-a)^2.\n        with tf.control_dependencies([pl_update]):\n            pl_penalty = tf.square(pl_lengths - pl_mean)\n            pl_penalty = autosummary('Loss/pl_penalty', pl_penalty)\n\n        # Apply weight.\n        #\n        # Note: The division in pl_noise decreases the weight by num_pixels, and the reduce_mean\n        # in pl_lengths decreases it by num_affine_layers. The effective weight then becomes:\n        #\n        # gamma_pl = pl_weight / num_pixels / num_affine_layers\n        # = 2 / (r^2) / (log2(r) * 2 - 2)\n        # = 1 / (r^2 * (log2(r) - 1))\n        # = ln(2) / (r^2 * (ln(r) - ln(2)))\n        #\n        reg = pl_penalty * pl_weight\n\n    return loss, reg\n\n#----------------------------------------------------------------------------\n"
  },
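Every logistic loss above leans on the identities -log(sigmoid(x)) = softplus(-x) and -log(1 - sigmoid(x)) = softplus(x); that is why the code only ever calls tf.nn.softplus and records the sigmoid form in the trailing comments. A standalone NumPy check of both identities (not repository code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

x = np.linspace(-5.0, 5.0, 101)
assert np.allclose(-np.log(sigmoid(x)), softplus(-x))       # generator / real term
assert np.allclose(-np.log(1.0 - sigmoid(x)), softplus(x))  # fake term
```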
  {
    "path": "FQ-StyleGAN/training/misc.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Miscellaneous utility functions.\"\"\"\n\nimport os\nimport pickle\nimport numpy as np\nimport PIL.Image\nimport PIL.ImageFont\nimport dnnlib\n\n#----------------------------------------------------------------------------\n# Convenience wrappers for pickle that are able to load data produced by\n# older versions of the code, and from external URLs.\n\ndef open_file_or_url(file_or_url):\n    if dnnlib.util.is_url(file_or_url):\n        return dnnlib.util.open_url(file_or_url, cache_dir='.stylegan2-cache')\n    return open(file_or_url, 'rb')\n\ndef load_pkl(file_or_url):\n    with open_file_or_url(file_or_url) as file:\n        return pickle.load(file, encoding='latin1')\n\ndef save_pkl(obj, filename):\n    with open(filename, 'wb') as file:\n        pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL)\n\n#----------------------------------------------------------------------------\n# Image utils.\n\ndef adjust_dynamic_range(data, drange_in, drange_out):\n    if drange_in != drange_out:\n        scale = (np.float32(drange_out[1]) - np.float32(drange_out[0])) / (np.float32(drange_in[1]) - np.float32(drange_in[0]))\n        bias = (np.float32(drange_out[0]) - np.float32(drange_in[0]) * scale)\n        data = data * scale + bias\n    return data\n\ndef create_image_grid(images, grid_size=None):\n    assert images.ndim == 3 or images.ndim == 4\n    num, img_w, img_h = images.shape[0], images.shape[-1], images.shape[-2]\n\n    if grid_size is not None:\n        grid_w, grid_h = tuple(grid_size)\n    else:\n        grid_w = max(int(np.ceil(np.sqrt(num))), 1)\n        grid_h = max((num - 1) // grid_w + 1, 1)\n\n    grid = np.zeros(list(images.shape[1:-2]) + [grid_h * img_h, grid_w * img_w], dtype=images.dtype)\n    for idx in range(num):\n        x = (idx % grid_w) * img_w\n        y = (idx // grid_w) * img_h\n        grid[..., y : y + img_h, x : x + img_w] = images[idx]\n    return grid\n\ndef convert_to_pil_image(image, drange=[0,1]):\n    assert image.ndim == 2 or image.ndim == 3\n    if image.ndim == 3:\n        if image.shape[0] == 1:\n            image = image[0] # grayscale CHW => HW\n        else:\n            image = image.transpose(1, 2, 0) # CHW -> HWC\n\n    image = adjust_dynamic_range(image, drange, [0,255])\n    image = np.rint(image).clip(0, 255).astype(np.uint8)\n    fmt = 'RGB' if image.ndim == 3 else 'L'\n    return PIL.Image.fromarray(image, fmt)\n\ndef save_image_grid(images, filename, drange=[0,1], grid_size=None):\n    convert_to_pil_image(create_image_grid(images, grid_size), drange).save(filename)\n\ndef apply_mirror_augment(minibatch):\n    mask = np.random.rand(minibatch.shape[0]) < 0.5\n    minibatch = np.array(minibatch)\n    minibatch[mask] = minibatch[mask, :, :, ::-1]\n    return minibatch\n\n#----------------------------------------------------------------------------\n# Loading data from previous training runs.\n\ndef parse_config_for_previous_run(run_dir):\n    with open(os.path.join(run_dir, 'submit_config.pkl'), 'rb') as f:\n        data = pickle.load(f)\n    data = data.get('run_func_kwargs', {})\n    return dict(train=data, dataset=data.get('dataset_args', {}))\n\n#----------------------------------------------------------------------------\n# Size and contents of the image snapshot grids that are 
exported\n# periodically during training.\n\ndef setup_snapshot_image_grid(training_set,\n    size    = '1080p',      # '1080p' = to be viewed on 1080p display, '4k' = to be viewed on 4k display.\n    layout  = 'random'):    # 'random' = grid contents are selected randomly, 'row_per_class' = each row corresponds to one class label.\n\n    # Select size.\n    gw = 1; gh = 1\n    if size == '1080p':\n        gw = np.clip(1920 // training_set.shape[2], 3, 32)\n        gh = np.clip(1080 // training_set.shape[1], 2, 32)\n    if size == '4k':\n        gw = np.clip(3840 // training_set.shape[2], 7, 32)\n        gh = np.clip(2160 // training_set.shape[1], 4, 32)\n    if size == '8k':\n        gw = np.clip(7680 // training_set.shape[2], 7, 32)\n        gh = np.clip(4320 // training_set.shape[1], 4, 32)\n\n    # Initialize data arrays.\n    reals = np.zeros([gw * gh] + training_set.shape, dtype=training_set.dtype)\n    labels = np.zeros([gw * gh, training_set.label_size], dtype=training_set.label_dtype)\n\n    # Random layout.\n    if layout == 'random':\n        reals[:], labels[:] = training_set.get_minibatch_np(gw * gh)\n\n    # Class-conditional layouts.\n    class_layouts = dict(row_per_class=[gw,1], col_per_class=[1,gh], class4x4=[4,4])\n    if layout in class_layouts:\n        bw, bh = class_layouts[layout]\n        nw = (gw - 1) // bw + 1\n        nh = (gh - 1) // bh + 1\n        blocks = [[] for _i in range(nw * nh)]\n        for _iter in range(1000000):\n            real, label = training_set.get_minibatch_np(1)\n            idx = np.argmax(label[0])\n            while idx < len(blocks) and len(blocks[idx]) >= bw * bh:\n                idx += training_set.label_size\n            if idx < len(blocks):\n                blocks[idx].append((real, label))\n                if all(len(block) >= bw * bh for block in blocks):\n                    break\n        for i, block in enumerate(blocks):\n            for j, (real, label) in enumerate(block):\n                x = (i %  nw) * bw + j %  bw\n                y = (i // nw) * bh + j // bw\n                if x < gw and y < gh:\n                    reals[x + y * gw] = real[0]\n                    labels[x + y * gw] = label[0]\n\n    return (gw, gh), reals, labels\n\n#----------------------------------------------------------------------------\n"
  },
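The image helpers above compose into a one-liner for training snapshots: adjust_dynamic_range() rescales generator output from [-1, 1] to [0, 255], create_image_grid() tiles NCHW batches, and save_image_grid() strings the two together. A short sketch, where the random batch stands in for generator output and the file name is arbitrary:

```python
import numpy as np
from training import misc

# 16 fake RGB images, NCHW, valued in [-1, 1] like generator output.
fakes = np.random.uniform(-1.0, 1.0, size=(16, 3, 64, 64)).astype(np.float32)

# Tiles into a 4x4 grid (3 x 256 x 256), rescales to [0, 255], writes a PNG.
misc.save_image_grid(fakes, 'fakes_demo.png', drange=[-1, 1], grid_size=(4, 4))
```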
  {
    "path": "FQ-StyleGAN/training/networks_stylegan.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Network architectures used in the StyleGAN paper.\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\n\n# NOTE: Do not import any application-specific modules here!\n# Specify all network parameters as kwargs.\n\n#----------------------------------------------------------------------------\n# Primitive ops for manipulating 4D activation tensors.\n# The gradients of these are not necessary efficient or even meaningful.\n\ndef _blur2d(x, f=[1,2,1], normalize=True, flip=False, stride=1):\n    assert x.shape.ndims == 4 and all(dim.value is not None for dim in x.shape[1:])\n    assert isinstance(stride, int) and stride >= 1\n\n    # Finalize filter kernel.\n    f = np.array(f, dtype=np.float32)\n    if f.ndim == 1:\n        f = f[:, np.newaxis] * f[np.newaxis, :]\n    assert f.ndim == 2\n    if normalize:\n        f /= np.sum(f)\n    if flip:\n        f = f[::-1, ::-1]\n    f = f[:, :, np.newaxis, np.newaxis]\n    f = np.tile(f, [1, 1, int(x.shape[1]), 1])\n\n    # No-op => early exit.\n    if f.shape == (1, 1) and f[0,0] == 1:\n        return x\n\n    # Convolve using depthwise_conv2d.\n    orig_dtype = x.dtype\n    x = tf.cast(x, tf.float32)  # tf.nn.depthwise_conv2d() doesn't support fp16\n    f = tf.constant(f, dtype=x.dtype, name='filter')\n    strides = [1, 1, stride, stride]\n    x = tf.nn.depthwise_conv2d(x, f, strides=strides, padding='SAME', data_format='NCHW')\n    x = tf.cast(x, orig_dtype)\n    return x\n\ndef _upscale2d(x, factor=2, gain=1):\n    assert x.shape.ndims == 4 and all(dim.value is not None for dim in x.shape[1:])\n    assert isinstance(factor, int) and factor >= 1\n\n    # Apply gain.\n    if gain != 1:\n        x *= gain\n\n    # No-op => early exit.\n    if factor == 1:\n        return x\n\n    # Upscale using tf.tile().\n    s = x.shape\n    x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1])\n    x = tf.tile(x, [1, 1, 1, factor, 1, factor])\n    x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor])\n    return x\n\ndef _downscale2d(x, factor=2, gain=1):\n    assert x.shape.ndims == 4 and all(dim.value is not None for dim in x.shape[1:])\n    assert isinstance(factor, int) and factor >= 1\n\n    # 2x2, float32 => downscale using _blur2d().\n    if factor == 2 and x.dtype == tf.float32:\n        f = [np.sqrt(gain) / factor] * factor\n        return _blur2d(x, f=f, normalize=False, stride=factor)\n\n    # Apply gain.\n    if gain != 1:\n        x *= gain\n\n    # No-op => early exit.\n    if factor == 1:\n        return x\n\n    # Large factor => downscale using tf.nn.avg_pool().\n    # NOTE: Requires tf_config['graph_options.place_pruned_graph']=True to work.\n    ksize = [1, 1, factor, factor]\n    return tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# High-level ops for manipulating 4D activation tensors.\n# The gradients of these are meant to be as efficient as possible.\n\ndef blur2d(x, f=[1,2,1], normalize=True):\n    with tf.variable_scope('Blur2D'):\n        @tf.custom_gradient\n        def func(x):\n            y = _blur2d(x, f, normalize)\n            @tf.custom_gradient\n            def grad(dy):\n                dx = _blur2d(dy, f, 
normalize, flip=True)\n                return dx, lambda ddx: _blur2d(ddx, f, normalize)\n            return y, grad\n        return func(x)\n\ndef upscale2d(x, factor=2):\n    with tf.variable_scope('Upscale2D'):\n        @tf.custom_gradient\n        def func(x):\n            y = _upscale2d(x, factor)\n            @tf.custom_gradient\n            def grad(dy):\n                dx = _downscale2d(dy, factor, gain=factor**2)\n                return dx, lambda ddx: _upscale2d(ddx, factor)\n            return y, grad\n        return func(x)\n\ndef downscale2d(x, factor=2):\n    with tf.variable_scope('Downscale2D'):\n        @tf.custom_gradient\n        def func(x):\n            y = _downscale2d(x, factor)\n            @tf.custom_gradient\n            def grad(dy):\n                dx = _upscale2d(dy, factor, gain=1/factor**2)\n                return dx, lambda ddx: _downscale2d(ddx, factor)\n            return y, grad\n        return func(x)\n\n#----------------------------------------------------------------------------\n# Get/create weight tensor for a convolutional or fully-connected layer.\n\ndef get_weight(shape, gain=np.sqrt(2), use_wscale=False, lrmul=1):\n    fan_in = np.prod(shape[:-1]) # [kernel, kernel, fmaps_in, fmaps_out] or [in, out]\n    he_std = gain / np.sqrt(fan_in) # He init\n\n    # Equalized learning rate and custom learning rate multiplier.\n    if use_wscale:\n        init_std = 1.0 / lrmul\n        runtime_coef = he_std * lrmul\n    else:\n        init_std = he_std / lrmul\n        runtime_coef = lrmul\n\n    # Create variable.\n    init = tf.initializers.random_normal(0, init_std)\n    return tf.get_variable('weight', shape=shape, initializer=init) * runtime_coef\n\n#----------------------------------------------------------------------------\n# Fully-connected layer.\n\ndef dense(x, fmaps, **kwargs):\n    if len(x.shape) > 2:\n        x = tf.reshape(x, [-1, np.prod([d.value for d in x.shape[1:]])])\n    w = get_weight([x.shape[1].value, fmaps], **kwargs)\n    w = tf.cast(w, x.dtype)\n    return tf.matmul(x, w)\n\n#----------------------------------------------------------------------------\n# Convolutional layer.\n\ndef conv2d(x, fmaps, kernel, **kwargs):\n    assert kernel >= 1 and kernel % 2 == 1\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], **kwargs)\n    w = tf.cast(w, x.dtype)\n    return tf.nn.conv2d(x, w, strides=[1,1,1,1], padding='SAME', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# Fused convolution + scaling.\n# Faster and uses less memory than performing the operations separately.\n\ndef upscale2d_conv2d(x, fmaps, kernel, fused_scale='auto', **kwargs):\n    assert kernel >= 1 and kernel % 2 == 1\n    assert fused_scale in [True, False, 'auto']\n    if fused_scale == 'auto':\n        fused_scale = min(x.shape[2:]) * 2 >= 128\n\n    # Not fused => call the individual ops directly.\n    if not fused_scale:\n        return conv2d(upscale2d(x), fmaps, kernel, **kwargs)\n\n    # Fused => perform both ops simultaneously using tf.nn.conv2d_transpose().\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], **kwargs)\n    w = tf.transpose(w, [0, 1, 3, 2]) # [kernel, kernel, fmaps_out, fmaps_in]\n    w = tf.pad(w, [[1,1], [1,1], [0,0], [0,0]], mode='CONSTANT')\n    w = tf.add_n([w[1:, 1:], w[:-1, 1:], w[1:, :-1], w[:-1, :-1]])\n    w = tf.cast(w, x.dtype)\n    os = [tf.shape(x)[0], fmaps, x.shape[2] * 2, x.shape[3] * 2]\n    return tf.nn.conv2d_transpose(x, w, os, strides=[1,1,2,2], 
padding='SAME', data_format='NCHW')\n\ndef conv2d_downscale2d(x, fmaps, kernel, fused_scale='auto', **kwargs):\n    assert kernel >= 1 and kernel % 2 == 1\n    assert fused_scale in [True, False, 'auto']\n    if fused_scale == 'auto':\n        fused_scale = min(x.shape[2:]) >= 128\n\n    # Not fused => call the individual ops directly.\n    if not fused_scale:\n        return downscale2d(conv2d(x, fmaps, kernel, **kwargs))\n\n    # Fused => perform both ops simultaneously using tf.nn.conv2d().\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], **kwargs)\n    w = tf.pad(w, [[1,1], [1,1], [0,0], [0,0]], mode='CONSTANT')\n    w = tf.add_n([w[1:, 1:], w[:-1, 1:], w[1:, :-1], w[:-1, :-1]]) * 0.25\n    w = tf.cast(w, x.dtype)\n    return tf.nn.conv2d(x, w, strides=[1,1,2,2], padding='SAME', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# Apply bias to the given activation tensor.\n\ndef apply_bias(x, lrmul=1):\n    b = tf.get_variable('bias', shape=[x.shape[1]], initializer=tf.initializers.zeros()) * lrmul\n    b = tf.cast(b, x.dtype)\n    if len(x.shape) == 2:\n        return x + b\n    return x + tf.reshape(b, [1, -1, 1, 1])\n\n#----------------------------------------------------------------------------\n# Leaky ReLU activation. More efficient than tf.nn.leaky_relu() and supports FP16.\n\ndef leaky_relu(x, alpha=0.2):\n    with tf.variable_scope('LeakyReLU'):\n        alpha = tf.constant(alpha, dtype=x.dtype, name='alpha')\n        @tf.custom_gradient\n        def func(x):\n            y = tf.maximum(x, x * alpha)\n            @tf.custom_gradient\n            def grad(dy):\n                dx = tf.where(y >= 0, dy, dy * alpha)\n                return dx, lambda ddx: tf.where(y >= 0, ddx, ddx * alpha)\n            return y, grad\n        return func(x)\n\n#----------------------------------------------------------------------------\n# Pixelwise feature vector normalization.\n\ndef pixel_norm(x, epsilon=1e-8):\n    with tf.variable_scope('PixelNorm'):\n        epsilon = tf.constant(epsilon, dtype=x.dtype, name='epsilon')\n        return x * tf.rsqrt(tf.reduce_mean(tf.square(x), axis=1, keepdims=True) + epsilon)\n\n#----------------------------------------------------------------------------\n# Instance normalization.\n\ndef instance_norm(x, epsilon=1e-8):\n    assert len(x.shape) == 4 # NCHW\n    with tf.variable_scope('InstanceNorm'):\n        orig_dtype = x.dtype\n        x = tf.cast(x, tf.float32)\n        x -= tf.reduce_mean(x, axis=[2,3], keepdims=True)\n        epsilon = tf.constant(epsilon, dtype=x.dtype, name='epsilon')\n        x *= tf.rsqrt(tf.reduce_mean(tf.square(x), axis=[2,3], keepdims=True) + epsilon)\n        x = tf.cast(x, orig_dtype)\n        return x\n\n#----------------------------------------------------------------------------\n# Style modulation.\n\ndef style_mod(x, dlatent, **kwargs):\n    with tf.variable_scope('StyleMod'):\n        style = apply_bias(dense(dlatent, fmaps=x.shape[1]*2, gain=1, **kwargs))\n        style = tf.reshape(style, [-1, 2, x.shape[1]] + [1] * (len(x.shape) - 2))\n        return x * (style[:,0] + 1) + style[:,1]\n\n#----------------------------------------------------------------------------\n# Noise input.\n\ndef apply_noise(x, noise_var=None, randomize_noise=True):\n    assert len(x.shape) == 4 # NCHW\n    with tf.variable_scope('Noise'):\n        if noise_var is None or randomize_noise:\n            noise = tf.random_normal([tf.shape(x)[0], 1, x.shape[2], x.shape[3]], 
dtype=x.dtype)\n        else:\n            noise = tf.cast(noise_var, x.dtype)\n        weight = tf.get_variable('weight', shape=[x.shape[1].value], initializer=tf.initializers.zeros())\n        return x + noise * tf.reshape(tf.cast(weight, x.dtype), [1, -1, 1, 1])\n\n#----------------------------------------------------------------------------\n# Minibatch standard deviation.\n\ndef minibatch_stddev_layer(x, group_size=4, num_new_features=1):\n    with tf.variable_scope('MinibatchStddev'):\n        group_size = tf.minimum(group_size, tf.shape(x)[0])     # Minibatch must be divisible by (or smaller than) group_size.\n        s = x.shape                                             # [NCHW]  Input shape.\n        y = tf.reshape(x, [group_size, -1, num_new_features, s[1]//num_new_features, s[2], s[3]])   # [GMncHW] Split minibatch into M groups of size G. Split channels into n channel groups c.\n        y = tf.cast(y, tf.float32)                              # [GMncHW] Cast to FP32.\n        y -= tf.reduce_mean(y, axis=0, keepdims=True)           # [GMncHW] Subtract mean over group.\n        y = tf.reduce_mean(tf.square(y), axis=0)                # [MncHW]  Calc variance over group.\n        y = tf.sqrt(y + 1e-8)                                   # [MncHW]  Calc stddev over group.\n        y = tf.reduce_mean(y, axis=[2,3,4], keepdims=True)      # [Mn111]  Take average over fmaps and pixels.\n        y = tf.reduce_mean(y, axis=[2])                         # [Mn11] Split channels into c channel groups\n        y = tf.cast(y, x.dtype)                                 # [Mn11]  Cast back to original data type.\n        y = tf.tile(y, [group_size, 1, s[2], s[3]])             # [NnHW]  Replicate over group and pixels.\n        return tf.concat([x, y], axis=1)                        # [NCHW]  Append as new fmap.\n\n#----------------------------------------------------------------------------\n# Style-based generator used in the StyleGAN paper.\n# Composed of two sub-networks (G_mapping and G_synthesis) that are defined below.\n\ndef G_style(\n    latents_in,                                     # First input: Latent vectors (Z) [minibatch, latent_size].\n    labels_in,                                      # Second input: Conditioning labels [minibatch, label_size].\n    truncation_psi          = 0.7,                  # Style strength multiplier for the truncation trick. None = disable.\n    truncation_cutoff       = 8,                    # Number of layers for which to apply the truncation trick. None = disable.\n    truncation_psi_val      = None,                 # Value for truncation_psi to use during validation.\n    truncation_cutoff_val   = None,                 # Value for truncation_cutoff to use during validation.\n    dlatent_avg_beta        = 0.995,                # Decay for tracking the moving average of W during training. None = disable.\n    style_mixing_prob       = 0.9,                  # Probability of mixing styles during training. None = disable.\n    is_training             = False,                # Network is under training? Enables and disables specific features.\n    is_validation           = False,                # Network is under validation? Chooses which value to use for truncation_psi.\n    is_template_graph       = False,                # True = template graph constructed by the Network class, False = actual evaluation.\n    components              = dnnlib.EasyDict(),    # Container for sub-networks. 
Retained between calls.\n    **kwargs):                                      # Arguments for sub-networks (G_mapping and G_synthesis).\n\n    # Validate arguments.\n    assert not is_training or not is_validation\n    assert isinstance(components, dnnlib.EasyDict)\n    if is_validation:\n        truncation_psi = truncation_psi_val\n        truncation_cutoff = truncation_cutoff_val\n    if is_training or (truncation_psi is not None and not tflib.is_tf_expression(truncation_psi) and truncation_psi == 1):\n        truncation_psi = None\n    if is_training or (truncation_cutoff is not None and not tflib.is_tf_expression(truncation_cutoff) and truncation_cutoff <= 0):\n        truncation_cutoff = None\n    if not is_training or (dlatent_avg_beta is not None and not tflib.is_tf_expression(dlatent_avg_beta) and dlatent_avg_beta == 1):\n        dlatent_avg_beta = None\n    if not is_training or (style_mixing_prob is not None and not tflib.is_tf_expression(style_mixing_prob) and style_mixing_prob <= 0):\n        style_mixing_prob = None\n\n    # Setup components.\n    if 'synthesis' not in components:\n        components.synthesis = tflib.Network('G_synthesis', func_name=G_synthesis, **kwargs)\n    num_layers = components.synthesis.input_shape[1]\n    dlatent_size = components.synthesis.input_shape[2]\n    if 'mapping' not in components:\n        components.mapping = tflib.Network('G_mapping', func_name=G_mapping, dlatent_broadcast=num_layers, **kwargs)\n\n    # Setup variables.\n    lod_in = tf.get_variable('lod', initializer=np.float32(0), trainable=False)\n    dlatent_avg = tf.get_variable('dlatent_avg', shape=[dlatent_size], initializer=tf.initializers.zeros(), trainable=False)\n\n    # Evaluate mapping network.\n    dlatents = components.mapping.get_output_for(latents_in, labels_in, **kwargs)\n\n    # Update moving average of W.\n    if dlatent_avg_beta is not None:\n        with tf.variable_scope('DlatentAvg'):\n            batch_avg = tf.reduce_mean(dlatents[:, 0], axis=0)\n            update_op = tf.assign(dlatent_avg, tflib.lerp(batch_avg, dlatent_avg, dlatent_avg_beta))\n            with tf.control_dependencies([update_op]):\n                dlatents = tf.identity(dlatents)\n\n    # Perform style mixing regularization.\n    if style_mixing_prob is not None:\n        with tf.name_scope('StyleMix'):\n            latents2 = tf.random_normal(tf.shape(latents_in))\n            dlatents2 = components.mapping.get_output_for(latents2, labels_in, **kwargs)\n            layer_idx = np.arange(num_layers)[np.newaxis, :, np.newaxis]\n            cur_layers = num_layers - tf.cast(lod_in, tf.int32) * 2\n            mixing_cutoff = tf.cond(\n                tf.random_uniform([], 0.0, 1.0) < style_mixing_prob,\n                lambda: tf.random_uniform([], 1, cur_layers, dtype=tf.int32),\n                lambda: cur_layers)\n            dlatents = tf.where(tf.broadcast_to(layer_idx < mixing_cutoff, tf.shape(dlatents)), dlatents, dlatents2)\n\n    # Apply truncation trick.\n    if truncation_psi is not None and truncation_cutoff is not None:\n        with tf.variable_scope('Truncation'):\n            layer_idx = np.arange(num_layers)[np.newaxis, :, np.newaxis]\n            ones = np.ones(layer_idx.shape, dtype=np.float32)\n            coefs = tf.where(layer_idx < truncation_cutoff, truncation_psi * ones, ones)\n            dlatents = tflib.lerp(dlatent_avg, dlatents, coefs)\n\n    # Evaluate synthesis network.\n    with tf.control_dependencies([tf.assign(components.synthesis.find_var('lod'), lod_in)]):\n   
     images_out = components.synthesis.get_output_for(dlatents, force_clean_graph=is_template_graph, **kwargs)\n    return tf.identity(images_out, name='images_out')\n\n#----------------------------------------------------------------------------\n# Mapping network used in the StyleGAN paper.\n\ndef G_mapping(\n    latents_in,                             # First input: Latent vectors (Z) [minibatch, latent_size].\n    labels_in,                              # Second input: Conditioning labels [minibatch, label_size].\n    latent_size             = 512,          # Latent vector (Z) dimensionality.\n    label_size              = 0,            # Label dimensionality, 0 if no labels.\n    dlatent_size            = 512,          # Disentangled latent (W) dimensionality.\n    dlatent_broadcast       = None,         # Output disentangled latent (W) as [minibatch, dlatent_size] or [minibatch, dlatent_broadcast, dlatent_size].\n    mapping_layers          = 8,            # Number of mapping layers.\n    mapping_fmaps           = 512,          # Number of activations in the mapping layers.\n    mapping_lrmul           = 0.01,         # Learning rate multiplier for the mapping layers.\n    mapping_nonlinearity    = 'lrelu',      # Activation function: 'relu', 'lrelu'.\n    use_wscale              = True,         # Enable equalized learning rate?\n    normalize_latents       = True,         # Normalize latent vectors (Z) before feeding them to the mapping layers?\n    dtype                   = 'float32',    # Data type to use for activations and outputs.\n    **_kwargs):                             # Ignore unrecognized keyword args.\n\n    act, gain = {'relu': (tf.nn.relu, np.sqrt(2)), 'lrelu': (leaky_relu, np.sqrt(2))}[mapping_nonlinearity]\n\n    # Inputs.\n    latents_in.set_shape([None, latent_size])\n    labels_in.set_shape([None, label_size])\n    latents_in = tf.cast(latents_in, dtype)\n    labels_in = tf.cast(labels_in, dtype)\n    x = latents_in\n\n    # Embed labels and concatenate them with latents.\n    if label_size:\n        with tf.variable_scope('LabelConcat'):\n            w = tf.get_variable('weight', shape=[label_size, latent_size], initializer=tf.initializers.random_normal())\n            y = tf.matmul(labels_in, tf.cast(w, dtype))\n            x = tf.concat([x, y], axis=1)\n\n    # Normalize latents.\n    if normalize_latents:\n        x = pixel_norm(x)\n\n    # Mapping layers.\n    for layer_idx in range(mapping_layers):\n        with tf.variable_scope('Dense%d' % layer_idx):\n            fmaps = dlatent_size if layer_idx == mapping_layers - 1 else mapping_fmaps\n            x = dense(x, fmaps=fmaps, gain=gain, use_wscale=use_wscale, lrmul=mapping_lrmul)\n            x = apply_bias(x, lrmul=mapping_lrmul)\n            x = act(x)\n\n    # Broadcast.\n    if dlatent_broadcast is not None:\n        with tf.variable_scope('Broadcast'):\n            x = tf.tile(x[:, np.newaxis], [1, dlatent_broadcast, 1])\n\n    # Output.\n    assert x.dtype == tf.as_dtype(dtype)\n    return tf.identity(x, name='dlatents_out')\n\n#----------------------------------------------------------------------------\n# Synthesis network used in the StyleGAN paper.\n\ndef G_synthesis(\n    dlatents_in,                        # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].\n    dlatent_size        = 512,          # Disentangled latent (W) dimensionality.\n    num_channels        = 3,            # Number of output color channels.\n    resolution          = 1024,         # Output 
resolution.\n    fmap_base           = 8192,         # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    use_styles          = True,         # Enable style inputs?\n    const_input_layer   = True,         # First layer is a learned constant?\n    use_noise           = True,         # Enable noise inputs?\n    randomize_noise     = True,         # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu'\n    use_wscale          = True,         # Enable equalized learning rate?\n    use_pixel_norm      = False,        # Enable pixelwise feature vector normalization?\n    use_instance_norm   = True,         # Enable instance normalization?\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    fused_scale         = 'auto',       # True = fused convolution + scaling, False = separate ops, 'auto' = decide automatically.\n    blur_filter         = [1,2,1],      # Low-pass filter to apply when resampling activations. None = no filtering.\n    structure           = 'auto',       # 'fixed' = no progressive growing, 'linear' = human-readable, 'recursive' = efficient, 'auto' = select automatically.\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    force_clean_graph   = False,        # True = construct a clean graph that looks nice in TensorBoard, False = default behavior.\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)\n    def blur(x): return blur2d(x, blur_filter) if blur_filter else x\n    if is_template_graph: force_clean_graph = True\n    if force_clean_graph: randomize_noise = False\n    if structure == 'auto': structure = 'linear' if force_clean_graph else 'recursive'\n    act, gain = {'relu': (tf.nn.relu, np.sqrt(2)), 'lrelu': (leaky_relu, np.sqrt(2))}[nonlinearity]\n    num_layers = resolution_log2 * 2 - 2\n    num_styles = num_layers if use_styles else 1\n    images_out = None\n\n    # Primary inputs.\n    dlatents_in.set_shape([None, num_styles, dlatent_size])\n    dlatents_in = tf.cast(dlatents_in, dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype)\n\n    # Noise inputs.\n    noise_inputs = []\n    if use_noise:\n        for layer_idx in range(num_layers):\n            res = layer_idx // 2 + 2\n            shape = [1, use_noise, 2**res, 2**res]\n            noise_inputs.append(tf.get_variable('noise%d' % layer_idx, shape=shape, initializer=tf.initializers.random_normal(), trainable=False))\n\n    # Things to do at the end of each layer.\n    def layer_epilogue(x, layer_idx):\n        if use_noise:\n            x = apply_noise(x, noise_inputs[layer_idx], randomize_noise=randomize_noise)\n        x = apply_bias(x)\n        x = act(x)\n        if use_pixel_norm:\n            x = pixel_norm(x)\n        if use_instance_norm:\n            x = instance_norm(x)\n        if use_styles:\n            x = style_mod(x, dlatents_in[:, layer_idx], use_wscale=use_wscale)\n        return 
x\n\n    # Early layers.\n    with tf.variable_scope('4x4'):\n        if const_input_layer:\n            with tf.variable_scope('Const'):\n                x = tf.get_variable('const', shape=[1, nf(1), 4, 4], initializer=tf.initializers.ones())\n                x = layer_epilogue(tf.tile(tf.cast(x, dtype), [tf.shape(dlatents_in)[0], 1, 1, 1]), 0)\n        else:\n            with tf.variable_scope('Dense'):\n                x = dense(dlatents_in[:, 0], fmaps=nf(1)*16, gain=gain/4, use_wscale=use_wscale) # tweak gain to match the official implementation of Progressive GAN\n                x = layer_epilogue(tf.reshape(x, [-1, nf(1), 4, 4]), 0)\n        with tf.variable_scope('Conv'):\n            x = layer_epilogue(conv2d(x, fmaps=nf(1), kernel=3, gain=gain, use_wscale=use_wscale), 1)\n\n    # Building blocks for remaining layers.\n    def block(res, x): # res = 3..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            with tf.variable_scope('Conv0_up'):\n                x = layer_epilogue(blur(upscale2d_conv2d(x, fmaps=nf(res-1), kernel=3, gain=gain, use_wscale=use_wscale, fused_scale=fused_scale)), res*2-4)\n            with tf.variable_scope('Conv1'):\n                x = layer_epilogue(conv2d(x, fmaps=nf(res-1), kernel=3, gain=gain, use_wscale=use_wscale), res*2-3)\n            return x\n    def torgb(res, x): # res = 2..resolution_log2\n        lod = resolution_log2 - res\n        with tf.variable_scope('ToRGB_lod%d' % lod):\n            return apply_bias(conv2d(x, fmaps=num_channels, kernel=1, gain=1, use_wscale=use_wscale))\n\n    # Fixed structure: simple and efficient, but does not support progressive growing.\n    if structure == 'fixed':\n        for res in range(3, resolution_log2 + 1):\n            x = block(res, x)\n        images_out = torgb(resolution_log2, x)\n\n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        images_out = torgb(2, x)\n        for res in range(3, resolution_log2 + 1):\n            lod = resolution_log2 - res\n            x = block(res, x)\n            img = torgb(res, x)\n            images_out = upscale2d(images_out)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                images_out = tflib.lerp_clip(img, images_out, lod_in - lod)\n\n    # Recursive structure: complex but efficient.\n    if structure == 'recursive':\n        def cset(cur_lambda, new_cond, new_lambda):\n            return lambda: tf.cond(new_cond, new_lambda, cur_lambda)\n        def grow(x, res, lod):\n            y = block(res, x)\n            img = lambda: upscale2d(torgb(res, y), 2**lod)\n            img = cset(img, (lod_in > lod), lambda: upscale2d(tflib.lerp(torgb(res, y), upscale2d(torgb(res - 1, x)), lod_in - lod), 2**lod))\n            if lod > 0: img = cset(img, (lod_in < lod), lambda: grow(y, res + 1, lod - 1))\n            return img()\n        images_out = grow(x, 3, resolution_log2 - 3)\n\n    assert images_out.dtype == tf.as_dtype(dtype)\n    return tf.identity(images_out, name='images_out')\n\n#----------------------------------------------------------------------------\n# Discriminator used in the StyleGAN paper.\n\ndef D_basic(\n    images_in,                          # First input: Images [minibatch, channel, height, width].\n    labels_in,                          # Second input: Labels [minibatch, label_size].\n    num_channels        = 1,            # Number of input color channels. Overridden based on dataset.\n    resolution          = 32,           # Input resolution. 
Overridden based on dataset.\n    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. Overridden based on dataset.\n    fmap_base           = 8192,         # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu',\n    use_wscale          = True,         # Enable equalized learning rate?\n    mbstd_group_size    = 4,            # Group size for the minibatch standard deviation layer, 0 = disable.\n    mbstd_num_features  = 1,            # Number of features for the minibatch standard deviation layer.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    fused_scale         = 'auto',       # True = fused convolution + scaling, False = separate ops, 'auto' = decide automatically.\n    blur_filter         = [1,2,1],      # Low-pass filter to apply when resampling activations. None = no filtering.\n    structure           = 'auto',       # 'fixed' = no progressive growing, 'linear' = human-readable, 'recursive' = efficient, 'auto' = select automatically.\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)\n    def blur(x): return blur2d(x, blur_filter) if blur_filter else x\n    if structure == 'auto': structure = 'linear' if is_template_graph else 'recursive'\n    act, gain = {'relu': (tf.nn.relu, np.sqrt(2)), 'lrelu': (leaky_relu, np.sqrt(2))}[nonlinearity]\n\n    images_in.set_shape([None, num_channels, resolution, resolution])\n    labels_in.set_shape([None, label_size])\n    images_in = tf.cast(images_in, dtype)\n    labels_in = tf.cast(labels_in, dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0.0), trainable=False), dtype)\n    scores_out = None\n\n    # Building blocks.\n    def fromrgb(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('FromRGB_lod%d' % (resolution_log2 - res)):\n            return act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=1, gain=gain, use_wscale=use_wscale)))\n    def block(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            if res >= 3: # 8x8 and up\n                with tf.variable_scope('Conv0'):\n                    x = act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, gain=gain, use_wscale=use_wscale)))\n                with tf.variable_scope('Conv1_down'):\n                    x = act(apply_bias(conv2d_downscale2d(blur(x), fmaps=nf(res-2), kernel=3, gain=gain, use_wscale=use_wscale, fused_scale=fused_scale)))\n            else: # 4x4\n                if mbstd_group_size > 1:\n                    x = minibatch_stddev_layer(x, mbstd_group_size, mbstd_num_features)\n                with tf.variable_scope('Conv'):\n                    x = act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, gain=gain, use_wscale=use_wscale)))\n                with tf.variable_scope('Dense0'):\n                    x = act(apply_bias(dense(x, fmaps=nf(res-2), gain=gain, use_wscale=use_wscale)))\n  
              with tf.variable_scope('Dense1'):\n                    x = apply_bias(dense(x, fmaps=max(label_size, 1), gain=1, use_wscale=use_wscale))\n            return x\n\n    # Fixed structure: simple and efficient, but does not support progressive growing.\n    if structure == 'fixed':\n        x = fromrgb(images_in, resolution_log2)\n        for res in range(resolution_log2, 2, -1):\n            x = block(x, res)\n        scores_out = block(x, 2)\n\n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        img = images_in\n        x = fromrgb(img, resolution_log2)\n        for res in range(resolution_log2, 2, -1):\n            lod = resolution_log2 - res\n            x = block(x, res)\n            img = downscale2d(img)\n            y = fromrgb(img, res - 1)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                x = tflib.lerp_clip(x, y, lod_in - lod)\n        scores_out = block(x, 2)\n\n    # Recursive structure: complex but efficient.\n    if structure == 'recursive':\n        def cset(cur_lambda, new_cond, new_lambda):\n            return lambda: tf.cond(new_cond, new_lambda, cur_lambda)\n        def grow(res, lod):\n            x = lambda: fromrgb(downscale2d(images_in, 2**lod), res)\n            if lod > 0: x = cset(x, (lod_in < lod), lambda: grow(res + 1, lod - 1))\n            x = block(x(), res); y = lambda: x\n            if res > 2: y = cset(y, (lod_in > lod), lambda: tflib.lerp(x, fromrgb(downscale2d(images_in, 2**(lod+1)), res - 1), lod_in - lod))\n            return y()\n        scores_out = grow(2, resolution_log2 - 2)\n\n    # Label conditioning from \"Which Training Methods for GANs do actually Converge?\"\n    if label_size:\n        with tf.variable_scope('LabelSwitch'):\n            scores_out = tf.reduce_sum(scores_out * labels_in, axis=1, keepdims=True)\n\n    assert scores_out.dtype == tf.as_dtype(dtype)\n    scores_out = tf.identity(scores_out, name='scores_out')\n    return scores_out\n\n#----------------------------------------------------------------------------\n"
  },
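To make the truncation trick in G_style concrete: tflib.lerp(a, b, t) is a + (b - a) * t, so layers below truncation_cutoff are pulled toward the moving average dlatent_avg with strength truncation_psi, while the remaining layers pass through unchanged. A NumPy rendering of the coefs/lerp logic with illustrative shapes (18 layers and 512 dims correspond to a 1024x1024 generator; this is a sketch, not repository code):

```python
import numpy as np

num_layers, dlatent_size, minibatch = 18, 512, 4
truncation_psi, truncation_cutoff = 0.7, 8

dlatent_avg = np.zeros(dlatent_size, dtype=np.float32)  # tracked during training
dlatents = np.random.randn(minibatch, num_layers, dlatent_size).astype(np.float32)

layer_idx = np.arange(num_layers)[np.newaxis, :, np.newaxis]  # [1, num_layers, 1]
ones = np.ones(layer_idx.shape, dtype=np.float32)
coefs = np.where(layer_idx < truncation_cutoff, truncation_psi * ones, ones)

# Equivalent of tflib.lerp(dlatent_avg, dlatents, coefs):
truncated = dlatent_avg + (dlatents - dlatent_avg) * coefs
assert truncated.shape == dlatents.shape
assert np.allclose(truncated[:, truncation_cutoff:], dlatents[:, truncation_cutoff:])
```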
  {
    "path": "FQ-StyleGAN/training/networks_stylegan2.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Network architectures used in the StyleGAN2 paper.\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\nfrom dnnlib.tflib.ops.upfirdn_2d import upsample_2d, downsample_2d, upsample_conv_2d, conv_downsample_2d\nfrom dnnlib.tflib.ops.fused_bias_act import fused_bias_act\nfrom tensorflow.python.training import moving_averages\n# NOTE: Do not import any application-specific modules here!\n# Specify all network parameters as kwargs.\n\n#----------------------------------------------------------------------------\n# Get/create weight tensor for a convolution or fully-connected layer.\n\ndef get_weight(shape, gain=1, use_wscale=True, lrmul=1, weight_var='weight'):\n    fan_in = np.prod(shape[:-1]) # [kernel, kernel, fmaps_in, fmaps_out] or [in, out]\n    he_std = gain / np.sqrt(fan_in) # He init\n\n    # Equalized learning rate and custom learning rate multiplier.\n    if use_wscale:\n        init_std = 1.0 / lrmul\n        runtime_coef = he_std * lrmul\n    else:\n        init_std = he_std / lrmul\n        runtime_coef = lrmul\n\n    # Create variable.\n    init = tf.initializers.random_normal(0, init_std)\n    return tf.get_variable(weight_var, shape=shape, initializer=init) * runtime_coef\n\n#----------------------------------------------------------------------------\n# Fully-connected layer.\n\ndef dense_layer(x, fmaps, gain=1, use_wscale=True, lrmul=1, weight_var='weight'):\n    if len(x.shape) > 2:\n        x = tf.reshape(x, [-1, np.prod([d.value for d in x.shape[1:]])])\n    w = get_weight([x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale, lrmul=lrmul, weight_var=weight_var)\n    w = tf.cast(w, x.dtype)\n    return tf.matmul(x, w)\n\n#----------------------------------------------------------------------------\n# Convolution layer with optional upsampling or downsampling.\n\ndef conv2d_layer(x, fmaps, kernel, up=False, down=False, resample_kernel=None, gain=1, use_wscale=True, lrmul=1, weight_var='weight'):\n    assert not (up and down)\n    assert kernel >= 1 and kernel % 2 == 1\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale, lrmul=lrmul, weight_var=weight_var)\n    if up:\n        x = upsample_conv_2d(x, tf.cast(w, x.dtype), data_format='NCHW', k=resample_kernel)\n    elif down:\n        x = conv_downsample_2d(x, tf.cast(w, x.dtype), data_format='NCHW', k=resample_kernel)\n    else:\n        x = tf.nn.conv2d(x, tf.cast(w, x.dtype), data_format='NCHW', strides=[1,1,1,1], padding='SAME')\n    return x\n\n#----------------------------------------------------------------------------\n# Apply bias and activation func.\n\ndef apply_bias_act(x, act='linear', alpha=None, gain=None, lrmul=1, bias_var='bias'):\n    b = tf.get_variable(bias_var, shape=[x.shape[1]], initializer=tf.initializers.zeros()) * lrmul\n    return fused_bias_act(x, b=tf.cast(b, x.dtype), act=act, alpha=alpha, gain=gain)\n\n#----------------------------------------------------------------------------\n# Naive upsampling (nearest neighbor) and downsampling (average pooling).\n\ndef naive_upsample_2d(x, factor=2):\n    with tf.variable_scope('NaiveUpsample'):\n        _N, C, H, W = x.shape.as_list()\n        x = tf.reshape(x, [-1, C, H, 1, W, 1])\n        x = tf.tile(x, [1, 
1, 1, factor, 1, factor])\n        return tf.reshape(x, [-1, C, H * factor, W * factor])\n\ndef naive_downsample_2d(x, factor=2):\n    with tf.variable_scope('NaiveDownsample'):\n        _N, C, H, W = x.shape.as_list()\n        x = tf.reshape(x, [-1, C, H // factor, factor, W // factor, factor])\n        return tf.reduce_mean(x, axis=[3,5])\n\n#----------------------------------------------------------------------------\n# Modulated convolution layer.\n\ndef modulated_conv2d_layer(x, y, fmaps, kernel, up=False, down=False, demodulate=True, resample_kernel=None, gain=1, use_wscale=True, lrmul=1, fused_modconv=True, weight_var='weight', mod_weight_var='mod_weight', mod_bias_var='mod_bias'):\n    assert not (up and down)\n    assert kernel >= 1 and kernel % 2 == 1\n\n    # Get weight.\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale, lrmul=lrmul, weight_var=weight_var)\n    ww = w[np.newaxis] # [BkkIO] Introduce minibatch dimension.\n\n    # Modulate.\n    s = dense_layer(y, fmaps=x.shape[1].value, weight_var=mod_weight_var) # [BI] Transform incoming W to style.\n    s = apply_bias_act(s, bias_var=mod_bias_var) + 1 # [BI] Add bias (initially 1).\n    ww *= tf.cast(s[:, np.newaxis, np.newaxis, :, np.newaxis], w.dtype) # [BkkIO] Scale input feature maps.\n\n    # Demodulate.\n    if demodulate:\n        d = tf.rsqrt(tf.reduce_sum(tf.square(ww), axis=[1,2,3]) + 1e-8) # [BO] Scaling factor.\n        ww *= d[:, np.newaxis, np.newaxis, np.newaxis, :] # [BkkIO] Scale output feature maps.\n\n    # Reshape/scale input.\n    if fused_modconv:\n        x = tf.reshape(x, [1, -1, x.shape[2], x.shape[3]]) # Fused => reshape minibatch to convolution groups.\n        w = tf.reshape(tf.transpose(ww, [1, 2, 3, 0, 4]), [ww.shape[1], ww.shape[2], ww.shape[3], -1])\n    else:\n        x *= tf.cast(s[:, :, np.newaxis, np.newaxis], x.dtype) # [BIhw] Not fused => scale input activations.\n\n    # Convolution with optional up/downsampling.\n    if up:\n        x = upsample_conv_2d(x, tf.cast(w, x.dtype), data_format='NCHW', k=resample_kernel)\n    elif down:\n        x = conv_downsample_2d(x, tf.cast(w, x.dtype), data_format='NCHW', k=resample_kernel)\n    else:\n        x = tf.nn.conv2d(x, tf.cast(w, x.dtype), data_format='NCHW', strides=[1,1,1,1], padding='SAME')\n\n    # Reshape/scale output.\n    if fused_modconv:\n        x = tf.reshape(x, [-1, fmaps, x.shape[2], x.shape[3]]) # Fused => reshape convolution groups back to minibatch.\n    elif demodulate:\n        x *= tf.cast(d[:, :, np.newaxis, np.newaxis], x.dtype) # [BOhw] Not fused => scale output activations.\n    return x\n\n#----------------------------------------------------------------------------\n# Minibatch standard deviation layer.\n\ndef minibatch_stddev_layer(x, group_size=4, num_new_features=1):\n    group_size = tf.minimum(group_size, tf.shape(x)[0])     # Minibatch must be divisible by (or smaller than) group_size.\n    s = x.shape                                             # [NCHW]  Input shape.\n    y = tf.reshape(x, [group_size, -1, num_new_features, s[1]//num_new_features, s[2], s[3]])   # [GMncHW] Split minibatch into M groups of size G. 
Split channels into n channel groups c.\n    y = tf.cast(y, tf.float32)                              # [GMncHW] Cast to FP32.\n    y -= tf.reduce_mean(y, axis=0, keepdims=True)           # [GMncHW] Subtract mean over group.\n    y = tf.reduce_mean(tf.square(y), axis=0)                # [MncHW]  Calc variance over group.\n    y = tf.sqrt(y + 1e-8)                                   # [MncHW]  Calc stddev over group.\n    y = tf.reduce_mean(y, axis=[2,3,4], keepdims=True)      # [Mn111]  Take average over fmaps and pixels.\n    y = tf.reduce_mean(y, axis=[2])                         # [Mn11] Split channels into c channel groups\n    y = tf.cast(y, x.dtype)                                 # [Mn11]  Cast back to original data type.\n    y = tf.tile(y, [group_size, 1, s[2], s[3]])             # [NnHW]  Replicate over group and pixels.\n    return tf.concat([x, y], axis=1)                        # [NCHW]  Append as new fmap.\n\n#----------------------------------------------------------------------------\n# Main generator network.\n# Composed of two sub-networks (mapping and synthesis) that are defined below.\n# Used in configs B-F (Table 1).\n\ndef G_main(\n    latents_in,                                         # First input: Latent vectors (Z) [minibatch, latent_size].\n    labels_in,                                          # Second input: Conditioning labels [minibatch, label_size].\n    truncation_psi          = 0.5,                      # Style strength multiplier for the truncation trick. None = disable.\n    truncation_cutoff       = None,                     # Number of layers for which to apply the truncation trick. None = disable.\n    truncation_psi_val      = None,                     # Value for truncation_psi to use during validation.\n    truncation_cutoff_val   = None,                     # Value for truncation_cutoff to use during validation.\n    dlatent_avg_beta        = 0.995,                    # Decay for tracking the moving average of W during training. None = disable.\n    style_mixing_prob       = 0.9,                      # Probability of mixing styles during training. None = disable.\n    is_training             = False,                    # Network is under training? Enables and disables specific features.\n    is_validation           = False,                    # Network is under validation? Chooses which value to use for truncation_psi.\n    return_dlatents         = False,                    # Return dlatents in addition to the images?\n    is_template_graph       = False,                    # True = template graph constructed by the Network class, False = actual evaluation.\n    components              = dnnlib.EasyDict(),        # Container for sub-networks. 
Retained between calls.\n    mapping_func            = 'G_mapping',              # Build func name for the mapping network.\n    synthesis_func          = 'G_synthesis_stylegan2',  # Build func name for the synthesis network.\n    **kwargs):                                          # Arguments for sub-networks (mapping and synthesis).\n\n    # Validate arguments.\n    assert not is_training or not is_validation\n    assert isinstance(components, dnnlib.EasyDict)\n    if is_validation:\n        truncation_psi = truncation_psi_val\n        truncation_cutoff = truncation_cutoff_val\n    if is_training or (truncation_psi is not None and not tflib.is_tf_expression(truncation_psi) and truncation_psi == 1):\n        truncation_psi = None\n    if is_training:\n        truncation_cutoff = None\n    if not is_training or (dlatent_avg_beta is not None and not tflib.is_tf_expression(dlatent_avg_beta) and dlatent_avg_beta == 1):\n        dlatent_avg_beta = None\n    if not is_training or (style_mixing_prob is not None and not tflib.is_tf_expression(style_mixing_prob) and style_mixing_prob <= 0):\n        style_mixing_prob = None\n\n    # Setup components.\n    if 'synthesis' not in components:\n        components.synthesis = tflib.Network('G_synthesis', func_name=globals()[synthesis_func], **kwargs)\n    num_layers = components.synthesis.input_shape[1]\n    dlatent_size = components.synthesis.input_shape[2]\n    if 'mapping' not in components:\n        components.mapping = tflib.Network('G_mapping', func_name=globals()[mapping_func], dlatent_broadcast=num_layers, **kwargs)\n\n    # Setup variables.\n    lod_in = tf.get_variable('lod', initializer=np.float32(0), trainable=False)\n    dlatent_avg = tf.get_variable('dlatent_avg', shape=[dlatent_size], initializer=tf.initializers.zeros(), trainable=False)\n\n    # Evaluate mapping network.\n    dlatents = components.mapping.get_output_for(latents_in, labels_in, is_training=is_training, **kwargs)\n    dlatents = tf.cast(dlatents, tf.float32)\n\n    # Update moving average of W.\n    if dlatent_avg_beta is not None:\n        with tf.variable_scope('DlatentAvg'):\n            batch_avg = tf.reduce_mean(dlatents[:, 0], axis=0)\n            update_op = tf.assign(dlatent_avg, tflib.lerp(batch_avg, dlatent_avg, dlatent_avg_beta))\n            with tf.control_dependencies([update_op]):\n                dlatents = tf.identity(dlatents)\n\n    # Perform style mixing regularization.\n    if style_mixing_prob is not None:\n        with tf.variable_scope('StyleMix'):\n            latents2 = tf.random_normal(tf.shape(latents_in))\n            dlatents2 = components.mapping.get_output_for(latents2, labels_in, is_training=is_training, **kwargs)\n            dlatents2 = tf.cast(dlatents2, tf.float32)\n            layer_idx = np.arange(num_layers)[np.newaxis, :, np.newaxis]\n            cur_layers = num_layers - tf.cast(lod_in, tf.int32) * 2\n            mixing_cutoff = tf.cond(\n                tf.random_uniform([], 0.0, 1.0) < style_mixing_prob,\n                lambda: tf.random_uniform([], 1, cur_layers, dtype=tf.int32),\n                lambda: cur_layers)\n            dlatents = tf.where(tf.broadcast_to(layer_idx < mixing_cutoff, tf.shape(dlatents)), dlatents, dlatents2)\n\n    # Apply truncation trick.\n    if truncation_psi is not None:\n        with tf.variable_scope('Truncation'):\n            layer_idx = np.arange(num_layers)[np.newaxis, :, np.newaxis]\n            layer_psi = np.ones(layer_idx.shape, dtype=np.float32)\n            if truncation_cutoff is None:\n 
               layer_psi *= truncation_psi\n            else:\n                layer_psi = tf.where(layer_idx < truncation_cutoff, layer_psi * truncation_psi, layer_psi)\n            dlatents = tflib.lerp(dlatent_avg, dlatents, layer_psi)\n\n    # Evaluate synthesis network.\n    deps = []\n    if 'lod' in components.synthesis.vars:\n        deps.append(tf.assign(components.synthesis.vars['lod'], lod_in))\n    with tf.control_dependencies(deps):\n        images_out = components.synthesis.get_output_for(dlatents, is_training=is_training, force_clean_graph=is_template_graph, **kwargs)\n\n    # Return requested outputs.\n    images_out = tf.identity(images_out, name='images_out')\n    if return_dlatents:\n        return images_out, dlatents\n    return images_out\n\n#----------------------------------------------------------------------------\n# Mapping network.\n# Transforms the input latent code (z) to the disentangled latent code (w).\n# Used in configs B-F (Table 1).\n\ndef G_mapping(\n    latents_in,                             # First input: Latent vectors (Z) [minibatch, latent_size].\n    labels_in,                              # Second input: Conditioning labels [minibatch, label_size].\n    latent_size             = 512,          # Latent vector (Z) dimensionality.\n    label_size              = 0,            # Label dimensionality, 0 if no labels.\n    dlatent_size            = 512,          # Disentangled latent (W) dimensionality.\n    dlatent_broadcast       = None,         # Output disentangled latent (W) as [minibatch, dlatent_size] or [minibatch, dlatent_broadcast, dlatent_size].\n    mapping_layers          = 8,            # Number of mapping layers.\n    mapping_fmaps           = 512,          # Number of activations in the mapping layers.\n    mapping_lrmul           = 0.01,         # Learning rate multiplier for the mapping layers.\n    mapping_nonlinearity    = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n    normalize_latents       = True,         # Normalize latent vectors (Z) before feeding them to the mapping layers?\n    dtype                   = 'float32',    # Data type to use for activations and outputs.\n    **_kwargs):                             # Ignore unrecognized keyword args.\n\n    act = mapping_nonlinearity\n\n    # Inputs.\n    latents_in.set_shape([None, latent_size])\n    labels_in.set_shape([None, label_size])\n    latents_in = tf.cast(latents_in, dtype)\n    labels_in = tf.cast(labels_in, dtype)\n    x = latents_in\n\n    # Embed labels and concatenate them with latents.\n    if label_size:\n        with tf.variable_scope('LabelConcat'):\n            w = tf.get_variable('weight', shape=[label_size, latent_size], initializer=tf.initializers.random_normal())\n            y = tf.matmul(labels_in, tf.cast(w, dtype))\n            x = tf.concat([x, y], axis=1)\n\n    # Normalize latents.\n    if normalize_latents:\n        with tf.variable_scope('Normalize'):\n            x *= tf.rsqrt(tf.reduce_mean(tf.square(x), axis=1, keepdims=True) + 1e-8)\n\n    # Mapping layers.\n    for layer_idx in range(mapping_layers):\n        with tf.variable_scope('Dense%d' % layer_idx):\n            fmaps = dlatent_size if layer_idx == mapping_layers - 1 else mapping_fmaps\n            x = apply_bias_act(dense_layer(x, fmaps=fmaps, lrmul=mapping_lrmul), act=act, lrmul=mapping_lrmul)\n\n    # Broadcast.\n    if dlatent_broadcast is not None:\n        with tf.variable_scope('Broadcast'):\n            x = tf.tile(x[:, np.newaxis], [1, dlatent_broadcast, 1])\n\n 
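  # x now has shape [minibatch, dlatent_broadcast, dlatent_size]: one copy of\n
    # w per synthesis-network layer. G_main applies the truncation trick to the\n
    # broadcast dlatents; conceptually,\n
    #     w' = dlatent_avg + layer_psi * (w - dlatent_avg)\n
    # with layer_psi = truncation_psi below truncation_cutoff and 1 elsewhere\n
    # (see the 'Truncation' scope in G_main above).\n\n 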
   # Output.\n    assert x.dtype == tf.as_dtype(dtype)\n    return tf.identity(x, name='dlatents_out')\n\n#----------------------------------------------------------------------------\n# StyleGAN synthesis network with revised architecture (Figure 2d).\n# Implements progressive growing, but no skip connections or residual nets (Figure 7).\n# Used in configs B-D (Table 1).\n\ndef G_synthesis_stylegan_revised(\n    dlatents_in,                        # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].\n    dlatent_size        = 512,          # Disentangled latent (W) dimensionality.\n    num_channels        = 3,            # Number of output color channels.\n    resolution          = 1024,         # Output resolution.\n    fmap_base           = 16 << 10,     # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_min            = 1,            # Minimum number of feature maps in any layer.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    randomize_noise     = True,         # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    resample_kernel     = [1,3,3,1],    # Low-pass filter to apply when resampling activations. None = no filtering.\n    fused_modconv       = True,         # Implement modulated_conv2d_layer() as a single fused op?\n    structure           = 'auto',       # 'fixed' = no progressive growing, 'linear' = human-readable, 'recursive' = efficient, 'auto' = select automatically.\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    force_clean_graph   = False,        # True = construct a clean graph that looks nice in TensorBoard, False = default behavior.\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return np.clip(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_min, fmap_max)\n    if is_template_graph: force_clean_graph = True\n    if force_clean_graph: randomize_noise = False\n    if structure == 'auto': structure = 'linear' if force_clean_graph else 'recursive'\n    act = nonlinearity\n    num_layers = resolution_log2 * 2 - 2\n    images_out = None\n\n    # Primary inputs.\n    dlatents_in.set_shape([None, num_layers, dlatent_size])\n    dlatents_in = tf.cast(dlatents_in, dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype)\n\n    # Noise inputs.\n    noise_inputs = []\n    for layer_idx in range(num_layers - 1):\n        res = (layer_idx + 5) // 2\n        shape = [1, 1, 2**res, 2**res]\n        noise_inputs.append(tf.get_variable('noise%d' % layer_idx, shape=shape, initializer=tf.initializers.random_normal(), trainable=False))\n\n    # Single convolution layer with all the bells and whistles.\n    def layer(x, layer_idx, fmaps, kernel, up=False):\n        x = modulated_conv2d_layer(x, dlatents_in[:, layer_idx], fmaps=fmaps, kernel=kernel, up=up, resample_kernel=resample_kernel, fused_modconv=fused_modconv)\n        if randomize_noise:\n            noise = 
tf.random_normal([tf.shape(x)[0], 1, x.shape[2], x.shape[3]], dtype=x.dtype)\n        else:\n            noise = tf.cast(noise_inputs[layer_idx], x.dtype)\n        noise_strength = tf.get_variable('noise_strength', shape=[], initializer=tf.initializers.zeros())\n        x += noise * tf.cast(noise_strength, x.dtype)\n        return apply_bias_act(x, act=act)\n\n    # Early layers.\n    with tf.variable_scope('4x4'):\n        with tf.variable_scope('Const'):\n            x = tf.get_variable('const', shape=[1, nf(1), 4, 4], initializer=tf.initializers.random_normal())\n            x = tf.tile(tf.cast(x, dtype), [tf.shape(dlatents_in)[0], 1, 1, 1])\n        with tf.variable_scope('Conv'):\n            x = layer(x, layer_idx=0, fmaps=nf(1), kernel=3)\n\n    # Building blocks for remaining layers.\n    def block(res, x): # res = 3..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            with tf.variable_scope('Conv0_up'):\n                x = layer(x, layer_idx=res*2-5, fmaps=nf(res-1), kernel=3, up=True)\n            with tf.variable_scope('Conv1'):\n                x = layer(x, layer_idx=res*2-4, fmaps=nf(res-1), kernel=3)\n            return x\n    def torgb(res, x): # res = 2..resolution_log2\n        with tf.variable_scope('ToRGB_lod%d' % (resolution_log2 - res)):\n            return apply_bias_act(modulated_conv2d_layer(x, dlatents_in[:, res*2-3], fmaps=num_channels, kernel=1, demodulate=False, fused_modconv=fused_modconv))\n\n    # Fixed structure: simple and efficient, but does not support progressive growing.\n    if structure == 'fixed':\n        for res in range(3, resolution_log2 + 1):\n            x = block(res, x)\n        images_out = torgb(resolution_log2, x)\n\n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        images_out = torgb(2, x)\n        for res in range(3, resolution_log2 + 1):\n            lod = resolution_log2 - res\n            x = block(res, x)\n            img = torgb(res, x)\n            with tf.variable_scope('Upsample_lod%d' % lod):\n                images_out = upsample_2d(images_out)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                images_out = tflib.lerp_clip(img, images_out, lod_in - lod)\n\n    # Recursive structure: complex but efficient.\n    if structure == 'recursive':\n        def cset(cur_lambda, new_cond, new_lambda):\n            return lambda: tf.cond(new_cond, new_lambda, cur_lambda)\n        def grow(x, res, lod):\n            y = block(res, x)\n            img = lambda: naive_upsample_2d(torgb(res, y), factor=2**lod)\n            img = cset(img, (lod_in > lod), lambda: naive_upsample_2d(tflib.lerp(torgb(res, y), upsample_2d(torgb(res - 1, x)), lod_in - lod), factor=2**lod))\n            if lod > 0: img = cset(img, (lod_in < lod), lambda: grow(y, res + 1, lod - 1))\n            return img()\n        images_out = grow(x, 3, resolution_log2 - 3)\n\n    assert images_out.dtype == tf.as_dtype(dtype)\n    return tf.identity(images_out, name='images_out')\n\n#----------------------------------------------------------------------------\n# StyleGAN2 synthesis network (Figure 7).\n# Implements skip connections and residual nets (Figure 7), but no progressive growing.\n# Used in configs E-F (Table 1).\n\ndef G_synthesis_stylegan2(\n    dlatents_in,                        # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].\n    dlatent_size        = 512,          # Disentangled latent (W) dimensionality.\n    num_channels        = 3,           
 # Number of output color channels.\n    resolution          = 1024,         # Output resolution.\n    fmap_base           = 16 << 10,     # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_min            = 1,            # Minimum number of feature maps in any layer.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    randomize_noise     = True,         # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.\n    architecture        = 'skip',       # Architecture: 'orig', 'skip', 'resnet'.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    resample_kernel     = [1,3,3,1],    # Low-pass filter to apply when resampling activations. None = no filtering.\n    fused_modconv       = True,         # Implement modulated_conv2d_layer() as a single fused op?\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return np.clip(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_min, fmap_max)\n    assert architecture in ['orig', 'skip', 'resnet']\n    act = nonlinearity\n    num_layers = resolution_log2 * 2 - 2\n    images_out = None\n\n    # Primary inputs.\n    dlatents_in.set_shape([None, num_layers, dlatent_size])\n    dlatents_in = tf.cast(dlatents_in, dtype)\n\n    # Noise inputs.\n    noise_inputs = []\n    for layer_idx in range(num_layers - 1):\n        res = (layer_idx + 5) // 2\n        shape = [1, 1, 2**res, 2**res]\n        noise_inputs.append(tf.get_variable('noise%d' % layer_idx, shape=shape, initializer=tf.initializers.random_normal(), trainable=False))\n\n    # Single convolution layer with all the bells and whistles.\n    def layer(x, layer_idx, fmaps, kernel, up=False):\n        x = modulated_conv2d_layer(x, dlatents_in[:, layer_idx], fmaps=fmaps, kernel=kernel, up=up, resample_kernel=resample_kernel, fused_modconv=fused_modconv)\n        if randomize_noise:\n            noise = tf.random_normal([tf.shape(x)[0], 1, x.shape[2], x.shape[3]], dtype=x.dtype)\n        else:\n            noise = tf.cast(noise_inputs[layer_idx], x.dtype)\n        noise_strength = tf.get_variable('noise_strength', shape=[], initializer=tf.initializers.zeros())\n        x += noise * tf.cast(noise_strength, x.dtype)\n        return apply_bias_act(x, act=act)\n\n    # Building blocks for main layers.\n    def block(x, res): # res = 3..resolution_log2\n        t = x\n        with tf.variable_scope('Conv0_up'):\n            x = layer(x, layer_idx=res*2-5, fmaps=nf(res-1), kernel=3, up=True)\n        with tf.variable_scope('Conv1'):\n            x = layer(x, layer_idx=res*2-4, fmaps=nf(res-1), kernel=3)\n        if architecture == 'resnet':\n            with tf.variable_scope('Skip'):\n                t = conv2d_layer(t, fmaps=nf(res-1), kernel=1, up=True, resample_kernel=resample_kernel)\n                x = (x + t) * (1 / np.sqrt(2))\n        return x\n    def upsample(y):\n        with tf.variable_scope('Upsample'):\n            return upsample_2d(y, k=resample_kernel)\n    def torgb(x, y, res): # res = 2..resolution_log2\n        with tf.variable_scope('ToRGB'):\n            t = apply_bias_act(modulated_conv2d_layer(x, 
dlatents_in[:, res*2-3], fmaps=num_channels, kernel=1, demodulate=False, fused_modconv=fused_modconv))\n
            return t if y is None else y + t\n
\n
    # Early layers.\n
    y = None\n
    with tf.variable_scope('4x4'):\n
        with tf.variable_scope('Const'):\n
            x = tf.get_variable('const', shape=[1, nf(1), 4, 4], initializer=tf.initializers.random_normal())\n
            x = tf.tile(tf.cast(x, dtype), [tf.shape(dlatents_in)[0], 1, 1, 1])\n
        with tf.variable_scope('Conv'):\n
            x = layer(x, layer_idx=0, fmaps=nf(1), kernel=3)\n
        if architecture == 'skip':\n
            y = torgb(x, y, 2)\n
\n
    # Main layers.\n
    for res in range(3, resolution_log2 + 1):\n
        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n
            x = block(x, res)\n
            if architecture == 'skip':\n
                y = upsample(y)\n
            if architecture == 'skip' or res == resolution_log2:\n
                y = torgb(x, y, res)\n
    images_out = y\n
\n
    assert images_out.dtype == tf.as_dtype(dtype)\n
    return tf.identity(images_out, name='images_out')\n
\n
#----------------------------------------------------------------------------\n
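# The quantizer below follows the EMA variant of VQ-VAE (van den Oord et al.,\n
# 2017): every input vector is assigned to its nearest codeword, and the\n
# codebook is updated with exponential moving averages instead of gradients:\n
#     N_i <- decay * N_i + (1 - decay) * (count of vectors assigned to i)\n
#     m_i <- decay * m_i + (1 - decay) * (sum of vectors assigned to i)\n
#     e_i <- m_i / N_i          (Laplace-smoothed with epsilon below)\n
# The layers producing 'inputs' receive gradients through the straight-through\n
# estimator, quantized = inputs + stop_gradient(quantized - inputs), so only\n
# the commitment term commitment_cost * ||sg(quantized) - inputs||^2 reaches\n
# them.\n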
# Vector quantization layer with EMA codebook updates (VQ-EMA).\n
def VectorQuantizerEMA(inputs, is_training=True, embedding_dim=512,\n
                       num_embeddings=2**8,\n
                       decay=0.8, commitment_cost=1.0,\n
                       epsilon=1e-5,\n
                       **_kwargs):\n
    _embedding_dim = embedding_dim\n
    _num_embeddings = num_embeddings\n
    _decay = decay\n
    _commitment_cost = commitment_cost\n
    _epsilon = epsilon\n
\n
    # w is a matrix with an embedding in each column. When training, the\n
    # embedding is assigned to be the average of all inputs assigned to that\n
    # embedding.\n
    embedding_shape = [embedding_dim, num_embeddings]\n
    _w = tf.get_variable(\n
        'embedding', embedding_shape,\n
        initializer=tf.variance_scaling_initializer(), use_resource=True)\n
    _ema_cluster_size = tf.get_variable(\n
        'ema_cluster_size', [num_embeddings],\n
        initializer=tf.constant_initializer(0), use_resource=True)\n
    _ema_w = tf.get_variable(\n
        'ema_dw', initializer=_w.initialized_value(), use_resource=True)\n
    inputs.set_shape([None, None, None, embedding_dim])\n
\n
    def quantize(encoding_indices):\n
        with tf.control_dependencies([encoding_indices]):\n
            w = tf.transpose(_w.read_value(), [1, 0])\n
        return tf.nn.embedding_lookup(w, encoding_indices, validate_indices=False)\n
\n
    with tf.control_dependencies([inputs]):\n
        w = _w.read_value()\n
    input_shape = tf.shape(inputs)\n
    with tf.control_dependencies([\n
        tf.Assert(tf.equal(input_shape[-1], _embedding_dim),\n
                  [input_shape])]):\n
        flat_inputs = tf.reshape(inputs, [-1, _embedding_dim])\n
\n
    distances = (tf.reduce_sum(flat_inputs ** 2, 1, keepdims=True)\n
                 - 2 * tf.matmul(flat_inputs, w)\n
                 + tf.reduce_sum(w ** 2, 0, keepdims=True))\n
\n
    encoding_indices = tf.argmax(- distances, 1)\n
    encodings = tf.one_hot(encoding_indices, _num_embeddings)\n
    encoding_indices = tf.reshape(encoding_indices, tf.shape(inputs)[:-1])\n
    quantized = quantize(encoding_indices)\n
    e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2, axis=[1, 2, 3])\n
\n
    if is_training:\n
        updated_ema_cluster_size = moving_averages.assign_moving_average(\n
            _ema_cluster_size, tf.reduce_sum(encodings, 0), _decay)\n
        dw = tf.matmul(flat_inputs, encodings, transpose_a=True)\n
        updated_ema_w = moving_averages.assign_moving_average(_ema_w, dw,\n
                                                              _decay)\n
        n = tf.reduce_sum(updated_ema_cluster_size)\n
        updated_ema_cluster_size = (\n
                (updated_ema_cluster_size + _epsilon)\n
                / (n + _num_embeddings * _epsilon) * n)\n
        normalised_updated_ema_w = (\n
                updated_ema_w / tf.reshape(updated_ema_cluster_size, [1, -1]))\n
        with tf.control_dependencies([e_latent_loss]):\n
            update_w = tf.assign(_w, normalised_updated_ema_w)\n
            with tf.control_dependencies([update_w]):\n
                loss = _commitment_cost * e_latent_loss\n
    else:\n
        loss = _commitment_cost * e_latent_loss\n
    quantized = inputs + tf.stop_gradient(quantized - inputs)\n
    avg_probs = tf.reduce_mean(encodings, 0)\n
    perplexity = tf.exp(- tf.reduce_sum(avg_probs * tf.log(avg_probs + 1e-10)))\n
\n
    return loss, perplexity, tf.transpose(quantized, perm=(0, 3, 1, 2))\n
\n
\n
#----------------------------------------------------------------------------\n
# Original StyleGAN discriminator.\n
# Used in configs B-D (Table 1).\n
\n
def D_stylegan(\n
    images_in,                          # First input: Images [minibatch, channel, height, width].\n
    labels_in,                          # Second input: Labels [minibatch, label_size].\n
    num_channels        = 3,            # Number of input color channels. Overridden based on dataset.\n
    resolution          = 1024,         # Input resolution. 
Overridden based on dataset.\n    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. Overridden based on dataset.\n    fmap_base           = 16 << 10,     # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_min            = 1,            # Minimum number of feature maps in any layer.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n    mbstd_group_size    = 4,            # Group size for the minibatch standard deviation layer, 0 = disable.\n    mbstd_num_features  = 1,            # Number of features for the minibatch standard deviation layer.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    resample_kernel     = [1,3,3,1],    # Low-pass filter to apply when resampling activations. None = no filtering.\n    structure           = 'auto',       # 'fixed' = no progressive growing, 'linear' = human-readable, 'recursive' = efficient, 'auto' = select automatically.\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return np.clip(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_min, fmap_max)\n    if structure == 'auto': structure = 'linear' if is_template_graph else 'recursive'\n    act = nonlinearity\n\n    images_in.set_shape([None, num_channels, resolution, resolution])\n    labels_in.set_shape([None, label_size])\n    images_in = tf.cast(images_in, dtype)\n    labels_in = tf.cast(labels_in, dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0.0), trainable=False), dtype)\n\n    # Building blocks for spatial layers.\n    def fromrgb(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('FromRGB_lod%d' % (resolution_log2 - res)):\n            return apply_bias_act(conv2d_layer(x, fmaps=nf(res-1), kernel=1), act=act)\n    def block(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            with tf.variable_scope('Conv0'):\n                x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-1), kernel=3), act=act)\n            with tf.variable_scope('Conv1_down'):\n                x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-2), kernel=3, down=True, resample_kernel=resample_kernel), act=act)\n            return x\n\n    # Fixed structure: simple and efficient, but does not support progressive growing.\n    if structure == 'fixed':\n        x = fromrgb(images_in, resolution_log2)\n        for res in range(resolution_log2, 2, -1):\n            x = block(x, res)\n\n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        img = images_in\n        x = fromrgb(img, resolution_log2)\n        for res in range(resolution_log2, 2, -1):\n            lod = resolution_log2 - res\n            x = block(x, res)\n            with tf.variable_scope('Downsample_lod%d' % lod):\n                img = downsample_2d(img)\n            y = fromrgb(img, res - 1)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                x = tflib.lerp_clip(x, y, lod_in - lod)\n\n    # Recursive structure: 
complex but efficient.\n    if structure == 'recursive':\n        def cset(cur_lambda, new_cond, new_lambda):\n            return lambda: tf.cond(new_cond, new_lambda, cur_lambda)\n        def grow(res, lod):\n            x = lambda: fromrgb(naive_downsample_2d(images_in, factor=2**lod), res)\n            if lod > 0: x = cset(x, (lod_in < lod), lambda: grow(res + 1, lod - 1))\n            x = block(x(), res); y = lambda: x\n            y = cset(y, (lod_in > lod), lambda: tflib.lerp(x, fromrgb(naive_downsample_2d(images_in, factor=2**(lod+1)), res - 1), lod_in - lod))\n            return y()\n        x = grow(3, resolution_log2 - 3)\n\n    # Final layers at 4x4 resolution.\n    with tf.variable_scope('4x4'):\n        if mbstd_group_size > 1:\n            with tf.variable_scope('MinibatchStddev'):\n                x = minibatch_stddev_layer(x, mbstd_group_size, mbstd_num_features)\n        with tf.variable_scope('Conv'):\n            x = apply_bias_act(conv2d_layer(x, fmaps=nf(1), kernel=3), act=act)\n        with tf.variable_scope('Dense0'):\n            x = apply_bias_act(dense_layer(x, fmaps=nf(0)), act=act)\n\n    # Output layer with label conditioning from \"Which Training Methods for GANs do actually Converge?\"\n    with tf.variable_scope('Output'):\n        x = apply_bias_act(dense_layer(x, fmaps=max(labels_in.shape[1], 1)))\n        if labels_in.shape[1] > 0:\n            x = tf.reduce_sum(x * labels_in, axis=1, keepdims=True)\n    scores_out = x\n\n    # Output.\n    assert scores_out.dtype == tf.as_dtype(dtype)\n    scores_out = tf.identity(scores_out, name='scores_out')\n    return scores_out\n\n#----------------------------------------------------------------------------\n# StyleGAN2 discriminator (Figure 7).\n# Implements skip connections and residual nets (Figure 7), but no progressive growing.\n# Used in configs E-F (Table 1).\n\ndef D_stylegan2(\n    images_in,                          # First input: Images [minibatch, channel, height, width].\n    labels_in,                          # Second input: Labels [minibatch, label_size].\n    num_channels        = 3,            # Number of input color channels. Overridden based on dataset.\n    resolution          = 1024,         # Input resolution. Overridden based on dataset.\n    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. Overridden based on dataset.\n    fmap_base           = 16 << 10,     # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_min            = 1,            # Minimum number of feature maps in any layer.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    architecture        = 'resnet',     # Architecture: 'orig', 'skip', 'resnet'.\n    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n    mbstd_group_size    = 4,            # Group size for the minibatch standard deviation layer, 0 = disable.\n    mbstd_num_features  = 1,            # Number of features for the minibatch standard deviation layer.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    resample_kernel     = [1,3,3,1],    # Low-pass filter to apply when resampling activations. 
None = no filtering.\n    **_kwargs):                         # Ignore unrecognized keyword args.\n\n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return np.clip(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_min, fmap_max)\n    assert architecture in ['orig', 'skip', 'resnet']\n    act = nonlinearity\n\n    images_in.set_shape([None, num_channels, resolution, resolution])\n    labels_in.set_shape([None, label_size])\n    images_in = tf.cast(images_in, dtype)\n    labels_in = tf.cast(labels_in, dtype)\n\n    # Building blocks for main layers.\n    def fromrgb(x, y, res): # res = 2..resolution_log2\n        with tf.variable_scope('FromRGB'):\n            t = apply_bias_act(conv2d_layer(y, fmaps=nf(res-1), kernel=1), act=act)\n            return t if x is None else x + t\n    def block(x, res): # res = 2..resolution_log2\n        t = x\n        with tf.variable_scope('Conv0'):\n            x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-1), kernel=3), act=act)\n        with tf.variable_scope('Conv1_down'):\n            x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-2), kernel=3, down=True, resample_kernel=resample_kernel), act=act)\n        if architecture == 'resnet':\n            with tf.variable_scope('Skip'):\n                t = conv2d_layer(t, fmaps=nf(res-2), kernel=1, down=True, resample_kernel=resample_kernel)\n                x = (x + t) * (1 / np.sqrt(2))\n        return x\n    def downsample(y):\n        with tf.variable_scope('Downsample'):\n            return downsample_2d(y, k=resample_kernel)\n\n    # Main layers.\n    x = None\n    y = images_in\n    for res in range(resolution_log2, 2, -1):\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            if architecture == 'skip' or res == resolution_log2:\n                x = fromrgb(x, y, res)\n            x = block(x, res)\n\n            if architecture == 'skip':\n                y = downsample(y)\n\n    # Final layers.\n    with tf.variable_scope('4x4'):\n        if architecture == 'skip':\n            x = fromrgb(x, y, 2)\n        if mbstd_group_size > 1:\n            with tf.variable_scope('MinibatchStddev'):\n                x = minibatch_stddev_layer(x, mbstd_group_size, mbstd_num_features)\n        with tf.variable_scope('Conv'):\n            x = apply_bias_act(conv2d_layer(x, fmaps=nf(1), kernel=3), act=act)\n        with tf.variable_scope('Dense0'):\n            x = apply_bias_act(dense_layer(x, fmaps=nf(0)), act=act)\n\n    # Output layer with label conditioning from \"Which Training Methods for GANs do actually Converge?\"\n    with tf.variable_scope('Output'):\n        x = apply_bias_act(dense_layer(x, fmaps=max(labels_in.shape[1], 1)))\n        if labels_in.shape[1] > 0:\n            x = tf.reduce_sum(x * labels_in, axis=1, keepdims=True)\n    scores_out = x\n\n    # Output.\n    assert scores_out.dtype == tf.as_dtype(dtype)\n    scores_out = tf.identity(scores_out, name='scores_out')\n    return scores_out\n\n#----------------------------------------------------------------------------\n\n\ndef D_stylegan2_quant(\n    images_in,                          # First input: Images [minibatch, channel, height, width].\n    labels_in,                          # Second input: Labels [minibatch, label_size].\n    num_channels        = 3,            # Number of input color channels. Overridden based on dataset.\n    resolution          = 1024,         # Input resolution. 
Overridden based on dataset.\n
    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. Overridden based on dataset.\n
    fmap_base           = 16 << 10,     # Overall multiplier for the number of feature maps.\n
    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n
    fmap_min            = 1,            # Minimum number of feature maps in any layer.\n
    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n
    architecture        = 'resnet',     # Architecture: 'orig', 'skip', 'resnet'.\n
    nonlinearity        = 'lrelu',      # Activation function: 'relu', 'lrelu', etc.\n
    mbstd_group_size    = 4,            # Group size for the minibatch standard deviation layer, 0 = disable.\n
    mbstd_num_features  = 1,            # Number of features for the minibatch standard deviation layer.\n
    dtype               = 'float32',    # Data type to use for activations and outputs.\n
    resample_kernel     = [1,3,3,1],    # Low-pass filter to apply when resampling activations. None = no filtering.\n
    commitment_cost     = 1.0,          # Weight of the commitment term of the quantization loss.\n
    decay               = 0.8,          # EMA decay for the codebook updates.\n
    discrete_layer      = '2',          # log2 resolutions whose D features are quantized, one digit each.\n
    components          = dnnlib.EasyDict(),        # Container for sub-networks. Retained between calls.\n
    **_kwargs):                         # Ignore unrecognized keyword args.\n
\n
    resolution_log2 = int(np.log2(resolution))\n
    assert resolution == 2**resolution_log2 and resolution >= 4\n
    # Only resolutions visited by the main loop below (res = resolution_log2..3)\n
    # can actually be quantized.\n
    q_layer = [int(x) for x in discrete_layer]\n
    res_dictsz_mapping = {10: 2**6, 9: 2**6, 8: 2**6, 7: 2**6, 6: 2**7, 5: 2**7, 4: 2**7, 3: 2**7}\n
    res_ch_mapping = {10: 2**5, 9: 2**6, 8: 2**7, 7: 2**8, 6: 2**9, 5: 2**9, 4: 2**9, 3: 2**9}\n
    def nf(stage): return np.clip(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_min, fmap_max)\n
    assert architecture in ['orig', 'skip', 'resnet']\n
    act = nonlinearity\n
\n
    images_in.set_shape([None, num_channels, resolution, resolution])\n
    labels_in.set_shape([None, label_size])\n
    images_in = tf.cast(images_in, dtype)\n
    labels_in = tf.cast(labels_in, dtype)\n
\n
    for res in q_layer:\n
        if 'discrete_mapping_%s' % str(res) not in components:\n
            components['discrete_mapping_%s' % str(res)] = tflib.Network(\n
                'Discrete_mapping_%s' % str(res), num_embeddings=res_dictsz_mapping[res],\n
                decay=decay, embedding_dim=res_ch_mapping[res], commitment_cost=commitment_cost,\n
                func_name=VectorQuantizerEMA, **_kwargs)\n
\n
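    # One codebook per quantized resolution is created lazily above:\n
    # res_dictsz_mapping picks the codebook size and res_ch_mapping the\n
    # embedding width for each log2 resolution. The embedding width must equal\n
    # the channel count of the feature map handed to the quantizer, since\n
    # VectorQuantizerEMA asserts that its last input dimension matches\n
    # embedding_dim.\n
\n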
    # Building blocks for main layers.\n
    def fromrgb(x, y, res): # res = 2..resolution_log2\n
        with tf.variable_scope('FromRGB'):\n
            t = apply_bias_act(conv2d_layer(y, fmaps=nf(res-1), kernel=1), act=act)\n
            return t if x is None else x + t\n
    def block(x, res): # res = 2..resolution_log2\n
        t = x\n
        with tf.variable_scope('Conv0'):\n
            x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-1), kernel=3), act=act)\n
        with tf.variable_scope('Conv1_down'):\n
            x = apply_bias_act(conv2d_layer(x, fmaps=nf(res-2), kernel=3, down=True, resample_kernel=resample_kernel), act=act)\n
        if architecture == 'resnet':\n
            with tf.variable_scope('Skip'):\n
                t = conv2d_layer(t, fmaps=nf(res-2), kernel=1, down=True, resample_kernel=resample_kernel)\n
                x = (x + t) * (1 / np.sqrt(2))\n
        return x\n
    def downsample(y):\n
        with tf.variable_scope('Downsample'):\n
            return downsample_2d(y, k=resample_kernel)\n
\n
    # Main layers.\n
    x = None\n
    y = images_in\n
    quant_loss = 0\n
    for res in range(resolution_log2, 2, -1):\n
        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n
            if architecture == 'skip' or res == resolution_log2:\n
                x = fromrgb(x, y, res)\n
            x = block(x, res)\n
        if res in q_layer:\n
            # Quantize the features at this resolution (the VQ layer expects\n
            # NHWC and returns the straight-through result in NCHW, unused here).\n
            diff, ppl, quantized = components['discrete_mapping_%s' % str(res)].get_output_for(\n
                tf.transpose(x, perm=(0, 2, 3, 1)), is_training=True)\n
            quant_loss += diff\n
        if architecture == 'skip':\n
            y = downsample(y)\n
\n
    # Final layers.\n
    with tf.variable_scope('4x4'):\n
        if architecture == 'skip':\n
            x = fromrgb(x, y, 2)\n
        if mbstd_group_size > 1:\n
            with tf.variable_scope('MinibatchStddev'):\n
                x = minibatch_stddev_layer(x, mbstd_group_size, mbstd_num_features)\n
        with tf.variable_scope('Conv'):\n
            x = apply_bias_act(conv2d_layer(x, fmaps=nf(1), kernel=3), act=act)\n
        with tf.variable_scope('Dense0'):\n
            x = apply_bias_act(dense_layer(x, fmaps=nf(0)), act=act)\n
\n
    # Output layer with label conditioning from \"Which Training Methods for GANs do actually Converge?\"\n
    with tf.variable_scope('Output'):\n
        x = apply_bias_act(dense_layer(x, fmaps=max(labels_in.shape[1], 1)))\n
        if labels_in.shape[1] > 0:\n
            x = tf.reduce_sum(x * labels_in, axis=1, keepdims=True)\n
    scores_out = x\n
\n
    # Output.\n
    assert scores_out.dtype == tf.as_dtype(dtype)\n
    scores_out = tf.identity(scores_out, name='scores_out')\n
    return scores_out, quant_loss, ppl\n
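\n
# Usage sketch (illustrative only; the surrounding training code is what\n
# actually consumes these outputs): a discriminator loss would combine the\n
# three return values roughly as\n
#\n
#     scores, quant_loss, ppl = D.get_output_for(reals, labels, is_training=True)\n
#     loss = tf.nn.softplus(-scores) + tf.reduce_mean(quant_loss)\n
#\n
# where the softplus term is the standard logistic GAN loss. 'ppl' (codebook\n
# perplexity) is a diagnostic; the training loop logs it via\n
# autosummary('Perplexity', ppl).\n
\n
#----------------------------------------------------------------------------\n
"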
  },
  {
    "path": "FQ-StyleGAN/training/training_loop.py",
    "content": "# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n#\n# This work is made available under the Nvidia Source Code License-NC.\n# To view a copy of this license, visit\n# https://nvlabs.github.io/stylegan2/license.html\n\n\"\"\"Main training script.\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\nimport dnnlib\nimport dnnlib.tflib as tflib\nfrom dnnlib.tflib.autosummary import autosummary\n\nfrom training import dataset\nfrom training import misc\nfrom metrics import metric_base\n\n#----------------------------------------------------------------------------\n# Just-in-time processing of training images before feeding them to the networks.\n\ndef process_reals(x, labels, lod, mirror_augment, drange_data, drange_net):\n    with tf.name_scope('DynamicRange'):\n        x = tf.cast(x, tf.float32)\n        x = misc.adjust_dynamic_range(x, drange_data, drange_net)\n    if mirror_augment:\n        with tf.name_scope('MirrorAugment'):\n            x = tf.where(tf.random_uniform([tf.shape(x)[0]]) < 0.5, x, tf.reverse(x, [3]))\n    with tf.name_scope('FadeLOD'): # Smooth crossfade between consecutive levels-of-detail.\n        s = tf.shape(x)\n        y = tf.reshape(x, [-1, s[1], s[2]//2, 2, s[3]//2, 2])\n        y = tf.reduce_mean(y, axis=[3, 5], keepdims=True)\n        y = tf.tile(y, [1, 1, 1, 2, 1, 2])\n        y = tf.reshape(y, [-1, s[1], s[2], s[3]])\n        x = tflib.lerp(x, y, lod - tf.floor(lod))\n    with tf.name_scope('UpscaleLOD'): # Upscale to match the expected input/output size of the networks.\n        s = tf.shape(x)\n        factor = tf.cast(2 ** tf.floor(lod), tf.int32)\n        x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1])\n        x = tf.tile(x, [1, 1, 1, factor, 1, factor])\n        x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor])\n    return x, labels\n\n#----------------------------------------------------------------------------\n# Evaluate time-varying training parameters.\n\ndef training_schedule(\n    cur_nimg,\n    training_set,\n    lod_initial_resolution  = None,     # Image resolution used at the beginning.\n    lod_training_kimg       = 600,      # Thousands of real images to show before doubling the resolution.\n    lod_transition_kimg     = 600,      # Thousands of real images to show when fading in new layers.\n    minibatch_size_base     = 32,       # Global minibatch size.\n    minibatch_size_dict     = {},       # Resolution-specific overrides.\n    minibatch_gpu_base      = 4,        # Number of samples processed at a time by one GPU.\n    minibatch_gpu_dict      = {},       # Resolution-specific overrides.\n    G_lrate_base            = 0.002,    # Learning rate for the generator.\n    G_lrate_dict            = {},       # Resolution-specific overrides.\n    D_lrate_base            = 0.002,    # Learning rate for the discriminator.\n    D_lrate_dict            = {},       # Resolution-specific overrides.\n    lrate_rampup_kimg       = 0,        # Duration of learning rate ramp-up.\n    tick_kimg_base          = 4,        # Default interval of progress snapshots.\n    tick_kimg_dict          = {8:28, 16:24, 32:20, 64:16, 128:12, 256:8, 512:6, 1024:4}): # Resolution-specific overrides.\n\n    # Initialize result dict.\n    s = dnnlib.EasyDict()\n    s.kimg = cur_nimg / 1000.0\n\n    # Training phase.\n    phase_dur = lod_training_kimg + lod_transition_kimg\n    phase_idx = int(np.floor(s.kimg / phase_dur)) if phase_dur > 0 else 0\n    phase_kimg = s.kimg - phase_idx * phase_dur\n\n    # Level-of-detail and 
resolution.\n
    if lod_initial_resolution is None:\n
        s.lod = 0.0\n
    else:\n
        s.lod = training_set.resolution_log2\n
        s.lod -= np.floor(np.log2(lod_initial_resolution))\n
        s.lod -= phase_idx\n
        if lod_transition_kimg > 0:\n
            s.lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg\n
        s.lod = max(s.lod, 0.0)\n
    s.resolution = 2 ** (training_set.resolution_log2 - int(np.floor(s.lod)))\n
\n
    # Minibatch size.\n
    s.minibatch_size = minibatch_size_dict.get(s.resolution, minibatch_size_base)\n
    s.minibatch_gpu = minibatch_gpu_dict.get(s.resolution, minibatch_gpu_base)\n
\n
    # Learning rate.\n
    s.G_lrate = G_lrate_dict.get(s.resolution, G_lrate_base)\n
    s.D_lrate = D_lrate_dict.get(s.resolution, D_lrate_base)\n
    if lrate_rampup_kimg > 0:\n
        rampup = min(s.kimg / lrate_rampup_kimg, 1.0)\n
        s.G_lrate *= rampup\n
        s.D_lrate *= rampup\n
\n
    # Other parameters.\n
    s.tick_kimg = tick_kimg_dict.get(s.resolution, tick_kimg_base)\n
    return s\n
\n
#----------------------------------------------------------------------------\n
# Main training script.\n
\n
def training_loop(\n
    G_args                  = {},       # Options for generator network.\n
    D_args                  = {},       # Options for discriminator network.\n
    G_opt_args              = {},       # Options for generator optimizer.\n
    D_opt_args              = {},       # Options for discriminator optimizer.\n
    G_loss_args             = {},       # Options for generator loss.\n
    D_loss_args             = {},       # Options for discriminator loss.\n
    dataset_args            = {},       # Options for dataset.load_dataset().\n
    sched_args              = {},       # Options for train.TrainingSchedule.\n
    grid_args               = {},       # Options for train.setup_snapshot_image_grid().\n
    metric_arg_list         = [],       # Options for MetricGroup.\n
    tf_config               = {},       # Options for tflib.init_tf().\n
    data_dir                = None,     # Directory to load datasets from.\n
    G_smoothing_kimg        = 10.0,     # Half-life of the running average of generator weights.\n
    minibatch_repeats       = 4,        # Number of minibatches to run before adjusting training parameters.\n
    lazy_regularization     = True,     # Perform regularization as a separate training step?\n
    G_reg_interval          = 4,        # How often to perform regularization for G? Ignored if lazy_regularization=False.\n
    D_reg_interval          = 16,       # How often to perform regularization for D? Ignored if lazy_regularization=False.\n
    reset_opt_for_new_lod   = True,     # Reset optimizer internal state (e.g. Adam moments) when new layers are introduced?\n
    total_kimg              = 25000,    # Total length of the training, measured in thousands of real images.\n
    mirror_augment          = False,    # Enable mirror augment?\n
    drange_net              = [-1,1],   # Dynamic range used when feeding image data to the networks.\n
    image_snapshot_ticks    = 50,       # How often to save image snapshots? None = only save 'reals.png' and 'fakes-init.png'.\n
    network_snapshot_ticks  = 50,       # How often to save network snapshots? 
None = only save 'networks-final.pkl'.\n    save_tf_graph           = False,    # Include full TensorFlow computation graph in the tfevents file?\n    save_weight_histograms  = False,    # Include weight histograms in the tfevents file?\n    resume_pkl              = None,     # Network pickle to resume training from, None = train from scratch.\n    resume_kimg             = 0.0,      # Assumed training progress at the beginning. Affects reporting and training schedule.\n    resume_time             = 0.0,      # Assumed wallclock time at the beginning. Affects reporting.\n    resume_with_new_nets    = False):   # Construct new networks according to G_args and D_args before resuming training?\n\n    # Initialize dnnlib and TensorFlow.\n    tflib.init_tf(tf_config)\n    num_gpus = dnnlib.submit_config.num_gpus\n\n    # Load training set.\n    training_set = dataset.load_dataset(data_dir=dnnlib.convert_path(data_dir), verbose=True, **dataset_args)\n    grid_size, grid_reals, grid_labels = misc.setup_snapshot_image_grid(training_set, **grid_args)\n    misc.save_image_grid(grid_reals, dnnlib.make_run_dir_path('reals.png'), drange=training_set.dynamic_range, grid_size=grid_size)\n\n    # Construct or load networks.\n    with tf.device('/gpu:0'):\n        if resume_pkl is None or resume_with_new_nets:\n            print('Constructing networks...')\n            G = tflib.Network('G', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **G_args)\n            D = tflib.Network('D', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **D_args)\n            Gs = G.clone('Gs')\n        if resume_pkl is not None:\n            print('Loading networks from \"%s\"...' 
% resume_pkl)\n            rG, rD, rGs = misc.load_pkl(resume_pkl)\n            if resume_with_new_nets: G.copy_vars_from(rG); D.copy_vars_from(rD); Gs.copy_vars_from(rGs)\n            else: G = rG; D = rD; Gs = rGs\n\n    # Print layers and generate initial image snapshot.\n    G.print_layers(); D.print_layers()\n    sched = training_schedule(cur_nimg=total_kimg*1000, training_set=training_set, **sched_args)\n    grid_latents = np.random.randn(np.prod(grid_size), *G.input_shape[1:])\n    grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch_gpu)\n    misc.save_image_grid(grid_fakes, dnnlib.make_run_dir_path('fakes_init.png'), drange=drange_net, grid_size=grid_size)\n\n    # Setup training inputs.\n    print('Building TensorFlow graph...')\n    with tf.name_scope('Inputs'), tf.device('/cpu:0'):\n        lod_in               = tf.placeholder(tf.float32, name='lod_in', shape=[])\n        lrate_in             = tf.placeholder(tf.float32, name='lrate_in', shape=[])\n        minibatch_size_in    = tf.placeholder(tf.int32, name='minibatch_size_in', shape=[])\n        minibatch_gpu_in     = tf.placeholder(tf.int32, name='minibatch_gpu_in', shape=[])\n        minibatch_multiplier = minibatch_size_in // (minibatch_gpu_in * num_gpus)\n        Gs_beta              = 0.5 ** tf.div(tf.cast(minibatch_size_in, tf.float32), G_smoothing_kimg * 1000.0) if G_smoothing_kimg > 0.0 else 0.0\n\n    # Setup optimizers.\n    G_opt_args = dict(G_opt_args)\n    D_opt_args = dict(D_opt_args)\n    for args, reg_interval in [(G_opt_args, G_reg_interval), (D_opt_args, D_reg_interval)]:\n        args['minibatch_multiplier'] = minibatch_multiplier\n        args['learning_rate'] = lrate_in\n        if lazy_regularization:\n            mb_ratio = reg_interval / (reg_interval + 1)\n            args['learning_rate'] *= mb_ratio\n            if 'beta1' in args: args['beta1'] **= mb_ratio\n            if 'beta2' in args: args['beta2'] **= mb_ratio\n    G_opt = tflib.Optimizer(name='TrainG', **G_opt_args)\n    D_opt = tflib.Optimizer(name='TrainD', **D_opt_args)\n    G_reg_opt = tflib.Optimizer(name='RegG', share=G_opt, **G_opt_args)\n    D_reg_opt = tflib.Optimizer(name='RegD', share=D_opt, **D_opt_args)\n\n    # Build training graph for each GPU.\n    data_fetch_ops = []\n    for gpu in range(num_gpus):\n        with tf.name_scope('GPU%d' % gpu), tf.device('/gpu:%d' % gpu):\n\n            # Create GPU-specific shadow copies of G and D.\n            G_gpu = G if gpu == 0 else G.clone(G.name + '_shadow')\n            D_gpu = D if gpu == 0 else D.clone(D.name + '_shadow')\n\n            # Fetch training data via temporary variables.\n            with tf.name_scope('DataFetch'):\n                sched = training_schedule(cur_nimg=int(resume_kimg*1000), training_set=training_set, **sched_args)\n                reals_var = tf.Variable(name='reals', trainable=False, initial_value=tf.zeros([sched.minibatch_gpu] + training_set.shape))\n                labels_var = tf.Variable(name='labels', trainable=False, initial_value=tf.zeros([sched.minibatch_gpu, training_set.label_size]))\n                reals_write, labels_write = training_set.get_minibatch_tf()\n                reals_write, labels_write = process_reals(reals_write, labels_write, lod_in, mirror_augment, training_set.dynamic_range, drange_net)\n                reals_write = tf.concat([reals_write, reals_var[minibatch_gpu_in:]], axis=0)\n                labels_write = tf.concat([labels_write, labels_var[minibatch_gpu_in:]], axis=0)\n    
            data_fetch_ops += [tf.assign(reals_var, reals_write)]\n
                data_fetch_ops += [tf.assign(labels_var, labels_write)]\n
                reals_read = reals_var[:minibatch_gpu_in]\n
                labels_read = labels_var[:minibatch_gpu_in]\n
\n
            # Evaluate loss functions.\n
            lod_assign_ops = []\n
            if 'lod' in G_gpu.vars: lod_assign_ops += [tf.assign(G_gpu.vars['lod'], lod_in)]\n
            if 'lod' in D_gpu.vars: lod_assign_ops += [tf.assign(D_gpu.vars['lod'], lod_in)]\n
            with tf.control_dependencies(lod_assign_ops):\n
                with tf.name_scope('G_loss'):\n
                    G_loss, G_reg = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, opt=G_opt, training_set=training_set, minibatch_size=minibatch_gpu_in, **G_loss_args)\n
                with tf.name_scope('D_loss'):\n
                    D_loss, D_reg, perplexity = dnnlib.util.call_func_by_name(\n
                        G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_gpu_in,\n
                        reals=reals_read, labels=labels_read, **D_loss_args)\n
\n
            # Register gradients.\n
            if not lazy_regularization:\n
                if G_reg is not None: G_loss += G_reg\n
                if D_reg is not None: D_loss += D_reg\n
            else:\n
                if G_reg is not None: G_reg_opt.register_gradients(tf.reduce_mean(G_reg * G_reg_interval), G_gpu.trainables)\n
                if D_reg is not None: D_reg_opt.register_gradients(tf.reduce_mean(D_reg * D_reg_interval), D_gpu.trainables)\n
            G_opt.register_gradients(tf.reduce_mean(G_loss), G_gpu.trainables)\n
            D_opt.register_gradients(tf.reduce_mean(D_loss), D_gpu.trainables)\n
\n
    # Setup training ops.\n
    data_fetch_op = tf.group(*data_fetch_ops)\n
    G_train_op = G_opt.apply_updates()\n
    D_train_op = D_opt.apply_updates()\n
    G_reg_op = G_reg_opt.apply_updates(allow_no_op=True)\n
    D_reg_op = D_reg_opt.apply_updates(allow_no_op=True)\n
    Gs_update_op = Gs.setup_as_moving_average_of(G, beta=Gs_beta)\n
\n
    # Finalize graph.\n
    with tf.device('/gpu:0'):\n
        try:\n
            peak_gpu_mem_op = tf.contrib.memory_stats.MaxBytesInUse()\n
        except tf.errors.NotFoundError:\n
            peak_gpu_mem_op = tf.constant(0)\n
    tflib.init_uninitialized_vars()\n
\n
    print('Initializing logs...')\n
    summary_log = tf.summary.FileWriter(dnnlib.make_run_dir_path())\n
    if save_tf_graph:\n
        summary_log.add_graph(tf.get_default_graph())\n
    if save_weight_histograms:\n
        G.setup_weight_histograms(); D.setup_weight_histograms()\n
    metrics = metric_base.MetricGroup(metric_arg_list)\n
\n
    print('Training for %d kimg...\\n' % total_kimg)\n
    dnnlib.RunContext.get().update('', cur_epoch=resume_kimg, max_epoch=total_kimg)\n
    maintenance_time = dnnlib.RunContext.get().get_last_update_interval()\n
    cur_nimg = int(resume_kimg * 1000)\n
    cur_tick = -1\n
    tick_start_nimg = cur_nimg\n
    prev_lod = -1.0\n
    running_mb_counter = 0\n
    while cur_nimg < total_kimg * 1000:\n
        if dnnlib.RunContext.get().should_stop(): break\n
\n
        # Choose training parameters and configure training ops.\n
        sched = training_schedule(cur_nimg=cur_nimg, training_set=training_set, **sched_args)\n
        assert sched.minibatch_size % (sched.minibatch_gpu * num_gpus) == 0\n
        training_set.configure(sched.minibatch_gpu, sched.lod)\n
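        # Progressive growing note: when the schedule moves to a new\n
        # level-of-detail, the optimizers' Adam moments refer to a different\n
        # set of active layers, hence the optional state reset below.\n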
reset_opt_for_new_lod:\n            if np.floor(sched.lod) != np.floor(prev_lod) or np.ceil(sched.lod) != np.ceil(prev_lod):\n                G_opt.reset_optimizer_state(); D_opt.reset_optimizer_state()\n        prev_lod = sched.lod\n\n        ppl = 0.0\n        # Run training ops.\n        feed_dict = {lod_in: sched.lod, lrate_in: sched.G_lrate, minibatch_size_in: sched.minibatch_size, minibatch_gpu_in: sched.minibatch_gpu}\n        for _repeat in range(minibatch_repeats):\n            rounds = range(0, sched.minibatch_size, sched.minibatch_gpu * num_gpus)\n            run_G_reg = (lazy_regularization and running_mb_counter % G_reg_interval == 0)\n            run_D_reg = (lazy_regularization and running_mb_counter % D_reg_interval == 0)\n            cur_nimg += sched.minibatch_size\n            running_mb_counter += 1\n\n            # Fast path without gradient accumulation.\n            if len(rounds) == 1:\n                tflib.run([G_train_op, data_fetch_op], feed_dict)\n                if run_G_reg:\n                    tflib.run(G_reg_op, feed_dict)\n                tflib.run([D_train_op, Gs_update_op], feed_dict)\n                if run_D_reg:\n                    _, ppl = tflib.run([D_reg_op, perplexity], feed_dict)\n\n            # Slow path with gradient accumulation.\n            else:\n                for _round in rounds:\n                    tflib.run(G_train_op, feed_dict)\n                if run_G_reg:\n                    for _round in rounds:\n                        tflib.run(G_reg_op, feed_dict)\n                tflib.run(Gs_update_op, feed_dict)\n                for _round in rounds:\n                    tflib.run(data_fetch_op, feed_dict)\n                    _, ppl = tflib.run([D_train_op, perplexity], feed_dict)\n                if run_D_reg:\n                    for _round in rounds:\n                        _, ppl = tflib.run([D_reg_op, perplexity], feed_dict)\n\n        # Perform maintenance tasks once per tick.\n        done = (cur_nimg >= total_kimg * 1000)\n        if cur_tick < 0 or cur_nimg >= tick_start_nimg + sched.tick_kimg * 1000 or done:\n            cur_tick += 1\n            tick_kimg = (cur_nimg - tick_start_nimg) / 1000.0\n            tick_start_nimg = cur_nimg\n            tick_time = dnnlib.RunContext.get().get_time_since_last_update()\n            total_time = dnnlib.RunContext.get().get_time_since_start() + resume_time\n\n            # Report progress.\n            print('tick %-5d kimg %-8.1f lod %-5.2f minibatch %-4d time %-12s sec/tick %-7.1f sec/kimg %-7.2f maintenance %-6.1f gpumem %.1f' % (\n                autosummary('Progress/tick', cur_tick),\n                autosummary('Progress/kimg', cur_nimg / 1000.0),\n                autosummary('Progress/lod', sched.lod),\n                autosummary('Progress/minibatch', sched.minibatch_size),\n                dnnlib.util.format_time(autosummary('Timing/total_sec', total_time)),\n                autosummary('Timing/sec_per_tick', tick_time),\n                autosummary('Timing/sec_per_kimg', tick_time / tick_kimg),\n                autosummary('Timing/maintenance_sec', maintenance_time),\n                autosummary('Resources/peak_gpu_mem_gb', peak_gpu_mem_op.eval() / 2**30)),\n                autosummary('Perplexity', ppl),\n                  )\n            autosummary('Timing/total_hours', total_time / (60.0 * 60.0))\n            autosummary('Timing/total_days', total_time / (24.0 * 60.0 * 60.0))\n\n            # Save snapshots.\n            if image_snapshot_ticks is not None and 
(cur_tick % image_snapshot_ticks == 0 or done):\n                grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch_gpu)\n                misc.save_image_grid(grid_fakes, dnnlib.make_run_dir_path('fakes%06d.png' % (cur_nimg // 1000)), drange=drange_net, grid_size=grid_size)\n            if network_snapshot_ticks is not None and (cur_tick % network_snapshot_ticks == 0 or done):\n                pkl = dnnlib.make_run_dir_path('network-snapshot-%06d.pkl' % (cur_nimg // 1000))\n                misc.save_pkl((G, D, Gs), pkl)\n                metrics.run(pkl, run_dir=dnnlib.make_run_dir_path(), data_dir=dnnlib.convert_path(data_dir), num_gpus=num_gpus, tf_config=tf_config)\n\n            # Update summaries and RunContext.\n            metrics.update_autosummaries()\n            tflib.autosummary.save_summaries(summary_log, cur_nimg)\n            dnnlib.RunContext.get().update('%.2f' % sched.lod, cur_epoch=cur_nimg // 1000, max_epoch=total_kimg)\n            maintenance_time = dnnlib.RunContext.get().get_last_update_interval() - tick_time\n\n    # Save final snapshot.\n    misc.save_pkl((G, D, Gs), dnnlib.make_run_dir_path('network-final.pkl'))\n\n    # All done.\n    summary_log.close()\n    training_set.close()\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "FQ-U-GAT-IT/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Junho Kim\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "FQ-U-GAT-IT/UGATIT.py",
    "content": "from ops import *\nfrom utils import *\nfrom glob import glob\nimport time\nfrom tensorflow.contrib.data import prefetch_to_device, shuffle_and_repeat, map_and_batch\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.training import moving_averages\nfrom vq_layer import VectorQuantizerEMA\nimport shutil\n\nclass UGATIT(object) :\n    def __init__(self, sess, args):\n        self.light = args.light\n        self.if_quant = args.quant\n        if self.light :\n            self.model_name = 'UGATIT_light'\n        else :\n            self.model_name = 'UGATIT'\n\n        self.sess = sess\n        self.phase = args.phase\n        self.checkpoint_dir = args.checkpoint_dir\n        self.result_dir = args.result_dir\n        self.log_dir = args.log_dir\n        self.dataset_name = args.dataset\n        self.augment_flag = args.augment_flag\n\n        self.epoch = args.epoch\n        self.iteration = args.iteration\n        self.decay_flag = args.decay_flag\n        self.decay_epoch = args.decay_epoch\n\n        self.gan_type = args.gan_type\n\n        self.batch_size = args.batch_size\n        self.print_freq = args.print_freq\n        self.save_freq = args.save_freq\n\n        self.init_lr = args.lr\n        self.ch = args.ch\n\n        \"\"\" Weight \"\"\"\n        self.adv_weight = args.adv_weight\n        self.cycle_weight = args.cycle_weight\n        self.identity_weight = args.identity_weight\n        self.cam_weight = args.cam_weight\n        self.ld = args.GP_ld\n        self.smoothing = args.smoothing\n\n        \"\"\" Generator \"\"\"\n        self.n_res = args.n_res\n\n        \"\"\" Discriminator \"\"\"\n        self.n_dis = args.n_dis\n        self.n_critic = args.n_critic\n        self.sn = args.sn\n\n        self.img_size = args.img_size\n        self.img_ch = args.img_ch\n\n        self.test_train = args.test_train\n\n        if self.if_quant:\n            self.commitment_cost = args.commitment_cost\n        else:\n            self.commitment_cost = 0.0\n        layerwise_channel = [64, 128, 256, 512, 1024, 2028]\n        \n        # \n        num_embed = [5, 6, 7, 7, 7, 7]\n#         num_embed = [5, 6, 7, 8, 9, 10]\n        self.quantization_layer = args.quantization_layer\n        self.quant_layers = [int(x) for x in args.quantization_layer]\n\n        self.decay = args.decay\n\n\n        self.sample_dir = os.path.join(args.sample_dir, self.model_dir)\n        check_folder(self.sample_dir)\n        self.result_dir = os.path.join(self.result_dir, self.model_dir)\n        check_folder(self.result_dir)\n        # self.trainA, self.trainB = prepare_data(dataset_name=self.dataset_name, size=self.img_size\n        self.trainA_dataset = glob('./dataset/{}/*.*'.format(self.dataset_name + '/trainA'))\n        self.trainB_dataset = glob('./dataset/{}/*.*'.format(self.dataset_name + '/trainB'))\n        self.dataset_num = max(len(self.trainA_dataset), len(self.trainB_dataset))\n\n        self.quantize = {}\n        for layer in self.quant_layers:\n            self.quantize[layer] = VectorQuantizerEMA(embedding_dim=layerwise_channel[layer],\n                                               num_embeddings=2**num_embed[layer],\n                                               commitment_cost=self.commitment_cost, decay=self.decay)\n        print()\n\n        print(\"##### Information #####\")\n        print(\"# light : \", self.light)\n        print(\"# gan type : \", self.gan_type)\n        print(\"# dataset : \", self.dataset_name)\n        print(\"# max 
dataset number : \", self.dataset_num)\n        print(\"# batch_size : \", self.batch_size)\n        print(\"# epoch : \", self.epoch)\n        print(\"# iteration per epoch : \", self.iteration)\n        print(\"# smoothing : \", self.smoothing)\n\n        print()\n\n        print(\"##### Generator #####\")\n        print(\"# residual blocks : \", self.n_res)\n\n        print()\n\n        print(\"##### Discriminator #####\")\n        print(\"# discriminator layer : \", self.n_dis)\n        print(\"# the number of critic : \", self.n_critic)\n        print(\"# spectral normalization : \", self.sn)\n\n        print()\n\n        print(\"##### Weight #####\")\n        print(\"# adv_weight : \", self.adv_weight)\n        print(\"# cycle_weight : \", self.cycle_weight)\n        print(\"# identity_weight : \", self.identity_weight)\n        print(\"# cam_weight : \", self.cam_weight)\n\n\n    @property\n    def model_dir(self):\n        n_res = str(self.n_res) + 'resblock'\n        n_dis = str(self.n_dis) + 'dis'\n\n        if self.smoothing :\n            smoothing = '_smoothing'\n        else :\n            smoothing = ''\n\n        if self.sn :\n            sn = '_sn'\n        else :\n            sn = ''\n\n        if not self.if_quant:\n            return \"{}_{}_{}_{}_{}_{}_{}_{}_{}_{}{}{}\".format(self.model_name, self.dataset_name,\n                                                             self.gan_type, n_res, n_dis,\n                                                             self.n_critic,\n                                                             self.adv_weight, self.cycle_weight,\n                                                              self.identity_weight, self.cam_weight,\n                                                              sn, smoothing)\n        else:\n            return \"{}_q_{}_{}_{}_{}_{}_{}_{}_{}_{}{}{}_{}_{}_{}\".format(self.model_name,\n                                                                  self.dataset_name,\n                                                             self.gan_type, n_res, n_dis,\n                                                             self.n_critic,\n                                                             self.adv_weight, self.cycle_weight,\n                                                              self.identity_weight, self.cam_weight,\n                                                              sn, smoothing, self.quantization_layer,\n                                                              self.commitment_cost, self.decay)\n\n    ##################################################################################\n    # Generator\n    ##################################################################################\n\n    def generator(self, x_init, reuse=False, scope=\"generator\"):\n        channel = self.ch\n        with tf.variable_scope(scope, reuse=reuse) :\n            x = conv(x_init, channel, kernel=7, stride=1, pad=3, pad_type='reflect', scope='conv')\n            x = instance_norm(x, scope='ins_norm')\n            x = relu(x)\n\n            # Down-Sampling\n            for i in range(2) :\n                x = conv(x, channel*2, kernel=3, stride=2, pad=1, pad_type='reflect', scope='conv_'+str(i))\n                x = instance_norm(x, scope='ins_norm_'+str(i))\n                x = relu(x)\n\n                channel = channel * 2\n\n            # Down-Sampling Bottleneck\n            for i in range(self.n_res):\n                x = resblock(x, channel, scope='resblock_' + str(i))\n\n      
      # Class Activation Map\n            cam_x = global_avg_pooling(x)\n            cam_gap_logit, cam_x_weight = fully_connected_with_w(cam_x, scope='CAM_logit')\n            x_gap = tf.multiply(x, cam_x_weight)\n\n            cam_x = global_max_pooling(x)\n            cam_gmp_logit, cam_x_weight = fully_connected_with_w(cam_x, reuse=True, scope='CAM_logit')\n            x_gmp = tf.multiply(x, cam_x_weight)\n\n            cam_logit = tf.concat([cam_gap_logit, cam_gmp_logit], axis=-1)\n            x = tf.concat([x_gap, x_gmp], axis=-1)\n\n            x = conv(x, channel, kernel=1, stride=1, scope='conv_1x1')\n            x = relu(x)\n\n            heatmap = tf.squeeze(tf.reduce_sum(x, axis=-1))\n\n            # Gamma, Beta block\n            gamma, beta = self.MLP(x, reuse=reuse)\n\n            # Up-Sampling Bottleneck\n            for i in range(self.n_res):\n                x = adaptive_ins_layer_resblock(x, channel, gamma, beta, smoothing=self.smoothing, scope='adaptive_resblock' + str(i))\n\n            # Up-Sampling\n            for i in range(2) :\n                x = up_sample(x, scale_factor=2)\n                x = conv(x, channel//2, kernel=3, stride=1, pad=1, pad_type='reflect', scope='up_conv_'+str(i))\n                x = layer_instance_norm(x, scope='layer_ins_norm_'+str(i))\n                x = relu(x)\n\n                channel = channel // 2\n\n\n            x = conv(x, channels=3, kernel=7, stride=1, pad=3, pad_type='reflect', scope='G_logit')\n            x = tanh(x)\n\n            return x, cam_logit, heatmap\n\n    def MLP(self, x, use_bias=True, reuse=False, scope='MLP'):\n        channel = self.ch * self.n_res\n\n        if self.light :\n            x = global_avg_pooling(x)\n\n        with tf.variable_scope(scope, reuse=reuse):\n            for i in range(2) :\n                x = fully_connected(x, channel, use_bias, scope='linear_' + str(i))\n                x = relu(x)\n\n            gamma = fully_connected(x, channel, use_bias, scope='gamma')\n            beta = fully_connected(x, channel, use_bias, scope='beta')\n\n            gamma = tf.reshape(gamma, shape=[self.batch_size, 1, 1, channel])\n            beta = tf.reshape(beta, shape=[self.batch_size, 1, 1, channel])\n\n            return gamma, beta\n\n    ##################################################################################\n    # Discriminator\n    ##################################################################################\n\n    def discriminator(self, x_init, reuse=False, scope=\"discriminator\"):\n        D_logit = []\n        D_CAM_logit = []\n        with tf.variable_scope(scope, reuse=reuse) :\n            local_x, local_cam, local_heatmap = self.discriminator_local(x_init, reuse=reuse, scope='local')\n            global_x, global_cam, global_heatmap, quant_loss, ppl = self.discriminator_global(\n                x_init, reuse=reuse, scope='global')\n\n            D_logit.extend([local_x, global_x])\n            D_CAM_logit.extend([local_cam, global_cam])\n\n            return D_logit, D_CAM_logit, local_heatmap, global_heatmap, quant_loss, ppl\n\n    def discriminator_global(self, x_init, reuse=False, scope='discriminator_global'):\n        with tf.variable_scope(scope, reuse=reuse):\n            quant_loss = 0\n            channel = self.ch\n            x = conv(x_init, channel, kernel=4, stride=2, pad=1, pad_type='reflect', sn=self.sn, scope='conv_0')\n            x = lrelu(x, 0.2)\n\n            for i in range(1, self.n_dis - 1):\n                x = conv(x, channel * 2, 
kernel=4, stride=2, pad=1, pad_type='reflect', sn=self.sn, scope='conv_' + str(i))\n                x = lrelu(x, 0.2)\n                if i in self.quant_layers:\n                    diff, ppl = self.quantize[i](x, reuse, layer=i)\n                    quant_loss += diff\n                channel = channel * 2\n\n            x = conv(x, channel * 2, kernel=4, stride=1, pad=1, pad_type='reflect', sn=self.sn, scope='conv_last')\n            x = lrelu(x, 0.2)\n\n            channel = channel * 2\n\n            cam_x = global_avg_pooling(x)\n            cam_gap_logit, cam_x_weight = fully_connected_with_w(cam_x, sn=self.sn, scope='CAM_logit')\n            x_gap = tf.multiply(x, cam_x_weight)\n\n            cam_x = global_max_pooling(x)\n            cam_gmp_logit, cam_x_weight = fully_connected_with_w(cam_x, sn=self.sn, reuse=True, scope='CAM_logit')\n            x_gmp = tf.multiply(x, cam_x_weight)\n\n            cam_logit = tf.concat([cam_gap_logit, cam_gmp_logit], axis=-1)\n            x = tf.concat([x_gap, x_gmp], axis=-1)\n\n            x = conv(x, channel, kernel=1, stride=1, scope='conv_1x1')\n            x = lrelu(x, 0.2)\n\n            heatmap = tf.squeeze(tf.reduce_sum(x, axis=-1))\n\n\n            x = conv(x, channels=1, kernel=4, stride=1, pad=1, pad_type='reflect', sn=self.sn, scope='D_logit')\n\n            return x, cam_logit, heatmap, quant_loss, ppl\n\n    def discriminator_local(self, x_init, reuse=False, scope='discriminator_local'):\n        with tf.variable_scope(scope, reuse=reuse) :\n            channel = self.ch\n            x = conv(x_init, channel, kernel=4, stride=2, pad=1, pad_type='reflect', sn=self.sn, scope='conv_0')\n            x = lrelu(x, 0.2)\n\n            for i in range(1, self.n_dis - 2 - 1):\n                x = conv(x, channel * 2, kernel=4, stride=2, pad=1, pad_type='reflect', sn=self.sn, scope='conv_' + str(i))\n                x = lrelu(x, 0.2)\n\n                channel = channel * 2\n\n            x = conv(x, channel * 2, kernel=4, stride=1, pad=1, pad_type='reflect', sn=self.sn, scope='conv_last')\n            x = lrelu(x, 0.2)\n\n            channel = channel * 2\n\n            cam_x = global_avg_pooling(x)\n            cam_gap_logit, cam_x_weight = fully_connected_with_w(cam_x, sn=self.sn, scope='CAM_logit')\n            x_gap = tf.multiply(x, cam_x_weight)\n\n            cam_x = global_max_pooling(x)\n            cam_gmp_logit, cam_x_weight = fully_connected_with_w(cam_x, sn=self.sn, reuse=True, scope='CAM_logit')\n            x_gmp = tf.multiply(x, cam_x_weight)\n\n            cam_logit = tf.concat([cam_gap_logit, cam_gmp_logit], axis=-1)\n            x = tf.concat([x_gap, x_gmp], axis=-1)\n\n            x = conv(x, channel, kernel=1, stride=1, scope='conv_1x1')\n            x = lrelu(x, 0.2)\n\n            heatmap = tf.squeeze(tf.reduce_sum(x, axis=-1))\n\n            x = conv(x, channels=1, kernel=4, stride=1, pad=1, pad_type='reflect', sn=self.sn, scope='D_logit')\n\n            return x, cam_logit, heatmap\n\n    ##################################################################################\n    # Model\n    ##################################################################################\n\n    def generate_a2b(self, x_A, reuse=False):\n        out, cam, _ = self.generator(x_A, reuse=reuse, scope=\"generator_B\")\n\n        return out, cam\n\n    def generate_b2a(self, x_B, reuse=False):\n        out, cam, _ = self.generator(x_B, reuse=reuse, scope=\"generator_A\")\n\n        return out, cam\n\n    def discriminate_real(self, x_A, 
x_B):\n        real_A_logit, real_A_cam_logit, _, _, quant_loss_A, ppl_A = self.discriminator(x_A,\n                                                                        scope=\"discriminator_A\")\n        real_B_logit, real_B_cam_logit, _, _, quant_loss_B, ppl_B = self.discriminator(x_B,\n                                                                                       scope=\"discriminator_B\")\n\n        return real_A_logit, real_A_cam_logit, real_B_logit, real_B_cam_logit, \\\n               quant_loss_A+quant_loss_B, ppl_A+ppl_B\n\n    def discriminate_fake(self, x_ba, x_ab):\n        fake_A_logit, fake_A_cam_logit, _, _, quant_loss_A, ppl_A = self.discriminator(x_ba, reuse=True,\n                                                                   scope=\"discriminator_A\")\n        fake_B_logit, fake_B_cam_logit, _, _, quant_loss_B, ppl_B = self.discriminator(x_ab,\n                                                                                       reuse=True,\n                                                                   scope=\"discriminator_B\")\n\n        return fake_A_logit, fake_A_cam_logit, fake_B_logit, fake_B_cam_logit, \\\n               quant_loss_A+quant_loss_B, (ppl_A+ppl_B)/2\n\n    def gradient_panalty(self, real, fake, scope=\"discriminator_A\"):\n        if self.gan_type.__contains__('dragan'):\n            eps = tf.random_uniform(shape=tf.shape(real), minval=0., maxval=1.)\n            _, x_var = tf.nn.moments(real, axes=[0, 1, 2, 3])\n            x_std = tf.sqrt(x_var)  # magnitude of noise decides the size of local region\n\n            fake = real + 0.5 * x_std * eps\n\n        alpha = tf.random_uniform(shape=[self.batch_size, 1, 1, 1], minval=0., maxval=1.)\n        interpolated = real + alpha * (fake - real)\n\n        logit, cam_logit, _, _, _, _ = self.discriminator(interpolated, reuse=True,\n                                                               scope=scope)\n\n\n        GP = []\n        cam_GP = []\n\n        for i in range(2) :\n            grad = tf.gradients(logit[i], interpolated)[0] # gradient of D(interpolated)\n            grad_norm = tf.norm(flatten(grad), axis=1) # l2 norm\n\n            # WGAN - LP\n            if self.gan_type == 'wgan-lp' :\n                GP.append(self.ld * tf.reduce_mean(tf.square(tf.maximum(0.0, grad_norm - 1.))))\n\n            elif self.gan_type == 'wgan-gp' or self.gan_type == 'dragan':\n                GP.append(self.ld * tf.reduce_mean(tf.square(grad_norm - 1.)))\n\n        for i in range(2) :\n            grad = tf.gradients(cam_logit[i], interpolated)[0] # gradient of D(interpolated)\n            grad_norm = tf.norm(flatten(grad), axis=1) # l2 norm\n\n            # WGAN - LP\n            if self.gan_type == 'wgan-lp' :\n                cam_GP.append(self.ld * tf.reduce_mean(tf.square(tf.maximum(0.0, grad_norm - 1.))))\n\n            elif self.gan_type == 'wgan-gp' or self.gan_type == 'dragan':\n                cam_GP.append(self.ld * tf.reduce_mean(tf.square(grad_norm - 1.)))\n\n\n        return sum(GP), sum(cam_GP)\n\n    def build_model(self):\n        if self.phase == 'train' :\n            self.lr = tf.placeholder(tf.float32, name='learning_rate')\n\n\n            \"\"\" Input Image\"\"\"\n            Image_Data_Class = ImageData(self.img_size, self.img_ch, self.augment_flag)\n\n            trainA = tf.data.Dataset.from_tensor_slices(self.trainA_dataset)\n            trainB = tf.data.Dataset.from_tensor_slices(self.trainB_dataset)\n\n\n            gpu_device = '/gpu:0'\n            trainA = 
trainA.apply(shuffle_and_repeat(self.dataset_num)).apply(map_and_batch(Image_Data_Class.image_processing, self.batch_size, num_parallel_batches=16, drop_remainder=True)).apply(prefetch_to_device(gpu_device, None))\n            trainB = trainB.apply(shuffle_and_repeat(self.dataset_num)).apply(map_and_batch(Image_Data_Class.image_processing, self.batch_size, num_parallel_batches=16, drop_remainder=True)).apply(prefetch_to_device(gpu_device, None))\n\n\n            trainA_iterator = trainA.make_one_shot_iterator()\n            trainB_iterator = trainB.make_one_shot_iterator()\n\n            self.domain_A = trainA_iterator.get_next()\n            self.domain_B = trainB_iterator.get_next()\n\n            \"\"\" Define Generator, Discriminator \"\"\"\n            x_ab, cam_ab = self.generate_a2b(self.domain_A) # real a\n            x_ba, cam_ba = self.generate_b2a(self.domain_B) # real b\n\n            x_aba, _ = self.generate_b2a(x_ab, reuse=True) # real b\n            x_bab, _ = self.generate_a2b(x_ba, reuse=True) # real a\n\n            x_aa, cam_aa = self.generate_b2a(self.domain_A, reuse=True) # fake b\n            x_bb, cam_bb = self.generate_a2b(self.domain_B, reuse=True) # fake a\n\n            real_A_logit, real_A_cam_logit, real_B_logit, real_B_cam_logit, real_quant_loss,\\\n            real_ppl = self.discriminate_real(self.domain_A, self.domain_B)\n            fake_A_logit, fake_A_cam_logit, fake_B_logit, fake_B_cam_logit, fake_quant_loss,  \\\n            fake_ppl = self.discriminate_fake(x_ba, x_ab)\n            self.ppl = real_ppl + fake_ppl\n\n            \"\"\" Define Loss \"\"\"\n            if self.gan_type.__contains__('wgan') or self.gan_type == 'dragan' :\n                GP_A, GP_CAM_A = self.gradient_panalty(real=self.domain_A, fake=x_ba, scope=\"discriminator_A\")\n                GP_B, GP_CAM_B = self.gradient_panalty(real=self.domain_B, fake=x_ab, scope=\"discriminator_B\")\n            else :\n                GP_A, GP_CAM_A  = 0, 0\n                GP_B, GP_CAM_B = 0, 0\n\n            G_ad_loss_A = (generator_loss(self.gan_type, fake_A_logit) + generator_loss(self.gan_type, fake_A_cam_logit))\n            G_ad_loss_B = (generator_loss(self.gan_type, fake_B_logit) + generator_loss(self.gan_type, fake_B_cam_logit))\n\n            D_ad_loss_A = (discriminator_loss(self.gan_type, real_A_logit, fake_A_logit) + discriminator_loss(self.gan_type, real_A_cam_logit, fake_A_cam_logit) + GP_A + GP_CAM_A)\n            D_ad_loss_B = (discriminator_loss(self.gan_type, real_B_logit, fake_B_logit) + discriminator_loss(self.gan_type, real_B_cam_logit, fake_B_cam_logit) + GP_B + GP_CAM_B)\n\n            reconstruction_A = L1_loss(x_aba, self.domain_A) # reconstruction\n            reconstruction_B = L1_loss(x_bab, self.domain_B) # reconstruction\n\n            identity_A = L1_loss(x_aa, self.domain_A)\n            identity_B = L1_loss(x_bb, self.domain_B)\n\n            cam_A = cam_loss(source=cam_ba, non_source=cam_aa)\n            cam_B = cam_loss(source=cam_ab, non_source=cam_bb)\n\n            Generator_A_gan = self.adv_weight * G_ad_loss_A\n            Generator_A_cycle = self.cycle_weight * reconstruction_B\n            Generator_A_identity = self.identity_weight * identity_A\n            Generator_A_cam = self.cam_weight * cam_A\n\n            Generator_B_gan = self.adv_weight * G_ad_loss_B\n            Generator_B_cycle = self.cycle_weight * reconstruction_A\n            Generator_B_identity = self.identity_weight * identity_B\n            Generator_B_cam = self.cam_weight * 
cam_B\n\n\n            Generator_A_loss = Generator_A_gan + Generator_A_cycle + Generator_A_identity + Generator_A_cam\n            Generator_B_loss = Generator_B_gan + Generator_B_cycle + Generator_B_identity + Generator_B_cam\n\n\n            Discriminator_A_loss = self.adv_weight * D_ad_loss_A\n            Discriminator_B_loss = self.adv_weight * D_ad_loss_B\n\n            self.Generator_loss = Generator_A_loss + Generator_B_loss + regularization_loss(\n                'generator') + fake_quant_loss\n            self.Discriminator_loss = Discriminator_A_loss + Discriminator_B_loss + \\\n                                      regularization_loss('discriminator') + real_quant_loss + fake_quant_loss\n\n\n            \"\"\" Result Image \"\"\"\n            self.fake_A = x_ba\n            self.fake_B = x_ab\n\n            self.real_A = self.domain_A\n            self.real_B = self.domain_B\n\n\n            \"\"\" Training \"\"\"\n            t_vars = tf.trainable_variables()\n            G_vars = [var for var in t_vars if 'generator' in var.name]\n            D_vars = [var for var in t_vars if 'discriminator' in var.name]\n\n            self.G_optim = tf.train.AdamOptimizer(self.lr, beta1=0.5, beta2=0.999).minimize(self.Generator_loss, var_list=G_vars)\n            self.D_optim = tf.train.AdamOptimizer(self.lr, beta1=0.5, beta2=0.999).minimize(self.Discriminator_loss, var_list=D_vars)\n\n\n            \"\"\"\" Summary \"\"\"\n            self.all_G_loss = tf.summary.scalar(\"Generator_loss\", self.Generator_loss)\n            self.all_D_loss = tf.summary.scalar(\"Discriminator_loss\", self.Discriminator_loss)\n\n            self.G_A_loss = tf.summary.scalar(\"G_A_loss\", Generator_A_loss)\n            self.G_A_gan = tf.summary.scalar(\"G_A_gan\", Generator_A_gan)\n            self.G_A_cycle = tf.summary.scalar(\"G_A_cycle\", Generator_A_cycle)\n            self.G_A_identity = tf.summary.scalar(\"G_A_identity\", Generator_A_identity)\n            self.G_A_cam = tf.summary.scalar(\"G_A_cam\", Generator_A_cam)\n\n            self.G_B_loss = tf.summary.scalar(\"G_B_loss\", Generator_B_loss)\n            self.G_B_gan = tf.summary.scalar(\"G_B_gan\", Generator_B_gan)\n            self.G_B_cycle = tf.summary.scalar(\"G_B_cycle\", Generator_B_cycle)\n            self.G_B_identity = tf.summary.scalar(\"G_B_identity\", Generator_B_identity)\n            self.G_B_cam = tf.summary.scalar(\"G_B_cam\", Generator_B_cam)\n\n            self.D_A_loss = tf.summary.scalar(\"D_A_loss\", Discriminator_A_loss)\n            self.D_B_loss = tf.summary.scalar(\"D_B_loss\", Discriminator_B_loss)\n\n            self.rho_var = []\n            for var in tf.trainable_variables():\n                if 'rho' in var.name:\n                    self.rho_var.append(tf.summary.histogram(var.name, var))\n                    self.rho_var.append(tf.summary.scalar(var.name + \"_min\", tf.reduce_min(var)))\n                    self.rho_var.append(tf.summary.scalar(var.name + \"_max\", tf.reduce_max(var)))\n                    self.rho_var.append(tf.summary.scalar(var.name + \"_mean\", tf.reduce_mean(var)))\n\n            g_summary_list = [self.G_A_loss, self.G_A_gan, self.G_A_cycle, self.G_A_identity, self.G_A_cam,\n                              self.G_B_loss, self.G_B_gan, self.G_B_cycle, self.G_B_identity, self.G_B_cam,\n                              self.all_G_loss]\n\n            g_summary_list.extend(self.rho_var)\n            d_summary_list = [self.D_A_loss, self.D_B_loss, self.all_D_loss]\n\n            self.G_loss = 
tf.summary.merge(g_summary_list)\n            self.D_loss = tf.summary.merge(d_summary_list)\n            # self.ppl = tf.summary.scalar('Perplexity', self.ppl)\n            if self.test_train:\n                \"\"\" Test \"\"\"\n                self.test_domain_A = tf.placeholder(tf.float32, [1, self.img_size, self.img_size, self.img_ch], name='test_domain_A')\n                self.test_domain_B = tf.placeholder(tf.float32, [1, self.img_size, self.img_size, self.img_ch], name='test_domain_B')\n\n                self.test_fake_B, _ = self.generate_a2b(self.test_domain_A, reuse=True)\n                self.test_fake_A, _ = self.generate_b2a(self.test_domain_B, reuse=True)\n        elif self.phase == 'test':\n            self.test_domain_A = tf.placeholder(tf.float32, [1, self.img_size, self.img_size, self.img_ch], name='test_domain_A')\n            self.test_domain_B = tf.placeholder(tf.float32, [1, self.img_size, self.img_size, self.img_ch], name='test_domain_B')\n\n            self.test_fake_B, _ = self.generate_a2b(self.test_domain_A)\n            self.test_fake_A, _ = self.generate_b2a(self.test_domain_B)\n\n    def train(self):\n        # initialize all variables\n        tf.global_variables_initializer().run()\n\n        # saver to save model\n        self.saver = tf.train.Saver()\n\n        # summary writer\n        self.writer = tf.summary.FileWriter(self.log_dir + '/' + self.model_dir, self.sess.graph)\n\n\n        # restore check-point if it exits\n        could_load, checkpoint_counter = self.load(self.checkpoint_dir)\n        if could_load:\n            start_epoch = (int)(checkpoint_counter / self.iteration)\n            start_batch_id = checkpoint_counter - start_epoch * self.iteration\n            counter = checkpoint_counter\n            print(\" [*] Load SUCCESS\")\n        else:\n            start_epoch = 0\n            start_batch_id = 0\n            counter = 1\n            print(\" [!] 
Load failed...\")\n\n        # loop for epoch\n        start_time = time.time()\n        past_g_loss = -1.\n        lr = self.init_lr\n        for epoch in range(start_epoch, self.epoch):\n            # lr = self.init_lr if epoch < self.decay_epoch else self.init_lr * (self.epoch - epoch) / (self.epoch - self.decay_epoch)\n            if self.decay_flag :\n                #lr = self.init_lr * pow(0.5, epoch // self.decay_epoch)\n                lr = self.init_lr if epoch < self.decay_epoch else self.init_lr * (self.epoch - epoch) / (self.epoch - self.decay_epoch)\n            for idx in range(start_batch_id, self.iteration):\n                train_feed_dict = {\n                    self.lr : lr\n                }\n\n                # Update D\n                _, d_loss, summary_str, ppl = self.sess.run([self.D_optim,\n                                                        self.Discriminator_loss, self.D_loss,\n                                                             self.ppl], feed_dict = train_feed_dict)\n                self.writer.add_summary(summary_str, counter)\n\n                # Update G\n                g_loss = None\n                if (counter - 1) % self.n_critic == 0 :\n                    batch_A_images, batch_B_images, fake_A, fake_B, _, g_loss, summary_str = self.sess.run([self.real_A, self.real_B,\n                                                                                                            self.fake_A, self.fake_B,\n                                                                                                            self.G_optim,\n                                                                                                            self.Generator_loss, self.G_loss], feed_dict = train_feed_dict)\n                    self.writer.add_summary(summary_str, counter)\n                    past_g_loss = g_loss\n\n                # display training status\n                counter += 1\n                if g_loss == None :\n                    g_loss = past_g_loss\n                if idx % 1000==0:\n                    print(\"Epoch: [%2d] [%5d/%5d] time: %4.4f d_loss: %.8f, g_loss: %.8f, ppl: %.4f\"\n                          \"\" % (epoch, idx, self.iteration, time.time() - start_time, d_loss,\n                                g_loss, ppl))\n\n                if np.mod(idx+1, self.print_freq) == 0 :\n                    save_images(batch_A_images, [self.batch_size, 1],\n                                './{}/real_A_{:03d}_{:05d}.png'.format(self.sample_dir, epoch, idx+1))\n                    # save_images(batch_B_images, [self.batch_size, 1],\n                    #             './{}/real_B_{:03d}_{:05d}.png'.format(self.sample_dir, epoch, idx+1))\n\n                    # save_images(fake_A, [self.batch_size, 1],\n                    #             './{}/fake_A_{:03d}_{:05d}.png'.format(self.sample_dir, epoch, idx+1))\n                    save_images(fake_B, [self.batch_size, 1],\n                                './{}/fake_B_{:03d}_{:05d}.png'.format(self.sample_dir, epoch, idx+1))\n\n                # if np.mod(idx + 1, self.save_freq) == 0:\n                #     self.save(self.checkpoint_dir, counter)\n\n            # After an epoch, start_batch_id is set to zero\n            # non-zero value is only for the first epoch after loading pre-trained model\n            start_batch_id = 0\n            # if epoch % 2 == 0:\n            self.test(epoch)\n            # save model for final step\n            if np.mod(epoch+1, 5) == 0:\n                
self.save(self.checkpoint_dir, counter)\n\n\n\n\n    def save(self, checkpoint_dir, step):\n        checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir)\n\n        if not os.path.exists(checkpoint_dir):\n            os.makedirs(checkpoint_dir)\n        save_solid = False\n        while not save_solid:\n            try:\n                self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)\n                # print('ckpt saved...')\n                save_solid = True\n            except:\n                pass\n\n    def load(self, checkpoint_dir):\n        print(\" [*] Reading checkpoints...\")\n        checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir)\n\n        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)\n        if ckpt and ckpt.model_checkpoint_path:\n            ckpt_name = os.path.basename(ckpt.model_checkpoint_path)\n            self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))\n            counter = int(ckpt_name.split('-')[-1])\n            print(\" [*] Success to read {}\".format(ckpt_name))\n            return True, counter\n        else:\n            print(\" [*] Failed to find a checkpoint\")\n            return False, 0\n\n    def test(self, epoch):\n        if not self.test_train:\n            tf.global_variables_initializer().run()\n            self.saver = tf.train.Saver()\n            could_load, checkpoint_counter = self.load(self.checkpoint_dir)\n            if could_load :\n                print(\" [*] Load SUCCESS\")\n            else :\n                print(\" [!] Load failed...\")\n\n        test_A_root = './dataset/{}'.format(self.dataset_name+'/testA')\n        test_B_root = './dataset/{}'.format(self.dataset_name+'/testB')\n        train_A_root = './dataset/{}'.format(self.dataset_name + '/trainA')\n        train_B_root = './dataset/{}'.format(self.dataset_name + '/trainB')\n        test_A_files = glob('./dataset/{}/*.*'.format(self.dataset_name + '/testA'))\n        test_B_files = glob('./dataset/{}/*.*'.format(self.dataset_name + '/testB'))\n        A2B_root = os.path.join(self.result_dir, '{:03d}-{}'.format(epoch, 'A-B'))\n        B2A_root = os.path.join(self.result_dir, '{:03d}-{}'.format(epoch, 'B-A'))\n        # check_folder(self.result_dir)\n        check_folder(A2B_root)\n        check_folder(B2A_root)\n\n        for sample_file  in test_A_files : # A -> B\n            # print('Processing A image: ' + sample_file)\n            sample_image = np.asarray(load_test_data(sample_file, size=self.img_size))\n\n            image_path = os.path.join(A2B_root, os.path.basename(sample_file))\n\n            fake_img = self.sess.run(self.test_fake_B, feed_dict = {self.test_domain_A : sample_image})\n            save_images(fake_img, [1, 1], image_path)\n\n        for sample_file  in test_B_files : # B -> A\n            \n            sample_image = np.asarray(load_test_data(sample_file, size=self.img_size))\n\n            image_path = os.path.join(B2A_root, os.path.basename(sample_file))\n\n            fake_img = self.sess.run(self.test_fake_A, feed_dict = {self.test_domain_B : sample_image})\n\n            save_images(fake_img, [1, 1], image_path)\n\n"
  },
  {
    "path": "FQ-U-GAT-IT/dataset/download_dataset_1.sh",
    "content": "DATASET=$1\n\nif [[$DATASET != \"portrait\" && $DATASET != \"cat2dog\"]]; then\n  echo \"dataset not available\"\n  exit\nfi\n\nURL=http://vllab.ucmerced.edu/hylee/DRIT/datasets/$DATASET.zip\nwget -N $URL -O ../dataset/$DATASET.zip\nunzip ../dataset/$DATASET.zip -d ../dataset\nrm ../dataset/$DATASET.zip\n"
  },
  {
    "path": "FQ-U-GAT-IT/download_dataset_2.sh",
    "content": "#!/bin/bash\n# https://github.com/junyanz/CycleGAN/blob/master/datasets/download_dataset.sh\n\nFILE=$1\n\nif [[ $FILE != \"ae_photos\" && $FILE != \"apple2orange\" && $FILE != \"summer2winter_yosemite\" &&  $FILE != \"horse2zebra\" && $FILE != \"monet2photo\" && $FILE != \"cezanne2photo\" && $FILE != \"ukiyoe2photo\" && $FILE != \"vangogh2photo\" && $FILE != \"maps\" && $FILE != \"cityscapes\" && $FILE != \"facades\" && $FILE != \"iphone2dslr_flower\" && $FILE != \"ae_photos\" ]]; then\n    echo \"Available datasets are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos\"\n    exit 1\nfi\n\nURL=https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/$FILE.zip\nZIP_FILE=./dataset/$FILE.zip\nTARGET_DIR=./dataset/$FILE/\nwget -N $URL -O $ZIP_FILE\nmkdir -p $TARGET_DIR\nunzip $ZIP_FILE -d ./dataset/\nrm $ZIP_FILE\n"
  },
  {
    "path": "FQ-U-GAT-IT/logger.py",
    "content": "import sys\n\nclass Logger(object):\n  def __init__(self, output_file):\n    self.terminal = sys.stdout\n    self.log = open(output_file, \"w\")\n\n  def write(self, message):\n    print(message, end=\"\", file=self.terminal, flush=True)\n    print(message, end=\"\", file=self.log, flush=True)\n\n  def flush(self):\n    self.terminal.flush()\n    self.log.flush()"
  },
  {
    "path": "FQ-U-GAT-IT/main.py",
    "content": "from UGATIT import UGATIT\nimport argparse\nfrom utils import *\nfrom logger import Logger\nimport sys\n\"\"\"parsing and configuration\"\"\"\n\ndef parse_args():\n    desc = \"Tensorflow implementation of U-GAT-IT\"\n    parser = argparse.ArgumentParser(description=desc)\n    parser.add_argument('--phase', type=str, default='test', help='[train / test]')\n    parser.add_argument('--light', type=str2bool, default=False, help='[U-GAT-IT full version / '\n                                                                  'U-GAT-IT light version]')\n    parser.add_argument('--dataset', type=str, default='selfie2anime', help='dataset_name')\n\n    parser.add_argument('--epoch', type=int, default=101, help='The number of epochs to run')\n    parser.add_argument('--iteration', type=int, default=10000, help='The number of training '\n                                                                  'iterations')\n    parser.add_argument('--batch_size', type=int, default=1, help='The size of batch size')\n    parser.add_argument('--print_freq', type=int, default=1000, help='The number of '\n                                                                    'image_print_freq')\n    parser.add_argument('--save_freq', type=int, default=10, help='The number of ckpt_save_freq')\n    parser.add_argument('--decay_flag', type=str2bool, default=True, help='The decay_flag')\n    parser.add_argument('--decay_epoch', type=int, default=50, help='decay epoch')\n\n    parser.add_argument('--lr', type=float, default=0.0001, help='The learning rate')\n    parser.add_argument('--GP_ld', type=int, default=10, help='The gradient penalty lambda')\n    parser.add_argument('--adv_weight', type=int, default=1, help='Weight about GAN')\n    parser.add_argument('--cycle_weight', type=int, default=10, help='Weight about Cycle')\n    parser.add_argument('--identity_weight', type=int, default=10, help='Weight about Identity')\n    parser.add_argument('--cam_weight', type=int, default=1000, help='Weight about CAM')\n    parser.add_argument('--gan_type', type=str, default='lsgan', help='[gan / lsgan / wgan-gp / wgan-lp / dragan / hinge]')\n\n    parser.add_argument('--smoothing', type=str2bool, default=True, help='AdaLIN smoothing effect')\n\n    parser.add_argument('--ch', type=int, default=64, help='base channel number per layer')\n    parser.add_argument('--n_res', type=int, default=4, help='The number of resblock')\n    parser.add_argument('--n_dis', type=int, default=6, help='The number of discriminator layer')\n    parser.add_argument('--n_critic', type=int, default=1, help='The number of critic')\n    parser.add_argument('--sn', type=str2bool, default=True, help='using spectral norm')\n\n    parser.add_argument('--img_size', type=int, default=256, help='The size of image')\n    parser.add_argument('--img_ch', type=int, default=3, help='The size of image channel')\n    parser.add_argument('--augment_flag', type=str2bool, default=True, help='Image augmentation use or not')\n\n    parser.add_argument('--checkpoint_dir', type=str, default='checkpoint',\n                        help='Directory name to save the checkpoints')\n    parser.add_argument('--result_dir', type=str, default='results',\n                        help='Directory name to save the generated images')\n    parser.add_argument('--log_dir', type=str, default='logs',\n                        help='Directory name to save training logs')\n    parser.add_argument('--sample_dir', type=str, default='samples',\n                        help='Directory 
name to save the samples on training')\n\n    # Quantization argument\n    parser.add_argument('--quant', type=str2bool, default=True,\n                        help='quantization or not?')\n    parser.add_argument('--commitment_cost', type=float, default=2.0, help='commitment cost')\n    parser.add_argument('--quantization_layer', type=str, default='123', help='which layer?')\n    parser.add_argument('--decay', type=float, default=0.85, help='dictionary learning decay')\n    parser.add_argument('--test_train', type=str2bool, default=True, help='if test while training')\n\n    return check_args(parser.parse_args())\n\n\"\"\"checking arguments\"\"\"\ndef check_args(args):\n    # --checkpoint_dir\n\n    if args.quant:\n        args.checkpoint_dir += '_quant'\n        args.result_dir += '_quant'\n        args.log_dir += '_quant'\n        args.sample_dir += '_quant'\n\n    check_folder(args.checkpoint_dir)\n\n    # --result_dir\n    check_folder(args.result_dir)\n\n    # --log_dir\n    check_folder(args.log_dir)\n\n    # --sample_dir\n    check_folder(args.sample_dir)\n    # --epoch\n    try:\n        assert args.epoch >= 1\n    except:\n        print('number of epochs must be larger than or equal to one')\n\n    # --batch_size\n    try:\n        assert args.batch_size >= 1\n    except:\n        print('batch size must be larger than or equal to one')\n    return args\n\n\"\"\"main\"\"\"\ndef main():\n    # parse arguments\n    args = parse_args()\n    if args is None:\n      exit()\n\n    # open session\n    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:\n        gan = UGATIT(sess, args)\n\n        # build graph\n        gan.build_model()\n\n        # show network architecture\n        show_all_variables()\n        # check_folder(gan.model_dir)\n        # sys.stdout = Logger(os.path.join(gan.model_dir, 'log.txt'))\n        if args.phase == 'train' :\n            gan.train()\n            print(\" [*] Training finished!\")\n\n        if args.phase == 'test' :\n            gan.test(epoch=0)\n            print(\" [*] Test finished!\")\n\nif __name__ == '__main__':\n\n    main()\n"
  },
  {
    "path": "FQ-U-GAT-IT/ops.py",
    "content": "import tensorflow as tf\nimport tensorflow.contrib as tf_contrib\n\n# Xavier : tf_contrib.layers.xavier_initializer()\n# He : tf_contrib.layers.variance_scaling_initializer()\n# Normal : tf.random_normal_initializer(mean=0.0, stddev=0.02)\n# l2_decay : tf_contrib.layers.l2_regularizer(0.0001)\n\nweight_init = tf.random_normal_initializer(mean=0.0, stddev=0.02)\nweight_regularizer = tf_contrib.layers.l2_regularizer(scale=0.0001)\n\n##################################################################################\n# Layer\n##################################################################################\n\ndef conv(x, channels, kernel=4, stride=2, pad=0, pad_type='zero', use_bias=True, sn=False, scope='conv_0'):\n    with tf.variable_scope(scope):\n        if pad > 0 :\n            if (kernel - stride) % 2 == 0:\n                pad_top = pad\n                pad_bottom = pad\n                pad_left = pad\n                pad_right = pad\n\n            else:\n                pad_top = pad\n                pad_bottom = kernel - stride - pad_top\n                pad_left = pad\n                pad_right = kernel - stride - pad_left\n\n            if pad_type == 'zero':\n                x = tf.pad(x, [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]])\n            if pad_type == 'reflect':\n                x = tf.pad(x, [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]], mode='REFLECT')\n\n        if sn :\n            w = tf.get_variable(\"kernel\", shape=[kernel, kernel, x.get_shape()[-1], channels], initializer=weight_init,\n                                regularizer=weight_regularizer)\n            x = tf.nn.conv2d(input=x, filter=spectral_norm(w),\n                             strides=[1, stride, stride, 1], padding='VALID')\n            if use_bias :\n                bias = tf.get_variable(\"bias\", [channels], initializer=tf.constant_initializer(0.0))\n                x = tf.nn.bias_add(x, bias)\n\n        else :\n            x = tf.layers.conv2d(inputs=x, filters=channels,\n                                 kernel_size=kernel, kernel_initializer=weight_init,\n                                 kernel_regularizer=weight_regularizer,\n                                 strides=stride, use_bias=use_bias)\n\n\n        return x\n\ndef fully_connected_with_w(x, use_bias=True, sn=False, reuse=False, scope='linear'):\n    with tf.variable_scope(scope, reuse=reuse):\n        x = flatten(x)\n        bias = 0.0\n        shape = x.get_shape().as_list()\n        channels = shape[-1]\n\n        w = tf.get_variable(\"kernel\", [channels, 1], tf.float32,\n                            initializer=weight_init, regularizer=weight_regularizer)\n\n        if sn :\n            w = spectral_norm(w)\n\n        if use_bias :\n            bias = tf.get_variable(\"bias\", [1],\n                                   initializer=tf.constant_initializer(0.0))\n\n            x = tf.matmul(x, w) + bias\n        else :\n            x = tf.matmul(x, w)\n\n        if use_bias :\n            weights = tf.gather(tf.transpose(tf.nn.bias_add(w, bias)), 0)\n        else :\n            weights = tf.gather(tf.transpose(w), 0)\n\n        return x, weights\n\ndef fully_connected(x, units, use_bias=True, sn=False, scope='linear'):\n    with tf.variable_scope(scope):\n        x = flatten(x)\n        shape = x.get_shape().as_list()\n        channels = shape[-1]\n\n        if sn:\n            w = tf.get_variable(\"kernel\", [channels, units], tf.float32,\n                                
initializer=weight_init, regularizer=weight_regularizer)\n            if use_bias:\n                bias = tf.get_variable(\"bias\", [units],\n                                       initializer=tf.constant_initializer(0.0))\n\n                x = tf.matmul(x, spectral_norm(w)) + bias\n            else:\n                x = tf.matmul(x, spectral_norm(w))\n\n        else :\n            x = tf.layers.dense(x, units=units, kernel_initializer=weight_init, kernel_regularizer=weight_regularizer, use_bias=use_bias)\n\n        return x\n\ndef flatten(x) :\n    return tf.layers.flatten(x)\n\n##################################################################################\n# Residual-block\n##################################################################################\n\ndef resblock(x_init, channels, use_bias=True, scope='resblock_0'):\n    with tf.variable_scope(scope):\n        with tf.variable_scope('res1'):\n            x = conv(x_init, channels, kernel=3, stride=1, pad=1, pad_type='reflect', use_bias=use_bias)\n            x = instance_norm(x)\n            x = relu(x)\n\n        with tf.variable_scope('res2'):\n            x = conv(x, channels, kernel=3, stride=1, pad=1, pad_type='reflect', use_bias=use_bias)\n            x = instance_norm(x)\n\n        return x + x_init\n\ndef adaptive_ins_layer_resblock(x_init, channels, gamma, beta, use_bias=True, smoothing=True, scope='adaptive_resblock') :\n    with tf.variable_scope(scope):\n        with tf.variable_scope('res1'):\n            x = conv(x_init, channels, kernel=3, stride=1, pad=1, pad_type='reflect', use_bias=use_bias)\n            x = adaptive_instance_layer_norm(x, gamma, beta, smoothing)\n            x = relu(x)\n\n        with tf.variable_scope('res2'):\n            x = conv(x, channels, kernel=3, stride=1, pad=1, pad_type='reflect', use_bias=use_bias)\n            x = adaptive_instance_layer_norm(x, gamma, beta, smoothing)\n\n        return x + x_init\n\n\n##################################################################################\n# Sampling\n##################################################################################\n\ndef up_sample(x, scale_factor=2):\n    _, h, w, _ = x.get_shape().as_list()\n    new_size = [h * scale_factor, w * scale_factor]\n    return tf.image.resize_nearest_neighbor(x, size=new_size)\n\n\ndef global_avg_pooling(x):\n    gap = tf.reduce_mean(x, axis=[1, 2])\n    return gap\n\ndef global_max_pooling(x):\n    gmp = tf.reduce_max(x, axis=[1, 2])\n    return gmp\n\n##################################################################################\n# Activation function\n##################################################################################\n\ndef lrelu(x, alpha=0.01):\n    # pytorch alpha is 0.01\n    return tf.nn.leaky_relu(x, alpha)\n\n\ndef relu(x):\n    return tf.nn.relu(x)\n\n\ndef tanh(x):\n    return tf.tanh(x)\n\ndef sigmoid(x) :\n    return tf.sigmoid(x)\n\n##################################################################################\n# Normalization function\n##################################################################################\n\ndef adaptive_instance_layer_norm(x, gamma, beta, smoothing=True, scope='instance_layer_norm') :\n    with tf.variable_scope(scope):\n        ch = x.shape[-1]\n        eps = 1e-5\n\n        ins_mean, ins_sigma = tf.nn.moments(x, axes=[1, 2], keep_dims=True)\n        x_ins = (x - ins_mean) / (tf.sqrt(ins_sigma + eps))\n\n        ln_mean, ln_sigma = tf.nn.moments(x, axes=[1, 2, 3], keep_dims=True)\n        x_ln = (x - ln_mean) / 
(tf.sqrt(ln_sigma + eps))\n\n        rho = tf.get_variable(\"rho\", [ch], initializer=tf.constant_initializer(1.0), constraint=lambda x: tf.clip_by_value(x, clip_value_min=0.0, clip_value_max=1.0))\n\n        if smoothing :\n            rho = tf.clip_by_value(rho - tf.constant(0.1), 0.0, 1.0)\n\n        x_hat = rho * x_ins + (1 - rho) * x_ln\n\n\n        x_hat = x_hat * gamma + beta\n\n        return x_hat\n\ndef instance_norm(x, scope='instance_norm'):\n    return tf_contrib.layers.instance_norm(x,\n                                           epsilon=1e-05,\n                                           center=True, scale=True,\n                                           scope=scope)\n\ndef layer_norm(x, scope='layer_norm') :\n    return tf_contrib.layers.layer_norm(x,\n                                        center=True, scale=True,\n                                        scope=scope)\n\ndef layer_instance_norm(x, scope='layer_instance_norm') :\n    with tf.variable_scope(scope):\n        ch = x.shape[-1]\n        eps = 1e-5\n\n        ins_mean, ins_sigma = tf.nn.moments(x, axes=[1, 2], keep_dims=True)\n        x_ins = (x - ins_mean) / (tf.sqrt(ins_sigma + eps))\n\n        ln_mean, ln_sigma = tf.nn.moments(x, axes=[1, 2, 3], keep_dims=True)\n        x_ln = (x - ln_mean) / (tf.sqrt(ln_sigma + eps))\n\n        rho = tf.get_variable(\"rho\", [ch], initializer=tf.constant_initializer(0.0), constraint=lambda x: tf.clip_by_value(x, clip_value_min=0.0, clip_value_max=1.0))\n\n        gamma = tf.get_variable(\"gamma\", [ch], initializer=tf.constant_initializer(1.0))\n        beta = tf.get_variable(\"beta\", [ch], initializer=tf.constant_initializer(0.0))\n\n        x_hat = rho * x_ins + (1 - rho) * x_ln\n\n        x_hat = x_hat * gamma + beta\n\n        return x_hat\n\ndef spectral_norm(w, iteration=1):\n    w_shape = w.shape.as_list()\n    w = tf.reshape(w, [-1, w_shape[-1]])\n\n    u = tf.get_variable(\"u\", [1, w_shape[-1]], initializer=tf.random_normal_initializer(), trainable=False)\n\n    u_hat = u\n    v_hat = None\n    for i in range(iteration):\n        \"\"\"\n        power iteration\n        Usually iteration = 1 will be enough\n        \"\"\"\n        v_ = tf.matmul(u_hat, tf.transpose(w))\n        v_hat = tf.nn.l2_normalize(v_)\n\n        u_ = tf.matmul(v_hat, w)\n        u_hat = tf.nn.l2_normalize(u_)\n\n    u_hat = tf.stop_gradient(u_hat)\n    v_hat = tf.stop_gradient(v_hat)\n\n    sigma = tf.matmul(tf.matmul(v_hat, w), tf.transpose(u_hat))\n\n    with tf.control_dependencies([u.assign(u_hat)]):\n        w_norm = w / sigma\n        w_norm = tf.reshape(w_norm, w_shape)\n\n\n    return w_norm\n\n##################################################################################\n# Loss function\n##################################################################################\n\ndef L1_loss(x, y):\n    loss = tf.reduce_mean(tf.abs(x - y))\n\n    return loss\n\ndef cam_loss(source, non_source) :\n\n    identity_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(source), logits=source))\n    non_identity_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(non_source), logits=non_source))\n\n    loss = identity_loss + non_identity_loss\n\n    return loss\n\ndef regularization_loss(scope_name) :\n    \"\"\"\n    If you want to use \"Regularization\"\n    g_loss += regularization_loss('generator')\n    d_loss += regularization_loss('discriminator')\n    \"\"\"\n    collection_regularization = 
tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\n\n    loss = []\n    for item in collection_regularization :\n        if scope_name in item.name :\n            loss.append(item)\n\n    return tf.reduce_sum(loss)\n\n\ndef discriminator_loss(loss_func, real, fake):\n    loss = []\n    real_loss = 0\n    fake_loss = 0\n\n    for i in range(2) :\n        if loss_func.__contains__('wgan') :\n            real_loss = -tf.reduce_mean(real[i])\n            fake_loss = tf.reduce_mean(fake[i])\n\n        if loss_func == 'lsgan' :\n            real_loss = tf.reduce_mean(tf.squared_difference(real[i], 1.0))\n            fake_loss = tf.reduce_mean(tf.square(fake[i]))\n\n        if loss_func == 'gan' or loss_func == 'dragan' :\n            real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(real[i]), logits=real[i]))\n            fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(fake[i]), logits=fake[i]))\n\n        if loss_func == 'hinge' :\n            real_loss = tf.reduce_mean(relu(1.0 - real[i]))\n            fake_loss = tf.reduce_mean(relu(1.0 + fake[i]))\n\n        loss.append(real_loss + fake_loss)\n\n    return sum(loss)\n\ndef generator_loss(loss_func, fake):\n    loss = []\n    fake_loss = 0\n\n    for i in range(2) :\n        if loss_func.__contains__('wgan') :\n            fake_loss = -tf.reduce_mean(fake[i])\n\n        if loss_func == 'lsgan' :\n            fake_loss = tf.reduce_mean(tf.squared_difference(fake[i], 1.0))\n\n        if loss_func == 'gan' or loss_func == 'dragan' :\n            fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(fake[i]), logits=fake[i]))\n\n        if loss_func == 'hinge' :\n            fake_loss = -tf.reduce_mean(fake[i])\n\n        loss.append(fake_loss)\n\n    return sum(loss)"
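  },
  {
    "path": "FQ-U-GAT-IT/examples/spectral_norm_sketch.py",
    "content": "\"\"\"Illustrative sketch; this file is not part of the original release.\n\nIt redoes the power iteration from spectral_norm() in ops.py with plain NumPy,\nso the estimated sigma can be checked against an exact SVD. The file name and\nall symbols below are our own illustration, not repo API.\n\"\"\"\nimport numpy as np\n\n\ndef spectral_norm_power_iteration(w, iteration=1):\n    # Same scheme as ops.py: flatten w to 2-D, keep a row vector u, and\n    # alternate v = normalize(u W^T), u = normalize(v W); sigma = v W u^T.\n    w2d = w.reshape(-1, w.shape[-1])\n    u = np.random.randn(1, w2d.shape[-1])\n    for _ in range(iteration):\n        v = u @ w2d.T\n        v = v / np.linalg.norm(v)\n        u = v @ w2d\n        u = u / np.linalg.norm(u)\n    sigma = (v @ w2d @ u.T).item()\n    return w / sigma, sigma\n\n\nif __name__ == '__main__':\n    w = np.random.randn(3, 3, 64, 128)   # shaped like a conv kernel, as in ops.py\n    _, sigma = spectral_norm_power_iteration(w, iteration=50)\n    exact = np.linalg.svd(w.reshape(-1, 128), compute_uv=False)[0]\n    print('power iteration: %.4f  exact: %.4f' % (sigma, exact))\n"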
  },
  {
    "path": "FQ-U-GAT-IT/utils.py",
    "content": "import tensorflow as tf\nfrom tensorflow.contrib import slim\nimport cv2\nimport os, random\nimport numpy as np\n\nclass ImageData:\n\n    def __init__(self, load_size, channels, augment_flag):\n        self.load_size = load_size\n        self.channels = channels\n        self.augment_flag = augment_flag\n\n    def image_processing(self, filename):\n        x = tf.read_file(filename)\n        x_decode = tf.image.decode_jpeg(x, channels=self.channels)\n        img = tf.image.resize_images(x_decode, [self.load_size, self.load_size])\n        img = tf.cast(img, tf.float32) / 127.5 - 1\n\n        if self.augment_flag :\n            augment_size = self.load_size + (30 if self.load_size == 256 else 15)\n            p = random.random()\n            if p > 0.5:\n                img = augmentation(img, augment_size)\n\n        return img\n\ndef load_test_data(image_path, size=256):\n    img = cv2.imread(image_path, flags=cv2.IMREAD_COLOR)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n    img = cv2.resize(img, dsize=(size, size))\n\n    img = np.expand_dims(img, axis=0)\n    img = img/127.5 - 1\n\n    return img\n\ndef augmentation(image, augment_size):\n    seed = random.randint(0, 2 ** 31 - 1)\n    ori_image_shape = tf.shape(image)\n    image = tf.image.random_flip_left_right(image, seed=seed)\n    image = tf.image.resize_images(image, [augment_size, augment_size])\n    image = tf.random_crop(image, ori_image_shape, seed=seed)\n    return image\n\ndef save_images(images, size, image_path):\n    return imsave(inverse_transform(images), size, image_path)\n\ndef inverse_transform(images):\n    return ((images+1.) / 2) * 255.0\n\n\ndef imsave(images, size, path):\n    images = merge(images, size)\n    images = cv2.cvtColor(images.astype('uint8'), cv2.COLOR_RGB2BGR)\n\n    return cv2.imwrite(path, images)\n\ndef merge(images, size):\n    h, w = images.shape[1], images.shape[2]\n    img = np.zeros((h * size[0], w * size[1], 3))\n    for idx, image in enumerate(images):\n        i = idx % size[1]\n        j = idx // size[1]\n        img[h*j:h*(j+1), w*i:w*(i+1), :] = image\n\n    return img\n\ndef show_all_variables():\n    model_vars = tf.trainable_variables()\n    slim.model_analyzer.analyze_vars(model_vars, print_info=True)\n\ndef check_folder(log_dir):\n    if not os.path.exists(log_dir):\n        os.makedirs(log_dir)\n    return log_dir\n\ndef str2bool(x):\n    return x.lower() in ('true')\n"
  },
  {
    "path": "FQ-U-GAT-IT/vq_layer.py",
    "content": "import tensorflow as tf\nfrom tensorflow.python.training import moving_averages\n\nclass VectorQuantizerEMA:\n  \"\"\"Sonnet module representing the VQ-VAE layer.\n\n  Args:\n    embedding_dim: integer representing the dimensionality of the tensors in the\n      quantized space. Inputs to the modules must be in this format as well.\n    num_embeddings: integer, the number of vectors in the quantized space.\n    commitment_cost: scalar which controls the weighting of the loss terms (see\n      equation 4 in the paper).\n    decay: float, decay for the moving averages.\n    epsilon: small float constant to avoid numerical instability.\n  \"\"\"\n\n  def __init__(self, embedding_dim, num_embeddings, commitment_cost, decay,\n               epsilon=1e-5, name='VectorQuantizerEMA'):\n    # super(VectorQuantizerEMA, self).__init__(name=name)\n    self._embedding_dim = embedding_dim\n    self._num_embeddings = num_embeddings\n    self._decay = decay\n    self._commitment_cost = commitment_cost\n    self._epsilon = epsilon\n\n\n  def __call__(self, inputs, reuse=False, layer=None, is_training=True):\n    \"\"\"Connects the module to some inputs.\n\n    Args:\n      inputs: Tensor, final dimension must be equal to embedding_dim. All other\n        leading dimensions will be flattened and treated as a large batch.\n      is_training: boolean, whether this connection is to training data. When\n        this is set to False, the internal moving average statistics will not be\n        updated.\n\n    Returns:\n      dict containing the following keys and values:\n        quantize: Tensor containing the quantized version of the input.\n        loss: Tensor containing the loss to optimize.\n        perplexity: Tensor containing the perplexity of the encodings.\n        encodings: Tensor containing the discrete encodings, ie which element\n          of the quantized space each input element was mapped to.\n        encoding_indices: Tensor containing the discrete encoding indices, ie\n          which element of the quantized space each input element was mapped to.\n    \"\"\"\n    # Ensure that the weights are read fresh for each timestep, which otherwise\n    # would not be guaranteed in an RNN setup. Note that this relies on inputs\n    # having a data dependency with the output of the previous timestep - if\n    # this is not the case, there is no way to serialize the order of weight\n    # updates within the module, so explicit external dependencies must be used.\n    with tf.variable_scope('vq_layer%d'%layer, reuse=reuse):\n      initializer = tf.random_normal_initializer()\n      # w is a matrix with an embedding in each column. 
When training, the\n      # embedding is assigned to be the average of all inputs assigned to that\n      # embedding.\n      self._w = tf.get_variable(\n          'embedding', [self._embedding_dim, self._num_embeddings],\n          initializer=initializer, use_resource=True)\n      self._ema_cluster_size = tf.get_variable(\n          'ema_cluster_size', [self._num_embeddings],\n          initializer=tf.constant_initializer(0), use_resource=True)\n      self._ema_w = tf.get_variable(\n          'ema_dw', initializer=self._w.initialized_value(), use_resource=True)\n\n      with tf.control_dependencies([inputs]):\n        w = self._w.read_value()\n      input_shape = tf.shape(inputs)\n      with tf.control_dependencies([\n          tf.Assert(tf.equal(input_shape[-1], self._embedding_dim),\n                    [input_shape])]):\n        flat_inputs = tf.reshape(inputs, [-1, self._embedding_dim])\n\n      distances = (tf.reduce_sum(flat_inputs**2, 1, keepdims=True)\n                   - 2 * tf.matmul(flat_inputs, w)\n                   + tf.reduce_sum(w ** 2, 0, keepdims=True))\n\n      encoding_indices = tf.argmax(- distances, 1)\n      encodings = tf.one_hot(encoding_indices, self._num_embeddings)\n      encoding_indices = tf.reshape(encoding_indices, tf.shape(inputs)[:-1])\n      quantized = self.quantize(encoding_indices)\n      e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2)\n\n      if is_training:\n        updated_ema_cluster_size = moving_averages.assign_moving_average(\n            self._ema_cluster_size, tf.reduce_sum(encodings, 0), self._decay)\n        dw = tf.matmul(flat_inputs, encodings, transpose_a=True)\n        updated_ema_w = moving_averages.assign_moving_average(self._ema_w, dw,\n                                                              self._decay)\n        n = tf.reduce_sum(updated_ema_cluster_size)\n        updated_ema_cluster_size = (\n            (updated_ema_cluster_size + self._epsilon)\n            / (n + self._num_embeddings * self._epsilon) * n)\n\n        normalised_updated_ema_w = (\n            updated_ema_w / tf.reshape(updated_ema_cluster_size, [1, -1]))\n        with tf.control_dependencies([e_latent_loss]):\n          update_w = tf.assign(self._w, normalised_updated_ema_w)\n          with tf.control_dependencies([update_w]):\n            loss = self._commitment_cost * e_latent_loss\n\n      else:\n        loss = self._commitment_cost * e_latent_loss\n      quantized = inputs + tf.stop_gradient(quantized - inputs)\n      avg_probs = tf.reduce_mean(encodings, 0)\n      perplexity = tf.exp(- tf.reduce_sum(avg_probs * tf.log(avg_probs + 1e-10)))\n\n      return loss, perplexity\n\n  @property\n  def embeddings(self):\n    return self._w\n\n  def quantize(self, encoding_indices):\n    with tf.control_dependencies([encoding_indices]):\n      w = tf.transpose(self.embeddings.read_value(), [1, 0])\n    return tf.nn.embedding_lookup(w, encoding_indices, validate_indices=False)\n"
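  },
  {
    "path": "FQ-U-GAT-IT/examples/vq_ema_sketch.py",
    "content": "\"\"\"Illustrative sketch; this file is not part of the original release.\n\nOne EMA update of VectorQuantizerEMA in plain NumPy: nearest-key assignment\nvia the expansion ||x||^2 - 2 x.w + ||w||^2, then the Laplace-smoothed\nmoving-average codebook update. File name and symbols are our own; the\nhyper-parameter values are arbitrary.\n\"\"\"\nimport numpy as np\n\n\ndef vq_ema_step(x, w, ema_n, ema_w, decay=0.85, eps=1e-5):\n    # x: [batch, dim] features; w: [dim, K] codebook, one embedding per column.\n    distances = (x**2).sum(1, keepdims=True) - 2 * x @ w + (w**2).sum(0, keepdims=True)\n    idx = distances.argmin(1)\n    onehot = np.eye(w.shape[1])[idx]                       # [batch, K] encodings\n    # assign_moving_average(v, value, decay): v <- decay*v + (1-decay)*value\n    ema_n = decay * ema_n + (1 - decay) * onehot.sum(0)    # cluster sizes\n    ema_w = decay * ema_w + (1 - decay) * (x.T @ onehot)   # summed inputs per key\n    n = ema_n.sum()\n    smoothed = (ema_n + eps) / (n + w.shape[1] * eps) * n  # Laplace smoothing\n    w = ema_w / smoothed[None, :]\n    return w, ema_n, ema_w, idx\n\n\nif __name__ == '__main__':\n    rng = np.random.default_rng(0)\n    dim, num_keys = 4, 8\n    w = rng.normal(size=(dim, num_keys))\n    ema_n, ema_w = np.zeros(num_keys), w.copy()\n    x = rng.normal(size=(16, dim))\n    w, ema_n, ema_w, idx = vq_ema_step(x, w, ema_n, ema_w)\n    print('assignments:', idx)\n"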
  },
  {
    "path": "README.md",
    "content": "# FQ-GAN\n\n### Recent Update  \n\n* May 22, 2020 Releasing the pre-trained FQ-BigGAN/BigGAN at resolution 64x64 and their training logs at the [link](https://textae.blob.core.windows.net/qgan/qgan/ibm_ckpt.zip) (10.34G):\n\n\n* May 22, 2020  [`Selfie2Anime Demo`](http://40.71.23.172:8888/)  is released. Try it out.\n\n* [Colab](https://colab.research.google.com/drive/1XdhEBen8vBlqIE-XPuu8j7FHYMs-x83z?usp=sharing) file for training and testing. Put it into```FQ-GAN/FQ-U-GAT-IT``` and follow the training/testing instruction.\n\n* Selfie2Anime pretrained models are available now!! [Halfway checkpoint](https://drive.google.com/drive/folders/1okZAuNYSZvhXtOuHXJOkcMQW4aIWar_M?usp=sharing) and [Final checkpoint](https://drive.google.com/drive/folders/1UIcC6OLa7aEXQjKI8CU3ZfTT3PpuhXGW?usp=sharing).\n\n* [Photo2Portrait](https://drive.google.com/drive/folders/1hE8p0CcsQOvOtbVzoBql0wsdtsMgFvEZ?usp=sharing) pretrained model is released!\n\n***\n\nThis repository contains source code to reproduce the results presented in the paper:\n\n[Feature Quantization Improves GAN Training](https://arxiv.org/abs/2004.02088), ICML 2020\n<br>\n [Yang Zhao*](https://sites.google.com/view/zhao-yang/),\n [Chunyuan Li*](http://chunyuan.li/),\n [Ping Yu](http://irisyu.me/),\n Jianfeng Gao,\n [Changyou Chen](https://cse.buffalo.edu/~changyou/)\n \n\n<p align=\"center\">\n  <img width=\"%80\" height=\"%80\" src=images/architecture.png>\n</p>\n\n\n\n## Contents\n\n1. [FQ-BigGAN](#FQ-BigGAN)\n2. [FQ-U-GAT-IT](#FQ-U-GAT-IT)\n3. [FQ-StyleGAN](#FQ-StyleGAN)\n\n\n\n##  FQ-BigGAN\n\nThis code is based on [PyTorchGAN](https://github.com/ajbrock/BigGAN-PyTorch). Here we will give more details of the code usage. You will need **python 3.x, pytorch 1.x, tqdm ,h5py**\n\n### Prepare datasets\n1. CIFAR-10 or CIFAR-100 (change C10 to C100 to prepare CIFAR-100)\n```\npython make_hdf5.py --dataset C10 --batch_size 256 --data_root data\npython calculate_inception_moments.py --dataset C10 --data_root data --batch_size 128\n```\n2. ImageNet, first you need to manually download ImageNet and put all image class folders into `./data/ImageNet`, then execute the following command to prepare ImageNet (128&times;128)\n\n```\npython make_hdf5.py --dataset I128 --batch_size 256 --data_root data\npython calculate_inception_moments.py --dataset I128_hdf5 --data_root data --batch_size 128\n```\n\n### Training \nWe have four bash scripts in  FQ-BigGAN/scripts to train CIFAR-10, CIFAR-100, ImageNet (64&times;64) and ImageNet (128&times;128), respectively. For example, to train CIFAR-100, you may simply run\n\n```\nsh scripts/launch_C100.sh\n```\n\nTo modify the FQ hyper-parameters, we provide the following options in each script as arguments:\n\n1. `--discrete_layer`: it specifies which layers you want quantization to be added, i.e. 0123 \n2. `--commitment` : it is the quantization loss coefficient, default=1.0\n3. `--dict_size`:  the size of the EMA dictionary, default=8, meaning there are 2^8 keys in the dictionary.\n4. 
`--dict_decay`: the momentum when learning the dictionary, default=0.8.\n\n### Experiment results\nLearning curves on CIFAR-100.\n<p align=\"center\">\n  <img width=\"70%\" height=\"70%\" src=images/cifar100.png>\n</p>\n\nFID score comparison with BigGAN on ImageNet\n\n<center>\n\n| Model     | 64&times;64 | 128&times;128 |\n|:--------:|:-------:|:-------------:|\n| BigGAN    | 10.55 | 14.88 |\n| FQ-BigGAN | 9.67  | 13.77 |\n\n</center>\n\n<!--\n\nGenerated sample comparison on ImageNet (64x64)\n| BigGAN | FQ-BigGAN |\n:-------------------------:|:-------------------------:|\n![](images/bird.jpg) | ![](images/bird_quant.jpg)\n![](images/insects.jpg) | ![](images/insects_quant.jpg)\n\n-->\n\n## FQ-U-GAT-IT\n\nThis experiment is based on the official codebase [U-GAT-IT](https://github.com/taki0112/UGATIT). Here we give more details of the dataset preparation and code usage. You will need **python 3.6.x, tensorflow-gpu-1.14.0, opencv-python, tensorboardX**\n\n<p align=\"center\">\n  <img width=\"100%\" height=\"100%\" src=images/i2i_samples.png>\n</p>\n\n### Prepare datasets\nWe use selfie2anime, cat2dog, horse2zebra, photo2portrait, vangogh2photo.\n\n1. selfie2anime: go to [U-GAT-IT](https://github.com/taki0112/UGATIT) to download the dataset and unzip it to `./dataset`.\n2. cat2dog and photo2portrait: here we provide a bash script adapted from [DRIT](https://github.com/HsinYingLee/DRIT) to download the two datasets.\n```\ncd FQ-U-GAT-IT/dataset && sh download_dataset_1.sh [cat2dog, portrait]\n```\n3. horse2zebra and vangogh2photo: here we provide a bash script adapted from [CycleGAN](https://github.com/junyanz/CycleGAN) to download the two datasets.\n```\ncd FQ-U-GAT-IT && bash download_dataset_2.sh [horse2zebra, vangogh2photo]\n```\n\n### Training\n```\npython main.py --phase train --dataset [type=str, selfie2anime/portrait/cat2dog/horse2zebra/vangogh2photo] --quant [type=bool, True/False] --commitment_cost [type=float, default=2.0] --quantization_layer [type=str, e.g. 123] --decay [type=float, default=0.85]\n```\nBy default, the training procedure will output checkpoints and intermediate translations from (testA, testB) to `checkpoints (checkpoints_quant)` and `results (results_quant)`, respectively.\n\n### Testing\n```\npython main.py --phase test --test_train False --dataset [type=str, selfie2anime/portrait/cat2dog/horse2zebra/vangogh2photo] --quant [type=bool, True/False] --commitment_cost [type=float, default=2.0] --quantization_layer [type=str, e.g. 123] --decay [type=float, default=0.85]\n```\nIf the models are freshly downloaded from the checkpoints shared above, remember to put them into\n```checkpoint_quant/UGATIT_q_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85```\nby default and modify the file ```checkpoint``` accordingly. This structure is inherited from the official U-GAT-IT. 
Please feel free to modify it for convenience.\n\n### Usage\n```\n├── FQ-GAN\n   └── FQ-U-GAT-IT\n       ├── dataset\n           ├── selfie2anime\n           ├── portrait\n           ├── vangogh2photo\n           ├── horse2zebra\n           └── cat2dog\n       ├── checkpoint_quant\n           ├── UGATIT_q_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85\n               ├── checkpoint\n               ├── UGATIT.model-480000.data-00000-of-00001\n               ├── UGATIT.model-480000.index\n               ├── UGATIT.model-480000.meta\n           ├── UGATIT_q_portrait_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85\n           └── ...\n```\nIf you choose the halfway pretrained model, the contents of ```checkpoint``` should be\n```\nmodel_checkpoint_path: \"UGATIT.model-480000\"\nall_model_checkpoint_paths: \"UGATIT.model-480000\"\n```\n\n## FQ-StyleGAN\n\nThis experiment is based on the official codebase [StyleGAN2](https://github.com/NVlabs/stylegan2). The original [Flickr-Faces-HQ (FFHQ)](https://arxiv.org/abs/1812.04948) dataset includes multi-resolution data.\nYou will need **python 3.6.x, tensorflow-gpu 1.14.0, numpy**\n\n### Prepare datasets\nTo obtain the FFHQ dataset, please refer to the [FFHQ repository](https://github.com/NVlabs/ffhq-dataset) and download the tfrecords dataset [FFHQ-tfrecords](https://drive.google.com/drive/folders/1LTBpJ0W_WLjqza3zdayligS8Dh1V1gA6) into `datasets/ffhq`.\n\n### Training\n```\npython run_training.py --num-gpus=8 --data-dir=datasets --config=config-e --dataset=ffhq --mirror-augment=true --total-kimg 25000 --gamma=100 --D_type=1 --discrete_layer [type=string, default=45] --commitment_cost [type=float, default=0.25] --decay [type=float, default=0.8]\n```\n\nFID score comparison with StyleGAN on FFHQ\n\n<center>\n\n| Model     | 32&times;32 | 64&times;64 | 128&times;128 | 1024&times;1024 |\n|:--------:|:-------:|:-------------:|:-------:|:-------------:|\n| StyleGAN    | 3.28 | 4.82 | 6.33 | 5.24 |\n| FQ-StyleGAN | 3.01 | 4.36 | 5.98 | 4.89 |\n\n</center>\n\n## Acknowledgements\nWe thank the authors of the official open-source implementations of [BigGAN](https://arxiv.org/abs/1809.11096), [StyleGAN](https://arxiv.org/abs/1812.04948), [StyleGAN2](https://arxiv.org/abs/1912.04958) and [U-GAT-IT](https://arxiv.org/abs/1907.10830).\n"
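  },
  {
    "path": "examples/fq_options_sketch.py",
    "content": "\"\"\"Illustrative sketch; this file is not part of the original release.\n\nIt spells out how the FQ options described in README.md map onto quantities\nused by the quantization layer: --dict_size is a log2 codebook size,\n--commitment weights the commitment loss, and perplexity measures codebook\nusage. File name and symbols are our own illustration.\n\"\"\"\nimport numpy as np\n\ndict_size = 8                    # --dict_size: log2 of the dictionary size\nnum_embeddings = 2 ** dict_size  # so 256 keys in the EMA dictionary\ncommitment_cost = 1.0            # --commitment: weight on the commitment loss\n\n# Commitment loss: mean squared distance between features and their quantized\n# versions (np.round is only a stand-in for the real nearest-key lookup).\nfeatures = np.random.randn(16, 64)\nquantized = np.round(features)\nloss = commitment_cost * np.mean((quantized - features) ** 2)\n\n# Perplexity: exp of the entropy of average key usage; it reaches\n# num_embeddings when every key is used equally often.\navg_probs = np.full(num_embeddings, 1.0 / num_embeddings)\nperplexity = np.exp(-np.sum(avg_probs * np.log(avg_probs + 1e-10)))\nprint(num_embeddings, round(loss, 3), round(float(perplexity)))\n"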
  }
]