[
  {
    "path": "README.md",
    "content": "## Autoregressive Predictive Coding\nThis repository contains the official implementation (in PyTorch) of Autoregressive Predictive Coding (APC) proposed in [An Unsupervised Autoregressive Model for Speech Representation Learning](https://arxiv.org/abs/1904.03240).\n\nAPC is a speech feature extractor trained on a large amount of unlabeled data. With an unsupervised, autoregressive training objective, representations learned by APC not only capture general acoustic characteristics such as speaker and phone information from the speech signals, but are also highly accessible to downstream models--our experimental results on phone classification show that a linear classifier taking the APC representations as the input features significantly outperforms a multi-layer percepron using the surface features.\n\n## Dependencies\n* Python 3.5\n* PyTorch 1.0\n\n## Dataset\nIn the paper, we used the train-clean-360 split from the [LibriSpeech](http://www.openslr.org/12/) corpus for training the APC models, and the dev-clean split for keeping track of the training loss. We used the log Mel spectrograms, which were generated by running the Kaldi scripts, as the input acoustic features to the APC models. Of course you can generate the log Mel spectrograms yourself, but to help you better reproduce our results, here we provide the links to the data proprocessed by us that can be directly fed to the APC models. We also include other data splits that we did not use in the paper for you to explore, e.g., you can try training an APC model on a larger and nosier set (e.g., train-other-500) and see if it learns more robust speech representations.\n* [train-clean-100](https://www.dropbox.com/s/kl6ivulhucukdz1/train-clean-100.xz?dl=0)\n* [train-clean-360](https://www.dropbox.com/s/0hzg2momellrpoj/train-clean-360.xz?dl=0) (used for training APC models in our paper)\n* [train-other-500](https://www.dropbox.com/s/uy0aex30ufq2po8/train-other.xz?dl=0)\n* [dev-clean](https://www.dropbox.com/s/4f1ypyowwmkfapx/dev-clean.xz?dl=0) (used for tracing the training loss)\n\n## Training APC\nBelow we will follow the paper and use train-clean-360 and dev-clean as demonstration. Once you have downloaded the data, unzip them by running:\n```bash\nxz -d train-clean-360.xz\nxz -d dev-clean.xz\n```\nThen, create a directory `librispeech_data/kaldi` and move the data into it:\n```bash\nmkdir -p librispeech_data/kaldi\nmv train-clean-360-hires-norm.blogmel librispeech_data/kaldi\nmv dev-clean-hires-norm.blogmel librispeech_data/kaldi\n```\nNow we will have to transform the data into the format loadable by the PyTorch DataLoader. To do so, simply run:\n```bash\n# Prepare the training set\npython prepare_data.py --librispeech_from_kaldi librispeech_data/kaldi/train-clean-360-hires-norm.blogmel --save_dir librispeech_data/preprocessed/train-clean-360-hires-norm.blogmel\n# Prepare the valication set\npython prepare_data.py --librispeech_from_kaldi librispeech_data/kaldi/dev-clean-hires-norm.blogmel --save_dir librispeech_data/preprocessed/dev-clean-hires-norm-blogmel\n```\nOnce the program is done, you will see a directory `preprocessed/` inside `librispeech_data/` that contains all the preprocessed PyTorch tensors.\n\nTo train an APC model, simply run:\n```bash\npython train_apc.py\n```\nBy default, the trained models will be put in `logs/`. You can also use Tensorboard to trace the training progress. 
There are many other configurations you can try; check `train_apc.py` for more details--it is well documented and should be self-explanatory.\n\n## Feature extraction\nOnce you have trained your APC model, you can use it to extract speech features from your target dataset. To do so, run a forward pass of the trained model over the target dataset and retrieve the extracted features:\n```python\n_, feats = model(inputs, lengths)\n```\n`feats` is a PyTorch tensor of shape (`num_layers`, `batch_size`, `seq_len`, `rnn_hidden_size`) where:\n- `num_layers` is the RNN depth of your APC model (if your model has a Prenet, its outputs are prepended and the first dimension becomes `num_layers + 1`)\n- `batch_size` is your inference batch size\n- `seq_len` is the maximum sequence length, determined when you run `prepare_data.py`. By default this value is 1600.\n- `rnn_hidden_size` is the dimensionality of the RNN hidden units.\n\nAs you can see, `feats` is essentially the RNN hidden states in an APC model. You can think of APC as a speech version of [ELMo](https://www.aclweb.org/anthology/N18-1202) if you are familiar with it.\n\nThere are many ways to incorporate `feats` into your downstream task. One of the easiest ways is to take only the outputs of the last RNN layer (i.e., `feats[-1, :, :, :]`) as the input features to your downstream model, which is what we did in our paper. Feel free to explore other mechanisms. See `example_extract_features.py` for a minimal end-to-end sketch.\n\n## Pre-trained models\nWe release the pre-trained models that were used to produce the numbers reported in the paper, where `n` denotes the prediction time shift used during training (the `--time_shift` flag in `train_apc.py`). `load_pretrained_model.py` provides a simple example of loading a pre-trained model.\n* [n = 1](https://www.dropbox.com/s/qyb1gicjkhv0wz9/bs32-rhl3-rhs512-rd0-adam-res-ts1.pt?dl=0)\n* [n = 2](https://www.dropbox.com/s/76amvx3fccfmp2n/bs32-rhl3-rhs512-rd0-adam-res-ts2.pt?dl=0)\n* [n = 3](https://www.dropbox.com/s/9nwj8y0djiw9pek/bs32-rhl3-rhs512-rd0-adam-res-ts3.pt?dl=0)\n* [n = 5](https://www.dropbox.com/s/8pqlr5wg89eicwk/bs32-rhl3-rhs512-rd0-adam-res-ts5.pt?dl=0)\n* [n = 10](https://www.dropbox.com/s/ucpf66k89xkm1jw/bs32-rhl3-rhs512-rd0-adam-res-ts10.pt?dl=0)\n* [n = 20](https://www.dropbox.com/s/wa01myucfifloqo/bs32-rhl3-rhs512-rd0-adam-res-ts20.pt?dl=0)\n\n## Reference\nPlease cite our paper(s) if you find this repository useful. The first paper proposes the APC objective, while the second paper applies it to speech recognition, speech translation, and speaker identification, and provides a more systematic analysis of the learned representations. Cite both if you are kind enough!\n```\n@inproceedings{chung2019unsupervised,\n  title = {An unsupervised autoregressive model for speech representation learning},\n  author = {Chung, Yu-An and Hsu, Wei-Ning and Tang, Hao and Glass, James},\n  booktitle = {Interspeech},\n  year = {2019}\n}\n```\n```\n@inproceedings{chung2020generative,\n  title = {Generative pre-training for speech with autoregressive predictive coding},\n  author = {Chung, Yu-An and Glass, James},\n  booktitle = {ICASSP},\n  year = {2020}\n}\n```\n\n## Contact\nFeel free to shoot me an <a href=\"mailto:andyyuan@mit.edu\">email</a> for any inquiries about the paper and this repository.\n"
  },
  {
    "path": "apc_model.py",
    "content": "import torch\nfrom torch import nn\nfrom torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence\n\n\nclass Prenet(nn.Module):\n  \"\"\"Prenet is a multi-layer fully-connected network with ReLU activations.\n  During training and testing (i.e., feature extraction), each input frame is\n  passed into the Prenet, and the Prenet output is then fed to the RNN. If\n  Prenet configuration is None, the input frames will be directly fed to the\n  RNN without any transformation.\n  \"\"\"\n\n  def __init__(self, input_size, num_layers, hidden_size, dropout):\n    super(Prenet, self).__init__()\n    input_sizes = [input_size] + [hidden_size] * (num_layers - 1)\n    output_sizes = [hidden_size] * num_layers\n\n    self.layers = nn.ModuleList(\n      [nn.Linear(in_features=in_size, out_features=out_size)\n      for (in_size, out_size) in zip(input_sizes, output_sizes)])\n\n    self.relu = nn.ReLU()\n    self.dropout = nn.Dropout(dropout)\n\n\n  def forward(self, inputs):\n    # inputs: (batch_size, seq_len, mel_dim)\n    for layer in self.layers:\n      inputs = self.dropout(self.relu(layer(inputs)))\n\n    return inputs\n    # inputs: (batch_size, seq_len, out_dim)\n\n\nclass Postnet(nn.Module):\n  \"\"\"Postnet is a simple linear layer for predicting the target frames given\n  the RNN context during training. We don't need the Postnet for feature\n  extraction.\n  \"\"\"\n\n  def __init__(self, input_size, output_size=80):\n    super(Postnet, self).__init__()\n    self.layer = nn.Conv1d(in_channels=input_size, out_channels=output_size,\n                           kernel_size=1, stride=1)\n\n\n  def forward(self, inputs):\n    # inputs: (batch_size, seq_len, hidden_size)\n    inputs = torch.transpose(inputs, 1, 2)\n    # inputs: (batch_size, hidden_size, seq_len) -- for conv1d operation\n\n    return torch.transpose(self.layer(inputs), 1, 2)\n    # (batch_size, seq_len, output_size) -- back to the original shape\n\n\nclass APCModel(nn.Module):\n  \"\"\"This class defines Autoregressive Predictive Coding (APC), a model that\n  learns to extract general speech features from unlabeled speech data. These\n  features are shown to contain rich speaker and phone information, and are\n  useful for a wide range of downstream tasks such as speaker verification\n  and phone classification.\n\n  An APC model consists of a Prenet (optional), a multi-layer GRU network,\n  and a Postnet. For each time step during training, the Prenet transforms\n  the input frame into a latent representation, which is then consumed by\n  the GRU network for generating internal representations across the layers.\n  Finally, the Postnet takes the output of the last GRU layer and attempts to\n  predict the target frame.\n\n  After training, to extract features from the data of your interest, which\n  do not have to be i.i.d. 
with the training data, simply feed the data forward\n  through the APC model, and take the internal representations\n  (i.e., the GRU hidden states) as the extracted features and use them in\n  your tasks.\n  \"\"\"\n\n  def __init__(self, mel_dim, prenet_config, rnn_config):\n    super(APCModel, self).__init__()\n    self.mel_dim = mel_dim\n\n    if prenet_config is not None:\n      # Make sure the dimensionalities are correct\n      assert prenet_config.input_size == mel_dim\n      assert prenet_config.hidden_size == rnn_config.input_size\n      assert rnn_config.input_size == rnn_config.hidden_size\n      self.prenet = Prenet(\n        input_size=prenet_config.input_size,\n        num_layers=prenet_config.num_layers,\n        hidden_size=prenet_config.hidden_size,\n        dropout=prenet_config.dropout)\n    else:\n      assert rnn_config.input_size == mel_dim\n      self.prenet = None\n\n    in_sizes = ([rnn_config.input_size] +\n                [rnn_config.hidden_size] * (rnn_config.num_layers - 1))\n    out_sizes = [rnn_config.hidden_size] * rnn_config.num_layers\n    self.rnns = nn.ModuleList(\n      [nn.GRU(input_size=in_size, hidden_size=out_size, batch_first=True)\n      for (in_size, out_size) in zip(in_sizes, out_sizes)])\n\n    self.rnn_dropout = nn.Dropout(rnn_config.dropout)\n    self.rnn_residual = rnn_config.residual\n\n    self.postnet = Postnet(\n      input_size=rnn_config.hidden_size,\n      output_size=self.mel_dim)\n\n\n  def forward(self, inputs, lengths):\n    \"\"\"Forward function for both training and testing (feature extraction).\n\n    input:\n      inputs: (batch_size, seq_len, mel_dim)\n      lengths: (batch_size,)\n\n    return:\n      predicted_mel: (batch_size, seq_len, mel_dim)\n      internal_reps: (num_layers + x, batch_size, seq_len, rnn_hidden_size),\n        where x is 1 if there's a prenet, otherwise 0\n    \"\"\"\n    seq_len = inputs.size(1)\n\n    if self.prenet is not None:\n      rnn_inputs = self.prenet(inputs)\n      # rnn_inputs: (batch_size, seq_len, rnn_input_size)\n      internal_reps = [rnn_inputs]\n      # also include the prenet outputs in internal_reps\n    else:\n      rnn_inputs = inputs\n      internal_reps = []\n\n    packed_rnn_inputs = pack_padded_sequence(rnn_inputs, lengths,\n                                             batch_first=True)\n\n    for i, layer in enumerate(self.rnns):\n      packed_rnn_outputs, _ = layer(packed_rnn_inputs)\n\n      rnn_outputs, _ = pad_packed_sequence(\n        packed_rnn_outputs, batch_first=True, total_length=seq_len)\n      # rnn_outputs: (batch_size, seq_len, rnn_hidden_size)\n\n      if i + 1 < len(self.rnns):\n        # apply dropout to the outputs of all RNN layers except the last\n        rnn_outputs = self.rnn_dropout(rnn_outputs)\n\n      rnn_inputs, _ = pad_packed_sequence(\n        packed_rnn_inputs, batch_first=True, total_length=seq_len)\n      # rnn_inputs: (batch_size, seq_len, rnn_hidden_size)\n\n      if self.rnn_residual and rnn_inputs.size(-1) == rnn_outputs.size(-1):\n        # residual connection (only when input and output sizes match)\n        rnn_outputs = rnn_outputs + rnn_inputs\n\n      internal_reps.append(rnn_outputs)\n\n      packed_rnn_inputs = pack_padded_sequence(rnn_outputs, lengths,\n                                               batch_first=True)\n\n    predicted_mel = self.postnet(rnn_outputs)\n    # predicted_mel: (batch_size, seq_len, mel_dim)\n\n    internal_reps = torch.stack(internal_reps)\n\n    return predicted_mel, internal_reps\n    # predicted_mel is only for training; internal_reps is the extracted\n    # features\n"
  },
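  {
    "path": "example_forward_shapes.py",
    "content": "\"\"\"Hypothetical sketch (not part of the original release): runs a small\nAPCModel on random inputs to illustrate the shapes documented in\napc_model.py. All names and sizes below are illustrative only.\n\"\"\"\n\nimport torch\n\nfrom apc_model import APCModel\nfrom utils import RNNConfig\n\n\ndef main():\n  # Same architecture family as the paper: no prenet, 3-layer 512-unit GRU.\n  rnn_config = RNNConfig(input_size=80, hidden_size=512, num_layers=3,\n                         dropout=0., residual=True)\n  model = APCModel(mel_dim=80, prenet_config=None, rnn_config=rnn_config)\n\n  # Two fake utterances padded to 100 frames; lengths must be sorted in\n  # descending order because APCModel uses pack_padded_sequence internally.\n  inputs = torch.randn(2, 100, 80)\n  lengths = torch.tensor([100, 80])\n\n  predicted_mel, internal_reps = model(inputs, lengths)\n  print(predicted_mel.shape)   # torch.Size([2, 100, 80])\n  print(internal_reps.shape)   # torch.Size([3, 2, 100, 512])\n\n\nif __name__ == '__main__':\n  main()\n"
  },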
  {
    "path": "datasets.py",
    "content": "from os import listdir\nfrom os.path import join\nimport pickle\n\nimport torch\nfrom torch.utils import data\n\n\nclass LibriSpeech(data.Dataset):\n  def __init__(self, path):\n    self.path = path\n    self.ids = [f for f in listdir(self.path) if f.endswith('.pt')]\n    with open(join(path, 'lengths.pkl'), 'rb') as f:\n      self.lengths = pickle.load(f)\n\n  def __len__(self):\n    return len(self.ids)\n\n  def __getitem__(self, index):\n    x = torch.load(join(self.path, self.ids[index]))\n    l = self.lengths[self.ids[index]]\n    return x, l\n"
  },
  {
    "path": "load_pretrained_model.py",
    "content": "\"\"\"Example of loading a pre-trained APC model.\"\"\"\n\nimport torch\n\nfrom apc_model import APCModel\nfrom utils import PrenetConfig, RNNConfig\n\n\ndef main():\n  prenet_config = None\n  rnn_config = RNNConfig(input_size=80, hidden_size=512, num_layers=3,\n                         dropout=0.)\n  pretrained_apc = APCModel(mel_dim=80, prenet_config=prenet_config,\n                            rnn_config=rnn_config).cuda()\n\n  pretrained_weights_path = 'bs32-rhl3-rhs512-rd0-adam-res-ts3.pt'\n  pretrained_apc.load_state_dict(torch.load(pretrained_weights_path))\n\n  # Load data and perform your task ...\n"
  },
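  {
    "path": "example_extract_features.py",
    "content": "\"\"\"Hypothetical sketch (not part of the original release): extracts APC\nfeatures with a pre-trained model, as described in the README's feature\nextraction section. Assumes you have downloaded the n = 3 checkpoint and\npreprocessed dev-clean as in README.md; the paths below are illustrative.\n\"\"\"\n\nimport torch\nfrom torch.utils import data\n\nfrom apc_model import APCModel\nfrom datasets import LibriSpeech\nfrom utils import RNNConfig\n\n\ndef main():\n  # Same architecture as the released checkpoints (see load_pretrained_model.py).\n  rnn_config = RNNConfig(input_size=80, hidden_size=512, num_layers=3,\n                         dropout=0., residual=True)\n  model = APCModel(mel_dim=80, prenet_config=None,\n                   rnn_config=rnn_config).cuda()\n  model.load_state_dict(torch.load('bs32-rhl3-rhs512-rd0-adam-res-ts3.pt'))\n  model.eval()\n\n  dataset = LibriSpeech('./librispeech_data/preprocessed/dev-clean')\n  loader = data.DataLoader(dataset, batch_size=32, shuffle=False)\n\n  with torch.no_grad():\n    for xs, ls in loader:\n      # Sort each batch by length in descending order, as train_apc.py does,\n      # since APCModel uses pack_padded_sequence internally.\n      ls, indices = torch.sort(ls, descending=True)\n      xs = xs[indices].cuda()\n\n      _, feats = model(xs, ls)\n      # feats: (num_layers, batch_size, seq_len, rnn_hidden_size); take the\n      # last RNN layer as the downstream input features, as in the paper.\n      last_layer_feats = feats[-1]\n      break  # this sketch only processes a single batch\n\n\nif __name__ == '__main__':\n  main()\n"
  },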
  {
    "path": "prepare_data.py",
    "content": "import os\nimport argparse\nimport pickle\n\nimport torch\nimport torch.nn.functional as F\n\n\ndef main():\n  parser = argparse.ArgumentParser(\"Configuration for data preparation\")\n  parser.add_argument(\"--librispeech_from_kaldi\", default=\"./librispeech_data/kaldi/dev-clean-hires-norm.blogmel\", type=str,\n    help=\"Path to the librispeech log Mel features generated by the Kaldi scripts\")\n  parser.add_argument(\"--max_seq_len\", default=1600, type=int,\n    help=\"The maximum length (number of frames) of each sequence; sequences will be truncated or padded (with zero vectors)\n          to this length\")\n  parser.add_argument(\"--save_dir\", default=\"./librispeech_data/preprocessed/dev-clean\", type=str,\n    help=\"Directory to save the preprocessed pytorch tensors\")\n  config = parser.parse_args()\n\n  os.makedirs(config.save_dir, exist_ok=True)\n\n  id2len = {}\n  with open(config.librispeech_from_kaldi, 'r') as f:\n    # process the file line by line\n    for line in f:\n      data = line.strip().split()\n\n      if len(data) == 1:\n        if data[0] == '.':  # end of the current utterance\n          id2len[utt_id + '.pt'] = min(len(log_mel), config.max_seq_len)\n          log_mel = torch.FloatTensor(log_mel)  # convert the 2D list to a pytorch tensor\n          log_mel = F.pad(log_mel, (0, 0, 0, config.max_seq_len - log_mel.size(0))) # pad or truncate\n          torch.save(log_mel, os.path.join(config.save_dir, utt_id + '.pt'))\n\n        else: # here starts a new utterance\n          utt_id = data[0]\n          log_mel = []\n\n      else:\n        log_mel.append([float(i) for i in data])\n\n  with open(os.path.join(config.save_dir, 'lengths.pkl'), 'wb') as f:\n    pickle.dump(id2len, f, protocol=4)\n\n\nif __name__ == '__main__':\n  main()\n"
  },
  {
    "path": "train_apc.py",
    "content": "import os\nimport logging\nimport argparse\n\nimport numpy as np\nimport torch\nfrom torch.autograd import Variable\nfrom torch import nn, optim\nfrom torch.utils import data\nimport tensorboard_logger\nfrom tensorboard_logger import log_value\n\nfrom apc_model import APCModel\nfrom datasets import LibriSpeech\nfrom utils import PrenetConfig, RNNConfig\n\n\n\ndef main():\n  parser = argparse.ArgumentParser(\n    description=\"Configuration for training an APC model\")\n\n  # Prenet architecture (note that all APC models in the paper DO NOT\n  # incoporate a prenet)\n  parser.add_argument(\"--prenet_num_layers\", default=0, type=int,\n    help=\"Number of ReLU layers as prenet\")\n  parser.add_argument(\"--prenet_dropout\", default=0., type=float,\n    help=\"Dropout for prenet\")\n\n  # RNN architecture\n  parser.add_argument(\"--rnn_num_layers\", default=3, type=int,\n    help=\"Number of RNN layers in the APC model\")\n  parser.add_argument(\"--rnn_hidden_size\", default=512, type=int,\n    help=\"Number of hidden units in each RNN layer\")\n  parser.add_argument(\"--rnn_dropout\", default=0., type=float,\n    help=\"Dropout for each RNN output layer except the last one\")\n  parser.add_argument(\"--rnn_residual\", action=\"store_true\",\n    help=\"Apply residual connections between RNN layers if specified\")\n\n  # Training configuration\n  parser.add_argument(\"--optimizer\", default=\"adam\", type=str,\n    help=\"The gradient descent optimizer (e.g., sgd, adam, etc.)\")\n  parser.add_argument(\"--batch_size\", default=32, type=int,\n    help=\"Training minibatch size\")\n  parser.add_argument(\"--learning_rate\", default=0.0001, type=float,\n    help=\"Initial learning rate\")\n  parser.add_argument(\"--epochs\", default=100, type=int,\n    help=\"Number of training epochs\")\n  parser.add_argument(\"--time_shift\", default=1, type=int,\n    help=\"Given f_{t}, predict f_{t + n}, where n is the time_shift\")\n  parser.add_argument(\"--clip_thresh\", default=1.0, type=float,\n    help=\"Threshold for clipping the gradients\")\n\n  # Misc configurations\n  parser.add_argument(\"--feature_dim\", default=80, type=int,\n    help=\"The dimension of the input frame\")\n  parser.add_argument(\"--load_data_workers\", default=2, type=int,\n    help=\"Number of parallel data loaders\")\n  parser.add_argument(\"--experiment_name\", default=\"foo\", type=str,\n    help=\"Name of this experiment\")\n  parser.add_argument(\"--store_path\", default=\"./logs\", type=str,\n    help=\"Where to save the trained models and logs\")\n  parser.add_argument(\"--librispeech_path\",\n    default=\"./librispeech_data/preprocessed\", type=str,\n    help=\"Path to the librispeech directory\")\n\n  config = parser.parse_args()\n\n  model_dir = os.path.join(config.store_path, config.experiment_name + '.dir')\n  os.makedirs(config.store_path, exist_ok=True)\n  os.makedirs(model_dir, exist_ok=True)\n\n  logging.basicConfig(\n    level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s',\n    filename=os.path.join(model_dir, config.experiment_name),\n    filemode='w')\n\n  # define a new Handler to log to console as well\n  console = logging.StreamHandler()\n  console.setLevel(logging.INFO)\n  formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')\n  console.setFormatter(formatter)\n  logging.getLogger('').addHandler(console)\n\n  logging.info('Model Parameters: ')\n  logging.info('Prenet Depth: %d' % (config.prenet_num_layers))\n  logging.info('Prenet Dropout: %f' % 
  logging.info('RNN Depth: %d' % (config.rnn_num_layers))\n  logging.info('RNN Hidden Dim: %d' % (config.rnn_hidden_size))\n  logging.info('RNN Residual Connections: %s' % (config.rnn_residual))\n  logging.info('RNN Dropout: %f' % (config.rnn_dropout))\n  logging.info('Optimizer: %s' % (config.optimizer))\n  logging.info('Batch Size: %d' % (config.batch_size))\n  logging.info('Initial Learning Rate: %f' % (config.learning_rate))\n  logging.info('Time Shift: %d' % (config.time_shift))\n  logging.info('Gradient Clip Threshold: %f' % (config.clip_thresh))\n\n  if config.prenet_num_layers == 0:\n    prenet_config = None\n    rnn_config = RNNConfig(\n      config.feature_dim, config.rnn_hidden_size, config.rnn_num_layers,\n      config.rnn_dropout, config.rnn_residual)\n  else:\n    prenet_config = PrenetConfig(\n      config.feature_dim, config.rnn_hidden_size, config.prenet_num_layers,\n      config.prenet_dropout)\n    rnn_config = RNNConfig(\n      config.rnn_hidden_size, config.rnn_hidden_size, config.rnn_num_layers,\n      config.rnn_dropout, config.rnn_residual)\n\n  model = APCModel(\n    mel_dim=config.feature_dim,\n    prenet_config=prenet_config,\n    rnn_config=rnn_config).cuda()\n\n  criterion = nn.L1Loss()\n\n  if config.optimizer == 'adam':\n    optimizer = optim.Adam(model.parameters(), lr=config.learning_rate)\n  elif config.optimizer == 'adadelta':\n    optimizer = optim.Adadelta(model.parameters())\n  elif config.optimizer == 'sgd':\n    optimizer = optim.SGD(model.parameters(), lr=config.learning_rate)\n  elif config.optimizer == 'adagrad':\n    optimizer = optim.Adagrad(model.parameters(), lr=config.learning_rate)\n  elif config.optimizer == 'rmsprop':\n    optimizer = optim.RMSprop(model.parameters(), lr=config.learning_rate)\n  else:\n    raise NotImplementedError(\"Optimizer not supported for this task\")\n\n  # setup tensorboard logger\n  tensorboard_logger.configure(\n    os.path.join(model_dir, config.experiment_name + '.tb_log'))\n\n  train_set = LibriSpeech(os.path.join(\n    config.librispeech_path, 'train-clean-360'))\n  train_data_loader = data.DataLoader(\n    train_set, batch_size=config.batch_size,\n    num_workers=config.load_data_workers, shuffle=True)\n\n  val_set = LibriSpeech(os.path.join(config.librispeech_path, 'dev-clean'))\n  val_data_loader = data.DataLoader(\n    val_set, batch_size=config.batch_size,\n    num_workers=config.load_data_workers, shuffle=False)\n\n  torch.save(model.state_dict(),\n    os.path.join(model_dir, config.experiment_name + '__epoch_0.model'))\n\n  global_step = 0\n  for epoch_i in range(config.epochs):\n\n    ####################\n    ##### Training #####\n    ####################\n\n    model.train()\n    train_losses = []\n    for batch_x, batch_l in train_data_loader:\n\n      # sort the batch by length in descending order, as required by\n      # pack_padded_sequence inside APCModel\n      _, indices = torch.sort(batch_l, descending=True)\n\n      batch_x = batch_x[indices].cuda()\n      batch_l = batch_l[indices].cuda()\n\n      # APC objective: take frames f_1, ..., f_{T-n} as input and predict\n      # the frames n steps ahead, f_{1+n}, ..., f_T, where n = time_shift\n      outputs, _ = model(\n        batch_x[:, :-config.time_shift, :], batch_l - config.time_shift)\n\n      optimizer.zero_grad()\n      loss = criterion(outputs, batch_x[:, config.time_shift:, :])\n      train_losses.append(loss.item())\n      loss.backward()\n      grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(),\n                                                 config.clip_thresh)\n      optimizer.step()\n\n      log_value(\"training loss (step-wise)\", float(loss.item()), global_step)\n      log_value(\"gradient norm\", grad_norm, global_step)\n
\n      global_step += 1\n\n    ######################\n    ##### Validation #####\n    ######################\n\n    model.eval()\n    val_losses = []\n    with torch.no_grad():\n      for val_batch_x, val_batch_l in val_data_loader:\n        _, val_indices = torch.sort(val_batch_l, descending=True)\n\n        val_batch_x = val_batch_x[val_indices].cuda()\n        val_batch_l = val_batch_l[val_indices].cuda()\n\n        val_outputs, _ = model(\n          val_batch_x[:, :-config.time_shift, :],\n          val_batch_l - config.time_shift)\n\n        val_loss = criterion(val_outputs,\n                             val_batch_x[:, config.time_shift:, :])\n        val_losses.append(val_loss.item())\n\n    logging.info('Epoch: %d Training Loss: %.5f Validation Loss: %.5f' % (\n      epoch_i + 1, np.mean(train_losses), np.mean(val_losses)))\n\n    log_value(\"training loss (epoch-wise)\", np.mean(train_losses), epoch_i)\n    log_value(\"validation loss (epoch-wise)\", np.mean(val_losses), epoch_i)\n\n    torch.save(model.state_dict(),\n      os.path.join(model_dir,\n                   config.experiment_name + '__epoch_%d.model' % (epoch_i + 1)))\n\n\nif __name__ == '__main__':\n  main()\n"
  },
  {
    "path": "utils.py",
    "content": "from collections import namedtuple\n\n\nPrenetConfig = namedtuple(\n  'PrenetConfig', ['input_size', 'hidden_size', 'num_layers', 'dropout'])\n\nRNNConfig = namedtuple(\n  'RNNConfig',\n  ['input_size', 'hidden_size', 'num_layers', 'dropout', 'residual'])\n"
  }
]