[
  {
    "path": "README.md",
    "content": "# PyTorch Dual Learning\n\nThis is a PyTorch implementation of [Dual Learning for Machine Translation](https://arxiv.org/abs/1611.00179).\n\nThe NMT models used as channels are based heavily on [pcyin/pytorch\\_nmt](https://github.com/pcyin/pytorch_nmt).\n\n### Usage\n\nYou need to prepare these models for the dual learning step:\n- Language Models x 2\n- Translation Models x 2\n\n##### Warm-up Step\n\n- Language Models \\\n    Check here [lm/](https://github.com/yistLin/pytorch-dual-learning/tree/master/lm)\n- Translation Models \\\n    Check here [nmt/](https://github.com/yistLin/pytorch-dual-learning/tree/master/nmt)\n\n##### Dual Learning Step\n\nDuring the reinforcement learning process, the translation models gain rewards from the language models and from each other's reconstructions, and are updated accordingly. \\\nYou can find more details in the paper.\n\n- Training \\\n    You can simply use this [script](https://github.com/yistLin/pytorch-dual-learning/blob/master/train-dual.sh);\n you only have to change the paths and names of your models.\n- Test \\\n    To use the trained models, you can just treat them as [NMT models](https://github.com/pcyin/pytorch_nmt).\n\n\n### Test (Basic)\n\nFirst, we trained our basic models on 450K bilingual pairs (only 10% of the data) as a warm start. Then we set up a dual-learning game and trained the two models with reinforcement learning.\n\n##### Configs\n\n- Reward\n    - language model reward: sum of token log-probabilities, normalized by the square root of the sentence length\n    - final reward:\n        ```\n        rk = 0.01 x r1 + 0.99 x r2\n        ```\n\n- Optimizer\n    ```\n    torch.optim.SGD(models[m].parameters(), lr=1e-3, momentum=0.9)\n    ```\n\n##### Results\n\n- English-German\n    - after 600 iterations\n        ```\n        BLEU = 21.39, 49.1/26.8/17.6/12.2\n        ```\n    - after 1200 iterations\n        ```\n        BLEU = 21.49, 48.6/26.6/17.4/12.0\n        ```\n\n- German-English\n    - after 600 iterations\n        ```\n        BLEU = 25.89, 56.0/32.8/22.3/15.8\n        ```\n    - after 1200 iterations\n        ```\n        BLEU = 25.94, 55.9/32.7/22.2/15.8\n        ```\n\n##### Comparisons\n\n| Model        | Original | iter300 | iter600 | iter900 | iter1200 | iter1500 | iter3000 | iter4500 | iter6600 |\n|--------------|---------:|--------:|--------:|--------:|---------:|---------:|---------:|---------:|---------:|\n| EN-DE        | 20.54    | 21.27   | 21.39   | 21.49   | 21.46    | 21.49    | 21.56    | 21.62    | 21.60    |\n| EN-DE (bleu) |          | 21.42   | 21.57   | 21.55   | 21.55    |          |          |          |          |\n| DE-EN        | 24.69    | 25.90   | 25.89   | 25.91   | 26.03    | 25.94    | 26.02    | 26.18    | 26.20    |\n| DE-EN (bleu) |          | 25.96   | 26.25   | 26.22   | 26.18    |          |          |          |          |\n"
  },
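  {
    "path": "examples/reward_sketch.py",
    "content": "\"\"\"Hypothetical sketch, NOT part of the original repo: it re-implements, on\nsynthetic numbers, the reward combination described in README.md and\nimplemented in dual.py (r1 = standardized language-model reward, r2 =\nstandardized communication/reconstruction reward, rk = alpha * r1 +\n(1 - alpha) * r2). The file name and the toy inputs are made up.\"\"\"\nimport torch\n\n\ndef combined_reward(lm_log_probs, bw_losses, alpha=0.01):\n    # r1: language-model reward, standardized across the beam samples\n    r1 = torch.FloatTensor(lm_log_probs)\n    r1 = (r1 - r1.mean()) / r1.std()\n    # r2: communication reward; a lower reconstruction loss means a higher reward\n    r2 = torch.FloatTensor(bw_losses)\n    r2 = (r2.mean() - r2) / r2.std()\n    # rk = alpha * r1 + (1 - alpha) * r2; the README config uses alpha = 0.01\n    return alpha * r1 + (1 - alpha) * r2\n\n\nif __name__ == '__main__':\n    print(combined_reward([-4.2, -3.1, -5.0], [2.3, 1.9, 2.8]))\n"
  },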
  {
    "path": "data.py",
    "content": "import os\nimport torch\nimport pickle\n\n\nclass Dictionary(object):\n    def __init__(self):\n        self.word2idx = {'<unk>': 0}\n        self.idx2word = ['<unk>']\n        self.wordcnt = {'<unk>': 1}\n\n    def add_word(self, word):\n        if word not in self.word2idx:\n            self.idx2word.append(word)\n            self.word2idx[word] = len(self.idx2word) - 1\n            self.wordcnt[word] = 1\n        else:\n            self.wordcnt[word] = self.wordcnt[word] + 1\n        return self.word2idx[word]\n\n    def getid(self, word, thresh=10):\n        if (word not in self.word2idx) or (self.wordcnt[word] < thresh):\n            return self.word2idx['<unk>']\n        return self.word2idx[word]\n\n    def __len__(self):\n        return len(self.idx2word)\n\n\nclass Corpus(object):\n    def __init__(self, path):\n        self.dictionary = Dictionary()\n        self.train = self.tokenize(os.path.join(path, 'train.txt'))\n        self.valid = self.tokenize(os.path.join(path, 'valid.txt'))\n        self.test = self.tokenize(os.path.join(path, 'test.txt'))\n\n        with open(os.path.join(path, 'dict.pkl'), 'wb') as f:\n            pickle.dump(self.dictionary, f)\n\n    def tokenize(self, path):\n        \"\"\"Tokenizes a text file.\"\"\"\n        assert os.path.exists(path)\n        # Add words to the dictionary\n        with open(path, 'r') as f:\n            tokens = 0\n            for line in f:\n                words = ['<sos>'] + line.split() + ['<eos>']\n                tokens += len(words)\n                for word in words:\n                    self.dictionary.add_word(word)\n\n        # Tokenize file content\n        with open(path, 'r') as f:\n            ids = torch.LongTensor(tokens)\n            token = 0\n            for line in f:\n                words = ['<sos>'] + line.split() + ['<eos>']\n                for word in words:\n                    ids[token] = self.dictionary.getid(word)\n                    token += 1\n\n        return ids\n\n"
  },
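  {
    "path": "examples/dictionary_sketch.py",
    "content": "\"\"\"Hypothetical sketch, NOT part of the original repo: it demonstrates the\nrare-word handling of data.Dictionary, whose getid() falls back to <unk>\nfor words seen fewer than `thresh` (default 10) times. The file name and\nthe toy words are made up.\"\"\"\nfrom data import Dictionary\n\nd = Dictionary()\nfor _ in range(10):\n    d.add_word('common')   # counted 10 times\nd.add_word('rare')         # counted once\n\nprint(d.getid('common'))   # its own index, since its count reaches the threshold\nprint(d.getid('rare'))     # 0, i.e. <unk>, since its count is below 10\nprint(d.getid('unseen'))   # 0, i.e. <unk>, never added\n"
  },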
  {
    "path": "dual.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport sys\nimport torch\nimport argparse\nimport random\n\nfrom torch.autograd import Variable\n\nfrom nmt import read_corpus, data_iter\nfrom nmt import NMT, to_input_variable\n\nfrom lm import LMProb\nfrom lm import model\n\n\ndef dual(args):\n    vocabs = {}\n    opts = {}\n    state_dicts = {}\n    train_srcs = {}\n    lms = {}\n\n    # load model params & training data\n    for i in range(2):\n        model_id = (['A', 'B'])[i]\n        print('loading pieces, part {:s}'.format(model_id))\n\n        print('  load model{:s}     from [{:s}]'.format(model_id, args.nmt[i]), file=sys.stderr)\n        params = torch.load(args.nmt[i], map_location=lambda storage, loc: storage)  # load model onto CPU\n        vocabs[model_id] = params['vocab']\n        opts[model_id] = params['args']\n        state_dicts[model_id] = params['state_dict']\n\n        print('  load train_src{:s} from [{:s}]'.format(model_id, args.src[i]), file=sys.stderr)\n        train_srcs[model_id] = read_corpus(args.src[i], source='src')\n\n        print('  load lm{:s}        from [{:s}]'.format(model_id, args.lm[i]), file=sys.stderr)\n        lms[model_id] = LMProb(args.lm[i], args.dict[i])\n\n    models = {}\n    optimizers = {}\n\n    for m in ['A', 'B']:\n        # build model\n        opts[m].cuda = args.cuda\n\n        models[m] = NMT(opts[m], vocabs[m])\n        models[m].load_state_dict(state_dicts[m])\n        models[m].train()\n\n        if args.cuda:\n            models[m] = models[m].cuda()\n\n        random.shuffle(train_srcs[m])\n\n        # optimizer\n        # optimizers[m] = torch.optim.Adam(models[m].parameters())\n        optimizers[m] = torch.optim.SGD(models[m].parameters(), lr=1e-3, momentum=0.9)\n\n    # loss function\n    loss_nll = torch.nn.NLLLoss()\n    loss_ce = torch.nn.CrossEntropyLoss()\n\n    epoch = 0\n    start = args.start_iter\n\n    while True:\n        epoch += 1\n        print('\\nstart of epoch {:d}'.format(epoch))\n\n        data = {}\n        data['A'] = iter(train_srcs['A'])\n        data['B'] = iter(train_srcs['B'])\n\n        start += (epoch - 1) * len(train_srcs['A']) + 1\n\n        for t in range(start, start + len(train_srcs['A'])):\n            show_log = False\n            if t % args.log_every == 0:\n                show_log = True\n\n            if show_log:\n                print('\\nstep', t)\n\n            for m in ['A', 'B']:\n                lm_probs = []\n\n                NLL_losses = []\n                CE_losses = []\n\n                modelA = models[m]\n                modelB = models[change(m)]\n                lmB = lms[change(m)]\n                optimizerA = optimizers[m]\n                optimizerB = optimizers[change(m)]\n                vocabB = vocabs[change(m)]\n                s = next(data[m])\n\n                if show_log:\n                    print('\\n{:s} -> {:s}'.format(m, change(m)))\n                    print('[s]', ' '.join(s))\n\n                hyps = modelA.beam(s, beam_size=5)\n\n                for ids, smid, dist in hyps:\n                    if show_log:\n                        print('[smid]', ' '.join(smid))\n\n                    var_ids = Variable(torch.LongTensor(ids[1:]), requires_grad=False)\n                    NLL_losses.append(loss_nll(dist, var_ids).cpu())\n\n                    lm_probs.append(lmB.get_prob(smid))\n\n                    src_sent_var = to_input_variable([smid], vocabB.src, cuda=args.cuda)\n                    tgt_sent_var = to_input_variable([['<s>'] + s + ['</s>']], vocabB.tgt, cuda=args.cuda)\n                    src_sent_len = [len(smid)]\n\n                    score = modelB(src_sent_var, src_sent_len, tgt_sent_var[:-1]).squeeze(1)\n\n                    CE_losses.append(loss_ce(score, tgt_sent_var[1:].view(-1)).cpu())\n\n                # losses on target language\n                fw_losses = torch.cat(NLL_losses)\n\n                # losses on reconstruction\n                bw_losses = torch.cat(CE_losses)\n\n                # r1, language model reward\n                r1s = Variable(torch.FloatTensor(lm_probs), requires_grad=False)\n                r1s = (r1s - torch.mean(r1s)) / torch.std(r1s)\n\n                # r2, communication reward\n                r2s = Variable(bw_losses.data, requires_grad=False)\n                r2s = (torch.mean(r2s) - r2s) / torch.std(r2s)\n\n                # rk = alpha * r1 + (1 - alpha) * r2\n                rks = r1s * args.alpha + r2s * (1 - args.alpha)\n\n                # averaging loss over samples\n                A_loss = torch.mean(fw_losses * rks)\n                B_loss = torch.mean(bw_losses * (1 - args.alpha))\n\n                if show_log:\n                    for r1, r2, rk, fw_loss, bw_loss in zip(r1s.data.numpy(), r2s.data.numpy(), rks.data.numpy(), fw_losses.data.numpy(), bw_losses.data.numpy()):\n                        print('r1={:7.4f}\\t r2={:7.4f}\\t rk={:7.4f}\\t fw_loss={:7.4f}\\t bw_loss={:7.4f}'.format(r1, r2, rk, fw_loss, bw_loss))\n                    print('A loss = {:.7f} \\t B loss = {:.7f}'.format(A_loss.data.numpy().item(), B_loss.data.numpy().item()))\n\n                optimizerA.zero_grad()\n                optimizerB.zero_grad()\n\n                A_loss.backward()\n                B_loss.backward()\n\n                optimizerA.step()\n                optimizerB.step()\n\n            if t % args.save_n_iter == 0:\n                print('\\nsaving model')\n                models['A'].save('{}.iter{}.bin'.format(args.model[0], t))\n                models['B'].save('{}.iter{}.bin'.format(args.model[1], t))\n\n\ndef change(m):\n    if m == 'A':\n        return 'B'\n    else:\n        return 'A'\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--nmt', nargs=2, required=True, help='pre-trained NMT model path')\n    parser.add_argument('--lm', nargs=2, required=True, help='language model path')\n    parser.add_argument('--dict', nargs=2, required=True, help='dictionary path')\n    parser.add_argument('--src', nargs=2, required=True, help='training data path')\n    parser.add_argument('--model', nargs=2, type=str, default=['modelA', 'modelB'])\n    parser.add_argument('--log_every', type=int, default=10)\n    parser.add_argument('--save_n_iter', type=int, default=1000)\n    parser.add_argument('--alpha', type=float, default=0.5)\n    parser.add_argument('--start_iter', type=int, default=0)\n    parser.add_argument('--cuda', action='store_true')\n    args = parser.parse_args()\n\n    print(args)\n\n    dual(args)\n\n"
  },
  {
    "path": "lm/README.md",
    "content": "# Language Model\n\nThis language model is based heavily on [Word-level language modeling RNN - pytorch/examples](https://github.com/pytorch/examples/tree/master/word_language_model). To train it, just use the code here and follow the steps provided there.\n\n### Usage\n\nLoad the pre-trained model and dictionary first, then use `get_prob()` to get the language model probability.\n\n```python\nwords = ['we', 'have', 'told', 'that', 'this', 'will']\nlmprob = LMProb('wmt16-en.pt', 'data/wmt16-en/dict.pkl')\nnorm_prob = lmprob.get_prob(words, verbose=True)\nprint('norm_prob = {:.4f}'.format(norm_prob))\n```\n"
  },
  {
    "path": "lm/__init__.py",
    "content": "from lm.lm_prob import LMProb\nfrom lm import model\n"
  },
  {
    "path": "lm/data.py",
    "content": "import os\nimport torch\nimport pickle\n\n\nclass Dictionary(object):\n    def __init__(self):\n        self.word2idx = {'<unk>': 0}\n        self.idx2word = ['<unk>']\n        self.wordcnt = {'<unk>': 1}\n\n    def add_word(self, word):\n        if word not in self.word2idx:\n            self.idx2word.append(word)\n            self.word2idx[word] = len(self.idx2word) - 1\n            self.wordcnt[word] = 1\n        else:\n            self.wordcnt[word] = self.wordcnt[word] + 1\n        return self.word2idx[word]\n\n    def getid(self, word, thresh=10):\n        if (word not in self.word2idx) or (self.wordcnt[word] < thresh):\n            return self.word2idx['<unk>']\n        return self.word2idx[word]\n\n    def __len__(self):\n        return len(self.idx2word)\n\n\nclass Corpus(object):\n    def __init__(self, path):\n        self.dictionary = Dictionary()\n        self.train = self.tokenize(os.path.join(path, 'train.txt'))\n        self.valid = self.tokenize(os.path.join(path, 'valid.txt'))\n        self.test = self.tokenize(os.path.join(path, 'test.txt'))\n\n        with open(os.path.join(path, 'dict.pkl'), 'wb') as f:\n            pickle.dump(self.dictionary, f)\n\n    def tokenize(self, path):\n        \"\"\"Tokenizes a text file.\"\"\"\n        assert os.path.exists(path)\n        # Add words to the dictionary\n        with open(path, 'r') as f:\n            tokens = 0\n            for line in f:\n                words = ['<sos>'] + line.split() + ['<eos>']\n                tokens += len(words)\n                for word in words:\n                    self.dictionary.add_word(word)\n\n        # Tokenize file content\n        with open(path, 'r') as f:\n            ids = torch.LongTensor(tokens)\n            token = 0\n            for line in f:\n                words = ['<sos>'] + line.split() + ['<eos>']\n                for word in words:\n                    ids[token] = self.dictionary.getid(word)\n                    token += 1\n\n        return ids\n\n"
  },
  {
    "path": "lm/generate.py",
    "content": "###############################################################################\n# Language Modeling on Penn Tree Bank\n#\n# This file generates new sentences sampled from the language model\n#\n###############################################################################\n\nimport argparse\n\nimport torch\nfrom torch.autograd import Variable\n\nimport data\n\nparser = argparse.ArgumentParser(description='PyTorch PTB Language Model')\n\n# Model parameters.\nparser.add_argument('--data', type=str, default='./data/penn',\n                    help='location of the data corpus')\nparser.add_argument('--checkpoint', type=str, default='./model.pt',\n                    help='model checkpoint to use')\nparser.add_argument('--outf', type=str, default='output.txt',\n                    help='output file for generated text')\nparser.add_argument('--words', type=int, default=1000,\n                    help='number of words to generate')\nparser.add_argument('--seed', type=int, default=1111,\n                    help='random seed')\nparser.add_argument('--cuda', action='store_true',\n                    help='use CUDA')\nparser.add_argument('--temperature', type=float, default=1.0,\n                    help='temperature - higher will increase diversity')\nparser.add_argument('--log-interval', type=int, default=100,\n                    help='reporting interval')\nargs = parser.parse_args()\n\n# Set the random seed manually for reproducibility.\ntorch.manual_seed(args.seed)\nif torch.cuda.is_available():\n    if not args.cuda:\n        print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n    else:\n        torch.cuda.manual_seed(args.seed)\n\nif args.temperature < 1e-3:\n    parser.error(\"--temperature has to be greater than or equal to 1e-3\")\n\nwith open(args.checkpoint, 'rb') as f:\n    model = torch.load(f)\nmodel.eval()\n\nif args.cuda:\n    model.cuda()\nelse:\n    model.cpu()\n\ncorpus = data.Corpus(args.data)\nntokens = len(corpus.dictionary)\nhidden = model.init_hidden(1)\ninput = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)\nif args.cuda:\n    input.data = input.data.cuda()\n\nwith open(args.outf, 'w') as outf:\n    for i in range(args.words):\n        output, hidden = model(input, hidden)\n        word_weights = output.squeeze().data.div(args.temperature).exp().cpu()\n        word_idx = torch.multinomial(word_weights, 1)[0]\n        input.data.fill_(word_idx)\n        word = corpus.dictionary.idx2word[word_idx]\n\n        outf.write(word + ('\\n' if i % 20 == 19 else ' '))\n\n        if i % args.log_interval == 0:\n            print('| Generated {}/{} words'.format(i, args.words))\n"
  },
  {
    "path": "lm/lm_prob.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport math\n\nimport torch\nimport pickle\nimport numpy as np\nfrom torch.autograd import Variable\n\n\nclass LMProb():\n\n    def __init__(self, model_path, dict_path):\n        with open(model_path, 'rb') as f:\n            self.model = torch.load(f)\n            self.model.eval()\n            self.model = self.model.cpu()\n\n        with open(dict_path, 'rb') as f:\n            self.dictionary = pickle.load(f)\n\n    def get_prob(self, words, verbose=False):\n        pad_words = ['<sos>'] + words + ['<eos>']\n        indxs = [self.dictionary.getid(w) for w in pad_words]\n        input = Variable(torch.LongTensor([int(indxs[0])]).unsqueeze(0), volatile=True)\n\n        if verbose:\n            print('words =', pad_words)\n            print('indxs =', indxs)\n\n        hidden = self.model.init_hidden(1)\n        log_probs = []\n        for i in range(1, len(pad_words)):\n            output, hidden = self.model(input, hidden)\n            word_weights = output.squeeze().data.exp()\n\n            prob = word_weights[indxs[i]] / word_weights.sum()\n            log_probs.append(math.log(prob))\n            input.data.fill_(int(indxs[i]))\n\n        if verbose:\n            for i in range(len(log_probs)):\n                print('  {} => {:d},\\tlogP(w|s)={:.4f}'.format(pad_words[i+1], indxs[i+1], log_probs[i]))\n            print('\\n  => sum_prob = {:.4f}'.format(sum(log_probs)))\n\n        return sum(log_probs) / math.sqrt(len(log_probs))\n\n\nif __name__ == '__main__':\n    words = ['we', 'have', 'told', 'that', 'this', 'will']\n    lmprob = LMProb('wmt16-en.pt', 'data/wmt16-en/dict.pkl')\n    norm_prob = lmprob.get_prob(words, verbose=True)\n    print('\\n  => norm_prob = {:.4f}'.format(norm_prob))\n\n"
  },
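  {
    "path": "examples/norm_prob_sketch.py",
    "content": "\"\"\"Hypothetical sketch, NOT part of the original repo: it isolates the\nnormalization used by LMProb.get_prob in lm/lm_prob.py, where the summed\ntoken log-probabilities are divided by the square root of the sentence\nlength. The file name and the toy log-probabilities are made up.\"\"\"\nimport math\n\n\ndef normalized_logprob(log_probs):\n    # sum of per-token log-probabilities, scaled by sqrt(length)\n    return sum(log_probs) / math.sqrt(len(log_probs))\n\n\nif __name__ == '__main__':\n    print('{:.4f}'.format(normalized_logprob([-1.2, -0.7, -2.3])))\n"
  },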
  {
    "path": "lm/main.py",
    "content": "# coding: utf-8\nimport argparse\nimport time\nimport math\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nimport data\nimport model\n\nparser = argparse.ArgumentParser(description='PyTorch PennTreeBank RNN/LSTM Language Model')\nparser.add_argument('--data', type=str, default='./data/penn',\n                    help='location of the data corpus')\nparser.add_argument('--emsize', type=int, default=200,\n                    help='size of word embeddings')\nparser.add_argument('--nhid', type=int, default=200,\n                    help='number of hidden units per layer')\nparser.add_argument('--nlayers', type=int, default=2,\n                    help='number of layers')\nparser.add_argument('--lr', type=float, default=20,\n                    help='initial learning rate')\nparser.add_argument('--clip', type=float, default=0.25,\n                    help='gradient clipping')\nparser.add_argument('--epochs', type=int, default=40,\n                    help='upper epoch limit')\nparser.add_argument('--batch_size', type=int, default=20, metavar='N',\n                    help='batch size')\nparser.add_argument('--bptt', type=int, default=35,\n                    help='sequence length')\nparser.add_argument('--dropout', type=float, default=0.2,\n                    help='dropout applied to layers (0 = no dropout)')\nparser.add_argument('--tied', action='store_true',\n                    help='tie the word embedding and softmax weights')\nparser.add_argument('--seed', type=int, default=1111,\n                    help='random seed')\nparser.add_argument('--cuda', action='store_true',\n                    help='use CUDA')\nparser.add_argument('--log-interval', type=int, default=200, metavar='N',\n                    help='report interval')\nparser.add_argument('--save', type=str,  default='model.pt',\n                    help='path to save the final model')\nargs = parser.parse_args()\n\n# Set the random seed manually for reproducibility.\ntorch.manual_seed(args.seed)\nif torch.cuda.is_available():\n    if not args.cuda:\n        print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n    else:\n        torch.cuda.manual_seed(args.seed)\n\n###############################################################################\n# Load data\n###############################################################################\n\ncorpus = data.Corpus(args.data)\n\n# Starting from sequential data, batchify arranges the dataset into columns.\n# For instance, with the alphabet as the sequence and batch size 4, we'd get\n# ┌ a g m s ┐\n# │ b h n t │\n# │ c i o u │\n# │ d j p v │\n# │ e k q w │\n# └ f l r x ┘.\n# These columns are treated as independent by the model, which means that the\n# dependence of e. g. 'g' on 'f' can not be learned, but allows more efficient\n# batch processing.\n\ndef batchify(data, bsz):\n    # Work out how cleanly we can divide the dataset into bsz parts.\n    nbatch = data.size(0) // bsz\n    # Trim off any extra elements that wouldn't cleanly fit (remainders).\n    data = data.narrow(0, 0, nbatch * bsz)\n    # Evenly divide the data across the bsz batches.\n    data = data.view(bsz, -1).t().contiguous()\n    if args.cuda:\n        data = data.cuda()\n    return data\n\neval_batch_size = 10\ntrain_data = batchify(corpus.train, args.batch_size)\nval_data = batchify(corpus.valid, eval_batch_size)\ntest_data = batchify(corpus.test, eval_batch_size)\n\n###############################################################################\n# Build the model\n###############################################################################\n\nntokens = len(corpus.dictionary)\nmodel = model.RNNModel(ntokens, args.emsize, args.nhid, args.nlayers, args.dropout, args.tied)\nif args.cuda:\n    model.cuda()\n\ncriterion = nn.CrossEntropyLoss()\n\n###############################################################################\n# Training code\n###############################################################################\n\ndef repackage_hidden(h):\n    \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n    if type(h) == Variable:\n        return Variable(h.data)\n    else:\n        return tuple(repackage_hidden(v) for v in h)\n\n\n# get_batch subdivides the source data into chunks of length args.bptt.\n# If source is equal to the example output of the batchify function, with\n# a bptt-limit of 2, we'd get the following two Variables for i = 0:\n# ┌ a g m s ┐ ┌ b h n t ┐\n# └ b h n t ┘ └ c i o u ┘\n# Note that despite the name of the function, the subdivison of data is not\n# done along the batch dimension (i.e. dimension 1), since that was handled\n# by the batchify function. The chunks are along dimension 0, corresponding\n# to the seq_len dimension in the LSTM.\n\ndef get_batch(source, i, evaluation=False):\n    seq_len = min(args.bptt, len(source) - 1 - i)\n    data = Variable(source[i:i+seq_len], volatile=evaluation)\n    target = Variable(source[i+1:i+1+seq_len].view(-1))\n    return data, target\n\n\ndef evaluate(data_source):\n    # Turn on evaluation mode which disables dropout.\n    model.eval()\n    total_loss = 0\n    ntokens = len(corpus.dictionary)\n    hidden = model.init_hidden(eval_batch_size)\n    for i in range(0, data_source.size(0) - 1, args.bptt):\n        data, targets = get_batch(data_source, i, evaluation=True)\n        output, hidden = model(data, hidden)\n        output_flat = output.view(-1, ntokens)\n        total_loss += len(data) * criterion(output_flat, targets).data\n        hidden = repackage_hidden(hidden)\n    return total_loss[0] / len(data_source)\n\n\ndef train():\n    # Turn on training mode which enables dropout.\n    model.train()\n    total_loss = 0\n    start_time = time.time()\n    ntokens = len(corpus.dictionary)\n    hidden = model.init_hidden(args.batch_size)\n    for batch, i in enumerate(range(0, train_data.size(0) - 1, args.bptt)):\n        data, targets = get_batch(train_data, i)\n        # Starting each batch, we detach the hidden state from how it was previously produced.\n        # If we didn't, the model would try backpropagating all the way to start of the dataset.\n        hidden = repackage_hidden(hidden)\n        model.zero_grad()\n        output, hidden = model(data, hidden)\n        loss = criterion(output.view(-1, ntokens), targets)\n        loss.backward()\n\n        # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n        torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)\n        for p in model.parameters():\n            p.data.add_(-lr, p.grad.data)\n\n        total_loss += loss.data\n\n        if batch % args.log_interval == 0 and batch > 0:\n            cur_loss = total_loss[0] / args.log_interval\n            elapsed = time.time() - start_time\n            print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} | ms/batch {:5.2f} | '\n                    'loss {:5.2f} | ppl {:8.2f}'.format(\n                epoch, batch, len(train_data) // args.bptt, lr,\n                elapsed * 1000 / args.log_interval, cur_loss, math.exp(cur_loss)))\n            total_loss = 0\n            start_time = time.time()\n\n# Loop over epochs.\nlr = args.lr\nbest_val_loss = None\n\n# At any point you can hit Ctrl + C to break out of training early.\ntry:\n    for epoch in range(1, args.epochs+1):\n        epoch_start_time = time.time()\n        train()\n        val_loss = evaluate(val_data)\n        print('-' * 89)\n        print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '\n                'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),\n                                           val_loss, math.exp(val_loss)))\n        print('-' * 89)\n        # Save the model if the validation loss is the best we've seen so far.\n        if not best_val_loss or val_loss < best_val_loss:\n            with open(args.save, 'wb') as f:\n                torch.save(model, f)\n            best_val_loss = val_loss\n        else:\n            # Anneal the learning rate if no improvement has been seen in the validation dataset.\n            lr /= 4.0\nexcept KeyboardInterrupt:\n    print('-' * 89)\n    print('Exiting from training early')\n\n# Load the best saved model.\nwith open(args.save, 'rb') as f:\n    model = torch.load(f)\n\n# Run on test data.\ntest_loss = evaluate(test_data)\nprint('=' * 89)\nprint('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(\n    test_loss, math.exp(test_loss)))\nprint('=' * 89)\n\n"
  },
  {
    "path": "lm/model.py",
    "content": "import torch.nn as nn\nfrom torch.autograd import Variable\n\n\nclass RNNModel(nn.Module):\n    \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n    def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5, tie_weights=False):\n        super(RNNModel, self).__init__()\n        self.drop = nn.Dropout(dropout)\n        self.encoder = nn.Embedding(ntoken, ninp)\n        self.rnn = nn.GRU(ninp, nhid, nlayers, dropout=dropout)\n        self.decoder = nn.Linear(nhid, ntoken)\n\n        # Optionally tie weights as in:\n        # \"Using the Output Embedding to Improve Language Models\" (Press & Wolf 2016)\n        # https://arxiv.org/abs/1608.05859\n        # and\n        # \"Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling\" (Inan et al. 2016)\n        # https://arxiv.org/abs/1611.01462\n        if tie_weights:\n            if nhid != ninp:\n                raise ValueError('When using the tied flag, nhid must be equal to emsize')\n            self.decoder.weight = self.encoder.weight\n\n        self.init_weights()\n\n        self.nhid = nhid\n        self.nlayers = nlayers\n\n    def init_weights(self):\n        initrange = 0.1\n        self.encoder.weight.data.uniform_(-initrange, initrange)\n        self.decoder.bias.data.fill_(0)\n        self.decoder.weight.data.uniform_(-initrange, initrange)\n\n    def forward(self, input, hidden):\n        emb = self.drop(self.encoder(input))\n        output, hidden = self.rnn(emb, hidden)\n        output = self.drop(output)\n        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))\n        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden\n\n    def init_hidden(self, bsz):\n        weight = next(self.parameters()).data\n        return Variable(weight.new(self.nlayers, bsz, self.nhid).zero_())\n\n"
  },
  {
    "path": "model.py",
    "content": "import torch.nn as nn\nfrom torch.autograd import Variable\n\n\nclass RNNModel(nn.Module):\n    \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n    def __init__(self, ntoken, ninp, nhid, nlayers, dropout=0.5, tie_weights=False):\n        super(RNNModel, self).__init__()\n        self.drop = nn.Dropout(dropout)\n        self.encoder = nn.Embedding(ntoken, ninp)\n        self.rnn = nn.GRU(ninp, nhid, nlayers, dropout=dropout)\n        self.decoder = nn.Linear(nhid, ntoken)\n\n        # Optionally tie weights as in:\n        # \"Using the Output Embedding to Improve Language Models\" (Press & Wolf 2016)\n        # https://arxiv.org/abs/1608.05859\n        # and\n        # \"Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling\" (Inan et al. 2016)\n        # https://arxiv.org/abs/1611.01462\n        if tie_weights:\n            if nhid != ninp:\n                raise ValueError('When using the tied flag, nhid must be equal to emsize')\n            self.decoder.weight = self.encoder.weight\n\n        self.init_weights()\n\n        self.nhid = nhid\n        self.nlayers = nlayers\n\n    def init_weights(self):\n        initrange = 0.1\n        self.encoder.weight.data.uniform_(-initrange, initrange)\n        self.decoder.bias.data.fill_(0)\n        self.decoder.weight.data.uniform_(-initrange, initrange)\n\n    def forward(self, input, hidden):\n        emb = self.drop(self.encoder(input))\n        output, hidden = self.rnn(emb, hidden)\n        output = self.drop(output)\n        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))\n        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden\n\n    def init_hidden(self, bsz):\n        weight = next(self.parameters()).data\n        return Variable(weight.new(self.nlayers, bsz, self.nhid).zero_())\n\n"
  },
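  {
    "path": "examples/rnn_model_smoke_test.py",
    "content": "\"\"\"Hypothetical smoke test, NOT part of the original repo: it builds a tiny\nmodel.RNNModel (the GRU language model at the repository root) and runs one\nforward step, assuming the Variable-era PyTorch API used throughout this\nrepo. The file name and the toy sizes are made up.\"\"\"\nimport torch\nfrom torch.autograd import Variable\n\nfrom model import RNNModel\n\nntoken, ninp, nhid, nlayers = 100, 16, 16, 2\nm = RNNModel(ntoken, ninp, nhid, nlayers, dropout=0.0)\n\nhidden = m.init_hidden(1)                # (nlayers, batch, nhid) zeros\ninp = Variable(torch.LongTensor([[3]]))  # (seq_len=1, batch=1)\noutput, hidden = m(inp, hidden)\nprint(output.size())                     # (1, 1, ntoken)\n"
  },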
  {
    "path": "nmt/README.md",
    "content": "# Neural Machine Translation\n\nThis NMT model is based heavily on [pcyin/pytorch\\_nmt](https://github.com/pcyin/pytorch_nmt). To train a model, just follow the steps provided there.\n\nBasically, you need to:\n1. use `vocab.py` to generate the vocab file\n2. use `nmt.py` to train the model\n\nAnd you may find `scripts/train.sh` helpful.\n\n### Test Results\n\n##### WMT16\n\n- English-German\n\t- with 10% data\n\t\t```\n\t\tBLEU = 20.54, 49.0/26.7/17.4/11.9 (BP=0.900, ratio=0.904, hyp_len=129552, ref_len=143246)\n\t\t```\n\t- with 100% data\n\t\t```\n\t\tBLEU = 22.94, 50.9/28.9/19.5/13.8 (BP=0.915, ratio=0.919, hyp_len=131583, ref_len=143246)\n\t\t```\n\n- German-English\n\t- with 10% data\n\t\t```\n\t\tBLEU = 24.69, 56.2/32.5/22.0/15.5 (BP=0.880, ratio=0.886, hyp_len=123720, ref_len=139584)\n\t\t```\n\t- with 100% data\n\t\t```\n\t\tBLEU = 26.73, 57.6/34.4/23.7/17.1 (BP=0.894, ratio=0.899, hyp_len=125477, ref_len=139584)\n\t\t```\n\n"
  },
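  {
    "path": "examples/bleu_sketch.py",
    "content": "\"\"\"Hypothetical sketch, NOT part of the original repo: it shows how corpus\nBLEU scores like those reported in nmt/README.md can be computed with NLTK's\ncorpus_bleu, the same function nmt/nmt.py imports for validation. The file\nname and the toy sentences are made up.\"\"\"\nfrom nltk.translate.bleu_score import corpus_bleu\n\n# one hypothesis per source sentence; each entry in `references` is a list\n# of acceptable reference translations for that sentence\nreferences = [[['we', 'have', 'told', 'that', '.']]]\nhypotheses = [['we', 'have', 'told', 'that', '.']]\n\nprint('BLEU = {:.4f}'.format(corpus_bleu(references, hypotheses)))\n"
  },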
  {
    "path": "nmt/__init__.py",
    "content": "from nmt.util import read_corpus, data_iter\nfrom nmt import vocab\nfrom nmt.model import NMT, to_input_variable\n"
  },
  {
    "path": "nmt/channel.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport sys\nimport torch\nimport argparse\n\nfrom util import read_corpus, data_iter\nfrom model import NMT\n\n\ndef sample(args):\n    train_data_src = read_corpus(args.src_file, source='src')\n    train_data_tgt = read_corpus(args.tgt_file, source='tgt')\n    train_data = zip(train_data_src, train_data_tgt)\n\n    # load model params\n    print('load model from [%s]' % args.model_bin, file=sys.stderr)\n    params = torch.load(args.model_bin, map_location=lambda storage, loc: storage)\n    vocab = params['vocab']\n    opt = params['args']\n    state_dict = params['state_dict']\n\n    # build model\n    model = NMT(opt, vocab)\n    model.load_state_dict(state_dict)\n    model.eval()\n    model = model.cuda()\n\n    # sampling\n    print('begin sampling')\n    train_iter = cum_samples = 0\n    for src_sents, tgt_sents in data_iter(train_data, batch_size=1):\n        train_iter += 1\n        samples = model.sample(src_sents, sample_size=5, to_word=True)\n        cum_samples += sum(len(sample) for sample in samples)\n\n        for i, tgt_sent in enumerate(tgt_sents):\n            print('*' * 80)\n            print('target:' + ' '.join(tgt_sent))\n            tgt_samples = samples[i]\n            print('samples:')\n            for sid, sample in enumerate(tgt_samples, 1):\n                print('[%d] %s' % (sid, ' '.join(sample[1:-1])))\n            print('*' * 80)\n\n\ndef beam(args):\n    # load model params\n    print('load model from [%s]' % args.model_bin, file=sys.stderr)\n    params = torch.load(args.model_bin, map_location=lambda storage, loc: storage)\n    vocab = params['vocab']\n    opt = params['args']\n    state_dict = params['state_dict']\n\n    # build model\n    model = NMT(opt, vocab)\n    model.load_state_dict(state_dict)\n    model.train()\n    # model.eval()\n    model = model.cuda()\n\n    # loss function\n    loss_fn = torch.nn.NLLLoss()\n\n    # sampling\n    print('begin beam searching')\n    src_sent = ['we', 'have', 'told', 'that', '.']\n    hyps = model.beam(src_sent)\n\n    print('src_sent:', ' '.join(src_sent))\n    for ids, hyp, dist in hyps:\n        print('tgt_sent:', ' '.join(hyp))\n        print('tgt_ids :', end=' ')\n        for id in ids:\n            print(id, end=', ')\n        print()\n        print('out_dist:', dist)\n\n        var_ids = torch.autograd.Variable(torch.LongTensor(ids[1:]), requires_grad=False)\n        loss = loss_fn(dist, var_ids)\n        print('NLL loss =', loss)\n\n    loss.backward()\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('model_bin')\n    parser.add_argument('src_file')\n    parser.add_argument('tgt_file')\n    args = parser.parse_args()\n\n    # sample(args)\n    beam(args)\n\n"
  },
  {
    "path": "nmt/model.py",
    "content": "# -*- coding: utf-8 -*-\nimport sys\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.utils\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nfrom torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence\n\n\ndef input_transpose(sents, pad_token):\n    max_len = max(len(s) for s in sents)\n    batch_size = len(sents)\n\n    sents_t = []\n    masks = []\n    for i in range(max_len):\n        sents_t.append([sents[k][i] if len(sents[k]) > i else pad_token for k in range(batch_size)])\n        masks.append([1 if len(sents[k]) > i else 0 for k in range(batch_size)])\n\n    return sents_t, masks\n\n\ndef word2id(sents, vocab):\n    if type(sents[0]) == list:\n        return [[vocab[w] for w in s] for s in sents]\n    else:\n        return [vocab[w] for w in sents]\n\n\ndef tensor_transform(linear, X):\n    # X is a 3D tensor\n    return linear(X.contiguous().view(-1, X.size(2))).view(X.size(0), X.size(1), -1)\n\n\nclass NMT(nn.Module):\n    def __init__(self, args, vocab):\n        super(NMT, self).__init__()\n\n        self.args = args\n\n        self.vocab = vocab\n\n        self.src_embed = nn.Embedding(len(vocab.src), self.args.embed_size, padding_idx=vocab.src['<pad>'])\n        self.tgt_embed = nn.Embedding(len(vocab.tgt), self.args.embed_size, padding_idx=vocab.tgt['<pad>'])\n\n        self.encoder_lstm = nn.LSTM(self.args.embed_size, self.args.hidden_size, bidirectional=True, dropout=self.args.dropout)\n        self.decoder_lstm = nn.LSTMCell(self.args.embed_size + self.args.hidden_size, self.args.hidden_size)\n\n        # attention: dot product attention\n        # project source encoding to decoder rnn's h space\n        self.att_src_linear = nn.Linear(self.args.hidden_size * 2, self.args.hidden_size, bias=False)\n\n        # transformation of decoder hidden states and context vectors before reading out target words\n        # this produces the `attentional vector` in (Luong et al., 2015)\n        self.att_vec_linear = nn.Linear(self.args.hidden_size * 2 + self.args.hidden_size, self.args.hidden_size, bias=False)\n\n        # prediction layer of the target vocabulary\n        self.readout = nn.Linear(self.args.hidden_size, len(vocab.tgt), bias=False)\n\n        # dropout layer\n        self.dropout = nn.Dropout(self.args.dropout)\n\n        # initialize the decoder's state and cells with encoder hidden states\n        self.decoder_cell_init = nn.Linear(self.args.hidden_size * 2, self.args.hidden_size)\n\n    def forward(self, src_sents, src_sents_len, tgt_words):\n        src_encodings, init_ctx_vec = self.encode(src_sents, src_sents_len)\n        scores = self.decode(src_encodings, init_ctx_vec, tgt_words)\n\n        return scores\n\n    def encode(self, src_sents, src_sents_len):\n        \"\"\"\n        :param src_sents: (src_sent_len, batch_size), sorted by the length of the source\n        :param src_sents_len: (src_sent_len)\n        \"\"\"\n        # (src_sent_len, batch_size, embed_size)\n        src_word_embed = self.src_embed(src_sents)\n        packed_src_embed = pack_padded_sequence(src_word_embed, src_sents_len)\n\n        # output: (src_sent_len, batch_size, hidden_size)\n        output, (last_state, last_cell) = self.encoder_lstm(packed_src_embed)\n        output, _ = pad_packed_sequence(output)\n\n        dec_init_cell = self.decoder_cell_init(torch.cat([last_cell[0], last_cell[1]], 1))\n        dec_init_state = F.tanh(dec_init_cell)\n\n        return output, (dec_init_state, dec_init_cell)\n\n    def decode(self, src_encoding, dec_init_vec, tgt_sents):\n        \"\"\"\n        :param src_encoding: (src_sent_len, batch_size, hidden_size)\n        :param dec_init_vec: (batch_size, hidden_size)\n        :param tgt_sents: (tgt_sent_len, batch_size)\n        :return:\n        \"\"\"\n        init_state = dec_init_vec[0]\n        init_cell = dec_init_vec[1]\n        hidden = (init_state, init_cell)\n\n        new_tensor = init_cell.data.new\n        batch_size = src_encoding.size(1)\n\n        # (batch_size, src_sent_len, hidden_size * 2)\n        src_encoding = src_encoding.permute(1, 0, 2)\n        # (batch_size, src_sent_len, hidden_size)\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n        # initialize attentional vector\n        att_tm1 = Variable(new_tensor(batch_size, self.args.hidden_size).zero_(), requires_grad=False)\n\n        tgt_word_embed = self.tgt_embed(tgt_sents)\n        scores = []\n\n        # start from `<s>`, until y_{T-1}\n        for y_tm1_embed in tgt_word_embed.split(split_size=1):\n            # input feeding: concate y_tm1 and previous attentional vector\n            x = torch.cat([y_tm1_embed.squeeze(0), att_tm1], 1)\n\n            # h_t: (batch_size, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, src_encoding, src_encoding_att_linear)\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))   # E.q. (5)\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)   # E.q. (6)\n            scores.append(score_t)\n\n            att_tm1 = att_t\n            hidden = h_t, cell_t\n\n        scores = torch.stack(scores)\n        return scores\n\n    def translate(self, src_sents, beam_size=None, to_word=True):\n        \"\"\"\n        perform beam search\n        TODO: batched beam search\n        \"\"\"\n        if not type(src_sents[0]) == list:\n            src_sents = [src_sents]\n        if not beam_size:\n            beam_size = self.args.beam_size\n\n        src_sents_var = to_input_variable(src_sents, self.vocab.src, cuda=self.args.cuda, is_test=True)\n\n        src_encoding, dec_init_vec = self.encode(src_sents_var, [len(src_sents[0])])\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n\n        init_state = dec_init_vec[0]\n        init_cell = dec_init_vec[1]\n        hidden = (init_state, init_cell)\n\n        att_tm1 = Variable(torch.zeros(1, self.args.hidden_size), volatile=True)\n        hyp_scores = Variable(torch.zeros(1), volatile=True)\n        if self.args.cuda:\n            att_tm1 = att_tm1.cuda()\n            hyp_scores = hyp_scores.cuda()\n\n        eos_id = self.vocab.tgt['</s>']\n        bos_id = self.vocab.tgt['<s>']\n        tgt_vocab_size = len(self.vocab.tgt)\n\n        hypotheses = [[bos_id]]\n        completed_hypotheses = []\n        completed_hypothesis_scores = []\n\n        t = 0\n        while len(completed_hypotheses) < beam_size and t < self.args.decode_max_time_step:\n            t += 1\n            hyp_num = len(hypotheses)\n\n            expanded_src_encoding = src_encoding.expand(src_encoding.size(0), hyp_num, src_encoding.size(2))\n            expanded_src_encoding_att_linear = src_encoding_att_linear.expand(src_encoding_att_linear.size(0), hyp_num, src_encoding_att_linear.size(2))\n\n            y_tm1 = Variable(torch.LongTensor([hyp[-1] for hyp in hypotheses]), volatile=True)\n            if self.args.cuda:\n                y_tm1 = y_tm1.cuda()\n\n            y_tm1_embed = self.tgt_embed(y_tm1)\n\n            x = torch.cat([y_tm1_embed, att_tm1], 1)\n\n            # h_t: (hyp_num, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, expanded_src_encoding.permute(1, 0, 2), expanded_src_encoding_att_linear.permute(1, 0, 2))\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)\n            p_t = F.log_softmax(score_t)\n\n            live_hyp_num = beam_size - len(completed_hypotheses)\n            new_hyp_scores = (hyp_scores.unsqueeze(1).expand_as(p_t) + p_t).view(-1)\n            top_new_hyp_scores, top_new_hyp_pos = torch.topk(new_hyp_scores, k=live_hyp_num)\n            prev_hyp_ids = top_new_hyp_pos / tgt_vocab_size\n            word_ids = top_new_hyp_pos % tgt_vocab_size\n            # new_hyp_scores = new_hyp_scores[top_new_hyp_pos.data]\n\n            new_hypotheses = []\n\n            live_hyp_ids = []\n            new_hyp_scores = []\n            for prev_hyp_id, word_id, new_hyp_score in zip(prev_hyp_ids.cpu().data, word_ids.cpu().data, top_new_hyp_scores.cpu().data):\n                hyp_tgt_words = hypotheses[prev_hyp_id] + [word_id]\n                if word_id == eos_id:\n                    completed_hypotheses.append(hyp_tgt_words)\n                    completed_hypothesis_scores.append(new_hyp_score)\n                else:\n                    new_hypotheses.append(hyp_tgt_words)\n                    live_hyp_ids.append(prev_hyp_id)\n                    new_hyp_scores.append(new_hyp_score)\n\n            if len(completed_hypotheses) == beam_size:\n                break\n\n            live_hyp_ids = torch.LongTensor(live_hyp_ids)\n            if self.args.cuda:\n                live_hyp_ids = live_hyp_ids.cuda()\n\n            hidden = (h_t[live_hyp_ids], cell_t[live_hyp_ids])\n            att_tm1 = att_t[live_hyp_ids]\n\n            hyp_scores = Variable(torch.FloatTensor(new_hyp_scores), volatile=True) # new_hyp_scores[live_hyp_ids]\n            if self.args.cuda:\n                hyp_scores = hyp_scores.cuda()\n            hypotheses = new_hypotheses\n\n        if len(completed_hypotheses) == 0:\n            completed_hypotheses = [hypotheses[0]]\n            completed_hypothesis_scores = [0.0]\n\n        if to_word:\n            for i, hyp in enumerate(completed_hypotheses):\n                completed_hypotheses[i] = [self.vocab.tgt.id2word[w] for w in hyp]\n\n        ranked_hypotheses = sorted(zip(completed_hypotheses, completed_hypothesis_scores), key=lambda x: x[1], reverse=True)\n\n        return [hyp for hyp, score in ranked_hypotheses]\n\n    def sample(self, src_sents, sample_size=None, to_word=False):\n        if not type(src_sents[0]) == list:\n            src_sents = [src_sents]\n        if not sample_size:\n            sample_size = self.args.sample_size\n\n        src_sents_num = len(src_sents)\n        batch_size = src_sents_num * sample_size\n\n        src_sents_var = to_input_variable(src_sents, self.vocab.src, cuda=self.args.cuda, is_test=True)\n        src_encoding, (dec_init_state, dec_init_cell) = self.encode(src_sents_var, [len(s) for s in src_sents])\n\n        dec_init_state = dec_init_state.repeat(sample_size, 1)\n        dec_init_cell = dec_init_cell.repeat(sample_size, 1)\n        hidden = (dec_init_state, dec_init_cell)\n\n        src_encoding = src_encoding.repeat(1, sample_size, 1)\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n        src_encoding = src_encoding.permute(1, 0, 2)\n        src_encoding_att_linear = src_encoding_att_linear.permute(1, 0, 2)\n\n        new_tensor = dec_init_state.data.new\n        att_tm1 = Variable(new_tensor(batch_size, self.args.hidden_size).zero_(), volatile=True)\n        y_0 = Variable(torch.LongTensor([self.vocab.tgt['<s>'] for _ in range(batch_size)]), volatile=True)\n\n        eos = self.vocab.tgt['</s>']\n        # eos_batch = torch.LongTensor([eos] * batch_size)\n        sample_ends = torch.ByteTensor([0] * batch_size)\n        all_ones = torch.ByteTensor([1] * batch_size)\n        if self.args.cuda:\n            y_0 = y_0.cuda()\n            sample_ends = sample_ends.cuda()\n            all_ones = all_ones.cuda()\n\n        samples = [y_0]\n\n        t = 0\n        while t < self.args.decode_max_time_step:\n            t += 1\n\n            # (sample_size)\n            y_tm1 = samples[-1]\n\n            y_tm1_embed = self.tgt_embed(y_tm1)\n\n            x = torch.cat([y_tm1_embed, att_tm1], 1)\n\n            # h_t: (batch_size, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, src_encoding, src_encoding_att_linear)\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))  # E.q. (5)\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)  # E.q. (6)\n            p_t = F.softmax(score_t)\n\n            if self.args.sample_method == 'random':\n                y_t = torch.multinomial(p_t, num_samples=1).squeeze(1)\n            elif self.args.sample_method == 'greedy':\n                _, y_t = torch.topk(p_t, k=1, dim=1)\n                y_t = y_t.squeeze(1)\n\n            samples.append(y_t)\n\n            sample_ends |= torch.eq(y_t, eos).byte().data\n            if torch.equal(sample_ends, all_ones):\n                break\n\n            # if torch.equal(y_t.data, eos_batch):\n            #     break\n\n            att_tm1 = att_t\n            hidden = h_t, cell_t\n\n        # post-processing\n        completed_samples = [list([list() for _ in range(sample_size)]) for _ in range(src_sents_num)]\n        for y_t in samples:\n            for i, sampled_word in enumerate(y_t.cpu().data):\n                src_sent_id = i % src_sents_num\n                sample_id = i // src_sents_num\n                if len(completed_samples[src_sent_id][sample_id]) == 0 or completed_samples[src_sent_id][sample_id][-1] != eos:\n                    completed_samples[src_sent_id][sample_id].append(sampled_word)\n\n        if to_word:\n            for i, src_sent_samples in enumerate(completed_samples):\n                completed_samples[i] = word2id(src_sent_samples, self.vocab.tgt.id2word)\n\n        return completed_samples\n\n    def beam(self, src_sents, beam_size=3):\n        \"\"\"\n        perform beam search\n        \"\"\"\n        if not type(src_sents[0]) == list:\n            src_sents = [src_sents]\n\n        src_sents_var = to_input_variable(src_sents, self.vocab.src, cuda=self.args.cuda, is_test=False)\n\n        src_encoding, dec_init_vec = self.encode(src_sents_var, [len(src_sents[0])])\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n\n        init_state = dec_init_vec[0]\n        init_cell = dec_init_vec[1]\n        hidden = (init_state, init_cell)\n\n        att_tm1 = Variable(torch.zeros(1, self.args.hidden_size), requires_grad=False)\n        hyp_scores = Variable(torch.zeros(1), requires_grad=False)\n        if self.args.cuda:\n            att_tm1 = att_tm1.cuda()\n            hyp_scores = hyp_scores.cuda()\n\n        eos_id = self.vocab.tgt['</s>']\n        bos_id = self.vocab.tgt['<s>']\n        tgt_vocab_size = len(self.vocab.tgt)\n\n        # store output distributions\n        out_dists = [[]]\n        completed_out_dists = []\n\n        hypotheses = [[bos_id]]\n        completed_hypotheses = []\n        completed_hypothesis_scores = []\n\n        t = 0\n        while len(completed_hypotheses) < beam_size and t < self.args.decode_max_time_step:\n            t += 1\n            hyp_num = len(hypotheses)\n\n            expanded_src_encoding = src_encoding.expand(src_encoding.size(0), hyp_num, src_encoding.size(2))\n            expanded_src_encoding_att_linear = src_encoding_att_linear.expand(src_encoding_att_linear.size(0), hyp_num, src_encoding_att_linear.size(2))\n\n            y_tm1 = Variable(torch.LongTensor([hyp[-1] for hyp in hypotheses]), requires_grad=False)\n            if self.args.cuda:\n                y_tm1 = y_tm1.cuda()\n\n            y_tm1_embed = self.tgt_embed(y_tm1)\n\n            x = torch.cat([y_tm1_embed, att_tm1], 1)\n\n            # h_t: (hyp_num, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, expanded_src_encoding.permute(1, 0, 2), expanded_src_encoding_att_linear.permute(1, 0, 2))\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)\n            p_t = F.log_softmax(score_t)\n\n            live_hyp_num = beam_size - len(completed_hypotheses)\n            new_hyp_scores = (hyp_scores.unsqueeze(1).expand_as(p_t) + p_t).view(-1)\n            top_new_hyp_scores, top_new_hyp_pos = torch.topk(new_hyp_scores, k=live_hyp_num)\n            prev_hyp_ids = top_new_hyp_pos / tgt_vocab_size\n            word_ids = top_new_hyp_pos % tgt_vocab_size\n            # new_hyp_scores = new_hyp_scores[top_new_hyp_pos.data]\n\n            # get output distributions\n            p_t_cpu = p_t.cpu()\n\n            new_out_dists = []\n            new_hypotheses = []\n\n            live_hyp_ids = []\n            new_hyp_scores = []\n            for prev_hyp_id, word_id, new_hyp_score in zip(prev_hyp_ids.cpu().data, word_ids.cpu().data, top_new_hyp_scores.cpu().data):\n                tgt_dists = out_dists[prev_hyp_id] + [p_t_cpu[prev_hyp_id].unsqueeze(0)]\n                hyp_tgt_words = hypotheses[prev_hyp_id] + [word_id]\n                if word_id == eos_id:\n                    completed_out_dists.append(tgt_dists)\n                    completed_hypotheses.append(hyp_tgt_words)\n                    completed_hypothesis_scores.append(new_hyp_score)\n                else:\n                    new_out_dists.append(tgt_dists)\n                    new_hypotheses.append(hyp_tgt_words)\n                    live_hyp_ids.append(prev_hyp_id)\n                    new_hyp_scores.append(new_hyp_score)\n\n            if len(completed_hypotheses) == beam_size:\n                break\n\n            live_hyp_ids = torch.LongTensor(live_hyp_ids)\n            if self.args.cuda:\n                live_hyp_ids = live_hyp_ids.cuda()\n\n            hidden = (h_t[live_hyp_ids], cell_t[live_hyp_ids])\n            att_tm1 = att_t[live_hyp_ids]\n\n            hyp_scores = Variable(torch.FloatTensor(new_hyp_scores), requires_grad=False) # new_hyp_scores[live_hyp_ids]\n            if self.args.cuda:\n                hyp_scores = hyp_scores.cuda()\n\n            out_dists = new_out_dists\n            hypotheses = new_hypotheses\n\n        if len(completed_hypotheses) == 0:\n            completed_out_dists = [out_dists[0]]\n            completed_hypotheses = [hypotheses[0]]\n            completed_hypothesis_scores = [0.0]\n\n        # convert to words\n        completed_hypotheses_words = []\n        for i, hyp in enumerate(completed_hypotheses):\n            completed_hypotheses_words.append([self.vocab.tgt.id2word[w] for w in hyp])\n\n        # merge variables\n        for i, dists in enumerate(completed_out_dists):\n            completed_out_dists[i] = torch.cat(dists, 0)\n\n        # sort with scores\n        ranked_hypotheses = sorted(zip(completed_hypotheses, completed_hypothesis_scores, completed_hypotheses_words, completed_out_dists), key=lambda x: x[1], reverse=True)\n\n        return [(hyp, words, dist) for hyp, score, words, dist in ranked_hypotheses]\n\n    def attention(self, h_t, src_encoding, src_linear_for_att):\n        # (1, batch_size, attention_size) + (src_sent_len, batch_size, attention_size) =>\n        # (src_sent_len, batch_size, attention_size)\n        att_hidden = F.tanh(self.att_h_linear(h_t).unsqueeze(0).expand_as(src_linear_for_att) + src_linear_for_att)\n\n        # (batch_size, src_sent_len)\n        att_weights = F.softmax(tensor_transform(self.att_vec_linear, att_hidden).squeeze(2).permute(1, 0))\n\n        # (batch_size, hidden_size * 2)\n        ctx_vec = torch.bmm(src_encoding.permute(1, 2, 0), att_weights.unsqueeze(2)).squeeze(2)\n\n        return ctx_vec, att_weights\n\n    def dot_prod_attention(self, h_t, src_encoding, src_encoding_att_linear, mask=None):\n        \"\"\"\n        :param h_t: (batch_size, hidden_size)\n        :param src_encoding: (batch_size, src_sent_len, hidden_size * 2)\n        :param src_encoding_att_linear: (batch_size, src_sent_len, hidden_size)\n        :param mask: (batch_size, src_sent_len)\n        \"\"\"\n        # (batch_size, src_sent_len)\n        att_weight = torch.bmm(src_encoding_att_linear, h_t.unsqueeze(2)).squeeze(2)\n        if mask:\n            att_weight.data.masked_fill_(mask, -float('inf'))\n        att_weight = F.softmax(att_weight)\n\n        att_view = (att_weight.size(0), 1, att_weight.size(1))\n        # (batch_size, hidden_size)\n        ctx_vec = torch.bmm(att_weight.view(*att_view), src_encoding).squeeze(1)\n\n        return ctx_vec, att_weight\n\n    def save(self, path):\n        print('save parameters to [%s]' % path, file=sys.stderr)\n        params = {\n            'args': self.args,\n            'vocab': self.vocab,\n            'state_dict': self.state_dict()\n        }\n        torch.save(params, path)\n\n\ndef to_input_variable(sents, vocab, cuda=False, is_test=False):\n    \"\"\"\n    return a tensor of shape (src_sent_len, batch_size)\n    \"\"\"\n\n    word_ids = word2id(sents, vocab)\n    sents_t, masks = input_transpose(word_ids, vocab['<pad>'])\n\n    sents_var = Variable(torch.LongTensor(sents_t), volatile=is_test, requires_grad=False)\n    if cuda:\n        sents_var = sents_var.cuda()\n\n    return sents_var\n\n"
  },
  {
    "path": "nmt/nmt.py",
    "content": "from __future__ import print_function\n\nimport os\nimport sys\nimport time\nimport argparse\nfrom itertools import tee\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.utils\nfrom torch.autograd import Variable\nfrom torch import optim\nfrom torch.nn import Parameter\nimport torch.nn.functional as F\nfrom torch.nn.utils.rnn import pad_packed_sequence, pack_padded_sequence\n\nfrom nltk.translate.bleu_score import corpus_bleu\n\nfrom util import read_corpus, data_iter\nfrom vocab import Vocab, VocabEntry\n\n\ndef init_config():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--seed', default=5783287, type=int, help='random seed')\n    parser.add_argument('--cuda', action='store_true', default=False, help='use gpu')\n    parser.add_argument('--mode', choices=['train', 'raml_train', 'test', 'sample', 'prob', 'interactive'],\n                        default='train', help='run mode')\n    parser.add_argument('--vocab', type=str, help='path of the serialized vocabulary')\n    parser.add_argument('--batch_size', default=32, type=int, help='batch size')\n    parser.add_argument('--beam_size', default=5, type=int, help='beam size for beam search')\n    parser.add_argument('--sample_size', default=10, type=int, help='sample size')\n    parser.add_argument('--embed_size', default=256, type=int, help='size of word embeddings')\n    parser.add_argument('--hidden_size', default=256, type=int, help='size of LSTM hidden states')\n    parser.add_argument('--dropout', default=0., type=float, help='dropout rate')\n\n    parser.add_argument('--train_src', type=str, help='path to the training source file')\n    parser.add_argument('--train_tgt', type=str, help='path to the training target file')\n    parser.add_argument('--dev_src', type=str, help='path to the dev source file')\n    parser.add_argument('--dev_tgt', type=str, help='path to the dev target file')\n    parser.add_argument('--test_src', type=str, help='path to the test source file')\n    parser.add_argument('--test_tgt', type=str, help='path to the test target file')\n\n    parser.add_argument('--decode_max_time_step', default=200, type=int, help='maximum number of time steps used '\n                                                                              'in decoding and sampling')\n\n    parser.add_argument('--valid_niter', default=500, type=int, help='every n iterations to perform validation')\n    parser.add_argument('--valid_metric', default='bleu', choices=['bleu', 'ppl', 'word_acc', 'sent_acc'], help='metric used for validation')\n    parser.add_argument('--log_every', default=50, type=int, help='every n iterations to log training statistics')\n    parser.add_argument('--load_model', default=None, type=str, help='load a pre-trained model')\n    parser.add_argument('--save_to', default='model', type=str, help='save trained model to')\n    parser.add_argument('--save_model_after', default=2, type=int, help='save the model only after n validation iterations')\n    parser.add_argument('--save_to_file', default=None, type=str, help='if provided, save decoding results to file')\n    parser.add_argument('--save_nbest', default=False, action='store_true', help='save nbest decoding results')\n    parser.add_argument('--patience', default=5, type=int, help='training patience')\n    parser.add_argument('--uniform_init', default=None, type=float, help='if specified, use uniform initialization for all parameters')\n    parser.add_argument('--clip_grad', default=5., type=float, help='clip gradients')\n    parser.add_argument('--max_niter', default=-1, type=int, help='maximum number of training iterations')\n    parser.add_argument('--lr', default=0.001, type=float, help='learning rate')\n    parser.add_argument('--lr_decay', default=0.5, type=float, help='decay learning rate if the validation performance drops')\n\n    # raml training\n    parser.add_argument('--debug', default=False, action='store_true')\n    parser.add_argument('--temp', default=0.85, type=float, help='temperature in reward distribution')\n    parser.add_argument('--raml_sample_mode', default='pre_sample',\n                        choices=['pre_sample', 'hamming_distance', 'hamming_distance_impt_sample'],\n                        help='sample mode when using RAML')\n    parser.add_argument('--raml_sample_file', type=str, help='path to the sampled targets')\n    parser.add_argument('--raml_bias_groundtruth', action='store_true', default=False, help='make sure ground truth y* is in samples')\n\n    parser.add_argument('--smooth_bleu', action='store_true', default=False,\n                        help='smooth sentence level BLEU score.')\n\n    #TODO: greedy sampling is still buggy!\n    parser.add_argument('--sample_method', default='random', choices=['random', 'greedy'])\n\n    args = parser.parse_args()\n\n    # seed the RNG\n    torch.manual_seed(args.seed)\n    if args.cuda:\n        torch.cuda.manual_seed(args.seed)\n    np.random.seed(args.seed * 13 // 7)\n\n    return args\n\n\ndef input_transpose(sents, pad_token):\n    max_len = max(len(s) for s in sents)\n    batch_size = len(sents)\n\n    sents_t = []\n    masks = []\n    for i in range(max_len):\n        sents_t.append([sents[k][i] if len(sents[k]) > i else pad_token for k in range(batch_size)])\n        masks.append([1 if len(sents[k]) > i else 0 for k in range(batch_size)])\n\n    return sents_t, masks\n\n\ndef word2id(sents, vocab):\n    if type(sents[0]) == list:\n        return [[vocab[w] for w in s] for s in sents]\n    else:\n        return [vocab[w] for w in sents]\n\n\ndef tensor_transform(linear, X):\n    # X is a 3D tensor\n    return linear(X.contiguous().view(-1, X.size(2))).view(X.size(0), X.size(1), -1)\n\n\nclass NMT(nn.Module):\n    def __init__(self, args, vocab):\n        super(NMT, self).__init__()\n\n        self.args = args\n\n        self.vocab = vocab\n\n        self.src_embed = nn.Embedding(len(vocab.src), args.embed_size, padding_idx=vocab.src['<pad>'])\n        self.tgt_embed = nn.Embedding(len(vocab.tgt), args.embed_size, padding_idx=vocab.tgt['<pad>'])\n\n        self.encoder_lstm = nn.LSTM(args.embed_size, args.hidden_size, bidirectional=True, dropout=args.dropout)\n        self.decoder_lstm = nn.LSTMCell(args.embed_size + args.hidden_size, args.hidden_size)\n\n        # attention: dot product attention\n        # project source encoding to decoder rnn's h space\n        self.att_src_linear = nn.Linear(args.hidden_size * 2, args.hidden_size, bias=False)\n\n        # transformation of decoder hidden states and context vectors before reading out target words\n        # this produces the `attentional vector` in (Luong et al., 2015)\n        self.att_vec_linear = nn.Linear(args.hidden_size * 2 + args.hidden_size, args.hidden_size, bias=False)\n\n        # prediction layer of the target vocabulary\n        self.readout = nn.Linear(args.hidden_size, len(vocab.tgt), bias=False)\n\n        # dropout layer\n        self.dropout = nn.Dropout(args.dropout)\n\n        # initialize the decoder's state and cells with encoder hidden states\n        self.decoder_cell_init = nn.Linear(args.hidden_size * 2, args.hidden_size)\n\n    def forward(self, src_sents, src_sents_len, tgt_words):\n        src_encodings, init_ctx_vec = self.encode(src_sents, src_sents_len)\n        scores = self.decode(src_encodings, init_ctx_vec, tgt_words)\n\n        return scores\n\n    def encode(self, src_sents, src_sents_len):\n        \"\"\"\n        :param src_sents: (src_sent_len, batch_size), sorted by the length of the source\n        :param src_sents_len: (src_sent_len)\n        \"\"\"\n        # (src_sent_len, batch_size, embed_size)\n        src_word_embed = self.src_embed(src_sents)\n        packed_src_embed = pack_padded_sequence(src_word_embed, src_sents_len)\n\n        # output: (src_sent_len, batch_size, hidden_size)\n        output, (last_state, last_cell) = self.encoder_lstm(packed_src_embed)\n        output, _ = pad_packed_sequence(output)\n\n        dec_init_cell = self.decoder_cell_init(torch.cat([last_cell[0], last_cell[1]], 1))\n        dec_init_state = F.tanh(dec_init_cell)\n\n        return output, (dec_init_state, dec_init_cell)\n\n    def decode(self, src_encoding, dec_init_vec, tgt_sents):\n        \"\"\"\n        :param src_encoding: (src_sent_len, batch_size, hidden_size)\n        :param dec_init_vec: (batch_size, hidden_size)\n        :param tgt_sents: (tgt_sent_len, batch_size)\n        :return:\n        \"\"\"\n        init_state = dec_init_vec[0]\n        init_cell = dec_init_vec[1]\n        hidden = (init_state, init_cell)\n\n        new_tensor = init_cell.data.new\n        batch_size = src_encoding.size(1)\n\n        # (batch_size, src_sent_len, hidden_size * 2)\n        src_encoding = src_encoding.permute(1, 0, 2)\n        # (batch_size, src_sent_len, hidden_size)\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n        # initialize attentional vector\n        att_tm1 = Variable(new_tensor(batch_size, self.args.hidden_size).zero_(), requires_grad=False)\n\n        tgt_word_embed = self.tgt_embed(tgt_sents)\n        scores = []\n\n        # start from `<s>`, until y_{T-1}\n        for y_tm1_embed in tgt_word_embed.split(split_size=1):\n            # input feeding: concate y_tm1 and previous attentional vector\n            x = torch.cat([y_tm1_embed.squeeze(0), att_tm1], 1)\n\n            # h_t: (batch_size, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, src_encoding, src_encoding_att_linear)\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))   # E.q. (5)\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)   # E.q. 
(6)\n            scores.append(score_t)\n\n            att_tm1 = att_t\n            hidden = h_t, cell_t\n\n        scores = torch.stack(scores)\n        return scores\n\n    def translate(self, src_sents, beam_size=None, to_word=True):\n        \"\"\"\n        perform beam search\n        TODO: batched beam search\n        \"\"\"\n        if not type(src_sents[0]) == list:\n            src_sents = [src_sents]\n        if not beam_size:\n            beam_size = args.beam_size\n\n        src_sents_var = to_input_variable(src_sents, self.vocab.src, cuda=args.cuda, is_test=True)\n\n        src_encoding, dec_init_vec = self.encode(src_sents_var, [len(src_sents[0])])\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n\n        init_state = dec_init_vec[0]\n        init_cell = dec_init_vec[1]\n        hidden = (init_state, init_cell)\n\n        att_tm1 = Variable(torch.zeros(1, self.args.hidden_size), volatile=True)\n        hyp_scores = Variable(torch.zeros(1), volatile=True)\n        if args.cuda:\n            att_tm1 = att_tm1.cuda()\n            hyp_scores = hyp_scores.cuda()\n\n        eos_id = self.vocab.tgt['</s>']\n        bos_id = self.vocab.tgt['<s>']\n        tgt_vocab_size = len(self.vocab.tgt)\n\n        hypotheses = [[bos_id]]\n        completed_hypotheses = []\n        completed_hypothesis_scores = []\n\n        t = 0\n        while len(completed_hypotheses) < beam_size and t < args.decode_max_time_step:\n            t += 1\n            hyp_num = len(hypotheses)\n\n            expanded_src_encoding = src_encoding.expand(src_encoding.size(0), hyp_num, src_encoding.size(2))\n            expanded_src_encoding_att_linear = src_encoding_att_linear.expand(src_encoding_att_linear.size(0), hyp_num, src_encoding_att_linear.size(2))\n\n            y_tm1 = Variable(torch.LongTensor([hyp[-1] for hyp in hypotheses]), volatile=True)\n            if args.cuda:\n                y_tm1 = y_tm1.cuda()\n\n            y_tm1_embed = self.tgt_embed(y_tm1)\n\n            x = torch.cat([y_tm1_embed, att_tm1], 1)\n\n            # h_t: (hyp_num, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n            ctx_t, alpha_t = self.dot_prod_attention(h_t, expanded_src_encoding.permute(1, 0, 2), expanded_src_encoding_att_linear.permute(1, 0, 2))\n\n            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))\n            att_t = self.dropout(att_t)\n\n            score_t = self.readout(att_t)\n            p_t = F.log_softmax(score_t)\n\n            live_hyp_num = beam_size - len(completed_hypotheses)\n            new_hyp_scores = (hyp_scores.unsqueeze(1).expand_as(p_t) + p_t).view(-1)\n            top_new_hyp_scores, top_new_hyp_pos = torch.topk(new_hyp_scores, k=live_hyp_num)\n            prev_hyp_ids = top_new_hyp_pos / tgt_vocab_size\n            word_ids = top_new_hyp_pos % tgt_vocab_size\n            # new_hyp_scores = new_hyp_scores[top_new_hyp_pos.data]\n\n            new_hypotheses = []\n\n            live_hyp_ids = []\n            new_hyp_scores = []\n            for prev_hyp_id, word_id, new_hyp_score in zip(prev_hyp_ids.cpu().data, word_ids.cpu().data, top_new_hyp_scores.cpu().data):\n                hyp_tgt_words = hypotheses[prev_hyp_id] + [word_id]\n                if word_id == eos_id:\n                    completed_hypotheses.append(hyp_tgt_words)\n                    completed_hypothesis_scores.append(new_hyp_score)\n                else:\n                    
new_hypotheses.append(hyp_tgt_words)\n                    live_hyp_ids.append(prev_hyp_id)\n                    new_hyp_scores.append(new_hyp_score)\n\n            if len(completed_hypotheses) == beam_size:\n                break\n\n            live_hyp_ids = torch.LongTensor(live_hyp_ids)\n            if args.cuda:\n                live_hyp_ids = live_hyp_ids.cuda()\n\n            hidden = (h_t[live_hyp_ids], cell_t[live_hyp_ids])\n            att_tm1 = att_t[live_hyp_ids]\n\n            hyp_scores = Variable(torch.FloatTensor(new_hyp_scores), volatile=True) # new_hyp_scores[live_hyp_ids]\n            if args.cuda:\n                hyp_scores = hyp_scores.cuda()\n            hypotheses = new_hypotheses\n\n        if len(completed_hypotheses) == 0:\n            completed_hypotheses = [hypotheses[0]]\n            completed_hypothesis_scores = [0.0]\n\n        if to_word:\n            for i, hyp in enumerate(completed_hypotheses):\n                completed_hypotheses[i] = [self.vocab.tgt.id2word[w] for w in hyp]\n\n        ranked_hypotheses = sorted(zip(completed_hypotheses, completed_hypothesis_scores), key=lambda x: x[1], reverse=True)\n\n        return [hyp for hyp, score in ranked_hypotheses]\n\n    def sample(self, src_sents, sample_size=None, to_word=False):\n        if not type(src_sents[0]) == list:\n            src_sents = [src_sents]\n        if not sample_size:\n            sample_size = args.sample_size\n\n        src_sents_num = len(src_sents)\n        batch_size = src_sents_num * sample_size\n\n        src_sents_var = to_input_variable(src_sents, self.vocab.src, cuda=args.cuda, is_test=True)\n        src_encoding, (dec_init_state, dec_init_cell) = self.encode(src_sents_var, [len(s) for s in src_sents])\n\n        dec_init_state = dec_init_state.repeat(sample_size, 1)\n        dec_init_cell = dec_init_cell.repeat(sample_size, 1)\n        hidden = (dec_init_state, dec_init_cell)\n\n        # tile everything\n        # if args.sample_method == 'expand':\n        #     # src_enc: (src_sent_len, sample_size, enc_size)\n        #     # cat result: (src_sent_len, batch_size * sample_size, enc_size)\n        #     src_encoding = torch.cat([src_enc.expand(src_enc.size(0), sample_size, src_enc.size(2)) for src_enc in src_encoding.split(1, dim=1)], 1)\n        #     dec_init_state = torch.cat([x.expand(sample_size, x.size(1)) for x in dec_init_state.split(1, dim=0)], 0)\n        #     dec_init_cell = torch.cat([x.expand(sample_size, x.size(1)) for x in dec_init_cell.split(1, dim=0)], 0)\n        # elif args.sample_method == 'repeat':\n\n        src_encoding = src_encoding.repeat(1, sample_size, 1)\n        src_encoding_att_linear = tensor_transform(self.att_src_linear, src_encoding)\n        src_encoding = src_encoding.permute(1, 0, 2)\n        src_encoding_att_linear = src_encoding_att_linear.permute(1, 0, 2)\n\n        new_tensor = dec_init_state.data.new\n        att_tm1 = Variable(new_tensor(batch_size, self.args.hidden_size).zero_(), volatile=True)\n        y_0 = Variable(torch.LongTensor([self.vocab.tgt['<s>'] for _ in range(batch_size)]), volatile=True)\n\n        eos = self.vocab.tgt['</s>']\n        # eos_batch = torch.LongTensor([eos] * batch_size)\n        sample_ends = torch.ByteTensor([0] * batch_size)\n        all_ones = torch.ByteTensor([1] * batch_size)\n        if args.cuda:\n            y_0 = y_0.cuda()\n            sample_ends = sample_ends.cuda()\n            all_ones = all_ones.cuda()\n\n        samples = [y_0]\n\n        t = 0\n        while t < 
args.decode_max_time_step:\n            t += 1\n\n
            # (sample_size)\n            y_tm1 = samples[-1]\n\n
            y_tm1_embed = self.tgt_embed(y_tm1)\n\n
            x = torch.cat([y_tm1_embed, att_tm1], 1)\n\n
            # h_t: (batch_size, hidden_size)\n            h_t, cell_t = self.decoder_lstm(x, hidden)\n            h_t = self.dropout(h_t)\n\n
            ctx_t, alpha_t = self.dot_prod_attention(h_t, src_encoding, src_encoding_att_linear)\n\n
            att_t = F.tanh(self.att_vec_linear(torch.cat([h_t, ctx_t], 1)))  # E.q. (5)\n            att_t = self.dropout(att_t)\n\n
            score_t = self.readout(att_t)  # E.q. (6)\n            p_t = F.softmax(score_t)\n\n
            if args.sample_method == 'random':\n                y_t = torch.multinomial(p_t, num_samples=1).squeeze(1)\n
            elif args.sample_method == 'greedy':\n                _, y_t = torch.topk(p_t, k=1, dim=1)\n                y_t = y_t.squeeze(1)\n\n
            samples.append(y_t)\n\n
            sample_ends |= torch.eq(y_t, eos).byte().data\n            if torch.equal(sample_ends, all_ones):\n                break\n\n
            # if torch.equal(y_t.data, eos_batch):\n            #     break\n\n
            att_tm1 = att_t\n            hidden = h_t, cell_t\n\n
        # post-processing\n        completed_samples = [list([list() for _ in range(sample_size)]) for _ in range(src_sents_num)]\n
        for y_t in samples:\n            for i, sampled_word in enumerate(y_t.cpu().data):\n
                # the batch was tiled with repeat(), so index i holds sample (i // src_sents_num) of source sentence (i % src_sents_num)\n
                src_sent_id = i % src_sents_num\n                sample_id = i // src_sents_num\n
                if len(completed_samples[src_sent_id][sample_id]) == 0 or completed_samples[src_sent_id][sample_id][-1] != eos:\n
                    completed_samples[src_sent_id][sample_id].append(sampled_word)\n\n
        if to_word:\n            for i, src_sent_samples in enumerate(completed_samples):\n
                # word2id maps through any dict, so passing id2word turns ids back into words\n
                completed_samples[i] = word2id(src_sent_samples, self.vocab.tgt.id2word)\n\n
        return completed_samples\n\n
    def attention(self, h_t, src_encoding, src_linear_for_att):\n
        # NOTE: unused MLP-style attention; it references self.att_h_linear, which is never defined in __init__\n
        # (1, batch_size, attention_size) + (src_sent_len, batch_size, attention_size) =>\n
        # (src_sent_len, batch_size, attention_size)\n
        att_hidden = F.tanh(self.att_h_linear(h_t).unsqueeze(0).expand_as(src_linear_for_att) + src_linear_for_att)\n\n
        # (batch_size, src_sent_len)\n
        att_weights = F.softmax(tensor_transform(self.att_vec_linear, att_hidden).squeeze(2).permute(1, 0))\n\n
        # (batch_size, hidden_size * 2)\n
        ctx_vec = torch.bmm(src_encoding.permute(1, 2, 0), att_weights.unsqueeze(2)).squeeze(2)\n\n
        return ctx_vec, att_weights\n\n
    def dot_prod_attention(self, h_t, src_encoding, src_encoding_att_linear, mask=None):\n
        \"\"\"\n        :param h_t: (batch_size, hidden_size)\n        :param src_encoding: (batch_size, src_sent_len, hidden_size * 2)\n
        :param src_encoding_att_linear: (batch_size, src_sent_len, hidden_size)\n        :param mask: (batch_size, src_sent_len)\n        \"\"\"\n
        # (batch_size, src_sent_len)\n
        att_weight = torch.bmm(src_encoding_att_linear, h_t.unsqueeze(2)).squeeze(2)\n
        if mask is not None:  # a tensor has no unambiguous truth value, so test against None\n
            att_weight.data.masked_fill_(mask, -float('inf'))\n
        att_weight = F.softmax(att_weight)\n\n
        att_view = (att_weight.size(0), 1, att_weight.size(1))\n
        # (batch_size, hidden_size)\n
        ctx_vec = torch.bmm(att_weight.view(*att_view), src_encoding).squeeze(1)\n\n
        return ctx_vec, att_weight\n\n
    def save(self, path):\n        print('save parameters to [%s]' % path, file=sys.stderr)\n
        params = {\n            'args': self.args,\n            'vocab': self.vocab,\n            'state_dict': self.state_dict()\n        }\n
        torch.save(params, path)\n\n\n
def to_input_variable(sents, vocab, cuda=False, is_test=False):\n
    \"\"\"\n    return a tensor of shape (src_sent_len, batch_size)\n    \"\"\"\n\n
    word_ids = word2id(sents, vocab)\n    sents_t, masks = input_transpose(word_ids, vocab['<pad>'])\n\n
    sents_var = Variable(torch.LongTensor(sents_t), volatile=is_test, requires_grad=False)\n
    if cuda:\n        sents_var = sents_var.cuda()\n\n
    return sents_var\n\n\n
def evaluate_loss(model, data, crit):\n
    print('[INFO] evaluating loss')\n    model.eval()\n    cum_loss = 0.\n    cum_tgt_words = 0.\n\n
    for src_sents, tgt_sents in data_iter(data, batch_size=args.batch_size, shuffle=False):\n
        pred_tgt_word_num = sum(len(s[1:]) for s in tgt_sents) # omitting leading `<s>`\n
        src_sents_len = [len(s) for s in src_sents]\n\n
        src_sents_var = to_input_variable(src_sents, model.vocab.src, cuda=args.cuda, is_test=True)\n
        tgt_sents_var = to_input_variable(tgt_sents, model.vocab.tgt, cuda=args.cuda, is_test=True)\n\n
        # (tgt_sent_len, batch_size, tgt_vocab_size)\n
        scores = model(src_sents_var, src_sents_len, tgt_sents_var[:-1])\n
        loss = crit(scores.view(-1, scores.size(2)), tgt_sents_var[1:].view(-1))\n\n
        cum_loss += loss.data[0]\n        cum_tgt_words += pred_tgt_word_num\n\n
    cum_tgt_words = 1. if cum_tgt_words < 1. else cum_tgt_words\n    loss = cum_loss / cum_tgt_words\n    return loss\n\n\n
def init_training(args):\n
    if args.load_model:\n        print('load model from [%s]' % args.load_model, file=sys.stderr)\n
        params = torch.load(args.load_model, map_location=lambda storage, loc: storage)\n
        vocab = params['vocab']\n        opt = params['args']\n        state_dict = params['state_dict']\n
        model = NMT(opt, vocab)\n        model.load_state_dict(state_dict)\n        model.train()\n
    else:\n        vocab = torch.load(args.vocab)\n        model = NMT(args, vocab)\n        model.train()\n\n
        if args.uniform_init:\n
            print('uniformly initialize parameters [-%f, +%f]' % (args.uniform_init, args.uniform_init), file=sys.stderr)\n
            for p in model.parameters():\n                p.data.uniform_(-args.uniform_init, args.uniform_init)\n\n
    vocab_mask = torch.ones(len(vocab.tgt))\n    vocab_mask[vocab.tgt['<pad>']] = 0\n
    nll_loss = nn.NLLLoss(weight=vocab_mask, size_average=False)\n
    cross_entropy_loss = nn.CrossEntropyLoss(weight=vocab_mask, size_average=False)\n\n
    if args.cuda:\n        model = model.cuda()\n        nll_loss = nll_loss.cuda()\n        cross_entropy_loss = cross_entropy_loss.cuda()\n\n
    optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)\n\n
    return vocab, model, optimizer, nll_loss, cross_entropy_loss\n\n\n
def train(args):\n
    train_data_src = read_corpus(args.train_src, source='src')\n    train_data_tgt = read_corpus(args.train_tgt, source='tgt')\n\n
    dev_data_src = read_corpus(args.dev_src, source='src')\n    dev_data_tgt = read_corpus(args.dev_tgt, source='tgt')\n\n
    train_data = list(zip(train_data_src, train_data_tgt))\n    dev_data = list(zip(dev_data_src, dev_data_tgt))\n\n
    vocab, model, optimizer, nll_loss, cross_entropy_loss = init_training(args)\n\n
    train_iter = patience = cum_loss = report_loss = cum_tgt_words = report_tgt_words = 0\n
    cum_examples = cum_batches = report_examples = epoch = valid_num = best_model_iter = 0\n\n
    if args.load_model:\n        import re\n
        train_iter = int(re.search('(?<=iter)\\d+', args.load_model).group(0))\n
        print('start from train_iter = %d' % train_iter)\n\n
        valid_num = train_iter // args.valid_niter\n\n
    hist_valid_scores = []\n    train_time = begin_time = time.time()\n    print('begin Maximum Likelihood training')\n\n
    while True:\n        epoch += 1\n        print('start of epoch {:d}'.format(epoch))\n\n
        for src_sents, tgt_sents in data_iter(train_data, batch_size=args.batch_size):\n            train_iter += 1\n\n
            src_sents_var = to_input_variable(src_sents, vocab.src, cuda=args.cuda)\n
            tgt_sents_var = to_input_variable(tgt_sents, vocab.tgt, cuda=args.cuda)\n\n
            batch_size = len(src_sents)\n            src_sents_len = [len(s) for s in src_sents]\n
            pred_tgt_word_num = sum(len(s[1:]) for s in tgt_sents) # omitting leading `<s>`\n\n
            optimizer.zero_grad()\n\n
            # (tgt_sent_len, batch_size, tgt_vocab_size)\n
            scores = model(src_sents_var, src_sents_len, tgt_sents_var[:-1])\n\n
            word_loss = cross_entropy_loss(scores.view(-1, scores.size(2)), tgt_sents_var[1:].view(-1))\n
            loss = word_loss / batch_size\n            word_loss_val = word_loss.data[0]\n            loss_val = loss.data[0]\n\n
            loss.backward()\n            # clip gradient\n
            grad_norm = torch.nn.utils.clip_grad_norm(model.parameters(), args.clip_grad)\n            optimizer.step()\n\n
            report_loss += word_loss_val\n            cum_loss += word_loss_val\n
            report_tgt_words += pred_tgt_word_num\n            cum_tgt_words += pred_tgt_word_num\n
            report_examples += batch_size\n            cum_examples += batch_size\n            cum_batches += batch_size\n\n
            if train_iter % args.log_every == 0:\n
                print('epoch %d, iter %d, avg. loss %.2f, avg. ppl %.2f, ' \\\n
                      'cum. examples %d, speed %.2f words/sec, time elapsed %.2f sec' % (epoch, train_iter,\n
                                                                                         report_loss / report_examples,\n
                                                                                         np.exp(report_loss / report_tgt_words),\n
                                                                                         cum_examples,\n
                                                                                         report_tgt_words / (time.time() - train_time),\n
                                                                                         time.time() - begin_time), file=sys.stderr)\n\n
                train_time = time.time()\n                report_loss = report_tgt_words = report_examples = 0.\n\n
            # perform validation\n            if train_iter % args.valid_niter == 0:\n
                print('epoch %d, iter %d, cum. loss %.2f, cum. ppl %.2f, cum. examples %d' % (epoch, train_iter,\n
                                                                                         cum_loss / cum_batches,\n
                                                                                         np.exp(cum_loss / cum_tgt_words),\n
                                                                                         cum_examples), file=sys.stderr)\n\n
                cum_loss = cum_batches = cum_tgt_words = 0.\n                valid_num += 1\n\n
                print('begin validation ...', file=sys.stderr)\n                model.eval()\n\n
                # compute dev. ppl and bleu\n\n
                dev_loss = evaluate_loss(model, dev_data, cross_entropy_loss)\n                dev_ppl = np.exp(dev_loss)\n\n
                if args.valid_metric in ['bleu', 'word_acc', 'sent_acc']:\n
                    dev_hyps = decode(model, dev_data)\n                    dev_hyps = [hyps[0] for hyps in dev_hyps]\n
                    if args.valid_metric == 'bleu':\n
                        valid_metric = get_bleu([tgt for src, tgt in dev_data], dev_hyps)\n
                    else:\n
                        valid_metric = get_acc([tgt for src, tgt in dev_data], dev_hyps, acc_type=args.valid_metric)\n
                    print('validation: iter %d, dev. ppl %f, dev. %s %f' % (train_iter, dev_ppl, args.valid_metric, valid_metric),\n
                          file=sys.stderr)\n
                else:\n                    valid_metric = -dev_ppl\n
                    print('validation: iter %d, dev. ppl %f' % (train_iter, dev_ppl),\n
                          file=sys.stderr)\n\n
                model.train()\n\n
                is_better = len(hist_valid_scores) == 0 or valid_metric > max(hist_valid_scores)\n
                is_better_than_last = len(hist_valid_scores) == 0 or valid_metric > hist_valid_scores[-1]\n
                hist_valid_scores.append(valid_metric)\n\n
                if valid_num > args.save_model_after:\n
                    model_file = args.save_to + '.iter%d.bin' % train_iter\n
                    print('save model to [%s]' % model_file, file=sys.stderr)\n
                    model.save(model_file)\n\n
                if (not is_better_than_last) and args.lr_decay:\n
                    lr = optimizer.param_groups[0]['lr'] * args.lr_decay\n
                    print('decay learning rate to %f' % lr, file=sys.stderr)\n
                    optimizer.param_groups[0]['lr'] = lr\n\n
                if is_better:\n                    patience = 0\n                    best_model_iter = train_iter\n\n
                    if valid_num > args.save_model_after:\n
                        print('save the current best model ..', file=sys.stderr)\n
                        model_file_abs_path = os.path.abspath(model_file)\n
                        symlink_file_abs_path = os.path.abspath(args.save_to + '.bin')\n
                        os.system('ln -sf %s %s' % (model_file_abs_path, symlink_file_abs_path))\n
                else:\n                    patience += 1\n
                    print('hit patience %d' % patience, file=sys.stderr)\n
                    if patience == args.patience:\n
                        print('early stop!', file=sys.stderr)\n
                        print('the best model is from iteration [%d]' % best_model_iter, file=sys.stderr)\n
                        exit(0)\n\n\n
def get_bleu(references, hypotheses):\n    # compute BLEU\n
    bleu_score = corpus_bleu([[ref[1:-1]] for ref in references],\n
                             [hyp[1:-1] for hyp in hypotheses])\n\n
    return bleu_score\n\n\n
def get_acc(references, hypotheses, acc_type='word_acc'):\n
    assert acc_type == 'word_acc' or acc_type == 'sent_acc'\n    cum_acc = 0.\n\n
    for ref, hyp in zip(references, hypotheses):\n        ref = ref[1:-1]\n        hyp = hyp[1:-1]\n
        if acc_type == 'word_acc':\n
            acc = len([1 for ref_w, hyp_w in zip(ref, hyp) if ref_w == hyp_w]) / float(len(hyp) + 1e-6)\n
        else:\n
            acc = 1. if all(ref_w == hyp_w for ref_w, hyp_w in zip(ref, hyp)) else 0.\n
        cum_acc += acc\n\n
    acc = cum_acc / len(hypotheses)\n    return acc\n\n\n
def decode(model, data, verbose=True):\n
    \"\"\"\n    decode the dataset and return the beam-search hypotheses of each example\n    \"\"\"\n
    hypotheses = []\n    begin_time = time.time()\n\n
    data = list(data)\n    if type(data[0]) is tuple:\n
        for src_sent, tgt_sent in data:\n            hyps = model.translate(src_sent)\n            hypotheses.append(hyps)\n\n
            if verbose:\n                print('*' * 50)\n
                print('Source: ', ' '.join(src_sent))\n
                print('Target: ', ' '.join(tgt_sent))\n
                print('Top Hypothesis: ', ' '.join(hyps[0]))\n
    else:\n        for src_sent in data:\n            hyps = model.translate(src_sent)\n            hypotheses.append(hyps)\n\n
            if verbose:\n                print('*' * 50)\n
                print('Source: ', ' '.join(src_sent))\n
                print('Top Hypothesis: ', ' '.join(hyps[0]))\n\n
    elapsed = time.time() - begin_time\n\n
    print('decoded %d examples, took %d s' % (len(data), elapsed), file=sys.stderr)\n\n
    return hypotheses\n\n\n
def compute_lm_prob(args):\n
    \"\"\"\n    given source-target sentence pairs, compute ppl and log-likelihood\n    \"\"\"\n
    test_data_src = read_corpus(args.test_src, source='src')\n
    test_data_tgt = read_corpus(args.test_tgt, source='tgt')\n
    test_data = zip(test_data_src, test_data_tgt)\n\n
    if args.load_model:\n        print('load model from [%s]' % args.load_model, file=sys.stderr)\n
        params = torch.load(args.load_model, map_location=lambda storage, loc: storage)\n
        vocab = params['vocab']\n        saved_args = params['args']\n        state_dict = params['state_dict']\n\n
        model = NMT(saved_args, vocab)\n        model.load_state_dict(state_dict)\n
    else:\n        vocab = torch.load(args.vocab)\n        model = NMT(args, vocab)\n\n
    model.eval()\n\n    if args.cuda:\n        model = model.cuda()\n\n
    f = open(args.save_to_file, 'w')\n
    for src_sent, tgt_sent in test_data:\n        src_sents = [src_sent]\n        tgt_sents = [tgt_sent]\n\n
        batch_size = len(src_sents)\n        src_sents_len = [len(s) for s in src_sents]\n
        pred_tgt_word_nums = [len(s[1:]) for s in tgt_sents]  # omitting leading `<s>`\n\n
        # (sent_len, batch_size)\n
        src_sents_var = to_input_variable(src_sents, model.vocab.src, cuda=args.cuda, is_test=True)\n
        tgt_sents_var = to_input_variable(tgt_sents, model.vocab.tgt, cuda=args.cuda, is_test=True)\n\n
        # (tgt_sent_len, batch_size, tgt_vocab_size)\n
        scores = model(src_sents_var, src_sents_len, tgt_sents_var[:-1])\n
        # (tgt_sent_len * batch_size, tgt_vocab_size)\n
        log_scores = F.log_softmax(scores.view(-1, scores.size(2)))\n
        # remove leading <s> in tgt sent, which is not used as the target\n
        # (batch_size * tgt_sent_len)\n
        flattened_tgt_sents = tgt_sents_var[1:].view(-1)\n
        # (batch_size * tgt_sent_len)\n
        tgt_log_scores = torch.gather(log_scores, 1, flattened_tgt_sents.unsqueeze(1)).squeeze(1)\n
        # 0-index is the <pad> symbol\n
        tgt_log_scores = tgt_log_scores * (1. - torch.eq(flattened_tgt_sents, 0).float())\n
        # (tgt_sent_len, batch_size)\n
        tgt_log_scores = tgt_log_scores.view(-1, batch_size) # .permute(1, 0)\n
        # (batch_size)\n
        tgt_sent_scores = tgt_log_scores.sum(dim=0).squeeze()\n
        tgt_sent_word_scores = [tgt_sent_scores[i].data[0] / pred_tgt_word_nums[i] for i in range(batch_size)]\n\n
        for src_sent, tgt_sent, score in zip(src_sents, tgt_sents, tgt_sent_word_scores):\n
            f.write('%s ||| %s ||| %f\\n' % (' '.join(src_sent), ' '.join(tgt_sent), score))\n\n
    f.close()\n\n\n
def test(args):\n
    test_data_src = read_corpus(args.test_src, source='src')\n
    test_data_tgt = read_corpus(args.test_tgt, source='tgt')\n
    test_data = zip(test_data_src, test_data_tgt)\n\n
    if args.load_model:\n        print('load model from [%s]' % args.load_model, file=sys.stderr)\n
        params = torch.load(args.load_model, map_location=lambda storage, loc: storage)\n
        vocab = params['vocab']\n        saved_args = params['args']\n        state_dict = params['state_dict']\n\n
        model = NMT(saved_args, vocab)\n        model.load_state_dict(state_dict)\n
    else:\n        vocab = torch.load(args.vocab)\n        model = NMT(args, vocab)\n\n
    model.eval()\n\n    if args.cuda:\n        model = model.cuda()\n\n
    hypotheses = decode(model, test_data, verbose=False)\n
    top_hypotheses = [hyps[0] for hyps in hypotheses]\n\n
    # bleu_score = get_bleu([tgt for src, tgt in test_data], top_hypotheses)\n
    # word_acc = get_acc([tgt for src, tgt in test_data], top_hypotheses, 'word_acc')\n
    # sent_acc = get_acc([tgt for src, tgt in test_data], top_hypotheses, 'sent_acc')\n
    # print('Corpus Level BLEU: %f, word level acc: %f, sentence level acc: %f' % (bleu_score, word_acc, sent_acc), file=sys.stderr)\n\n
    if args.save_to_file:\n
        print('save decoding results to %s' % args.save_to_file, file=sys.stderr)\n
        with open(args.save_to_file, 'w') as f:\n
            for hyps in hypotheses:\n                f.write(' '.join(hyps[0][1:-1]) + '\\n')\n\n
        if args.save_nbest:\n            nbest_file = args.save_to_file + '.nbest'\n
            print('save nbest decoding results to %s' % nbest_file, file=sys.stderr)\n
            with open(nbest_file, 'w') as f:\n
                for src_sent, tgt_sent, hyps in zip(test_data_src, test_data_tgt, hypotheses):\n
                    print('Source: %s' % ' '.join(src_sent), file=f)\n
                    print('Target: %s' % ' '.join(tgt_sent), file=f)\n
                    print('Hypotheses:', file=f)\n
                    for i, hyp in enumerate(hyps, 1):\n
                        print('[%d] %s' % (i, ' '.join(hyp)), file=f)\n
                    print('*' * 30, file=f)\n\n\n
def interactive(args):\n
    assert args.load_model, 'You have to specify a pre-trained model'\n
    print('load model from [%s]' % args.load_model, file=sys.stderr)\n
    params = torch.load(args.load_model, map_location=lambda storage, loc: storage)\n
    vocab = params['vocab']\n    saved_args = params['args']\n    state_dict = params['state_dict']\n\n
    model = NMT(saved_args, vocab)\n    model.load_state_dict(state_dict)\n\n
    model.eval()\n\n    if args.cuda:\n        model = model.cuda()\n\n
    while True:\n
        src_sent = input('Source Sentence:')  # built-in input; the driver scripts run python3\n
        src_sent = src_sent.strip().split(' ')\n
        hyps = model.translate(src_sent)\n
        for i, hyp in enumerate(hyps, 1):\n
            print('Hypothesis #%d: %s' % (i, ' '.join(hyp)))\n\n\n
def sample(args):\n
    train_data_src = read_corpus(args.train_src, source='src')\n
    train_data_tgt = read_corpus(args.train_tgt, source='tgt')\n
    train_data = zip(train_data_src, train_data_tgt)\n\n
    if args.load_model:\n        print('load model from [%s]' % args.load_model, file=sys.stderr)\n
        params = torch.load(args.load_model, map_location=lambda storage, loc: storage)\n
        vocab = params['vocab']\n        opt = params['args']\n        state_dict = params['state_dict']\n\n
        model = NMT(opt, vocab)\n        model.load_state_dict(state_dict)\n
    else:\n        vocab = torch.load(args.vocab)\n        model = NMT(args, vocab)\n\n
    model.eval()\n\n    if args.cuda:\n        model = model.cuda()\n\n
    print('begin sampling')\n\n
    check_every = 10\n    train_iter = cum_samples = 0\n    train_time = time.time()\n
    for src_sents, tgt_sents in data_iter(train_data, batch_size=args.batch_size):\n        train_iter += 1\n
        samples = model.sample(src_sents, sample_size=args.sample_size, to_word=True)\n
        cum_samples += sum(len(sample) for sample in samples)\n\n
        if train_iter % check_every == 0:\n            elapsed = time.time() - train_time\n
            print('sampling speed: %d/s' % (cum_samples / elapsed), file=sys.stderr)\n
            cum_samples = 0\n            train_time = time.time()\n\n
        for i, tgt_sent in enumerate(tgt_sents):\n            print('*' * 80)\n
            print('target: ' + ' '.join(tgt_sent))\n
            tgt_samples = samples[i]\n            print('samples:')\n
            for sid, sample in enumerate(tgt_samples, 1):\n
                print('[%d] %s' % (sid, ' '.join(sample[1:-1])))\n
            print('*' * 80)\n\n\n
if __name__ == '__main__':\n    args = init_config()\n    print(args, file=sys.stderr)\n\n
    if args.mode == 'train':\n        train(args)\n
    elif args.mode == 'raml_train':\n
        # train_raml is not defined in this file; fail with a clear error instead of a NameError\n
        raise NotImplementedError(\"mode 'raml_train' is not implemented here\")\n
    elif args.mode == 'sample':\n        sample(args)\n
    elif args.mode == 'test':\n        test(args)\n
    elif args.mode == 'prob':\n        compute_lm_prob(args)\n
    elif args.mode == 'interactive':\n        interactive(args)\n
    else:\n        raise RuntimeError('unknown mode')\n"
  },
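  {
    "path": "nmt/example_decode.py",
    "content": "\"\"\"Illustrative decoding sketch, not part of the original pipeline.\n\nAssumes a checkpoint written by NMT.save(), i.e. a dict holding 'args',\n'vocab' and 'state_dict'; the checkpoint path and the source sentence below\nare placeholders. Run from the nmt/ directory so `nmt` is importable.\n\"\"\"\nimport torch\n\nimport nmt\n\nparams = torch.load('model.bin', map_location=lambda storage, loc: storage)\n\n# translate() reads the module-level `args` (cuda, beam_size,\n# decode_max_time_step), so expose the saved training args first\nnmt.args = params['args']\nnmt.args.cuda = False  # assume a CPU run; on a GPU set True and call model.cuda()\n\nmodel = nmt.NMT(params['args'], params['vocab'])\nmodel.load_state_dict(params['state_dict'])\nmodel.eval()\n\nhyps = model.translate('das ist ein test .'.split(' '), beam_size=5)\nprint(' '.join(hyps[0][1:-1]))  # best hypothesis, without <s> and </s>\n"
  },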
  {
    "path": "nmt/scripts/multi-bleu.perl",
    "content": "#!/usr/bin/env perl\n#\n# This file is part of moses.  Its use is licensed under the GNU Lesser General\n# Public License version 2.1 or, at your option, any later version.\n\n# $Id$\nuse warnings;\nuse strict;\n\nmy $lowercase = 0;\nif ($ARGV[0] eq \"-lc\") {\n  $lowercase = 1;\n  shift;\n}\n\nmy $stem = $ARGV[0];\nif (!defined $stem) {\n  print STDERR \"usage: multi-bleu.pl [-lc] reference < hypothesis\\n\";\n  print STDERR \"Reads the references from reference or reference0, reference1, ...\\n\";\n  exit(1);\n}\n\n$stem .= \".ref\" if !-e $stem && !-e $stem.\"0\" && -e $stem.\".ref0\";\n\nmy @REF;\nmy $ref=0;\nwhile(-e \"$stem$ref\") {\n    &add_to_ref(\"$stem$ref\",\\@REF);\n    $ref++;\n}\n&add_to_ref($stem,\\@REF) if -e $stem;\ndie(\"ERROR: could not find reference file $stem\") unless scalar @REF;\n\n# add additional references explicitly specified on the command line\nshift;\nforeach my $stem (@ARGV) {\n    &add_to_ref($stem,\\@REF) if -e $stem;\n}\n\n\n\nsub add_to_ref {\n    my ($file,$REF) = @_;\n    my $s=0;\n    if ($file =~ /.gz$/) {\n\topen(REF,\"gzip -dc $file|\") or die \"Can't read $file\";\n    } else { \n\topen(REF,$file) or die \"Can't read $file\";\n    }\n    while(<REF>) {\n\tchop;\n\tpush @{$$REF[$s++]}, $_;\n    }\n    close(REF);\n}\n\nmy(@CORRECT,@TOTAL,$length_translation,$length_reference);\nmy $s=0;\nwhile(<STDIN>) {\n    chop;\n    $_ = lc if $lowercase;\n    my @WORD = split;\n    my %REF_NGRAM = ();\n    my $length_translation_this_sentence = scalar(@WORD);\n    my ($closest_diff,$closest_length) = (9999,9999);\n    foreach my $reference (@{$REF[$s]}) {\n#      print \"$s $_ <=> $reference\\n\";\n  $reference = lc($reference) if $lowercase;\n\tmy @WORD = split(' ',$reference);\n\tmy $length = scalar(@WORD);\n        my $diff = abs($length_translation_this_sentence-$length);\n\tif ($diff < $closest_diff) {\n\t    $closest_diff = $diff;\n\t    $closest_length = $length;\n\t    # print STDERR \"$s: closest diff \".abs($length_translation_this_sentence-$length).\" = abs($length_translation_this_sentence-$length), setting len: $closest_length\\n\";\n\t} elsif ($diff == $closest_diff) {\n            $closest_length = $length if $length < $closest_length;\n            # from two references with the same closeness to me\n            # take the *shorter* into account, not the \"first\" one.\n        }\n\tfor(my $n=1;$n<=4;$n++) {\n\t    my %REF_NGRAM_N = ();\n\t    for(my $start=0;$start<=$#WORD-($n-1);$start++) {\n\t\tmy $ngram = \"$n\";\n\t\tfor(my $w=0;$w<$n;$w++) {\n\t\t    $ngram .= \" \".$WORD[$start+$w];\n\t\t}\n\t\t$REF_NGRAM_N{$ngram}++;\n\t    }\n\t    foreach my $ngram (keys %REF_NGRAM_N) {\n\t\tif (!defined($REF_NGRAM{$ngram}) ||\n\t\t    $REF_NGRAM{$ngram} < $REF_NGRAM_N{$ngram}) {\n\t\t    $REF_NGRAM{$ngram} = $REF_NGRAM_N{$ngram};\n#\t    print \"$i: REF_NGRAM{$ngram} = $REF_NGRAM{$ngram}<BR>\\n\";\n\t\t}\n\t    }\n\t}\n    }\n    $length_translation += $length_translation_this_sentence;\n    $length_reference += $closest_length;\n    for(my $n=1;$n<=4;$n++) {\n\tmy %T_NGRAM = ();\n\tfor(my $start=0;$start<=$#WORD-($n-1);$start++) {\n\t    my $ngram = \"$n\";\n\t    for(my $w=0;$w<$n;$w++) {\n\t\t$ngram .= \" \".$WORD[$start+$w];\n\t    }\n\t    $T_NGRAM{$ngram}++;\n\t}\n\tforeach my $ngram (keys %T_NGRAM) {\n\t    $ngram =~ /^(\\d+) /;\n\t    my $n = $1;\n            # my $corr = 0;\n#\tprint \"$i e $ngram $T_NGRAM{$ngram}<BR>\\n\";\n\t    $TOTAL[$n] += $T_NGRAM{$ngram};\n\t    if (defined($REF_NGRAM{$ngram})) {\n\t\tif 
($REF_NGRAM{$ngram} >= $T_NGRAM{$ngram}) {\n\t\t    $CORRECT[$n] += $T_NGRAM{$ngram};\n                    # $corr =  $T_NGRAM{$ngram};\n#\t    print \"$i e correct1 $T_NGRAM{$ngram}<BR>\\n\";\n\t\t}\n\t\telse {\n\t\t    $CORRECT[$n] += $REF_NGRAM{$ngram};\n                    # $corr =  $REF_NGRAM{$ngram};\n#\t    print \"$i e correct2 $REF_NGRAM{$ngram}<BR>\\n\";\n\t\t}\n\t    }\n            # $REF_NGRAM{$ngram} = 0 if !defined $REF_NGRAM{$ngram};\n            # print STDERR \"$ngram: {$s, $REF_NGRAM{$ngram}, $T_NGRAM{$ngram}, $corr}\\n\"\n\t}\n    }\n    $s++;\n}\nmy $brevity_penalty = 1;\nmy $bleu = 0;\n\nmy @bleu=();\n\nfor(my $n=1;$n<=4;$n++) {\n  if (defined ($TOTAL[$n])){\n    $bleu[$n]=($TOTAL[$n])?$CORRECT[$n]/$TOTAL[$n]:0;\n    # print STDERR \"CORRECT[$n]:$CORRECT[$n] TOTAL[$n]:$TOTAL[$n]\\n\";\n  }else{\n    $bleu[$n]=0;\n  }\n}\n\nif ($length_reference==0){\n  printf \"BLEU = 0, 0/0/0/0 (BP=0, ratio=0, hyp_len=0, ref_len=0)\\n\";\n  exit(1);\n}\n\nif ($length_translation<$length_reference) {\n  $brevity_penalty = exp(1-$length_reference/$length_translation);\n}\n$bleu = $brevity_penalty * exp((my_log( $bleu[1] ) +\n\t\t\t\tmy_log( $bleu[2] ) +\n\t\t\t\tmy_log( $bleu[3] ) +\n\t\t\t\tmy_log( $bleu[4] ) ) / 4) ;\nprintf \"BLEU = %.2f, %.1f/%.1f/%.1f/%.1f (BP=%.3f, ratio=%.3f, hyp_len=%d, ref_len=%d)\\n\",\n    100*$bleu,\n    100*$bleu[1],\n    100*$bleu[2],\n    100*$bleu[3],\n    100*$bleu[4],\n    $brevity_penalty,\n    $length_translation / $length_reference,\n    $length_translation,\n    $length_reference;\n\nsub my_log {\n  return -9999999999 unless $_[0];\n  return log($_[0]);\n}\n"
  },
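  {
    "path": "nmt/scripts/example_bleu.py",
    "content": "\"\"\"Illustrative scoring sketch, not part of the original repo.\n\nPipes a decoded file (one tokenized hypothesis per line, e.g. written by\nnmt.py --mode test --save_to_file) into multi-bleu.perl, following the\nscript's own usage line: multi-bleu.pl [-lc] reference < hypothesis.\nBoth file paths below are placeholders.\n\"\"\"\nimport subprocess\n\nwith open('decode.txt', 'rb') as hyp_file:\n    result = subprocess.run(['perl', 'multi-bleu.perl', 'test.ref'],\n                            stdin=hyp_file, stdout=subprocess.PIPE, check=True)\n\n# the script prints: BLEU = %.2f, %.1f/%.1f/%.1f/%.1f (BP=..., ratio=..., hyp_len=..., ref_len=...)\nprint(result.stdout.decode().strip())\n"
  },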
  {
    "path": "nmt/scripts/test.sh",
    "content": "#!/bin/bash\n\nsrc=$1\ntgt=$2\nmdl=$3\ntxt=$4\n\npython3 nmt.py --mode test --test_src $src --test_tgt $tgt --load_model $mdl --save_to_file $txt --cuda\n"
  },
  {
    "path": "nmt/scripts/train-small.sh",
    "content": "#!/bin/sh\n\nL1=$1\nL2=$2\nJOB=$3\n\ndata_dir=\"./wmt16-small-data\"\nvocab_bin=\"$data_dir/vocab.$L1$L2.bin\"\ntrain_src=\"$data_dir/train.$L1\"\ntrain_tgt=\"$data_dir/train.$L2\"\ndev_src=\"$data_dir/valid.$L1\"\ndev_tgt=\"$data_dir/valid.$L2\"\ntest_src=\"$data_dir/test.$L1\"\ntest_tgt=\"$data_dir/test.$L2\"\n\njob_name=\"$JOB\"\nmodel_name=\"model.${job_name}\"\n\npython3 nmt.py \\\n    --cuda \\\n    --mode train \\\n    --vocab ${vocab_bin} \\\n    --save_to ${model_name} \\\n    --log_every 50 \\\n    --valid_niter 2500 \\\n    --valid_metric ppl \\\n    --save_model_after 2 \\\n    --beam_size 5 \\\n    --batch_size 64 \\\n    --hidden_size 256 \\\n    --embed_size 256 \\\n    --uniform_init 0.1 \\\n    --dropout 0.2 \\\n    --clip_grad 5.0 \\\n    --lr_decay 0.5 \\\n    --train_src ${train_src} \\\n    --train_tgt ${train_tgt} \\\n    --dev_src ${dev_src} \\\n    --dev_tgt ${dev_tgt}\n\n"
  },
  {
    "path": "nmt/scripts/train.sh",
    "content": "#!/bin/sh\n\ndata_dir=\"/data/groups/chatbot/dl_data/wmt16\"\nvocab_bin=\"$data_dir/vocab.deen.bin\"\ntrain_src=\"$data_dir/train.de\"\ntrain_tgt=\"$data_dir/train.en\"\ndev_src=\"$data_dir/valid.de\"\ndev_tgt=\"$data_dir/valid.en\"\ntest_src=\"$data_dir/test.de\"\ntest_tgt=\"$data_dir/test.en\"\n\njob_name=\"wmt16-deen\"\nmodel_name=\"model.${job_name}\"\n\npython3 nmt.py \\\n    --cuda \\\n    --mode train \\\n    --vocab ${vocab_bin} \\\n    --save_to ${model_name} \\\n    --log_every 100 \\\n    --valid_niter 5000 \\\n    --valid_metric ppl \\\n    --save_model_after 1 \\\n    --beam_size 5 \\\n    --batch_size 64 \\\n    --hidden_size 256 \\\n    --embed_size 256 \\\n    --uniform_init 0.1 \\\n    --dropout 0.2 \\\n    --clip_grad 5.0 \\\n    --lr_decay 0.5 \\\n    --train_src ${train_src} \\\n    --train_tgt ${train_tgt} \\\n    --dev_src ${dev_src} \\\n    --dev_tgt ${dev_tgt} \\\n    --load_model \"$1\"\n\n"
  },
  {
    "path": "nmt/util.py",
    "content": "from collections import defaultdict\nimport numpy as np\n\ndef read_corpus(file_path, source):\n    data = []\n    for line in open(file_path):\n        sent = line.strip().split(' ')\n        # only append <s> and </s> to the target sentence\n        if source == 'tgt':\n            sent = ['<s>'] + sent + ['</s>']\n        data.append(sent)\n\n    return data\n\n\ndef batch_slice(data, batch_size, sort=True):\n    batched_data = []\n    batch_num = int(np.ceil(len(data) / float(batch_size)))\n    for i in range(batch_num):\n        cur_batch_size = batch_size if i < batch_num - 1 else len(data) - batch_size * i\n        src_sents = [data[i * batch_size + b][0] for b in range(cur_batch_size)]\n        tgt_sents = [data[i * batch_size + b][1] for b in range(cur_batch_size)]\n\n        if sort:\n            src_ids = sorted(range(cur_batch_size), key=lambda src_id: len(src_sents[src_id]), reverse=True)\n            src_sents = [src_sents[src_id] for src_id in src_ids]\n            tgt_sents = [tgt_sents[src_id] for src_id in src_ids]\n\n        batched_data.append((src_sents, tgt_sents))\n\n    return batched_data\n\n\ndef data_iter(data, batch_size, shuffle=True):\n    \"\"\"\n    randomly permute data, then sort by source length, and partition into batches\n    ensure that the length of source sentences in each batch is decreasing\n    \"\"\"\n\n    buckets = defaultdict(list)\n    for pair in data:\n        src_sent = pair[0]\n        buckets[len(src_sent)].append(pair)\n\n    batched_data = []\n    for src_len in buckets:\n        tuples = buckets[src_len]\n        if shuffle: np.random.shuffle(tuples)\n        batched_data.extend(batch_slice(tuples, batch_size))\n\n    if shuffle:\n        np.random.shuffle(batched_data)\n\n    for batch in batched_data:\n        yield batch\n\n"
  },
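  {
    "path": "nmt/example_data_iter.py",
    "content": "\"\"\"Tiny demo of data_iter, not part of the original pipeline.\n\nPairs are bucketed by exact source length and batched inside each bucket,\nand batch_slice (sort=True) orders every batch by decreasing source length,\nthe layout pack_padded_sequence expects. Run from nmt/ so util is importable.\n\"\"\"\nfrom util import data_iter\n\npairs = [(['w'] * n, ['<s>', 'x', '</s>']) for n in (3, 3, 5, 5, 4)]\n\n# with shuffle=False the buckets are traversed deterministically,\n# printing e.g. [3, 3] then [5, 5] then [4]\nfor src_batch, tgt_batch in data_iter(pairs, batch_size=2, shuffle=False):\n    print([len(s) for s in src_batch])\n"
  },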
  {
    "path": "nmt/vocab.py",
    "content": "from __future__ import print_function\nimport argparse\nfrom collections import Counter\nfrom itertools import chain\n\nimport torch\n\nfrom util import read_corpus\n\n\nclass VocabEntry(object):\n    def __init__(self):\n        self.word2id = dict()\n        self.unk_id = 3\n        self.word2id['<pad>'] = 0\n        self.word2id['<s>'] = 1\n        self.word2id['</s>'] = 2\n        self.word2id['<unk>'] = 3\n\n        self.id2word = {v: k for k, v in self.word2id.items()}\n\n    def __getitem__(self, word):\n        return self.word2id.get(word, self.unk_id)\n\n    def __contains__(self, word):\n        return word in self.word2id\n\n    def __setitem__(self, key, value):\n        raise ValueError('vocabulary is readonly')\n\n    def __len__(self):\n        return len(self.word2id)\n\n    def __repr__(self):\n        return 'Vocabulary[size=%d]' % len(self)\n\n    def id2word(self, wid):\n        return self.id2word[wid]\n\n    def add(self, word):\n        if word not in self:\n            wid = self.word2id[word] = len(self)\n            self.id2word[wid] = word\n            return wid\n        else:\n            return self[word]\n\n    @staticmethod\n    def from_corpus(corpus, size, remove_singleton=True):\n        vocab_entry = VocabEntry()\n\n        word_freq = Counter(chain(*corpus))\n        non_singletons = [w for w in word_freq if word_freq[w] > 1]\n        print('number of word types: %d, number of word types w/ frequency > 1: %d' % (len(word_freq),\n                                                                                       len(non_singletons)))\n\n        top_k_words = sorted(word_freq.keys(), reverse=True, key=word_freq.get)[:size]\n\n        for word in top_k_words:\n            if len(vocab_entry) < size:\n                if not (word_freq[word] == 1 and remove_singleton):\n                    vocab_entry.add(word)\n\n        return vocab_entry\n\n\nclass Vocab(object):\n    def __init__(self, src_sents, tgt_sents, src_vocab_size, tgt_vocab_size, remove_singleton=True):\n        assert len(src_sents) == len(tgt_sents)\n\n        print('initialize source vocabulary ..')\n        self.src = VocabEntry.from_corpus(src_sents, src_vocab_size, remove_singleton=remove_singleton)\n\n        print('initialize target vocabulary ..')\n        self.tgt = VocabEntry.from_corpus(tgt_sents, tgt_vocab_size, remove_singleton=remove_singleton)\n\n    def __repr__(self):\n        return 'Vocab(source %d words, target %d words)' % (len(self.src), len(self.tgt))\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--src_vocab_size', default=50000, type=int, help='source vocabulary size')\n    parser.add_argument('--tgt_vocab_size', default=50000, type=int, help='target vocabulary size')\n    parser.add_argument('--include_singleton', action='store_true', default=False, help='whether to include singleton'\n                                                                                        'in the vocabulary (default=False)')\n\n    parser.add_argument('--train_src', type=str, required=True, help='file of source sentences')\n    parser.add_argument('--train_tgt', type=str, required=True, help='file of target sentences')\n\n    parser.add_argument('--output', default='vocab.bin', type=str, help='output vocabulary file')\n\n    args = parser.parse_args()\n\n    print('read in source sentences: %s' % args.train_src)\n    print('read in target sentences: %s' % args.train_tgt)\n\n    src_sents = 
read_corpus(args.train_src, source='src')\n    tgt_sents = read_corpus(args.train_tgt, source='tgt')\n\n    vocab = Vocab(src_sents, tgt_sents, args.src_vocab_size, args.tgt_vocab_size, remove_singleton=not args.include_singleton)\n    print('generated vocabulary, source %d words, target %d words' % (len(vocab.src), len(vocab.tgt)))\n\n    torch.save(vocab, args.output)\n    print('vocabulary saved to %s' % args.output)\n"
  },
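  {
    "path": "nmt/example_vocab.py",
    "content": "\"\"\"Toy run of the vocabulary classes, not part of the original pipeline.\n\nShows that lookups fall back to <unk> (id 3) and that from_corpus drops\nsingletons unless remove_singleton=False; the sentences are made up.\nRun from nmt/ so vocab is importable.\n\"\"\"\nfrom vocab import Vocab\n\nsrc_sents = [['das', 'ist', 'gut'], ['das', 'ist', 'schlecht']]\ntgt_sents = [['<s>', 'this', 'is', 'good', '</s>'], ['<s>', 'this', 'is', 'bad', '</s>']]\n\nvocab = Vocab(src_sents, tgt_sents, src_vocab_size=100, tgt_vocab_size=100)\nprint(vocab)             # Vocab(source N words, target M words)\nprint(vocab.src['das'])  # appears twice, so it keeps its own id\nprint(vocab.src['gut'])  # singleton: dropped, maps to <unk> (id 3)\n"
  },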
  {
    "path": "train-dual.sh",
    "content": "#!/bin/bash\n\nnmtdir=/data/groups/chatbot/dl_data/wmt16-small\nlmdir=/data/groups/chatbot/dl_data/lm\nsrcdir=/data/groups/chatbot/dl_data/wmt16-dual\n\nnmtA=$nmtdir/model.wmt16-ende-small.best.bin\nnmtB=$nmtdir/model.wmt16-deen-small.best.bin\nlmA=$lmdir/wmt16-en.pt\nlmB=$lmdir/wmt16-de.pt\nlmA_dict=$lmdir/dict.en.pkl\nlmB_dict=$lmdir/dict.de.pkl\nsrcA=$srcdir/train-small.en\nsrcB=$srcdir/train-small.de\n\nsaveA=\"modelA\"\nsaveB=\"modelB\"\n\npython3 dual.py \\\n    --nmt $nmtA $nmtB \\\n    --lm $lmA $lmB \\\n    --dict $lmA_dict $lmB_dict \\\n    --src $srcA $srcB \\\n    --log_every 5 \\\n    --save_n_iter 400 \\\n    --alpha 0.01 \\\n    --model $saveA $saveB\n\n"
  },
  {
    "path": "util.py",
    "content": "from collections import defaultdict\nimport numpy as np\n\ndef read_corpus(file_path, source):\n    data = []\n    for line in open(file_path):\n        sent = line.strip().split(' ')\n        # only append <s> and </s> to the target sentence\n        if source == 'tgt':\n            sent = ['<s>'] + sent + ['</s>']\n        data.append(sent)\n\n    return data\n\n\ndef batch_slice(data, batch_size, sort=True):\n    batched_data = []\n    batch_num = int(np.ceil(len(data) / float(batch_size)))\n    for i in range(batch_num):\n        cur_batch_size = batch_size if i < batch_num - 1 else len(data) - batch_size * i\n        src_sents = [data[i * batch_size + b][0] for b in range(cur_batch_size)]\n        tgt_sents = [data[i * batch_size + b][1] for b in range(cur_batch_size)]\n\n        if sort:\n            src_ids = sorted(range(cur_batch_size), key=lambda src_id: len(src_sents[src_id]), reverse=True)\n            src_sents = [src_sents[src_id] for src_id in src_ids]\n            tgt_sents = [tgt_sents[src_id] for src_id in src_ids]\n\n        batched_data.append((src_sents, tgt_sents))\n\n    return batched_data\n\n\ndef data_iter(data, batch_size, shuffle=True):\n    \"\"\"\n    randomly permute data, then sort by source length, and partition into batches\n    ensure that the length of source sentences in each batch is decreasing\n    \"\"\"\n\n    buckets = defaultdict(list)\n    for pair in data:\n        src_sent = pair[0]\n        buckets[len(src_sent)].append(pair)\n\n    batched_data = []\n    for src_len in buckets:\n        tuples = buckets[src_len]\n        if shuffle: np.random.shuffle(tuples)\n        batched_data.extend(batch_slice(tuples, batch_size))\n\n    if shuffle:\n        np.random.shuffle(batched_data)\n\n    for batch in batched_data:\n        yield batch\n\n"
  },
  {
    "path": "vocab.py",
    "content": "from __future__ import print_function\nimport argparse\nfrom collections import Counter\nfrom itertools import chain\n\nimport torch\n\nfrom util import read_corpus\n\n\nclass VocabEntry(object):\n    def __init__(self):\n        self.word2id = dict()\n        self.unk_id = 3\n        self.word2id['<pad>'] = 0\n        self.word2id['<s>'] = 1\n        self.word2id['</s>'] = 2\n        self.word2id['<unk>'] = 3\n\n        self.id2word = {v: k for k, v in self.word2id.items()}\n\n    def __getitem__(self, word):\n        return self.word2id.get(word, self.unk_id)\n\n    def __contains__(self, word):\n        return word in self.word2id\n\n    def __setitem__(self, key, value):\n        raise ValueError('vocabulary is readonly')\n\n    def __len__(self):\n        return len(self.word2id)\n\n    def __repr__(self):\n        return 'Vocabulary[size=%d]' % len(self)\n\n    def id2word(self, wid):\n        return self.id2word[wid]\n\n    def add(self, word):\n        if word not in self:\n            wid = self.word2id[word] = len(self)\n            self.id2word[wid] = word\n            return wid\n        else:\n            return self[word]\n\n    @staticmethod\n    def from_corpus(corpus, size, remove_singleton=True):\n        vocab_entry = VocabEntry()\n\n        word_freq = Counter(chain(*corpus))\n        non_singletons = [w for w in word_freq if word_freq[w] > 1]\n        print('number of word types: %d, number of word types w/ frequency > 1: %d' % (len(word_freq),\n                                                                                       len(non_singletons)))\n\n        top_k_words = sorted(word_freq.keys(), reverse=True, key=word_freq.get)[:size]\n\n        for word in top_k_words:\n            if len(vocab_entry) < size:\n                if not (word_freq[word] == 1 and remove_singleton):\n                    vocab_entry.add(word)\n\n        return vocab_entry\n\n\nclass Vocab(object):\n    def __init__(self, src_sents, tgt_sents, src_vocab_size, tgt_vocab_size, remove_singleton=True):\n        assert len(src_sents) == len(tgt_sents)\n\n        print('initialize source vocabulary ..')\n        self.src = VocabEntry.from_corpus(src_sents, src_vocab_size, remove_singleton=remove_singleton)\n\n        print('initialize target vocabulary ..')\n        self.tgt = VocabEntry.from_corpus(tgt_sents, tgt_vocab_size, remove_singleton=remove_singleton)\n\n    def __repr__(self):\n        return 'Vocab(source %d words, target %d words)' % (len(self.src), len(self.tgt))\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--src_vocab_size', default=50000, type=int, help='source vocabulary size')\n    parser.add_argument('--tgt_vocab_size', default=50000, type=int, help='target vocabulary size')\n    parser.add_argument('--include_singleton', action='store_true', default=False, help='whether to include singleton'\n                                                                                        'in the vocabulary (default=False)')\n\n    parser.add_argument('--train_src', type=str, required=True, help='file of source sentences')\n    parser.add_argument('--train_tgt', type=str, required=True, help='file of target sentences')\n\n    parser.add_argument('--output', default='vocab.bin', type=str, help='output vocabulary file')\n\n    args = parser.parse_args()\n\n    print('read in source sentences: %s' % args.train_src)\n    print('read in target sentences: %s' % args.train_tgt)\n\n    src_sents = 
read_corpus(args.train_src, source='src')\n    tgt_sents = read_corpus(args.train_tgt, source='tgt')\n\n    vocab = Vocab(src_sents, tgt_sents, args.src_vocab_size, args.tgt_vocab_size, remove_singleton=not args.include_singleton)\n    print('generated vocabulary, source %d words, target %d words' % (len(vocab.src), len(vocab.tgt)))\n\n    torch.save(vocab, args.output)\n    print('vocabulary saved to %s' % args.output)\n"
  }
]