[
  {
    "path": ".gitignore",
    "content": "*.pyc\ndata/dcase\ndata/audioset\ndata/sequential\nworkspace/dcase\nworkspace/audioset\nworkspace/sequential\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Yun Wang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# cmu-thesis\n\nThis repository contains the code for three experiments in my PhD thesis, [Polyphonic Sound Event Detection with Weak Labeling](http://www.cs.cmu.edu/~yunwang/papers/cmu-thesis.pdf):\n\n* Sound event detection with **presence/absence labeling** on the **[DCASE 2017 challenge](http://www.cs.tut.fi/sgn/arg/dcase2017/challenge/task-large-scale-sound-event-detection)** (Chapter 3.2)\n* Sound event detection with **presence/absence labeling** on **[Google Audio Set](https://research.google.com/audioset/)** (Chapter 3.3)\n* Sound event detection with **sequential labeling** on a subset of **[Google Audio Set](https://research.google.com/audioset/)** (Chapter 4)\n\n## Prerequisites\n\nHardware:\n* A GPU\n* Large storage (1 TB recommended)\n\nSoftware:\n* Python 2.7\n* PyTorch (I used version 0.4.0a0+d3b6c5e)\n* numpy, scipy, [joblib](https://pypi.org/project/joblib/)\n\n## Quick Start\n\n```sh\n# Clone the repository\ngit clone https://github.com/MaigoAkisame/cmu-thesis.git\n\n# Download the data: may take up to 1 day!\ncd cmu-thesis/data\n./download.sh\n\n# Train a model for the DCASE experiment using default settings\ncd ../code/dcase\npython train.py            # Needs to run on a GPU\n\n# Evaluate the model at Checkpoint 25\npython eval.py --ckpt=25   # Needs to run on a GPU for the first time\n\n# Download and evaluate the TALNet model for the Audio Set experiment\ncd ../audioset\n./eval-TALNet.sh           # Needs to run on a GPU for the first time\n```\n\n## Organization of the Repository\n\n### code\n\nThe `code` directory contains three sub-directories: `dcase`, `audioset`, and `sequential`. These contain the code for the three experiments. 
In each subdirectory:\n\n* `Net.py` defines the network architecture (you don't need to execute this script directly);\n* `train.py` trains the network;\n* `eval.py` evaluates the network's performance.\n\nThe `train.py` and `eval.py` scripts can take many command-line arguments, which specify the architecture of the network and the hyperparameters used during training. If you encounter \"out of memory\" errors, a good idea is to reduce the batch size.\n\nSome scripts that may be of special interest:\n\n* `code/*/util_in.py`: Implements data balancing so that each minibatch contains roughly equal numbers of recordings of each event type;\n* `code/sequential/ctc.py`: My implementation of connectionist temporal classification (CTC);\n* `code/sequential/ctl.py`: My implementation of connectionist temporal localization (CTL).\n\n### data\n\nThe script `data/download.sh` will download and extract the following three archives in the `data` directory:\n\n* [dcase.tgz](http://islpc21.is.cs.cmu.edu/yunwang/git/cmu-thesis/data/dcase.tgz) (4.9 GB)\n* [audioset.tgz](http://islpc21.is.cs.cmu.edu/yunwang/git/cmu-thesis/data/audioset.tgz) (341 GB)\n* [sequential.tgz](http://islpc21.is.cs.cmu.edu/yunwang/git/cmu-thesis/data/sequential.tgz) (63 GB)\n\nThese archives contain Matlab data files (with the `.mat` extension) that store the filterbank features and ground truth labels. They can be loaded with the `scipy.io.loadmat` function in Python. 
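As a minimal sketch (with purely synthetic data), the round trip through `scipy.io.savemat` and `scipy.io.loadmat` looks like this; the file name, the number of recordings (n = 2), and the number of event types (m = 17) are made up for illustration, and the `hashes` matrix is omitted:\n\n```python\nimport os, tempfile\nimport numpy\nfrom scipy.io import loadmat, savemat\n\nn, m = 2, 17    # hypothetical: 2 recordings, 17 event types\ndata = {\n    'feat': numpy.zeros((n, 400, 64), dtype = 'float32'),   # filterbank features\n    'labels': numpy.zeros((n, m), dtype = 'bool'),          # presence/absence labels\n}\npath = os.path.join(tempfile.mkdtemp(), 'example.mat')\nsavemat(path, data)\n\nloaded = loadmat(path)\nprint(loaded['feat'].shape)     # (2, 400, 64)\nprint(loaded['labels'].shape)   # (2, 17)\n```\n\n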
Each Matlab file contains three matrices:\n\n* `feat`: Filterbank features, a float32 array of shape (n, 400, 64) (n recordings, 400 frames, 64 frequency bins);\n* `labels`:\n  * Presence/absence labeling, a boolean array of shape (n, m) (n recordings, m event types), or\n  * Strong labeling, a boolean array of shape (n, 100, m) (n recordings, 100 frames, m event types);\n* `hashes`: A character array of size (n, 11), containing the YouTube hash IDs of the recordings.\n\nTraining recordings are organized by class (so data balancing can be done easily), and each Matlab file contains up to 101 recordings. Validation and test/evaluation recordings are stored in Matlab files that contain up to 500 recordings each.\n\nBecause the data is so huge, I do not provide the code for downloading the raw data, extracting features, and organizing the features and labels into Matlab data files. The whole process took me more than a month and endless babysitting!\n\n### workspace\n\nThe training logs, trained models, predictions on the test/evaluation recordings, and evaluation results will be generated in this directory. The sub-directory names will reflect the network architecture and hyperparameters for training.\n\nThe script `code/audioset/eval-TALNet.sh` will download the TALNet model and store it at `workspace/audioset/TALNet/model/TALNet.pt`. At the time of my graduation (October 2018), this was the best model that could both classify and localize sound events on Google Audio Set.\n\n## Citing\n\nIf you use this code in your research, please cite my PhD thesis:\n\n* Yun Wang, \"Polyphonic sound event detection with weak labeling\", PhD thesis, Carnegie Mellon University, Oct. 2018.\n\nand/or the following publications:\n\n* Yun Wang, Juncheng Li and Florian Metze, \"A comparison of five multiple instance learning pooling functions for sound event detection with weak labeling,\" arXiv e-prints, Oct. 2018. [Online]. 
Available: <http://arxiv.org/abs/1810.09050>.\n* Yun Wang and Florian Metze, \"Connectionist temporal localization for sound event detection with sequential labeling,\" arXiv e-prints, Oct. 2018. [Online]. Available: <http://arxiv.org/abs/1810.09052>.\n"
  },
  {
    "path": "code/audioset/Net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy\n\nclass ConvBlock(nn.Module):\n    def __init__(self, n_input_feature_maps, n_output_feature_maps, kernel_size, batch_norm = False, pool_stride = None):\n        super(ConvBlock, self).__init__()\n        assert all(x % 2 == 1 for x in kernel_size)\n        self.n_input = n_input_feature_maps\n        self.n_output = n_output_feature_maps\n        self.kernel_size = kernel_size\n        self.batch_norm = batch_norm\n        self.pool_stride = pool_stride\n        # \"~batch_norm\" should be written as \"not batch_norm\"; otherwise ~True will evaluate to -2 and be treated as True.\n        # But I'll keep this error to avoid breaking existing models.\n        self.conv = nn.Conv2d(self.n_input, self.n_output, self.kernel_size, padding = tuple(x/2 for x in self.kernel_size), bias = ~batch_norm)\n        if batch_norm: self.bn = nn.BatchNorm2d(self.n_output)\n        nn.init.xavier_uniform(self.conv.weight)\n\n    def forward(self, x):\n        x = self.conv(x)\n        if self.batch_norm: x = self.bn(x)\n        x = F.relu(x)\n        if self.pool_stride is not None: x = F.max_pool2d(x, self.pool_stride)\n        return x\n\nclass Net(nn.Module):\n    def __init__(self, args):\n        super(Net, self).__init__()\n        self.__dict__.update(args.__dict__)     # Instill all args into self\n        assert self.n_conv_layers % self.n_pool_layers == 0\n        self.input_n_freq_bins = n_freq_bins = 64\n        self.output_size = 527\n        self.conv = []\n        pool_interval = self.n_conv_layers / self.n_pool_layers\n        n_input = 1\n        for i in range(self.n_conv_layers):\n            if (i + 1) % pool_interval == 0:        # this layer has pooling\n                n_freq_bins /= 2\n                n_output = self.embedding_size / n_freq_bins\n                pool_stride = (2, 2) if i < pool_interval * 2 else (1, 2)\n     
       else:\n                n_output = self.embedding_size * 2 / n_freq_bins\n                pool_stride = None\n            layer = ConvBlock(n_input, n_output, self.kernel_size, batch_norm = self.batch_norm, pool_stride = pool_stride)\n            self.conv.append(layer)\n            self.__setattr__('conv' + str(i + 1), layer)\n            n_input = n_output\n        self.gru = nn.GRU(self.embedding_size, self.embedding_size / 2, 1, batch_first = True, bidirectional = True)\n        self.fc_prob = nn.Linear(self.embedding_size, self.output_size)\n        if self.pooling == 'att':\n            self.fc_att = nn.Linear(self.embedding_size, self.output_size)\n        # Better initialization\n        nn.init.orthogonal(self.gru.weight_ih_l0); nn.init.constant(self.gru.bias_ih_l0, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0); nn.init.constant(self.gru.bias_hh_l0, 0)\n        nn.init.orthogonal(self.gru.weight_ih_l0_reverse); nn.init.constant(self.gru.bias_ih_l0_reverse, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0_reverse); nn.init.constant(self.gru.bias_hh_l0_reverse, 0)\n        nn.init.xavier_uniform(self.fc_prob.weight); nn.init.constant(self.fc_prob.bias, 0)\n        if self.pooling == 'att':\n            nn.init.xavier_uniform(self.fc_att.weight); nn.init.constant(self.fc_att.bias, 0)\n\n    def forward(self, x):\n        x = x.view((-1, 1, x.size(1), x.size(2)))                                                           # x becomes (batch, channel, time, freq)\n        for i in range(len(self.conv)):\n            if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n            x = self.conv[i](x)                                                                             # x becomes (batch, channel, time, freq)\n        x = x.permute(0, 2, 1, 3).contiguous()                                                              # x becomes (batch, time, channel, freq)\n        x = x.view((-1, x.size(1), x.size(2) * 
x.size(3)))                                                  # x becomes (batch, time, embedding_size)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x, _ = self.gru(x)                                                                                  # x becomes (batch, time, embedding_size)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        frame_prob = F.sigmoid(self.fc_prob(x))                                                             # shape of frame_prob: (batch, time, output_size)\n        frame_prob = torch.clamp(frame_prob, 1e-7, 1 - 1e-7)\n        if self.pooling == 'max':\n            global_prob, _ = frame_prob.max(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'ave':\n            global_prob = frame_prob.mean(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'lin':\n            global_prob = (frame_prob * frame_prob).sum(dim = 1) / frame_prob.sum(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'exp':\n            global_prob = (frame_prob * frame_prob.exp()).sum(dim = 1) / frame_prob.exp().sum(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'att':\n            frame_att = F.softmax(self.fc_att(x), dim = 1)\n            global_prob = (frame_prob * frame_att).sum(dim = 1)\n            return global_prob, frame_prob, frame_att\n\n    def predict(self, x, verbose = True, batch_size = 100):\n        # Predict in batches. 
Both input and output are numpy arrays.\n        # If verbose == True, return all of global_prob, frame_prob and att\n        # If verbose == False, only return global_prob\n        result = []\n        for i in range(0, len(x), batch_size):\n            with torch.no_grad():\n                input = Variable(torch.from_numpy(x[i : i + batch_size])).cuda()\n                output = self.forward(input)\n                if not verbose: output = output[:1]\n                result.append([var.data.cpu().numpy() for var in output])\n        result = tuple(numpy.concatenate(items) for items in zip(*result))\n        return result if verbose else result[0]\n"
  },
  {
    "path": "code/audioset/eval-TALNet.sh",
    "content": "TALNet_FILE=../../workspace/audioset/TALNet/model/TALNet.pt\nif ! [ -f $TALNet_FILE ]; then\n  mkdir -p $(dirname $TALNet_FILE)\n  wget -O $TALNet_FILE http://islpc21.is.cs.cmu.edu/yunwang/git/cmu-thesis/model/TALNet.pt\nfi\npython eval.py --TALNet\n"
  },
  {
    "path": "code/audioset/eval.py",
    "content": "import sys, os, os.path\nimport argparse\nimport numpy\nfrom util_out import *\nfrom util_f1 import *\nfrom scipy.io import loadmat, savemat\n\n# Parse input arguments\ndef mybool(s):\n    return s.lower() in ['t', 'true', 'y', 'yes', '1']\nparser = argparse.ArgumentParser()\nparser.add_argument('--TALNet', action = 'store_true')              # specify this to evaluate the pre-trained TALNet model\nparser.add_argument('--embedding_size', type = int, default = 1024) # this is the embedding size after a pooling layer\n                                                                    # after a non-pooling layer, the embedding size will be twice this much\nparser.add_argument('--n_conv_layers', type = int, default = 10)\nparser.add_argument('--kernel_size', type = str, default = '3')     # 'n' or 'nxm'\nparser.add_argument('--n_pool_layers', type = int, default = 5)     # the pooling layers will be inserted uniformly into the conv layers\n                                                                    # there should be at least 2 and at most 6 pooling layers\n                                                                    # the first two pooling layers will have stride (2,2); later ones will have stride (1,2)\nparser.add_argument('--batch_norm', type = mybool, default = True)\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--pooling', type = str, default = 'lin', choices = ['max', 'ave', 'lin', 'exp', 'att'])\nparser.add_argument('--batch_size', type = int, default = 250)\nparser.add_argument('--ckpt_size', type = int, default = 1000)      # how many batches per checkpoint\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = ['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 1e-3)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, default = 0.8)\nparser.add_argument('--random_seed', type = int, 
default = 15213)\nparser.add_argument('--ckpt', type = int)\nargs = parser.parse_args()\nif 'x' not in args.kernel_size:\n    args.kernel_size = args.kernel_size + 'x' + args.kernel_size\n\n# Locate model file and prepare directories for prediction and evaluation\nexpid = 'TALNet' if args.TALNet else 'embed%d-%dC%dP-kernel%s-%s-drop%.1f-%s-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.embedding_size,\n    args.n_conv_layers,\n    args.n_pool_layers,\n    args.kernel_size,\n    'bn' if args.batch_norm else 'nobn',\n    args.dropout,\n    args.pooling,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/audioset', expid)\nPRED_PATH = os.path.join(WORKSPACE, 'pred')\nif not os.path.exists(PRED_PATH): os.makedirs(PRED_PATH)\nEVAL_PATH = os.path.join(WORKSPACE, 'eval')\nif not os.path.exists(EVAL_PATH): os.makedirs(EVAL_PATH)\nif args.TALNet:\n    MODEL_FILE = os.path.join(WORKSPACE, 'model', 'TALNet.pt')\n    PRED_FILE = os.path.join(PRED_PATH, 'TALNet.mat')\n    EVAL_FILE = os.path.join(EVAL_PATH, 'TALNet.txt')\nelse:\n    MODEL_FILE = os.path.join(WORKSPACE, 'model', 'checkpoint%d.pt' % args.ckpt)\n    PRED_FILE = os.path.join(PRED_PATH, 'checkpoint%d.mat' % args.ckpt)\n    EVAL_FILE = os.path.join(EVAL_PATH, 'checkpoint%d.txt' % args.ckpt)\nwith open(EVAL_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    print s\n    with open(EVAL_FILE, 'a') as f:\n        f.write(s + '\\n')\n\nif os.path.exists(PRED_FILE):\n    # Load saved predictions, no need to use GPU\n    data = loadmat(PRED_FILE)\n    dcase_thres = data['dcase_thres'].ravel()\n    dcase_test_y = data['dcase_test_y']\n    dcase_test_frame_y = data['dcase_test_frame_y']\n    dcase_test_outputs = []\n    dcase_test_outputs.append(data['dcase_test_global_prob'])\n    dcase_test_outputs.append(data['dcase_test_frame_prob'])\n    if args.pooling == 'att':\n     
   dcase_test_outputs.append(data['dcase_test_frame_att'])\n    gas_eval_y = data['gas_eval_y']\n    gas_eval_global_prob = data['gas_eval_global_prob']\nelse:\n    import torch\n    import torch.nn as nn\n    from torch.optim import *\n    from torch.optim.lr_scheduler import *\n    from torch.autograd import Variable\n    from Net import Net\n    from util_in import *\n\n    # Load model\n    args.kernel_size = tuple(int(x) for x in args.kernel_size.split('x'))\n    model = Net(args).cuda()\n    model.load_state_dict(torch.load(MODEL_FILE)['model'])\n    model.eval()\n\n    # Load DCASE data\n    dcase_valid_x, dcase_valid_y, _ = bulk_load('DCASE_valid')\n    dcase_test_x, dcase_test_y, dcase_test_hashes = bulk_load('DCASE_test')\n    dcase_test_frame_y = load_dcase_test_frame_truth()\n    DCASE_CLASS_IDS = [318, 324, 341, 321, 307, 310, 314, 397, 325, 326, 323, 319, 14, 342, 329, 331, 316]\n\n    # Predict on DCASE data\n    dcase_valid_global_prob = model.predict(dcase_valid_x, verbose = False)[:, DCASE_CLASS_IDS]\n    dcase_thres = optimize_micro_avg_f1(dcase_valid_global_prob, dcase_valid_y)\n    dcase_test_outputs = model.predict(dcase_test_x, verbose = True)\n    dcase_test_outputs = tuple(x[..., DCASE_CLASS_IDS] for x in dcase_test_outputs)\n\n    # Load GAS data\n    gas_eval_x, gas_eval_y, gas_eval_hashes = bulk_load('GAS_eval')\n\n    # Predict on GAS data\n    gas_eval_global_prob = model.predict(gas_eval_x, verbose = False)\n\n    # Save predictions\n    data = {}\n    data['dcase_thres'] = dcase_thres\n    data['dcase_test_hashes'] = dcase_test_hashes\n    data['dcase_test_y'] = dcase_test_y\n    data['dcase_test_frame_y'] = dcase_test_frame_y\n    data['dcase_test_global_prob'] = dcase_test_outputs[0]\n    data['dcase_test_frame_prob'] = dcase_test_outputs[1]\n    if args.pooling == 'att':\n        data['dcase_test_frame_att'] = dcase_test_outputs[2]\n    data['gas_eval_hashes'] = gas_eval_hashes\n    data['gas_eval_y'] = gas_eval_y\n    
data['gas_eval_global_prob'] = gas_eval_global_prob\n    savemat(PRED_FILE, data)\n\n# Evaluation on DCASE 2017\nwrite_log('Performance on DCASE 2017:')\nwrite_log('')\nwrite_log('           ||          ||            Task A (recording level)           ||                       Task B (1-second segment level)                       ')\nwrite_log('     CLASS ||    THRES ||   TP |   FN |   FP |  Prec. | Recall |     F1 ||   TP |   FN |   FP |  Prec. | Recall |     F1 |  Sub |  Del |  Ins |     ER ')\nFORMAT1 = ' Micro Avg ||          || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f | %#4d | %#4d | %#4d | %6.02f '\nFORMAT2 = ' %######9d || %8.0006f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f |      |      |      |        '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT1)\nwrite_log(SEP)\n\n# dcase_test_y and dcase_test_frame_y are inconsistent in some places\n# so when you evaluate Task A, use a \"fake_dcase_test_frame_y\" derived from dcase_test_y\nfake_dcase_test_frame_y = numpy.tile(numpy.expand_dims(dcase_test_y, 1), (1, 100, 1))\n\n# Micro-average performance across all classes\nres_taskA = dcase_sed_eval(dcase_test_outputs, args.pooling, dcase_thres, fake_dcase_test_frame_y, 100, verbose = True)\nres_taskB = dcase_sed_eval(dcase_test_outputs, args.pooling, dcase_thres, dcase_test_frame_y, 10, verbose = True)\nwrite_log(FORMAT1 % (res_taskA.TP, res_taskA.FN, res_taskA.FP, res_taskA.precision, res_taskA.recall, res_taskA.F1,\n                     res_taskB.TP, res_taskB.FN, res_taskB.FP, res_taskB.precision, res_taskB.recall, res_taskB.F1,\n                     res_taskB.sub, res_taskB.dele, res_taskB.ins, res_taskB.ER))\nwrite_log(SEP)\n\n# Class-wise performance\nN_CLASSES = dcase_test_outputs[0].shape[-1]\nfor i in range(N_CLASSES):\n    outputs = [x[..., i:i+1] for x in dcase_test_outputs]\n    res_taskA = dcase_sed_eval(outputs, args.pooling, 
dcase_thres[i], fake_dcase_test_frame_y[..., i:i+1], 100, verbose = True)\n    res_taskB = dcase_sed_eval(outputs, args.pooling, dcase_thres[i], dcase_test_frame_y[..., i:i+1], 10, verbose = True)\n    write_log(FORMAT2 % (i, dcase_thres[i],\n                         res_taskA.TP, res_taskA.FN, res_taskA.FP, res_taskA.precision, res_taskA.recall, res_taskA.F1,\n                         res_taskB.TP, res_taskB.FN, res_taskB.FP, res_taskB.precision, res_taskB.recall, res_taskB.F1))\n\n# Evaluation on Google Audio Set\nwrite_log('')\nwrite_log('Performance on Google Audio Set:')\nwrite_log('')\nwrite_log(\"   CLASS ||    AP |   AUC |    d' \")\nFORMAT  = ' %00007s || %5.3f | %5.3f |%6.03f '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT)\nwrite_log(SEP)\n\nclasswise = []\nN_CLASSES = gas_eval_global_prob.shape[-1]\nfor i in range(N_CLASSES):\n    classwise.append(gas_eval(gas_eval_global_prob[:,i], gas_eval_y[:,i]))      # AP, AUC, dprime\nmap, mauc = numpy.array(classwise).mean(axis = 0)[:2]\nwrite_log(FORMAT % ('Average', map, mauc, dprime(mauc)))\nwrite_log(SEP)\nfor i in range(N_CLASSES):\n    write_log(FORMAT % ((str(i),) + classwise[i]))\n"
  },
  {
    "path": "code/audioset/train.py",
    "content": "import sys, os, os.path, time\nimport argparse\nimport numpy\nimport torch\nimport torch.nn as nn\nfrom torch.optim import *\nfrom torch.optim.lr_scheduler import *\nfrom torch.autograd import Variable\nfrom Net import Net\nfrom util_in import *\nfrom util_out import *\nfrom util_f1 import *\n\ntorch.backends.cudnn.benchmark = True\n\n# Parse input arguments\ndef mybool(s):\n    return s.lower() in ['t', 'true', 'y', 'yes', '1']\nparser = argparse.ArgumentParser()\nparser.add_argument('--embedding_size', type = int, default = 1024) # this is the embedding size after a pooling layer\n                                                                    # after a non-pooling layer, the embedding size will be twice this much\nparser.add_argument('--n_conv_layers', type = int, default = 10)\nparser.add_argument('--kernel_size', type = str, default = '3')     # 'n' or 'nxm'\nparser.add_argument('--n_pool_layers', type = int, default = 5)     # the pooling layers will be inserted uniformly into the conv layers\n                                                                    # there should be at least 2 and at most 6 pooling layers\n                                                                    # the first two pooling layers will have stride (2,2); later ones will have stride (1,2)\nparser.add_argument('--batch_norm', type = mybool, default = True)\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--pooling', type = str, default = 'lin', choices = ['max', 'ave', 'lin', 'exp', 'att'])\nparser.add_argument('--batch_size', type = int, default = 250)\nparser.add_argument('--ckpt_size', type = int, default = 1000)      # how many batches per checkpoint\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = ['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 1e-3)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, 
default = 0.8)\nparser.add_argument('--max_ckpt', type = int, default = 30)\nparser.add_argument('--random_seed', type = int, default = 15213)\nargs = parser.parse_args()\nif 'x' not in args.kernel_size:\n    args.kernel_size = args.kernel_size + 'x' + args.kernel_size\n\nnumpy.random.seed(args.random_seed)\n\n# Prepare log file and model directory\nexpid = 'embed%d-%dC%dP-kernel%s-%s-drop%.1f-%s-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.embedding_size,\n    args.n_conv_layers,\n    args.n_pool_layers,\n    args.kernel_size,\n    'bn' if args.batch_norm else 'nobn',\n    args.dropout,\n    args.pooling,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/audioset', expid)\nMODEL_PATH = os.path.join(WORKSPACE, 'model')\nif not os.path.exists(MODEL_PATH): os.makedirs(MODEL_PATH)\nLOG_FILE = os.path.join(WORKSPACE, 'train.log')\nwith open(LOG_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    timestamp = time.strftime('%m-%d %H:%M:%S')\n    msg = '[' + timestamp + '] ' + s\n    print msg\n    with open(LOG_FILE, 'a') as f:\n        f.write(msg + '\\n')\n\n# Load data\nwrite_log('Loading data ...')\ntrain_gen = batch_generator(batch_size = args.batch_size, random_seed = args.random_seed)\ngas_valid_x, gas_valid_y, _ = bulk_load('GAS_valid')\ngas_eval_x, gas_eval_y, _ = bulk_load('GAS_eval')\ndcase_valid_x, dcase_valid_y, _ = bulk_load('DCASE_valid')\ndcase_test_x, dcase_test_y, _ = bulk_load('DCASE_test')\ndcase_test_frame_truth = load_dcase_test_frame_truth()\nDCASE_CLASS_IDS = [318, 324, 341, 321, 307, 310, 314, 397, 325, 326, 323, 319, 14, 342, 329, 331, 316]\n\n# Build model\nargs.kernel_size = tuple(int(x) for x in args.kernel_size.split('x'))\nmodel = Net(args).cuda()\nif args.optimizer == 'sgd':\n    optimizer = SGD(model.parameters(), lr = args.init_lr, momentum = 0.9, nesterov = True)\nelif 
args.optimizer == 'adam':\n    optimizer = Adam(model.parameters(), lr = args.init_lr)\nscheduler = ReduceLROnPlateau(optimizer, mode = 'max', factor = args.lr_factor, patience = args.lr_patience) if args.lr_factor < 1.0 else None\ncriterion = nn.BCELoss()\n\n# Train model\nwrite_log('Training model ...')\nwrite_log('                            ||       GAS_VALID       ||        GAS_EVAL       || D_VAL ||              DCASE_TEST               ')\nwrite_log(\" CKPT |    LR    |  Tr.LOSS ||  MAP  |  MAUC |   d'  ||  MAP  |  MAUC |   d'  || Gl.F1 || Gl.F1 | Fr.ER | Fr.F1 | 1s.ER | 1s.F1 \")\nFORMAT  = ' %#4d | %8.0003g | %8.0006f || %5.3f | %5.3f |%6.03f || %5.3f | %5.3f |%6.03f || %5.3f || %5.3f | %5.3f | %5.3f | %5.3f | %5.3f '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT)\nwrite_log(SEP)\n\nfor checkpoint in range(1, args.max_ckpt + 1):\n    # Train for args.ckpt_size batches\n    model.train()\n    train_loss = 0\n    for batch in range(1, args.ckpt_size + 1):\n        x, y = next(train_gen)\n        optimizer.zero_grad()\n        global_prob = model(x)[0]\n        global_prob.clamp_(min = 1e-7, max = 1 - 1e-7)\n        loss = criterion(global_prob, y)\n        train_loss += loss.data[0]\n        if numpy.isnan(train_loss) or numpy.isinf(train_loss): break\n        loss.backward()\n        optimizer.step()\n        sys.stderr.write('Checkpoint %d, Batch %d / %d, avg train loss = %f\\r' % \\\n                         (checkpoint, batch, args.ckpt_size, train_loss / batch))\n        del x, y, global_prob, loss         # This line and next line: to save GPU memory\n        torch.cuda.empty_cache()            # I don't know if they're useful or not\n    train_loss /= args.ckpt_size\n\n    # Evaluate model\n    model.eval()\n    sys.stderr.write('Evaluating model on GAS_VALID ...\\r')\n    global_prob = model.predict(gas_valid_x, verbose = False)\n    gv_map, gv_mauc, gv_dprime = gas_eval(global_prob, gas_valid_y)\n    sys.stderr.write('Evaluating model 
on GAS_EVAL ... \\r')\n    global_prob = model.predict(gas_eval_x, verbose = False)\n    ge_map, ge_mauc, ge_dprime = gas_eval(global_prob, gas_eval_y)\n    sys.stderr.write('Evaluating model on DCASE_VALID ...\\r')\n    global_prob = model.predict(dcase_valid_x, verbose = False)[:, DCASE_CLASS_IDS]\n    thres = optimize_micro_avg_f1(global_prob, dcase_valid_y)\n    dv_f1 = f1(global_prob >= thres, dcase_valid_y)\n    sys.stderr.write('Evaluating model on DCASE_TEST ... \\r')\n    outputs = model.predict(dcase_test_x, verbose = True)\n    outputs = tuple(x[..., DCASE_CLASS_IDS] for x in outputs)\n    dt_f1 = f1(outputs[0] >= thres, dcase_test_y)\n    dt_frame_er, dt_frame_f1 = dcase_sed_eval(outputs, args.pooling, thres, dcase_test_frame_truth, 1)\n    dt_1s_er, dt_1s_f1 = dcase_sed_eval(outputs, args.pooling, thres, dcase_test_frame_truth, 10)\n\n    # Write log\n    write_log(FORMAT % (\n        checkpoint, optimizer.param_groups[0]['lr'], train_loss,\n        gv_map, gv_mauc, gv_dprime,\n        ge_map, ge_mauc, ge_dprime,\n        dv_f1, dt_f1, dt_frame_er, dt_frame_f1, dt_1s_er, dt_1s_f1\n    ))\n\n    # Abort if training has gone mad\n    if numpy.isnan(train_loss) or numpy.isinf(train_loss):\n        write_log('Aborted.')\n        break\n\n    # Save model. Too bad I can't save the scheduler\n    MODEL_FILE = os.path.join(MODEL_PATH, 'checkpoint%d.pt' % checkpoint)\n    state = {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}\n    sys.stderr.write('Saving model to %s ...\\r' % MODEL_FILE)\n    torch.save(state, MODEL_FILE)\n\n    # Update learning rate\n    if scheduler is not None:\n        scheduler.step(gv_map)\n\nwrite_log('DONE!')\n"
  },
  {
    "path": "code/audioset/util_f1.py",
    "content": "import numpy\n\n# Compute F1 given predictions and truth\ndef f1(pred, truth):\n    return 2.0 * (pred & truth).sum() / (pred.sum() + truth.sum())\n\n# Given scores and truth for a single class (as 1-D numpy arrays), find the optimal threshold and corresponding F1\n# Statistics of other classes may be given to optimize micro-average F1\ndef optimize_f1(scores, truth, extraNcorr = 0, extraNtrue = 0, extraNpred = 0):\n    # Start with predicting everything as negative\n    best_thres = numpy.inf\n    best_f1 = 0.0\n    num = extraNcorr                                # number of correctly predicted instances\n    den = extraNtrue + extraNpred + truth.sum()     # number of predicted instances + true instances\n    instances = [(-numpy.inf, False)] + sorted(zip(scores, truth))\n    # Lower the threshold gradually\n    for i in range(len(instances) - 1, 0, -1):\n        if instances[i][1]: num += 1\n        den += 1\n        if instances[i][0] > instances[i-1][0]:     # Can put threshold here\n            f1 = 2.0 * num / den\n            if f1 > best_f1:\n                best_thres = (instances[i][0] + instances[i-1][0]) / 2\n                best_f1 = f1\n    return best_thres, best_f1\n\n# Given scores and truth for many classes (as 2-D numpy arrays),\n# find the optimal class-specific thresholds (as a 1-D numpy array) that maximize the micro-average F1\n# The algorithm is stochastic, but I have always observed deterministic results\ndef optimize_micro_avg_f1(scores, truth):\n    # First optimize each class individually\n    nClasses = truth.shape[1]\n    thres = numpy.zeros(nClasses, dtype = 'float64')\n    for i in range(nClasses):\n        thres[i], _ = optimize_f1(scores[:,i], truth[:,i])\n    Ntrue = truth.sum(axis = 0)\n    Npred = (scores >= thres).sum(axis = 0)\n    Ncorr = ((scores >= thres) & truth).sum(axis = 0)\n\n    # Repeatedly re-tune the threshold for each class until convergence\n    candidates = range(nClasses)\n    while len(candidates) > 
0:\n        i = numpy.random.choice(candidates)\n        candidates.remove(i)\n        old_thres = thres[i]\n        thres[i], _ = optimize_f1(\n            scores[:,i],\n            truth[:,i],\n            extraNcorr = Ncorr.sum() - Ncorr[i],\n            extraNtrue = Ntrue.sum() - Ntrue[i],\n            extraNpred = Npred.sum() - Npred[i],\n        )\n        if thres[i] != old_thres:\n            Npred[i] = (scores[:,i] >= thres[i]).sum(axis = 0)\n            Ncorr[i] = ((scores[:,i] >= thres[i]) & truth[:,i]).sum(axis = 0)\n            candidates = range(nClasses)\n            candidates.remove(i)\n\n    return thres\n"
  },
  {
    "path": "code/audioset/util_in.py",
    "content": "import sys, os, os.path, glob\nimport cPickle\nfrom scipy.io import loadmat\nimport numpy\nfrom multiprocessing import Process, Queue\nimport torch\nfrom torch.autograd import Variable\n\nN_CLASSES = 527\nN_WORKERS = 6\n\nGAS_FEATURE_DIR = '../../data/audioset'\nDCASE_FEATURE_DIR = '../../data/dcase'\nwith open(os.path.join(GAS_FEATURE_DIR, 'normalizer.pkl'), 'rb') as f:\n    mu, sigma = cPickle.load(f)\n\ndef sample_generator(file_list, random_seed = 15213):\n    rng = numpy.random.RandomState(random_seed)\n    while True:\n        rng.shuffle(file_list)\n        for filename in file_list:\n            data = loadmat(filename)\n            feat = ((data['feat'] - mu) / sigma).astype('float32')\n            labels = data['labels'].astype('float32')\n            for i in range(len(data['feat'])):\n                yield feat[i], labels[i]\n\ndef worker(queues, file_lists, random_seed):\n    generators = [sample_generator(file_lists[i], random_seed + i) for i in range(len(file_lists))]\n    while True:\n        for gen, q in zip(generators, queues):\n            q.put(next(gen))\n\ndef batch_generator(batch_size, random_seed = 15213):\n    queues = [Queue(5) for class_id in range(N_CLASSES)]\n    file_lists = [sorted(glob.glob(os.path.join(GAS_FEATURE_DIR, 'GAS_train_unbalanced_class%03d_part*.mat' % class_id))) for class_id in range(N_CLASSES)]\n\n    for worker_id in range(N_WORKERS):\n        p = Process(target = worker, args = (queues[worker_id::N_WORKERS], file_lists[worker_id::N_WORKERS], random_seed))\n        p.daemon = True\n        p.start()\n\n    rng = numpy.random.RandomState(random_seed)\n    batch = []\n    while True:\n        rng.shuffle(queues)\n        for q in queues:\n            batch.append(q.get())\n            if len(batch) == batch_size:\n                yield tuple(Variable(torch.from_numpy(numpy.stack(x))).cuda() for x in zip(*batch))\n                batch = []\n\ndef bulk_load(prefix):\n    feat = []; labels = []; hashes = 
[]\n    for filename in sorted(glob.glob(os.path.join(GAS_FEATURE_DIR, '%s_*.mat' % prefix)) +\n                           glob.glob(os.path.join(DCASE_FEATURE_DIR, '%s_*.mat' % prefix))):\n        data = loadmat(filename)\n        feat.append(((data['feat'] - mu) / sigma).astype('float32'))\n        labels.append(data['labels'].astype('bool'))\n        hashes.append(data['hashes'])\n    return numpy.concatenate(feat), numpy.concatenate(labels), numpy.concatenate(hashes)\n\ndef load_dcase_test_frame_truth():\n    return cPickle.load(open(os.path.join(DCASE_FEATURE_DIR, 'DCASE_test_frame_label.pkl'), 'rb'))\n"
  },
  {
    "path": "code/audioset/util_out.py",
    "content": "from scipy import stats\nimport numpy\n\ndef roc(pred, truth):\n    data = numpy.array(sorted(zip(pred, truth), reverse = True))\n    pred, truth = data[:,0], data[:,1].astype(\"bool\")\n    TP = truth.cumsum()\n    FP = (1 - truth).cumsum()\n    mask = numpy.concatenate([numpy.diff(pred) < 0, numpy.array([True])])\n    TP = numpy.concatenate([numpy.array([0]), TP[mask]])\n    FP = numpy.concatenate([numpy.array([0]), FP[mask]])\n    return TP, FP\n\ndef ap_and_auc(pred, truth):\n    TP, FP = roc(pred, truth)\n    auc = ((TP[1:] + TP[:-1]) * numpy.diff(FP)).sum() / (2 * TP[-1] * FP[-1])\n    precision = TP[1:] / (TP + FP)[1:]\n    weight = numpy.diff(TP)\n    ap = (precision * weight).sum() / TP[-1]\n    return ap, auc\n\ndef dprime(auc):\n    return stats.norm().ppf(auc) * numpy.sqrt(2.0)\n\ndef gas_eval(pred, truth):\n    if truth.ndim == 1:\n        ap, auc = ap_and_auc(pred, truth)\n    else:\n        ap, auc = numpy.array([ap_and_auc(pred[:,i], truth[:,i]) for i in range(truth.shape[1]) if truth[:,i].any()]).mean(axis = 0)\n    return ap, auc, dprime(auc)\n\ndef dcase_sed_eval(outputs, pooling, thres, truth, seg_len, verbose = False):\n    pred = outputs[1].reshape((-1, seg_len, outputs[1].shape[-1]))\n    if pooling == 'max':\n        seg_prob = pred.max(axis = 1)\n    elif pooling == 'ave':\n        seg_prob = pred.mean(axis = 1)\n    elif pooling == 'lin':\n        seg_prob = (pred * pred).sum(axis = 1) / pred.sum(axis = 1)\n    elif pooling == 'exp':\n        seg_prob = (pred * numpy.exp(pred)).sum(axis = 1) / numpy.exp(pred).sum(axis = 1)\n    elif pooling == 'att':\n        att = outputs[2].reshape((-1, seg_len, outputs[2].shape[-1]))\n        seg_prob = (pred * att).sum(axis = 1) / att.sum(axis = 1)\n\n    pred = seg_prob >= thres\n    truth = truth.reshape((-1, seg_len, truth.shape[-1])).max(axis = 1)\n\n    if not verbose:\n        Ntrue = truth.sum(axis = 1)\n        Npred = pred.sum(axis = 1)\n        Ncorr = (truth & pred).sum(axis 
= 1)\n        Nmiss = Ntrue - Ncorr\n        Nfa = Npred - Ncorr\n\n        error_rate = 1.0 * numpy.maximum(Nmiss, Nfa).sum() / Ntrue.sum()\n        f1 = 2.0 * Ncorr.sum() / (Ntrue + Npred).sum()\n        return error_rate, f1\n    else:\n        class Object(object):\n            pass\n        res = Object()\n        res.TP = (truth & pred).sum()\n        res.FN = (truth & ~pred).sum()\n        res.FP = (~truth & pred).sum()\n        res.precision = 100.0 * res.TP / (res.TP + res.FP)\n        res.recall = 100.0 * res.TP / (res.TP + res.FN)\n        res.F1 = 200.0 * res.TP / (2 * res.TP + res.FP + res.FN)\n        res.sub = numpy.minimum((truth & ~pred).sum(axis = 1), (~truth & pred).sum(axis = 1)).sum()\n        res.dele = res.FN - res.sub\n        res.ins = res.FP - res.sub\n        res.ER = 100.0 * (res.sub + res.dele + res.ins) / (res.TP + res.FN)\n        return res\n"
  },
  {
    "path": "code/dcase/Net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy\n\nclass Net(nn.Module):\n    def __init__(self, args):\n        super(Net, self).__init__()\n        self.pooling = args.pooling\n        self.dropout = args.dropout\n        self.conv1 = nn.Conv2d(1, 32, (5, 5), padding = (2, 2))                         # (1, 400, 64) -> (32, 400, 64)\n        self.conv2 = nn.Conv2d(32, 64, (5, 5), padding = (2, 2))                        # (32, 400, 32) -> (64, 400, 32)\n        self.conv3 = nn.Conv2d(64, 128, (5, 5), padding = (2, 2))                       # (64, 200, 16) -> (128, 200, 16)\n        self.gru = nn.GRU(1024, 100, 1, batch_first = True, bidirectional = True)\n        self.fc_prob = nn.Linear(200, 17)\n        if self.pooling == 'att':\n            self.fc_att = nn.Linear(200, 17)\n        # Better initialization\n        nn.init.xavier_uniform(self.conv1.weight); nn.init.constant(self.conv1.bias, 0)\n        nn.init.xavier_uniform(self.conv2.weight); nn.init.constant(self.conv2.bias, 0)\n        nn.init.xavier_uniform(self.conv3.weight); nn.init.constant(self.conv3.bias, 0)\n        nn.init.orthogonal(self.gru.weight_ih_l0); nn.init.constant(self.gru.bias_ih_l0, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0); nn.init.constant(self.gru.bias_hh_l0, 0)\n        nn.init.orthogonal(self.gru.weight_ih_l0_reverse); nn.init.constant(self.gru.bias_ih_l0_reverse, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0_reverse); nn.init.constant(self.gru.bias_hh_l0_reverse, 0)\n        nn.init.xavier_uniform(self.fc_prob.weight); nn.init.constant(self.fc_prob.bias, 0)\n        if self.pooling == 'att':\n            nn.init.xavier_uniform(self.fc_att.weight); nn.init.constant(self.fc_att.bias, 0)\n\n    def forward(self, x):\n        # shape of x: (batch, time, frequency) = (batch, 400, 64)\n        x = x.view((-1, 1, x.size(1), x.size(2)))               # x becomes (batch, channel, time, 
frequency) = (batch, 1, 400, 64)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x = F.max_pool2d(F.relu(self.conv1(x)), (1, 2))         # (batch, 32, 400, 32)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))         # (batch, 64, 200, 16)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))         # (batch, 128, 100, 8)\n        x = x.permute(0, 2, 1, 3).contiguous()                  # x becomes (batch, time, channel, frequency) = (batch, 100, 128, 8)\n        x = x.view((-1, x.size(1), x.size(2) * x.size(3)))      # x becomes (batch, time, channel * frequency) = (batch, 100, 1024)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x, _ = self.gru(x)                                          # (batch, 100, 200)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        frame_prob = F.sigmoid(self.fc_prob(x))                     # shape of frame_prob: (batch, time, class) = (batch, 100, 17)\n        if self.pooling == 'max':\n            global_prob, _ = frame_prob.max(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'ave':\n            global_prob = frame_prob.mean(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'lin':\n            global_prob = (frame_prob * frame_prob).sum(dim = 1) / frame_prob.sum(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'exp':\n            global_prob = (frame_prob * frame_prob.exp()).sum(dim = 1) / frame_prob.exp().sum(dim = 1)\n            return global_prob, frame_prob\n        elif self.pooling == 'att':\n            frame_att = F.softmax(self.fc_att(x), dim = 1)\n            global_prob = (frame_prob 
* frame_att).sum(dim = 1)\n            return global_prob, frame_prob, frame_att\n\n    def predict(self, x, verbose = True, batch_size = 100):\n        # Predict in batches. Both input and output are numpy arrays.\n        # If verbose == True, return all of global_prob, frame_prob and att\n        # If verbose == False, only return global_prob\n        result = []\n        for i in range(0, len(x), batch_size):\n            with torch.no_grad():\n                input = Variable(torch.from_numpy(x[i : i + batch_size])).cuda()\n                output = self.forward(input)\n                if not verbose: output = output[:1]\n                result.append([var.data.cpu().numpy() for var in output])\n        result = tuple(numpy.concatenate(items) for items in zip(*result))\n        return result if verbose else result[0]\n"
  },
  {
    "path": "code/dcase/eval.py",
    "content": "import sys, os, os.path\nimport argparse\nimport numpy\nfrom util_out import *\nfrom util_f1 import *\nfrom scipy.io import loadmat, savemat\n\n# Parse input arguments\nparser = argparse.ArgumentParser(description = '')\nparser.add_argument('--pooling', type = str, default = 'lin', choices = ['max', 'ave', 'lin', 'exp', 'att'])\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--batch_size', type = int, default = 100)\nparser.add_argument('--ckpt_size', type = int, default = 500)\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = ['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 3e-4)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, default = 0.5)\nparser.add_argument('--random_seed', type = int, default = 15213)\nparser.add_argument('--ckpt', type = int)\nargs = parser.parse_args()\n\n# Locate model file and prepare directories for prediction and evaluation\nexpid = '%s-drop%.1f-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.pooling,\n    args.dropout,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/dcase', expid)\nMODEL_FILE = os.path.join(WORKSPACE, 'model', 'checkpoint%d.pt' % args.ckpt)\nPRED_PATH = os.path.join(WORKSPACE, 'pred')\nif not os.path.exists(PRED_PATH): os.makedirs(PRED_PATH)\nPRED_FILE = os.path.join(PRED_PATH, 'checkpoint%d.mat' % args.ckpt)\nEVAL_PATH = os.path.join(WORKSPACE, 'eval')\nif not os.path.exists(EVAL_PATH): os.makedirs(EVAL_PATH)\nEVAL_FILE = os.path.join(EVAL_PATH, 'checkpoint%d.txt' % args.ckpt)\nwith open(EVAL_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    print s\n    with open(EVAL_FILE, 'a') as f:\n        f.write(s + '\\n')\n\nif os.path.exists(PRED_FILE):\n    # Load saved predictions, no need to use 
GPU\n    data = loadmat(PRED_FILE)\n    thres = data['thres'].ravel()\n    test_y = data['test_y']\n    test_frame_y = data['test_frame_y']\n    test_outputs = []\n    test_outputs.append(data['test_global_prob'])\n    test_outputs.append(data['test_frame_prob'])\n    if args.pooling == 'att':\n        test_outputs.append(data['test_frame_att'])\nelse:\n    import torch\n    import torch.nn as nn\n    from torch.optim import *\n    from torch.optim.lr_scheduler import *\n    from torch.autograd import Variable\n    from Net import Net\n    from util_in import *\n\n    # Load model\n    model = Net(args).cuda()\n    model.load_state_dict(torch.load(MODEL_FILE)['model'])\n    model.eval()\n\n    # Load data\n    valid_x, valid_y, _ = bulk_load('DCASE_valid')\n    test_x, test_y, test_hashes = bulk_load('DCASE_test')\n    test_frame_y = load_dcase_test_frame_truth()\n\n    # Predict\n    valid_global_prob = model.predict(valid_x, verbose = False)\n    thres = optimize_micro_avg_f1(valid_global_prob, valid_y)\n    test_outputs = model.predict(test_x, verbose = True)\n\n    # Save predictions\n    data = {}\n    data['thres'] = thres\n    data['test_hashes'] = test_hashes\n    data['test_y'] = test_y\n    data['test_frame_y'] = test_frame_y\n    data['test_global_prob'] = test_outputs[0]\n    data['test_frame_prob'] = test_outputs[1]\n    if args.pooling == 'att':\n        data['test_frame_att'] = test_outputs[2]\n    savemat(PRED_FILE, data)\n\n# Evaluation\nwrite_log('           ||          ||            Task A (recording level)           ||                       Task B (1-second segment level)                       ')\nwrite_log('     CLASS ||    THRES ||   TP |   FN |   FP |  Prec. | Recall |     F1 ||   TP |   FN |   FP |  Prec. 
| Recall |     F1 |  Sub |  Del |  Ins |     ER ')\nFORMAT1 = ' Micro Avg ||          || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f | %#4d | %#4d | %#4d | %6.02f '\nFORMAT2 = ' %######9d || %8.0006f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f || %#4d | %#4d | %#4d | %6.02f | %6.02f | %6.02f |      |      |      |        '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT1)\nwrite_log(SEP)\n\n# test_y and test_frame_y are inconsistent in some places\n# so when you evaluate Task A, use a \"fake_test_frame_y\" derived from test_y\nfake_test_frame_y = numpy.tile(numpy.expand_dims(test_y, 1), (1, 100, 1))\n\n# Micro-average performance across all classes\nres_taskA = dcase_sed_eval(test_outputs, args.pooling, thres, fake_test_frame_y, 100, verbose = True)\nres_taskB = dcase_sed_eval(test_outputs, args.pooling, thres, test_frame_y, 10, verbose = True)\nwrite_log(FORMAT1 % (res_taskA.TP, res_taskA.FN, res_taskA.FP, res_taskA.precision, res_taskA.recall, res_taskA.F1,\n                     res_taskB.TP, res_taskB.FN, res_taskB.FP, res_taskB.precision, res_taskB.recall, res_taskB.F1,\n                     res_taskB.sub, res_taskB.dele, res_taskB.ins, res_taskB.ER))\nwrite_log(SEP)\n\n# Class-wise performance\nN_CLASSES = test_outputs[0].shape[-1]\nfor i in range(N_CLASSES):\n    outputs = [x[..., i:i+1] for x in test_outputs]\n    res_taskA = dcase_sed_eval(outputs, args.pooling, thres[i], fake_test_frame_y[..., i:i+1], 100, verbose = True)\n    res_taskB = dcase_sed_eval(outputs, args.pooling, thres[i], test_frame_y[..., i:i+1], 10, verbose = True)\n    write_log(FORMAT2 % (i, thres[i],\n                         res_taskA.TP, res_taskA.FN, res_taskA.FP, res_taskA.precision, res_taskA.recall, res_taskA.F1,\n                         res_taskB.TP, res_taskB.FN, res_taskB.FP, res_taskB.precision, res_taskB.recall, res_taskB.F1))\n"
  },
  {
    "path": "code/dcase/train.py",
    "content": "import sys, os, os.path, time\nimport argparse\nimport numpy\nimport torch\nimport torch.nn as nn\nfrom torch.optim import *\nfrom torch.optim.lr_scheduler import *\nfrom torch.autograd import Variable\nfrom Net import Net\nfrom util_in import *\nfrom util_out import *\nfrom util_f1 import *\n\ntorch.backends.cudnn.benchmark = True\n\n# Parse input arguments\nparser = argparse.ArgumentParser(description = '')\nparser.add_argument('--pooling', type = str, default = 'lin', choices = ['max', 'ave', 'lin', 'exp', 'att'])\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--batch_size', type = int, default = 100)\nparser.add_argument('--ckpt_size', type = int, default = 500)\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = ['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 3e-4)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, default = 0.5)\nparser.add_argument('--max_ckpt', type = int, default = 50)\nparser.add_argument('--random_seed', type = int, default = 15213)\nargs = parser.parse_args()\n\nnumpy.random.seed(args.random_seed)\n\n# Prepare log file and model directory\nexpid = '%s-drop%.1f-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.pooling,\n    args.dropout,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/dcase', expid)\nMODEL_PATH = os.path.join(WORKSPACE, 'model')\nif not os.path.exists(MODEL_PATH): os.makedirs(MODEL_PATH)\nLOG_FILE = os.path.join(WORKSPACE, 'train.log')\nwith open(LOG_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    timestamp = time.strftime('%Y-%m-%d %H:%M:%S')\n    msg = '[' + timestamp + '] ' + s\n    print msg\n    with open(LOG_FILE, 'a') as f:\n        f.write(msg + '\\n')\n\n# Load data\nwrite_log('Loading data 
...')\nvalid_x, valid_y, _ = bulk_load('DCASE_valid')\ntest_x, test_y, _ = bulk_load('DCASE_test')\ntest_frame_y = load_dcase_test_frame_truth()\n\n# Build model\nwrite_log('Building model ...')\nmodel = Net(args).cuda()\nif args.optimizer == 'sgd':\n    optimizer = SGD(model.parameters(), lr = args.init_lr, momentum = 0.9, nesterov = True)\nelif args.optimizer == 'adam':\n    optimizer = Adam(model.parameters(), lr = args.init_lr)\nif args.lr_factor < 1.0:\n    scheduler = ReduceLROnPlateau(optimizer, mode = 'min', factor = args.lr_factor, patience = args.lr_patience)\ncriterion = nn.BCELoss()\ndef bce_loss(input, target):\n    return -numpy.log(numpy.where(target, input, 1 - input)).sum() / input.size\n\n# Train model\nwrite_log('Training model ...')\nwrite_log('                                       || D_VAL ||              DCASE_TEST               ')\nwrite_log(' CKPT |    LR    |  Tr.LOSS | Val.LOSS || Gl.F1 || Gl.F1 | Fr.ER | Fr.F1 | 1s.ER | 1s.F1 ')\nFORMAT  = ' %#4d | %8.0003g | %8.0006f | %8.0006f || %5.3f || %5.3f | %5.3f | %5.3f | %5.3f | %5.3f '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT)\nwrite_log(SEP)\n\ngen_train = batch_generator(args.batch_size, args.random_seed)\nfor ckpt in range(1, args.max_ckpt + 1):\n    model.train()\n    train_loss = 0\n    for i in range(args.ckpt_size):\n        x, y = next(gen_train)\n        optimizer.zero_grad()\n        global_prob = model(x)[0]\n        global_prob.clamp_(min = 1e-7, max = 1 - 1e-7)\n        loss = criterion(global_prob, y)\n        train_loss += loss.data[0]\n        loss.backward()\n        optimizer.step()\n        sys.stderr.write('Checkpoint %d, Batch %d / %d, avg train loss = %f\\r' % (ckpt, i + 1, args.ckpt_size, train_loss / (i + 1)))\n    train_loss /= args.ckpt_size\n\n    # Compute validation loss, validation F1 and test F1\n    model.eval()\n    valid_global_prob = model.predict(valid_x, verbose = False)\n    valid_loss = bce_loss(valid_global_prob, valid_y)\n    thres = 
optimize_micro_avg_f1(valid_global_prob, valid_y)\n    valid_global_f1 = f1(valid_global_prob >= thres, valid_y)\n    test_outputs = model.predict(test_x, verbose = True)\n    test_global_f1 = f1(test_outputs[0] >= thres, test_y)\n    test_frame_er, test_frame_f1 = dcase_sed_eval(test_outputs, args.pooling, thres, test_frame_y, 1)   # every frame is a segment\n    test_1s_er, test_1s_f1 = dcase_sed_eval(test_outputs, args.pooling, thres, test_frame_y, 10)        # every 10 frames form a segment\n\n    # Write log\n    write_log(FORMAT % (\n        ckpt, optimizer.param_groups[0]['lr'], train_loss, valid_loss,\n        valid_global_f1, test_global_f1, test_frame_er, test_frame_f1, test_1s_er, test_1s_f1\n    ))\n\n    # Abort if training has gone mad\n    if numpy.isnan(train_loss) or numpy.isinf(train_loss):\n        write_log('Aborted.')\n        break\n\n    # Save model. Too bad I can't save the scheduler\n    MODEL_FILE = os.path.join(MODEL_PATH, 'checkpoint%d.pt' % ckpt)\n    state = {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}\n    torch.save(state, MODEL_FILE)\n\n    # Update learning rate\n    if args.lr_factor < 1.0:\n        scheduler.step(valid_loss)\n\nwrite_log('DONE!')\n"
  },
  {
    "path": "code/dcase/util_f1.py",
    "content": "import numpy\n\n# Compute F1 given predictions and truth\ndef f1(pred, truth):\n    return 2.0 * (pred & truth).sum() / (pred.sum() + truth.sum())\n\n# Given scores and truth for a single class (as 1-D numpy arrays), find optimal threshold and corresponding F1\n# Statistics of other classes may be given to optimize micro-average F1\ndef optimize_f1(scores, truth, extraNcorr = 0, extraNtrue = 0, extraNpred = 0):\n    # Start with predicting everything as negative\n    best_thres = numpy.inf\n    best_f1 = 0.0\n    num = extraNcorr                                # number of correctly predicted instances\n    den = extraNtrue + extraNpred + truth.sum()     # number of predicted instances + true instances\n    instances = [(-numpy.inf, False)] + sorted(zip(scores, truth))\n    # Lower the threshold gradually\n    for i in range(len(instances) - 1, 0, -1):\n        if instances[i][1]: num += 1\n        den += 1\n        if instances[i][0] > instances[i-1][0]:     # Can put threshold here\n            f1 = 2.0 * num / den\n            if f1 > best_f1:\n                best_thres = (instances[i][0] + instances[i-1][0]) / 2\n                best_f1 = f1\n    return best_thres, best_f1\n\n# Given scores and truth for many classes (as 2-D numpy arrays),\n# find the optimal class-specific thresholds (as a 1-D numpy array) that maximizes the micro-average F1\n# The algorithm is stochastic, but I have always observed deterministic results\ndef optimize_micro_avg_f1(scores, truth):\n    # First optimize each class individually\n    nClasses = truth.shape[1]\n    thres = numpy.zeros(nClasses, dtype = 'float64')\n    for i in range(nClasses):\n        thres[i], _ = optimize_f1(scores[:,i], truth[:,i])\n    Ntrue = truth.sum(axis = 0)\n    Npred = (scores >= thres).sum(axis = 0)\n    Ncorr = ((scores >= thres) & truth).sum(axis = 0)\n\n    # Repeatly re-tune the threshold for each class until convergence\n    candidates = range(nClasses)\n    while len(candidates) > 
0:\n        i = numpy.random.choice(candidates)\n        candidates.remove(i)\n        old_thres = thres[i]\n        thres[i], _ = optimize_f1(\n            scores[:,i],\n            truth[:,i],\n            extraNcorr = Ncorr.sum() - Ncorr[i],\n            extraNtrue = Ntrue.sum() - Ntrue[i],\n            extraNpred = Npred.sum() - Npred[i],\n        )\n        if thres[i] != old_thres:\n            Npred[i] = (scores[:,i] >= thres[i]).sum(axis = 0)\n            Ncorr[i] = ((scores[:,i] >= thres[i]) & truth[:,i]).sum(axis = 0)\n            candidates = range(nClasses)\n            candidates.remove(i)\n\n    return thres\n"
  },
  {
    "path": "code/dcase/util_in.py",
    "content": "import sys, os, os.path, glob\nimport cPickle\nfrom scipy.io import loadmat\nimport numpy\nfrom multiprocessing import Process, Queue\nimport torch\nfrom torch.autograd import Variable\n\nN_CLASSES = 17\nN_WORKERS = 6\n\nFEATURE_DIR = '../../data/dcase'\nwith open(os.path.join(FEATURE_DIR, 'normalizer.pkl'), 'rb') as f:\n    mu, sigma = cPickle.load(f)\n\ndef sample_generator(file_list, random_seed = 15213):\n    rng = numpy.random.RandomState(random_seed)\n    while True:\n        rng.shuffle(file_list)\n        for filename in file_list:\n            data = loadmat(filename)\n            feat = ((data['feat'] - mu) / sigma).astype('float32')\n            labels = data['labels'].astype('float32')\n            for i in range(len(data['feat'])):\n                yield feat[i], labels[i]\n\ndef worker(queues, file_lists, random_seed):\n    generators = [sample_generator(file_lists[i], random_seed + i) for i in range(len(file_lists))]\n    while True:\n        for gen, q in zip(generators, queues):\n            q.put(next(gen))\n\ndef batch_generator(batch_size, random_seed = 15213):\n    queues = [Queue(5) for class_id in range(N_CLASSES)]\n    file_lists = [sorted(glob.glob(os.path.join(FEATURE_DIR, 'DCASE_train_class%02d_part*.mat' % class_id))) for class_id in range(N_CLASSES)]\n\n    for worker_id in range(N_WORKERS):\n        p = Process(target = worker, args = (queues[worker_id::N_WORKERS], file_lists[worker_id::N_WORKERS], random_seed))\n        p.daemon = True\n        p.start()\n\n    rng = numpy.random.RandomState(random_seed)\n    batch = []\n    while True:\n        rng.shuffle(queues)\n        for q in queues:\n            batch.append(q.get())\n            if len(batch) == batch_size:\n                yield tuple(Variable(torch.from_numpy(numpy.stack(x))).cuda() for x in zip(*batch))\n                batch = []\n\ndef bulk_load(prefix):\n    feat = []; labels = []; hashes = []\n    for filename in 
sorted(glob.glob(os.path.join(FEATURE_DIR, '%s_*.mat' % prefix))):\n        data = loadmat(filename)\n        feat.append(((data['feat'] - mu) / sigma).astype('float32'))\n        labels.append(data['labels'].astype('bool'))\n        hashes.append(data['hashes'])\n    return numpy.concatenate(feat), numpy.concatenate(labels), numpy.concatenate(hashes)\n\ndef load_dcase_test_frame_truth():\n    return cPickle.load(open(os.path.join(FEATURE_DIR, 'DCASE_test_frame_label.pkl'), 'rb'))\n"
  },
  {
    "path": "code/dcase/util_out.py",
    "content": "import numpy\n\ndef dcase_sed_eval(outputs, pooling, thres, truth, seg_len, verbose = False):\n    pred = outputs[1].reshape((-1, seg_len, outputs[1].shape[-1]))\n    if pooling == 'max':\n        seg_prob = pred.max(axis = 1)\n    elif pooling == 'ave':\n        seg_prob = pred.mean(axis = 1)\n    elif pooling == 'lin':\n        seg_prob = (pred * pred).sum(axis = 1) / pred.sum(axis = 1)\n    elif pooling == 'exp':\n        seg_prob = (pred * numpy.exp(pred)).sum(axis = 1) / numpy.exp(pred).sum(axis = 1)\n    elif pooling == 'att':\n        att = outputs[2].reshape((-1, seg_len, outputs[2].shape[-1]))\n        seg_prob = (pred * att).sum(axis = 1) / att.sum(axis = 1)\n\n    pred = seg_prob >= thres\n    truth = truth.reshape((-1, seg_len, truth.shape[-1])).max(axis = 1)\n\n    if not verbose:\n        Ntrue = truth.sum(axis = 1)\n        Npred = pred.sum(axis = 1)\n        Ncorr = (truth & pred).sum(axis = 1)\n        Nmiss = Ntrue - Ncorr\n        Nfa = Npred - Ncorr\n\n        error_rate = 1.0 * numpy.maximum(Nmiss, Nfa).sum() / Ntrue.sum()\n        f1 = 2.0 * Ncorr.sum() / (Ntrue + Npred).sum()\n        return error_rate, f1\n    else:\n        class Object(object):\n            pass\n        res = Object()\n        res.TP = (truth & pred).sum()\n        res.FN = (truth & ~pred).sum()\n        res.FP = (~truth & pred).sum()\n        res.precision = 100.0 * res.TP / (res.TP + res.FP)\n        res.recall = 100.0 * res.TP / (res.TP + res.FN)\n        res.F1 = 200.0 * res.TP / (2 * res.TP + res.FP + res.FN)\n        res.sub = numpy.minimum((truth & ~pred).sum(axis = 1), (~truth & pred).sum(axis = 1)).sum()\n        res.dele = res.FN - res.sub\n        res.ins = res.FP - res.sub\n        res.ER = 100.0 * (res.sub + res.dele + res.ins) / (res.TP + res.FN)\n        return res\n"
  },
  {
    "path": "code/sequential/Net.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy\n\nclass ConvBlock(nn.Module):\n    def __init__(self, n_input_feature_maps, n_output_feature_maps, kernel_size_2d, batch_norm = False, pool_stride = None):\n        super(ConvBlock, self).__init__()\n        assert all(x % 2 == 1 for x in kernel_size_2d)\n        self.n_input = n_input_feature_maps\n        self.n_output = n_output_feature_maps\n        self.kernel_size = kernel_size_2d\n        self.batch_norm = batch_norm\n        self.pool_stride = pool_stride\n        # \"~batch_norm\" should be written as \"not batch_norm\"; otherwise ~True will evaluate to -2 and be treated as True.\n        # But I'll keep this error to avoid breaking existing models.\n        self.conv = nn.Conv2d(self.n_input, self.n_output, self.kernel_size, padding = tuple(x/2 for x in self.kernel_size), bias = ~batch_norm)\n        if batch_norm: self.bn = nn.BatchNorm2d(self.n_output)\n        nn.init.xavier_uniform(self.conv.weight)\n\n    def forward(self, x):\n        x = self.conv(x)\n        if self.batch_norm: x = self.bn(x)\n        x = F.relu(x)\n        if self.pool_stride is not None: x = F.max_pool2d(x, self.pool_stride)\n        return x\n\nclass Net(nn.Module):\n    def __init__(self, args):\n        super(Net, self).__init__()\n        self.__dict__.update(args.__dict__)     # Instill all args into self\n        assert self.n_conv_layers % self.n_pool_layers == 0\n        self.input_n_freq_bins = n_freq_bins = 64\n        self.output_size = 71 if self.mode == 'ctc' else 35\n        self.conv = []\n        pool_interval = self.n_conv_layers / self.n_pool_layers\n        n_input = 1\n        for i in range(self.n_conv_layers):\n            if (i + 1) % pool_interval == 0:        # this layer has pooling\n                n_freq_bins /= 2\n                n_output = self.embedding_size / n_freq_bins\n                pool_stride = (2, 2) if i 
< pool_interval * 2 else (1, 2)\n            else:\n                n_output = self.embedding_size * 2 / n_freq_bins\n                pool_stride = None\n            layer = ConvBlock(n_input, n_output, self.kernel_size, batch_norm = self.batch_norm, pool_stride = pool_stride)\n            self.conv.append(layer)\n            self.__setattr__('conv' + str(i + 1), layer)\n            n_input = n_output\n        self.gru = nn.GRU(self.embedding_size, self.embedding_size / 2, 1, batch_first = True, bidirectional = True)\n        self.fc = nn.Linear(self.embedding_size, self.output_size)\n        # Better initialization\n        nn.init.orthogonal(self.gru.weight_ih_l0); nn.init.constant(self.gru.bias_ih_l0, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0); nn.init.constant(self.gru.bias_hh_l0, 0)\n        nn.init.orthogonal(self.gru.weight_ih_l0_reverse); nn.init.constant(self.gru.bias_ih_l0_reverse, 0)\n        nn.init.orthogonal(self.gru.weight_hh_l0_reverse); nn.init.constant(self.gru.bias_hh_l0_reverse, 0)\n        nn.init.xavier_uniform(self.fc.weight); nn.init.constant(self.fc.bias, 0)\n\n    def forward(self, x):\n        x = x.view((-1, 1, x.size(1), x.size(2)))                                               # x becomes (batch, channel, time, freq)\n        for i in range(len(self.conv)):\n            if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n            x = self.conv[i](x)                                                                 # x becomes (batch, channel, time, freq)\n        x = x.permute(0, 2, 1, 3).contiguous()                                                  # x becomes (batch, time, channel, freq)\n        x = x.view((-1, x.size(1), x.size(2) * x.size(3)))                                      # x becomes (batch, time, embedding_size)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        x, _ = self.gru(x)                                                     
                 # x becomes (batch, time, embedding_size)\n        if self.dropout > 0: x = F.dropout(x, p = self.dropout, training = self.training)\n        if self.mode == 'ctc':\n            log_prob = F.log_softmax(self.fc(x), dim = -1)                                      # shape of log_prob: (batch, time, output_size)\n            return log_prob                                                                     # returns the log probability\n        else:\n            frame_prob = F.sigmoid(self.fc(x))                                                  # shape of frame_prob: (batch, time, output_size)\n            frame_prob = torch.clamp(frame_prob, 1e-7, 1 - 1e-7)\n            return frame_prob\n\n    def predict(self, x, batch_size = 300):\n        # Predict in batches. Both input and output are numpy arrays.\n        result = []\n        for i in range(0, len(x), batch_size):\n            with torch.no_grad():\n                input = Variable(torch.from_numpy(x[i : i + batch_size])).cuda()\n                output = self.forward(input)\n                result.append(output.data.cpu().numpy())\n        return numpy.concatenate(result)\n"
  },
  {
    "path": "code/sequential/ctc.py",
    "content": "import numpy\nnumpy.seterr(divide = 'ignore')\nimport torch\nfrom torch.autograd import Variable\n\ndef logsumexp(*args):\n    M = reduce(torch.max, args)\n    mask = M != -numpy.inf\n    M[mask] += torch.log(sum(torch.exp(x[mask] - M[mask]) for x in args))\n        # Must pick the valid part out, otherwise the gradient will contain NaNs\n    return M\n\n# Input arguments:\n#   logProb: a 3-D Variable of size N_SEQS * N_FRAMES * N_LABELS containing LOG probabilities.\n#   seqLen: a list or numpy array indicating the number of valid frames in each sequence.\n#   label: a list of label sequences.\n# Note on implementation:\n#   Anything that will be backpropped must be a Variable;\n#   Anything used as an index must be a torch.cuda.LongTensor.\ndef ctc_loss(logProb, seqLen, label, debug = False):\n    seqLen = numpy.array(seqLen)\n    nSeqs, nFrames = logProb.size(0), logProb.size(1)\n\n    # Find out the lengths of the label sequences\n    labelLen = torch.from_numpy(numpy.array([len(x) for x in label])).cuda()\n\n    # Insert blank symbol at the beginning, at the end, and between all symbols of the label sequences\n    nStates = max(len(x) for x in label) * 2 + 1\n    extendedLabel = numpy.zeros((nSeqs, nStates), dtype = 'int64')\n    for i in range(nSeqs):\n        extendedLabel[i, 1 : (len(label[i]) * 2) : 2] = label[i]\n    label = torch.from_numpy(extendedLabel).cuda()\n\n    # Compute alpha trellis\n    dummyColumn = Variable(-numpy.inf * torch.ones((nSeqs, 1)).cuda())\n    allSeqIndex = torch.from_numpy(numpy.tile(numpy.arange(nSeqs), (nStates, 1)).T).cuda()\n    uttLogProb = Variable(torch.zeros(nSeqs).cuda())\n    for frame in range(nFrames):\n        if frame == 0:\n            # Initialize the log probability first two states to log(1), and other states to log(0)\n            alpha = Variable(-numpy.inf * torch.ones((nSeqs, nStates)).cuda())\n            alpha[:, :2] = 0\n        else:\n            # Receive probability from previous 
frame\n            p2 = alpha[:, :-2].clone()\n            p2[label[:, 2:] == label[:, :-2]] = -numpy.inf\n                # Probability can pass across labels two steps apart if they are different\n            alpha = logsumexp(alpha,\n                              torch.cat([dummyColumn, alpha[:, :-1]], 1),\n                              torch.cat([dummyColumn, dummyColumn, p2], 1))\n        # Multiply with the probability of current frame\n        alpha += logProb[allSeqIndex, frame, label]\n        # Collect probability for ends of utterances\n        seqIndex = (seqLen == frame + 1).nonzero()[0]\n        if len(seqIndex) > 0:\n            seqIndex = torch.from_numpy(seqIndex).cuda()\n            ll = labelLen[seqIndex]\n            p = alpha[seqIndex, ll * 2].clone()\n            if (ll > 0).any():\n                p[ll > 0] = logsumexp(p[ll > 0], alpha[seqIndex[ll > 0], ll[ll > 0] * 2 - 1])\n            uttLogProb[seqIndex] = p\n\n    # Return the per-frame negative log probability of all utterances (and per-utterance log probs if debug == True)\n    loss = -uttLogProb.sum() / seqLen.sum()\n    if debug:\n        return loss, uttLogProb\n    else:\n        return loss\n\nif __name__ == '__main__':\n    torch.set_printoptions(precision = 5)\n\n    label = numpy.array([[2, 1, 1, 3],   # BAAC\n                         [0, 0, 0, 0],   # null\n                         [1, 0, 0, 0],   # A\n                         [3, 2, 0, 0],   # CB\n                         [0, 0, 0, 0],   # null\n                         [1, 0, 0, 0],   # A\n                         [3, 2, 0, 0]])  # CB\n    seqLen = numpy.array([5, 3, 3, 3, 1, 1, 1])\n    logProb = numpy.log(numpy.tile(numpy.array([[[0.1, 0.2, 0.3, 0.4]]], dtype = 'float32'), (len(seqLen), max(seqLen), 1)))\n    logProb = Variable(torch.from_numpy(logProb).cuda(), requires_grad = True)\n    loss, uttLogProb = ctc_loss(logProb, seqLen, label, debug = True)\n    print loss, torch.exp(uttLogProb)\n    # Expected output of 
torch.exp(uttLogProb): [0.00048, 0.001, 0.022, 0.12, 0.1, 0.2, 0]\n    loss.backward()\n#    print logProb.grad\n"
  },
  {
    "path": "code/sequential/ctl.py",
    "content": "import numpy\nnumpy.seterr(divide = 'ignore')\nimport torch\nfrom torch.autograd import Variable\n\ndef cuda(x):\n    return x.cuda() if torch.cuda.is_available() else x\n\ndef tensor(array):\n    if array.dtype == 'bool':\n        array = array.astype('uint8')\n    return cuda(torch.from_numpy(array))\n\ndef variable(array):\n    if isinstance(array, numpy.ndarray):\n        array = tensor(array)\n    return cuda(Variable(array))\n\ndef logsumexp(*args):\n    M = reduce(torch.max, args)\n    mask = M != -numpy.inf\n    M[mask] += torch.log(sum(torch.exp(x[mask] - M[mask]) for x in args))\n        # Must pick the valid part out, otherwise the gradient will contain NaNs\n    return M\n\n# Input arguments:\n#   frameProb: a 3-D Variable of size N_SEQS * N_FRAMES * N_CLASSES containing the probability of each event at each frame.\n#   seqLen: a list or numpy array indicating the number of valid frames in each sequence.\n#   label: a list of label sequences.\n# Note on implementation:\n#   Anything that will be backpropped must be a Variable;\n#   Anything used as an index must be a torch.cuda.LongTensor.\ndef ctl_loss(frameProb, seqLen, label, maxConcur = 1, debug = False):\n    seqLen = numpy.array(seqLen)\n    nSeqs, nFrames, nClasses = frameProb.size()\n\n    # Clear the content in the frames of frameProb beyond seqLen\n    frameIndex = numpy.tile(numpy.arange(nFrames), (nSeqs, 1))\n    mask = variable(numpy.expand_dims(frameIndex < seqLen.reshape((nSeqs, 1)), 2))\n    z = variable(torch.zeros(frameProb.size()))\n    frameProb = torch.where(mask, frameProb, z)\n\n    # Convert frameProb (probabilities of events) into probabilities of event boundaries\n    z = variable(1e-7 * torch.ones((nSeqs, 1, nClasses)))       # Real zeros would cause NaNs in the gradient\n    frameProb = torch.cat([z, frameProb, z], dim = 1)\n    startProb = torch.clamp(frameProb[:, 1:] - frameProb[:, :-1], min = 1e-7)\n    endProb = torch.clamp(frameProb[:, :-1] - frameProb[:, 
1:], min = 1e-7)\n    boundaryProb = torch.stack([startProb, endProb], dim = 3).view((nSeqs, nFrames + 1, nClasses * 2))\n\n    blankLogProb = torch.log(1 - boundaryProb).sum(dim = 2)\n        # blankLogProb[seq, frame] = log probability of emitting nothing at this frame\n    deltaLogProb = torch.log(boundaryProb) - torch.log(1 - boundaryProb)\n        # deltaLogProb[seq, frame, token] = log prob of emitting token minus log prob of not emitting token\n\n    # Find out the lengths of the label sequences\n    labelLen = tensor(numpy.array([len(x) for x in label]))\n\n    # Put the label sequences into a Variable\n    maxLabelLen = max(len(x) for x in label)\n    L = numpy.zeros((nSeqs, maxLabelLen), dtype = 'int64')\n    for i in range(nSeqs):\n        L[i, :len(label[i])] = numpy.array(label[i]) - 1        # minus one because we no longer have a dedicated blank token\n    label = tensor(L)\n\n    if maxConcur > maxLabelLen:\n        maxConcur = maxLabelLen\n\n    # Compute alpha trellis\n    # alpha[m, n] = log probability of having emitted n tokens in the m-th sequence at the current frame\n    nStates = maxLabelLen + 1\n    alpha = variable(-numpy.inf * torch.ones((nSeqs, nStates)))\n    alpha[:, 0] = 0\n    seqIndex = tensor(numpy.tile(numpy.arange(nSeqs), (nStates, 1)).T)\n    dummyColumns = variable(-numpy.inf * torch.ones((nSeqs, maxConcur)))\n    uttLogProb = variable(torch.zeros(nSeqs))\n    for frame in range(nFrames + 1):        # +1 because we are considering boundaries\n        # Case 0: don't emit anything at current frame\n        p = alpha + blankLogProb[:, frame].view((-1, 1))\n        alpha = p\n        for i in range(1, maxConcur + 1):\n            # Case i: emit i tokens at current frame\n            p = p[:, :-1] + deltaLogProb[seqIndex[:, i:], frame, label[:, (i-1):]]\n            alpha = logsumexp(alpha, torch.cat([dummyColumns[:, :i], p], dim = 1))\n        # Collect probability for ends of utterances\n        finishedSeqs = (seqLen == 
frame).nonzero()[0]\n        if len(finishedSeqs) > 0:\n            finishedSeqs = tensor(finishedSeqs)\n            uttLogProb[finishedSeqs] = alpha[finishedSeqs, labelLen[finishedSeqs]].clone()\n\n    # Return the per-frame negative log probability of all utterances (and per-utterance log probs if debug == True)\n    loss = -uttLogProb.sum() / (seqLen + 1).sum()\n    if debug:\n        return loss, uttLogProb\n    else:\n        return loss\n\nif __name__ == '__main__':\n    def strip(variable):\n        return variable.data.cpu().numpy()\n    torch.set_printoptions(precision = 5)\n\n    frameProb = numpy.array([[[0.1, 0.9, 0.9], [0.1, 0.9, 0.9], [0.1, 0.9, 0.9], [0.1, 0.9, 0.1]]], dtype = 'float32')  # event B all the time; event C in the first three frames\n    frameProb = numpy.tile(frameProb, (4, 1, 1))\n    frameProb = Variable(tensor(frameProb), requires_grad = True)\n    label = [[3, 5, 6, 4], [3, 4], [5, 6], []]  # <B><C></C></B>; <B></B>; <C></C>; empty\n    seqLen = numpy.array([4, 4, 4, 4])\n\n    loss, uttLogProb = ctl_loss(frameProb, seqLen, label, maxConcur = 1, debug = True)\n    print strip(loss), strip(torch.exp(uttLogProb))\n    loss, uttLogProb = ctl_loss(frameProb, seqLen, label, maxConcur = 2, debug = True)\n    print strip(loss), strip(torch.exp(uttLogProb))\n    loss, uttLogProb = ctl_loss(frameProb, seqLen, label, maxConcur = 3, debug = True)\n    print strip(loss), strip(torch.exp(uttLogProb))\n    # Reference output:\n    # [ 1.45882034] [  2.10689101e-03   2.61903927e-03   1.27433671e-03   3.03234774e-05]       # Prob of first label sequence is small\n    # [ 1.26348567] [  1.04593262e-01   2.61992868e-03   1.27623521e-03   3.03234774e-05]       # Prob of first label sequence gets big, because <B><C> can be emitted at the same time\n    # [ 1.263484  ] [  1.04596682e-01   2.61992868e-03   1.27623521e-03   3.03234774e-05]       # Prob of first label sequence stays almost the same, because it doesn't need to emit three tokens at the same 
time\n    loss.backward()\n    print frameProb.grad\n"
  },
  {
    "path": "code/sequential/eval.py",
    "content": "import sys, os, os.path\nimport argparse\nimport numpy\nfrom util_out import *\nfrom util_f1 import *\nfrom scipy.io import loadmat, savemat\n\n# Parse input arguments\ndef mybool(s):\n    return s.lower() in ['t', 'true', 'y', 'yes', '1']\nparser = argparse.ArgumentParser()\nparser.add_argument('--mode', type = str, default = 'ctl', choices = ['strong', 'mil', 'ctc', 'ctl', 'combine'])\nparser.add_argument('--embedding_size', type = int, default = 512)\n    # This is the embedding size after a pooling layer or after the GRU layer\n    # After a non-pooling layer, the embeddings size will be twice this much\nparser.add_argument('--n_conv_layers', type = int, default = 6)\nparser.add_argument('--kernel_size', type = str, default = '3')     # 'n' or 'nxm'\nparser.add_argument('--n_pool_layers', type = int, default = 6)\n    # the pooling layers will be inserted uniformly into the conv layers\n    # the should be at least 2 and at most 6 pooling layers\n    # the first two pooling layers will have stride (2,2); later ones will have stride (1,2)\nparser.add_argument('--max_concur', type = int, default = 1)\nparser.add_argument('--mil_weight', type = float, default = 3.3)\nparser.add_argument('--ctl_weight', type = float, default = 1.0)\nparser.add_argument('--batch_norm', type = mybool, default = True)\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--batch_size', type = int, default = 500)\nparser.add_argument('--ckpt_size', type = int, default = 200)       # how many batches per checkpoint\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = ['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 1e-3)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, default = 1.0)\nparser.add_argument('--random_seed', type = int, default = 15213)\nparser.add_argument('--ckpt', type = int)\nargs = parser.parse_args()\nif 'x' 
not in args.kernel_size:\n    args.kernel_size = args.kernel_size + 'x' + args.kernel_size\n\n# Locate model file and prepare directories for prediction and evaluation\nexpid = '%s-embed%d-%dC%dP-kernel%s%s%s-%s-drop%.1f-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.mode,\n    args.embedding_size,\n    args.n_conv_layers,\n    args.n_pool_layers,\n    args.kernel_size,\n    '-concur%d' % args.max_concur if args.mode in ['ctl', 'combine'] else '',\n    '-weight%g:%g' % (args.mil_weight, args.ctl_weight) if args.mode == 'combine' else '',\n    'bn' if args.batch_norm else 'nobn',\n    args.dropout,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/sequential', expid)\nMODEL_FILE = os.path.join(WORKSPACE, 'model', 'checkpoint%d.pt' % args.ckpt)\nPRED_PATH = os.path.join(WORKSPACE, 'pred')\nif not os.path.exists(PRED_PATH): os.makedirs(PRED_PATH)\nPRED_FILE = os.path.join(PRED_PATH, 'checkpoint%d.mat' % args.ckpt)\nEVAL_PATH = os.path.join(WORKSPACE, 'eval')\nif not os.path.exists(EVAL_PATH): os.makedirs(EVAL_PATH)\nEVAL_FILE = os.path.join(EVAL_PATH, 'checkpoint%d.txt' % args.ckpt)\nwith open(EVAL_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    print s\n    with open(EVAL_FILE, 'a') as f:\n        f.write(s + '\\n')\n\nif os.path.exists(PRED_FILE):\n    # Load saved predictions, no need to use GPU\n    data = loadmat(PRED_FILE)\n    thres = data['thres'].ravel()\n    eval_frame_y = data['eval_frame_y']\n    eval_frame_prob = data['eval_frame_prob']\nelse:\n    import torch\n    import torch.nn as nn\n    from torch.optim import *\n    from torch.optim.lr_scheduler import *\n    from torch.autograd import Variable\n    from Net import Net\n    from util_in import *\n\n    # Load model\n    args.kernel_size = tuple(int(x) for x in args.kernel_size.split('x'))\n    model = Net(args).cuda()\n    
model.load_state_dict(torch.load(MODEL_FILE)['model'])\n    model.eval()\n\n    # Load data\n    valid_x, valid_frame_y, _, _ = bulk_load('GAS_valid')\n    eval_x, eval_frame_y, _, eval_hashes = bulk_load('GAS_eval')\n\n    # Predict\n    if args.mode == 'ctc':\n        thres = numpy.array([0.5] * eval_frame_y.shape[-1])\n        eval_log_prob = model.predict(eval_x)\n        eval_frame_prob = ctc_decode(eval_log_prob).astype('float32')\n    else:\n        valid_frame_prob = model.predict(valid_x)\n        thres, valid_f1 = optimize_gas_valid(valid_frame_prob, valid_frame_y)\n        eval_frame_prob = model.predict(eval_x)\n\n    # Save predictions\n    data = {}\n    data['thres'] = thres\n    data['eval_hashes'] = eval_hashes\n    data['eval_frame_y'] = eval_frame_y\n    data['eval_frame_prob'] = eval_frame_prob\n    if args.mode == 'ctc':\n        data['eval_log_prob'] = eval_log_prob\n    savemat(PRED_FILE, data)\n\n# Evaluation\nwrite_log('     CLASS ||    THRES ||    TP |    FN |    FP |  Prec. | Recall |     F1 ')\nFORMAT1 = ' Macro Avg ||          ||       |       |       |        |        | %6.02f '\nFORMAT2 = ' %######9d || %8.0006f || %##5d | %##5d | %##5d | %6.02f | %6.02f | %6.02f '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT1)\nwrite_log(SEP)\n\nTP, FN, FP, precision, recall, f1 = evaluate_gas_eval(eval_frame_prob, thres, eval_frame_y, verbose = True)\nwrite_log(FORMAT1 % f1.mean())\nwrite_log(SEP)\nN_CLASSES = len(f1)\nfor i in range(N_CLASSES):\n    write_log(FORMAT2 % (i, thres[i], TP[i], FN[i], FP[i], precision[i], recall[i], f1[i]))\n"
  },
  {
    "path": "code/sequential/train.py",
    "content": "import sys, os, os.path, time\nimport argparse\nimport numpy\nimport torch\nimport torch.nn as nn\nfrom torch.optim import *\nfrom torch.optim.lr_scheduler import *\nfrom torch.autograd import Variable\nfrom Net import Net\nfrom ctc import ctc_loss\nfrom ctl import ctl_loss\nfrom util_in import *\nfrom util_out import *\nfrom util_f1 import *\n\ntorch.backends.cudnn.benchmark = True\n\n# Parse input arguments\ndef mybool(s):\n    return s.lower() in ['t', 'true', 'y', 'yes', '1']\nparser = argparse.ArgumentParser()\nparser.add_argument('--mode', type = str, default = 'ctl', choices = ['strong', 'mil', 'ctc', 'ctl', 'combine'])\nparser.add_argument('--embedding_size', type = int, default = 512)\n    # This is the embedding size after a pooling layer or after the GRU layer\n    # After a non-pooling layer, the embeddings size will be twice this much\nparser.add_argument('--n_conv_layers', type = int, default = 6)\nparser.add_argument('--kernel_size', type = str, default = '3')     # 'n' or 'nxm'\nparser.add_argument('--n_pool_layers', type = int, default = 6)\n    # the pooling layers will be inserted uniformly into the conv layers\n    # the should be at least 2 and at most 6 pooling layers\n    # the first two pooling layers will have stride (2,2); later ones will have stride (1,2)\nparser.add_argument('--max_concur', type = int, default = 1)        # for mode == 'ctl' or 'combine' only\nparser.add_argument('--mil_weight', type = float, default = 3.3)    # for mode == 'combine' only\nparser.add_argument('--ctl_weight', type = float, default = 1.0)    # for mode == 'combine' only\nparser.add_argument('--batch_norm', type = mybool, default = True)\nparser.add_argument('--dropout', type = float, default = 0.0)\nparser.add_argument('--batch_size', type = int, default = 500)\nparser.add_argument('--ckpt_size', type = int, default = 200)       # how many batches per checkpoint\nparser.add_argument('--optimizer', type = str, default = 'adam', choices = 
['adam', 'sgd'])\nparser.add_argument('--init_lr', type = float, default = 1e-3)\nparser.add_argument('--lr_patience', type = int, default = 3)\nparser.add_argument('--lr_factor', type = float, default = 1.0)\nparser.add_argument('--max_ckpt', type = int, default = 100)\nparser.add_argument('--random_seed', type = int, default = 15213)\nargs = parser.parse_args()\nif 'x' not in args.kernel_size:\n    args.kernel_size = args.kernel_size + 'x' + args.kernel_size\n\nnumpy.random.seed(args.random_seed)\n\n# Prepare log file and model directory\nexpid = '%s-embed%d-%dC%dP-kernel%s%s%s-%s-drop%.1f-batch%d-ckpt%d-%s-lr%.0e-pat%d-fac%.1f-seed%d' % (\n    args.mode,\n    args.embedding_size,\n    args.n_conv_layers,\n    args.n_pool_layers,\n    args.kernel_size,\n    '-concur%d' % args.max_concur if args.mode in ['ctl', 'combine'] else '',\n    '-weight%g:%g' % (args.mil_weight, args.ctl_weight) if args.mode == 'combine' else '',\n    'bn' if args.batch_norm else 'nobn',\n    args.dropout,\n    args.batch_size,\n    args.ckpt_size,\n    args.optimizer,\n    args.init_lr,\n    args.lr_patience,\n    args.lr_factor,\n    args.random_seed\n)\nWORKSPACE = os.path.join('../../workspace/sequential', expid)\nMODEL_PATH = os.path.join(WORKSPACE, 'model')\nif not os.path.exists(MODEL_PATH): os.makedirs(MODEL_PATH)\nLOG_FILE = os.path.join(WORKSPACE, 'train.log')\nwith open(LOG_FILE, 'w'):\n    pass\n\ndef write_log(s):\n    timestamp = time.strftime('%m-%d %H:%M:%S')\n    msg = '[' + timestamp + '] ' + s\n    print msg\n    with open(LOG_FILE, 'a') as f:\n        f.write(msg + '\\n')\n\n# Load data\nwrite_log('Loading data ...')\ntrain_gen = batch_generator(batch_size = args.batch_size, random_seed = args.random_seed)\ngas_valid_x, gas_valid_y_frame, gas_valid_y_seq, _ = bulk_load('GAS_valid')\ngas_eval_x, gas_eval_y_frame, gas_eval_y_seq, _ = bulk_load('GAS_eval')\n\n# Build model\nargs.kernel_size = tuple(int(x) for x in args.kernel_size.split('x'))\nmodel = Net(args).cuda()\nif 
args.optimizer == 'sgd':\n    optimizer = SGD(model.parameters(), lr = args.init_lr, momentum = 0.9, nesterov = True)\nelif args.optimizer == 'adam':\n    optimizer = Adam(model.parameters(), lr = args.init_lr)\nscheduler = ReduceLROnPlateau(optimizer, mode = 'max', factor = args.lr_factor, patience = args.lr_patience) if args.lr_factor < 1.0 else None\n\n# Train model\nwrite_log('Training model ...')\nwrite_log(' CKPT |    LR    |  Tr.LOSS || G.Val.F1 |  G.Ev.F1 ')\nFORMAT  = ' %#4d | %8.0003g | %8.0006f || %8.0002f | %8.0002f '\nSEP     = ''.join('+' if c == '|' else '-' for c in FORMAT)\nwrite_log(SEP)\n\ncheckpoint = 0\nbest_gv_f1 = None\nbest_ge_f1 = None\n\nbce_loss = nn.BCELoss()\nfor checkpoint in range(1, args.max_ckpt + 1):\n    # Train for args.ckpt_size batches\n    model.train()\n    train_loss = 0\n    for batch in range(1, args.ckpt_size + 1):\n        x, y_global, y_seq, y_frame = next(train_gen)\n        optimizer.zero_grad()\n        if args.mode == 'strong':\n            frame_prob = model(x)\n            loss = bce_loss(frame_prob, y_frame)\n        elif args.mode == 'mil':\n            frame_prob = model(x)\n            global_prob = (frame_prob * frame_prob).sum(dim = 1) / frame_prob.sum(dim = 1)  # linear softmax pooling function\n            loss = bce_loss(global_prob, y_global)\n        elif args.mode == 'ctc':\n            log_prob = model(x)\n            seq_len = numpy.array([log_prob.shape[1]] * log_prob.shape[0])                  # actually all batches are the same size\n            loss = ctc_loss(log_prob, seq_len, y_seq)\n        elif args.mode == 'ctl':\n            frame_prob = model(x)\n            seq_len = numpy.array([frame_prob.shape[1]] * frame_prob.shape[0])              # actually all batches are the same size\n            loss = ctl_loss(frame_prob, seq_len, y_seq, args.max_concur)\n        elif args.mode == 'combine':\n            frame_prob = model(x)\n            global_prob = (frame_prob * frame_prob).sum(dim = 1) / 
frame_prob.sum(dim = 1)  # linear softmax pooling function\n            mil_loss = bce_loss(global_prob, y_global)\n            seq_len = numpy.array([frame_prob.shape[1]] * frame_prob.shape[0])              # actually all batches are the same size\n            ctl_loss_ = ctl_loss(frame_prob, seq_len, y_seq, args.max_concur)\n            loss = mil_loss * args.mil_weight + ctl_loss_ * args.ctl_weight\n        train_loss += loss.data[0]\n        if numpy.isnan(train_loss) or numpy.isinf(train_loss): break\n        loss.backward()\n        optimizer.step()\n        sys.stderr.write('Checkpoint %d, Batch %d / %d, avg train loss = %f\\r' % \\\n                         (checkpoint, batch, args.ckpt_size, train_loss / batch))\n    train_loss /= args.ckpt_size\n\n    # Evaluate model\n    model.eval()\n    def predict(x):\n        if args.mode != 'ctc':\n            return model.predict(x)\n        else:\n            log_prob = model.predict(x)\n            return ctc_decode(log_prob).astype('float32')\n    sys.stderr.write('Evaluating model on GAS_VALID ...\\r')\n    frame_prob = predict(gas_valid_x)\n    thres, gv_f1 = optimize_gas_valid(frame_prob, gas_valid_y_frame)\n    sys.stderr.write('Evaluating model on GAS_EVAL ...\\r')\n    frame_prob = predict(gas_eval_x)\n    ge_f1 = evaluate_gas_eval(frame_prob, thres, gas_eval_y_frame, verbose = False)\n\n    # Write log\n    write_log(FORMAT % (checkpoint, optimizer.param_groups[0]['lr'], train_loss, gv_f1, ge_f1))\n\n    # Abort if training has gone mad\n    if numpy.isnan(train_loss) or numpy.isinf(train_loss):\n        write_log('Aborted.')\n        break\n\n    # Save model regularly. 
Too bad I can't save the scheduler\n    MODEL_FILE = os.path.join(MODEL_PATH, 'checkpoint%d.pt' % checkpoint)\n    state = {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}\n    sys.stderr.write('Saving model to %s ...\\r' % MODEL_FILE)\n    torch.save(state, MODEL_FILE)\n\n    # Update learning rate\n    if scheduler is not None:\n        scheduler.step(gv_f1)\n\n    # Update best results\n    if best_gv_f1 is None or gv_f1 > best_gv_f1:\n        best_gv_f1 = gv_f1\n        best_gv_ckpt = checkpoint\n    if best_ge_f1 is None or ge_f1 > best_ge_f1:\n        best_ge_f1 = ge_f1\n        best_ge_ckpt = checkpoint\n\nwrite_log('DONE!')\n"
  },
  {
    "path": "code/sequential/util_f1.py",
    "content": "import numpy\n\n# Compute F1 given predictions and truth\ndef f1(pred, truth):\n    return 2.0 * (pred & truth).sum() / (pred.sum() + truth.sum())\n\n# Given scores and truth for a single class (as 1-D numpy arrays), find optimal threshold and corresponding F1\n# Statistics of other classes may be given to optimize micro-average F1\ndef optimize_f1(scores, truth, extraNcorr = 0, extraNtrue = 0, extraNpred = 0):\n    # Start with predicting everything as negative\n    best_thres = numpy.inf\n    best_f1 = 0.0\n    num = extraNcorr                                # number of correctly predicted instances\n    den = extraNtrue + extraNpred + truth.sum()     # number of predicted instances + true instances\n    instances = [(-numpy.inf, False)] + sorted(zip(scores, truth))\n    # Lower the threshold gradually\n    for i in range(len(instances) - 1, 0, -1):\n        if instances[i][1]: num += 1\n        den += 1\n        if instances[i][0] > instances[i-1][0]:     # Can put threshold here\n            f1 = 2.0 * num / den\n            if f1 > best_f1:\n                best_thres = (instances[i][0] + instances[i-1][0]) / 2\n                best_f1 = f1\n    return best_thres, best_f1\n\n# Given scores and truth for many classes (as 2-D numpy arrays),\n# find the optimal class-specific thresholds (as a 1-D numpy array) that maximizes the micro-average F1\n# The algorithm is stochastic, but I have always observed deterministic results\ndef optimize_micro_avg_f1(scores, truth):\n    # First optimize each class individually\n    nClasses = truth.shape[1]\n    thres = numpy.zeros(nClasses, dtype = 'float64')\n    for i in range(nClasses):\n        thres[i], _ = optimize_f1(scores[:,i], truth[:,i])\n    Ntrue = truth.sum(axis = 0)\n    Npred = (scores >= thres).sum(axis = 0)\n    Ncorr = ((scores >= thres) & truth).sum(axis = 0)\n\n    # Repeatly re-tune the threshold for each class until convergence\n    candidates = range(nClasses)\n    while len(candidates) > 
0:\n        i = numpy.random.choice(candidates)\n        candidates.remove(i)\n        old_thres = thres[i]\n        thres[i], _ = optimize_f1(\n            scores[:,i],\n            truth[:,i],\n            extraNcorr = Ncorr.sum() - Ncorr[i],\n            extraNtrue = Ntrue.sum() - Ntrue[i],\n            extraNpred = Npred.sum() - Npred[i],\n        )\n        if thres[i] != old_thres:\n            Npred[i] = (scores[:,i] >= thres[i]).sum(axis = 0)\n            Ncorr[i] = ((scores[:,i] >= thres[i]) & truth[:,i]).sum(axis = 0)\n            candidates = range(nClasses)\n            candidates.remove(i)\n\n    return thres\n"
  },
  {
    "path": "code/sequential/util_in.py",
    "content": "import sys, os, os.path, glob\nimport cPickle\nfrom scipy.io import loadmat\nimport numpy\nfrom multiprocessing import Process, Queue\nimport torch\nfrom torch.autograd import Variable\n\nN_CLASSES = 35\nN_WORKERS = 6\n\nFEATURE_DIR = '../../data/sequential'\nwith open(os.path.join(FEATURE_DIR, 'normalizer.pkl'), 'rb') as f:\n    mu, sigma = cPickle.load(f)\n\ndef sample_generator(file_list, random_seed = 15213):\n    rng = numpy.random.RandomState(random_seed)\n    while True:\n        rng.shuffle(file_list)\n        for filename in file_list:\n            data = loadmat(filename)\n            feat = ((data['feat'] - mu) / sigma).astype('float32')\n            labels = data['labels'].astype('bool')\n            for i in range(len(data['feat'])):\n                yield feat[i], labels[i]\n\ndef worker(queues, file_lists, random_seed):\n    generators = [sample_generator(file_lists[i], random_seed + i) for i in range(len(file_lists))]\n    while True:\n        for gen, q in zip(generators, queues):\n            q.put(next(gen))\n\ndef batch_generator(batch_size, random_seed = 15213):\n    queues = [Queue(5) for class_id in range(N_CLASSES)]\n    file_lists = [sorted(glob.glob(os.path.join(FEATURE_DIR, 'GAS_train_unbalanced_class%02d_part*.mat' % class_id))) for class_id in range(N_CLASSES)]\n\n    for worker_id in range(N_WORKERS):\n        p = Process(target = worker, args = (queues[worker_id::N_WORKERS], file_lists[worker_id::N_WORKERS], random_seed))\n        p.daemon = True\n        p.start()\n\n    rng = numpy.random.RandomState(random_seed)\n    batch_x = []; batch_y_global = []; batch_y_seq = []; batch_y_frame = []\n    while True:\n        rng.shuffle(queues)\n        for q in queues:\n            x, y_frame = q.get()\n            batch_x.append(x)\n            batch_y_global.append(y_frame.max(axis = -2))\n            batch_y_seq.append(mask2ctc(y_frame))\n            batch_y_frame.append(y_frame)\n            if len(batch_x) == 
batch_size:\n                yield Variable(torch.from_numpy(numpy.stack(batch_x))).cuda(), \\\n                      Variable(torch.from_numpy(numpy.stack(batch_y_global).astype('float32'))).cuda(), \\\n                      batch_y_seq, \\\n                      Variable(torch.from_numpy(numpy.stack(batch_y_frame).astype('float32'))).cuda()\n                batch_x = []; batch_y_global = []; batch_y_seq = []; batch_y_frame = []\n\ndef bulk_load(prefix):\n    data = loadmat(os.path.join(FEATURE_DIR, prefix + '.mat'))\n    x = ((data['feat'] - mu) / sigma).astype('float32')\n    y_frame = data['labels'].astype('bool')\n    y_seq = [mask2ctc(y) for y in y_frame]\n    return x, y_frame, y_seq, data['hashes']\n\ndef mask2ctc(mask):\n    z = numpy.zeros((1, mask.shape[-1]), dtype = 'bool')\n    zp = numpy.concatenate([z, mask])\n    pz = numpy.concatenate([mask, z])\n    onset = (pz & ~zp).nonzero()\n    offset = (zp & ~pz).nonzero()\n    boundaries = sorted([(t, 1, event) for (t, event) in zip(*onset)] + [(t, -1, event) for (t, event) in zip(*offset)])    # time, onset/offset, event id\n    return [bound[2] * 2 + {1:1, -1:2}[bound[1]] for bound in boundaries]\n"
  },
  {
    "path": "code/sequential/util_out.py",
    "content": "import numpy\nfrom util_f1 import *\nfrom joblib import Parallel, delayed\n\nN_JOBS = 6\n\ndef ctc_decode(log_prob):\n    # Decode log_prob (boundary probabilities, batch * frame * (2n+1)) to frame_pred (boolean event decisions, batch * frame * n)\n    nSeqs, nFrames, nLabels = log_prob.shape\n    nClasses = (nLabels - 1) / 2\n    frame_pred = numpy.zeros((nSeqs, nFrames, nClasses), dtype = 'bool')\n    for i in range(nSeqs):\n        onset = [None] * nClasses\n        prev_token = 0\n        for t, token in zip(range(nFrames), log_prob[i].argmax(axis = 1)):\n            if token == 0: continue\n            if token % 2 == 1:      # onset of event\n                event = (token - 1) / 2\n                onset[event] = t\n            else:                   # offset of event\n                event = token / 2 - 1\n                if onset[event] is not None:\n                    frame_pred[i, onset[event] : t + 1, event] = True\n                onset[event] = None\n    return frame_pred\n\ndef optimize_gas_valid(pred, y):\n    nClasses = y.shape[-1]\n    result = Parallel(n_jobs = N_JOBS)(delayed(optimize_f1)(pred[..., i].ravel(), y[..., i].ravel()) for i in range(nClasses))\n    thres = numpy.array([r[0] for r in result], dtype = 'float64')\n    class_f1 = numpy.array([r[1] for r in result], dtype = 'float32') * 100.0\n    return thres, class_f1.mean()\n\ndef TP_FN_FP(pred, truth):\n    TP = (pred & truth).sum()\n    FN = (~pred & truth).sum()\n    FP = (pred & ~truth).sum()\n    return (TP, FN, FP)\n\ndef evaluate_gas_eval(pred, thres, truth, verbose = False):\n    # if verbose == False, return only the macro-average F1\n    # if verbose == True, return the class-wise TP, FN, FP, precision, recall, F1\n    pred = pred >= thres\n    nClasses = len(thres)\n    stats = Parallel(n_jobs = N_JOBS)(delayed(TP_FN_FP)(pred[..., i], truth[..., i]) for i in range(nClasses))\n    TP, FN, FP = numpy.array(stats, dtype = 'int32').T\n    f1 = 200.0 * TP / (2 * 
TP + FN + FP)\n    if not verbose: return f1.mean()\n    precision = 100.0 * TP / (TP + FP)\n    recall = 100.0 * TP / (TP + FN)\n    return TP, FN, FP, precision, recall, f1\n"
  },
  {
    "path": "data/download.sh",
    "content": "archives=\"audioset.tgz sequential.tgz dcase.tgz\"\nfor archive in $archives; do\n  wget http://islpc21.is.cs.cmu.edu/yunwang/git/cmu-thesis/data/$archive && ((tar zxf $archive && rm $archive) &)\ndone\nwhile [ $(ls $archives 2>/dev/null | wc -l) -ne 0 ]; do\n  echo -ne \"Extracting file $(ls ${archives//.tgz/\\/*} 2>/dev/null | wc -l) of 47457 ...\\r\"\n  sleep 10;\ndone\necho -e \"\\nAll files extracted. DONE!\"\n"
  },
  {
    "path": "workspace/.gitignore",
    "content": ""
  }
]