[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2020 Eugene Lee\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/license-MIT-blue)](https://github.com/eugenelet/NeuralScale-Private/blob/master/LICENSE)\n![Python 3.6](https://img.shields.io/badge/python-3.6-green.svg)\n\n# Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner\n\nThis repository is the official implementation of *Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner* that has been accepted to ECCV 2020. \n\n<img src=\"rppg-overview.png\" width=\"600\">\n\n## Heatmap Visualization\n\nLeft to right: \n\n1. Cropped input image\n2. End-to-end trained model (baseline)\n3. Meta-rPPG (transducive inference)\n4. Top to down: rPPG signal, Power Spectral Density (PSD), Predicted and ground truth heart rate\n\n<img src=\"demo1.gif\" width=\"600\">\n\n<img src=\"demo2.gif\" width=\"600\">\n\n<img src=\"demo3.gif\" width=\"600\">\n\n\n## Requirements\n\nTo install requirements:\n\n```setup\npip install -r requirements.txt\n```\n\nAll experiments can be run on a single NVIDIA GTX1080Ti GPU.\n\n\nThe code was tested with python3.6 the following software versions:\n\n| Software        | version | \n| ------------- |-------------| \n| cuDNN         | 7.6.5 |\n| Pytorch      | 1.5.0  |\n| CUDA | 10.2    |\n\n\n## Training\n\n### Training Data Preparation\n\nDownload training data ([example.pth](https://drive.google.com/file/d/1Z4GWiYjoQSXMYBhxBRZK9gUa1mYP0JsN/view?usp=sharing)) from Google Drive. Due to privacy issue (face images), provided data contains only a subset of the entire training data, i.e. 
contains faces of the authors of this paper.\n\nMove `example.pth` to `data/` directory:\n```\nmv example.pth data/\n```\n\n### Begin Training\n\nTo begin training, run:\n\n```\npython3 train.py\n```\n\n\n## Validation Data\n\nValidation data can be requested from:\n\n[MAHNOB-HCI](https://mahnob-db.eu/hci-tagging/)\n\n[UBFC-rPPG](https://sites.google.com/view/ybenezeth/ubfcrppg)\n\n\n\n## Contributing\n\nIf you find this work useful, consider citing our work using the following bibTex:\n```\n@inproceedings{lee2020meta,\n  title={Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner},\n  author={Lee, Eugene and Chen, Evan and Lee, Chen-Yi},\n  booktitle={European Conference on Computer Vision (ECCV)},\n  year={2020}\n}\n```\n"
  },
  {
    "path": "data/__init__.py",
    "content": "\"\"\"This package includes all the modules related to data loading and preprocessing\n\n To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.\n You need to implement four functions:\n    -- <__init__>:                      initialize the class, first call BaseDataset.__init__(self, opt).\n    -- <__len__>:                       return the size of dataset.\n    -- <__getitem__>:                   get a data point from data loader.\n    -- <modify_commandline_options>:    (optionally) add dataset-specific options and set default options.\n\nNow you can use the dataset class by specifying flag '--dataset_mode dummy'.\nSee our template dataset class 'template_dataset.py' for more details.\n\"\"\"\nfrom .data_utils import testing\nfrom .dataload import SlideWindowDataLoader\n"
  },
  {
    "path": "data/data_utils.py",
    "content": "from __future__ import print_function\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\nimport pickle\nimport itertools\nimport torch\nfrom scipy import signal\nfrom scipy.signal import butter, lfilter\n\nclass FunctionSet():\n   def __init__(self, sample_rate=30.0, display_port=8093):\n      self.fps = sample_rate\n\n   def CHROM_method(self, data):\n      '''CHROM matrix'''\n      project_matrix = np.array([[3, -2, 0], [1.5, 1, -1.5]]) \n      frames = data['frame'].copy()\n      mask = data['mask'].copy()\n      mask /= 255\n      mask = mask.astype(float)\n\n      rgb_mean = self.spatial_mean(frames, mask)\n      rgb_mean = rgb_mean.transpose()\n      rgb_mean = rgb_mean[[2, 1, 0], :]\n      \n      win_size = rgb_mean.shape[1]\n      C_norm = np.zeros([3, win_size])\n      for i in range(win_size):\n         C_norm[:, i] = rgb_mean[:, i] / np.mean(rgb_mean, axis=1)\n      S = np.matmul(project_matrix, C_norm)\n      S1 = S[0,:]\n      S2 = S[1,:]\n      alpha = np.std(S1)/np.std(S2)\n      h = S1 + alpha*S2  # POS\n      h = butter_bandpass_filter(h, 0.4, 5, self.fps, order=6)\n\n      return h - np.mean(h)\n\n\n\n   def POS_method(self, data):\n      '''POS matrix'''\n      project_matrix = np.array([[0, 1, -1], [-2, 1, 1]]) \n      frames = data['frame'].copy()\n      mask = data['mask'].copy()\n      mask /= 255\n      mask = mask.astype(float)\n\n      rgb_mean = self.spatial_mean(frames, mask)\n      rgb_mean = rgb_mean.transpose()\n      rgb_mean = rgb_mean[[2, 1, 0], :]\n      \n      win_size = rgb_mean.shape[1]\n      C_norm = np.zeros([3, win_size])\n      for i in range(win_size):\n         C_norm[:, i] = rgb_mean[:, i] / np.mean(rgb_mean, axis=1)\n      S = np.matmul(project_matrix, C_norm)\n      S1 = S[0,:]\n      S2 = S[1,:]\n      alpha = np.std(S1)/np.std(S2)\n      h = S1 + alpha*S2  # POS\n      h = butter_bandpass_filter(h, 0.4, 5, self.fps, order=6)\n      return h - np.mean(h)\n   \n   def spatial_mean(self, 
frames, mask):\n      t0 = np.sum(frames, axis=(0, 2, 3))\n      t1 = np.sum(mask, axis=(0, 2, 3))\n\n      mean = t0/t1\n      return mean\n\n\ndef butter_bandpass(lowcut, highcut, fs, order=5):\n   nyq = 0.5 * fs\n   low = lowcut / nyq\n   high = highcut / nyq\n   b, a = butter(order, [low, high], btype='band')\n   return b, a\n\n\ndef butter_bandpass_filter(data, lowcut, highcut, fs, order=5):\n   b, a = butter_bandpass(lowcut, highcut, fs, order=order)\n   y = signal.filtfilt(b, a, data, method=\"pad\")\n   return y\n\n\ndef normed(a):\n   amin, amax = np.min(a), np.max(a)\n   t = a.copy()\n   for i in range(a.shape[0]):\n      t[i] = (a[i]-amin) / (amax-amin)\n   return t\n\n\ndef testing(opt, model, testset, data_idx, epoch):\n   results, true_rPPG = model.get_current_results(0)\n   loss = model.get_current_losses(0)\n   test_data = testset[0, 0]\n\n   # model.eval() is not called here; the RNN cannot be adapted in eval mode\n   model.set_input(test_data)\n   model.fewshot_test(epoch)\n\n   t_results, t_true_rPPG = model.get_current_results(1)\n   test_loss = model.get_current_losses(1)\n\n   model.train()\n\n   return loss[2], test_loss\n\n\ndef amp_equalize(sig):\n   mean = sig.mean()\n   smin = sig.min()\n   smax = sig.max()\n   ans = (sig - mean)/(smax-smin)*10\n   yhat = torch.from_numpy(signal.savgol_filter(ans, 11, 5))\n   return yhat\n\n\ndef get_bpm(Sig, rate=30.0):\n   sig = Sig.copy()\n   n = len(sig)\n   fps = rate\n\n   win = signal.hann(sig.size)\n   sig = sig - np.expand_dims(np.mean(sig, -1), -1)\n   sig = sig * win\n\n   filtered_sig = butter_bandpass_filter(sig, 0.4, 4, fps, order=3)\n\n   f, Pxx_den = signal.welch(filtered_sig, fps, nperseg=n)\n   index = np.argmax(Pxx_den)\n   HR_estimate = round(f[index]*60.0)\n\n   return HR_estimate\n"
  },
  {
    "path": "data/dataload.py",
    "content": "import torch\r\nfrom data.pre_dataload import BaselineDataset\r\n# from Visualize.visualizer import Visualizer\r\nimport random\r\n\r\nfrom scipy import signal\r\nimport numpy as np\r\nimport pdb\r\n# pdb.set_trace()\r\n\r\nclass SlideWindowDataLoader():\r\n   \"\"\"Wrapper class of Dataset class that performs multi-threaded data loading.\r\n      The class is only a container of the dataset.\r\n\r\n      There are two ways to get a data out of the Loader. \r\n\r\n         1) feed in a list of videos: input = dataset[[0,3,5,10], 2020]. This gets the data starting at 2020 frame from 0, 3, 5, 10th video.\r\n         2) feed a single value of videos:  input = dataset[0, 2020]. This gets a batch of data starting at 2020 from the 0th video.\r\n   \"\"\"\r\n\r\n   def __init__(self, opt, isTrain):\r\n      \"\"\"Initialize this class\r\n      \"\"\"\r\n      # self.visualizer = Visualizer(opt, isTrain=True)\r\n      # self.visualizer.reset()\r\n      self.opt = opt\r\n      self.isTrain = isTrain\r\n\r\n      self.dataset = BaselineDataset(opt, isTrain)\r\n      if self.isTrain:\r\n         print(\"dataset [%s-%s] was created\" % ('rPPGDataset', 'train'))\r\n      else:\r\n         print(\"dataset [%s-%s] was created\" % ('rPPGDataset', 'test'))\r\n      self.length = int(len(self.dataset))\r\n\r\n      self.num_tasks = self.dataset.num_tasks\r\n      self.task_len = self.dataset.task_len\r\n\r\n   def load_data(self):\r\n      return self\r\n\r\n   def __len__(self):\r\n      \"\"\"Return the number of data in the dataset\"\"\"\r\n      return self.length\r\n\r\n   def __getitem__(self, items):\r\n      \"\"\"Return a batch of data\r\n         items -- [task_num, index of data for specified task]\r\n      \"\"\"\r\n\r\n      inputs = []\r\n      ppg = []\r\n      frame = []\r\n      mask = []\r\n\r\n      if self.isTrain:\r\n         batch = self.opt.batch_size\r\n      else:\r\n         batch = self.opt.batch_size + self.opt.fewshots\r\n\r\n      if not 
isinstance(items[0], list):\r\n         for i in range(batch):\r\n            dat = self.dataset[items[0], items[1]+60*i]\r\n            inputs.append(dat['input'])\r\n            ppg.append(dat['PPG'])\r\n      else:\r\n         for idx in items[0]:\r\n            dat = self.dataset[idx, items[1]]\r\n            inputs.append(dat['input'])\r\n            ppg.append(dat['PPG'])\r\n\r\n      inputs = torch.stack(inputs)\r\n      ppg = torch.stack(ppg)\r\n      return {'input': inputs, 'rPPG': ppg}\r\n\r\n\r\n   def quantify(self, rppg):\r\n      quantified = torch.empty(rppg.shape[0], dtype=torch.long)\r\n      tmax = rppg.max()\r\n      tmin = rppg.min()\r\n      interval = (tmax - tmin)/39\r\n      for i in range(len(quantified)):\r\n         quantified[i] = ((rppg[i] - tmin)/interval).round().long()\r\n      return quantified\r\n\r\n   def __call__(self):\r\n      output_list = []\r\n      for idx in range(self.num_tasks):\r\n         tmp = self.dataset(idx)\r\n         tmp['rPPG'] = tmp.pop('PPG')\r\n         output_list.append(tmp)\r\n      return output_list\r\n\r\n"
  },
  {
    "path": "data/pre_dataload.py",
    "content": "from __future__ import print_function\nimport torch\nimport os\n# import pickle\nimport numpy as np\nimport sys\n\nfrom sklearn.preprocessing import normalize\nfrom scipy import signal\nimport matplotlib.pyplot as plt\nfrom scipy.signal import butter, lfilter\nfrom data.data_utils import butter_bandpass_filter\nimport pdb\n\n\n\nclass BaselineDataset():\n   \"\"\"Preprocessing class of Dataset class that performs multi-threaded data loading\n\n   \"\"\"\n   def __init__(self, opt, isTrain):\n      \"\"\"Initialize this dataset class.\n\n      Parameters:\n         opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions\n\n         The self.dataset is a list of facial data, the length of the list is 18, and each element is a torch tensor of shape [2852, 3, 64, 64]\n         The self.maskset is the corresponding mask data, constructed of 0 and 255, so it determines the landmarks we're using in self.dataset  \n\n      \"\"\"\n      # get the image directory\n\n      self.isTrain = isTrain\n      self.opt = opt\n\n      temp_data = torch.load('data/example.pth')\n      if self.isTrain:\n         self.maskset = temp_data['mask'][:5]\n         self.dataset = temp_data['image'][:5]\n         self.ppg_dataset = temp_data['ppg'][:5]\n         self.num_tasks = len(self.dataset)\n         self.task_len = [self.dataset[i].shape[0]\n                          for i in range(len(self.dataset))]\n      else:\n         self.maskset = temp_data['mask'][-1:]\n         self.dataset = temp_data['image'][-1:]\n         self.ppg_dataset = temp_data['ppg'][-1:]\n         self.num_tasks = 1\n         self.task_len = self.dataset[0].shape[0]\n         # pdb.set_trace()\n\n      self.length = 0\n      for i in range(len(self.ppg_dataset)):\n         self.length += self.ppg_dataset[i].shape[0] - self.opt.win_size\n\n\n   def __getitem__(self, items):\n      \"\"\"Return a data point and its metadata information.\n\n      Parameters:\n       
  items -- [task_number, index of data for specified task]\n         items[0] -- an integer in range 0 to 4 in train mode, only 0 available in test mode\n         items[1] -- determined by the length of the video\n\n      Returns a dictionary that contains input and PPG\n         input -- a set of frames from the pickle file (60 x 3 x 64 x 64)\n         PPG -- the corresponding signal (60)\n      \"\"\"\n\n      inputs = []\n      masks = []\n      for i in range(items[1], items[1] + self.opt.win_size):\n         frame = self.dataset[items[0]][i].clone()\n         mask = self.maskset[items[0]][i].clone()\n         inputs.append(frame)\n         masks.append(mask)\n      ppg = self.ppg_dataset[items[0]][items[1]: items[1] + self.opt.win_size].clone()\n\n      inputs = np.stack(inputs)\n      inputs = torch.from_numpy(inputs)\n      masks = np.stack(masks)\n      masks = torch.from_numpy(masks)\n\n      inputs = self.baseline_procress(inputs, masks.clone())\n      ppg = self.quantify(ppg)\n\n      return {'input': inputs, 'PPG': ppg}\n\n   def __len__(self):\n      \"\"\"Return the total number of sliding windows in the dataset.\"\"\"\n\n      return self.length\n\n   def quantify(self, rppg):\n      quantified = torch.empty(rppg.shape[0], dtype=torch.long)\n\n      tmax = rppg.max()\n      tmin = rppg.min()\n      interval = (tmax - tmin)/39\n      for i in range(len(quantified)):\n         quantified[i] = ((rppg[i] - tmin)/interval).round().long()\n\n      return quantified\n   \n   def baseline_procress(self, data, mask):\n\n      
mask /= 255\n      mask = mask.float()\n\n      input_mean = data.sum(dim=(0, 2, 3), keepdim=False) / \\\n          mask.sum(dim=(0, 2, 3), keepdim=False)  # mean over T, H, W\n      for i in range(data.shape[1]):\n         data[:, i, :, :] = data[:, i, :, :] - input_mean[i]  # subtract the per-channel mean\n      data = data*mask\n      \n      x_hat = data.sum(dim=(2, 3), keepdim=False)/ \\\n               mask.sum(dim=(2, 3), keepdim=False)  # mean over H, W\n      G_x = np.empty(x_hat.size())  # filtered x_hat\n\n      for i in range(data.shape[1]):  # dim 1 is the RGB channels\n         G_x[:, i] = butter_bandpass_filter(x_hat[:, i], 1, 8, 30, order=3)\n         for j in range(data.shape[0]):\n            data[j, i, :, :] = data[j, i, :, :] - \\\n                  (x_hat[j, i] - G_x[j, i])\n      data = data*mask\n      return data\n\n   def __call__(self, idx):\n      inputs = []\n      masks = []\n      items = [idx, 0]\n\n      if not self.isTrain:\n         new_index = items[1] % (\n             self.task_len - (self.opt.batch_size + self.opt.fewshots)*self.opt.win_size)\n         for i in range(new_index, new_index + 15*self.opt.win_size):\n            frame = self.dataset[items[0]][i].clone()\n            mask = self.maskset[items[0]][i].clone()\n            inputs.append(frame)\n            masks.append(mask)\n         ppg = self.ppg_dataset[items[0]\n                                ][new_index: new_index + 15*self.opt.win_size].clone()\n      else:\n         for i in range(items[1], items[1] + 15*self.opt.win_size):\n            frame = self.dataset[items[0]][i].clone()\n            mask = self.maskset[items[0]][i].clone()\n            inputs.append(frame)\n            masks.append(mask)\n         ppg = 
self.ppg_dataset[items[0]][items[1]: items[1] + 15*self.opt.win_size].clone()\n\n      inputs = np.stack(inputs)\n      inputs = torch.from_numpy(inputs)\n      masks = np.stack(masks)\n      masks = torch.from_numpy(masks)\n\n      inputs = self.baseline_procress(inputs, masks.clone())\n      ppg = self.quantify(ppg)\n\n      return {'input': inputs, 'PPG': ppg}\n"
  },
  {
    "path": "model/__init__.py",
    "content": "\"\"\"This package includes all the modules related to data loading and preprocessing\n\n To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.\n You need to implement four functions:\n    -- <__init__>:                      initialize the class, first call BaseDataset.__init__(self, opt).\n    -- <__len__>:                       return the size of dataset.\n    -- <__getitem__>:                   get a data point from data loader.\n    -- <modify_commandline_options>:    (optionally) add dataset-specific options and set default options.\n\nNow you can use the dataset class by specifying flag '--dataset_mode dummy'.\nSee our template dataset class 'template_dataset.py' for more details.\n\"\"\"\n\nfrom .main_model import meta_rPPG\n"
  },
  {
    "path": "model/loss.py",
    "content": "import torch\r\nimport numpy as np\r\nimport torch.nn as nn\r\nfrom torch.nn import init\r\nimport torch.optim as optim\r\nimport os\r\nfrom torch.autograd import Variable\r\nfrom torch.nn.functional import conv1d\r\n\r\nfrom scipy import signal\r\nimport torch.nn.functional as F\r\nimport pdb\r\n\r\n\r\nclass ordLoss(nn.Module):\r\n   \"\"\"\r\n   Ordinal loss is defined as the average of pixelwise ordinal loss F(h, w, X, O)\r\n   over the entire image domain:\r\n   \"\"\"\r\n\r\n   def __init__(self):\r\n      super(ordLoss, self).__init__()\r\n      self.loss = 0.0\r\n\r\n   def forward(self, orig_ord_labels, orig_target):\r\n      \"\"\"\r\n      :param ord_labels: ordinal labels for each position of Image I.\r\n      :param target:     the ground_truth discreted using SID strategy.\r\n      :return: ordinal loss\r\n      \"\"\"\r\n      device = orig_ord_labels.device\r\n      ord_labels = orig_ord_labels.clone()\r\n      # ord_labels = ord_labels.unsqueeze(0)\r\n      ord_labels = torch.transpose(ord_labels, 1, 2)\r\n\r\n      N, C, W = ord_labels.size()\r\n      ord_num = C \r\n\r\n      self.loss = 0.0\r\n\r\n      # faster version\r\n      if torch.cuda.is_available():\r\n         K = torch.zeros((N, C, W), dtype=torch.int).to(device)\r\n         for i in range(ord_num):\r\n               K[:, i, :] = K[:, i, :] + i * \\\r\n                  torch.ones((N, W), dtype=torch.int).to(device)\r\n      else:\r\n         K = torch.zeros((N, C, W), dtype=torch.int)\r\n         for i in range(ord_num):\r\n               K[:, i, :] = K[:, i, :] + i * \\\r\n                  torch.ones((N, W), dtype=torch.int)\r\n      # pdb.set_trace()\r\n\r\n      # target = orig_target.clone().type(torch.DoubleTensor)\r\n      if device == torch.device('cpu'):\r\n         target = orig_target.clone().type(torch.IntTensor)\r\n      else:\r\n         target = orig_target.clone().type(torch.cuda.IntTensor)\r\n\r\n      mask_0 = torch.zeros((N, C, W), 
dtype=torch.bool)\r\n      mask_1 = torch.zeros((N, C, W), dtype=torch.bool)\r\n      for i in range(N):\r\n         mask_0[i] = (K[i] <= target[i]).detach()\r\n         mask_1[i] = (K[i] > target[i]).detach()\r\n\r\n      one = torch.ones(ord_labels[mask_1].size())\r\n      if torch.cuda.is_available():\r\n         one = one.to(device)\r\n\r\n      self.loss += torch.sum(torch.log(torch.clamp(ord_labels[mask_0], min=1e-8, max=1e8))) \\\r\n         + torch.sum(torch.log(torch.clamp(one - ord_labels[mask_1], min=1e-8, max=1e8)))\r\n\r\n      N = N * W\r\n      self.loss /= (-N)  # negative\r\n      return self.loss\r\n\r\nclass customLoss(nn.Module):\r\n   \"\"\"\r\n   This custom loss combines ordLoss with a regression loss on the frequency magnitude\r\n   \"\"\"\r\n   def __init__(self, device):\r\n      super(customLoss, self).__init__()\r\n      self.loss = 0.0\r\n      self.ord = ordLoss()\r\n\r\n      self.reg = regressLoss()\r\n      self.weight = nn.Linear(2, 1).to(device)\r\n      with torch.no_grad():\r\n         self.weight.weight.copy_(torch.tensor([1.0, 1.0]))\r\n      self.t = torch.tensor([2.0, 2.0]).to(device)\r\n      self.device = device\r\n\r\n   def forward(self, predict, true_rPPG):\r\n\r\n      self.loss1 = self.ord(predict[0], true_rPPG)\r\n      self.true_fft = self.torch_style_fft(true_rPPG)  # (batch size x 60)\r\n      self.predict_fft = self.torch_style_fft(predict[1])  # (batch size x 60)\r\n\r\n      self.loss2 = self.reg(self.predict_fft, self.true_fft)\r\n      if torch.isnan(self.loss2):\r\n         pdb.set_trace()\r\n\r\n      # self.loss = self.loss1 + self.weight * self.loss2\r\n      # 
pdb.set_trace()\r\n      self.t1 = self.weight(self.t)\r\n      self.loss = self.weight(torch.stack([self.loss1, self.loss2]))\r\n\r\n      return self.loss\r\n\r\n   def torch_style_fft(self, sig):\r\n      S, _ = torch_welch(sig, fps=30)\r\n\r\n      return S\r\n\r\n\r\nclass regressLoss(nn.Module):\r\n    def __init__(self):\r\n        super(regressLoss, self).__init__()\r\n        self.softmax = nn.Softmax(dim=1)\r\n\r\n    def forward(self, outputs, targets):\r\n\r\n      preoutput = outputs.clone()\r\n      if torch.isnan(preoutput.cpu().detach()).any():\r\n         pdb.set_trace()\r\n      targets = self.softmax(targets)\r\n      outputs = self.softmax(outputs)\r\n      if torch.isnan(outputs.cpu().detach()).any():\r\n         pdb.set_trace()\r\n\r\n      loss = -targets.float() * torch.log(outputs)\r\n      return torch.mean(loss)\r\n\r\n\r\nclass KLDivLoss(nn.Module):\r\n    def __init__(self, reduction=\"mean\"):\r\n        super(KLDivLoss, self).__init__()\r\n        self.criterion = torch.nn.KLDivLoss(reduction=reduction)\r\n\r\n    def forward(self, outputs, targets):\r\n      loss = self.criterion(F.log_softmax(outputs, dim=1), F.softmax(targets, dim=1))\r\n\r\n      return loss\r\n\r\n\r\ndef torch_welch(sig, fps):\r\n   nperseg = sig.size(1)\r\n   nfft = sig.size(1)\r\n   noverlap = nperseg//2\r\n\r\n   sig = sig.type(torch.cuda.FloatTensor)\r\n   win = 
torch.from_numpy(signal.hann(sig.size(1))).to(sig.get_device()).type(torch.cuda.FloatTensor)\r\n   sig = sig.unsqueeze(1)\r\n   # pdb.set_trace()\r\n\r\n   '''detrend'''\r\n   sig = sig - torch.from_numpy(np.expand_dims(np.mean(sig.detach().cpu().numpy(), -1), -1)).to(sig.get_device())\r\n   sig = sig * win\r\n   S = torch.rfft(sig, 1, normalized=True, onesided=True)\r\n   S = torch.sqrt(S[..., 0]**2 + S[..., 1]**2)   \r\n   freqs = torch.from_numpy(np.fft.rfftfreq(nfft, 1/float(fps)))\r\n\r\n   S = S.squeeze(1)\r\n\r\n   return S, freqs\r\n\r\n"
  },
  {
    "path": "model/main_model.py",
    "content": "import torch\r\nimport numpy as np\r\nimport torch.nn as nn\r\nfrom torch.nn import init\r\nimport torch.optim as optim\r\nimport os\r\nimport itertools\r\nfrom model.sub_model import rPPG_Estimator, Convolutional_Encoder, Synthetic_Gradient_Generator\r\nfrom model.loss import ordLoss, KLDivLoss\r\nfrom scipy import signal\r\nimport pickle\r\nfrom data.data_utils import butter_bandpass_filter\r\nimport time\r\nimport pdb\r\n\r\n\r\nclass meta_rPPG(nn.Module):\r\n   \"\"\"\r\n   You can name your own checkpoint directory (opt.checkpoints_dir).\r\n\r\n   A_net refers to Conv_Encoder, B_net refers to rPPG_Estimator, Grad_net refers to Synth_Grad_Gen.\r\n   The loading directory can be changed to opt.checkpoints_dir if some other checkpoints are in need.\r\n\r\n   \"\"\"\r\n\r\n   def __init__(self, opt, isTrain, continue_train=False, norm_layer=nn.BatchNorm2d):\r\n      \"\"\"\r\n      Attention_ResNet -- using EfficientNet with LSTM\r\n      AttentionNet -- using a attention strcture without a LSTM\r\n      \"\"\"\r\n      super(meta_rPPG, self).__init__()\r\n      self.save_dir = os.path.join(os.getcwd(), opt.checkpoints_dir)\r\n      self.load_dir = os.path.join(os.getcwd(), opt.checkpoints_dir)\r\n      if os.path.exists(self.save_dir) == False:\r\n         os.makedirs(self.save_dir)\r\n      self.isTrain = isTrain\r\n      self.opt = opt\r\n      self.gpu_ids = opt.gpu_ids\r\n      self.thres = 0.5\r\n      self.continue_train = continue_train\r\n      self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') \r\n      # self.device = torch.device('cpu') \r\n\r\n      self.prototype = torch.zeros(120)\r\n      self.h = torch.zeros(2*opt.lstm_num_layers, opt.batch_size, 60).to(self.device)\r\n      self.c = torch.zeros(2*opt.lstm_num_layers, opt.batch_size, 60).to(self.device)\r\n\r\n      self.A_net = Convolutional_Encoder(input_channel=3, isTrain=self.isTrain, device=self.device)\r\n\r\n      
self.B_net = rPPG_Estimator(input_channel=120, num_layers=opt.lstm_num_layers, \r\n            isTrain=self.isTrain, device=self.device, h=self.h, c=self.c)\r\n\r\n      self.Grad_net = Synthetic_Gradient_Generator(input_channel=120, isTrain=self.isTrain, device=self.device)\r\n      \r\n      \r\n      self.A_net.to(self.device)\r\n      self.B_net.to(self.device)\r\n      self.Grad_net.to(self.device)\r\n      self.model = [self.A_net, self.B_net, self.Grad_net]\r\n      self.fewloss = 0.0\r\n      self.ordloss = 0.0\r\n      self.gradloss = 0.0\r\n\r\n      self.criterion1 = torch.nn.MSELoss()\r\n      self.criterion2 = ordLoss()\r\n      self.criterion3 = torch.nn.MSELoss()\r\n\r\n      self.optimizerA = torch.optim.SGD(self.A_net.parameters(), opt.lr, momentum=0.9, weight_decay=5e-4)\r\n      self.optimizerB = torch.optim.SGD(self.B_net.parameters(), opt.lr, momentum=0.9, weight_decay=5e-4)\r\n      self.optimizerGrad = torch.optim.SGD(self.Grad_net.parameters(), opt.lr, momentum=0.9, weight_decay=5e-4)\r\n      if self.opt.adapt_position == \"extractor\":\r\n         self.optimizerPsi = torch.optim.SGD(self.A_net.parameters(), opt.lr*1e-2, momentum=0.9, weight_decay=5e-4)\r\n      elif self.opt.adapt_position == \"estimator\":\r\n         self.optimizerPsi = torch.optim.SGD(self.B_net.parameters(), opt.lr*1e-2, momentum=0.9, weight_decay=5e-4)\r\n      elif self.opt.adapt_position == \"both\":\r\n         self.optimizerPsi = torch.optim.SGD(itertools.chain(self.A_net.parameters(),\r\n                           self.B_net.parameters()), opt.lr*1e-2, momentum=0.9, weight_decay=5e-4)\r\n\r\n      self.schedulerA = optim.lr_scheduler.CosineAnnealingLR(self.optimizerA, T_max=5, eta_min=0.1*opt.lr)\r\n      self.schedulerB = optim.lr_scheduler.CosineAnnealingLR(self.optimizerB, T_max=5, eta_min=0.1*opt.lr)\r\n      self.schedulerGrad = optim.lr_scheduler.CosineAnnealingLR(self.optimizerGrad, T_max=5, eta_min=0.1*opt.lr)\r\n      self.schedulerPsi = 
optim.lr_scheduler.CosineAnnealingLR(self.optimizerPsi, T_max=5, eta_min=0.1*1e-2*opt.lr)\r\n\r\n   def print_networks(self, print_net):\r\n      \"\"\"Print the total number of parameters in the network and (optionally) the network architecture\r\n\r\n      Parameters:\r\n      print_net (bool) -- if True, print the network architecture\r\n      \"\"\"\r\n      print('----------- Networks initialized -------------')\r\n      num_params = 0\r\n      for param in self.A_net.parameters():\r\n         num_params += param.numel()\r\n      for param in self.B_net.parameters():\r\n         num_params += param.numel()\r\n      for param in self.Grad_net.parameters():\r\n         num_params += param.numel()\r\n      if print_net:\r\n         print(self.model)\r\n      print('Total number of parameters : %.3f M' %\r\n            (num_params / 1e6))\r\n      print('---------------------end----------------------')\r\n\r\n   def set_input(self, input):\r\n\r\n      self.input = input['input']\r\n      self.true_rPPG = input['rPPG']\r\n      if 'center' in input:\r\n         self.center = input['center']\r\n\r\n   def set_input_for_test(self, input):\r\n      self.input = input.to(self.device)\r\n      self.B_net.feed_hc([self.h, self.c])\r\n\r\n   def forward(self, x):\r\n      \"\"\"Run forward pass; called by both functions <optimize_parameters> and <test>.\"\"\"\r\n      self.inter = self.A_net(x)\r\n      self.decision, self.predict = self.B_net(self.inter)\r\n      if self.opt.adapt_position == \"extractor\":\r\n         self.gradient = self.Grad_net(self.inter.detach())\r\n      elif self.opt.adapt_position == \"estimator\":\r\n         self.gradient = self.Grad_net(self.predict.detach())\r\n      elif self.opt.adapt_position == \"both\":\r\n         self.gradient1 = self.Grad_net(self.inter.detach())\r\n         self.gradient2 = 
self.Grad_net(self.predict.detach())\r\n   \r\n   def new_theta_update(self, epoch):\r\n      inter = self.A_net(self.input.to(self.device))\r\n      decision, predict = self.B_net(inter)\r\n\r\n      fewloss = self.criterion1(self.prototype.expand(self.opt.batch_size,60,120), inter)\r\n      ordloss = self.criterion2(predict, self.true_rPPG.to(self.device))\r\n\r\n      self.optimizerA.zero_grad()\r\n      loss = fewloss + ordloss\r\n      loss.backward()\r\n      self.optimizerA.step()\r\n\r\n      if self.opt.adapt_position == \"extractor\":\r\n         for i in range(self.opt.fewshots):\r\n            inter = self.A_net(self.input.to(self.device))\r\n            decision, predict = self.B_net(inter)\r\n            inter_grad = self.Grad_net(inter.detach())\r\n            # self.optimizerA.zero_grad()\r\n            self.optimizerPsi.zero_grad()\r\n            grad = torch.autograd.grad(outputs=inter, inputs=self.A_net.parameters(),\r\n                                       grad_outputs=inter_grad, create_graph=False, retain_graph=False)\r\n            torch.autograd.backward(self.A_net.parameters(), grad_tensors=grad, retain_graph=False, create_graph=False)\r\n\r\n            self.optimizerPsi.step()\r\n         self.gradient = inter_grad.detach().clone()\r\n      elif self.opt.adapt_position == \"estimator\":\r\n         for i in range(self.opt.fewshots):\r\n            inter = self.A_net(self.input.to(self.device))\r\n            decision, predict = self.B_net(inter)\r\n            predict_grad = self.Grad_net(predict.detach())\r\n            # self.optimizerA.zero_grad()\r\n            self.optimizerPsi.zero_grad()\r\n            grad = torch.autograd.grad(outputs=predict, inputs=self.B_net.parameters(),\r\n                                       grad_outputs=predict_grad, create_graph=False, retain_graph=False)\r\n            torch.autograd.backward(self.B_net.parameters(), grad_tensors=grad, retain_graph=False, create_graph=False)\r\n            
self.optimizerPsi.step()\r\n         self.gradient = predict_grad.detach().clone()\r\n\r\n      elif self.opt.adapt_position == \"both\":\r\n         for i in range(self.opt.fewshots):\r\n            inter = self.A_net(self.input.to(self.device))\r\n            decision, predict = self.B_net(inter)\r\n            inter_grad = self.Grad_net(inter.detach())\r\n            predict_grad = self.Grad_net(predict.detach())\r\n\r\n            self.optimizerPsi.zero_grad()\r\n            grad = torch.autograd.grad(outputs=inter, inputs=self.A_net.parameters(),\r\n                                       grad_outputs=inter_grad, create_graph=False, retain_graph=False)\r\n            torch.autograd.backward(self.A_net.parameters(), grad_tensors=grad, retain_graph=False, create_graph=False)\r\n\r\n            grad = torch.autograd.grad(outputs=predict, inputs=self.B_net.parameters(),\r\n                                       grad_outputs=predict_grad, create_graph=False, retain_graph=False)\r\n            torch.autograd.backward(self.B_net.parameters(), grad_tensors=grad, retain_graph=False, create_graph=False)\r\n\r\n            self.optimizerPsi.step()\r\n         self.gradient = predict_grad.detach().clone()\r\n\r\n      '''release the retained graph, free all the variables'''\r\n      self.fewloss = fewloss.detach().clone()\r\n      self.ordloss = ordloss.detach().clone()\r\n      self.inter = inter.detach().clone()\r\n\r\n   def new_psi_phi_update(self, epoch):\r\n      if self.opt.adapt_position == \"extractor\":\r\n         inter = self.A_net(self.input.to(self.device))\r\n         decision, predict = self.B_net(inter)\r\n         inter_grad = self.Grad_net(inter.detach())\r\n\r\n         inter.retain_grad()\r\n         ordloss = self.criterion2(predict, self.true_rPPG.to(self.device))\r\n         fewloss = self.criterion1(self.prototype.expand(self.opt.batch_size,60,120), inter)\r\n         loss = ordloss + fewloss\r\n\r\n         self.optimizerB.zero_grad()\r\n         
self.optimizerA.zero_grad()\r\n         loss.backward()\r\n         self.optimizerA.step()\r\n         self.optimizerB.step()\r\n\r\n         # pdb.set_trace()\r\n         gradloss = self.criterion3(inter_grad, inter.grad)\r\n         self.optimizerGrad.zero_grad()\r\n         gradloss.backward()\r\n         self.optimizerGrad.step()\r\n         self.gradloss = gradloss.detach().clone()\r\n\r\n      elif self.opt.adapt_position == \"estimator\":\r\n         inter = self.A_net(self.input.to(self.device))\r\n         decision, predict = self.B_net(inter)\r\n         predict_grad = self.Grad_net(predict.detach())\r\n\r\n         predict.retain_grad()\r\n         ordloss = self.criterion2(predict, self.true_rPPG.to(self.device))\r\n         fewloss = self.criterion1(self.prototype.expand(\r\n             self.opt.batch_size, 60, 120), inter)\r\n         loss = ordloss + fewloss\r\n\r\n         self.optimizerB.zero_grad()\r\n         self.optimizerA.zero_grad()\r\n         loss.backward()\r\n         self.optimizerA.step()\r\n         self.optimizerB.step()\r\n\r\n         gradloss = self.criterion3(predict_grad, predict.grad)\r\n         self.optimizerGrad.zero_grad()\r\n         gradloss.backward()\r\n         self.optimizerGrad.step()\r\n         self.gradloss = gradloss.detach().clone()\r\n      \r\n      elif self.opt.adapt_position == \"both\":\r\n         inter = self.A_net(self.input.to(self.device))\r\n         decision, predict = self.B_net(inter)\r\n         predict_grad = self.Grad_net(predict.detach())\r\n         inter_grad = self.Grad_net(inter.detach())\r\n\r\n         predict.retain_grad()\r\n         inter.retain_grad()\r\n         ordloss = self.criterion2(predict, self.true_rPPG.to(self.device))\r\n         fewloss = self.criterion1(self.prototype.expand(\r\n             self.opt.batch_size, 60, 120), inter)\r\n         loss = ordloss + fewloss\r\n\r\n         self.optimizerB.zero_grad()\r\n         self.optimizerA.zero_grad()\r\n         
loss.backward()\r\n         self.optimizerA.step()\r\n         self.optimizerB.step()\r\n\r\n         gradloss = self.criterion3(\r\n               predict_grad, predict.grad) + self.criterion3(inter_grad, inter.grad)\r\n         self.optimizerGrad.zero_grad()\r\n         gradloss.backward()\r\n         self.optimizerGrad.step()\r\n         self.gradloss = gradloss.detach().clone()\r\n\r\n      self.decision = decision.detach().clone()\r\n      self.predict = predict.detach().clone()\r\n      self.ordloss = ordloss.detach().clone()\r\n\r\n\r\n   def update_prototype(self):\r\n      proto_tmp = torch.zeros(120).to(self.device)\r\n      h_tmp = torch.zeros(2*self.opt.lstm_num_layers, self.opt.batch_size, 60).to(self.device)\r\n      c_tmp = torch.zeros(2*self.opt.lstm_num_layers, self.opt.batch_size, 60).to(self.device)\r\n      self.B_net.feed_hc([self.h, self.c])\r\n\r\n      self.forward(self.input.to(self.device))\r\n      proto_tmp += self.inter.data.mean(axis=[0,1])\r\n      h_tmp += self.B_net.h.data\r\n      c_tmp += self.B_net.c.data\r\n\r\n      if torch.sum(self.prototype) == 0: # first update\r\n         self.prototype = proto_tmp\r\n         (self.h, self.c) = (h_tmp, c_tmp)\r\n      else: # exponential moving average of prototype and LSTM states\r\n         self.prototype = 0.8*self.prototype + 0.2*proto_tmp\r\n         (self.h, self.c) = (0.8*self.h + 0.2*h_tmp, 0.8*self.c + 0.2*c_tmp)\r\n\r\n\r\n   def setup(self, opt):\r\n      self.init_weights(self.A_net, self.B_net)\r\n      if self.continue_train:\r\n         self.load_networks(opt.load_file)\r\n         self.thres = 0.01\r\n      if not self.isTrain:\r\n         self.load_networks(opt.load_file)\r\n      self.print_networks(opt.print_net)\r\n\r\n   def init_weights(self, net1, net2, 
init_type='normal', init_gain=0.02):\r\n      net1.apply(init_func)\r\n      net2.apply(init_func)\r\n\r\n   def save_networks(self, suffix):\r\n      \"\"\"Save all the networks to the disk.\r\n\r\n      Parameters:\r\n         suffix (str) -- checkpoint label; used in the file name '%s_%s.pth' % (suffix, opt.name)\r\n      \"\"\"\r\n      save_filename1 = '%s_%s.pth' % (suffix, self.opt.name)\r\n      save_path1 = os.path.join(self.save_dir, save_filename1)\r\n      torch.save({'A': self.A_net.state_dict(), \r\n                  'B': self.B_net.state_dict(),\r\n                  'Grad': self.Grad_net.state_dict(),\r\n                  'proto': self.prototype.cpu(),\r\n                  'h': self.h.data.cpu(), \r\n                  'c': self.c.data.cpu()},\r\n                   save_path1)\r\n\r\n\r\n   def get_current_losses(self, istest):\r\n      if istest:\r\n         return self.t_ordloss\r\n      else:\r\n         return [self.fewloss, self.gradloss, self.ordloss]\r\n\r\n   def eval(self):\r\n      \"\"\"Make models eval mode during test time\"\"\"\r\n      self.A_net.eval()\r\n      self.B_net.eval()\r\n      self.Grad_net.eval()\r\n\r\n   def train(self):\r\n      \"\"\"Make models train mode after test time\"\"\"\r\n      self.A_net.train()\r\n      self.B_net.train()\r\n      self.Grad_net.train()\r\n\r\n   def test(self):\r\n      \"\"\"Forward function used in test time. 
\"\"\"\r\n      with torch.no_grad():\r\n         self.forward(self.input[-1].unsqueeze(0).to(self.device))\r\n\r\n      self.t_ordloss = self.criterion2(self.predict, self.true_rPPG[-1].unsqueeze(0).to(self.device))\r\n\r\n   def fewshot_test(self, epoch):\r\n      # clone A_net so that test-time adaptation leaves the trained weights untouched\r\n      A = pickle.loads(pickle.dumps(self.A_net))\r\n      optim = torch.optim.SGD(A.parameters(), self.opt.lr*1e-2, momentum=0.9, weight_decay=5e-4)\r\n\r\n      for i in range(self.opt.fewshots):\r\n         optim.zero_grad()\r\n         inter = A(self.input[i].unsqueeze(0).to(self.device))\r\n         inter_grad = self.Grad_net(inter)\r\n         grad = torch.autograd.grad(outputs=inter, inputs=A.parameters(),\r\n                     grad_outputs=inter_grad, create_graph=False, retain_graph=False)\r\n         torch.autograd.backward(A.parameters(), grad_tensors=grad, retain_graph=False, create_graph=False)\r\n         optim.step()\r\n\r\n      for i in range(self.opt.fewshots):\r\n         optim.zero_grad()\r\n         inter = A(self.input[i].unsqueeze(0).to(self.device))\r\n         loss = self.criterion1(inter, self.prototype.expand(1, 60, 120))\r\n         loss.backward()\r\n         optim.step()\r\n\r\n      with torch.no_grad():\r\n         tmp_h = self.B_net.h\r\n         tmp_c = self.B_net.c\r\n         self.B_net.feed_hc([self.h, self.c])\r\n\r\n         data = self.input[self.opt.fewshots:]\r\n         inter = A(data.to(self.device))\r\n         self.decision, self.predict = self.B_net(inter)\r\n         self.B_net.feed_hc([tmp_h, tmp_c])\r\n\r\n      self.t_ordloss = self.criterion2(self.predict[0].unsqueeze(0), self.true_rPPG[0].unsqueeze(0).to(self.device))\r\n\r\n\r\n   def get_current_results(self, istest):\r\n      # both branches returned identical tensors, so a single return suffices\r\n      return self.decision[-1].cpu().clone(), self.true_rPPG[-1].cpu().clone()\r\n   
      # return self.decision[0].cpu().clone(), self.true_rPPG[len(self.input)-1][0].cpu().clone()\r\n         \r\n   # def get_freq_results(self):\r\n   #    return self.criterion.true_fft[0].cpu().clone(), self.criterion.predict_fft[0].detach().cpu().clone()\r\n\r\n   def get_current_results_of_test(self):\r\n      # pdb.set_trace()\r\n      return self.decision[0].cpu().clone()\r\n\r\n   def load_networks(self, suffix):\r\n      \"\"\"Load all the networks from the disk.\r\n\r\n      Parameters:\r\n         suffix (str) -- current epoch; used in the file name '%s_%s.pth' % (suffix, name)\r\n      \"\"\"\r\n\r\n      load_filename1 = '%s_%s.pth' % (suffix, self.opt.name)\r\n      load_path1 = os.path.join(self.load_dir, load_filename1)\r\n\r\n      print('loading model from %s' % load_path1)\r\n      model_dict = torch.load(load_path1)\r\n      self.A_net.load_state_dict(model_dict['A'])\r\n      self.B_net.load_state_dict(model_dict['B'])\r\n      self.Grad_net.load_state_dict(model_dict['Grad'])\r\n\r\n      self.prototype = model_dict['proto'].to(self.device)\r\n      self.h = model_dict['h'].to(self.device)\r\n      self.c = model_dict['c'].to(self.device)\r\n\r\n      # self.A_net.eval()\r\n      # self.B_net.eval()\r\n      # self.Grad_net.eval()\r\n\r\n      \r\n\r\n      \r\n\r\n\r\n   def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):\r\n      \"\"\"Fix InstanceNorm checkpoints incompatibility (prior to 0.4)\"\"\"\r\n      key = keys[i]\r\n      if i + 1 == len(keys):  # at the end, pointing to a parameter/buffer\r\n         if module.__class__.__name__.startswith('InstanceNorm') and \\\r\n                 (key == 'running_mean' or key == 'running_var'):\r\n            if getattr(module, key) is None:\r\n               state_dict.pop('.'.join(keys))\r\n         if module.__class__.__name__.startswith('InstanceNorm') and \\\r\n            (key == 'num_batches_tracked'):\r\n               state_dict.pop('.'.join(keys))\r\n      
else:\r\n         self.__patch_instance_norm_state_dict(\r\n             state_dict, getattr(module, key), keys, i + 1)\r\n\r\n   def get_param(self):\r\n      return [self.A_net.get_param(), self.B_net.get_param()]\r\n\r\n   def update_learning_rate(self, epoch):\r\n      \"\"\"Update learning rates for all the networks; called at the end of every epoch\"\"\"\r\n\r\n      self.schedulerA.step()\r\n      self.schedulerB.step()\r\n      self.schedulerGrad.step()\r\n      self.schedulerPsi.step()\r\n      \r\n\r\n      # pdb.set_trace()\r\n      lr = self.optimizerB.param_groups[0]['lr']\r\n      return lr\r\n      # print('\\nlearning rate = %.7f' % lr)\r\n\r\ndef init_func(m):  # define the initialization function\r\n   classname = m.__class__.__name__\r\n   if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):\r\n      init.normal_(m.weight.data, 0.0, 0.02)\r\n   # if hasattr(m, 'bias') and m.bias is not None:\r\n   #    init.constant_(m.bias.data, 0.0)\r\n   # BatchNorm Layer's weight is not a matrix; only normal distribution applies.\r\n   elif classname.find('BatchNorm2d') != -1:\r\n      init.normal_(m.weight.data, 1.0, 0.02)\r\n      init.constant_(m.bias.data, 0.0)\r\n\r\n"
  },
  {
    "path": "model/sub_model.py",
    "content": "import torch\r\nimport numpy as np\r\nimport torch.nn as nn\r\nfrom torch.nn import init\r\nimport torch.optim as optim\r\nimport os\r\nimport math\r\n# from model.sub_models import ResNet, BasicBlock\r\n# from model.sub_models import OrdinalRegressionLayer\r\nimport itertools\r\nfrom collections import OrderedDict\r\nimport torch.nn.functional as F\r\n\r\nimport pdb\r\n\r\n\r\n\r\nclass Synthetic_Gradient_Generator(nn.Module):\r\n   def __init__(self, input_channel, isTrain, device):\r\n      super(Synthetic_Gradient_Generator, self).__init__()\r\n      self.layer1 = nn.Sequential(\r\n          nn.Conv1d(60, 40, kernel_size=3, padding=1),\r\n          nn.BatchNorm1d(40),\r\n          nn.ReLU()\r\n      )\r\n      self.layer2 = nn.Sequential(\r\n          nn.Conv1d(40, 20, kernel_size=3, padding=1),\r\n          nn.BatchNorm1d(20),\r\n          nn.ReLU()\r\n      )\r\n      self.layer3 = nn.Sequential(\r\n          nn.ConvTranspose1d(20, 40, kernel_size=3, padding=1),\r\n          nn.BatchNorm1d(40),\r\n          nn.ReLU()\r\n      )\r\n      self.layer4 = nn.Sequential(\r\n          nn.ConvTranspose1d(40, 60, kernel_size=3, padding=1)\r\n      )\r\n\r\n   def forward(self, x): \r\n      # x's shape = [6, 60, 120]\r\n      res_x1 = self.layer1(x)  # res_x1's shape = [6, 40, 120]\r\n      res_x2 = self.layer2(res_x1)  # res_x2's shape = [6, 20, 120]\r\n      res_x3 = self.layer3(res_x2) + res_x1 # res_x3's shape = [6, 40, 120]\r\n      out = self.layer4(res_x3) # out's shape = [6, 60, 120]\r\n      # pdb.set_trace()\r\n\r\n      return out\r\n\r\n\r\nclass Convolutional_Encoder(nn.Module):\r\n   def __init__(self, input_channel, isTrain, device):\r\n      super(Convolutional_Encoder, self).__init__()\r\n      self.conv = nn.Conv3d\r\n      self.conv1 = self.conv(input_channel, 32, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1))\r\n      self.conv2 = self.conv(32, 48, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1))\r\n      
self.conv3 = self.conv(48, 64, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1))\r\n      self.conv4 = self.conv(64, 80, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1))\r\n      self.conv5 = self.conv(80, 120, kernel_size=(1, 3, 3), stride=(1, 1, 1), padding=(0, 1, 1))\r\n\r\n      self.bn1 = nn.BatchNorm3d(32)\r\n      self.bn2 = nn.BatchNorm3d(48)\r\n      self.bn3 = nn.BatchNorm3d(64)\r\n      self.bn4 = nn.BatchNorm3d(80)\r\n      self.bn5 = nn.BatchNorm3d(120)\r\n\r\n      self.cnn = {'c1': self.conv1, 'c2': self.conv2, 'c3': self.conv3, 'c4': self.conv4,\r\n                  'c5': self.conv5, 'b1': self.bn1, 'b2': self.bn2, 'b3': self.bn3, \r\n                  'b4': self.bn4, 'b5': self.bn5}\r\n\r\n      self.relu = nn.ReLU(inplace=True)\r\n   \r\n   def forward(self, x):\r\n      win_size = x.shape[1]\r\n      x = x.permute(0, 2, 1, 3, 4)\r\n      \r\n      x = self.conv1(x)\r\n      # pdb.set_trace()\r\n      x = self.bn1(x)\r\n      x = F.avg_pool3d(x,(1,2,2))\r\n      x = self.relu(x)\r\n      x = self.conv2(x)\r\n      x = self.bn2(x)\r\n      x = F.avg_pool3d(x,(1,2,2))\r\n      x = self.relu(x)\r\n      x = self.conv3(x)\r\n      x = self.bn3(x)\r\n      x = F.avg_pool3d(x,(1,2,2))\r\n      x = self.relu(x)\r\n      x = self.conv4(x)\r\n      x = self.bn4(x)\r\n      x = F.avg_pool3d(x,(1,2,2))\r\n      x = self.relu(x)\r\n      x = self.conv5(x)\r\n      x = self.bn5(x)\r\n      x = F.avg_pool3d(x,(1,2,2))\r\n      x = self.relu(x)\r\n\r\n      x = F.adaptive_avg_pool3d(x, (win_size, 1, 1))\r\n      x = x.permute(0, 2, 1, 3, 4)\r\n      x = x.reshape(x.size(0), x.size(1),  - 1)\r\n\r\n      return x\r\n   \r\n   def return_grad(self):\r\n      # pdb.set_trace()\r\n      c1 = self.conv1.weight.grad.data.clone()\r\n      c2 = self.conv2.weight.grad.data.clone()\r\n      c3 = self.conv3.weight.grad.data.clone()\r\n      c4 = self.conv4.weight.grad.data.clone()\r\n      c5 = self.conv5.weight.grad.data.clone()\r\n      b1 = 
self.bn1.weight.grad.data.clone()\r\n      b2 = self.bn2.weight.grad.data.clone()\r\n      b3 = self.bn3.weight.grad.data.clone()\r\n      b4 = self.bn4.weight.grad.data.clone()\r\n      b5 = self.bn5.weight.grad.data.clone()\r\n\r\n      return {'c1': c1, 'c2': c2, 'c3': c3, 'c4': c4, 'c5': c5,\r\n              'b1': b1, 'b2': b2, 'b3': b3, 'b4': b4, 'b5': b5}\r\n   \r\n\r\nclass rPPG_Estimator(nn.Module):\r\n   def __init__(self, input_channel, num_layers, isTrain, device, num_classes=40, h=None, c=None):\r\n      super(rPPG_Estimator, self).__init__()\r\n      self.lstm = nn.LSTM(input_size=120, hidden_size=60,\r\n                          num_layers=num_layers, batch_first=True, bidirectional=True)\r\n      self.fc = nn.Linear(120, 80)\r\n      self.h, self.c = h, c\r\n      self.orl = OrdinalRegressionLayer()\r\n   \r\n   def forward(self, x):\r\n      self.lstm.flatten_parameters()\r\n      # pdb.set_trace()\r\n      if self.h is not None:\r\n         x, (self.h, self.c) = self.lstm(x, (self.h.data, self.c.data))\r\n      else:\r\n         x, _ = self.lstm(x)\r\n      # pdb.set_trace()\r\n\r\n      x = self.fc(x)\r\n      decision, prob = self.orl(x)\r\n      decision = decision.squeeze(2)\r\n      # pdb.set_trace()\r\n\r\n      return decision, prob\r\n   def feed_hc(self, data):\r\n      # pdb.set_trace()\r\n      self.h = data[0].data\r\n      self.c = data[1].data\r\n      # pdb.set_trace()\r\n\r\n   def return_grad(self):\r\n      fc_grad = self.fc.weight.grad.data.clone()\r\n      lstm_list = self.lstm._all_weights\r\n      lstm_dict = {}\r\n      for sublist in lstm_list:\r\n         for name in sublist:\r\n            # pdb.set_trace()\r\n            lstm_dict[name] = self.lstm._parameters[name].grad.data.clone()\r\n      return {'fc': fc_grad, 'lstm': lstm_dict}\r\n\r\n\r\n\r\nclass OrdinalRegressionLayer(nn.Module):\r\n   def __init__(self):\r\n      super(OrdinalRegressionLayer, self).__init__()\r\n\r\n   def forward(self, x):\r\n      \"\"\"\r\n   
   :param x: tensor of size N x W x C, where N is batch size, W is the temporal window length and C is the number of feature channels (C = 2K, K is the number of ordinal intervals)\r\n      :return: decode_c -- decoded ordinal labels per position, size N x W x 1\r\n               ord_c1 -- ordinal probabilities per position, size N x W x K\r\n      \"\"\"\r\n      x = x.permute(0, 2, 1)\r\n      N, C, W = x.size()\r\n      ord_num = C // 2\r\n\r\n      # vectorized ordinal decoding: split the 2K channels into K (A, B) pairs\r\n      # and softmax over each pair instead of looping over positions\r\n      A = x[:, ::2, :].clone()\r\n      B = x[:, 1::2, :].clone()\r\n      A = A.view(N, 1, ord_num * W)\r\n      B = B.view(N, 1, ord_num * W)\r\n      C = torch.cat((A, B), dim=1)\r\n      C = torch.clamp(C, min=1e-8, max=1e8)  # prevent nans\r\n      ord_c = nn.functional.softmax(C, dim=1)\r\n\r\n      ord_c1 = ord_c[:, 1, :].clone()\r\n      ord_c1 = ord_c1.view(-1, ord_num, W)\r\n      decode_c = torch.sum((ord_c1 > 0.5), dim=1).view(-1, 1, W)\r\n      ord_c1 = ord_c1.permute(0, 2, 1)\r\n      decode_c = decode_c.permute(0, 2, 1)\r\n      return decode_c, ord_c1\r\n"
  },
  {
    "path": "requirements.txt",
    "content": "tensorboardX\neasydict\ntqdm\nbypy\nnumpy\nmatplotlib\n"
  },
  {
    "path": "settings.py",
    "content": "import argparse\r\nimport torch.nn as nn\r\nimport torch\r\nfrom torch.optim import lr_scheduler\r\nimport numpy as np\r\nimport random\r\n\r\nimport pdb\r\n\r\nclass TrainOptions():\r\n   def __init__(self):\r\n      self.parser = argparse.ArgumentParser(\r\n         formatter_class=argparse.ArgumentDefaultsHelpFormatter)\r\n      self.parser.add_argument('--name', type=str, default='meta_rPPG')\r\n      self.parser.add_argument('--network', type=str, default='MAML')\r\n      self.parser.add_argument('--continue_train', action=\"store_true\")\r\n      self.parser.add_argument('--load_file', type=str, default='smallest')\r\n      self.parser.add_argument(\"--delay\", type=int, default=48)\r\n      self.parser.add_argument('--fewshots', type=int, default=1)\r\n      self.parser.add_argument('--lr_ratio', type=float, default=0.1)\r\n\r\n\r\n\r\n      self.parser.add_argument('--per_iter_task', type=int, default=3)\r\n      self.parser.add_argument('--lstm_num_layers', type=int, default=2)\r\n      self.parser.add_argument('--valid_ratio', type=float, default=0.75)\r\n\r\n      self.parser.add_argument('--batch_size', type=int, default=3)\r\n      self.parser.add_argument('--lr', type=float, default=1e-3)\r\n      self.parser.add_argument('--train_epoch', type=int, default=1)\r\n      self.parser.add_argument('--gpu_ids', type=str, default='0')\r\n      self.parser.add_argument('--print_net', action=\"store_true\")\r\n      self.parser.add_argument('--epoch_count', type=int, default=1)\r\n      # self.parser.add_argument('--lr_policy', type=str, default='cosine')\r\n      # self.parser.add_argument('--lr_decay_iters', type=int, default=1)\r\n      # self.parser.add_argument('--lr_update_iter', type=int, default=5000)\r\n\r\n      self.parser.add_argument('--print_freq', type=int, default=10)\r\n      self.parser.add_argument('--save_latest_freq', type=int, default=100)\r\n      self.parser.add_argument('--save_epoch_freq', type=int, default=50)\r\n     
 self.parser.add_argument('--save_by_iter', action=\"store_true\")\r\n\r\n\r\n      self.parser.add_argument('--display_id', type=int, default=1)\r\n      self.parser.add_argument(\r\n         '--display_server', type=str, default=\"http://localhost\")\r\n      self.parser.add_argument('--display_env', type=str, default='main')\r\n      self.parser.add_argument('--display_port', type=int, default=8800)\r\n      self.parser.add_argument('--display_winsize', type=int, default=256)\r\n      self.parser.add_argument('--verbose', type=bool, default=True)\r\n      self.parser.add_argument('--no_html', type=bool, default=True)\r\n      self.parser.add_argument(\r\n         '--checkpoints_dir', type=str, default='checkpoints')\r\n      self.parser.add_argument('--save_dir', type=str, default='save')\r\n      self.parser.add_argument('--max_dataset_size',type=int, default=float(\"inf\"))\r\n\r\n      self.parser.add_argument('--num_threads', type=int, default=4)\r\n      self.parser.add_argument('--phase', type=str, default='train')\r\n\r\n      self.parser.add_argument('--load_iter', type=int, default='0')\r\n      self.parser.add_argument('--epoch', type=str, default='latest')\r\n      self.parser.add_argument('--win_size', type=int, default=60)\r\n      self.parser.add_argument('--adapt_position', type=str, default=\"extractor\")\r\n\r\n   def get_options(self):\r\n      return self.parser.parse_args()\r\n   \r\n   def get_parser(self):\r\n      return self.parser\r\n\r\n\r\n\r\nclass custom_scheduler():\r\n   def __init__(self, optimizer, Tmax):\r\n      self.optimizer = optimizer\r\n      self.Tmax = Tmax\r\n      self.Max = optimizer.param_groups[0]['lr']\r\n      self.Min = self.Max*0.01\r\n      self.Tcur = 1\r\n\r\n   def step(self):\r\n      pi = torch.Tensor([np.pi])\r\n      for param_group in self.optimizer.param_groups:\r\n         param_group['lr'] = float(self.Min + 0.5*(self.Max - self.Min)*(1 + torch.cos(pi*self.Tcur/self.Tmax)))\r\n\r\n      if self.Tcur 
in (10, 30, 50, 70, 90):\r\n         self.Max = 10*self.optimizer.param_groups[0]['lr']\r\n      elif self.Tcur in (20, 40, 60, 80, 100):\r\n         self.Min = 0.01*self.optimizer.param_groups[0]['lr']\r\n      self.Tcur += 1\r\n"
  },
  {
    "path": "train.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.utils.data as Data\nimport numpy as np\nimport time\nimport os\nimport random\nimport matplotlib.pyplot as plt\nfrom data import SlideWindowDataLoader, testing\nfrom model import meta_rPPG\nfrom settings import TrainOptions\n\nimport pdb\n\n\nopt = TrainOptions().get_options()\niter_num = opt.batch_size\n\nmodel = meta_rPPG(opt, isTrain=True, continue_train=opt.continue_train)\nmodel.setup(opt)\n\ndataset = SlideWindowDataLoader(opt, isTrain=True)\ntestset = SlideWindowDataLoader(opt, isTrain=False)\n\nper_idx = opt.per_iter_task\ndataset_size = dataset.num_tasks * (dataset.task_len[0] - (opt.win_size))\ntask_len = (dataset.task_len[0] - per_idx*opt.win_size)\n\n\ntotal_iters = 0\n\nprint(\"Data Size: %d ||||| Batch Size: %d ||||| initial lr: %f\" %\n      (dataset_size, opt.batch_size, opt.lr))\n# pdb.set_trace()\n\ntask_list = random.sample(range(5), opt.batch_size)\nmodel.dataset = dataset\ndata = dataset[task_list, 0]\n# pdb.set_trace()\nmodel.set_input(data)\nmodel.update_prototype()\nmin_mae = [10, 10]\nmin_rmse = [10, 10]\nmin_merate = [10, 10]\nsaving = 1\n\n\nfor epoch in range(opt.epoch_count, opt.train_epoch + 1):\n   epoch_start_time = time.time()\n   epoch_iter = 0\n   i = 0\n   \n   \n\n   for data_idx in range(0, task_len, 1):\n      task_list = random.sample(range(5), opt.batch_size)\n      model.B_net.feed_hc([model.h, model.c])\n\n      model.progress = epoch + float(data_idx)/float(task_len)\n\n\n      for i in range(per_idx):\n         # pdb.set_trace()\n         data = dataset[task_list, data_idx + i*opt.win_size]\n         iter_start_time = time.time()\n         total_iters += opt.win_size\n         model.set_input(data)\n         if i == 0:\n            model.new_theta_update(epoch) # Adaptation phase\n         else:\n            model.new_psi_phi_update(epoch) # Learning phase\n      # pdb.set_trace()\n      loss, test_loss = testing(opt, model, testset, data_idx, epoch)\n     
 \n      epoch_iter += 1\n      data = dataset[task_list, np.random.randint(task_len)]\n      model.set_input(data)\n      model.update_prototype()\n\n\n   model.save_networks('latest')\n   model.save_networks(epoch)\n\n   # pdb.set_trace()\n   new_lr = model.update_learning_rate(epoch)\n   print('Epoch %d/%d ||||| Time: %d sec ||||| Lr: %.7f ||||| Loss: %.3f/%.3f' %\n         (epoch, opt.train_epoch, time.time() - epoch_start_time, new_lr,\n          loss, test_loss))\n"
  }
]