[
  {
    "path": ".gitignore",
    "content": "# virtualenv setting\nvenv_3DMPPE\n\n# output result\noutput\n\n# demo output\ndemo/*.pth.tar\n\n# byte-compiled\n/__pycache_/\n*/__pycache/*\n*/*/__pycache/\n*/*/*/__pycache/\n*.py[cod]\n*.pyc\n\n# nohup process\n*.out\n\n# idea\n.DS_Store\n.idea\n\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Gyeongsik Moon\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Github Code of \"MobileHumanPose: Toward real-time 3D human pose estimation in mobile devices\"\n\n#### [2021.11.23] There will be massive refactoring and optimization expected. It will be released as soon as possible including new model.pth, Please wait for the model!(expecting end of December)\n#### [2022.05.19] Dummy dataloader is added. This will make reduce about to 100x faster that user to generate dummy pth.tar file of MobileHumanPose model for their PoC.\n\n## Introduction\n\nThis repo is official **[PyTorch](https://pytorch.org)** implementation of **[MobileHumanPose: Toward real-time 3D human pose estimation in mobile devices(CVPRW 2021)](https://openaccess.thecvf.com/content/CVPR2021W/MAI/html/Choi_MobileHumanPose_Toward_Real-Time_3D_Human_Pose_Estimation_in_Mobile_Devices_CVPRW_2021_paper.html)**.\n\n## Dependencies\n* [PyTorch](https://pytorch.org)\n* [CUDA](https://developer.nvidia.com/cuda-downloads)\n* [cuDNN](https://developer.nvidia.com/cudnn)\n* [Anaconda](https://www.anaconda.com/download/)\n* [COCO API](https://github.com/cocodataset/cocoapi)\n\nThis code is tested under Ubuntu 16.04, CUDA 11.2 environment with two NVIDIA RTX or V100 GPUs.\n\nPython 3.6.5 version with virtualenv is used for development.\n\n## Directory\n\n### Root\nThe `${ROOT}` is described as below.\n```\n${ROOT}\n|-- data\n|-- demo\n|-- common\n|-- main\n|-- tool\n|-- vis\n`-- output\n```\n* `data` contains data loading codes and soft links to images and annotations directories.\n* `demo` contains demo codes.\n* `common` contains kernel codes for 3d multi-person pose estimation system. Also custom backbone is implemented in this repo\n* `main` contains high-level codes for training or testing the network.\n* `tool` contains data pre-processing codes. You don't have to run this code. 
### Output\nYou need to follow the directory structure of the `output` folder as below.\n```\n${POSE_ROOT}\n|-- output\n|-- |-- log\n|-- |-- model_dump\n|-- |-- result\n`-- |-- vis\n```\n* Creating the `output` folder as a soft link rather than a regular folder is recommended, since it can require a large amount of storage.\n* `log` folder contains training log files.\n* `model_dump` folder contains saved checkpoints for each epoch.\n* `result` folder contains final estimation files generated in the testing stage.\n* `vis` folder contains visualized results.\n\n### 3D visualization\n* Run `$DB_NAME_img_name.py` to get image file names in `.txt` format.\n* Place your test result files (`preds_2d_kpt_$DB_NAME.mat`, `preds_3d_kpt_$DB_NAME.mat`) in the `single` or `multi` folder.\n* Run `draw_3Dpose_$DB_NAME.m`.\n\n<p align=\"middle\">\n<img src=\"assets/test.JPG\">\n</p>\n\n## Running 3DMPPE_POSENET\n\n### Requirements\n\n```shell\ncd main\npip install -r requirements.txt\n```\n\n### Setup Training\n* In `main/config.py`, you can change the model settings, including the dataset to use, the network backbone, the input size, and so on.\n\n### Train\nIn the `main` folder, run\n```bash\npython train.py --gpu 0-1 --backbone LPSKI\n```\nto train the network on GPUs 0 and 1.\n\nIf you want to continue a previous experiment, run\n```bash\npython train.py --gpu 0-1 --backbone LPSKI --continue\n```\n`--gpu 0,1` can be used instead of `--gpu 0-1`.\n\n### Test\nPlace the trained model at `output/model_dump/`.\n\nIn the `main` folder, run\n```bash\npython test.py --gpu 0-1 --test_epoch 20-21 --backbone LPSKI\n```\nto test the network on GPUs 0 and 1 with the models trained at the 20th and 21st epochs. `--gpu 0,1` can be used instead of `--gpu 0-1`. For the backbone, you can choose one of:\n```\nBACKBONE_DICT = {\n    'LPRES':LpNetResConcat,\n    'LPSKI':LpNetSkiConcat,\n    'LPWO':LpNetWoConcat\n    }\n```\n\n#### Human3.6M dataset using protocol 1\nFor the evaluation, you can run `test.py`, or use the evaluation code in `Human36M`.\n<p align=\"center\">\n<img src=\"assets/protocol1.JPG\">\n</p>\n\n#### Human3.6M dataset using protocol 2\nFor the evaluation, you can run `test.py`, or use the evaluation code in `Human36M`.\n<p align=\"center\">\n<img src=\"assets/protocol2.JPG\">\n</p>\n\n#### MuPoTS-3D dataset\nFor the evaluation, run `test.py`. After that, move `data/MuPoTS/mpii_mupots_multiperson_eval.m` into `data/MuPoTS/data`. Also, move the test result files (`preds_2d_kpt_mupots.mat` and `preds_3d_kpt_mupots.mat`) into `data/MuPoTS/data`. Then run `mpii_mupots_multiperson_eval.m` with your evaluation mode arguments.\n<p align=\"center\">\n<img src=\"assets/mupots.JPG\">\n</p>\n\n#### TFLite inference\nFor inference on mobile devices, we converted the PyTorch implementation to ONNX and finally served it as TFLite; a conversion sketch is shown below.\nAn official demo app is available [here](https://github.com/tucan9389/PoseEstimation-TFLiteSwift).\n\n
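The sketch below shows, under stated assumptions, how the PyTorch-to-ONNX step could be started (the ONNX-to-TFLite step is done with separate tools such as onnx-tensorflow and the TensorFlow Lite converter). The checkpoint path, epoch, and joint count are placeholders, and the script is assumed to be run from the `main` folder so that the repo modules are importable; this is not the repo's official export script.\n```python\n# Hypothetical PyTorch -> ONNX export sketch; adjust the placeholders to your own setup.\nimport torch\nfrom torch.nn.parallel.data_parallel import DataParallel\nfrom model import get_pose_net  # provided by this repo\n\nbackbone, joint_num = 'LPSKI', 18\nckpt_path = 'output/model_dump/snapshot_24.pth.tar'  # placeholder checkpoint\n\n# build the network, load the checkpoint, and unwrap DataParallel,\n# following the Transformer class in common/base.py\nmodel = DataParallel(get_pose_net(backbone, False, joint_num)).cuda()\nmodel.load_state_dict(torch.load(ckpt_path)['network'])\nmodel = model.module.eval()\n\ndummy_input = torch.rand(1, 3, 256, 256).cuda()\ntorch.onnx.export(model, dummy_input, 'mobile_human_pose.onnx',\n                  input_names=['input'], output_names=['heatmaps'], opset_version=11)\n```\n\n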
## Reference\n\n**Where this repo comes from:**\nThe training code is based on the following paper and its GitHub implementation:\n* [PyTorch](https://pytorch.org) implementation of [Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image (ICCV 2019)](https://arxiv.org/abs/1907.11346).\n* Flexible and simple code.\n* Compatibility with most of the publicly available 2D and 3D, single and multi-person pose estimation datasets, including **[Human3.6M](http://vision.imar.ro/human3.6m/description.php), [MPII](http://human-pose.mpi-inf.mpg.de/), [MS COCO 2017](http://cocodataset.org/#home), [MuCo-3DHP](http://gvv.mpi-inf.mpg.de/projects/SingleShotMultiPerson/) and [MuPoTS-3D](http://gvv.mpi-inf.mpg.de/projects/SingleShotMultiPerson/)**.\n* Human pose estimation visualization code.\n\n```\n@InProceedings{Choi_2021_CVPR,\n    author    = {Choi, Sangbum and Choi, Seokeon and Kim, Changick},\n    title     = {MobileHumanPose: Toward Real-Time 3D Human Pose Estimation in Mobile Devices},\n    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},\n    month     = {June},\n    year      = {2021},\n    pages     = {2328-2338}\n}\n```\n\n"
  },
  {
    "path": "common/backbone/__init__.py",
    "content": "from backbone.lpnet_res_concat import *\nfrom backbone.lpnet_ski_concat import *\nfrom backbone.lpnet_wo_concat import *\n"
  },
  {
    "path": "common/backbone/lpnet_res_concat.py",
    "content": "import torch.nn as nn\r\nimport torch\r\nfrom torchsummary import summary\r\n\r\ndef _make_divisible(v, divisor, min_value=None):\r\n    \"\"\"\r\n    This function is taken from the original tf repo. It ensures that all layers have a channel number that is divisible by 8\r\n    It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py\r\n    :param v:\r\n    :param divisor:\r\n    :param min_value:\r\n    :return:\r\n    \"\"\"\r\n    if min_value is None:\r\n        min_value = divisor\r\n    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)\r\n    # Make sure that round down does not go down by more than 10%.\r\n    if new_v < 0.9 * v:\r\n        new_v += divisor\r\n    return new_v\r\n\r\nclass DoubleConv(nn.Sequential):\r\n    def __init__(self, in_ch, out_ch, norm_layer=None, activation_layer=None):\r\n        super(DoubleConv, self).__init__(\r\n            nn.Conv2d(in_ch , out_ch, kernel_size=1),\r\n            norm_layer(out_ch),\r\n            activation_layer(out_ch),\r\n            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),\r\n            norm_layer(out_ch),\r\n            activation_layer(out_ch),\r\n            nn.UpsamplingBilinear2d(scale_factor=2)\r\n        )\r\n\r\nclass ConvBNReLU(nn.Sequential):\r\n    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, norm_layer=None, activation_layer=None):\r\n        padding = (kernel_size - 1) // 2\r\n        super(ConvBNReLU, self).__init__(\r\n            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),\r\n            norm_layer(out_planes),\r\n            activation_layer(out_planes)\r\n        )\r\n\r\nclass InvertedResidual(nn.Module):\r\n    def __init__(self, inp, oup, stride, expand_ratio, norm_layer=None, activation_layer=None):\r\n        super(InvertedResidual, self).__init__()\r\n        self.stride = stride\r\n        assert stride in [1, 2]\r\n\r\n        hidden_dim = int(round(inp * expand_ratio))\r\n        self.use_res_connect = self.stride == 1 and inp == oup\r\n\r\n        layers = []\r\n        if expand_ratio != 1:\r\n            # pw\r\n            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer))\r\n        layers.extend([\r\n            # dw\r\n            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim, norm_layer=norm_layer, activation_layer=activation_layer),\r\n            # pw-linear\r\n            nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\r\n            norm_layer(oup),\r\n        ])\r\n        self.conv = nn.Sequential(*layers)\r\n\r\n    def forward(self, x):\r\n        if self.use_res_connect:\r\n            return x + self.conv(x)\r\n        else:\r\n            return self.conv(x)\r\n\r\nclass LpNetResConcat(nn.Module):\r\n    def __init__(self,\r\n                 input_size,\r\n                 joint_num,\r\n                 input_channel = 48,\r\n                 embedding_size = 2048,\r\n                 width_mult=1.0,\r\n                 round_nearest=8,\r\n                 block=None,\r\n                 norm_layer=None,\r\n                 activation_layer=None,\r\n                 inverted_residual_setting=None):\r\n\r\n        super(LpNetResConcat, self).__init__()\r\n\r\n        assert input_size[1] in [256]\r\n\r\n        if block is None:\r\n            block = InvertedResidual\r\n        if norm_layer is None:\r\n            
norm_layer = nn.BatchNorm2d\r\n        if activation_layer is None:\r\n            activation_layer = nn.PReLU # PReLU does not have inplace True\r\n        if inverted_residual_setting is None:\r\n            inverted_residual_setting = [\r\n                # t, c, n, s\r\n                [1, 64, 1, 1],  #[-1, 48, 256, 256]\r\n                [6, 48, 2, 2],  #[-1, 48, 128, 128]\r\n                [6, 48, 3, 2],  #[-1, 48, 64, 64]\r\n                [6, 64, 4, 2],  #[-1, 64, 32, 32]\r\n                [6, 96, 3, 2],  #[-1, 96, 16, 16]\r\n                [6, 160, 3, 2], #[-1, 160, 8, 8]\r\n                [6, 320, 1, 1], #[-1, 320, 8, 8]\r\n            ]\r\n\r\n        # building first layer\r\n        inp_channel = [_make_divisible(input_channel * width_mult, round_nearest),\r\n                         _make_divisible(input_channel * width_mult, round_nearest) + inverted_residual_setting[0][1],\r\n                         inverted_residual_setting[0][1] + inverted_residual_setting[1][1],\r\n                         inverted_residual_setting[1][1] + inverted_residual_setting[2][1],\r\n                         inverted_residual_setting[2][1] + inverted_residual_setting[3][1],\r\n                         inverted_residual_setting[3][1] + inverted_residual_setting[4][1],\r\n                         inverted_residual_setting[4][1] + inverted_residual_setting[5][1],\r\n                         inverted_residual_setting[5][1] + inverted_residual_setting[6][1],\r\n                         inverted_residual_setting[6][1] + embedding_size,\r\n                         256 + embedding_size,\r\n                       ]\r\n        self.first_conv = ConvBNReLU(3, inp_channel[0], stride=1, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        inv_residual = []\r\n        # building inverted residual blocks\r\n        j = 0\r\n        for t, c, n, s in inverted_residual_setting:\r\n            output_channel = _make_divisible(c * width_mult, round_nearest)\r\n            for i in range(n):\r\n                stride = s if i == 0 else 1\r\n                input_channel = inp_channel[j] if i == 0 else output_channel\r\n                inv_residual.append(block(input_channel, output_channel, stride, expand_ratio=t, norm_layer=norm_layer, activation_layer=activation_layer))\r\n            j += 1\r\n        # make it nn.Sequential\r\n        self.inv_residual = nn.Sequential(*inv_residual)\r\n\r\n        self.last_conv = ConvBNReLU(inp_channel[j], embedding_size, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.deonv0 = DoubleConv(inp_channel[j+1], 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deonv1 = DoubleConv(2304, 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deonv2 = DoubleConv(512, 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.final_layer = nn.Conv2d(\r\n            in_channels=256,\r\n            out_channels= joint_num * 64,\r\n            kernel_size=1,\r\n            stride=1,\r\n            padding=0\r\n        )\r\n\r\n        self.avgpool = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)\r\n        self.upsample = nn.UpsamplingBilinear2d(scale_factor=2)\r\n\r\n    def forward(self, x):\r\n        x0 = self.first_conv(x)\r\n        x1 = self.inv_residual[0:1](x0)\r\n        x2 = self.inv_residual[1:3](torch.cat([x0, x1], dim=1))\r\n        x0 = self.inv_residual[3:6](torch.cat([self.avgpool(x1), x2], dim=1))\r\n        x1 
= self.inv_residual[6:10](torch.cat([self.avgpool(x2), x0], dim=1))\r\n        x2 = self.inv_residual[10:13](torch.cat([self.avgpool(x0), x1], dim=1))\r\n        x0 = self.inv_residual[13:16](torch.cat([self.avgpool(x1), x2], dim=1))\r\n        x1 = self.inv_residual[16:17](torch.cat([self.avgpool(x2), x0], dim=1))\r\n        x2 = self.last_conv(torch.cat([x0, x1], dim=1))\r\n        x0 = self.deonv0(torch.cat([x1, x2], dim=1))\r\n        x1 = self.deonv1(torch.cat([self.upsample(x2), x0], dim=1))\r\n        x2 = self.deonv2(torch.cat([self.upsample(x0), x1], dim=1))\r\n        x0 = self.final_layer(x2)\r\n        return x0\r\n\r\n    def init_weights(self):\r\n        # use the attribute names defined in __init__ (deonv0/1/2 are the DoubleConv upsampling blocks)\r\n        for i in [self.deonv0, self.deonv1, self.deonv2]:\r\n            for name, m in i.named_modules():\r\n                if isinstance(m, nn.ConvTranspose2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                elif isinstance(m, nn.BatchNorm2d):\r\n                    nn.init.constant_(m.weight, 1)\r\n                    nn.init.constant_(m.bias, 0)\r\n        for j in [self.first_conv, self.inv_residual, self.last_conv, self.final_layer]:\r\n            for m in j.modules():\r\n                if isinstance(m, nn.Conv2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                    if hasattr(m, 'bias'):\r\n                        if m.bias is not None:\r\n                            nn.init.constant_(m.bias, 0)\r\n\r\nif __name__ == \"__main__\":\r\n    model = LpNetResConcat((256, 256), 18)\r\n    test_data = torch.rand(1, 3, 256, 256)\r\n    test_outputs = model(test_data)\r\n    # print(test_outputs.size())\r\n    summary(model, (3, 256, 256))"
  },
  {
    "path": "common/backbone/lpnet_ski_concat.py",
    "content": "import torch.nn as nn\r\nimport torch\r\nfrom torchsummary import summary\r\n\r\ndef _make_divisible(v, divisor, min_value=None):\r\n    \"\"\"\r\n    This function is taken from the original tf repo. It ensures that all layers have a channel number that is divisible by 8\r\n    It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py\r\n    :param v:\r\n    :param divisor:\r\n    :param min_value:\r\n    :return:\r\n    \"\"\"\r\n    if min_value is None:\r\n        min_value = divisor\r\n    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)\r\n    # Make sure that round down does not go down by more than 10%.\r\n    if new_v < 0.9 * v:\r\n        new_v += divisor\r\n    return new_v\r\n\r\nclass DeConv(nn.Sequential):\r\n    def __init__(self, in_ch, mid_ch, out_ch, norm_layer=None, activation_layer=None):\r\n        super(DeConv, self).__init__(\r\n            nn.Conv2d(in_ch + mid_ch, mid_ch, kernel_size=1),\r\n            norm_layer(mid_ch),\r\n            activation_layer(mid_ch),\r\n            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),\r\n            norm_layer(out_ch),\r\n            activation_layer(out_ch),\r\n            nn.UpsamplingBilinear2d(scale_factor=2)\r\n        )\r\n\r\nclass ConvBNReLU(nn.Sequential):\r\n    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, norm_layer=None, activation_layer=None):\r\n        padding = (kernel_size - 1) // 2\r\n        super(ConvBNReLU, self).__init__(\r\n            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),\r\n            norm_layer(out_planes),\r\n            activation_layer(out_planes)\r\n        )\r\n\r\nclass InvertedResidual(nn.Module):\r\n    def __init__(self, inp, oup, stride, expand_ratio, norm_layer=None, activation_layer=None):\r\n        super(InvertedResidual, self).__init__()\r\n        self.stride = stride\r\n        assert stride in [1, 2]\r\n\r\n        hidden_dim = int(round(inp * expand_ratio))\r\n        self.use_res_connect = self.stride == 1 and inp == oup\r\n\r\n        layers = []\r\n        if expand_ratio != 1:\r\n            # pw\r\n            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer))\r\n        layers.extend([\r\n            # dw\r\n            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim, norm_layer=norm_layer, activation_layer=activation_layer),\r\n            # pw-linear\r\n            nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\r\n            norm_layer(oup),\r\n        ])\r\n        self.conv = nn.Sequential(*layers)\r\n\r\n    def forward(self, x):\r\n        if self.use_res_connect:\r\n            return x + self.conv(x)\r\n        else:\r\n            return self.conv(x)\r\n\r\nclass LpNetSkiConcat(nn.Module):\r\n    def __init__(self,\r\n                 input_size,\r\n                 joint_num,\r\n                 input_channel = 48,\r\n                 embedding_size = 2048,\r\n                 width_mult=1.0,\r\n                 round_nearest=8,\r\n                 block=None,\r\n                 norm_layer=None,\r\n                 activation_layer=None,\r\n                 inverted_residual_setting=None):\r\n\r\n        super(LpNetSkiConcat, self).__init__()\r\n\r\n        assert input_size[1] in [256]\r\n\r\n        if block is None:\r\n            block = InvertedResidual\r\n        if norm_layer is None:\r\n          
  norm_layer = nn.BatchNorm2d\r\n        if activation_layer is None:\r\n            activation_layer = nn.PReLU # PReLU does not have inplace True\r\n        if inverted_residual_setting is None:\r\n            inverted_residual_setting = [\r\n                # t, c, n, s\r\n                [1, 64, 1, 2],  #[-1, 48, 256, 256]\r\n                [6, 48, 2, 2],  #[-1, 48, 128, 128]\r\n                [6, 48, 3, 2],  #[-1, 48, 64, 64]\r\n                [6, 64, 4, 2],  #[-1, 64, 32, 32]\r\n                [6, 96, 3, 2],  #[-1, 96, 16, 16]\r\n                [6, 160, 3, 1], #[-1, 160, 8, 8]\r\n                [6, 320, 1, 1], #[-1, 320, 8, 8]\r\n            ]\r\n\r\n        # building first layer\r\n        input_channel = _make_divisible(input_channel * width_mult, round_nearest)\r\n\r\n        self.first_conv = ConvBNReLU(3, input_channel, stride=2, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        inv_residual = []\r\n        # building inverted residual blocks\r\n        for t, c, n, s in inverted_residual_setting:\r\n            output_channel = _make_divisible(c * width_mult, round_nearest)\r\n            for i in range(n):\r\n                stride = s if i == 0 else 1\r\n                inv_residual.append(block(input_channel, output_channel, stride, expand_ratio=t, norm_layer=norm_layer, activation_layer=activation_layer))\r\n                input_channel = output_channel\r\n        # make it nn.Sequential\r\n        self.inv_residual = nn.Sequential(*inv_residual)\r\n\r\n        self.last_conv = ConvBNReLU(input_channel, embedding_size, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.deconv0 = DeConv(embedding_size, _make_divisible(inverted_residual_setting[-3][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deconv1 = DeConv(256, _make_divisible(inverted_residual_setting[-4][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deconv2 = DeConv(256, _make_divisible(inverted_residual_setting[-5][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.final_layer = nn.Conv2d(\r\n            in_channels=256,\r\n            out_channels= joint_num * 32,\r\n            kernel_size=1,\r\n            stride=1,\r\n            padding=0\r\n        )\r\n\r\n    def forward(self, x):\r\n        x = self.first_conv(x)\r\n        x = self.inv_residual[0:6](x)\r\n        x2 = x\r\n        x = self.inv_residual[6:10](x)\r\n        x1 = x\r\n        x = self.inv_residual[10:13](x)\r\n        x0 = x\r\n        x = self.inv_residual[13:16](x)\r\n        x = self.inv_residual[16:](x)\r\n        z = self.last_conv(x)\r\n        z = torch.cat([x0, z], dim=1)\r\n        z = self.deconv0(z)\r\n        z = torch.cat([x1, z], dim=1)\r\n        z = self.deconv1(z)\r\n        z = torch.cat([x2, z], dim=1)\r\n        z = self.deconv2(z)\r\n        z = self.final_layer(z)\r\n        return z\r\n\r\n    def init_weights(self):\r\n        for i in [self.deconv0, self.deconv1, self.deconv2]:\r\n            for name, m in i.named_modules():\r\n                if isinstance(m, nn.ConvTranspose2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                elif isinstance(m, nn.BatchNorm2d):\r\n                    nn.init.constant_(m.weight, 1)\r\n                    nn.init.constant_(m.bias, 0)\r\n        for j in [self.first_conv, 
self.inv_residual, self.last_conv, self.final_layer]:\r\n            for m in j.modules():\r\n                if isinstance(m, nn.Conv2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                    if hasattr(m, 'bias'):\r\n                        if m.bias is not None:\r\n                            nn.init.constant_(m.bias, 0)\r\n\r\nif __name__ == \"__main__\":\r\n    LpNetSkiConcat((256, 256), 18).init_weights()\r\n    model = LpNetSkiConcat((256, 256), 18)\r\n    test_data = torch.rand(1, 3, 256, 256)\r\n    test_outputs = model(test_data)\r\n    print(test_outputs.size())\r\n    summary(model, (3, 256, 256))\r\n"
  },
  {
    "path": "common/backbone/lpnet_wo_concat.py",
    "content": "import torch.nn as nn\r\nimport torch\r\nfrom torchsummary import summary\r\n\r\ndef _make_divisible(v, divisor, min_value=None):\r\n    \"\"\"\r\n    This function is taken from the original tf repo. It ensures that all layers have a channel number that is divisible by 8\r\n    It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py\r\n    :param v:\r\n    :param divisor:\r\n    :param min_value:\r\n    :return:\r\n    \"\"\"\r\n    if min_value is None:\r\n        min_value = divisor\r\n    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)\r\n    # Make sure that round down does not go down by more than 10%.\r\n    if new_v < 0.9 * v:\r\n        new_v += divisor\r\n    return new_v\r\n\r\nclass DeConv(nn.Sequential):\r\n    def __init__(self, in_ch, mid_ch, out_ch, norm_layer=None, activation_layer=None):\r\n        super(DeConv, self).__init__(\r\n            nn.Conv2d(in_ch, mid_ch, kernel_size=1),\r\n            norm_layer(mid_ch),\r\n            activation_layer(mid_ch),\r\n            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),\r\n            norm_layer(out_ch),\r\n            activation_layer(out_ch),\r\n            nn.UpsamplingBilinear2d(scale_factor=2)\r\n        )\r\n\r\nclass ConvBNReLU(nn.Sequential):\r\n    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, norm_layer=None, activation_layer=None):\r\n        padding = (kernel_size - 1) // 2\r\n        super(ConvBNReLU, self).__init__(\r\n            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),\r\n            norm_layer(out_planes),\r\n            activation_layer(out_planes)\r\n        )\r\n\r\nclass InvertedResidual(nn.Module):\r\n    def __init__(self, inp, oup, stride, expand_ratio, norm_layer=None, activation_layer=None):\r\n        super(InvertedResidual, self).__init__()\r\n        self.stride = stride\r\n        assert stride in [1, 2]\r\n\r\n        hidden_dim = int(round(inp * expand_ratio))\r\n        self.use_res_connect = self.stride == 1 and inp == oup\r\n\r\n        layers = []\r\n        if expand_ratio != 1:\r\n            # pw\r\n            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer))\r\n        layers.extend([\r\n            # dw\r\n            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim, norm_layer=norm_layer, activation_layer=activation_layer),\r\n            # pw-linear\r\n            nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\r\n            norm_layer(oup),\r\n        ])\r\n        self.conv = nn.Sequential(*layers)\r\n\r\n    def forward(self, x):\r\n        if self.use_res_connect:\r\n            return x + self.conv(x)\r\n        else:\r\n            return self.conv(x)\r\n\r\nclass LpNetWoConcat(nn.Module):\r\n    def __init__(self,\r\n                 input_size,\r\n                 joint_num,\r\n                 input_channel = 48,\r\n                 embedding_size = 2048,\r\n                 width_mult=1.0,\r\n                 round_nearest=8,\r\n                 block=None,\r\n                 norm_layer=None,\r\n                 activation_layer=None,\r\n                 inverted_residual_setting=None):\r\n\r\n        super(LpNetWoConcat, self).__init__()\r\n\r\n        assert input_size[1] in [256]\r\n\r\n        if block is None:\r\n            block = InvertedResidual\r\n        if norm_layer is None:\r\n            
norm_layer = nn.BatchNorm2d\r\n        if activation_layer is None:\r\n            activation_layer = nn.PReLU # PReLU does not have inplace True\r\n        if inverted_residual_setting is None:\r\n            inverted_residual_setting = [\r\n                # t, c, n, s\r\n                [1, 64, 1, 1],  #[-1, 48, 256, 256]\r\n                [6, 48, 2, 2],  #[-1, 48, 128, 128]\r\n                [6, 48, 3, 2],  #[-1, 48, 64, 64]\r\n                [6, 64, 4, 2],  #[-1, 64, 32, 32]\r\n                [6, 96, 3, 2],  #[-1, 96, 16, 16]\r\n                [6, 160, 3, 2], #[-1, 160, 8, 8]\r\n                [6, 320, 1, 1], #[-1, 320, 8, 8]\r\n            ]\r\n\r\n        # building first layer\r\n        input_channel = _make_divisible(input_channel * width_mult, round_nearest)\r\n        self.first_conv = ConvBNReLU(3, input_channel, stride=1, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        inv_residual = []\r\n        # building inverted residual blocks\r\n        for t, c, n, s in inverted_residual_setting:\r\n            output_channel = _make_divisible(c * width_mult, round_nearest)\r\n            for i in range(n):\r\n                stride = s if i == 0 else 1\r\n                inv_residual.append(block(input_channel, output_channel, stride, expand_ratio=t, norm_layer=norm_layer, activation_layer=activation_layer))\r\n                input_channel = output_channel\r\n        # make it nn.Sequential\r\n        self.inv_residual = nn.Sequential(*inv_residual)\r\n\r\n        self.last_conv = ConvBNReLU(input_channel, embedding_size, kernel_size=1, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.deconv0 = DeConv(embedding_size, _make_divisible(inverted_residual_setting[-2][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deconv1 = DeConv(256, _make_divisible(inverted_residual_setting[-3][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n        self.deconv2 = DeConv(256, _make_divisible(inverted_residual_setting[-4][-3] * width_mult, round_nearest), 256, norm_layer=norm_layer, activation_layer=activation_layer)\r\n\r\n        self.final_layer = nn.Conv2d(\r\n            in_channels=256,\r\n            out_channels= joint_num * 64,\r\n            kernel_size=1,\r\n            stride=1,\r\n            padding=0\r\n        )\r\n\r\n    def forward(self, x):\r\n        x = self.first_conv(x)\r\n        x = self.inv_residual(x)\r\n        x = self.last_conv(x)\r\n        x = self.deconv0(x)\r\n        x = self.deconv1(x)\r\n        x = self.deconv2(x)\r\n        x = self.final_layer(x)\r\n        return x\r\n\r\n    def init_weights(self):\r\n        for i in [self.deconv0, self.deconv1, self.deconv2]:\r\n            for name, m in i.named_modules():\r\n                if isinstance(m, nn.ConvTranspose2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                elif isinstance(m, nn.BatchNorm2d):\r\n                    nn.init.constant_(m.weight, 1)\r\n                    nn.init.constant_(m.bias, 0)\r\n        for j in [self.first_conv, self.inv_residual, self.last_conv, self.final_layer]:\r\n            for m in j.modules():\r\n                if isinstance(m, nn.Conv2d):\r\n                    nn.init.normal_(m.weight, std=0.001)\r\n                    if hasattr(m, 'bias'):\r\n                        if m.bias is not None:\r\n                            nn.init.constant_(m.bias, 0)\r\n\r\nif __name__ 
== \"__main__\":\r\n    model = LpNetWoConcat((256, 256), 18)\r\n    test_data = torch.rand(1, 3, 256, 256)\r\n    test_outputs = model(test_data)\r\n    summary(model, (3, 256, 256))"
  },
  {
    "path": "common/base.py",
    "content": "import os\nimport os.path as osp\nimport math\nimport time\nimport glob\nimport abc\nfrom torch.utils.data import DataLoader\nimport torch.optim\nimport torchvision.transforms as transforms\nfrom timer import Timer\nfrom logger import colorlogger\nfrom torch.nn.parallel.data_parallel import DataParallel\nfrom config import cfg\nfrom model import get_pose_net\nfrom dataset import DatasetLoader\nfrom multiple_datasets import MultipleDatasets\n\n# dynamic dataset import\nfor i in range(len(cfg.trainset_3d)):\n    exec('from ' + cfg.trainset_3d[i] + ' import ' + cfg.trainset_3d[i])\nfor i in range(len(cfg.trainset_2d)):\n    exec('from ' + cfg.trainset_2d[i] + ' import ' + cfg.trainset_2d[i])\nexec('from ' + cfg.testset + ' import ' + cfg.testset)\n\nclass Base(object):\n    __metaclass__ = abc.ABCMeta\n\n    def __init__(self, log_name='logs.txt'):\n        \n        self.cur_epoch = 0\n\n        # timer\n        self.tot_timer = Timer()\n        self.gpu_timer = Timer()\n        self.read_timer = Timer()\n\n        # logger\n        self.logger = colorlogger(cfg.log_dir, log_name=log_name)\n\n    @abc.abstractmethod\n    def _make_batch_generator(self):\n        return\n\n    @abc.abstractmethod\n    def _make_model(self):\n        return\n\n    def save_model(self, state, epoch):\n        file_path = osp.join(cfg.model_dir,'snapshot_{}.pth.tar'.format(str(epoch)))\n        torch.save(state, file_path)\n        self.logger.info(\"Write snapshot into {}\".format(file_path))\n\n    def load_model(self, model, optimizer):\n        model_file_list = glob.glob(osp.join(cfg.model_dir,'*.pth.tar'))\n        cur_epoch = max([int(file_name[file_name.find('snapshot_') + 9 : file_name.find('.pth.tar')]) for file_name in model_file_list])\n        ckpt = torch.load(osp.join(cfg.model_dir, 'snapshot_' + str(cur_epoch) + '.pth.tar')) \n        start_epoch = ckpt['epoch'] + 1\n        model.load_state_dict(ckpt['network'])\n        optimizer.load_state_dict(ckpt['optimizer'])\n\n        return start_epoch, model, optimizer\n\nclass Trainer(Base):\n    \n    def __init__(self, cfg):\n        super(Trainer, self).__init__(log_name = 'train_logs.txt')\n        self.backbone = cfg.backbone\n\n    def get_optimizer(self, model):\n        \n        optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr)\n        return optimizer\n\n    def set_lr(self, epoch):\n        for e in cfg.lr_dec_epoch:\n            if epoch < e:\n                break\n        if epoch < cfg.lr_dec_epoch[-1]:\n            idx = cfg.lr_dec_epoch.index(e)\n            for g in self.optimizer.param_groups:\n                g['lr'] = cfg.lr / (cfg.lr_dec_factor ** idx)\n        else:\n            for g in self.optimizer.param_groups:\n                g['lr'] = cfg.lr / (cfg.lr_dec_factor ** len(cfg.lr_dec_epoch))\n\n    def get_lr(self):\n        for g in self.optimizer.param_groups:\n            cur_lr = g['lr']\n\n        return cur_lr\n\n    def _make_batch_generator(self):\n        # data load and construct batch generator\n        self.logger.info(\"Creating dataset...\")\n        trainset3d_loader = []\n        for i in range(len(cfg.trainset_3d)):\n            if i > 0:\n                ref_joints_name = trainset3d_loader[0].joints_name\n            else:\n                ref_joints_name = None\n            trainset3d_loader.append(DatasetLoader(eval(cfg.trainset_3d[i])(\"train\"), ref_joints_name, True, transforms.Compose([\\\n                                                                                       
                 transforms.ToTensor(),\n                                                                                                        transforms.Normalize(mean=cfg.pixel_mean, std=cfg.pixel_std)]\\\n                                                                                                        )))\n        ref_joints_name = trainset3d_loader[0].joints_name\n        trainset2d_loader = []\n        for i in range(len(cfg.trainset_2d)):\n            trainset2d_loader.append(DatasetLoader(eval(cfg.trainset_2d[i])(\"train\"), ref_joints_name, True, transforms.Compose([\\\n                                                                                                        transforms.ToTensor(),\n                                                                                                        transforms.Normalize(mean=cfg.pixel_mean, std=cfg.pixel_std)]\\\n                                                                                                        )))\n\n        self.joint_num = trainset3d_loader[0].joint_num\n\n        trainset3d_loader = MultipleDatasets(trainset3d_loader, make_same_len=False)\n        if trainset2d_loader != []:\n            trainset2d_loader = MultipleDatasets(trainset2d_loader, make_same_len=False)\n            trainset_loader = MultipleDatasets([trainset3d_loader, trainset2d_loader], make_same_len=True)\n        else:\n            trainset_loader = MultipleDatasets([trainset3d_loader, ], make_same_len=True)\n\n        self.itr_per_epoch = math.ceil(len(trainset_loader) / cfg.num_gpus / cfg.batch_size)\n        self.batch_generator = DataLoader(dataset=trainset_loader, batch_size=cfg.num_gpus*cfg.batch_size, shuffle=True, num_workers=cfg.num_thread, pin_memory=True)\n\n    def _make_model(self):\n        # prepare network\n        self.logger.info(\"Creating graph and optimizer...\")\n        model = get_pose_net(self.backbone, True, self.joint_num)\n        if torch.cuda.is_available():\n            model = DataParallel(model).cuda()\n        optimizer = self.get_optimizer(model)\n        if cfg.continue_train:\n            start_epoch, model, optimizer = self.load_model(model, optimizer)\n        else:\n            start_epoch = 0\n        model.train()\n\n        self.start_epoch = start_epoch\n        self.model = model\n        self.optimizer = optimizer\n\nclass Tester(Base):\n    \n    def __init__(self, backbone):\n        self.backbone = backbone\n        super(Tester, self).__init__(log_name = 'test_logs.txt')\n\n    def _make_batch_generator(self):\n        # data load and construct batch generator\n        # self.logger.info(\"Creating dataset...\")\n        testset = eval(cfg.testset)(\"test\")\n        testset_loader = DatasetLoader(testset, None, False, transforms.Compose([\\\n                                                                                                        transforms.ToTensor(),\n                                                                                                        transforms.Normalize(mean=cfg.pixel_mean, std=cfg.pixel_std)]\\\n                                                                                                        ))\n        batch_generator = DataLoader(dataset=testset_loader, batch_size=cfg.num_gpus*cfg.test_batch_size, shuffle=False, num_workers=cfg.num_thread, pin_memory=True)\n        \n        self.testset = testset\n        self.joint_num = testset_loader.joint_num\n        self.skeleton = testset_loader.skeleton\n        self.flip_pairs = 
testset.flip_pairs\n        self.batch_generator = batch_generator\n    \n    def _make_model(self, test_epoch):\n        self.test_epoch = test_epoch\n        model_path = os.path.join(cfg.model_dir, 'snapshot_%d.pth.tar' % self.test_epoch)\n        assert os.path.exists(model_path), 'Cannot find model at ' + model_path\n        # self.logger.info('Load checkpoint from {}'.format(model_path))\n        \n        # prepare network\n        # self.logger.info(\"Creating graph...\")\n        model = get_pose_net(self.backbone, False, self.joint_num)\n        model = DataParallel(model).cuda()\n        ckpt = torch.load(model_path)\n        model.load_state_dict(ckpt['network'])\n        model.eval()\n\n        self.model = model\n\n    def _evaluate(self, preds, result_save_path):\n        eval_summary = self.testset.evaluate(preds, result_save_path)\n        self.logger.info('{}'.format(eval_summary))\n\nclass Transformer(Base):\n\n    def __init__(self, backbone, jointnum, modelpath):\n        super(Transformer, self).__init__(log_name='transformer_logs.txt')\n        self.backbone = backbone\n        self.jointnum = jointnum\n        self.modelpath = modelpath\n\n    def _make_model(self):\n        # prepare network\n        self.logger.info(\"Creating graph and optimizer...\")\n        model = get_pose_net(self.backbone, False, self.jointnum)\n        model = DataParallel(model).cuda()\n        model.load_state_dict(torch.load(self.modelpath)['network'])\n        single_pytorch_model = model.module\n        single_pytorch_model.eval()\n        self.model = single_pytorch_model\n"
  },
  {
    "path": "common/logger.py",
    "content": "import logging\nimport os\n\nOK = '\\033[92m'\nWARNING = '\\033[93m'\nFAIL = '\\033[91m'\nEND = '\\033[0m'\n\nPINK = '\\033[95m'\nBLUE = '\\033[94m'\nGREEN = OK\nRED = FAIL\nWHITE = END\nYELLOW = WARNING\n\nclass colorlogger():\n    def __init__(self, log_dir, log_name='train_logs.txt'):\n        # set log\n        self._logger = logging.getLogger(log_name)\n        self._logger.setLevel(logging.INFO)\n        log_file = os.path.join(log_dir, log_name)\n        if not os.path.exists(log_dir):\n            os.makedirs(log_dir)\n        file_log = logging.FileHandler(log_file, mode='a')\n        file_log.setLevel(logging.INFO)\n        console_log = logging.StreamHandler()\n        console_log.setLevel(logging.INFO)\n        formatter = logging.Formatter(\n            \"{}%(asctime)s{} %(message)s\".format(GREEN, END),\n            \"%m-%d %H:%M:%S\")\n        file_log.setFormatter(formatter)\n        console_log.setFormatter(formatter)\n        self._logger.addHandler(file_log)\n        self._logger.addHandler(console_log)\n\n    def debug(self, msg):\n        self._logger.debug(str(msg))\n\n    def info(self, msg):\n        self._logger.info(str(msg))\n\n    def warning(self, msg):\n        self._logger.warning(WARNING + 'WRN: ' + str(msg) + END)\n\n    def critical(self, msg):\n        self._logger.critical(RED + 'CRI: ' + str(msg) + END)\n\n    def error(self, msg):\n        self._logger.error(RED + 'ERR: ' + str(msg) + END)\n\n"
  },
  {
    "path": "common/timer.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport time\n\nclass Timer(object):\n    \"\"\"A simple timer.\"\"\"\n    def __init__(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n        self.warm_up = 0\n\n    def tic(self):\n        # using time.time instead of time.clock because time time.clock\n        # does not normalize for multithreading\n        self.start_time = time.time()\n\n    def toc(self, average=True):\n        self.diff = time.time() - self.start_time\n        if self.warm_up < 10:\n            self.warm_up += 1\n            return self.diff\n        else:\n            self.total_time += self.diff\n            self.calls += 1\n            self.average_time = self.total_time / self.calls\n\n        if average:\n            return self.average_time\n        else:\n            return self.diff\n"
  },
  {
    "path": "common/utils/__init__.py",
    "content": ""
  },
  {
    "path": "common/utils/dir_utils.py",
    "content": "import os\nimport sys\n\ndef make_folder(folder_name):\n    if not os.path.exists(folder_name):\n        os.makedirs(folder_name)\n\ndef add_pypath(path):\n    if path not in sys.path:\n        sys.path.insert(0, path)\n\n"
  },
  {
    "path": "common/utils/pose_utils.py",
    "content": "import torch\nimport numpy as np\nfrom config import cfg\nimport copy\n\ndef cam2pixel(cam_coord, f, c):\n    x = cam_coord[:, 0] / (cam_coord[:, 2] + 1e-8) * f[0] + c[0]\n    y = cam_coord[:, 1] / (cam_coord[:, 2] + 1e-8) * f[1] + c[1]\n    z = cam_coord[:, 2]\n    img_coord = np.concatenate((x[:,None], y[:,None], z[:,None]),1)\n    return img_coord\n\ndef pixel2cam(pixel_coord, f, c):\n    x = (pixel_coord[:, 0] - c[0]) / f[0] * pixel_coord[:, 2]\n    y = (pixel_coord[:, 1] - c[1]) / f[1] * pixel_coord[:, 2]\n    z = pixel_coord[:, 2]\n    cam_coord = np.concatenate((x[:,None], y[:,None], z[:,None]),1)\n    return cam_coord\n\ndef world2cam(world_coord, R, t):\n    cam_coord = np.dot(R, world_coord.transpose(1,0)).transpose(1,0) + t.reshape(1,3)\n    return cam_coord\n\ndef rigid_transform_3D(A, B):\n    centroid_A = np.mean(A, axis = 0)\n    centroid_B = np.mean(B, axis = 0)\n    H = np.dot(np.transpose(A - centroid_A), B - centroid_B)\n    U, s, V = np.linalg.svd(H)\n    R = np.dot(np.transpose(V), np.transpose(U))\n    if np.linalg.det(R) < 0:\n        V[2] = -V[2]\n        R = np.dot(np.transpose(V), np.transpose(U))\n    t = -np.dot(R, np.transpose(centroid_A)) + np.transpose(centroid_B)\n    return R, t\n\ndef rigid_align(A, B):\n    R, t = rigid_transform_3D(A, B)\n    A2 = np.transpose(np.dot(R, np.transpose(A))) + t\n    return A2\n\ndef get_bbox(joint_img):\n    # bbox extract from keypoint coordinates\n    bbox = np.zeros((4))\n    xmin = np.min(joint_img[:,0])\n    ymin = np.min(joint_img[:,1])\n    xmax = np.max(joint_img[:,0])\n    ymax = np.max(joint_img[:,1])\n    width = xmax - xmin - 1\n    height = ymax - ymin - 1\n    \n    bbox[0] = (xmin + xmax)/2. - width/2*1.2\n    bbox[1] = (ymin + ymax)/2. - height/2*1.2\n    bbox[2] = width*1.2\n    bbox[3] = height*1.2\n\n    return bbox\n\ndef process_bbox(bbox, width, height):\n    # sanitize bboxes\n    x, y, w, h = bbox\n    x1 = np.max((0, x))\n    y1 = np.max((0, y))\n    x2 = np.min((width - 1, x1 + np.max((0, w - 1))))\n    y2 = np.min((height - 1, y1 + np.max((0, h - 1))))\n    if w*h > 0 and x2 >= x1 and y2 >= y1:\n        bbox = np.array([x1, y1, x2-x1, y2-y1])\n    else:\n        return None\n\n    # aspect ratio preserving bbox\n    w = bbox[2]\n    h = bbox[3]\n    c_x = bbox[0] + w/2.\n    c_y = bbox[1] + h/2.\n    aspect_ratio = cfg.input_shape[1]/cfg.input_shape[0]\n    if w > aspect_ratio * h:\n        h = w / aspect_ratio\n    elif w < aspect_ratio * h:\n        w = h * aspect_ratio\n    bbox[2] = w*1.25\n    bbox[3] = h*1.25\n    bbox[0] = c_x - bbox[2]/2.\n    bbox[1] = c_y - bbox[3]/2.\n    return bbox\n\ndef transform_joint_to_other_db(src_joint, src_name, dst_name):\n    src_joint_num = len(src_name)\n    dst_joint_num = len(dst_name)\n\n    new_joint = np.zeros(((dst_joint_num,) + src_joint.shape[1:]))\n\n    for src_idx in range(len(src_name)):\n        name = src_name[src_idx]\n        if name in dst_name:\n            dst_idx = dst_name.index(name)\n            new_joint[dst_idx] = src_joint[src_idx]\n\n    return new_joint\n\n\ndef fliplr_joints(_joints, width, matched_parts):\n    \"\"\"\n    flip coords\n    joints: numpy array, nJoints * dim, dim == 2 [x, y] or dim == 3  [x, y, z]\n    width: image width\n    matched_parts: list of pairs\n    \"\"\"\n    joints = _joints.copy()\n    # Flip horizontal\n    joints[:, 0] = width - joints[:, 0] - 1\n\n    # Change left-right parts\n    for pair in matched_parts:\n        joints[pair[0], :], joints[pair[1], :] = joints[pair[1], :], 
joints[pair[0], :].copy()\n\n    return joints\n\ndef multi_meshgrid(*args):\n    \"\"\"\n    Creates a meshgrid from possibly many\n    elements (instead of only 2).\n    Returns a nd tensor with as many dimensions\n    as there are arguments\n    \"\"\"\n    args = list(args)\n    template = [1 for _ in args]\n    for i in range(len(args)):\n        n = args[i].shape[0]\n        template_copy = template.copy()\n        template_copy[i] = n\n        args[i] = args[i].view(*template_copy)\n        # there will be some broadcast magic going on\n    return tuple(args)\n\n\ndef flip(tensor, dims):\n    if not isinstance(dims, (tuple, list)):\n        dims = [dims]\n    indices = [torch.arange(tensor.shape[dim] - 1, -1, -1,\n                            dtype=torch.int64) for dim in dims]\n    multi_indices = multi_meshgrid(*indices)\n    final_indices = [slice(i) for i in tensor.shape]\n    for i, dim in enumerate(dims):\n        final_indices[dim] = multi_indices[i]\n    flipped = tensor[final_indices]\n    assert flipped.device == tensor.device\n    assert flipped.requires_grad == tensor.requires_grad\n    return flipped\n\n"
  },
  {
    "path": "common/utils/vis.py",
    "content": "import os\nimport cv2\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom config import cfg\n\ndef vis_keypoints(img, kps, kps_lines, kp_thresh=0.4, alpha=1):\n\n    # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.\n    cmap = plt.get_cmap('rainbow')\n    colors = [cmap(i) for i in np.linspace(0, 1, len(kps_lines) + 2)]\n    colors = [(c[2] * 255, c[1] * 255, c[0] * 255) for c in colors]\n\n    # Perform the drawing on a copy of the image, to allow for blending.\n    kp_mask = np.copy(img)\n\n    # Draw the keypoints.\n    for l in range(len(kps_lines)):\n        i1 = kps_lines[l][0]\n        i2 = kps_lines[l][1]\n        p1 = kps[0, i1].astype(np.int32), kps[1, i1].astype(np.int32)\n        p2 = kps[0, i2].astype(np.int32), kps[1, i2].astype(np.int32)\n        if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n            cv2.line(\n                kp_mask, p1, p2,\n                color=colors[l], thickness=2, lineType=cv2.LINE_AA)\n        if kps[2, i1] > kp_thresh:\n            cv2.circle(\n                kp_mask, p1,\n                radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n        if kps[2, i2] > kp_thresh:\n            cv2.circle(\n                kp_mask, p2,\n                radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n\n    # Blend the keypoints.\n    return cv2.addWeighted(img, 1.0 - alpha, kp_mask, alpha, 0)\n\ndef vis_3d_skeleton(kpt_3d, kpt_3d_vis, kps_lines, filename=None):\n\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n\n    # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.\n    cmap = plt.get_cmap('rainbow')\n    colors = [cmap(i) for i in np.linspace(0, 1, len(kps_lines) + 2)]\n    colors = [np.array((c[2], c[1], c[0])) for c in colors]\n\n    for l in range(len(kps_lines)):\n        i1 = kps_lines[l][0]\n        i2 = kps_lines[l][1]\n        x = np.array([kpt_3d[i1,0], kpt_3d[i2,0]])\n        y = np.array([kpt_3d[i1,1], kpt_3d[i2,1]])\n        z = np.array([kpt_3d[i1,2], kpt_3d[i2,2]])\n\n        if kpt_3d_vis[i1,0] > 0 and kpt_3d_vis[i2,0] > 0:\n            ax.plot(x, z, -y, c=colors[l], linewidth=2)\n        if kpt_3d_vis[i1,0] > 0:\n            ax.scatter(kpt_3d[i1,0], kpt_3d[i1,2], -kpt_3d[i1,1], c=colors[l], marker='o')\n        if kpt_3d_vis[i2,0] > 0:\n            ax.scatter(kpt_3d[i2,0], kpt_3d[i2,2], -kpt_3d[i2,1], c=colors[l], marker='o')\n\n    if filename is None:\n        ax.set_title('3D vis')\n    else:\n        ax.set_title(filename)\n\n    ax.set_xlabel('X Label')\n    ax.set_ylabel('Z Label')\n    ax.set_zlabel('Y Label')\n    ax.legend()\n    \n    plt.show()\n    cv2.waitKey(0)\n\ndef vis_3d_multiple_skeleton(kpt_3d, kpt_3d_vis, kps_lines, filename=None):\n\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n\n    # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.\n    cmap = plt.get_cmap('rainbow')\n    colors = [cmap(i) for i in np.linspace(0, 1, len(kps_lines) + 2)]\n    colors = [np.array((c[2], c[1], c[0])) for c in colors]\n\n    for l in range(len(kps_lines)):\n        i1 = kps_lines[l][0]\n        i2 = kps_lines[l][1]\n\n        person_num = kpt_3d.shape[0]\n        for n in range(person_num):\n            x = np.array([kpt_3d[n,i1,0], kpt_3d[n,i2,0]])\n            y = np.array([kpt_3d[n,i1,1], kpt_3d[n,i2,1]])\n            z = np.array([kpt_3d[n,i1,2], kpt_3d[n,i2,2]])\n\n            if kpt_3d_vis[n,i1,0] > 0 
and kpt_3d_vis[n,i2,0] > 0:\n                ax.plot(x, z, -y, c=colors[l], linewidth=2)\n            if kpt_3d_vis[n,i1,0] > 0:\n                ax.scatter(kpt_3d[n,i1,0], kpt_3d[n,i1,2], -kpt_3d[n,i1,1], c=colors[l], marker='o')\n            if kpt_3d_vis[n,i2,0] > 0:\n                ax.scatter(kpt_3d[n,i2,0], kpt_3d[n,i2,2], -kpt_3d[n,i2,1], c=colors[l], marker='o')\n\n    if filename is None:\n        ax.set_title('3D vis')\n    else:\n        ax.set_title(filename)\n\n    ax.set_xlabel('X Label')\n    ax.set_ylabel('Z Label')\n    ax.set_zlabel('Y Label')\n    ax.legend()\n    \n    plt.show()\n    cv2.waitKey(0)\n\n"
  },
  {
    "path": "data/Dummy/Dummy.py",
    "content": "import os\nimport os.path as osp\nfrom pycocotools.coco import COCO\nimport numpy as np\nfrom config import cfg\nfrom utils.pose_utils import world2cam, cam2pixel, pixel2cam, rigid_align, process_bbox\nimport cv2\nimport random\nimport json\nfrom utils.vis import vis_keypoints, vis_3d_skeleton\n\nclass Dummy:\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('data', 'Dummy', 'images')\n        self.annot_path = osp.join('data', 'Dummy', 'annotations')\n        self.human_bbox_root_dir = osp.join('data', 'Dummy', 'bbox_root', 'bbox_root_human36m_output.json')\n        self.joint_num = 18 # original:17, but manually added 'Thorax'\n        self.joints_name = ('Pelvis', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Torso', 'Neck', 'Nose', 'Head', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'Thorax')\n        self.flip_pairs = ( (1, 4), (2, 5), (3, 6), (14, 11), (15, 12), (16, 13) )\n        self.skeleton = ( (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15), (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6) )\n        self.joints_have_depth = True\n        self.eval_joint = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9,  10, 11, 12, 13, 14, 15, 16) # exclude Thorax\n\n        self.action_name = ['Directions', 'Discussion', 'Eating', 'Greeting', 'Phoning', 'Posing', 'Purchases', 'Sitting', 'SittingDown', 'Smoking', 'Photo', 'Waiting', 'Walking', 'WalkDog', 'WalkTogether']\n        self.root_idx = self.joints_name.index('Pelvis')\n        self.lshoulder_idx = self.joints_name.index('L_Shoulder')\n        self.rshoulder_idx = self.joints_name.index('R_Shoulder')\n        self.data = self.load_data()\n\n    def get_subsampling_ratio(self):\n        if self.data_split == 'train':\n            return 5\n        elif self.data_split == 'test':\n            return 64\n        else:\n            assert 0, print('Unknown subset')\n\n    def get_subject(self):\n        if self.data_split == 'train':\n            subject = [1]\n        elif self.data_split == 'test':\n            subject = [2]\n        else:\n            assert 0, print(\"Unknown subset\")\n\n        return subject\n    \n    def add_thorax(self, joint_coord):\n        thorax = (joint_coord[self.lshoulder_idx, :] + joint_coord[self.rshoulder_idx, :]) * 0.5\n        thorax = thorax.reshape((1, 3))\n        joint_coord = np.concatenate((joint_coord, thorax), axis=0)\n        return joint_coord\n\n    def load_data(self):\n        print('Load data of Dummy')\n\n        subject_list = self.get_subject()\n        sampling_ratio = self.get_subsampling_ratio()\n        \n        # aggregate annotations from each subject\n        db = COCO()\n        cameras = {}\n        joints = {}\n        for subject in subject_list:\n            # data load\n            with open(osp.join(self.annot_path, 'Dummy_subject' + str(subject) + '_data.json'),'r') as f:\n                annot = json.load(f)\n            if len(db.dataset) == 0:\n                for k,v in annot.items():\n                    db.dataset[k] = v\n            else:\n                for k,v in annot.items():\n                    db.dataset[k] += v\n            # camera load\n            with open(osp.join(self.annot_path, 'Dummy_subject' + str(subject) + '_camera.json'),'r') as f:\n                cameras[str(subject)] = json.load(f)\n            # joint coordinate load\n            with open(osp.join(self.annot_path, 'Dummy_subject' + str(subject) + 
'_joint_3d.json'),'r') as f:\n                joints[str(subject)] = json.load(f)\n        db.createIndex()\n       \n        if self.data_split == 'test' and not cfg.use_gt_info:\n            print(\"Get bounding box and root from \" + self.human_bbox_root_dir)\n            bbox_root_result = {}\n            with open(self.human_bbox_root_dir) as f:\n                annot = json.load(f)\n            for i in range(len(annot)):\n                bbox_root_result[str(annot[i]['image_id'])] = {'bbox': np.array(annot[i]['bbox']), 'root': np.array(annot[i]['root_cam'])}\n        else:\n            print(\"Get bounding box and root from groundtruth\")\n\n        data = []\n        for aid in db.anns.keys():\n            ann = db.anns[aid]\n            image_id = ann['image_id']\n            img = db.loadImgs(image_id)[0]\n            img_path = osp.join(self.img_dir, img['file_name'])\n            img_width, img_height = img['width'], img['height']\n           \n            # check subject and frame_idx\n            subject = img['subject']; frame_idx = img['frame_idx'];\n            if subject not in subject_list:\n                continue\n            if frame_idx % sampling_ratio != 0:\n                continue\n\n            # camera parameter\n            cam_idx = img['cam_idx']\n            cam_param = cameras[str(subject)][str(cam_idx)]\n            R,t,f,c = np.array(cam_param['R'], dtype=np.float32), np.array(cam_param['t'], dtype=np.float32), np.array(cam_param['f'], dtype=np.float32), np.array(cam_param['c'], dtype=np.float32)\n                \n            # project world coordinate to cam, image coordinate space\n            action_idx = img['action_idx']; subaction_idx = img['subaction_idx']; frame_idx = img['frame_idx'];\n            joint_world = np.array(joints[str(subject)][str(action_idx)][str(subaction_idx)][str(frame_idx)], dtype=np.float32)\n            joint_world = self.add_thorax(joint_world)\n            joint_cam = world2cam(joint_world, R, t)\n            joint_img = cam2pixel(joint_cam, f, c)\n            joint_img[:,2] = joint_img[:,2] - joint_cam[self.root_idx,2]\n            joint_vis = np.ones((self.joint_num,1))\n            \n            if self.data_split == 'test' and not cfg.use_gt_info:\n                bbox = bbox_root_result[str(image_id)]['bbox'] # bbox should be aspect ratio preserved-extended. 
It is done in RootNet.\n                root_cam = bbox_root_result[str(image_id)]['root']\n            else:\n                bbox = process_bbox(np.array(ann['bbox']), img_width, img_height)\n                if bbox is None: continue\n                root_cam = joint_cam[self.root_idx]\n               \n            data.append({\n                'img_path': img_path,\n                'img_id': image_id,\n                'bbox': bbox,\n                'joint_img': joint_img, # [org_img_x, org_img_y, depth - root_depth]\n                'joint_cam': joint_cam, # [X, Y, Z] in camera coordinate\n                'joint_vis': joint_vis,\n                'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n                'f': f,\n                'c': c})\n           \n        return data\n\n    def evaluate(self, preds, result_dir):\n        \n        print('Evaluation start...')\n        gts = self.data\n        assert len(gts) == len(preds)\n        sample_num = len(gts)\n        \n        pred_save = []\n        error = np.zeros((sample_num, self.joint_num-1)) # joint error\n        error_action = [ [] for _ in range(len(self.action_name)) ] # error for each sequence\n        for n in range(sample_num):\n            gt = gts[n]\n            image_id = gt['img_id']\n            f = gt['f']\n            c = gt['c']\n            bbox = gt['bbox']\n            gt_3d_root = gt['root_cam']\n            gt_3d_kpt = gt['joint_cam']\n            gt_vis = gt['joint_vis']\n            \n            # restore coordinates to original space\n            pred_2d_kpt = preds[n].copy()\n            pred_2d_kpt[:,0] = pred_2d_kpt[:,0] / cfg.output_shape[1] * bbox[2] + bbox[0]\n            pred_2d_kpt[:,1] = pred_2d_kpt[:,1] / cfg.output_shape[0] * bbox[3] + bbox[1]\n            pred_2d_kpt[:,2] = (pred_2d_kpt[:,2] / cfg.depth_dim * 2 - 1) * (cfg.bbox_3d_shape[0]/2) + gt_3d_root[2]\n\n            vis = False\n            if vis:\n                cvimg = cv2.imread(gt['img_path'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)\n                filename = str(random.randrange(1,500))\n                tmpimg = cvimg.copy().astype(np.uint8)\n                tmpkps = np.zeros((3,self.joint_num))\n                tmpkps[0,:], tmpkps[1,:] = pred_2d_kpt[:,0], pred_2d_kpt[:,1]\n                tmpkps[2,:] = 1\n                tmpimg = vis_keypoints(tmpimg, tmpkps, self.skeleton)\n                cv2.imwrite(filename + '_output.jpg', tmpimg)\n\n            # back project to camera coordinate system\n            pred_3d_kpt = pixel2cam(pred_2d_kpt, f, c)\n \n            # root joint alignment\n            pred_3d_kpt = pred_3d_kpt - pred_3d_kpt[self.root_idx]\n            gt_3d_kpt  = gt_3d_kpt - gt_3d_kpt[self.root_idx]\n\n            pred_3d_kpt = rigid_align(pred_3d_kpt, gt_3d_kpt)\n            \n            # exclude thorax\n            pred_3d_kpt = np.take(pred_3d_kpt, self.eval_joint, axis=0)\n            gt_3d_kpt = np.take(gt_3d_kpt, self.eval_joint, axis=0)\n           \n            # error calculate\n            error[n] = np.sqrt(np.sum((pred_3d_kpt - gt_3d_kpt)**2,1))\n            img_name = gt['img_path']\n            action_idx = int(img_name[img_name.find('act')+4:img_name.find('act')+6]) - 2\n            error_action[action_idx].append(error[n].copy())\n\n            # prediction save\n            pred_save.append({'image_id': image_id, 'joint_cam': pred_3d_kpt.tolist(), 'bbox': bbox.tolist(), 'root_cam': gt_3d_root.tolist()}) # joint_cam is root-relative coordinate\n\n        # total error\n      
  tot_err = np.mean(error)\n        metric = 'PA MPJPE'\n        eval_summary = 'Protocol 1' + ' error (' + metric + ') >> tot: %.2f\\n' % (tot_err)\n\n        # error for each action\n        for i in range(len(error_action)):\n            err = np.mean(np.array(error_action[i]))\n            eval_summary += (self.action_name[i] + ': %.2f ' % err)\n\n        print(eval_summary)\n\n        # prediction save\n        output_path = osp.join(result_dir, 'bbox_root_pose_dummy_output.json')\n        with open(output_path, 'w') as f:\n            json.dump(pred_save, f)\n        print(\"Test result is saved at \" + output_path)\n\n        return eval_summary\n\n"
  },
  {
    "path": "data/Dummy/annotations/Dummy_subject1_camera.json",
    "content": "{\"1\": {\"R\": [[-0.9059013006181885, 0.4217144115102914, 0.038727105014486805], [0.044493184429779696, 0.1857199061874203, -0.9815948619389944], [-0.4211450938543295, -0.8875049698848251, -0.1870073216538954]], \"t\": [-234.7208032216618, 464.34018262882194, 5536.652631113797], \"f\": [1145.04940458804, 1143.78109572365], \"c\": [512.541504956548, 515.4514869776]}, \"2\": {\"R\": [[0.9216646531492915, 0.3879848687925067, -0.0014172943441045224], [0.07721054863099915, -0.18699239961454955, -0.979322405373477], [-0.3802272982247548, 0.9024974149959955, -0.20230080971229314]], \"t\": [-11.934348472090557, 449.4165893644565, 5541.113551868937], \"f\": [1149.67569986785, 1147.59161666764], \"c\": [508.848621645943, 508.064917088557]}, \"3\": {\"R\": [[-0.9063540572469627, -0.42053101768163204, -0.04093880896680188], [-0.0603212197838846, 0.22468715090881142, -0.9725620980997899], [0.4181909532208387, -0.8790161246439863, -0.2290130547809762]], \"t\": [781.127357651581, 235.3131620173424, 5576.37044019807], \"f\": [1149.14071676148, 1148.7989685676], \"c\": [519.815837182153, 501.402658888552]}, \"4\": {\"R\": [[0.91754082476548, -0.39226322025776267, 0.06517975852741943], [-0.04531905395586976, -0.26600517028098103, -0.9629057236990188], [0.395050652748768, 0.8805514269006645, -0.2618476013752581]], \"t\": [-155.13650339749012, 422.16256306729633, 4435.416222660868], \"f\": [1145.51133842318, 1144.77392807652], \"c\": [514.968197319863, 501.882018537695]}}"
  },
  {
    "path": "data/Dummy/annotations/Dummy_subject1_data.json",
    "content": "{\"images\": [{\"id\": 1877420, \"file_name\": \"s_11_act_02_subact_01_ca_01/s_11_act_02_subact_01_ca_01_000001.jpg\", \"width\": 1000, \"height\": 1002, \"subject\": 1, \"action_name\": \"Directions\", \"action_idx\": 2, \"subaction_idx\": 1, \"cam_idx\": 1, \"frame_idx\": 0}], \"annotations\": [{\"id\": 1877420, \"image_id\": 1877420, \"keypoints_vis\": [true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true], \"bbox\": [304.0201284041609, 222.305917169553, 328.1488619190915, 412.150330355609]}]}"
  },
  {
    "path": "data/Dummy/annotations/Dummy_subject1_joint_3d.json",
    "content": "{\"2\": {\"1\": {\"0\": [[-47.24769973754883, -81.04920196533203, 987.9080200195312], [-184.4625244140625, -69.55330657958984, 999.5223999023438], [-199.22152709960938, -72.29781341552734, 537.8258666992188], [-177.2645721435547, 44.52031326293945, 93.21685028076172], [89.96746063232422, -92.54512023925781, 976.2935791015625], [97.17977142333984, -81.16199493408203, 514.5499877929688], [82.85128784179688, 34.8104248046875, 69.40837097167969], [-52.695899963378906, -77.56897735595703, 1242.206298828125], [-49.09817886352539, -73.6445083618164, 1492.0970458984375], [-71.0900650024414, -139.2397003173828, 1579.0076904296875], [-71.68211364746094, -92.79254150390625, 1684.2078857421875], [116.02037811279297, -63.403587341308594, 1509.3262939453125], [396.226318359375, -72.48757934570312, 1469.46826171875], [633.7438354492188, -144.6726837158203, 1475.2344970703125], [-211.36859130859375, -37.4464111328125, 1487.2081298828125], [-487.9529724121094, -1.2391146421432495, 1438.4637451171875], [-727.43798828125, -60.458595275878906, 1466.75244140625]]}}}"
  },
  {
    "path": "data/Dummy/bbox_root/bbox_dummy_output.json",
    "content": "[{\"image_id\": 1877420, \"category_id\": 1, \"bbox\": [309.1705017089844, 252.84469604492188, 326.1686096191406, 368.1951599121094], \"score\": 0.9997870326042175}]"
  },
  {
    "path": "data/Human36M/Human36M.py",
    "content": "import os\nimport os.path as osp\nfrom pycocotools.coco import COCO\nimport numpy as np\nfrom config import cfg\nfrom utils.pose_utils import world2cam, cam2pixel, pixel2cam, rigid_align, process_bbox\nimport cv2\nimport random\nimport json\nfrom utils.vis import vis_keypoints, vis_3d_skeleton\n\nclass Human36M:\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('/', 'data', 'Human36M', 'images')\n        self.annot_path = osp.join('/', 'data', 'Human36M', 'annotations')\n        self.human_bbox_root_dir = osp.join('/', 'data', 'Human36M', 'bbox_root', 'bbox_root_human36m_output.json')\n        self.joint_num = 18 # original:17, but manually added 'Thorax'\n        self.joints_name = ('Pelvis', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Torso', 'Neck', 'Nose', 'Head', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'Thorax')\n        self.flip_pairs = ( (1, 4), (2, 5), (3, 6), (14, 11), (15, 12), (16, 13) )\n        self.skeleton = ( (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15), (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6) )\n        self.joints_have_depth = True\n        self.eval_joint = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9,  10, 11, 12, 13, 14, 15, 16) # exclude Thorax\n\n        self.action_name = ['Directions', 'Discussion', 'Eating', 'Greeting', 'Phoning', 'Posing', 'Purchases', 'Sitting', 'SittingDown', 'Smoking', 'Photo', 'Waiting', 'Walking', 'WalkDog', 'WalkTogether']\n        self.root_idx = self.joints_name.index('Pelvis')\n        self.lshoulder_idx = self.joints_name.index('L_Shoulder')\n        self.rshoulder_idx = self.joints_name.index('R_Shoulder')\n        self.protocol = 2\n        self.data = self.load_data()\n\n    def get_subsampling_ratio(self):\n        if self.data_split == 'train':\n            return 5\n        elif self.data_split == 'test':\n            return 64\n        else:\n            assert 0, print('Unknown subset')\n\n    def get_subject(self):\n        if self.data_split == 'train':\n            if self.protocol == 1:\n                subject = [1,5,6,7,8,9]\n            elif self.protocol == 2:\n                subject = [1,5,6,7,8]\n        elif self.data_split == 'test':\n            if self.protocol == 1:\n                subject = [11]\n            elif self.protocol == 2:\n                subject = [9,11]\n        else:\n            assert 0, print(\"Unknown subset\")\n\n        return subject\n    \n    def add_thorax(self, joint_coord):\n        thorax = (joint_coord[self.lshoulder_idx, :] + joint_coord[self.rshoulder_idx, :]) * 0.5\n        thorax = thorax.reshape((1, 3))\n        joint_coord = np.concatenate((joint_coord, thorax), axis=0)\n        return joint_coord\n\n    def load_data(self):\n        print('Load data of H36M Protocol ' + str(self.protocol))\n\n        subject_list = self.get_subject()\n        sampling_ratio = self.get_subsampling_ratio()\n        \n        # aggregate annotations from each subject\n        db = COCO()\n        cameras = {}\n        joints = {}\n        for subject in subject_list:\n            # data load\n            with open(osp.join(self.annot_path, 'Human36M_subject' + str(subject) + '_data.json'),'r') as f:\n                annot = json.load(f)\n            if len(db.dataset) == 0:\n                for k,v in annot.items():\n                    db.dataset[k] = v\n            else:\n                for k,v in annot.items():\n                    
db.dataset[k] += v\n            # camera load\n            with open(osp.join(self.annot_path, 'Human36M_subject' + str(subject) + '_camera.json'),'r') as f:\n                cameras[str(subject)] = json.load(f)\n            # joint coordinate load\n            with open(osp.join(self.annot_path, 'Human36M_subject' + str(subject) + '_joint_3d.json'),'r') as f:\n                joints[str(subject)] = json.load(f)\n        db.createIndex()\n       \n        if self.data_split == 'test' and not cfg.use_gt_info:\n            print(\"Get bounding box and root from \" + self.human_bbox_root_dir)\n            bbox_root_result = {}\n            with open(self.human_bbox_root_dir) as f:\n                annot = json.load(f)\n            for i in range(len(annot)):\n                bbox_root_result[str(annot[i]['image_id'])] = {'bbox': np.array(annot[i]['bbox']), 'root': np.array(annot[i]['root_cam'])}\n        else:\n            print(\"Get bounding box and root from groundtruth\")\n\n        data = []\n        for aid in db.anns.keys():\n            ann = db.anns[aid]\n            image_id = ann['image_id']\n            img = db.loadImgs(image_id)[0]\n            img_path = osp.join(self.img_dir, img['file_name'])\n            img_width, img_height = img['width'], img['height']\n           \n            # check subject and frame_idx\n            subject = img['subject']; frame_idx = img['frame_idx'];\n            if subject not in subject_list:\n                continue\n            if frame_idx % sampling_ratio != 0:\n                continue\n\n            # camera parameter\n            cam_idx = img['cam_idx']\n            cam_param = cameras[str(subject)][str(cam_idx)]\n            R,t,f,c = np.array(cam_param['R'], dtype=np.float32), np.array(cam_param['t'], dtype=np.float32), np.array(cam_param['f'], dtype=np.float32), np.array(cam_param['c'], dtype=np.float32)\n                \n            # project world coordinate to cam, image coordinate space\n            action_idx = img['action_idx']; subaction_idx = img['subaction_idx']; frame_idx = img['frame_idx'];\n            joint_world = np.array(joints[str(subject)][str(action_idx)][str(subaction_idx)][str(frame_idx)], dtype=np.float32)\n            joint_world = self.add_thorax(joint_world)\n            joint_cam = world2cam(joint_world, R, t)\n            joint_img = cam2pixel(joint_cam, f, c)\n            joint_img[:,2] = joint_img[:,2] - joint_cam[self.root_idx,2]\n            joint_vis = np.ones((self.joint_num,1))\n            \n            if self.data_split == 'test' and not cfg.use_gt_info:\n                bbox = bbox_root_result[str(image_id)]['bbox'] # bbox should be aspect ratio preserved-extended. 
It is done in RootNet.\n                root_cam = bbox_root_result[str(image_id)]['root']\n            else:\n                bbox = process_bbox(np.array(ann['bbox']), img_width, img_height)\n                if bbox is None: continue\n                root_cam = joint_cam[self.root_idx]\n               \n            data.append({\n                'img_path': img_path,\n                'img_id': image_id,\n                'bbox': bbox,\n                'joint_img': joint_img, # [org_img_x, org_img_y, depth - root_depth]\n                'joint_cam': joint_cam, # [X, Y, Z] in camera coordinate\n                'joint_vis': joint_vis,\n                'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n                'f': f,\n                'c': c})\n           \n        return data\n\n    def evaluate(self, preds, result_dir):\n        \n        print('Evaluation start...')\n        gts = self.data\n        assert len(gts) == len(preds)\n        sample_num = len(gts)\n        \n        pred_save = []\n        error = np.zeros((sample_num, self.joint_num-1)) # joint error\n        error_action = [ [] for _ in range(len(self.action_name)) ] # error for each sequence\n        for n in range(sample_num):\n            gt = gts[n]\n            image_id = gt['img_id']\n            f = gt['f']\n            c = gt['c']\n            bbox = gt['bbox']\n            gt_3d_root = gt['root_cam']\n            gt_3d_kpt = gt['joint_cam']\n            gt_vis = gt['joint_vis']\n            \n            # restore coordinates to original space\n            pred_2d_kpt = preds[n].copy()\n            pred_2d_kpt[:,0] = pred_2d_kpt[:,0] / cfg.output_shape[1] * bbox[2] + bbox[0]\n            pred_2d_kpt[:,1] = pred_2d_kpt[:,1] / cfg.output_shape[0] * bbox[3] + bbox[1]\n            pred_2d_kpt[:,2] = (pred_2d_kpt[:,2] / cfg.depth_dim * 2 - 1) * (cfg.bbox_3d_shape[0]/2) + gt_3d_root[2]\n\n            vis = False\n            if vis:\n                cvimg = cv2.imread(gt['img_path'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)\n                filename = str(random.randrange(1,500))\n                tmpimg = cvimg.copy().astype(np.uint8)\n                tmpkps = np.zeros((3,self.joint_num))\n                tmpkps[0,:], tmpkps[1,:] = pred_2d_kpt[:,0], pred_2d_kpt[:,1]\n                tmpkps[2,:] = 1\n                tmpimg = vis_keypoints(tmpimg, tmpkps, self.skeleton)\n                cv2.imwrite(filename + '_output.jpg', tmpimg)\n\n            # back project to camera coordinate system\n            pred_3d_kpt = pixel2cam(pred_2d_kpt, f, c)\n \n            # root joint alignment\n            pred_3d_kpt = pred_3d_kpt - pred_3d_kpt[self.root_idx]\n            gt_3d_kpt  = gt_3d_kpt - gt_3d_kpt[self.root_idx]\n           \n            if self.protocol == 1:\n                # rigid alignment for PA MPJPE (protocol #1)\n                pred_3d_kpt = rigid_align(pred_3d_kpt, gt_3d_kpt)\n            \n            # exclude thorax\n            pred_3d_kpt = np.take(pred_3d_kpt, self.eval_joint, axis=0)\n            gt_3d_kpt = np.take(gt_3d_kpt, self.eval_joint, axis=0)\n           \n            # error calculate\n            error[n] = np.sqrt(np.sum((pred_3d_kpt - gt_3d_kpt)**2,1))\n            img_name = gt['img_path']\n            action_idx = int(img_name[img_name.find('act')+4:img_name.find('act')+6]) - 2\n            error_action[action_idx].append(error[n].copy())\n\n            # prediction save\n            pred_save.append({'image_id': image_id, 'joint_cam': pred_3d_kpt.tolist(), 'bbox': 
bbox.tolist(), 'root_cam': gt_3d_root.tolist()}) # joint_cam is root-relative coordinate\n\n        # total error\n        tot_err = np.mean(error)\n        metric = 'PA MPJPE' if self.protocol == 1 else 'MPJPE'\n        eval_summary = 'Protocol ' + str(self.protocol) + ' error (' + metric + ') >> tot: %.2f\\n' % (tot_err)\n\n        # error for each action\n        for i in range(len(error_action)):\n            err = np.mean(np.array(error_action[i]))\n            eval_summary += (self.action_name[i] + ': %.2f ' % err)\n\n        print(eval_summary)\n\n        # prediction save\n        output_path = osp.join(result_dir, 'bbox_root_pose_human36m_output.json')\n        with open(output_path, 'w') as f:\n            json.dump(pred_save, f)\n        print(\"Test result is saved at \" + output_path)\n\n        return eval_summary\n\n"
  },
  {
    "path": "data/MPII/MPII.py",
    "content": "import os\nimport os.path as osp\nimport numpy as np\nfrom pycocotools.coco import COCO\nfrom utils.pose_utils import process_bbox\nfrom config import cfg\n\nclass MPII:\n\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('/', 'data', 'MPII')\n        self.train_annot_path = osp.join('/', 'data', 'MPII', 'annotations', 'train.json')\n        self.joint_num = 16\n        self.joints_name = ('R_Ankle', 'R_Knee', 'R_Hip', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Thorax', 'Neck', 'Head', 'R_Wrist', 'R_Elbow', 'R_Shoulder', 'L_Shoulder', 'L_Elbow', 'L_Wrist')\n        self.flip_pairs = ( (0, 5), (1, 4), (2, 3), (10, 15), (11, 14), (12, 13) )\n        self.skeleton = ( (0, 1), (1, 2), (2, 6), (7, 12), (12, 11), (11, 10), (5, 4), (4, 3), (3, 6), (7, 13), (13, 14), (14, 15), (6, 7), (7, 8), (8, 9) )\n        self.joints_have_depth = False\n        self.data = self.load_data()\n\n    def load_data(self):\n        \n        if self.data_split == 'train':\n            db = COCO(self.train_annot_path)\n        else:\n            print('Unknown data subset')\n            assert 0\n\n        data = []\n        for aid in db.anns.keys():\n            ann = db.anns[aid]\n            img = db.loadImgs(ann['image_id'])[0]\n            width, height = img['width'], img['height']\n\n            if ann['num_keypoints'] == 0:\n                continue\n            \n            bbox = process_bbox(ann['bbox'], width, height)\n            if bbox is None: continue\n\n            # joints and vis\n            joint_img = np.array(ann['keypoints']).reshape(self.joint_num,3)\n            joint_vis = joint_img[:,2].copy().reshape(-1,1)\n            joint_img[:,2] = 0\n\n            imgname = img['file_name']\n            img_path = osp.join(self.img_dir, imgname)\n            data.append({\n                'img_path': img_path,\n                'bbox': bbox,\n                'joint_img': joint_img, # [org_img_x, org_img_y, 0]\n                'joint_vis': joint_vis,\n            })\n\n        return data\n\n"
  },
  {
    "path": "data/MSCOCO/MSCOCO.py",
    "content": "import os\nimport os.path as osp\nimport numpy as np\nfrom pycocotools.coco import COCO\nfrom config import cfg\nimport scipy.io as sio\nimport json\nimport cv2\nimport random\nimport math\nfrom utils.pose_utils import pixel2cam, process_bbox\nfrom utils.vis import vis_keypoints, vis_3d_skeleton\n\n\nclass MSCOCO:\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('/','home', 'centos', 'datasets', 'coco', 'images')\n        self.train_annot_path = osp.join('/','home', 'centos', 'datasets', 'coco', 'annotations', 'person_keypoints_train2017.json')\n        self.test_annot_path = osp.join('/','home', 'centos', 'datasets', 'coco', 'annotations', 'person_keypoints_val2017.json')\n        self.human_3d_bbox_root_dir = osp.join('/', 'home', 'centos','datasets', 'coco', 'bbox_root', 'bbox_root_coco_output.json')\n        \n        if self.data_split == 'train':\n            self.joint_num = 19 # original: 17, but manually added 'Thorax', 'Pelvis'\n            self.joints_name = ('Nose', 'L_Eye', 'R_Eye', 'L_Ear', 'R_Ear', 'L_Shoulder', 'R_Shoulder', 'L_Elbow', 'R_Elbow', 'L_Wrist', 'R_Wrist', 'L_Hip', 'R_Hip', 'L_Knee', 'R_Knee', 'L_Ankle', 'R_Ankle', 'Thorax', 'Pelvis')\n            self.flip_pairs = ( (1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16) )\n            self.skeleton = ( (1, 2), (0, 1), (0, 2), (2, 4), (1, 3), (6, 8), (8, 10), (5, 7), (7, 9), (12, 14), (14, 16), (11, 13), (13, 15), (5, 6), (11, 12) )\n            self.joints_have_depth = False\n\n            self.lshoulder_idx = self.joints_name.index('L_Shoulder')\n            self.rshoulder_idx = self.joints_name.index('R_Shoulder')\n            self.lhip_idx = self.joints_name.index('L_Hip')\n            self.rhip_idx = self.joints_name.index('R_Hip')\n       \n        else:\n            ## testing settings (when test model trained on the MuCo-3DHP dataset)\n            self.joint_num = 21 # MuCo-3DHP\n            self.joints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe') # MuCo-3DHP\n            self.original_joint_num = 17 # MuPoTS\n            self.original_joints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head') # MuPoTS\n            self.flip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13) )\n            self.skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (11, 12), (12, 13), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7) )\n            self.eval_joint = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)\n            self.joints_have_depth = False\n\n        self.data = self.load_data()\n\n    def load_data(self):\n\n        if self.data_split == 'train':\n            db = COCO(self.train_annot_path)\n            data = []\n            for aid in db.anns.keys():\n                ann = db.anns[aid]\n                img = db.loadImgs(ann['image_id'])[0]\n                width, height = img['width'], img['height']\n\n                if (ann['image_id'] not in db.imgs) or ann['iscrowd'] or (ann['num_keypoints'] == 0):\n                    continue\n                \n                bbox = process_bbox(ann['bbox'], width, height) \n                if bbox is None: 
continue\n\n                # joints and vis\n                joint_img = np.array(ann['keypoints']).reshape(-1,3)\n                # add Thorax\n                thorax = (joint_img[self.lshoulder_idx, :] + joint_img[self.rshoulder_idx, :]) * 0.5\n                thorax[2] = joint_img[self.lshoulder_idx,2] * joint_img[self.rshoulder_idx,2]\n                thorax = thorax.reshape((1, 3))\n                # add Pelvis\n                pelvis = (joint_img[self.lhip_idx, :] + joint_img[self.rhip_idx, :]) * 0.5\n                pelvis[2] = joint_img[self.lhip_idx,2] * joint_img[self.rhip_idx,2]\n                pelvis = pelvis.reshape((1, 3))\n\n                joint_img = np.concatenate((joint_img, thorax, pelvis), axis=0)\n\n                joint_vis = (joint_img[:,2].copy().reshape(-1,1) > 0)\n                joint_img[:,2] = 0\n\n                imgname = osp.join('train2017', db.imgs[ann['image_id']]['file_name'])\n                img_path = osp.join(self.img_dir, imgname)\n                data.append({\n                    'img_path': img_path,\n                    'bbox': bbox,\n                    'joint_img': joint_img, # [org_img_x, org_img_y, 0]\n                    'joint_vis': joint_vis,\n                    'f': np.array([1500, 1500]), \n                    'c': np.array([width/2, height/2]) \n                })\n\n        elif self.data_split == 'test':\n            db = COCO(self.test_annot_path)\n            with open(self.human_3d_bbox_root_dir) as f:\n                annot = json.load(f)\n            data = [] \n            for i in range(len(annot)):\n                image_id = annot[i]['image_id']\n                img = db.loadImgs(image_id)[0]\n                img_path = osp.join(self.img_dir, 'val2017', img['file_name'])\n                fx, fy, cx, cy = 1500, 1500, img['width']/2, img['height']/2\n                f = np.array([fx, fy]); c = np.array([cx, cy]);\n                root_cam = np.array(annot[i]['root_cam']).reshape(3)\n                bbox = np.array(annot[i]['bbox']).reshape(4)\n\n                data.append({\n                    'img_path': img_path,\n                    'bbox': bbox,\n                    'joint_img': np.zeros((self.original_joint_num, 3)), # dummy\n                    'joint_cam': np.zeros((self.original_joint_num, 3)), # dummy\n                    'joint_vis': np.zeros((self.original_joint_num, 1)), # dummy\n                    'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n                    'f': f,\n                    'c': c,\n                })\n\n        else:\n            print('Unknown data subset')\n            assert 0\n\n\n        return data\n\n    def evaluate(self, preds, result_dir):\n        \n        print('Evaluation start...')\n        gts = self.data\n        sample_num = len(preds)\n        joint_num = self.original_joint_num\n\n        pred_2d_save = {}\n        pred_3d_save = {}\n        for n in range(sample_num):\n            \n            gt = gts[n]\n            f = gt['f']\n            c = gt['c']\n            bbox = gt['bbox']\n            gt_3d_root = gt['root_cam']\n            img_name = gt['img_path'].split('/')\n            img_name = 'coco_' + img_name[-1].split('.')[0] # e.g., coco_00000000\n            \n            # restore coordinates to original space\n            pred_2d_kpt = preds[n].copy()\n            # only consider eval_joint\n            pred_2d_kpt = np.take(pred_2d_kpt, self.eval_joint, axis=0)\n            pred_2d_kpt[:,0] = pred_2d_kpt[:,0] / cfg.output_shape[1] * bbox[2] + 
bbox[0]\n            pred_2d_kpt[:,1] = pred_2d_kpt[:,1] / cfg.output_shape[0] * bbox[3] + bbox[1]\n            pred_2d_kpt[:,2] = (pred_2d_kpt[:,2] / cfg.depth_dim * 2 - 1) * (cfg.bbox_3d_shape[0]/2) + gt_3d_root[2]\n\n            # 2d kpt save\n            if img_name in pred_2d_save:\n                pred_2d_save[img_name].append(pred_2d_kpt[:,:2])\n            else:\n                pred_2d_save[img_name] = [pred_2d_kpt[:,:2]]\n\n            vis = False\n            if vis:\n                cvimg = cv2.imread(gt['img_path'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)\n                filename = str(random.randrange(1,500))\n                tmpimg = cvimg.copy().astype(np.uint8)\n                tmpkps = np.zeros((3,joint_num))\n                tmpkps[0,:], tmpkps[1,:] = pred_2d_kpt[:,0], pred_2d_kpt[:,1]\n                tmpkps[2,:] = 1\n                tmpimg = vis_keypoints(tmpimg, tmpkps, self.skeleton)\n                cv2.imwrite(filename + '_output.jpg', tmpimg)\n\n            # back project to camera coordinate system\n            pred_3d_kpt = pixel2cam(pred_2d_kpt, f, c)\n            \n            # 3d kpt save\n            if img_name in pred_3d_save:\n                pred_3d_save[img_name].append(pred_3d_kpt)\n            else:\n                pred_3d_save[img_name] = [pred_3d_kpt]\n        \n        output_path = osp.join(result_dir,'preds_2d_kpt_coco.mat')\n        sio.savemat(output_path, pred_2d_save)\n        print(\"Testing result is saved at \" + output_path)\n        output_path = osp.join(result_dir,'preds_3d_kpt_coco.mat')\n        sio.savemat(output_path, pred_3d_save)\n        print(\"Testing result is saved at \" + output_path)\n\n"
  },
  {
    "path": "data/MuCo/MuCo.py",
    "content": "import os\nimport os.path as osp\nimport numpy as np\nimport math\nfrom utils.pose_utils import process_bbox\nfrom pycocotools.coco import COCO\nfrom config import cfg\n\nclass MuCo:\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('/', 'home', 'centos', 'datasets', 'MuCo')\n        self.train_annot_path = osp.join('/', 'home', 'centos', 'datasets', 'MuCo', 'MuCo-3DHP.json')\n        self.joint_num = 21\n        self.joints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe')\n        self.flip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13), (17, 18), (19, 20) )\n        self.skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (10, 19), (11, 12), (12, 13), (13, 20), (1, 2), (2, 3), (3, 4), (4, 17), (1, 5), (5, 6), (6, 7), (7, 18) )\n        self.joints_have_depth = True\n        self.root_idx = self.joints_name.index('Pelvis')\n        self.data = self.load_data()\n\n    def load_data(self):\n\n        if self.data_split == 'train':\n            db = COCO(self.train_annot_path)\n        else:\n            print('Unknown data subset')\n            assert 0\n\n        data = []\n        for iid in db.imgs.keys():\n            img = db.imgs[iid]\n            img_id = img[\"id\"]\n            img_width, img_height = img['width'], img['height']\n            imgname = img['file_name']\n            img_path = osp.join(self.img_dir, imgname)\n            f = img[\"f\"]\n            c = img[\"c\"]\n\n            # crop the closest person to the camera\n            ann_ids = db.getAnnIds(img_id)\n            anns = db.loadAnns(ann_ids)\n\n            root_depths = [ann['keypoints_cam'][self.root_idx][2] for ann in anns]\n            closest_pid = root_depths.index(min(root_depths))\n            pid_list = [closest_pid]\n            for i in range(len(anns)):\n                if i == closest_pid:\n                    continue\n                picked = True\n                for j in range(len(anns)):\n                    if i == j:\n                        continue\n                    dist = (np.array(anns[i]['keypoints_cam'][self.root_idx]) - np.array(anns[j]['keypoints_cam'][self.root_idx])) ** 2\n                    dist_2d = math.sqrt(np.sum(dist[:2]))\n                    dist_3d = math.sqrt(np.sum(dist))\n                    if dist_2d < 500 or dist_3d < 500:\n                        picked = False\n                if picked:\n                    pid_list.append(i)\n            \n            for pid in pid_list:\n                joint_cam = np.array(anns[pid]['keypoints_cam'])\n                root_cam = joint_cam[self.root_idx]\n                \n                joint_img = np.array(anns[pid]['keypoints_img'])\n                joint_img = np.concatenate([joint_img, joint_cam[:,2:]],1)\n                joint_img[:,2] = joint_img[:,2] - root_cam[2]\n                joint_vis = np.ones((self.joint_num,1))\n\n                bbox = process_bbox(anns[pid]['bbox'], img_width, img_height)\n                if bbox is None: continue\n\n                data.append({\n                    'img_path': img_path,\n                    'bbox': bbox,\n                    'joint_img': joint_img, # [org_img_x, org_img_y, depth - root_depth]\n                    'joint_cam': joint_cam, # [X, Y, Z] in camera coordinate\n    
                'joint_vis': joint_vis,\n                    'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n                    'f': f,\n                    'c': c\n                })\n\n\n        return data\n\n\n"
  },
  {
    "path": "data/MuPoTS/MuPoTS.py",
    "content": "import os\nimport os.path as osp\nimport scipy.io as sio\nimport numpy as np\nfrom pycocotools.coco import COCO\nfrom config import cfg\nimport json\nimport cv2\nimport random\nimport math\nfrom utils.pose_utils import pixel2cam, process_bbox\nfrom utils.vis import vis_keypoints, vis_3d_skeleton\n\nclass MuPoTS:\n    def __init__(self, data_split):\n        self.data_split = data_split\n        self.img_dir = osp.join('/', 'data', 'MuPoTS', 'data', 'MultiPersonTestSet')\n        self.test_annot_path = osp.join('/', 'data', 'MuPoTS', 'data', 'MuPoTS-3D.json')\n        self.human_bbox_root_dir = osp.join('/', 'data', 'MuPoTS', 'bbox_root', 'bbox_root_mupots_output.json')\n        self.joint_num = 21 # MuCo-3DHP\n        self.joints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe') # MuCo-3DHP\n        self.original_joint_num = 17 # MuPoTS\n        self.original_joints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head') # MuPoTS\n\n        self.flip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13) )\n        self.skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (11, 12), (12, 13), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7) )\n        self.eval_joint = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)\n        self.joints_have_depth = True\n        self.root_idx = self.joints_name.index('Pelvis')\n        self.data = self.load_data()\n\n    def load_data(self):\n        \n        if self.data_split != 'test':\n            print('Unknown data subset')\n            assert 0\n        \n        data = []\n        db = COCO(self.test_annot_path)\n\n        # use gt bbox and root\n        if cfg.use_gt_info:\n            print(\"Get bounding box and root from groundtruth\")\n            for aid in db.anns.keys():\n                ann = db.anns[aid]\n                if ann['is_valid'] == 0:\n                    continue\n\n                image_id = ann['image_id']\n                img = db.loadImgs(image_id)[0]\n                img_path = osp.join(self.img_dir, img['file_name'])\n                fx, fy, cx, cy = img['intrinsic']\n                f = np.array([fx, fy]); c = np.array([cx, cy]);\n\n                joint_cam = np.array(ann['keypoints_cam'])\n                root_cam = joint_cam[self.root_idx]\n\n                joint_img = np.array(ann['keypoints_img'])\n                joint_img = np.concatenate([joint_img, joint_cam[:,2:]],1)\n                joint_img[:,2] = joint_img[:,2] - root_cam[2]\n                joint_vis = np.ones((self.original_joint_num,1))\n                \n                bbox = np.array(ann['bbox'])\n                img_width, img_height = img['width'], img['height']\n                bbox = process_bbox(bbox, img_width, img_height)\n                if bbox is None: continue\n                \n                data.append({\n                    'img_path': img_path,\n                    'bbox': bbox, \n                    'joint_img': joint_img, # [org_img_x, org_img_y, depth - root_depth]\n                    'joint_cam': joint_cam, # [X, Y, Z] in camera coordinate\n                    'joint_vis': joint_vis,\n                    'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n 
                   'f': f,\n                    'c': c,\n                })\n           \n        else:\n            print(\"Get bounding box and root from \" + self.human_bbox_root_dir)\n            with open(self.human_bbox_root_dir) as f:\n                annot = json.load(f)\n            \n            for i in range(len(annot)):\n                image_id = annot[i]['image_id']\n                img = db.loadImgs(image_id)[0]\n                img_width, img_height = img['width'], img['height']\n                img_path = osp.join(self.img_dir, img['file_name'])\n                fx, fy, cx, cy = img['intrinsic']\n                f = np.array([fx, fy]); c = np.array([cx, cy]);\n                root_cam = np.array(annot[i]['root_cam']).reshape(3)\n                bbox = np.array(annot[i]['bbox']).reshape(4)\n\n                data.append({\n                    'img_path': img_path,\n                    'bbox': bbox,\n                    'joint_img': np.zeros((self.original_joint_num, 3)), # dummy\n                    'joint_cam': np.zeros((self.original_joint_num, 3)), # dummy\n                    'joint_vis': np.zeros((self.original_joint_num, 1)), # dummy\n                    'root_cam': root_cam, # [X, Y, Z] in camera coordinate\n                    'f': f,\n                    'c': c,\n                })\n\n        return data\n\n    def evaluate(self, preds, result_dir):\n        \n        print('Evaluation start...')\n        gts = self.data\n        sample_num = len(preds)\n        joint_num = self.original_joint_num\n \n        pred_2d_save = {}\n        pred_3d_save = {}\n        for n in range(sample_num):\n            \n            gt = gts[n]\n            f = gt['f']\n            c = gt['c']\n            bbox = gt['bbox']\n            gt_3d_root = gt['root_cam']\n            img_name = gt['img_path'].split('/')\n            img_name = img_name[-2] + '_' + img_name[-1].split('.')[0] # e.g., TS1_img_0001\n            \n            # restore coordinates to original space\n            pred_2d_kpt = preds[n].copy()\n            # only consider eval_joint\n            pred_2d_kpt = np.take(pred_2d_kpt, self.eval_joint, axis=0)\n            pred_2d_kpt[:,0] = pred_2d_kpt[:,0] / cfg.output_shape[1] * bbox[2] + bbox[0]\n            pred_2d_kpt[:,1] = pred_2d_kpt[:,1] / cfg.output_shape[0] * bbox[3] + bbox[1]\n            pred_2d_kpt[:,2] = (pred_2d_kpt[:,2] / cfg.depth_dim * 2 - 1) * (cfg.bbox_3d_shape[0]/2) + gt_3d_root[2]\n\n            # 2d kpt save\n            if img_name in pred_2d_save:\n                pred_2d_save[img_name].append(pred_2d_kpt[:,:2])\n            else:\n                pred_2d_save[img_name] = [pred_2d_kpt[:,:2]]\n\n            vis = False\n            if vis:\n                cvimg = cv2.imread(gt['img_path'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)\n                filename = str(random.randrange(1,500))\n                tmpimg = cvimg.copy().astype(np.uint8)\n                tmpkps = np.zeros((3,joint_num))\n                tmpkps[0,:], tmpkps[1,:] = pred_2d_kpt[:,0], pred_2d_kpt[:,1]\n                tmpkps[2,:] = 1\n                tmpimg = vis_keypoints(tmpimg, tmpkps, self.skeleton)\n                cv2.imwrite(filename + '_output.jpg', tmpimg)\n\n            # back project to camera coordinate system\n            pred_3d_kpt = pixel2cam(pred_2d_kpt, f, c)\n            \n            # 3d kpt save\n            if img_name in pred_3d_save:\n                pred_3d_save[img_name].append(pred_3d_kpt)\n            else:\n                
pred_3d_save[img_name] = [pred_3d_kpt]\n        \n        output_path = osp.join(result_dir,'preds_2d_kpt_mupots.mat')\n        sio.savemat(output_path, pred_2d_save)\n        print(\"Testing result is saved at \" + output_path)\n        output_path = osp.join(result_dir,'preds_3d_kpt_mupots.mat')\n        sio.savemat(output_path, pred_3d_save)\n        print(\"Testing result is saved at \" + output_path)\n\n"
  },
  {
    "path": "data/MuPoTS/mpii_mupots_multiperson_eval.m",
    "content": "function mpii_mupots_multiperson_eval(eval_mode, is_relative)\n\n% eval_mode: EVLAUATION_MODE\n% is_relative: 1: root-relative 3D multi-person pose estimation, 0: absolute 3D multi-person pose estimation\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Outline of the test eval procedure on MuPoTS-3D. \n% Plug in your predictions at the appropriate point\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nmpii_mupots_config;\naddpath('./util');\n[~,o1,o2,relevant_labels] = mpii_get_joints('relevant');  \nnum_joints = length(o1);\n\n%Path to the test images and annotations\ntest_annot_base = mpii_mupots_path; %See mpii_mupots_config\n%Path where results are written out \nresults_output_path = './';\n\n%If predicted joints have a different ordering, specify mapping to MPI joints here\n%map_to_mpii_jointset = % [11 14 10 13 9 12 5 8 4 7 3 6 1];\n%Order to process bones in to resize them to the GT\nsafe_traversal_order = [15, 16, 2, 1, 17, 3, 4, 5, 6, 7, 8, 9:14];\n\nEVALUATION_MODE = eval_mode; % 0 = evaluate all annotated persons, 1 = evaluate only predictions matched to annotations\n\nperson_colors = {'red', 'yellow', 'green', 'blue', 'magenta', 'cyan', 'black', 'white'} ;\n\nsequencewise_per_joint_error = {};\nsequencewise_undetected_people = [];\nsequencewise_visibility_mask = {};\nsequencewise_occlusion_mask = {};\nsequencewise_annotated_people = [];\nsequencewise_frames = [];\n\n%% load prdictions\npreds_2d_kpt = load('preds_2d_kpt_mupots.mat');\npreds_3d_kpt = load('preds_3d_kpt_mupots.mat');\n\nfor ts = 1:20\n    person_ids = [];\n    open_person_ids = 1:20;\n\n    load( sprintf('%s/TS%d/annot.mat',test_annot_base, ts));\n    load( sprintf('%s/TS%d/occlusion.mat',test_annot_base, ts));\n    \n    num_frames = size(annotations,1);\n    \n    undetected_people = 0;\n    annotated_people = 0;\n    pje_idx = 1;\n    \n    per_joint_error = []; %zeros(17,1,num_test_points);\n    per_joint_occlusion_mask = [];\n    per_joint_visibility_mask = [];\n    sequencewise_frames(ts) = num_frames;\n\nfor i = 1:num_frames\n\n     %Count valid annotations\n     valid_annotations = 0;\n     for k = 1:size(annotations,2)\n         if(annotations{i,k}.isValidFrame)\n             valid_annotations = valid_annotations + 1;\n         end\n     end\n     annotated_people = annotated_people + valid_annotations;\n     \n     if(valid_annotations == 0)\n         continue;\n     end\n     \n     gt_pose_2d =  cell(valid_annotations,1);\n     gt_pose_3d =  cell(valid_annotations,1);\n     gt_visibility = cell(valid_annotations,1);\n     gt_pose_occlusion_labels =  cell(valid_annotations,1);\n     gt_pose_visibility_labels =  cell(valid_annotations,1);\n     %The joint set to use for matching predictions to GT\n     matching_joints = [2:14];\n     %matching_joints = [2 3 6 9 12];\n\n     idx = 1;\n     for k = 1:size(annotations,2)\n         if(annotations{i,k}.isValidFrame)\n             \n             gt_pose_2d{idx} = annotations{i,k}.annot2(:,matching_joints); \n             gt_pose_3d{idx} = annotations{i,k}.univ_annot3 ;\n             gt_visibility{idx} = ones(1,length(matching_joints));\n             gt_pose_occlusion_labels{idx} = occlusion_labels{i,k} ;\n             gt_pose_visibility_labels{idx} = 1 - occlusion_labels{i,k} ;\n             idx = idx + 1;\n         end\n     end\n\n    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n    %%%% Predictions here     \n    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n    %img = imread(sprintf('%s/TS%d/img_%06d.jpg',test_annot_base, ts, i-1));\n    \n    % prediction of this image\n    
pred_2d_kpt = getfield(preds_2d_kpt,sprintf('TS%d_img_%06d',ts, i-1));\n    pred_3d_kpt = getfield(preds_3d_kpt,sprintf('TS%d_img_%06d',ts, i-1));\n\n    %Number of subjects predicted \n    num_pred = size(pred_2d_kpt,1);\n\n    pred_pose_2d = cell(num_pred,1);\n    pred_pose_3d = cell(num_pred,1);\n    pred_visibility = cell(num_pred,1);\n     for k = 1:num_pred\n         \n         pred_pose_2d{k} = zeros(2,14);\n         %pred_pose_2d{k}(:,map_to_mpii_jointset) = % 2D Pose for person detected person k;\n         pred_pose_2d{k} = transpose(squeeze(pred_2d_kpt(k,:,:))); % 2D Pose for person detected person k;\n\n         % If some joints such as neck are missing, they can be estimated as the mean of shoulders\n         %pred_pose_2d{k}(:,2) = mean(pred_pose_2d{k}(:,[3,6]),2);\n\n         pred_pose_2d{k} = pred_pose_2d{k}(:,matching_joints);\n         pred_visibility{k} = ~((pred_pose_2d{k}(1,:) == 0) & (pred_pose_2d{k}(2,:) == 0));\n         \n         pred_pose_3d{k} = zeros(3,num_joints);\n         %pred_pose_3d{k}(:,map_to_mpii_jointset) = % 3D Pose for person detected person k;\n         pred_pose_3d{k} = transpose(squeeze(pred_3d_kpt(k,:,:))); % 3D Pose for person detected person k;\n\n         % If some joints such as neck or pelvis are missing, they can be estimated as \n         % the mean of shoulders or hips\n         %pred_pose_3d{k}(:,2) = mean(pred_pose_3d{k}(:,[3,6]),2);\n         %pred_pose_3d{k}(:,15) = mean(pred_pose_3d{k}(:,[9,12]),2);\n         %Center the predictions at the pelvis\n         if is_relative == 1\n             pred_pose_3d{k} = pred_pose_3d{k} - repmat(pred_pose_3d{k}(:,15), 1, 17);\n         else\n             pred_pose_3d{k} = pred_pose_3d{k};\n         end\n\n         %Other mappings that may be needed to convert the predicted pose to match our coordinate system\n         %pred_pose_3d{k} = 1000* pred_pose_3d{k}([2 3 1],:);\n         %pred_pose_3d{k}(1:2,:) = -pred_pose_3d{k}(1:2,:);\n     end\n    \n    %Match predictions to GT \n    [matching, old_matched] = mpii_multiperson_get_identity_matching(gt_pose_2d, gt_visibility, pred_pose_2d, pred_visibility, 40);\n    \n    undetected_people = undetected_people + sum(matching == 0);\n    \n    for k = 1:valid_annotations\n        if is_relative == 1\n            P = gt_pose_3d{k}(:,1:num_joints) - repmat(gt_pose_3d{k}(:,15),1 , num_joints);\n        else\n            P = gt_pose_3d{k}(:,1:num_joints);\n        end\n\n        pred_considered = 0;\n        \n        if(matching(k) ~= 0 )\n            pred_p = pred_pose_3d{matching(k)}(:,1:num_joints);\n            pred_p = mpii_map_to_gt_bone_lengths(pred_p, P, o1, safe_traversal_order(2:end));\n            pred_considered = 1;\n        else\n            pred_p = 100000 * ones(size(P)); %So that the 3DPCK metric marks all these joints as 0!\n            if(EVALUATION_MODE==0)\n                pred_considered = 1;\n            end\n        end\n        \n        if (pred_considered == 1 )\n            error_p = (pred_p - P).^2;\n            error_p = sqrt(sum(error_p, 1));\n            per_joint_error(1:num_joints,1,pje_idx) = error_p;     \n            per_joint_occlusion_mask(1:num_joints,1,pje_idx) = gt_pose_occlusion_labels{k};\n            per_joint_visibility_mask(1:num_joints,1,pje_idx) = gt_pose_visibility_labels{k};\n            pje_idx = pje_idx + 1;\n        end\n        \n    end\n\nend\nsequencewise_undetected_people(ts) = undetected_people;\nsequencewise_annotated_people(ts) = annotated_people;\nsequencewise_per_joint_error{ts} = 
per_joint_error;\nsequencewise_visibility_mask{ts} =  per_joint_visibility_mask;\nsequencewise_occlusion_mask{ts} =  per_joint_occlusion_mask;  \n    \nend\n\n\nif(EVALUATION_MODE == 0)\n    out_prefix = 'all_annotated_';\nelse\n    out_prefix = 'only_matched_annotations_';\nend\n\nsave([results_output_path filesep out_prefix 'multiperson_3dhp_evaluation.mat'], 'sequencewise_per_joint_error' );\n\n[seq_table] = mpii_evaluate_multiperson_errors(sequencewise_per_joint_error );%fullfile(net_base, net_path{n,1}));\nout_file = [results_output_path filesep out_prefix 'multiperson_3dhp_evaluation'];\nwritetable(cell2table(seq_table), [out_file '_sequencewise.csv']);\n\n  \n[seq_table] = mpii_evaluate_multiperson_errors_visibility_mask(sequencewise_per_joint_error , sequencewise_visibility_mask);\nout_file = [results_output_path filesep [out_prefix 'visible_joints_'] 'multiperson_3dhp_evaluation'];\nwritetable(cell2table(seq_table), [out_file '_sequencewise.csv']);\n\n[seq_table] = mpii_evaluate_multiperson_errors_visibility_mask(sequencewise_per_joint_error , sequencewise_occlusion_mask);\nout_file = [results_output_path filesep [out_prefix 'occluded_joints_'] 'multiperson_3dhp_evaluation'];\nwritetable(cell2table(seq_table), [out_file '_sequencewise.csv']);\n  \n%\nend\n"
  },
  {
    "path": "data/dataset.py",
    "content": "import numpy as np\nimport cv2\nimport random\nimport time\nimport torch\nimport copy\nimport math\nfrom torch.utils.data.dataset import Dataset\nfrom utils.vis import vis_keypoints, vis_3d_skeleton\nfrom utils.pose_utils import fliplr_joints, transform_joint_to_other_db\nfrom config import cfg\n\nclass DatasetLoader(Dataset):\n    def __init__(self, db, ref_joints_name, is_train, transform):\n        \n        self.db = db.data\n        self.joint_num = db.joint_num\n        self.skeleton = db.skeleton\n        self.flip_pairs = db.flip_pairs\n        self.joints_have_depth = db.joints_have_depth\n        self.joints_name = db.joints_name\n        self.ref_joints_name = ref_joints_name\n        \n        self.transform = transform\n        self.is_train = is_train\n\n        if self.is_train:\n            self.do_augment = True\n        else:\n            self.do_augment = False\n\n    def __getitem__(self, index):\n        \n        joint_num = self.joint_num\n        skeleton = self.skeleton\n        flip_pairs = self.flip_pairs\n        joints_have_depth = self.joints_have_depth\n\n        data = copy.deepcopy(self.db[index])\n\n        bbox = data['bbox']\n        joint_img = data['joint_img']\n        joint_vis = data['joint_vis']\n\n        # 1. load image\n        cvimg = cv2.imread(data['img_path'], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)\n        if not isinstance(cvimg, np.ndarray):\n            raise IOError(\"Fail to read %s\" % data['img_path'])\n        img_height, img_width, img_channels = cvimg.shape\n\n        # 2. get augmentation params\n        if self.do_augment:\n            scale, rot, do_flip, color_scale, do_occlusion = get_aug_config()\n        else:\n            scale, rot, do_flip, color_scale, do_occlusion = 1.0, 0.0, False, [1.0, 1.0, 1.0], False\n\n        # 3. crop patch from img and perform data augmentation (flip, rot, color scale, synthetic occlusion)\n        img_patch, trans = generate_patch_image(cvimg, bbox, do_flip, scale, rot, do_occlusion)\n        for i in range(img_channels):\n            img_patch[:, :, i] = np.clip(img_patch[:, :, i] * color_scale[i], 0, 255)\n\n        # 4. generate patch joint ground truth\n        # flip joints and apply Affine Transform on joints\n        if do_flip:\n            joint_img[:, 0] = img_width - joint_img[:, 0] - 1\n            for pair in flip_pairs:\n                joint_img[pair[0], :], joint_img[pair[1], :] = joint_img[pair[1], :], joint_img[pair[0], :].copy()\n                joint_vis[pair[0], :], joint_vis[pair[1], :] = joint_vis[pair[1], :], joint_vis[pair[0], :].copy()\n\n        for i in range(len(joint_img)):\n            joint_img[i, 0:2] = trans_point2d(joint_img[i, 0:2], trans)\n            joint_img[i, 2] /= (cfg.bbox_3d_shape[0]/2.) # expect depth lies in -bbox_3d_shape[0]/2 ~ bbox_3d_shape[0]/2 -> -1.0 ~ 1.0\n            joint_img[i, 2] = (joint_img[i,2] + 1.0)/2. 
# 0~1 normalize\n            joint_vis[i] *= (\n                            (joint_img[i,0] >= 0) & \\\n                            (joint_img[i,0] < cfg.input_shape[1]) & \\\n                            (joint_img[i,1] >= 0) & \\\n                            (joint_img[i,1] < cfg.input_shape[0]) & \\\n                            (joint_img[i,2] >= 0) & \\\n                            (joint_img[i,2] < 1)\n                            )\n\n        vis = False\n        if vis:\n            filename = str(random.randrange(1,500))\n            tmpimg = img_patch.copy().astype(np.uint8)\n            tmpkps = np.zeros((3,joint_num))\n            tmpkps[:2,:] = joint_img[:,:2].transpose(1,0)\n            tmpkps[2,:] = joint_vis[:,0]\n            tmpimg = vis_keypoints(tmpimg, tmpkps, skeleton)\n            cv2.imwrite(filename + '_gt.jpg', tmpimg)\n        \n        vis = False\n        if vis:\n            vis_3d_skeleton(joint_img, joint_vis, skeleton, filename)\n\n        # change coordinates to output space\n        joint_img[:, 0] = joint_img[:, 0] / cfg.input_shape[1] * cfg.output_shape[1]\n        joint_img[:, 1] = joint_img[:, 1] / cfg.input_shape[0] * cfg.output_shape[0]\n        joint_img[:, 2] = joint_img[:, 2] * cfg.depth_dim\n        \n        if self.is_train:\n            img_patch = self.transform(img_patch)\n            \n            if self.ref_joints_name is not None:\n                joint_img = transform_joint_to_other_db(joint_img, self.joints_name, self.ref_joints_name) \n                joint_vis = transform_joint_to_other_db(joint_vis, self.joints_name, self.ref_joints_name)\n\n            joint_img = joint_img.astype(np.float32)\n            joint_vis = (joint_vis > 0).astype(np.float32)\n            joints_have_depth = np.array([joints_have_depth]).astype(np.float32)\n\n            return img_patch, joint_img, joint_vis, joints_have_depth\n        else:\n            img_patch = self.transform(img_patch)\n            return img_patch\n\n    def __len__(self):\n        return len(self.db)\n\n# helper functions\ndef get_aug_config():\n    \n    scale_factor = 0.25\n    rot_factor = 30\n    color_factor = 0.2\n    \n    scale = np.clip(np.random.randn(), -1.0, 1.0) * scale_factor + 1.0\n    rot = np.clip(np.random.randn(), -2.0,\n                  2.0) * rot_factor if random.random() <= 0.6 else 0\n    do_flip = random.random() <= 0.5\n    c_up = 1.0 + color_factor\n    c_low = 1.0 - color_factor\n    color_scale = [random.uniform(c_low, c_up), random.uniform(c_low, c_up), random.uniform(c_low, c_up)]\n\n    do_occlusion = random.random() <= 0.5\n\n    return scale, rot, do_flip, color_scale, do_occlusion\n\n\ndef generate_patch_image(cvimg, bbox, do_flip, scale, rot, do_occlusion):\n    img = cvimg.copy()\n    img_height, img_width, img_channels = img.shape\n\n    # synthetic occlusion\n    if do_occlusion:\n        while True:\n            area_min = 0.0\n            area_max = 0.7\n            synth_area = (random.random() * (area_max - area_min) + area_min) * bbox[2] * bbox[3]\n\n            ratio_min = 0.3\n            ratio_max = 1/0.3\n            synth_ratio = (random.random() * (ratio_max - ratio_min) + ratio_min)\n\n            synth_h = math.sqrt(synth_area * synth_ratio)\n            synth_w = math.sqrt(synth_area / synth_ratio)\n            synth_xmin = random.random() * (bbox[2] - synth_w - 1) + bbox[0]\n            synth_ymin = random.random() * (bbox[3] - synth_h - 1) + bbox[1]\n\n            if synth_xmin >= 0 and synth_ymin >= 0 and synth_xmin + synth_w < 
img_width and synth_ymin + synth_h < img_height:\n                xmin = int(synth_xmin)\n                ymin = int(synth_ymin)\n                w = int(synth_w)\n                h = int(synth_h)\n                img[ymin:ymin+h, xmin:xmin+w, :] = np.random.rand(h, w, 3) * 255\n                break\n\n    bb_c_x = float(bbox[0] + 0.5*bbox[2])\n    bb_c_y = float(bbox[1] + 0.5*bbox[3])\n    bb_width = float(bbox[2])\n    bb_height = float(bbox[3])\n\n    if do_flip:\n        img = img[:, ::-1, :]\n        bb_c_x = img_width - bb_c_x - 1\n    \n    trans = gen_trans_from_patch_cv(bb_c_x, bb_c_y, bb_width, bb_height, cfg.input_shape[1], cfg.input_shape[0], scale, rot, inv=False)\n    img_patch = cv2.warpAffine(img, trans, (int(cfg.input_shape[1]), int(cfg.input_shape[0])), flags=cv2.INTER_LINEAR)\n\n    img_patch = img_patch[:,:,::-1].copy()\n    img_patch = img_patch.astype(np.float32)\n\n    return img_patch, trans\n\ndef rotate_2d(pt_2d, rot_rad):\n    x = pt_2d[0]\n    y = pt_2d[1]\n    sn, cs = np.sin(rot_rad), np.cos(rot_rad)\n    xx = x * cs - y * sn\n    yy = x * sn + y * cs\n    return np.array([xx, yy], dtype=np.float32)\n\ndef gen_trans_from_patch_cv(c_x, c_y, src_width, src_height, dst_width, dst_height, scale, rot, inv=False):\n    # augment size with scale\n    src_w = src_width * scale\n    src_h = src_height * scale\n    src_center = np.array([c_x, c_y], dtype=np.float32)\n\n    # augment rotation\n    rot_rad = np.pi * rot / 180\n    src_downdir = rotate_2d(np.array([0, src_h * 0.5], dtype=np.float32), rot_rad)\n    src_rightdir = rotate_2d(np.array([src_w * 0.5, 0], dtype=np.float32), rot_rad)\n\n    dst_w = dst_width\n    dst_h = dst_height\n    dst_center = np.array([dst_w * 0.5, dst_h * 0.5], dtype=np.float32)\n    dst_downdir = np.array([0, dst_h * 0.5], dtype=np.float32)\n    dst_rightdir = np.array([dst_w * 0.5, 0], dtype=np.float32)\n\n    src = np.zeros((3, 2), dtype=np.float32)\n    src[0, :] = src_center\n    src[1, :] = src_center + src_downdir\n    src[2, :] = src_center + src_rightdir\n\n    dst = np.zeros((3, 2), dtype=np.float32)\n    dst[0, :] = dst_center\n    dst[1, :] = dst_center + dst_downdir\n    dst[2, :] = dst_center + dst_rightdir\n\n    if inv:\n        trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))\n    else:\n        trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))\n\n    return trans\n\ndef trans_point2d(pt_2d, trans):\n    src_pt = np.array([pt_2d[0], pt_2d[1], 1.]).T\n    dst_pt = np.dot(trans, src_pt)\n    return dst_pt[0:2]\n"
  },
  {
    "path": "data/multiple_datasets.py",
    "content": "import random\nimport numpy as np\nfrom torch.utils.data.dataset import Dataset\n\nclass MultipleDatasets(Dataset):\n    def __init__(self, dbs, make_same_len=True):\n        self.dbs = dbs\n        self.db_num = len(self.dbs)\n        self.max_db_data_num = max([len(db) for db in dbs])\n        self.db_len_cumsum = np.cumsum([len(db) for db in dbs])\n        self.make_same_len = make_same_len\n\n    def __len__(self):\n        # all dbs have the same length\n        if self.make_same_len:\n            return self.max_db_data_num * self.db_num\n        # each db has different length\n        else:\n            return sum([len(db) for db in self.dbs])\n\n    def __getitem__(self, index):\n        if self.make_same_len:\n            db_idx = index // self.max_db_data_num\n            data_idx = index % self.max_db_data_num \n            if data_idx >= len(self.dbs[db_idx]) * (self.max_db_data_num // len(self.dbs[db_idx])): # last batch: random sampling\n                data_idx = random.randint(0,len(self.dbs[db_idx])-1)\n            else: # before last batch: use modular\n                data_idx = data_idx % len(self.dbs[db_idx])\n        else:\n            for i in range(self.db_num):\n                if index < self.db_len_cumsum[i]:\n                    db_idx = i\n                    break\n            if db_idx == 0:\n                data_idx = index\n            else:\n                data_idx = index - self.db_len_cumsum[db_idx-1]\n\n        return self.dbs[db_idx][data_idx]\n\n\n"
  },
  {
    "path": "demo/demo.py",
    "content": "import sys\r\nimport os\r\nimport os.path as osp\r\nimport argparse\r\nimport numpy as np\r\nimport cv2\r\nimport torch\r\nimport torchvision.transforms as transforms\r\nfrom torch.nn.parallel.data_parallel import DataParallel\r\nimport torch.backends.cudnn as cudnn\r\n\r\nsys.path.insert(0, osp.join('..', 'main'))\r\nsys.path.insert(0, osp.join('..', 'data'))\r\nsys.path.insert(0, osp.join('..', 'common'))\r\nfrom config import cfg\r\nfrom model import get_pose_net\r\nfrom dataset import generate_patch_image\r\nfrom utils.pose_utils import process_bbox, pixel2cam\r\nfrom utils.vis import vis_keypoints, vis_3d_multiple_skeleton\r\n\r\ndef parse_args():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\r\n    parser.add_argument('--model_path', type=str, dest='model')\r\n    parser.add_argument('--input_image', type=str, dest='image')\r\n    parser.add_argument('--backbone', type=str, dest='backbone')\r\n    args = parser.parse_args()\r\n\r\n    # test gpus\r\n    if not args.gpu_ids:\r\n        assert 0, print(\"Please set proper gpu ids\")\r\n\r\n    if '-' in args.gpu_ids:\r\n        gpus = args.gpu_ids.split('-')\r\n        gpus[0] = 0 if not gpus[0].isdigit() else int(gpus[0])\r\n        gpus[1] = len(mem_info()) if not gpus[1].isdigit() else int(gpus[1]) + 1\r\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\r\n    return args\r\n\r\n# argument parsing\r\nargs = parse_args()\r\ncfg.set_args(args.gpu_ids)\r\ncudnn.benchmark = True\r\n\r\n# MuCo joint set\r\njoint_num = 18\r\njoints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe')\r\n# 'Pelvis' 'RHip' 'RKnee' 'RAnkle' 'LHip' 'LKnee' 'LAnkle' 'Spine1' 'Neck' 'Head' 'Site' 'LShoulder' 'LElbow' 'LWrist' 'RShoulder' 'RElbow' 'RWrist\r\nflip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13), (17, 18), (19, 20) )\r\n# skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (10, 19), (11, 12), (12, 13), (13, 20), (1, 2), (2, 3), (3, 4), (4, 17), (1, 5), (5, 6), (6, 7), (7, 18) )\r\nskeleton = ( (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15), (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6) )\r\n\r\n# snapshot load\r\nmodel_path = args.model\r\n\r\n# print('Load checkpoint from {}'.format(model_path))\r\nmodel = get_pose_net(args.backbone, False, joint_num)\r\nmodel = DataParallel(model).cuda()\r\n# print(\"after DataParallel\", model)\r\nckpt = torch.load(model_path)\r\n# print(\"ckpt\", ckpt['network'])\r\nmodel.load_state_dict(ckpt['network'])\r\nmodel.eval()\r\n\r\n# prepare input image\r\ntransform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=cfg.pixel_mean, std=cfg.pixel_std)])\r\nimg_path = args.image\r\nassert osp.exists(img_path), 'Cannot find image at ' + img_path\r\noriginal_img = cv2.imread(img_path)\r\noriginal_img_height, original_img_width = original_img.shape[:2]\r\n\r\n# prepare bbox\r\nbbox_list = [\r\n[139.41, 102.25, 222.39, 241.57],\\\r\n[287.17, 61.52, 74.88, 165.61],\\\r\n[540.04, 48.81, 99.96, 223.36],\\\r\n[372.58, 170.84, 266.63, 217.19],\\\r\n[0.5, 43.74, 90.1, 220.09]] # xmin, ymin, width, height\r\nroot_depth_list = [11250.5732421875, 15522.8701171875, 11831.3828125, 8852.556640625, 12572.5966796875] # obtain this from RootNet 
(https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE/tree/master/demo)\r\nassert len(bbox_list) == len(root_depth_list)\r\nperson_num = len(bbox_list)\r\n\r\n# normalized camera intrinsics\r\nfocal = [1500, 1500] # x-axis, y-axis\r\nprincpt = [original_img_width/2, original_img_height/2] # x-axis, y-axis\r\nprint('focal length: (' + str(focal[0]) + ', ' + str(focal[1]) + ')')\r\nprint('principal points: (' + str(princpt[0]) + ', ' + str(princpt[1]) + ')')\r\n\r\n# for each cropped and resized human image, forward it to PoseNet\r\noutput_pose_2d_list = []\r\noutput_pose_3d_list = []\r\nfor n in range(person_num):\r\n    bbox = process_bbox(np.array(bbox_list[n]), original_img_width, original_img_height)\r\n    img, img2bb_trans = generate_patch_image(original_img, bbox, False, 1.0, 0.0, False) \r\n    img = transform(img).cuda()[None,:,:,:]\r\n\r\n    # forward\r\n    with torch.no_grad():\r\n        pose_3d = model(img) # x,y: pixel, z: root-relative depth (mm)\r\n\r\n    # inverse affine transform (restore the crop and resize)\r\n    pose_3d = pose_3d[0].cpu().numpy()\r\n    pose_3d[:,0] = pose_3d[:,0] / cfg.output_shape[1] * cfg.input_shape[1]\r\n    pose_3d[:,1] = pose_3d[:,1] / cfg.output_shape[0] * cfg.input_shape[0]\r\n    pose_3d_xy1 = np.concatenate((pose_3d[:,:2], np.ones_like(pose_3d[:,:1])),1)\r\n    img2bb_trans_001 = np.concatenate((img2bb_trans, np.array([0,0,1]).reshape(1,3)))\r\n    pose_3d[:,:2] = np.dot(np.linalg.inv(img2bb_trans_001), pose_3d_xy1.transpose(1,0)).transpose(1,0)[:,:2]\r\n    output_pose_2d_list.append(pose_3d[:,:2].copy())\r\n    \r\n    # root-relative discretized depth -> absolute continuous depth\r\n    pose_3d[:,2] = (pose_3d[:,2] / cfg.depth_dim * 2 - 1) * (cfg.bbox_3d_shape[0]/2) + root_depth_list[n]\r\n    pose_3d = pixel2cam(pose_3d, focal, princpt)\r\n    output_pose_3d_list.append(pose_3d.copy())\r\n\r\n# visualize 2d poses\r\nvis_img = original_img.copy()\r\nfor n in range(person_num):\r\n    vis_kps = np.zeros((3,joint_num))\r\n    vis_kps[0,:] = output_pose_2d_list[n][:,0]\r\n    vis_kps[1,:] = output_pose_2d_list[n][:,1]\r\n    vis_kps[2,:] = 1\r\n    vis_img = vis_keypoints(vis_img, vis_kps, skeleton)\r\ncv2.imwrite('output_pose_2d.jpg', vis_img)\r\n\r\n# visualize 3d poses\r\nvis_kps = np.array(output_pose_3d_list)\r\nvis_3d_multiple_skeleton(vis_kps, np.ones_like(vis_kps), skeleton, 'output_pose_3d (x,y,z: camera-centered. mm.)')\r\n\r\n"
  },
  {
    "path": "main/config.py",
    "content": "import os\nimport os.path as osp\nimport sys\nimport numpy as np\n\nclass Config:\n\n    ## model architecture\n    backbone = 'LPSKI'\n    \n    ## dataset\n    # training set\n    # 3D: Human36M, MuCo\n    # 2D: MSCOCO, MPII\n    trainset_3d = ['Dummy']\n    # trainset_3d = ['MuCo']\n    trainset_2d = []\n    # trainset_2d = ['MSCOCO']\n\n    # testing set\n    # Human36M, MuPoTS, MSCOCO\n    testset = 'MuPoTS'\n\n    ## directory\n    cur_dir = osp.dirname(os.path.abspath(__file__))\n    root_dir = osp.join(cur_dir, '..')\n    data_dir = osp.join(root_dir, 'data')\n    output_dir = osp.join(root_dir, 'output')\n    model_dir = osp.join(output_dir, 'model_dump')\n    pretrain_dir = osp.join(output_dir, 'pre_train')\n    vis_dir = osp.join(output_dir, 'vis')\n    log_dir = osp.join(output_dir, 'log')\n    result_dir = osp.join(output_dir, 'result')\n    \n    ## input, output\n    input_shape = (256, 256) \n    output_shape = (input_shape[0]//8, input_shape[1]//8)\n    width_multiplier = 1.0\n    depth_dim = 32\n    bbox_3d_shape = (2000, 2000, 2000) # depth, height, width\n    pixel_mean = (0.485, 0.456, 0.406)\n    pixel_std = (0.229, 0.224, 0.225)\n\n    ## training config\n    embedding_size = 2048\n    lr_dec_epoch = [17, 21]\n    end_epoch = 25\n    lr = 1e-3\n    lr_dec_factor = 10\n    batch_size = 64\n\n    ## testing config\n    test_batch_size = 32\n    flip_test = True\n    use_gt_info = True\n\n    ## others\n    num_thread = 20\n    gpu_ids = '0'\n    num_gpus = 1\n    continue_train = False\n\n    if '-' in gpu_ids:\n        gpus = gpu_ids.split('-')\n        gpus[0] = int(gpus[0])\n        gpus[1] = int(gpus[1]) + 1\n        gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\n\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = gpu_ids\n\ncfg = Config()\n\nsys.path.insert(0, osp.join(cfg.root_dir, 'common'))\nfrom utils.dir_utils import add_pypath, make_folder\n# adding path\nadd_pypath(osp.join(cfg.data_dir))\nfor i in range(len(cfg.trainset_3d)):\n    add_pypath(osp.join(cfg.data_dir, cfg.trainset_3d[i]))\nfor i in range(len(cfg.trainset_2d)):\n    add_pypath(osp.join(cfg.data_dir, cfg.trainset_2d[i]))\nadd_pypath(osp.join(cfg.data_dir, cfg.testset))\nmake_folder(cfg.model_dir)\nmake_folder(cfg.vis_dir)\nmake_folder(cfg.log_dir)\nmake_folder(cfg.result_dir)\n\n"
  },
  {
    "path": "main/intermediate.py",
    "content": "import torch\nimport argparse\nimport numpy as np\nimport os\nimport os.path as osp\nimport cv2\nimport matplotlib.pyplot as plt\nimport torch.backends.cudnn as cudnn\nimport torchvision.transforms as transforms\nfrom torchsummary import summary\nfrom torch.nn.parallel.data_parallel import DataParallel\nfrom config import cfg\nfrom model import get_pose_net\nfrom utils.pose_utils import process_bbox, pixel2cam\nfrom utils.vis import vis_keypoints, vis_3d_multiple_skeleton\nfrom dataset import generate_patch_image\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\n    parser.add_argument('--epoch', type=int, dest='test_epoch')\n    parser.add_argument('--input_image', type=str, dest='image')\n    parser.add_argument('--jointnum', type=int, dest='joint')\n    parser.add_argument('--backbone', type=str, dest='backbone')\n    args = parser.parse_args()\n\n    # test gpus\n    if not args.gpu_ids:\n        assert 0, print(\"Please set proper gpu ids\")\n\n    if not args.joint:\n        assert print(\"please insert number of joint\")\n\n    if '-' in args.gpu_ids:\n        gpus = args.gpu_ids.split('-')\n        gpus[0] = 0 if not gpus[0].isdigit() else int(gpus[0])\n        gpus[1] = len(mem_info()) if not gpus[1].isdigit() else int(gpus[1]) + 1\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\n    return args\n\n# argument parsing\nargs = parse_args()\ncfg.set_args(args.gpu_ids)\ncudnn.benchmark = True\n\n# joint set\njoint_num = args.joint\njoints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe')\nflip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13), (17, 18), (19, 20) )\nif joint_num == 18:\n    skeleton = ( (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15), (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6) )\nif joint_num == 21:\n    skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (10, 19), (11, 12), (12, 13), (13, 20), (1, 2), (2, 3), (3, 4), (4, 17), (1, 5), (5, 6), (6, 7), (7, 18) )\n\n# snapshot load\nmodel_path = os.path.join(cfg.model_dir, 'snapshot_%d.pth.tar' % args.test_epoch)\nassert osp.exists(model_path), 'Cannot find model at ' + model_path\nmodel = get_pose_net(args.backbone, args.frontbone, False, joint_num)\nmodel = DataParallel(model).cuda()\nckpt = torch.load(model_path)\nmodel.load_state_dict(ckpt['network'])\nmodel = model.module\nmodel.eval()\n\n# prepare input image\ntransform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=cfg.pixel_mean, std=cfg.pixel_std)])\nimg_path = args.image\nassert osp.exists(img_path), 'Cannot find image at ' + img_path\noriginal_img = cv2.imread(img_path)\noriginal_img_height, original_img_width = original_img.shape[:2]\n\n# prepare bbox\nbbox_list = [\n    [139.41, 102.25, 222.39, 241.57],\\\n    [287.17, 61.52, 74.88, 165.61],\\\n    [540.04, 48.81, 99.96, 223.36],\\\n    [372.58, 170.84, 266.63, 217.19],\\\n    [0.5, 43.74, 90.1, 220.09]\n] # xmin, ymin, width, height\nroot_depth_list = [11250.5732421875, 15522.8701171875, 11831.3828125, 8852.556640625, 12572.5966796875] # obtain this from RootNet (https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE/tree/master/demo)\nassert len(bbox_list) == len(root_depth_list)\nperson_num = 
len(bbox_list)\n\n# extractor\nactivation = {}\ndef get_activation(name):\n    def hook(model, input, output):\n        activation[name] = output.detach()\n    return hook\n\nfor n in range(person_num):\n    bbox = process_bbox(np.array(bbox_list[n]), original_img_width, original_img_height)\n    img, img2bb_trans = generate_patch_image(original_img, bbox, False, 1.0, 0.0, False) \n    img = transform(img).cuda()[None,:,:,:]\n\n    model.backbone.deonv1.register_forward_hook(get_activation('%d' % n))\n    # forward\n    with torch.no_grad():\n        pose_3d = model(img) # x,y: pixel, z: root-relative depth (mm)\n\nplt.figure(figsize=(32, 32))\na = activation['0'] - activation['1']\nb = torch.sum(a, dim=1)\nprint(b)\nfor i in range(person_num):\n    image = activation['%d'%i]\n    print(image.size())\n    sum_image = torch.sum(image[0], dim=0)\n    print(sum_image.size())\n    plt.subplot(1, person_num, i+1)\n    plt.imshow(sum_image.cpu(), cmap='gray')\n    plt.axis('off')\n\nplt.show()\nplt.close()\n"
  },
  {
    "path": "main/model.py",
    "content": "import torch\r\nimport torch.nn as nn\r\nfrom torch.nn import functional as F\r\nfrom backbone import *\r\nfrom config import cfg\r\nimport os.path as osp\r\n\r\nmodel_urls = {\r\n    'MobileNetV2': 'https://download.pytorch.org/models/mobilenet_v2-b0353104.pth',\r\n    'ResNet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\r\n    'ResNet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\r\n    'ResNet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\r\n    'ResNet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\r\n    'ResNet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\r\n    'ResNext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',\r\n    'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',\r\n    'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',\r\n    'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',\r\n}\r\n\r\nBACKBONE_DICT = {\r\n    'LPRES':LpNetResConcat,\r\n    'LPSKI':LpNetSkiConcat,\r\n    'LPWO':LpNetWoConcat\r\n    }\r\n\r\ndef soft_argmax(heatmaps, joint_num):\r\n\r\n    heatmaps = heatmaps.reshape((-1, joint_num, cfg.depth_dim*cfg.output_shape[0]*cfg.output_shape[1]))\r\n    heatmaps = F.softmax(heatmaps, 2)\r\n    heatmaps = heatmaps.reshape((-1, joint_num, cfg.depth_dim, cfg.output_shape[0], cfg.output_shape[1]))\r\n\r\n    accu_x = heatmaps.sum(dim=(2,3))\r\n    accu_y = heatmaps.sum(dim=(2,4))\r\n    accu_z = heatmaps.sum(dim=(3,4))\r\n\r\n    # accu_x = accu_x * torch.nn.parallel.comm.broadcast(torch.arange(1,cfg.output_shape[1]+1).type(torch.cuda.FloatTensor), devices=[accu_x.device.index])[0]\r\n    # accu_y = accu_y * torch.nn.parallel.comm.broadcast(torch.arange(1,cfg.output_shape[0]+1).type(torch.cuda.FloatTensor), devices=[accu_y.device.index])[0]\r\n    # accu_z = accu_z * torch.nn.parallel.comm.broadcast(torch.arange(1,cfg.depth_dim+1).type(torch.cuda.FloatTensor), devices=[accu_z.device.index])[0]\r\n\r\n    accu_x = accu_x * torch.arange(1,cfg.output_shape[1]+1)\r\n    accu_y = accu_y * torch.arange(1,cfg.output_shape[0]+1)\r\n    accu_z = accu_z * torch.arange(1,cfg.depth_dim+1)\r\n\r\n    accu_x = accu_x.sum(dim=2, keepdim=True) -1\r\n    accu_y = accu_y.sum(dim=2, keepdim=True) -1\r\n    accu_z = accu_z.sum(dim=2, keepdim=True) -1\r\n\r\n    coord_out = torch.cat((accu_x, accu_y, accu_z), dim=2)\r\n\r\n    return coord_out\r\n\r\nclass CustomNet(nn.Module):\r\n    def __init__(self, backbone, joint_num):\r\n        super(CustomNet, self).__init__()\r\n        self.backbone = backbone\r\n        self.joint_num = joint_num\r\n\r\n    def forward(self, input_img, target=None):\r\n        fm = self.backbone(input_img)\r\n        coord = soft_argmax(fm, self.joint_num)\r\n\r\n        if target is None:\r\n            return coord\r\n        else:\r\n            target_coord = target['coord']\r\n            target_vis = target['vis']\r\n            target_have_depth = target['have_depth']\r\n\r\n            ## coordinate loss\r\n            loss_coord = torch.abs(coord - target_coord) * target_vis\r\n            loss_coord = (loss_coord[:,:,0] + loss_coord[:,:,1] + loss_coord[:,:,2] * target_have_depth)/3.\r\n            return loss_coord\r\n\r\ndef get_pose_net(backbone_str, is_train, joint_num):\r\n    INPUT_SIZE = cfg.input_shape\r\n    EMBEDDING_SIZE = cfg.embedding_size # feature dimension\r\n    WIDTH_MULTIPLIER 
= cfg.width_multiplier\r\n\r\n    assert INPUT_SIZE == (256, 256)\r\n\r\n    print(\"=\" * 60)\r\n    print(\"{} BackBone Generated\".format(backbone_str))\r\n    print(\"=\" * 60)\r\n    model = CustomNet(BACKBONE_DICT[backbone_str](input_size = INPUT_SIZE, joint_num = joint_num, embedding_size = EMBEDDING_SIZE, width_mult = WIDTH_MULTIPLIER), joint_num)\r\n    if is_train == True:\r\n        model.backbone.init_weights()\r\n    return model\r\n"
  },
  {
    "path": "main/pytorch2coreml.py",
    "content": "import torch\r\nimport argparse\r\nimport coremltools as ct\r\n\r\n\r\nfrom config import cfg\r\nfrom torch.nn.parallel.data_parallel import DataParallel\r\nfrom base import Transformer\r\n\r\ndef parse_args():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\r\n    parser.add_argument('--joint', type=int, dest='joint')\r\n    parser.add_argument('--modelpath', type=str, dest='modelpath')\r\n    parser.add_argument('--backbone', type=str, dest='backbone')\r\n    args = parser.parse_args()\r\n\r\n    # test gpus\r\n    if not args.gpu_ids:\r\n        assert 0, \"Please set proper gpu ids\"\r\n\r\n    if '-' in args.gpu_ids:\r\n        gpus = args.gpu_ids.split('-')\r\n        gpus[0] = int(gpus[0])\r\n        gpus[1] = int(gpus[1]) + 1\r\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\r\n\r\n    return args\r\n\r\nargs = parse_args()\r\n\r\n# modelpath as definite path\r\ntransformer = Transformer(args.backbone, args.joint, args.modelpath)\r\ntransformer._make_model()\r\n\r\nsingle_pytorch_model = transformer.model\r\n\r\ndevice = torch.device('cpu')\r\nsingle_pytorch_model.to(device)\r\n\r\ndummy_input = torch.randn(1, 3, 256, 256)\r\n\r\ntraced_model = torch.jit.trace(single_pytorch_model, dummy_input)\r\n\r\n# Convert to Core ML using the Unified Conversion API\r\nmodel = ct.convert(\r\n    traced_model,\r\n    inputs=[ct.ImageType(name=\"input_1\", shape=dummy_input.shape)], #name \"input_1\" is used in 'quickstart'\r\n)\r\n\r\nmodel.save(\"test.mlmodel\")\r\n"
  },
  {
    "path": "main/pytorch2onnx.py",
    "content": "import onnx\r\nimport torch\r\nimport argparse\r\nimport numpy\r\nimport imageio\r\nimport onnxruntime as ort\r\nimport tensorflow as tf\r\n\r\nfrom config import cfg\r\nfrom torchsummary import summary\r\nfrom base import Transformer\r\nfrom onnx_tf.backend import prepare\r\n\r\ndef parse_args():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\r\n    parser.add_argument('--joint', type=int, dest='joint')\r\n    parser.add_argument('--modelpath', type=str, dest='modelpath')\r\n    parser.add_argument('--backbone', type=str, dest='backbone')\r\n    args = parser.parse_args()\r\n\r\n    # test gpus\r\n    if not args.gpu_ids:\r\n        assert 0, \"Please set proper gpu ids\"\r\n\r\n    if '-' in args.gpu_ids:\r\n        gpus = args.gpu_ids.split('-')\r\n        gpus[0] = int(gpus[0])\r\n        gpus[1] = int(gpus[1]) + 1\r\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\r\n\r\n    return args\r\n\r\nargs = parse_args()\r\n\r\ndummy_input = torch.randn(1, 3, 256, 256, device='cuda')\r\n\r\n# modelpath as definite path\r\ntransformer = Transformer(args.backbone, args.joint, args.modelpath)\r\ntransformer._make_model()\r\n\r\nsingle_pytorch_model = transformer.model\r\n\r\nsummary(single_pytorch_model, (3, 256, 256))\r\n\r\nONNX_PATH=\"../output/baseline.onnx\"\r\n\r\ntorch.onnx.export(\r\n    model=single_pytorch_model,\r\n    args=dummy_input,\r\n    f=ONNX_PATH, # where should it be saved\r\n    verbose=False,\r\n    export_params=True,\r\n    do_constant_folding=False,  # fold constant values for optimization\r\n    # do_constant_folding=True,   # fold constant values for optimization\r\n    input_names=['input'],\r\n    output_names=['output'],\r\n    opset_version=11\r\n)\r\n\r\nonnx_model = onnx.load(ONNX_PATH)\r\nonnx.checker.check_model(onnx_model)\r\nonnx.helper.printable_graph(onnx_model.graph)\r\n\r\npytorch_result = single_pytorch_model(dummy_input)\r\npytorch_result = pytorch_result.cpu().detach().numpy()\r\nprint(\"pytorch_model output {}\".format(pytorch_result.shape), pytorch_result)\r\n\r\nort_session = ort.InferenceSession(ONNX_PATH)\r\noutputs = ort_session.run(None, {'input': dummy_input.cpu().numpy()})\r\noutputs = numpy.array(outputs[0])\r\nprint(\"onnx_model ouput size{}\".format(outputs.shape), outputs)\r\n\r\nprint(\"difference\", numpy.linalg.norm(pytorch_result-outputs))\r\n\r\nTF_PATH = \"../output/baseline\" # where the representation of tensorflow model will be stored\r\n\r\n# prepare function converts an ONNX model to an internel representation\r\n# of the computational graph called TensorflowRep and returns\r\n# the converted representation.\r\ntf_rep = prepare(onnx_model)  # creating TensorflowRep object\r\n\r\n# export_graph function obtains the graph proto corresponding to the ONNX\r\n# model associated with the backend representation and serializes\r\n# to a protobuf file.\r\ntf_rep.export_graph(TF_PATH)\r\n\r\nTFLITE_PATH = \"../output/baseline.tflite\"\r\n\r\nPB_PATH = \"../output/baseline/saved_model.pb\"\r\n\r\n# make a converter object from the saved tensorflow file\r\n# converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(PB_PATH, input_arrays=['input'], output_arrays=['output'])\r\nconverter = tf.lite.TFLiteConverter.from_saved_model(TF_PATH)\r\n\r\n# tell converter which type of optimization techniques to use\r\n# to view the best option for optimization read documentation of tflite about optimization\r\n# go to this link 
https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional\r\n# converter.optimizations = [tf.compat.v1.lite.Optimize.DEFAULT]\r\n\r\n# converter.experimental_new_converter = True\r\n#\r\n# # I had to explicitly state the ops\r\n# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,\r\n#                                        tf.lite.OpsSet.SELECT_TF_OPS]\r\n\r\ndef representative_dataset():\r\n\r\n    dataset_size = 10\r\n\r\n    for i in range(dataset_size):\r\n        print(i)\r\n        data = imageio.imread(\"../sample_images/\" + \"00000\" + str(i) + \".jpg\")\r\n        data = numpy.resize(data, [1, 3, 256, 256])\r\n        yield [data.astype(numpy.float32)]\r\n\r\n\r\nconverter.experimental_new_converter = True\r\nconverter.experimental_new_quantizer = True\r\n\r\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\r\nconverter.representative_dataset = representative_dataset\r\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\r\nconverter.inference_input_type = tf.uint8\r\nconverter.inference_output_type = tf.uint8\r\n\r\n# input_arrays = converter.get_input_arrays()\r\n# converter.quantized_input_stats = {input_arrays[0]: (0.0, 1.0)}\r\n\r\ntf_lite_model = converter.convert()\r\n# Save the model.\r\nwith open(TFLITE_PATH, 'wb') as f:\r\n    f.write(tf_lite_model)\r\n"
  },
  {
    "path": "main/summary.py",
    "content": "import torch\nimport argparse\nimport os\nimport os.path as osp\nimport torch.backends.cudnn as cudnn\nfrom torchsummary import summary\nfrom torch.nn.parallel.data_parallel import DataParallel\nfrom config import cfg\nfrom model import get_pose_net\nfrom thop import profile\nfrom thop import clever_format\nfrom ptflops import get_model_complexity_info\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\n    parser.add_argument('--epoch', type=int, dest='test_epoch')\n    parser.add_argument('--jointnum', type=int, dest='joint')\n    parser.add_argument('--backbone', type=str, dest='backbone')\n    args = parser.parse_args()\n\n    # test gpus\n    if not args.gpu_ids:\n        assert 0, print(\"Please set proper gpu ids\")\n\n    if not args.joint:\n        assert print(\"please insert number of joint\")\n\n    if '-' in args.gpu_ids:\n        gpus = args.gpu_ids.split('-')\n        gpus[0] = 0 if not gpus[0].isdigit() else int(gpus[0])\n        gpus[1] = len(mem_info()) if not gpus[1].isdigit() else int(gpus[1]) + 1\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\n    return args\n\n# argument parsing\nargs = parse_args()\ncfg.set_args(args.gpu_ids)\ncudnn.benchmark = True\n\n# joint set\njoint_num = args.joint\njoints_name = ('Head_top', 'Thorax', 'R_Shoulder', 'R_Elbow', 'R_Wrist', 'L_Shoulder', 'L_Elbow', 'L_Wrist', 'R_Hip', 'R_Knee', 'R_Ankle', 'L_Hip', 'L_Knee', 'L_Ankle', 'Pelvis', 'Spine', 'Head', 'R_Hand', 'L_Hand', 'R_Toe', 'L_Toe')\nflip_pairs = ( (2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13), (17, 18), (19, 20) )\nif joint_num == 18:\n    skeleton = ( (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15), (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6) )\nif joint_num == 21:\n    skeleton = ( (0, 16), (16, 1), (1, 15), (15, 14), (14, 8), (14, 11), (8, 9), (9, 10), (10, 19), (11, 12), (12, 13), (13, 20), (1, 2), (2, 3), (3, 4), (4, 17), (1, 5), (5, 6), (6, 7), (7, 18) )\n\n# snapshot load\nmodel_path = os.path.join(cfg.model_dir, 'snapshot_%d.pth.tar' % args.test_epoch)\nassert osp.exists(model_path), 'Cannot find model at ' + model_path\nmodel = get_pose_net(args.backbone, args.frontbone, False, joint_num)\nmodel = DataParallel(model).cuda()\nckpt = torch.load(model_path)\nmodel.load_state_dict(ckpt['network'])\n\nsingle_model = model.module\n\nsummary(single_model, (3, 256, 256))\n\ninput = torch.randn(1, 3, 256, 256).cuda()\nmacs, params = profile(single_model, inputs=(input,))\nmacs, params = clever_format([macs, params], \"%.3f\")\nflops, params1 = get_model_complexity_info(single_model, (3, 256, 256),as_strings=True, print_per_layer_stat=False)\nprint('{:<30}  {:<8}'.format('Computational complexity: ', flops))\nprint('{:<30}  {:<8}'.format('Computational complexity: ', macs))\nprint('{:<30}  {:<8}'.format('Number of parameters: ', params))\nprint('{:<30}  {:<8}'.format('Number of parameters: ', params1))\n"
  },
  {
    "path": "main/test.py",
    "content": "import argparse\nfrom tqdm import tqdm\nimport numpy as np\nimport cv2\nfrom config import cfg\nimport torch\nfrom base import Tester\nfrom utils.vis import vis_keypoints\nfrom utils.pose_utils import flip\nimport torch.backends.cudnn as cudnn\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\n    parser.add_argument('--epochs', type=str, dest='model')\n    parser.add_argument('--backbone', type=str, dest='backbone')\n    args = parser.parse_args()\n\n    # test gpus\n    if not args.gpu_ids:\n        assert 0, \"Please set proper gpu ids\"\n\n    if '-' in args.gpu_ids:\n        gpus = args.gpu_ids.split('-')\n        gpus[0] = int(gpus[0])\n        gpus[1] = int(gpus[1]) + 1\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\n\n    if '-' in args.model:\n        model_epoch = args.model.split('-')\n        model_epoch[0] = int(model_epoch[0])\n        model_epoch[1] = int(model_epoch[1]) + 1\n        args.model_epoch = model_epoch\n\n    return args\n\ndef main():\n\n    args = parse_args()\n    cfg.set_args(args.gpu_ids)\n    cudnn.fastest = True\n    cudnn.benchmark = True\n    cudnn.deterministic = False\n    cudnn.enabled = True\n\n    tester = Tester(args.backbone)\n    tester._make_batch_generator()\n\n    for epoch in range(args.model_epoch[0], args.model_epoch[1]):\n\n        tester._make_model(epoch)\n\n        preds = []\n\n        with torch.no_grad():\n            for itr, input_img in enumerate(tqdm(tester.batch_generator)):\n\n                # forward\n                coord_out = tester.model(input_img)\n\n                if cfg.flip_test:\n                    flipped_input_img = flip(input_img, dims=3)\n                    flipped_coord_out = tester.model(flipped_input_img)\n                    flipped_coord_out[:, :, 0] = cfg.output_shape[1] - flipped_coord_out[:, :, 0] - 1\n                    for pair in tester.flip_pairs:\n                        flipped_coord_out[:, pair[0], :], flipped_coord_out[:, pair[1], :] = flipped_coord_out[:, pair[1], :].clone(), flipped_coord_out[:, pair[0], :].clone()\n                    coord_out = (coord_out + flipped_coord_out)/2.\n\n                vis = False\n                if vis:\n                    filename = str(itr)\n                    tmpimg = input_img[0].cpu().numpy()\n                    tmpimg = tmpimg * np.array(cfg.pixel_std).reshape(3,1,1) + np.array(cfg.pixel_mean).reshape(3,1,1)\n                    tmpimg = tmpimg.astype(np.uint8)\n                    tmpimg = tmpimg[::-1, :, :]\n                    tmpimg = np.transpose(tmpimg,(1,2,0)).copy()\n                    tmpkps = np.zeros((3,tester.joint_num))\n                    tmpkps[:2,:] = coord_out[0,:,:2].cpu().numpy().transpose(1,0) / cfg.output_shape[0] * cfg.input_shape[0]\n                    tmpkps[2,:] = 1\n                    tmpimg = vis_keypoints(tmpimg, tmpkps, tester.skeleton)\n                    cv2.imwrite(filename + '_output.jpg', tmpimg)\n\n                coord_out = coord_out.cpu().numpy()\n                preds.append(coord_out)\n\n        # evaluate\n        preds = np.concatenate(preds, axis=0)\n        tester._evaluate(preds, cfg.result_dir)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "main/time.py",
    "content": "import torch\nimport argparse\nfrom base import Transformer\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--gpu', type=str, dest='gpu_ids')\n    parser.add_argument('--joint', type=int, dest='joint')\n    parser.add_argument('--modelpath', type=str, dest='modelpath')\n    parser.add_argument('--backbone', type=str, dest='backbone')\n    args = parser.parse_args()\n\n    # test gpus\n    if not args.gpu_ids:\n        assert 0, \"Please set proper gpu ids\"\n\n    if '-' in args.gpu_ids:\n        gpus = args.gpu_ids.split('-')\n        gpus[0] = int(gpus[0])\n        gpus[1] = int(gpus[1]) + 1\n        args.gpu_ids = ','.join(map(lambda x: str(x), list(range(*gpus))))\n\n    return args\n\nargs = parse_args()\n\noptimal_batch_size = 64\n\ntransformer = Transformer(args.backbone, args.joint, args.modelpath)\ntransformer._make_model()\n\nmodel = transformer.model\n\ndevice = torch.device(\"cuda\")\n\ndummy_input = torch.randn(optimal_batch_size, 3, 256, 256, dtype=torch.float).to(device)\n\nrepetitions=100\ntotal_time = 0\n\nwith torch.no_grad():\n    for rep in range(repetitions):\n        starter, ender = torch.cuda.Event(enable_timing=True),   torch.cuda.Event(enable_timing=True)\n        starter.record()\n        _ = model(dummy_input)\n        ender.record()\n        torch.cuda.synchronize()\n        curr_time = starter.elapsed_time(ender)/1000\n        total_time += curr_time\nThroughput = (repetitions*optimal_batch_size)/total_time\nprint('Final Throughput:',Throughput)"
  },
  {
    "path": "main/train.py",
    "content": "import argparse\nfrom config import cfg\nfrom tqdm import tqdm\nimport os.path as osp\nimport numpy as np\nimport torch\nfrom base import Trainer\nfrom utils.pose_utils import flip\nimport torch.backends.cudnn as cudnn\n\n\ndef main():\n    \n    # argument parse and create log\n    cudnn.fastest = True\n    cudnn.benchmark = True\n\n    trainer = Trainer(cfg)\n    trainer._make_batch_generator()\n    trainer._make_model()\n\n    # train\n    for epoch in range(trainer.start_epoch, cfg.end_epoch):\n        \n        trainer.set_lr(epoch)\n        trainer.tot_timer.tic()\n        trainer.read_timer.tic()\n\n        for itr, (input_img, joint_img, joint_vis, joints_have_depth) in enumerate(trainer.batch_generator):\n            trainer.read_timer.toc()\n            trainer.gpu_timer.tic()\n\n            # forward\n            trainer.optimizer.zero_grad()\n            target = {'coord': joint_img, 'vis': joint_vis, 'have_depth': joints_have_depth}\n            loss_coord = trainer.model(input_img, target)\n            loss_coord = loss_coord.mean()\n\n            # backward\n            loss = loss_coord\n            loss.backward()\n            trainer.optimizer.step()\n            \n            trainer.gpu_timer.toc()\n            screen = [\n                'Epoch %d/%d itr %d/%d:' % (epoch, cfg.end_epoch, itr, trainer.itr_per_epoch),\n                'lr: %g' % (trainer.get_lr()),\n                'speed: %.2f(%.2fs r%.2f)s/itr' % (\n                    trainer.tot_timer.average_time, trainer.gpu_timer.average_time, trainer.read_timer.average_time),\n                '%.2fh/epoch' % (trainer.tot_timer.average_time / 3600. * trainer.itr_per_epoch),\n                '%s: %.4f' % ('loss_coord', loss_coord.detach()),\n                ]\n            trainer.logger.info(' '.join(screen))\n            trainer.tot_timer.toc()\n            trainer.tot_timer.tic()\n            trainer.read_timer.tic()\n\n        trainer.save_model({\n            'epoch': epoch,\n            'network': trainer.model.state_dict(),\n            'optimizer': trainer.optimizer.state_dict(),\n        }, epoch)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "requirements.txt",
    "content": "numpy\ntqdm\ntorch\ntorchvision\ntorchsummary\nopencv-python\nmatplotlib\npycocotools\nscipy\n"
  },
  {
    "path": "tool/Human36M/README.MD",
    "content": "## Human3.6M dataset pre-processing code\n\nYou should run the matlab code first, and the python code will convert the output of the matlab code to the json files.\n**You don't have to run this when you downloaded json files from the google drive.** This is to make json files from raw data.\n"
  },
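  {
    "path": "tool/Human36M/run_preprocess_example.sh",
    "content": "#!/bin/bash\n# Hypothetical helper, not part of the original repository: a minimal sketch of the run\n# order described in tool/Human36M/README.MD. It assumes MATLAB is installed, that\n# preprocess_h36m.m has been placed in the Human3.6M Release-v1.1 folder as its header\n# describes, and that the paths inside h36m2coco.py point at the MATLAB output.\n\n# 1) MATLAB pre-processing: extracts frames/masks and writes h36m_meta.mat per sequence.\nmatlab -nodisplay -nosplash -r \"preprocess_h36m; exit\"\n\n# 2) Convert the MATLAB output to COCO-style JSON annotations (Human36M_subject*_*.json).\npython h36m2coco.py\n"
  },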
  {
    "path": "tool/Human36M/h36m2coco.py",
    "content": "import os\nimport os.path as osp\nimport scipy.io as sio\nimport numpy as np\nimport cv2\nimport random\nimport json\nimport math\nfrom tqdm import tqdm\n\nroot_dir = './images' # define path here\nsave_dir = './annotations' # define path here\n\njoint_num = 17\nsubject_list = [1, 5, 6, 7, 8, 9, 11]\naction_idx = (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)\nsubaction_idx = (1, 2)\ncamera_idx = (1, 2, 3, 4)\naction_name = ['Directions', 'Discussion', 'Eating', 'Greeting', 'Phoning', 'Posing', 'Purchases', 'Sitting', 'SittingDown', 'Smoking', 'Photo', 'Waiting', 'Walking', 'WalkDog', 'WalkTogether']\n\ndef load_h36m_annot_file(annot_file):\n    data = sio.loadmat(annot_file)\n    joint_world = data['pose3d_world'] # 3D world coordinates of keypoints\n    R = data['R'] # extrinsic\n    T = np.reshape(data['T'],(3)) # extrinsic\n    f = np.reshape(data['f'],(-1)) # focal legnth\n    c = np.reshape(data['c'],(-1)) # principal points\n    img_heights = np.reshape(data['img_height'],(-1))\n    img_widths = np.reshape(data['img_width'],(-1))\n   \n    return joint_world, R, T, f, c, img_widths, img_heights\n\ndef _H36FolderName(subject_id, act_id, subact_id, camera_id):\n    return \"s_%02d_act_%02d_subact_%02d_ca_%02d\" % \\\n           (subject_id, act_id, subact_id, camera_id)\n\ndef _H36ImageName(folder_name, frame_id):\n    return \"%s_%06d.jpg\" % (folder_name, frame_id + 1)\n\ndef cam2pixel(cam_coord, f, c):\n    x = cam_coord[..., 0] / cam_coord[..., 2] * f[0] + c[0]\n    y = cam_coord[..., 1] / cam_coord[..., 2] * f[1] + c[1]\n    return x,y\n\ndef world2cam(world_coord, R, t):\n    cam_coord = np.dot(R, world_coord - t)\n    return cam_coord\n\ndef get_bbox(joint_img):\n    bbox = np.zeros((4))\n    xmin = np.min(joint_img[:,0])\n    ymin = np.min(joint_img[:,1])\n    xmax = np.max(joint_img[:,0])\n    ymax = np.max(joint_img[:,1])\n    width = xmax - xmin - 1\n    height = ymax - ymin - 1\n    \n    bbox[0] = (xmin + xmax)/2. - width/2*1.2\n    bbox[1] = (ymin + ymax)/2. 
- height/2*1.2\n    bbox[2] = width*1.2\n    bbox[3] = height*1.2\n\n    return bbox\n\nimg_id = 0; annot_id = 0\nfor subject in tqdm(subject_list):\n    cam_param = {}\n    joint_3d = {}\n    images = []; annotations = [];\n    for aid in tqdm(action_idx):\n        for said in tqdm(subaction_idx):\n            for cid in tqdm(camera_idx):\n                folder = _H36FolderName(subject,aid,said,cid)\n                if folder == 's_11_act_02_subact_02_ca_01':\n                    continue\n               \n                joint_world, R, t, f, c, img_widths, img_heights = load_h36m_annot_file(osp.join(root_dir, folder, 'h36m_meta.mat'))\n\n                if str(aid) not in joint_3d:\n                    joint_3d[str(aid)] = {}\n                if str(said) not in joint_3d[str(aid)]:\n                    joint_3d[str(aid)][str(said)] = {}\n\n                img_num = np.shape(joint_world)[0]\n                for n in range(img_num):\n                    img_dict = {}\n                    img_dict['id'] = img_id\n                    img_dict['file_name'] = osp.join(folder, _H36ImageName(folder, n))\n                    img_dict['width'] = int(img_widths[n])\n                    img_dict['height'] = int(img_heights[n])\n                    img_dict['subject'] = subject\n                    img_dict['action_name'] = action_name[aid-2]\n                    img_dict['action_idx'] = aid\n                    img_dict['subaction_idx'] = said\n                    img_dict['cam_idx'] = cid\n                    img_dict['frame_idx'] = n\n                    images.append(img_dict)\n                    \n                    if str(cid) not in cam_param:\n                        cam_param[str(cid)] = {'R': R.tolist(), 't': t.tolist(), 'f': f.tolist(), 'c': c.tolist()}\n                    if str(n) not in joint_3d[str(aid)][str(said)]:\n                        joint_3d[str(aid)][str(said)][str(n)] = joint_world[n].tolist()\n\n                    annot_dict = {}\n                    annot_dict['id'] = annot_id\n                    annot_dict['image_id'] = img_id\n\n                    # project world coordinate to cam, image coordinate space\n                    joint_cam = np.zeros((joint_num,3))\n                    for j in range(joint_num):\n                        joint_cam[j] = world2cam(joint_world[n][j], R, t)\n                    joint_img = np.zeros((joint_num,2))\n                    joint_img[:,0], joint_img[:,1] = cam2pixel(joint_cam, f, c)\n                    joint_vis = (joint_img[:,0] >= 0) * (joint_img[:,0] < img_widths[n]) * (joint_img[:,1] >= 0) * (joint_img[:,1] < img_heights[n])\n                    annot_dict['keypoints_vis'] = joint_vis.tolist()\n                    \n                    bbox = get_bbox(joint_img)\n                    annot_dict['bbox'] = bbox.tolist() # xmin, ymin, width, height\n                    annotations.append(annot_dict)\n\n                    img_id += 1\n                    annot_id += 1\n    \n    data = {'images': images, 'annotations': annotations}\n    with open(osp.join(save_dir, 'Human36M_subject' + str(subject) + '_data.json'), 'w') as f:\n        json.dump(data, f)    \n    with open(osp.join(save_dir, 'Human36M_subject' + str(subject) + '_camera.json'), 'w') as f:\n        json.dump(cam_param, f)\n    with open(osp.join(save_dir, 'Human36M_subject' + str(subject) + '_joint_3d.json'), 'w') as f:\n        json.dump(joint_3d, f)\n"
  },
  {
    "path": "tool/Human36M/preprocess_h36m.m",
    "content": "% Preprocess human3.6m dataset\n% Place this file to the Release-v1.1 folder and run it\n\nfunction preprocess_h36m()\n\n    close all;\n    %clear;\n    %clc;\n\n    addpaths;\n\n    %--------------------------------------------------------------------------\n    % PARAMETERS\n\n    % Subject (1, 5, 6, 7, 8, 9, 11)\n    SUBJECT = [1 5 6 7 8 9 11];\n     \n    % Action (2 ~ 16)\n    ACTION = 2:16;\n    \n    % Subaction (1 ~ 2)\n    SUBACTION = 1:2;\n    \n    % Camera (1 ~ 4)\n    CAMERA = 1:4;\n    \n    num_joint = 17;\n    root_dir = '.'; % define path here\n    \n    % if rgb sequence is declared in the loop, it causes stuck (do not know\n    % reason)\n    rgb_sequence = cell(1,100000000);\n    COUNT = 1;\n    %--------------------------------------------------------------------------\n    % MAIN LOOP\n    % For each subject, action, subaction, and camera..\n    for subject = SUBJECT\n        for action = ACTION\n            for subaction = SUBACTION\n                for camera = CAMERA\n\n                    fprintf('Processing subject %d, action %d, subaction %d, camera %d..\\n', ...\n                        subject, action, subaction, camera);\n\n                    img_save_dir = sprintf('%s/images/s_%02d_act_%02d_subact_%02d_ca_%02d', ...\n                        root_dir, subject, action, subaction, camera);\n                    if ~exist(img_save_dir, 'dir')\n                        mkdir(img_save_dir);\n                    end\n\n                    mask_save_dir = sprintf('%s/masks/s_%02d_act_%02d_subact_%02d_ca_%02d', ...\n                        root_dir, subject, action, subaction, camera);\n                    if ~exist(mask_save_dir, 'dir')\n                        mkdir(mask_save_dir);\n                    end\n\n                    annot_save_dir = sprintf('%s/annotations/s_%02d_act_%02d_subact_%02d_ca_%02d', ...\n                        root_dir, subject, action, subaction, camera);\n                    if ~exist(annot_save_dir, 'dir')\n                        mkdir(annot_save_dir);\n                    end\n\n                    if (subject==11) && (action==2) && (subaction==2) && (camera==1)\n                        fprintf('There is an error in subject 11, action 2, subaction 2, and camera 1\\n');\n                        continue;\n                    end\n                    \n                    % Select sequence\n                    Sequence = H36MSequence(subject, action, subaction, camera);\n\n                    % Get 3D pose and 2D pose\n                    Features{1} = H36MPose3DPositionsFeature(); % 3D world coordinates\n                    Features{1}.Part = 'body'; % Only consider 17 joints\n                    Features{2} = H36MPose3DPositionsFeature('Monocular', true); % 3D camera coordinates\n                    Features{2}.Part = 'body'; % Only consider 17 joints\n                    Features{3} = H36MPose2DPositionsFeature(); % 2D image coordinates\n                    Features{3}.Part = 'body'; % Only consider 17 joints\n                    F = H36MComputeFeatures(Sequence, Features);\n                    num_frame = Sequence.NumFrames;\n                    pose3d_world = reshape(F{1}, num_frame, 3, num_joint);\n                    pose3d = reshape(F{2}, num_frame, 3, num_joint);\n                    pose2d = reshape(F{3}, num_frame, 2, num_joint);\n\n                    % Camera (in global coordinate)\n                    Camera = Sequence.getCamera();\n\n                    % Sanity check\n                    if false\n         
               R = Camera.R; % rotation matrix\n                        T = Camera.T'; % origin of the world coord system\n                        K = [Camera.f(1)    0           Camera.c(1);\n                            0              Camera.f(2) Camera.c(2);\n                            0              0           1]; % f: focal length, c: principal points\n                        error = 0;\n                        for i = 1:num_frame\n                            X = squeeze(pose3d_global(i,:,:));\n                            x = squeeze(pose2d(i,:,:));\n                            px = K*R*(X-T);\n                            px = px ./ px(3,:);\n                            px = px(1:2,:);\n                            error = error + mean(sqrt(sum((px-x).^2, 1)));\n                        end\n                        error = error / num_frame;\n                        fprintf('reprojection error = %.2f (pixels)\\n', error);\n                        keyboard;\n                    end\n\n                    %% Image, bounding box for each sampled frame\n                    fprintf('Load RGB video: ');\n                    rgb_extractor = H36MRGBVideoFeature();\n                    rgb_sequence{COUNT} = rgb_extractor.serializer(Sequence);\n                    fprintf('Done!!\\n');\n                    img_height = zeros(num_frame,1);\n                    img_width = zeros(num_frame,1);\n\n                    fprintf('Load mask video: ');\n                    mask_extractor = H36MMyBGMask();\n                    mask_sequence = mask_extractor.serializer(Sequence);\n                    fprintf('Done!!\\n');\n\n\n               \n                    % For each frame,\n                    for i = 1:num_frame\n                        if mod(i,100) == 1\n                            fprintf('.');\n                        end\n                       \n                        % Save image\n                        % Get data\n                        img = rgb_sequence{COUNT}.getFrame(i);  \n                        [h, w, c] = size(img);\n                        img_height(i) = h;\n                        img_width(i) = w;\n                        img_name = sprintf('%s/s_%02d_act_%02d_subact_%02d_ca_%02d_%06d.jpg', ...\n                            img_save_dir, subject, action, subaction, camera, i);\n                        %imwrite(img, img_name);\n\n                        mask = mask_sequence.Buffer{i};\n                        mask_name = sprintf('%s/s_%02d_act_%02d_subact_%02d_ca_%02d_%06d.jpg', ...\n                            mask_save_dir, subject, action, subaction, camera, i);\n                        imwrite(mask, mask_name);\n                        \n                    end\n                    \n                    COUNT = COUNT + 1;\n                    \n                    % Save data\n                    pose3d_world = permute(pose3d_world,[1,3,2]); % world coordinate 3D keypoint coordinates\n                    R = Camera.R; % rotation matrix\n                    T = Camera.T; % origin of the world coord system\n                    f = Camera.f; % focal length\n                    c = Camera.c; % principal points\n                    filename = sprintf('%s/h36m_meta.mat', annot_save_dir);\n                    %save(filename, 'pose3d_world', 'f', 'c', 'R', 'T', 'img_height', 'img_width');\n                    \n                    fprintf('\\n');\n                    \n                end\n            end\n        end\n    end\n\nend\n\n"
  },
  {
    "path": "vis/coco_img_name.py",
    "content": "import os\nimport os.path as osp\nimport scipy.io as sio\nimport numpy as np\nfrom pycocotools.coco import COCO\nimport json\nimport cv2\nimport random\nimport math\n\nannot_path = osp.join('coco', 'person_keypoints_val2017.json')\n\ndata = []\ndb = COCO(annot_path)\nfp = open('coco_img_name.txt','w') \nfor iid in db.imgs.keys():\n    img = db.imgs[iid]\n    imgname = img['file_name']\n    imgname = 'coco_' + imgname.split('.')[0]\n    fp.write(imgname + '\\n')\nfp.close()\n\n"
  },
  {
    "path": "vis/multi/draw_2Dskeleton.m",
    "content": "function img = draw_2Dskeleton(img_name, pred_2d_kpt, num_joint, skeleton, colorList_joint, colorList_skeleton)\n \n    img = imread(img_name);\n    [imgHeight, imgWidth, dim] = size(image);\n\n    f = figure;\n    set(f, 'visible', 'off');\n    imshow(img);\n    hold on;\n    line_width = 4;\n    \n    num_skeleton = size(skeleton,1);\n\n    num_pred = size(pred_2d_kpt,1);\n    for i = 1:num_pred\n        for j =1:num_skeleton\n            k1 = skeleton(j,1);\n            k2 = skeleton(j,2);\n            plot([pred_2d_kpt(i,k1,1),pred_2d_kpt(i,k2,1)],[pred_2d_kpt(i,k1,2),pred_2d_kpt(i,k2,2)],'Color',colorList_skeleton(j,:),'LineWidth',line_width);\n        end\n        for j=1:num_joint\n            scatter(pred_2d_kpt(i,j,1),pred_2d_kpt(i,j,2),100,colorList_joint(j,:),'filled');\n        end\n    end\n    \n    set(gca,'Units','normalized','Position',[0 0 1 1]);  %# Modify axes size\n\n    frame = getframe(gcf);\n    img = frame.cdata;\n    \n    hold off;\n    close(f); \n\nend\n"
  },
  {
    "path": "vis/multi/draw_3Dpose_coco.m",
    "content": "function draw_3Dpose_coco()\n \n    root_path = '/mnt/hdd1/Data/Human_pose_estimation/COCO/2017/val2017/';\n    save_path = './vis/';\n    num_joint =  17;\n\n    colorList_skeleton = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 178/255 102/255;\n    230/255 230/255 0/255;\n\n    255/255 153/255 255/255;\n    153/255 204/255 255/255;\n\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    ];\n    colorList_joint = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    255/255 153/255 255/255;\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n    153/255 204/255 255/255;\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n    230/255 230/255 0/255;\n    230/255 230/255 0/255;\n    255/255 178/255 102/255;\n\n    ];\n    skeleton = [ [0, 16], [1, 16], [1, 15], [15, 14], [14, 8], [14, 11], [8, 9], [9, 10], [11, 12], [12, 13], [1, 2], [2, 3], [3, 4], [1, 5], [5, 6], [6, 7] ];\n    skeleton = transpose(reshape(skeleton,[2,16])) + 1;\n\n    fp_img_name = fopen('../coco_img_name.txt');\n    preds_2d_kpt = load('preds_2d_kpt_coco.mat');\n    preds_3d_kpt = load('preds_3d_kpt_coco.mat');\n\n    img_name = fgetl(fp_img_name);\n    while ischar(img_name)\n        \n        if isfield(preds_2d_kpt,img_name)\n            pred_2d_kpt = getfield(preds_2d_kpt,img_name);\n            pred_3d_kpt = getfield(preds_3d_kpt,img_name);\n            \n            img_name = strsplit(img_name,'_'); \n            img_name = strcat(img_name{2},'.jpg');\n            img_path = strcat(root_path,img_name);\n            \n            %img = draw_2Dskeleton(img_path,pred_2d_kpt,num_joint,skeleton,colorList_joint,colorList_skeleton);\n            img = imread(img_path);\n            f = draw_3Dskeleton(img,pred_3d_kpt,num_joint,skeleton,colorList_joint,colorList_skeleton);\n            \n            set(gcf, 'InvertHardCopy', 'off');\n            set(gcf,'color','w');\n            mkdir(save_path);\n            saveas(f, strcat(save_path,img_name));\n            close(f);\n        end\n\n        img_name = fgetl(fp_img_name);\n    end\n        \nend\n"
  },
  {
    "path": "vis/multi/draw_3Dpose_mupots.m",
    "content": "function draw_3Dpose_mupots()\n \n    root_path = '/mnt/hdd1/Data/Human_pose_estimation/MU/mupots-3d-eval/MultiPersonTestSet/';\n    save_path = './vis/';\n    num_joint =  17;\n\n    colorList_skeleton = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 178/255 102/255;\n    230/255 230/255 0/255;\n\n    255/255 153/255 255/255;\n    153/255 204/255 255/255;\n\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    ];\n    colorList_joint = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    255/255 153/255 255/255;\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n    153/255 204/255 255/255;\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n    230/255 230/255 0/255;\n    230/255 230/255 0/255;\n    255/255 178/255 102/255;\n\n    ];\n    skeleton = [ [0, 16], [1, 16], [1, 15], [15, 14], [14, 8], [14, 11], [8, 9], [9, 10], [11, 12], [12, 13], [1, 2], [2, 3], [3, 4], [1, 5], [5, 6], [6, 7] ];\n    skeleton = transpose(reshape(skeleton,[2,16])) + 1;\n\n    fp_img_name = fopen('../mupots_img_name.txt');\n    preds_2d_kpt = load('preds_2d_kpt_mupots.mat');\n    preds_3d_kpt = load('preds_3d_kpt_mupots.mat');\n\n    img_name = fgetl(fp_img_name);\n    while ischar(img_name)\n        img_name_split = strsplit(img_name);\n        folder_id = str2double(img_name_split(1)); frame_id = str2double(img_name_split(2));\n        img_name = sprintf('TS%d/img_%06d.jpg',folder_id, frame_id);\n        img_path = strcat(root_path,img_name);\n\n        pred_2d_kpt = getfield(preds_2d_kpt,sprintf('TS%d_img_%06d',folder_id, frame_id));\n        pred_3d_kpt = getfield(preds_3d_kpt,sprintf('TS%d_img_%06d',folder_id, frame_id));\n\n        %img = draw_2Dskeleton(img_path,pred_2d_kpt,num_joint,skeleton,colorList_joint,colorList_skeleton);\n        img = imread(img_path);\n        f = draw_3Dskeleton(img,pred_3d_kpt,num_joint,skeleton,colorList_joint,colorList_skeleton);\n\n        set(gcf, 'InvertHardCopy', 'off');\n        set(gcf,'color','w');\n        mkdir(strcat(save_path,sprintf('TS%d',folder_id)));\n        saveas(f, strcat(save_path,img_name));\n        close(f);\n\n        img_name = fgetl(fp_img_name);\n    end\n        \nend\n"
  },
  {
    "path": "vis/multi/draw_3Dskeleton.m",
    "content": "function f = draw_3Dskeleton(img, pred_3d_kpt, num_joint, skeleton, colorList_joint, colorList_skeleton)\n \n    x = pred_3d_kpt(:,:,1);\n    y = pred_3d_kpt(:,:,2);\n    z = pred_3d_kpt(:,:,3);\n    pred_3d_kpt(:,:,1) = -z;\n    pred_3d_kpt(:,:,2) = x;\n    pred_3d_kpt(:,:,3) = -y;\n\n    [imgHeight, imgWidth, dim] = size(img);\n    \n    figure_height = 450;\n    figure_width = figure_height / imgHeight * imgWidth;\n    f = figure('Position',[100 100 figure_width figure_height]);\n    set(f, 'visible', 'off');\n    hold on;\n    grid on;\n    line_width = 4;\n    point_width = 50;\n \n    num_skeleton = size(skeleton,1);\n\n    num_pred = size(pred_3d_kpt,1);\n    for i = 1:num_pred\n        for j =1:num_skeleton\n            k1 = skeleton(j,1);\n            k2 = skeleton(j,2);\n\n            plot3([pred_3d_kpt(i,k1,1),pred_3d_kpt(i,k2,1)],[pred_3d_kpt(i,k1,2),pred_3d_kpt(i,k2,2)],[pred_3d_kpt(i,k1,3),pred_3d_kpt(i,k2,3)],'Color',colorList_skeleton(j,:),'LineWidth',line_width);\n        end\n        for j=1:num_joint\n            scatter3(pred_3d_kpt(i,j,1),pred_3d_kpt(i,j,2),pred_3d_kpt(i,j,3),point_width,colorList_joint(j,:),'filled');\n        end\n    end\n   \n    set(gca, 'color', [255/255 255/255 255/255]);\n    set(gca,'XTickLabel',[]);\n    set(gca,'YTickLabel',[]);\n    set(gca,'ZTickLabel',[]);\n    \n    x = pred_3d_kpt(:,:,1);\n    xmin = min(x(:)) - 120000;\n    xmax = max(x(:)) + 6000;\n    \n    y = pred_3d_kpt(:,:,2);\n    ymin = min(y(:));\n    ymax = max(y(:));\n\n    z = pred_3d_kpt(:,:,3);\n    zmin = min(z(:));\n    zmax = max(z(:));\n    \n    xlim([xmin xmax]);\n    ylim([ymin ymax]);\n    zlim([zmin zmax]);\n    \n    h_img = surf([xmin;xmin],[ymin ymax;ymin ymax],[zmax zmax;zmin zmin],'CData',img,'FaceColor','texturemap');\n    set(h_img);\n    \n    view(62,27);\nend\n"
  },
  {
    "path": "vis/mupots_img_name.py",
    "content": "import os\nimport os.path as osp\nimport scipy.io as sio\nimport numpy as np\nfrom pycocotools.coco import COCO\nimport json\nimport cv2\nimport random\nimport math\n\nannot_path = osp.join('mupots', 'MuPoTS-3D.json')\n\ndata = []\ndb = COCO(annot_path)\nfp = open('mupots_img_name.txt','w') \nfor iid in db.imgs.keys():\n    img = db.imgs[iid]\n    imgname = img['file_name'].split('/')\n    folder_id = int(imgname[0][2:])\n    frame_id = int(imgname[1].split('.')[0][4:])\n    fp.write(str(folder_id) + ' ' + str(frame_id) + '\\n')\nfp.close()\n\n"
  },
  {
    "path": "vis/single/draw_2Dskeleton.m",
    "content": "function img = draw_2Dskeleton(img_name, pred_2d_kpt, num_joint, skeleton, colorList_joint, colorList_skeleton)\n \n    img = imread(img_name);\n    pred_2d_kpt = squeeze(pred_2d_kpt);\n\n    f = figure;\n    set(f, 'visible', 'off');\n    imshow(img);\n    hold on;\n    line_width = 4;\n    \n    num_skeleton = size(skeleton,1);\n    for j =1:num_skeleton\n        k1 = skeleton(j,1);\n        k2 = skeleton(j,2);\n        plot([pred_2d_kpt(k1,1),pred_2d_kpt(k2,1)],[pred_2d_kpt(k1,2),pred_2d_kpt(k2,2)],'Color',colorList_skeleton(j,:),'LineWidth',line_width);\n    end\n    for j=1:num_joint\n        scatter(pred_2d_kpt(j,1),pred_2d_kpt(j,2),100,colorList_joint(j,:),'filled');\n    end\n    \n    set(gca,'Units','normalized','Position',[0 0 1 1]);  %# Modify axes size\n\n    frame = getframe(gcf);\n    img = frame.cdata;\n    \n    hold off;\n    close(f); \n\nend\n"
  },
  {
    "path": "vis/single/draw_3Dpose_coco.m",
    "content": "function draw_3Dpose_coco()\n    \n    root_path = '/mnt/hdd1/Data/Human_pose_estimation/COCO/2017/val2017/';\n    save_path = './vis/';\n    num_joint =  17;\n    mkdir(save_path);\n\n    colorList_skeleton = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 178/255 102/255;\n    230/255 230/255 0/255;\n\n    255/255 153/255 255/255;\n    153/255 204/255 255/255;\n\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    ];\n    colorList_joint = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    255/255 153/255 255/255;\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n    153/255 204/255 255/255;\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n    230/255 230/255 0/255;\n    230/255 230/255 0/255;\n    255/255 178/255 102/255;\n\n    ];\n    skeleton = [ [0, 16], [1, 16], [1, 15], [15, 14], [14, 8], [14, 11], [8, 9], [9, 10], [11, 12], [12, 13], [1, 2], [2, 3], [3, 4], [1, 5], [5, 6], [6, 7] ];\n    skeleton = transpose(reshape(skeleton,[2,16])) + 1;\n\n    fp_img_name = fopen('../coco_img_name.txt');\n    preds_2d_kpt = load('preds_2d_kpt_coco.mat');\n    preds_3d_kpt = load('preds_3d_kpt_coco.mat');\n    \n    img_name = fgetl(fp_img_name);\n    while ischar(img_name)\n        if isfield(preds_2d_kpt,img_name)\n            pred_2d_kpt = getfield(preds_2d_kpt,img_name);\n            pred_3d_kpt = getfield(preds_3d_kpt,img_name);\n            \n            img_name = strsplit(img_name,'_');\n            img_name = strcat(img_name{2},'.jpg');\n            img_path = strcat(root_path,img_name);\n \n            num_pred = size(pred_2d_kpt,1);\n            for i = 1:num_pred\n\n                img = draw_2Dskeleton(img_path,pred_2d_kpt(i,:,:),num_joint,skeleton,colorList_joint,colorList_skeleton);\n                save_name = strsplit(img_name,'.');\n                save_name = save_name{1};\n                save_name = strcat(save_name,sprintf('_%d_2d.jpg',i));\n                disp(strcat(save_path,save_name));\n                imwrite(img,strcat(save_path,save_name));\n\n                f = draw_3Dskeleton(pred_3d_kpt(i,:,:),num_joint,skeleton,colorList_joint,colorList_skeleton);\n                set(gcf, 'InvertHardCopy', 'off');\n                set(gcf,'color','w');\n                save_name = strsplit(img_name,'.');\n                save_name = save_name{1};\n                save_name = strcat(save_name,sprintf('_%d_3d.jpg',i));\n                saveas(f, strcat(save_path,save_name));\n                close(f);\n            end\n           \n        end\n\n        img_name = fgetl(fp_img_name);\n    end\n        \nend\n"
  },
  {
    "path": "vis/single/draw_3Dpose_mupots.m",
    "content": "function draw_3Dpose_mupots()\n \n    root_path = '/mnt/hdd1/Data/Human_pose_estimation/MU/mupots-3d-eval/MultiPersonTestSet/';\n    save_path = './vis/';\n    num_joint =  17;\n\n    colorList_skeleton = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 178/255 102/255;\n    230/255 230/255 0/255;\n\n    255/255 153/255 255/255;\n    153/255 204/255 255/255;\n\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    ];\n    colorList_joint = [\n    255/255 128/255 0/255;\n    255/255 153/255 51/255;\n    255/255 153/255 153/255;\n    255/255 102/255 102/255;\n    255/255 51/255 51/255;\n    153/255 255/255 153/255;\n    102/255 255/255 102/255;\n    51/255 255/255 51/255;\n    255/255 153/255 255/255;\n    255/255 102/255 255/255;\n    255/255 51/255 255/255;\n    153/255 204/255 255/255;\n    102/255 178/255 255/255;\n    51/255 153/255 255/255;\n    230/255 230/255 0/255;\n    230/255 230/255 0/255;\n    255/255 178/255 102/255;\n\n    ];\n    skeleton = [ [0, 16], [1, 16], [1, 15], [15, 14], [14, 8], [14, 11], [8, 9], [9, 10], [11, 12], [12, 13], [1, 2], [2, 3], [3, 4], [1, 5], [5, 6], [6, 7] ];\n    skeleton = transpose(reshape(skeleton,[2,16])) + 1;\n\n    fp_img_name = fopen('../mupots_img_name.txt');\n    preds_2d_kpt = load('preds_2d_kpt_mupots.mat');\n    preds_3d_kpt = load('preds_3d_kpt_mupots.mat');\n\n    img_name = fgetl(fp_img_name);\n    while ischar(img_name)\n        img_name_split = strsplit(img_name);\n        folder_id = str2double(img_name_split(1)); frame_id = str2double(img_name_split(2));\n        img_name = sprintf('TS%d/img_%06d.jpg',folder_id, frame_id);\n        img_path = strcat(root_path,img_name);\n        mkdir(strcat(save_path,sprintf('TS%d',folder_id)));\n\n        pred_2d_kpt = getfield(preds_2d_kpt,sprintf('TS%d_img_%06d',folder_id, frame_id));\n        pred_3d_kpt = getfield(preds_3d_kpt,sprintf('TS%d_img_%06d',folder_id, frame_id));\n        \n        num_pred = size(pred_2d_kpt,1);\n        for i = 1:num_pred\n\n            img = draw_2Dskeleton(img_path,pred_2d_kpt(i,:,:),num_joint,skeleton,colorList_joint,colorList_skeleton);\n            save_name = sprintf('TS%d/img_%06d_%d_2d.jpg',folder_id, frame_id, i);\n            imwrite(img,strcat(save_path,save_name));\n\n            f = draw_3Dskeleton(pred_3d_kpt(i,:,:),num_joint,skeleton,colorList_joint,colorList_skeleton);\n            set(gcf, 'InvertHardCopy', 'off');\n            set(gcf,'color','w');\n            save_name = sprintf('TS%d/img_%06d_%d_3d.jpg',folder_id, frame_id, i);\n            saveas(f, strcat(save_path,save_name));\n            close(f);\n        end\n\n        img_name = fgetl(fp_img_name);\n    end\n        \nend\n"
  },
  {
    "path": "vis/single/draw_3Dskeleton.m",
    "content": "function f = draw_3Dskeleton(pred_3d_kpt, num_joint, skeleton, colorList_joint, colorList_skeleton)\n    \n    pred_3d_kpt = squeeze(pred_3d_kpt);\n\n    x = pred_3d_kpt(:,1);\n    y = pred_3d_kpt(:,2);\n    z = pred_3d_kpt(:,3);\n    pred_3d_kpt(:,1) = -z;\n    pred_3d_kpt(:,2) = x;\n    pred_3d_kpt(:,3) = -y;\n\n    \n    f = figure;%('Position',[100 100 600 600]);\n    set(f, 'visible', 'off');\n    hold on;\n    grid on;\n    line_width = 6;\n \n    num_skeleton = size(skeleton,1);\n    for j =1:num_skeleton\n        k1 = skeleton(j,1);\n        k2 = skeleton(j,2);\n\n        plot3([pred_3d_kpt(k1,1),pred_3d_kpt(k2,1)],[pred_3d_kpt(k1,2),pred_3d_kpt(k2,2)],[pred_3d_kpt(k1,3),pred_3d_kpt(k2,3)],'Color',colorList_skeleton(j,:),'LineWidth',line_width);\n    end\n    for j=1:num_joint\n        scatter3(pred_3d_kpt(j,1),pred_3d_kpt(j,2),pred_3d_kpt(j,3),100,colorList_joint(j,:),'filled');\n    end\n   \n    set(gca, 'color', [255/255 255/255 255/255]);\n    set(gca,'XTickLabel',[]);\n    set(gca,'YTickLabel',[]);\n    set(gca,'ZTickLabel',[]);\n    \n    x = pred_3d_kpt(:,1);\n    xmin = min(x(:)) - 100;\n    xmax = max(x(:)) + 100;\n    \n    y = pred_3d_kpt(:,2);\n    ymin = min(y(:)) - 100;\n    ymax = max(y(:)) + 100;\n\n    z = pred_3d_kpt(:,3);\n    zmin = min(z(:));\n    zmax = max(z(:)) + 100;\n\n    xcenter = mean(pred_3d_kpt(:,1));\n    ycenter = mean(pred_3d_kpt(:,2));\n    zcenter = mean(pred_3d_kpt(:,3));\n    xmin = xcenter - 1000;\n    xmax = xcenter + 1000;\n    ymin = ycenter - 1000;\n    ymax = ycenter + 1000;\n    zmin = zcenter - 1000;\n    zmax = zcenter + 1000;\n    \n    xlim([xmin xmax]);\n    ylim([ymin ymax]);\n    zlim([zmin zmax]);\n    \n    view(62,7);\nend\n"
  }
]