[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\ndebug\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n# yaml files\nyamls/\n\n# logs\nlogs/\n\n# vim swap files\n*.swp\n\n# data\ndatasets/data\ndatasets/extras\ndatasets/examples/*.obj\nsummary\ncheckpoints\n\n# IDEA\n.idea\n"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"external/neural_renderer\"]\n\tpath = external/neural_renderer\n\turl = https://github.com/daniilidis-group/neural_renderer\n"
  },
  {
    "path": "README.md",
    "content": "# Pixel2Mesh\n\nThis is an implementation of Pixel2Mesh in PyTorch. Besides, we also:\n\n- Provide retrained Pixel2Mesh checkpoints. Besides, the pretrained tensorflow pretrained model provided in [official implementation](https://github.com/nywang16/Pixel2Mesh) is also converted into a PyTorch checkpoint file for convenience.\n- Provide a modified version of Pixel2Mesh whose backbone is ResNet instead of VGG.\n- Clarify some details in previous implementation and provide a flexible training framework.\n\n**If you have any urgent question or issue, please contact jinkuncao@gmail.com.**\n\n\n## Get Started\n\n### Environment\n\nCurrent version only supports training and inference on GPU. It works well under dependencies as follows:\n\n- Ubuntu 16.04 / 18.04\n- Python 3.7\n- PyTorch 1.1\n- CUDA 9.0 (10.0 should also work)\n- OpenCV 4.1\n- Scipy 1.3\n- Scikit-Image 0.15\n\nSome minor dependencies are also needed, for which the latest version provided by conda/pip works well:\n\n> easydict, pyyaml, tensorboardx, trimesh, shapely\n\nTwo another steps to prepare the codebase:\n\n1. `git submodule update --init` to get [Neural Renderer](https://github.com/daniilidis-group/neural_renderer) ready.\n2. `python setup.py install` in directory [external/chamfer](external/chamfer) and `external/neural_renderer` to compile the modules.\n\n### Datasets\n\nWe use [ShapeNet](https://www.shapenet.org/) for model training and evaluation. The official tensorflow implementation provides a subset of ShapeNet for it, you can download it [here](https://drive.google.com/drive/folders/131dH36qXCabym1JjSmEpSQZg4dmZVQid). Extract it and link it to `data_tf` directory as follows. Before that, some meta files [here](https://drive.google.com/file/d/16d9druvCpsjKWsxHmsTD5HSOWiCWtDzo/view?usp=sharing) will help you establish the folder tree, demonstrated as follows.\n\n~~*P.S. In case more data is needed, another larger data package of ShapeNet is also [available](https://drive.google.com/file/d/1Z8gt4HdPujBNFABYrthhau9VZW10WWYe/view). You can extract it and place it in the `data` directory. But this would take much time and needs about 300GB storage.*~~\n\nP.S.S. For the larger data package, we provide a temporal access here on [OneDrive](https://1drv.ms/u/s!AtMVLfbdnqr4nGZjQ8GuPHlEUSg9?e=0dIEbK).\n\n```\ndatasets/data\n├── ellipsoid\n│   ├── face1.obj\n│   ├── face2.obj\n│   ├── face3.obj\n│   └── info_ellipsoid.dat\n├── pretrained\n│   ... (.pth files)\n└── shapenet\n    ├── data (larger data package, optional)\n    │   ├── 02691156\n    │   │   └── 3a123ae34379ea6871a70be9f12ce8b0_02.dat\n    │   ├── 02828884\n    │   └── ...\n    ├── data_tf (standard data used in official implementation)\n    │   ├── 02691156 (put the folders directly in data_tf)\n    │   │   └── 10115655850468db78d106ce0a280f87\n    │   ├── 02828884\n    │   └── ...\n    └── meta\n        ...\n```\n\nDifference between the two versions of dataset is worth some explanation:\n\n- `data_tf` has images of 137x137 resolution and four channels (RGB + alpha), 175,132 samples for training and 43,783 for evaluation.\n- `data` has RGB images of 224x224 resolution with background set all white. It contains altogether 1,050,240 for training and evaluation.\n\n*P.S. We trained model with both datasets and evaluated on both benchmarks. To save time and align our results with the official paper/implementation, we use `data_tf` by default.*\n\n### Usage\n\n#### Configuration\n\nYou can modify configuration in a `yml` file for training/evaluation. It overrides dsefault settings in `options.py`. We provide some examples in the `experiments` directory. \n\n#### Training\n\n```\npython entrypoint_train.py --name xxx --options path/to/yaml\n```\n\n*P.S. To train on slurm clusters, we also provide settings reference. Refer to [slurm](slurm) folder for details.*\n\n#### Evaluation\n\n```shell\npython entrypoint_eval.py --name xxx --options path/to/yml --checkpoint path/to/checkpoint\n```\n\n#### Inference\n\nYou can do inference on your own images by a simple command:\n\n``` \npython entrypoint_predict.py --options /path/to/yml --checkpoint /path/to/checkpoint --folder /path/to/images\n```\n\n*P.S. we only support do training/evaluation/inference with GPU by default.*\n\n## Results\n\nWe tested performance of some models. The [official tensorflow implementation](https://github.com/nywang16/Pixel2Mesh) reports much higher performance than claimed in the [original paper](https://arxiv.org/abs/1804.01654) as follows. The results are listed as follows, which is close to that reported in [MeshRCNN](https://arxiv.org/abs/1906.02739).  The original paper evaluates result on simple mean, without considerations of different categories containing different number of samples, while some later papers use weighted-mean. We report results under both two metrics for caution.\n\n<table>\n  <thead>\n    <tr>\n      <th>Checkpoint</th>\n      <th>Eval Protocol\n      <th>CD</th>\n      <th>F1<sup>τ</sup></th>\n      <th>F1<sup>2τ</sup></th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <td rowspan=2>Official Pretrained (tensorflow)</td>\n      <td>Mean</td>\n      <td>0.482</td>\n      <td>65.22</td>\n      <td>78.80</td>\n    </tr>\n    <tr>\n      <td>Weighted-mean</td>\n      <td>0.439</td>\n      <td><b>66.56</b></td>\n      <td><b>80.17</b></td>\n    </tr>\n    <tr>\n      <td rowspan=2>Migrated Checkpoint</td>\n      <td>Mean</td>\n      <td>0.498</td>\n      <td>64.21</td>\n      <td>78.03</td>\n    </tr>\n    <tr>\n      <td>Weighted-mean</td>\n      <td>0.451</td>\n      <td>65.67</td>\n      <td>79.51</td>\n    </tr>\n    <tr>\n      <td rowspan=2>ResNet</td>\n      <td>Mean</td>\n      <td><b>0.443</b></td>\n      <td><b>65.36</b></td>\n      <td><b>79.24</b></td>\n    </tr>\n    <tr>\n      <td>Weighted-mean</td>\n      <td><b>0.411</b></td>\n      <td>66.13</td>\n      <td>80.13</td>\n    </tr>\n  </tbody> \n</table>\n\n*P.S. Due to time limit, the resnet checkpoint has not been trained in detail and sufficiently.*\n\n### Pretrained checkpoints\n\n- **VGG backbone:** The checkpoint converted from official pretrained model  (based on VGG) can be downloaded [here](https://drive.google.com/file/d/1Gk3M4KQekEenG9qQm60OFsxNar0sG8bN/view?usp=sharing). (scripts to migrate tensorflow checkpoints into `.pth` are available in `utils/migrations`. )\n- **ResNet backbone:** As we provide another backbone choice of resenet, we also provide a corresponding checkpoint [here](https://drive.google.com/file/d/1pZm_IIWDUDje6gRZHW-GDhx5FCDM2Qg_/view?usp=sharing). \n\n## Details of Improvement\n\nWe explain some improvement of this version of implementation compared with the official version here.\n\n- **Larger batch size:** We support larger batch size on multiple GPUs for training. Since Chamfer distances cannot be calculated if samples in a batch with different ground-truth pointcloud size, \"resizing\" the pointcloud is necessary. Instead of resampling points, we simply upsample/downsample from the dataset.\n- **Better backbone:** We enable replacing VGG by ResNet50 for model backbone. The training progress is more stable and final performance is higher.\n- **More stable training:** We do normalization on the deformed sphere, so that it's deformed at location $(0,0,0)$; we use a threshold activation on $z$-axis during projection, so that $z$ will always be positive or negative and never be $0$. These seem not to result in better performance but more stable training loss.\n\n## Demo\n\nGenerated mesh samples are provided in [datasets/examples](datasets/examples) from our ResNet model. Three mesh models in a line are deformed from a single ellipsoid mesh with different number of vertices (156 vs 268 vs 2466) as configurated in the original paper. \n\n![](datasets/examples/airplane.gif)\n\n![](datasets/examples/lamp.gif)\n\n![](datasets/examples/table.gif)\n\n![](datasets/examples/display.gif)\n\n## Acknowledgements\n\nOur work is based on the official version of [Pixel2Mesh](https://github.com/nywang16/Pixel2Mesh); Some part of code are borrowed from [a previous PyTorch implementation of Pixel2Mesh](https://github.com/Tong-ZHAO/Pixel2Mesh-Pytorch). The packed files for two version of datasets are also provided by them two. Most codework is done by [Yuge Zhang](https://github.com/ultmaster).\n"
  },
  {
    "path": "config.py",
    "content": "import os\n\n# dataset root\nDATASET_ROOT = \"datasets/data\"\nSHAPENET_ROOT = os.path.join(DATASET_ROOT, \"shapenet\")\nIMAGENET_ROOT = os.path.join(DATASET_ROOT, \"imagenet\")\n\n# ellipsoid path\nELLIPSOID_PATH = os.path.join(DATASET_ROOT, \"ellipsoid/info_ellipsoid.dat\")\n\n# pretrained weights path\nPRETRAINED_WEIGHTS_PATH = {\n    \"vgg16\": os.path.join(DATASET_ROOT, \"pretrained/vgg16-397923af.pth\"),\n    \"resnet50\": os.path.join(DATASET_ROOT, \"pretrained/resnet50-19c8e357.pth\"),\n    \"vgg16p2m\": os.path.join(DATASET_ROOT, \"pretrained/vgg16-p2m.pth\"),\n}\n\n# Mean and standard deviation for normalizing input image\nIMG_NORM_MEAN = [0.485, 0.456, 0.406]\nIMG_NORM_STD = [0.229, 0.224, 0.225]\nIMG_SIZE = 224\n"
  },
  {
    "path": "datasets/base_dataset.py",
    "content": "from torch.utils.data.dataset import Dataset\nfrom torchvision.transforms import Normalize\n\nimport config\n\n\nclass BaseDataset(Dataset):\n\n    def __init__(self):\n        self.normalize_img = Normalize(mean=config.IMG_NORM_MEAN, std=config.IMG_NORM_STD)\n"
  },
  {
    "path": "datasets/imagenet.py",
    "content": "import os\n\nimport numpy as np\n\nfrom torch.utils.data import Dataset\nfrom torchvision import transforms\n\nfrom PIL import ImageFile, Image\n\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\n\nclass ImageNet(Dataset):\n\n    def __init__(self, root_dir, split=\"train\"):\n        self.image_dir = os.path.join(root_dir, split)\n        self.images = []\n        self.labels = []\n        with open(os.path.join(root_dir, \"meta\", split + \".txt\"), \"r\") as f:\n            for line in f.readlines():\n                image, label = line.strip().split()\n                self.images.append(image)\n                self.labels.append(int(label))\n\n        self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                                              std=[0.229, 0.224, 0.225])\n        if split == \"train\":\n            self.transform = transforms.Compose([\n                transforms.RandomResizedCrop(224),\n                transforms.RandomHorizontalFlip(),\n                transforms.ToTensor(),\n                self.normalize\n            ])\n        else:\n            self.transform = transforms.Compose([\n                transforms.Resize(256),\n                transforms.CenterCrop(224),\n                transforms.ToTensor(),\n                self.normalize\n            ])\n\n    def __getitem__(self, index):\n        image = Image.open(os.path.join(self.image_dir, self.images[index]))\n        image = image.convert('RGB')\n        image = self.transform(image)\n        return {\n            \"images\": image,\n            \"labels\": self.labels[index],\n            \"filename\": self.images[index],\n        }\n\n    def __len__(self):\n        return len(self.images)\n"
  },
  {
    "path": "datasets/preprocess/shapenet/.gitignore",
    "content": "data"
  },
  {
    "path": "datasets/shapenet.py",
    "content": "import json\nimport os\nimport pickle\n\nimport numpy as np\nimport torch\nfrom PIL import Image\nfrom skimage import io, transform\nfrom torch.utils.data.dataloader import default_collate\n\nimport config\nfrom datasets.base_dataset import BaseDataset\n\n\nclass ShapeNet(BaseDataset):\n    \"\"\"\n    Dataset wrapping images and target meshes for ShapeNet dataset.\n    \"\"\"\n\n    def __init__(self, file_root, file_list_name, mesh_pos, normalization, shapenet_options):\n        super().__init__()\n        self.file_root = file_root\n        with open(os.path.join(self.file_root, \"meta\", \"shapenet.json\"), \"r\") as fp:\n            self.labels_map = sorted(list(json.load(fp).keys()))\n        self.labels_map = {k: i for i, k in enumerate(self.labels_map)}\n        # Read file list\n        with open(os.path.join(self.file_root, \"meta\", file_list_name + \".txt\"), \"r\") as fp:\n            self.file_names = fp.read().split(\"\\n\")[:-1]\n        self.tensorflow = \"_tf\" in file_list_name # tensorflow version of data\n        self.normalization = normalization\n        self.mesh_pos = mesh_pos\n        self.resize_with_constant_border = shapenet_options.resize_with_constant_border\n\n    def __getitem__(self, index):\n        if self.tensorflow:\n            filename = self.file_names[index][17:]\n            label = filename.split(\"/\", maxsplit=1)[0]\n            pkl_path = os.path.join(self.file_root, \"data_tf\", filename)\n            img_path = pkl_path[:-4] + \".png\"\n            with open(pkl_path) as f:\n                data = pickle.load(open(pkl_path, 'rb'), encoding=\"latin1\")\n            pts, normals = data[:, :3], data[:, 3:]\n            img = io.imread(img_path)\n            img[np.where(img[:, :, 3] == 0)] = 255\n            if self.resize_with_constant_border:\n                img = transform.resize(img, (config.IMG_SIZE, config.IMG_SIZE),\n                                       mode='constant', anti_aliasing=False)  # to match behavior of old versions\n            else:\n                img = transform.resize(img, (config.IMG_SIZE, config.IMG_SIZE))\n            img = img[:, :, :3].astype(np.float32)\n        else:\n            label, filename = self.file_names[index].split(\"_\", maxsplit=1)\n            with open(os.path.join(self.file_root, \"data\", label, filename), \"rb\") as f:\n                data = pickle.load(f, encoding=\"latin1\")\n            img, pts, normals = data[0].astype(np.float32) / 255.0, data[1][:, :3], data[1][:, 3:]\n\n        pts -= np.array(self.mesh_pos)\n        assert pts.shape[0] == normals.shape[0]\n        length = pts.shape[0]\n\n        img = torch.from_numpy(np.transpose(img, (2, 0, 1)))\n        img_normalized = self.normalize_img(img) if self.normalization else img\n\n        return {\n            \"images\": img_normalized,\n            \"images_orig\": img,\n            \"points\": pts,\n            \"normals\": normals,\n            \"labels\": self.labels_map[label],\n            \"filename\": filename,\n            \"length\": length\n        }\n\n    def __len__(self):\n        return len(self.file_names)\n\n\nclass ShapeNetImageFolder(BaseDataset):\n\n    def __init__(self, folder, normalization, shapenet_options):\n        super().__init__()\n        self.normalization = normalization\n        self.resize_with_constant_border = shapenet_options.resize_with_constant_border\n        self.file_list = []\n        for fl in os.listdir(folder):\n            file_path = os.path.join(folder, fl)\n            # check image before hand\n            try:\n                if file_path.endswith(\".gif\"):\n                    raise ValueError(\"gif's are results. Not acceptable\")\n                Image.open(file_path)\n                self.file_list.append(file_path)\n            except (IOError, ValueError):\n                print(\"=> Ignoring %s because it's not a valid image\" % file_path)\n\n    def __getitem__(self, item):\n        img_path = self.file_list[item]\n        img = io.imread(img_path)\n\n        if img.shape[2] > 3:  # has alpha channel\n            img[np.where(img[:, :, 3] == 0)] = 255\n\n        if self.resize_with_constant_border:\n            img = transform.resize(img, (config.IMG_SIZE, config.IMG_SIZE),\n                                   mode='constant', anti_aliasing=False)\n        else:\n            img = transform.resize(img, (config.IMG_SIZE, config.IMG_SIZE))\n        img = img[:, :, :3].astype(np.float32)\n\n        img = torch.from_numpy(np.transpose(img, (2, 0, 1)))\n        img_normalized = self.normalize_img(img) if self.normalization else img\n\n        return {\n            \"images\": img_normalized,\n            \"images_orig\": img,\n            \"filepath\": self.file_list[item]\n        }\n\n    def __len__(self):\n        return len(self.file_list)\n\n\ndef get_shapenet_collate(num_points):\n    \"\"\"\n    :param num_points: This option will not be activated when batch size = 1\n    :return: shapenet_collate function\n    \"\"\"\n    def shapenet_collate(batch):\n        if len(batch) > 1:\n            all_equal = True\n            for t in batch:\n                if t[\"length\"] != batch[0][\"length\"]:\n                    all_equal = False\n                    break\n            points_orig, normals_orig = [], []\n            if not all_equal:\n                for t in batch:\n                    pts, normal = t[\"points\"], t[\"normals\"]\n                    length = pts.shape[0]\n                    choices = np.resize(np.random.permutation(length), num_points)\n                    t[\"points\"], t[\"normals\"] = pts[choices], normal[choices]\n                    points_orig.append(torch.from_numpy(pts))\n                    normals_orig.append(torch.from_numpy(normal))\n                ret = default_collate(batch)\n                ret[\"points_orig\"] = points_orig\n                ret[\"normals_orig\"] = normals_orig\n                return ret\n        ret = default_collate(batch)\n        ret[\"points_orig\"] = ret[\"points\"]\n        ret[\"normals_orig\"] = ret[\"normals\"]\n        return ret\n\n    return shapenet_collate"
  },
  {
    "path": "entrypoint_eval.py",
    "content": "import argparse\nimport sys\n\nfrom functions.evaluator import Evaluator\nfrom options import update_options, options, reset_options\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Pixel2Mesh Evaluation Entrypoint')\n    parser.add_argument('--options', help='experiment options file name', required=False, type=str)\n\n    args, rest = parser.parse_known_args()\n    if args.options is None:\n        print(\"Running without options file...\", file=sys.stderr)\n    else:\n        update_options(args.options)\n\n    parser.add_argument('--batch-size', help='batch size', type=int)\n    parser.add_argument('--shuffle', help='shuffle samples', default=False, action='store_true')\n    parser.add_argument('--checkpoint', help='trained checkpoint file', type=str, required=True)\n    parser.add_argument('--version', help='version of task (timestamp by default)', type=str)\n    parser.add_argument('--name', help='subfolder name of this experiment', required=True, type=str)\n    parser.add_argument('--gpus', help='number of GPUs to use', type=int)\n\n    args = parser.parse_args()\n\n    return args\n\n\ndef main():\n    args = parse_args()\n    logger, writer = reset_options(options, args, phase='eval')\n\n    evaluator = Evaluator(options, logger, writer)\n    evaluator.evaluate()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "entrypoint_predict.py",
    "content": "import argparse\nimport sys\n\nfrom functions.predictor import Predictor\nfrom options import update_options, options, reset_options\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Pixel2Mesh Prediction Entrypoint')\n    parser.add_argument('--options', help='experiment options file name', required=False, type=str)\n\n    args, rest = parser.parse_known_args()\n    if args.options is None:\n        print(\"Running without options file...\", file=sys.stderr)\n    else:\n        update_options(args.options)\n\n    parser.add_argument('--batch-size', help='batch size', type=int)\n    parser.add_argument('--checkpoint', help='trained model file', type=str, required=True)\n    parser.add_argument('--name', required=True, type=str)\n    parser.add_argument('--folder', required=True, type=str)\n\n    options.dataset.name += '_demo'\n\n    args = parser.parse_args()\n\n    return args\n\n\ndef main():\n    args = parse_args()\n    logger, writer = reset_options(options, args, phase='predict')\n\n    predictor = Predictor(options, logger, writer)\n    predictor.predict()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "entrypoint_train.py",
    "content": "import argparse\nimport sys\n\nfrom functions.trainer import Trainer\nfrom options import update_options, options, reset_options\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Pixel2Mesh Training Entrypoint')\n    parser.add_argument('--options', help='experiment options file name', required=False, type=str)\n\n    args, rest = parser.parse_known_args()\n    if args.options is None:\n        print(\"Running without options file...\", file=sys.stderr)\n    else:\n        update_options(args.options)\n\n    # training\n    parser.add_argument('--batch-size', help='batch size', type=int)\n    parser.add_argument('--checkpoint', help='checkpoint file', type=str)\n    parser.add_argument('--num-epochs', help='number of epochs', type=int)\n    parser.add_argument('--version', help='version of task (timestamp by default)', type=str)\n    parser.add_argument('--name', required=True, type=str)\n\n    args = parser.parse_args()\n\n    return args\n\n\ndef main():\n    args = parse_args()\n    logger, writer = reset_options(options, args)\n\n    trainer = Trainer(options, logger, writer)\n    trainer.train()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "experiments/backbone/vgg16.yml",
    "content": "dataset:\n  name: imagenet\n  num_classes: 1000\ntrain:\n  num_epochs: 80\n  batch_size: 32\nmodel:\n  name: classifier\n  backbone: vgg16\noptim:\n  name: sgd\n  lr: 1.0e-2\n  wd: 5.0e-4\n  lr_step:\n    - 20\n    - 40\n    - 60\ntest:\n  batch_size: 32\nnum_workers: 16\nnum_gpus: 8\n"
  },
  {
    "path": "experiments/backbone/vgg16_1e-3.yml",
    "content": "based_on:\n  - vgg16.yml\noptim:\n  lr: 1.0e-4\n"
  },
  {
    "path": "experiments/backbone/vgg16_1e-4.yml",
    "content": "based_on:\n  - vgg16.yml\noptim:\n  lr: 1.0e-3\n"
  },
  {
    "path": "experiments/baseline/chamfer_only.yml",
    "content": "based_on:\n  - default.yml\nloss:\n  weights:\n    normal: 0.\n    laplace: 0.\n    edge: 0."
  },
  {
    "path": "experiments/baseline/default.yml",
    "content": "num_gpus: 8\nnum_workers: 16\ntrain:\n  batch_size: 24\ntest:\n  batch_size: 24\n"
  },
  {
    "path": "experiments/baseline/default_zthresh.yml",
    "content": "based_on:\n  - default.yml\nmodel:\n  z_threshold: -0.05\n"
  },
  {
    "path": "experiments/baseline/large_laplace.yml",
    "content": "based_on:\n  - default.yml\nloss:\n  weights:\n    laplace: 45.0\n    move: 3.0\n"
  },
  {
    "path": "experiments/baseline/lr_1e-3_weighted_chamfer.yml",
    "content": "based_on:\n  - lr_1e-4_weighted_chamfer.yml\noptim:\n  lr: 1.0E-3\n"
  },
  {
    "path": "experiments/baseline/lr_1e-3_weighted_chamfer_oppo.yml",
    "content": "based_on:\n  - lr_1e-4_weighted_chamfer_oppo.yml\noptim:\n  lr: 1.0E-3\n"
  },
  {
    "path": "experiments/baseline/lr_1e-3_zthresh.yml",
    "content": "based_on:\n  - default.yml\noptim:\n  lr: 1.0E-3\nmodel:\n  z_threshold: -0.05\n"
  },
  {
    "path": "experiments/baseline/lr_1e-3_zthresh_resnet.yml",
    "content": "based_on:\n  - lr_1e-3_zthresh.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\ntest:\n  batch_size: 8\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4.yml",
    "content": "based_on:\n  - default.yml\noptim:\n  lr: 1.0E-4\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_dataset_all.yml",
    "content": "based_on:\n  - lr_1e-4.yml\ndataset:\n  subset_train: train_all\n  subset_eval: test_all\noptim:\n  lr_factor: 0.2\n  lr_step:\n    - 25\n    - 45\ntrain:\n  num_epochs: 60"
  },
  {
    "path": "experiments/baseline/lr_1e-4_dataset_tf_same_weights_step_adjusted.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_tf_sample_9k.yml\nmodel:\n  backbone: vgg16\ntrain:\n  batch_size: 24\ntest:\n  batch_size: 24\nloss:\n  weights:\n    chamfer_opposite: 0.55\n    laplace: 0.5\n    edge: 0.1\n    move: 0.033\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_dataset_tf_same_weights_step_adjusted_more_epochs.yml",
    "content": "based_on:\n  - lr_1e-4_dataset_tf_same_weights_step_adjusted.yml\ntrain:\n  num_epochs: 110\noptim:\n  lr_step:\n    - 40\n    - 80\n    - 100\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_k250_d256.yml",
    "content": "based_on:\n  - lr_1e-4_dataset_all.yml\nmodel:\n  hidden_dim: 256\n  last_hidden_dim: 128\ndataset:\n  camera_f: [250., 250.]\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_plane_only.yml",
    "content": "based_on:\n  - lr_1e-4.yml\ntrain:\n  num_epochs: 100\noptim:\n  lr_step:\n    - 60\n    - 90\ndataset:\n  subset_train: train_plane\n  subset_eval: test_plane\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_all.yml",
    "content": "based_on:\n  - lr_1e-4.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\ntest:\n  batch_size: 8\ndataset:\n  subset_train: train_all\n  subset_eval: test_all\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_all_larger_sample.yml",
    "content": "based_on:\n  - lr_1e-4.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\n  num_epochs: 70\ntest:\n  batch_size: 8\ndataset:\n  subset_train: train_all\n  subset_eval: test_all\n  shapenet:\n    num_points: 5000\noptim:\n  lr_factor: 0.3\n  lr_step:\n    - 25\n    - 45\n    - 60\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_all_sample_9k.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_all_larger_sample.yml\ndataset:\n  shapenet:\n    num_points: 9000\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_tf_larger_sample.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_all_larger_sample.yml\ndataset:\n  subset_train: train_tf\n  subset_eval: test_tf\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_tf_same_weights_step_adjusted.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_tf_sample_9k.yml\nloss:\n  weights:\n    chamfer_opposite: 0.55\n    laplace: 0.5\n    edge: 0.1\n    move: 0.033\noptim:\n  lr_step:\n    - 30\n    - 70\n    - 90\ntrain:\n  num_epochs: 110"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_tf_sample_9k.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_all_sample_9k.yml\ndataset:\n  subset_train: train_tf\n  subset_eval: test_tf\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_tf_sample_9k_more_epochs.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_tf_sample_9k.yml\ntrain:\n  num_epochs: 110\noptim:\n  lr_step:\n    - 40\n    - 70\n    - 90\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_dataset_tf_sample_9k_more_epochs_same_weights.yml",
    "content": "based_on:\n  - lr_1e-4_resnet_dataset_tf_sample_9k_more_epochs.yml\nloss:\n  weights:\n    chamfer_opposite: 0.55\n    laplace: 0.5\n    edge: 0.1\n    move: 0.033"
  },
  {
    "path": "experiments/baseline/lr_1e-4_resnet_k250_d256.yml",
    "content": "based_on:\n  - lr_1e-4_k250_d256.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\ntest:\n  batch_size: 8\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_wd_1e-8.yml",
    "content": "based_on:\n  - lr_1e-4.yml\noptim:\n  wd: 1.0e-8\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_weighted_chamfer.yml",
    "content": "based_on:\n  - lr_1e-4.yml\nloss:\n  weights:\n    chamfer: [0.05, 0.4, 2.]\n    chamfer_opposite: 0.55"
  },
  {
    "path": "experiments/baseline/lr_1e-4_weighted_chamfer_oppo.yml",
    "content": "based_on:\n  - lr_1e-4.yml\nloss:\n  weights:\n    chamfer_opposite: 0.55\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_zthresh.yml",
    "content": "based_on:\n  - lr_1e-4.yml\nmodel:\n  z_threshold: -0.05\n"
  },
  {
    "path": "experiments/baseline/lr_1e-4_zthresh_resnet.yml",
    "content": "based_on:\n  - lr_1e-4_zthresh.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\ntest:\n  batch_size: 8\n"
  },
  {
    "path": "experiments/baseline/lr_1e-5.yml",
    "content": "based_on:\n  - default.yml\noptim:\n  lr: 1.0E-5\n"
  },
  {
    "path": "experiments/baseline/lr_1e-5_dataset_tf_same_weights_step_adjusted.yml",
    "content": "based_on:\n  - lr_1e-4_dataset_tf_same_weights_step_adjusted.yml\noptim:\n  lr: 1.0e-5"
  },
  {
    "path": "experiments/baseline/lr_2.5e-5.yml",
    "content": "based_on:\n  - default.yml\noptim:\n  lr: 2.5E-5\n"
  },
  {
    "path": "experiments/baseline/lr_3e-5_dataset_tf_same_weights_step_adjusted.yml",
    "content": "based_on:\n  - lr_1e-4_dataset_tf_same_weights_step_adjusted.yml\noptim:\n  lr: 3.0e-5"
  },
  {
    "path": "experiments/baseline/lr_5e-4_zthresh_resnet.yml",
    "content": "based_on:\n  - lr_1e-4_zthresh_resnet.yml\noptim:\n  lr: 5.0e-4"
  },
  {
    "path": "experiments/baseline/lr_5e-5_dataset_all_more_epochs.yml",
    "content": "based_on:\n  - lr_1e-4_dataset_all.yml\noptim:\n  lr: 5.0e-5\n  lr_factor: 0.2\n  lr_step:\n    - 40\n    - 70\n    - 90\ntrain:\n  num_epochs: 100"
  },
  {
    "path": "experiments/baseline/normal_free.yml",
    "content": "based_on:\n  - default.yml\nloss:\n  weights:\n    normal: 0.\n"
  },
  {
    "path": "experiments/baseline/relu_free.yml",
    "content": "based_on:\n  - default.yml\nmodel:\n  gconv_activation: false\n"
  },
  {
    "path": "experiments/baseline/resnet.yml",
    "content": "based_on:\n  - default.yml\nmodel:\n  backbone: resnet50\ntrain:\n  batch_size: 8\ntest:\n  batch_size: 8\n"
  },
  {
    "path": "experiments/default/resnet.yml",
    "content": "checkpoint: null\ncheckpoint_dir: checkpoints\ndataset:\n  camera_c:\n  - 111.5\n  - 111.5\n  camera_f:\n  - 248.0\n  - 248.0\n  mesh_pos:\n  - 0.0\n  - 0.0\n  - -0.8\n  name: shapenet\n  normalization: true\n  num_classes: 13\n  predict:\n    folder: /tmp\n  shapenet:\n    num_points: 9000\n    resize_with_constant_border: false\n  subset_eval: test_tf\n  subset_train: train_tf\nlog_dir: logs\nlog_level: info\nloss:\n  weights:\n    chamfer:\n    - 1.0\n    - 1.0\n    - 1.0\n    chamfer_opposite: 0.55\n    constant: 1.0\n    edge: 0.1\n    laplace: 0.5\n    move: 0.033\n    normal: 0.00016\n    reconst: 0.0\nmodel:\n  align_with_tensorflow: false\n  backbone: resnet50\n  coord_dim: 3\n  gconv_activation: true\n  hidden_dim: 192\n  last_hidden_dim: 192\n  name: pixel2mesh\n  z_threshold: 0\nname: p2m\nnum_gpus: 8\nnum_workers: 16\noptim:\n  adam_beta1: 0.9\n  lr: 0.0001\n  lr_factor: 0.3\n  lr_step:\n  - 30\n  - 70\n  - 90\n  name: adam\n  sgd_momentum: 0.9\n  wd: 1.0e-06\npin_memory: true\nsummary_dir: summary\ntest:\n  batch_size: 8\n  dataset: []\n  shuffle: false\n  summary_steps: 50\n  weighted_mean: false\ntrain:\n  batch_size: 8\n  checkpoint_steps: 10000\n  num_epochs: 110\n  shuffle: true\n  summary_steps: 50\n  test_epochs: 1\n  use_augmentation: true\nversion: null\n"
  },
  {
    "path": "experiments/default/tensorflow.yml",
    "content": "checkpoint: null\ncheckpoint_dir: checkpoints\ndataset:\n  camera_c:\n  - 112.0\n  - 112.0\n  camera_f:\n  - 250.0\n  - 250.0\n  mesh_pos:\n  - 0.0\n  - 0.0\n  - 0.0\n  name: shapenet\n  normalization: false\n  num_classes: 13\n  predict:\n    folder: /tmp\n  shapenet:\n    num_points: 9000\n    resize_with_constant_border: true\n  subset_eval: test_tf\n  subset_train: train_tf\nlog_dir: logs\nlog_level: info\nloss:\n  weights:\n    chamfer:\n    - 1.0\n    - 1.0\n    - 1.0\n    chamfer_opposite: 0.55\n    constant: 1.0\n    edge: 0.1\n    laplace: 0.5\n    move: 0.033\n    normal: 0.00016\n    reconst: 0.0\nmodel:\n  align_with_tensorflow: true\n  backbone: vgg16\n  coord_dim: 3\n  gconv_activation: true\n  hidden_dim: 256\n  last_hidden_dim: 128\n  name: pixel2mesh\n  z_threshold: 0\nname: p2m\nnum_gpus: 1\nnum_workers: 16\noptim:\n  adam_beta1: 0.9\n  lr: 1.0e-06\n  lr_factor: 0.1\n  lr_step:\n  - 30\n  - 45\n  name: adam\n  sgd_momentum: 0.9\n  wd: 1.0e-06\npin_memory: true\nsummary_dir: summary\ntest:\n  batch_size: 24\n  dataset: []\n  shuffle: true\n  summary_steps: 5\n  weighted_mean: false\ntrain:\n  batch_size: 1\n  checkpoint_steps: 10000\n  num_epochs: 2\n  shuffle: true\n  summary_steps: 1\n  test_epochs: 1\n  use_augmentation: true\nversion: null\n"
  },
  {
    "path": "external/chamfer/chamfer.cu",
    "content": "#include <stdio.h>\n#include <ATen/ATen.h>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n#include <vector>\n\n\n__global__ void NmDistanceKernel(int b, int n, const float *xyz, int m,\n                                 const float *xyz2, float *result, int *result_i) {\n    const int batch = 512;\n    __shared__ float buf[batch * 3];\n    for (int i = blockIdx.x; i < b; i += gridDim.x) {\n        for (int k2 = 0; k2 < m; k2 += batch) {\n            int end_k = min(m, k2 + batch) - k2;\n            for (int j = threadIdx.x; j < end_k * 3; j += blockDim.x) {\n                buf[j] = xyz2[(i * m + k2) * 3 + j];\n            }\n            __syncthreads();\n            for (int j = threadIdx.x + blockIdx.y * blockDim.x; j < n; j += blockDim.x * gridDim.y) {\n                float x1 = xyz[(i * n + j) * 3 + 0];\n                float y1 = xyz[(i * n + j) * 3 + 1];\n                float z1 = xyz[(i * n + j) * 3 + 2];\n                int best_i = 0;\n                float best = 0;\n                int end_ka = end_k - (end_k & 3);\n                if (end_ka == batch) {\n                    for (int k = 0; k < batch; k += 4) {\n                        {\n                            float x2 = buf[k * 3 + 0] - x1;\n                            float y2 = buf[k * 3 + 1] - y1;\n                            float z2 = buf[k * 3 + 2] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (k == 0 || d < best) {\n                                best = d;\n                                best_i = k + k2;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 3] - x1;\n                            float y2 = buf[k * 3 + 4] - y1;\n                            float z2 = buf[k * 3 + 5] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 1;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 6] - x1;\n                            float y2 = buf[k * 3 + 7] - y1;\n                            float z2 = buf[k * 3 + 8] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 2;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 9] - x1;\n                            float y2 = buf[k * 3 + 10] - y1;\n                            float z2 = buf[k * 3 + 11] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 3;\n                            }\n                        }\n                    }\n                } else {\n                    for (int k = 0; k < end_ka; k += 4) {\n                        {\n                            float x2 = buf[k * 3 + 0] - x1;\n                            float y2 = buf[k * 3 + 1] - y1;\n                            float z2 = buf[k * 3 + 2] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (k == 0 || d < best) {\n                                best = d;\n                                best_i = k + k2;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 3] - x1;\n                            float y2 = buf[k * 3 + 4] - y1;\n                            float z2 = buf[k * 3 + 5] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 1;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 6] - x1;\n                            float y2 = buf[k * 3 + 7] - y1;\n                            float z2 = buf[k * 3 + 8] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 2;\n                            }\n                        }\n                        {\n                            float x2 = buf[k * 3 + 9] - x1;\n                            float y2 = buf[k * 3 + 10] - y1;\n                            float z2 = buf[k * 3 + 11] - z1;\n                            float d = x2 * x2 + y2 * y2 + z2 * z2;\n                            if (d < best) {\n                                best = d;\n                                best_i = k + k2 + 3;\n                            }\n                        }\n                    }\n                }\n                for (int k = end_ka; k < end_k; k++) {\n                    float x2 = buf[k * 3 + 0] - x1;\n                    float y2 = buf[k * 3 + 1] - y1;\n                    float z2 = buf[k * 3 + 2] - z1;\n                    float d = x2 * x2 + y2 * y2 + z2 * z2;\n                    if (k == 0 || d < best) {\n                        best = d;\n                        best_i = k + k2;\n                    }\n                }\n                if (k2 == 0 || result[(i * n + j)] > best) {\n                    result[(i * n + j)] = best;\n                    result_i[(i * n + j)] = best_i;\n                }\n            }\n            __syncthreads();\n        }\n    }\n}\n\nint chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1,\n                         at::Tensor idx2) {\n\n    const auto batch_size = xyz1.size(0);\n    const auto n = xyz1.size(1); //num_points point cloud A\n    const auto m = xyz2.size(1); //num_points point cloud B\n\n    NmDistanceKernel <<< dim3(32, 16, 1), 512 >>> (batch_size, n, xyz1.data<float>(), m,\n                                                   xyz2.data<float>(), dist1.data<float>(), idx1.data<int>());\n    NmDistanceKernel <<< dim3(32, 16, 1), 512 >>> (batch_size, m, xyz2.data<float>(), n,\n                                                   xyz1.data<float>(), dist2.data<float>(), idx2.data<int>());\n\n    cudaError_t err = cudaGetLastError();\n    if (err != cudaSuccess) {\n        printf(\"error in nnd updateOutput: %s\\n\", cudaGetErrorString(err));\n        return 0;\n    }\n    return 1;\n}\n\n__global__ void NmDistanceGradKernel(int b, int n, const float *xyz1, int m, const float *xyz2, const float *grad_dist1,\n                                     const int *idx1, float *grad_xyz1, float *grad_xyz2) {\n    for (int i = blockIdx.x; i < b; i += gridDim.x) {\n        for (int j = threadIdx.x + blockIdx.y * blockDim.x; j < n; j += blockDim.x * gridDim.y) {\n            float x1 = xyz1[(i * n + j) * 3 + 0];\n            float y1 = xyz1[(i * n + j) * 3 + 1];\n            float z1 = xyz1[(i * n + j) * 3 + 2];\n            int j2 = idx1[i * n + j];\n            float x2 = xyz2[(i * m + j2) * 3 + 0];\n            float y2 = xyz2[(i * m + j2) * 3 + 1];\n            float z2 = xyz2[(i * m + j2) * 3 + 2];\n            float g = grad_dist1[i * n + j] * 2;\n            atomicAdd(&(grad_xyz1[(i * n + j) * 3 + 0]), g * (x1 - x2));\n            atomicAdd(&(grad_xyz1[(i * n + j) * 3 + 1]), g * (y1 - y2));\n            atomicAdd(&(grad_xyz1[(i * n + j) * 3 + 2]), g * (z1 - z2));\n            atomicAdd(&(grad_xyz2[(i * m + j2) * 3 + 0]), -(g * (x1 - x2)));\n            atomicAdd(&(grad_xyz2[(i * m + j2) * 3 + 1]), -(g * (y1 - y2)));\n            atomicAdd(&(grad_xyz2[(i * m + j2) * 3 + 2]), -(g * (z1 - z2)));\n        }\n    }\n}\n\nint chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1,\n                          at::Tensor gradxyz2, at::Tensor graddist1,\n                          at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2) {\n    const auto batch_size = xyz1.size(0);\n    const auto n = xyz1.size(1); // num_points point cloud A\n    const auto m = xyz2.size(1); // num_points point cloud B\n\n    NmDistanceGradKernel <<< dim3(1, 16, 1), 256 >>> (batch_size, n, xyz1.data<float>(), m,\n                                                      xyz2.data<float>(), graddist1.data<float>(), idx1.data<int>(),\n                                                      gradxyz1.data<float>(), gradxyz2.data<float>());\n    NmDistanceGradKernel <<< dim3(1, 16, 1), 256 >>> (batch_size, m, xyz2.data<float>(), n,\n                                                      xyz1.data<float>(), graddist2.data<float>(), idx2.data<int>(),\n                                                      gradxyz2.data<float>(), gradxyz1.data<float>());\n\n    cudaError_t err = cudaGetLastError();\n    if (err != cudaSuccess) {\n        printf(\"error in nnd get grad: %s\\n\", cudaGetErrorString(err));\n        return 0;\n    }\n    return 1;\n\n}"
  },
  {
    "path": "external/chamfer/chamfer_cuda.cpp",
    "content": "#include <torch/extension.h>\n#include <vector>\n\n\nint chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1,\n                         at::Tensor idx2);\n\nint chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2,\n                          at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2);\n\nint chamfer_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2,\n                    at::Tensor idx1, at::Tensor idx2) {\n    return chamfer_cuda_forward(xyz1, xyz2, dist1, dist2, idx1, idx2);\n}\n\nint chamfer_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2,\n                     at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2) {\n    return chamfer_cuda_backward(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2);\n}\n\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"forward\", &chamfer_forward, \"chamfer forward (CUDA)\");\n    m.def(\"backward\", &chamfer_backward, \"chamfer backward (CUDA)\");\n}"
  },
  {
    "path": "external/chamfer/setup.py",
    "content": "from setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\nsetup(\n    name='chamfer',\n    ext_modules=[\n        CUDAExtension('chamfer', [\n            'chamfer_cuda.cpp',\n            'chamfer.cu',\n        ]),\n    ],\n    cmdclass={\n        'build_ext': BuildExtension\n    })\n"
  },
  {
    "path": "external/chamfer/test.py",
    "content": "import sys\nimport os\nfor file in os.listdir(\"build\"):\n    if file.startswith(\"lib\"):\n        sys.path.insert(0, os.path.join(\"build\", file))\n\n# torch must be imported before we import chamfer\nimport torch\nimport chamfer\n\nbatch_size = 8\nn, m = 30, 20\n\nxyz1 = torch.rand((batch_size, n, 3)).cuda()\nxyz2 = torch.rand((batch_size, m, 3)).cuda()\n\ndist1 = torch.zeros(batch_size, n).cuda()\ndist2 = torch.zeros(batch_size, m).cuda()\n\nidx1 = torch.zeros((batch_size, n), dtype=torch.int).cuda()\nidx2 = torch.zeros((batch_size, m), dtype=torch.int).cuda()\n\nchamfer.forward(xyz1, xyz2, dist1, dist2, idx1, idx2)\nprint(dist1)\nprint(dist2)\nprint(idx1)\nprint(idx2)"
  },
  {
    "path": "functions/base.py",
    "content": "import os\nimport time\nfrom datetime import timedelta\nfrom logging import Logger\n\nimport torch\nimport torch.nn\nfrom tensorboardX import SummaryWriter\nfrom torch.utils.data.dataloader import default_collate\n\nimport config\nfrom datasets.imagenet import ImageNet\nfrom datasets.shapenet import ShapeNet, get_shapenet_collate, ShapeNetImageFolder\nfrom functions.saver import CheckpointSaver\n\n\nclass CheckpointRunner(object):\n    def __init__(self, options, logger: Logger, summary_writer: SummaryWriter,\n                 dataset=None, training=True, shared_model=None):\n        self.options = options\n        self.logger = logger\n\n        # GPUs\n        if not torch.cuda.is_available() and self.options.num_gpus > 0:\n            raise ValueError(\"CUDA not found yet number of GPUs is set to be greater than 0\")\n        if os.environ.get(\"CUDA_VISIBLE_DEVICES\"):\n            logger.info(\"CUDA visible devices is activated here, number of GPU setting is not working\")\n            self.gpus = list(map(int, os.environ[\"CUDA_VISIBLE_DEVICES\"].split(\",\")))\n            self.options.num_gpus = len(self.gpus)\n            enumerate_gpus = list(range(self.options.num_gpus))\n            logger.info(\"CUDA is asking for \" + str(self.gpus) + \", PyTorch to doing a mapping, changing it to \" +\n                        str(enumerate_gpus))\n            self.gpus = enumerate_gpus\n        else:\n            self.gpus = list(range(self.options.num_gpus))\n            logger.info(\"Using GPUs: \" + str(self.gpus))\n\n        # initialize summary writer\n        self.summary_writer = summary_writer\n\n        # initialize dataset\n        if dataset is None:\n            dataset = options.dataset  # useful during training\n        self.dataset = self.load_dataset(dataset, training)\n        self.dataset_collate_fn = self.load_collate_fn(dataset, training)\n\n        # by default, epoch_count = step_count = 0\n        self.epoch_count = self.step_count = 0\n        self.time_start = time.time()\n\n        # override this function to define your model, optimizers etc.\n        # in case you want to use a model that is defined in a trainer or other place in the code,\n        # shared_model should help. in this case, checkpoint is not used\n        self.logger.info(\"Running model initialization...\")\n        self.init_fn(shared_model=shared_model)\n\n        if shared_model is None:\n            # checkpoint is loaded if any\n            self.saver = CheckpointSaver(self.logger, checkpoint_dir=str(self.options.checkpoint_dir),\n                                         checkpoint_file=self.options.checkpoint)\n            self.init_with_checkpoint()\n\n    def load_dataset(self, dataset, training):\n        self.logger.info(\"Loading datasets: %s\" % dataset.name)\n        if dataset.name == \"shapenet\":\n            return ShapeNet(config.SHAPENET_ROOT, dataset.subset_train if training else dataset.subset_eval,\n                            dataset.mesh_pos, dataset.normalization, dataset.shapenet)\n        elif dataset.name == \"shapenet_demo\":\n            return ShapeNetImageFolder(dataset.predict.folder, dataset.normalization, dataset.shapenet)\n        elif dataset.name == \"imagenet\":\n            return ImageNet(config.IMAGENET_ROOT, \"train\" if training else \"val\")\n        raise NotImplementedError(\"Unsupported dataset\")\n\n    def load_collate_fn(self, dataset, training):\n        if dataset.name == \"shapenet\":\n            return get_shapenet_collate(dataset.shapenet.num_points)\n        else:\n            return default_collate\n\n    def init_fn(self, shared_model=None, **kwargs):\n        raise NotImplementedError('You need to provide an _init_fn method')\n\n    # Pack models and optimizers in a dict - necessary for checkpointing\n    def models_dict(self):\n        return None\n\n    def optimizers_dict(self):\n        # NOTE: optimizers and models cannot have conflicting names\n        return None\n\n    def init_with_checkpoint(self):\n        checkpoint = self.saver.load_checkpoint()\n        if checkpoint is None:\n            self.logger.info(\"Checkpoint not loaded\")\n            return\n        for model_name, model in self.models_dict().items():\n            if model_name in checkpoint:\n                if isinstance(model, torch.nn.DataParallel):\n                    model.module.load_state_dict(checkpoint[model_name], strict=False)\n                else:\n                    model.load_state_dict(checkpoint[model_name], strict=False)\n        if self.optimizers_dict() is not None:\n            for optimizer_name, optimizer in self.optimizers_dict().items():\n                if optimizer_name in checkpoint:\n                    optimizer.load_state_dict(checkpoint[optimizer_name])\n        else:\n            self.logger.warning(\"Optimizers not found in the runner, skipping...\")\n        if \"epoch\" in checkpoint:\n            self.epoch_count = checkpoint[\"epoch\"]\n        if \"total_step_count\" in checkpoint:\n            self.step_count = checkpoint[\"total_step_count\"]\n\n    def dump_checkpoint(self):\n        checkpoint = {\n            \"epoch\": self.epoch_count,\n            \"total_step_count\": self.step_count\n        }\n        for model_name, model in self.models_dict().items():\n            if isinstance(model, torch.nn.DataParallel):\n                checkpoint[model_name] = model.module.state_dict()\n            else:\n                checkpoint[model_name] = model.state_dict()\n            for k, v in list(checkpoint[model_name].items()):\n                if isinstance(v, torch.Tensor) and v.is_sparse:\n                    checkpoint[model_name].pop(k)\n        if self.optimizers_dict() is not None:\n            for optimizer_name, optimizer in self.optimizers_dict().items():\n                checkpoint[optimizer_name] = optimizer.state_dict()\n        self.saver.save_checkpoint(checkpoint, \"%06d_%06d\" % (self.step_count, self.epoch_count))\n\n    @property\n    def time_elapsed(self):\n        return timedelta(seconds=time.time() - self.time_start)\n"
  },
  {
    "path": "functions/evaluator.py",
    "content": "from logging import Logger\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\n\nfrom functions.base import CheckpointRunner\nfrom models.classifier import Classifier\nfrom models.layers.chamfer_wrapper import ChamferDist\nfrom models.p2m import P2MModel\nfrom utils.average_meter import AverageMeter\nfrom utils.mesh import Ellipsoid\nfrom utils.vis.renderer import MeshRenderer\n\n\nclass Evaluator(CheckpointRunner):\n\n    def __init__(self, options, logger: Logger, writer, shared_model=None):\n        super().__init__(options, logger, writer, training=False, shared_model=shared_model)\n\n    # noinspection PyAttributeOutsideInit\n    def init_fn(self, shared_model=None, **kwargs):\n        if self.options.model.name == \"pixel2mesh\":\n            # Renderer for visualization\n            self.renderer = MeshRenderer(self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                         self.options.dataset.mesh_pos)\n            # Initialize distance module\n            self.chamfer = ChamferDist()\n            # create ellipsoid\n            self.ellipsoid = Ellipsoid(self.options.dataset.mesh_pos)\n            # use weighted mean evaluation metrics or not\n            self.weighted_mean = self.options.test.weighted_mean\n        else:\n            self.renderer = None\n        self.num_classes = self.options.dataset.num_classes\n\n        if shared_model is not None:\n            self.model = shared_model\n        else:\n            if self.options.model.name == \"pixel2mesh\":\n                # create model\n                self.model = P2MModel(self.options.model, self.ellipsoid,\n                                      self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                      self.options.dataset.mesh_pos)\n            elif self.options.model.name == \"classifier\":\n                self.model = Classifier(self.options.model, self.options.dataset.num_classes)\n            else:\n                raise NotImplementedError(\"Your model is not found\")\n            self.model = torch.nn.DataParallel(self.model, device_ids=self.gpus).cuda()\n\n        # Evaluate step count, useful in summary\n        self.evaluate_step_count = 0\n        self.total_step_count = 0\n\n    def models_dict(self):\n        return {'model': self.model}\n\n    def evaluate_f1(self, dis_to_pred, dis_to_gt, pred_length, gt_length, thresh):\n        recall = np.sum(dis_to_gt < thresh) / gt_length\n        prec = np.sum(dis_to_pred < thresh) / pred_length\n        return 2 * prec * recall / (prec + recall + 1e-8)\n\n    def evaluate_chamfer_and_f1(self, pred_vertices, gt_points, labels):\n        # calculate accurate chamfer distance; ground truth points with different lengths;\n        # therefore cannot be batched\n        batch_size = pred_vertices.size(0)\n        pred_length = pred_vertices.size(1)\n        for i in range(batch_size):\n            gt_length = gt_points[i].size(0)\n            label = labels[i].cpu().item()\n            d1, d2, i1, i2 = self.chamfer(pred_vertices[i].unsqueeze(0), gt_points[i].unsqueeze(0))\n            d1, d2 = d1.cpu().numpy(), d2.cpu().numpy()  # convert to millimeter\n            self.chamfer_distance[label].update(np.mean(d1) + np.mean(d2))\n            self.f1_tau[label].update(self.evaluate_f1(d1, d2, pred_length, gt_length, 1E-4))\n            self.f1_2tau[label].update(self.evaluate_f1(d1, d2, pred_length, gt_length, 2E-4))\n\n    def evaluate_accuracy(self, output, target):\n        \"\"\"Computes the accuracy over the k top predictions for the specified values of k\"\"\"\n        top_k = [1, 5]\n        maxk = max(top_k)\n        batch_size = target.size(0)\n\n        _, pred = output.topk(maxk, 1, True, True)\n        pred = pred.t()\n        correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n        for k in top_k:\n            correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)\n            acc = correct_k.mul_(1.0 / batch_size)\n            if k == 1:\n                self.acc_1.update(acc)\n            elif k == 5:\n                self.acc_5.update(acc)\n\n    def evaluate_step(self, input_batch):\n        self.model.eval()\n\n        # Run inference\n        with torch.no_grad():\n            # Get ground truth\n            images = input_batch['images']\n\n            out = self.model(images)\n\n            if self.options.model.name == \"pixel2mesh\":\n                pred_vertices = out[\"pred_coord\"][-1]\n                gt_points = input_batch[\"points_orig\"]\n                if isinstance(gt_points, list):\n                    gt_points = [pts.cuda() for pts in gt_points]\n                self.evaluate_chamfer_and_f1(pred_vertices, gt_points, input_batch[\"labels\"])\n            elif self.options.model.name == \"classifier\":\n                self.evaluate_accuracy(out, input_batch[\"labels\"])\n\n        return out\n\n    # noinspection PyAttributeOutsideInit\n    def evaluate(self):\n        self.logger.info(\"Running evaluations...\")\n\n        # clear evaluate_step_count, but keep total count uncleared\n        self.evaluate_step_count = 0\n\n        test_data_loader = DataLoader(self.dataset,\n                                      batch_size=self.options.test.batch_size * self.options.num_gpus,\n                                      num_workers=self.options.num_workers,\n                                      pin_memory=self.options.pin_memory,\n                                      shuffle=self.options.test.shuffle,\n                                      collate_fn=self.dataset_collate_fn)\n\n        if self.options.model.name == \"pixel2mesh\":\n            self.chamfer_distance = [AverageMeter() for _ in range(self.num_classes)]\n            self.f1_tau = [AverageMeter() for _ in range(self.num_classes)]\n            self.f1_2tau = [AverageMeter() for _ in range(self.num_classes)]\n        elif self.options.model.name == \"classifier\":\n            self.acc_1 = AverageMeter()\n            self.acc_5 = AverageMeter()\n\n        # Iterate over all batches in an epoch\n        for step, batch in enumerate(test_data_loader):\n            # Send input to GPU\n            batch = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in batch.items()}\n\n            # Run evaluation step\n            out = self.evaluate_step(batch)\n\n            # Tensorboard logging every summary_steps steps\n            if self.evaluate_step_count % self.options.test.summary_steps == 0:\n                self.evaluate_summaries(batch, out)\n\n            # add later to log at step 0\n            self.evaluate_step_count += 1\n            self.total_step_count += 1\n\n        for key, val in self.get_result_summary().items():\n            scalar = val\n            if isinstance(val, AverageMeter):\n                scalar = val.avg\n            self.logger.info(\"Test [%06d] %s: %.6f\" % (self.total_step_count, key, scalar))\n            self.summary_writer.add_scalar(\"eval_\" + key, scalar, self.total_step_count + 1)\n\n    def average_of_average_meters(self, average_meters):\n        s = sum([meter.sum for meter in average_meters])\n        c = sum([meter.count for meter in average_meters])\n        weighted_avg = s / c if c > 0 else 0.\n        avg = sum([meter.avg for meter in average_meters]) / len(average_meters)\n        ret = AverageMeter()\n        if self.weighted_mean:\n            ret.val, ret.avg = avg, weighted_avg\n        else:\n            ret.val, ret.avg = weighted_avg, avg\n        return ret\n\n    def get_result_summary(self):\n        if self.options.model.name == \"pixel2mesh\":\n            return {\n                \"cd\": self.average_of_average_meters(self.chamfer_distance),\n                \"f1_tau\": self.average_of_average_meters(self.f1_tau),\n                \"f1_2tau\": self.average_of_average_meters(self.f1_2tau),\n            }\n        elif self.options.model.name == \"classifier\":\n            return {\n                \"acc_1\": self.acc_1,\n                \"acc_5\": self.acc_5,\n            }\n\n    def evaluate_summaries(self, input_batch, out_summary):\n        self.logger.info(\"Test Step %06d/%06d (%06d) \" % (self.evaluate_step_count,\n                                                          len(self.dataset) // (\n                                                                  self.options.num_gpus * self.options.test.batch_size),\n                                                          self.total_step_count,) \\\n                         + \", \".join([key + \" \" + (str(val) if isinstance(val, AverageMeter) else \"%.6f\" % val)\n                                      for key, val in self.get_result_summary().items()]))\n\n        self.summary_writer.add_histogram(\"eval_labels\", input_batch[\"labels\"].cpu().numpy(),\n                                          self.total_step_count)\n        if self.renderer is not None:\n            # Do visualization for the first 2 images of the batch\n            render_mesh = self.renderer.p2m_batch_visualize(input_batch, out_summary, self.ellipsoid.faces)\n            self.summary_writer.add_image(\"eval_render_mesh\", render_mesh, self.total_step_count)\n"
  },
  {
    "path": "functions/predictor.py",
    "content": "import os\nimport random\nfrom logging import Logger\n\nimport imageio\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom functions.base import CheckpointRunner\nfrom models.p2m import P2MModel\nfrom utils.mesh import Ellipsoid\nfrom utils.vis.renderer import MeshRenderer\n\n\nclass Predictor(CheckpointRunner):\n\n    def __init__(self, options, logger: Logger, writer, shared_model=None):\n        super().__init__(options, logger, writer, training=False, shared_model=shared_model)\n\n    # noinspection PyAttributeOutsideInit\n    def init_fn(self, shared_model=None, **kwargs):\n        self.gpu_inference = self.options.num_gpus > 0\n        if self.gpu_inference == 0:\n            raise NotImplementedError(\"CPU inference is currently buggy. This takes some extra efforts and \"\n                                      \"might be fixed in the future.\")\n            # self.logger.warning(\"Render part would be disabled since you are using CPU. \"\n            #                     \"Neural renderer requires GPU to run. Please use other softwares \"\n            #                     \"or packages to view .obj file generated.\")\n\n        if self.options.model.name == \"pixel2mesh\":\n            # create ellipsoid\n            self.ellipsoid = Ellipsoid(self.options.dataset.mesh_pos)\n            # create model\n            self.model = P2MModel(self.options.model, self.ellipsoid,\n                                  self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                  self.options.dataset.mesh_pos)\n            if self.gpu_inference:\n                self.model.cuda()\n                # create renderer\n                self.renderer = MeshRenderer(self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                             self.options.dataset.mesh_pos)\n        else:\n            raise NotImplementedError(\"Currently the predictor only supports pixel2mesh\")\n\n    def models_dict(self):\n        return {'model': self.model}\n\n    def predict_step(self, input_batch):\n        self.model.eval()\n\n        # Run inference\n        with torch.no_grad():\n            images = input_batch['images']\n            out = self.model(images)\n            self.save_inference_results(input_batch, out)\n\n    def predict(self):\n        self.logger.info(\"Running predictions...\")\n\n        predict_data_loader = DataLoader(self.dataset,\n                                         batch_size=self.options.test.batch_size,\n                                         pin_memory=self.options.pin_memory,\n                                         collate_fn=self.dataset_collate_fn)\n\n        for step, batch in enumerate(predict_data_loader):\n            self.logger.info(\"Predicting [%05d/%05d]\" % (step * self.options.test.batch_size, len(self.dataset)))\n\n            if self.gpu_inference:\n                # Send input to GPU\n                batch = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in batch.items()}\n\n            self.predict_step(batch)\n\n    def save_inference_results(self, inputs, outputs):\n        if self.options.model.name == \"pixel2mesh\":\n            batch_size = inputs[\"images\"].size(0)\n            for i in range(batch_size):\n                basename, ext = os.path.splitext(inputs[\"filepath\"][i])\n                mesh_center = np.mean(outputs[\"pred_coord_before_deform\"][0][i].cpu().numpy(), 0)\n                verts = [outputs[\"pred_coord\"][k][i].cpu().numpy() for k in range(3)]\n                for k, vert in enumerate(verts):\n                    meshname = basename + \".%d.obj\" % (k + 1)\n                    vert_v = np.hstack((np.full([vert.shape[0], 1], \"v\"), vert))\n                    mesh = np.vstack((vert_v, self.ellipsoid.obj_fmt_faces[k]))\n                    np.savetxt(meshname, mesh, fmt='%s', delimiter=\" \")\n\n                if self.gpu_inference:\n                    # generate gif here\n\n                    color_repo = ['light_blue', 'purple', 'orange', 'light_yellow']\n\n                    rot_degree = 10\n                    rot_radius = rot_degree / 180 * np.pi\n                    rot_matrix = np.array([\n                        [np.cos(rot_radius), 0, -np.sin(rot_radius)],\n                        [0., 1., 0.],\n                        [np.sin(rot_radius), 0, np.cos(rot_radius)]\n                    ])\n                    writer = imageio.get_writer(basename + \".gif\", mode='I')\n                    color = random.choice(color_repo)\n                    for _ in tqdm(range(360 // rot_degree), desc=\"Rendering sample %d\" % i):\n                        image = inputs[\"images_orig\"][i].cpu().numpy()\n                        ret = image\n                        for k, vert in enumerate(verts):\n                            vert = rot_matrix.dot((vert - mesh_center).T).T + mesh_center\n                            rend_result = self.renderer.visualize_reconstruction(None,\n                                                                                 vert + \\\n                                                                                 np.array(\n                                                                                     self.options.dataset.mesh_pos),\n                                                                                 self.ellipsoid.faces[k],\n                                                                                 image,\n                                                                                 mesh_only=True,\n                                                                                 color=color)\n                            ret = np.concatenate((ret, rend_result), axis=2)\n                            verts[k] = vert\n                        ret = np.transpose(ret, (1, 2, 0))\n                        writer.append_data((255 * ret).astype(np.uint8))\n                    writer.close()\n"
  },
  {
    "path": "functions/saver.py",
    "content": "import os\n\nimport torch\nimport torch.nn\n\n\nclass CheckpointSaver(object):\n    \"\"\"Class that handles saving and loading checkpoints during training.\"\"\"\n\n    def __init__(self, logger, checkpoint_dir=None, checkpoint_file=None):\n        self.logger = logger\n        if checkpoint_file is not None:\n            if not os.path.exists(checkpoint_file):\n                raise ValueError(\"Checkpoint file [%s] does not exist!\" % checkpoint_file)\n            self.save_dir = os.path.dirname(os.path.abspath(checkpoint_file))\n            self.checkpoint_file = os.path.abspath(checkpoint_file)\n            return\n        if checkpoint_dir is None:\n            raise ValueError(\"Checkpoint directory must be not None in case file is not provided!\")\n        self.save_dir = os.path.abspath(checkpoint_dir)\n        self.checkpoint_file = self.get_latest_checkpoint()\n\n    def load_checkpoint(self):\n        if self.checkpoint_file is None:\n            self.logger.info(\"Checkpoint file not found, skipping...\")\n            return None\n        self.logger.info(\"Loading checkpoint file: %s\" % self.checkpoint_file)\n        try:\n            return torch.load(self.checkpoint_file)\n        except UnicodeDecodeError:\n            # to be compatible with old encoding methods\n            return torch.load(self.checkpoint_file, encoding=\"bytes\")\n\n    def save_checkpoint(self, obj, name):\n        self.checkpoint_file = os.path.join(self.save_dir, \"%s.pt\" % name)\n        self.logger.info(\"Dumping to checkpoint file: %s\" % self.checkpoint_file)\n        torch.save(obj, self.checkpoint_file)\n\n    def get_latest_checkpoint(self):\n        # this will automatically find the checkpoint with latest modified time\n        checkpoint_list = []\n        for dirpath, dirnames, filenames in os.walk(self.save_dir):\n            for filename in filenames:\n                if filename.endswith('.pt'):\n                    file_path = os.path.abspath(os.path.join(dirpath, filename))\n                    modified_time = os.path.getmtime(file_path)\n                    checkpoint_list.append((file_path, modified_time))\n        checkpoint_list = sorted(checkpoint_list, key=lambda x: x[1])\n        return None if not checkpoint_list else checkpoint_list[-1][0]\n"
  },
  {
    "path": "functions/trainer.py",
    "content": "import time\nfrom datetime import timedelta\n\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\n\nfrom functions.base import CheckpointRunner\nfrom functions.evaluator import Evaluator\nfrom models.classifier import Classifier\nfrom models.losses.classifier import CrossEntropyLoss\nfrom models.losses.p2m import P2MLoss\nfrom models.p2m import P2MModel\nfrom utils.average_meter import AverageMeter\nfrom utils.mesh import Ellipsoid\nfrom utils.tensor import recursive_detach\nfrom utils.vis.renderer import MeshRenderer\n\n\nclass Trainer(CheckpointRunner):\n\n    # noinspection PyAttributeOutsideInit\n    def init_fn(self, shared_model=None, **kwargs):\n        if self.options.model.name == \"pixel2mesh\":\n            # Visualization renderer\n            self.renderer = MeshRenderer(self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                         self.options.dataset.mesh_pos)\n            # create ellipsoid\n            self.ellipsoid = Ellipsoid(self.options.dataset.mesh_pos)\n        else:\n            self.renderer = None\n\n        if shared_model is not None:\n            self.model = shared_model\n        else:\n            if self.options.model.name == \"pixel2mesh\":\n                # create model\n                self.model = P2MModel(self.options.model, self.ellipsoid,\n                                      self.options.dataset.camera_f, self.options.dataset.camera_c,\n                                      self.options.dataset.mesh_pos)\n            elif self.options.model.name == \"classifier\":\n                self.model = Classifier(self.options.model, self.options.dataset.num_classes)\n            else:\n                raise NotImplementedError(\"Your model is not found\")\n            self.model = torch.nn.DataParallel(self.model, device_ids=self.gpus).cuda()\n\n        # Setup a joint optimizer for the 2 models\n        if self.options.optim.name == \"adam\":\n            self.optimizer = torch.optim.Adam(\n                params=list(self.model.parameters()),\n                lr=self.options.optim.lr,\n                betas=(self.options.optim.adam_beta1, 0.999),\n                weight_decay=self.options.optim.wd\n            )\n        elif self.options.optim.name == \"sgd\":\n            self.optimizer = torch.optim.SGD(\n                params=list(self.model.parameters()),\n                lr=self.options.optim.lr,\n                momentum=self.options.optim.sgd_momentum,\n                weight_decay=self.options.optim.wd\n            )\n        else:\n            raise NotImplementedError(\"Your optimizer is not found\")\n        self.lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(\n            self.optimizer, self.options.optim.lr_step, self.options.optim.lr_factor\n        )\n\n        # Create loss functions\n        if self.options.model.name == \"pixel2mesh\":\n            self.criterion = P2MLoss(self.options.loss, self.ellipsoid).cuda()\n        elif self.options.model.name == \"classifier\":\n            self.criterion = CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Your loss is not found\")\n\n        # Create AverageMeters for losses\n        self.losses = AverageMeter()\n\n        # Evaluators\n        self.evaluators = [Evaluator(self.options, self.logger, self.summary_writer, shared_model=self.model)]\n\n    def models_dict(self):\n        return {'model': self.model}\n\n    def optimizers_dict(self):\n        return {'optimizer': self.optimizer,\n                'lr_scheduler': self.lr_scheduler}\n\n    def train_step(self, input_batch):\n        self.model.train()\n\n        # Grab data from the batch\n        images = input_batch[\"images\"]\n\n        # predict with model\n        out = self.model(images)\n\n        # compute loss\n        loss, loss_summary = self.criterion(out, input_batch)\n        self.losses.update(loss.detach().cpu().item())\n\n        # Do backprop\n        self.optimizer.zero_grad()\n        loss.backward()\n        self.optimizer.step()\n\n        # Pack output arguments to be used for visualization\n        return recursive_detach(out), recursive_detach(loss_summary)\n\n    def train(self):\n        # Run training for num_epochs epochs\n        for epoch in range(self.epoch_count, self.options.train.num_epochs):\n            self.epoch_count += 1\n\n            # Create a new data loader for every epoch\n            train_data_loader = DataLoader(self.dataset,\n                                           batch_size=self.options.train.batch_size * self.options.num_gpus,\n                                           num_workers=self.options.num_workers,\n                                           pin_memory=self.options.pin_memory,\n                                           shuffle=self.options.train.shuffle,\n                                           collate_fn=self.dataset_collate_fn)\n\n            # Reset loss\n            self.losses.reset()\n\n            # Iterate over all batches in an epoch\n            for step, batch in enumerate(train_data_loader):\n                # Send input to GPU\n                batch = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in batch.items()}\n\n                # Run training step\n                out = self.train_step(batch)\n\n                self.step_count += 1\n\n                # Tensorboard logging every summary_steps steps\n                if self.step_count % self.options.train.summary_steps == 0:\n                    self.train_summaries(batch, *out)\n\n                # Save checkpoint every checkpoint_steps steps\n                if self.step_count % self.options.train.checkpoint_steps == 0:\n                    self.dump_checkpoint()\n\n            # save checkpoint after each epoch\n            self.dump_checkpoint()\n\n            # Run validation every test_epochs\n            if self.epoch_count % self.options.train.test_epochs == 0:\n                self.test()\n\n            # lr scheduler step\n            self.lr_scheduler.step()\n\n    def train_summaries(self, input_batch, out_summary, loss_summary):\n        if self.renderer is not None:\n            # Do visualization for the first 2 images of the batch\n            render_mesh = self.renderer.p2m_batch_visualize(input_batch, out_summary, self.ellipsoid.faces)\n            self.summary_writer.add_image(\"render_mesh\", render_mesh, self.step_count)\n            self.summary_writer.add_histogram(\"length_distribution\", input_batch[\"length\"].cpu().numpy(),\n                                              self.step_count)\n\n        # Debug info for filenames\n        self.logger.debug(input_batch[\"filename\"])\n\n        # Save results in Tensorboard\n        for k, v in loss_summary.items():\n            self.summary_writer.add_scalar(k, v, self.step_count)\n\n        # Save results to log\n        self.logger.info(\"Epoch %03d, Step %06d/%06d, Time elapsed %s, Loss %.9f (%.9f)\" % (\n            self.epoch_count, self.step_count,\n            self.options.train.num_epochs * len(self.dataset) // (\n                        self.options.train.batch_size * self.options.num_gpus),\n            self.time_elapsed, self.losses.val, self.losses.avg))\n\n    def test(self):\n        for evaluator in self.evaluators:\n            evaluator.evaluate()\n"
  },
  {
    "path": "logger.py",
    "content": "import logging\nimport os\n\n\ndef create_logger(cfg, phase='train'):\n    log_file = '{}_{}.log'.format(cfg.version, phase)\n    final_log_file = os.path.join(cfg.log_dir, log_file)\n    head = '%(asctime)-15s %(message)s'\n    logging.basicConfig(filename=str(final_log_file), format=head)\n    logger = logging.getLogger()\n    if cfg.log_level == \"info\":\n        logger.setLevel(logging.INFO)\n    elif cfg.log_level == \"debug\":\n        logger.setLevel(logging.DEBUG)\n    else:\n        raise NotImplementedError(\"Log level has to be one of info and debug\")\n    console = logging.StreamHandler()\n    logging.getLogger('').addHandler(console)\n\n    return logger\n"
  },
  {
    "path": "models/backbones/__init__.py",
    "content": "from models.backbones.resnet import resnet50\nfrom models.backbones.vgg16 import VGG16TensorflowAlign, VGG16P2M, VGG16Recons\n\n\ndef get_backbone(options):\n    if options.backbone.startswith(\"vgg16\"):\n        if options.align_with_tensorflow:\n            nn_encoder = VGG16TensorflowAlign()\n        else:\n            nn_encoder = VGG16P2M(pretrained=\"pretrained\" in options.backbone)\n        nn_decoder = VGG16Recons()\n    elif options.backbone == \"resnet50\":\n        nn_encoder = resnet50()\n        nn_decoder = None\n    else:\n        raise NotImplementedError(\"No implemented backbone called '%s' found\" % options.backbone)\n    return nn_encoder, nn_decoder\n"
  },
  {
    "path": "models/backbones/resnet.py",
    "content": "import torch\nfrom torchvision.models import ResNet\nfrom torchvision.models.resnet import Bottleneck\n\nimport config\n\n\nclass P2MResNet(ResNet):\n\n    def __init__(self, *args, **kwargs):\n        self.output_dim = 0\n        super().__init__(*args, **kwargs)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        res = super()._make_layer(block, planes, blocks, stride=stride, dilate=dilate)\n        self.output_dim += self.inplanes\n        return res\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        features = []\n        x = self.layer1(x)\n        features.append(x)\n        x = self.layer2(x)\n        features.append(x)\n        x = self.layer3(x)\n        features.append(x)\n        x = self.layer4(x)\n        features.append(x)\n\n        return features\n\n    @property\n    def features_dim(self):\n        return self.output_dim\n\n\ndef resnet50():\n    model = P2MResNet(Bottleneck, [3, 4, 6, 3])\n    state_dict = torch.load(config.PRETRAINED_WEIGHTS_PATH[\"resnet50\"])\n    model.load_state_dict(state_dict)\n    return model\n"
  },
  {
    "path": "models/backbones/vgg16.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport config\n\n\nclass VGG16TensorflowAlign(nn.Module):\n\n    def __init__(self, n_classes_input=3):\n        super(VGG16TensorflowAlign, self).__init__()\n\n        self.features_dim = 960\n        # this is to align with tensorflow padding (with stride)\n        # https://bugxch.github.io/tf%E4%B8%AD%E7%9A%84padding%E6%96%B9%E5%BC%8FSAME%E5%92%8CVALID%E6%9C%89%E4%BB%80%E4%B9%88%E5%8C%BA%E5%88%AB/\n        self.same_padding = nn.ZeroPad2d(1)\n        self.tf_padding = nn.ZeroPad2d((0, 1, 0, 1))\n        self.tf_padding_2 = nn.ZeroPad2d((1, 2, 1, 2))\n\n        self.conv0_1 = nn.Conv2d(n_classes_input, 16, 3, stride=1, padding=0)\n        self.conv0_2 = nn.Conv2d(16, 16, 3, stride=1, padding=0)\n\n        self.conv1_1 = nn.Conv2d(16, 32, 3, stride=2, padding=0)  # 224 -> 112\n        self.conv1_2 = nn.Conv2d(32, 32, 3, stride=1, padding=0)\n        self.conv1_3 = nn.Conv2d(32, 32, 3, stride=1, padding=0)\n\n        self.conv2_1 = nn.Conv2d(32, 64, 3, stride=2, padding=0)  # 112 -> 56\n        self.conv2_2 = nn.Conv2d(64, 64, 3, stride=1, padding=0)\n        self.conv2_3 = nn.Conv2d(64, 64, 3, stride=1, padding=0)\n\n        self.conv3_1 = nn.Conv2d(64, 128, 3, stride=2, padding=0)  # 56 -> 28\n        self.conv3_2 = nn.Conv2d(128, 128, 3, stride=1, padding=0)\n        self.conv3_3 = nn.Conv2d(128, 128, 3, stride=1, padding=0)\n\n        self.conv4_1 = nn.Conv2d(128, 256, 5, stride=2, padding=0)  # 28 -> 14\n        self.conv4_2 = nn.Conv2d(256, 256, 3, stride=1, padding=0)\n        self.conv4_3 = nn.Conv2d(256, 256, 3, stride=1, padding=0)\n\n        self.conv5_1 = nn.Conv2d(256, 512, 5, stride=2, padding=0)  # 14 -> 7\n        self.conv5_2 = nn.Conv2d(512, 512, 3, stride=1, padding=0)\n        self.conv5_3 = nn.Conv2d(512, 512, 3, stride=1, padding=0)\n        self.conv5_4 = nn.Conv2d(512, 512, 3, stride=1, padding=0)\n\n    def forward(self, img):\n        img = F.relu(self.conv0_1(self.same_padding(img)))\n        img = F.relu(self.conv0_2(self.same_padding(img)))\n\n        img = F.relu(self.conv1_1(self.tf_padding(img)))\n        img = F.relu(self.conv1_2(self.same_padding(img)))\n        img = F.relu(self.conv1_3(self.same_padding(img)))\n\n        img = F.relu(self.conv2_1(self.tf_padding(img)))\n        img = F.relu(self.conv2_2(self.same_padding(img)))\n        img = F.relu(self.conv2_3(self.same_padding(img)))\n        img2 = img\n\n        img = F.relu(self.conv3_1(self.tf_padding(img)))\n        img = F.relu(self.conv3_2(self.same_padding(img)))\n        img = F.relu(self.conv3_3(self.same_padding(img)))\n        img3 = img\n\n        img = F.relu(self.conv4_1(self.tf_padding_2(img)))\n        img = F.relu(self.conv4_2(self.same_padding(img)))\n        img = F.relu(self.conv4_3(self.same_padding(img)))\n        img4 = img\n\n        img = F.relu(self.conv5_1(self.tf_padding_2(img)))\n        img = F.relu(self.conv5_2(self.same_padding(img)))\n        img = F.relu(self.conv5_3(self.same_padding(img)))\n        img = F.relu(self.conv5_4(self.same_padding(img)))\n        img5 = img\n\n        return [img2, img3, img4, img5]\n\n\nclass VGG16P2M(nn.Module):\n\n    def __init__(self, n_classes_input=3, pretrained=False):\n        super(VGG16P2M, self).__init__()\n\n        self.features_dim = 960\n\n        self.conv0_1 = nn.Conv2d(n_classes_input, 16, 3, stride=1, padding=1)\n        self.conv0_2 = nn.Conv2d(16, 16, 3, stride=1, padding=1)\n\n        self.conv1_1 = nn.Conv2d(16, 32, 3, stride=2, padding=1)  # 224 -> 112\n        self.conv1_2 = nn.Conv2d(32, 32, 3, stride=1, padding=1)\n        self.conv1_3 = nn.Conv2d(32, 32, 3, stride=1, padding=1)\n\n        self.conv2_1 = nn.Conv2d(32, 64, 3, stride=2, padding=1)  # 112 -> 56\n        self.conv2_2 = nn.Conv2d(64, 64, 3, stride=1, padding=1)\n        self.conv2_3 = nn.Conv2d(64, 64, 3, stride=1, padding=1)\n\n        self.conv3_1 = nn.Conv2d(64, 128, 3, stride=2, padding=1)  # 56 -> 28\n        self.conv3_2 = nn.Conv2d(128, 128, 3, stride=1, padding=1)\n        self.conv3_3 = nn.Conv2d(128, 128, 3, stride=1, padding=1)\n\n        self.conv4_1 = nn.Conv2d(128, 256, 5, stride=2, padding=2)  # 28 -> 14\n        self.conv4_2 = nn.Conv2d(256, 256, 3, stride=1, padding=1)\n        self.conv4_3 = nn.Conv2d(256, 256, 3, stride=1, padding=1)\n\n        self.conv5_1 = nn.Conv2d(256, 512, 5, stride=2, padding=2)  # 14 -> 7\n        self.conv5_2 = nn.Conv2d(512, 512, 3, stride=1, padding=1)\n        self.conv5_3 = nn.Conv2d(512, 512, 3, stride=1, padding=1)\n        self.conv5_4 = nn.Conv2d(512, 512, 3, stride=1, padding=1)\n\n        if \"vgg16p2m\" in config.PRETRAINED_WEIGHTS_PATH and pretrained:\n            state_dict = torch.load(config.PRETRAINED_WEIGHTS_PATH[\"vgg16p2m\"])\n            self.load_state_dict(state_dict)\n        else:\n            self._initialize_weights()\n\n    def _initialize_weights(self):\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n                if m.bias is not None:\n                    nn.init.constant_(m.bias, 0)\n            elif isinstance(m, nn.BatchNorm2d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n            elif isinstance(m, nn.Linear):\n                nn.init.normal_(m.weight, 0, 0.01)\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, img):\n        img = F.relu(self.conv0_1(img))\n        img = F.relu(self.conv0_2(img))\n        # img0 = torch.squeeze(img) # 224\n\n        img = F.relu(self.conv1_1(img))\n        img = F.relu(self.conv1_2(img))\n        img = F.relu(self.conv1_3(img))\n        # img1 = torch.squeeze(img) # 112\n\n        img = F.relu(self.conv2_1(img))\n        img = F.relu(self.conv2_2(img))\n        img = F.relu(self.conv2_3(img))\n        img2 = img\n\n        img = F.relu(self.conv3_1(img))\n        img = F.relu(self.conv3_2(img))\n        img = F.relu(self.conv3_3(img))\n        img3 = img\n\n        img = F.relu(self.conv4_1(img))\n        img = F.relu(self.conv4_2(img))\n        img = F.relu(self.conv4_3(img))\n        img4 = img\n\n        img = F.relu(self.conv5_1(img))\n        img = F.relu(self.conv5_2(img))\n        img = F.relu(self.conv5_3(img))\n        img = F.relu(self.conv5_4(img))\n        img5 = img\n\n        return [img2, img3, img4, img5]\n\n\nclass VGG16Recons(nn.Module):\n\n    def __init__(self, input_dim=512, image_channel=3):\n        super(VGG16Recons, self).__init__()\n\n        self.conv_1 = nn.ConvTranspose2d(input_dim, 256, kernel_size=2, stride=2, padding=0)  # 7 -> 14\n        self.conv_2 = nn.ConvTranspose2d(512, 128, kernel_size=4, stride=2, padding=1)  # 14 -> 28\n        self.conv_3 = nn.ConvTranspose2d(256, 64, kernel_size=4, stride=2, padding=1)  # 28 -> 56\n        self.conv_4 = nn.ConvTranspose2d(128, 32, kernel_size=6, stride=2, padding=2)  # 56 -> 112\n        self.conv_5 = nn.ConvTranspose2d(32, image_channel, kernel_size=6, stride=2, padding=2)  # 112 -> 224\n\n    def forward(self, img_feats):\n        x = F.relu(self.conv_1(img_feats[-1]))\n        x = torch.cat((x, img_feats[-2]), dim=1)\n        x = F.relu(self.conv_2(x))\n        x = torch.cat((x, img_feats[-3]), dim=1)\n        x = F.relu(self.conv_3(x))\n        x = torch.cat((x, img_feats[-4]), dim=1)\n        x = F.relu(self.conv_4(x))\n        x = F.relu(self.conv_5(x))\n\n        return torch.sigmoid(x)\n"
  },
  {
    "path": "models/classifier.py",
    "content": "import torch.nn as nn\n\nfrom models.backbones import get_backbone\n\n\nclass Classifier(nn.Module):\n\n    def __init__(self, options, num_classes):\n        super(Classifier, self).__init__()\n\n        self.nn_encoder, self.nn_decoder = get_backbone(options)\n\n        if \"vgg\" in options.backbone:\n            self.avgpool = nn.AdaptiveAvgPool2d((7, 7))\n            self.classifier = nn.Sequential(\n                nn.Linear(list(self.nn_encoder.children())[-1].out_channels * 7 * 7, 4096),\n                nn.ReLU(True),\n                nn.Dropout(),\n                nn.Linear(4096, 4096),\n                nn.ReLU(True),\n                nn.Dropout(),\n                nn.Linear(4096, num_classes),\n            )\n        elif \"resnet\" in options.backbone:\n            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n            self.classifier = nn.Linear(self.nn_encoder.inplanes, num_classes)\n        else:\n            raise NotImplementedError\n\n    def _initialize_weights(self):\n        for m in self.modules():\n            if isinstance(m, nn.Linear):\n                nn.init.normal_(m.weight, 0, 0.01)\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, img):\n        x = self.nn_encoder(img)[-1]  # last layer\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        x = self.classifier(x)\n        return x\n"
  },
  {
    "path": "models/layers/chamfer_wrapper.py",
    "content": "import chamfer\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Function\n\n\n# Chamfer's distance module @thibaultgroueix\n# GPU tensors only\nclass ChamferFunction(Function):\n    @staticmethod\n    def forward(ctx, xyz1, xyz2):\n        batchsize, n, _ = xyz1.size()\n        _, m, _ = xyz2.size()\n\n        dist1 = torch.zeros(batchsize, n)\n        dist2 = torch.zeros(batchsize, m)\n\n        idx1 = torch.zeros(batchsize, n).type(torch.IntTensor)\n        idx2 = torch.zeros(batchsize, m).type(torch.IntTensor)\n\n        dist1 = dist1.cuda()\n        dist2 = dist2.cuda()\n        idx1 = idx1.cuda()\n        idx2 = idx2.cuda()\n\n        chamfer.forward(xyz1, xyz2, dist1, dist2, idx1, idx2)\n        ctx.save_for_backward(xyz1, xyz2, idx1, idx2)\n        return dist1, dist2, idx1, idx2\n\n    @staticmethod\n    def backward(ctx, graddist1, graddist2, _idx1, _idx2):\n        xyz1, xyz2, idx1, idx2 = ctx.saved_tensors\n        graddist1 = graddist1.contiguous()\n        graddist2 = graddist2.contiguous()\n\n        gradxyz1 = torch.zeros(xyz1.size())\n        gradxyz2 = torch.zeros(xyz2.size())\n\n        gradxyz1 = gradxyz1.cuda()\n        gradxyz2 = gradxyz2.cuda()\n        chamfer.backward(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2)\n        return gradxyz1, gradxyz2\n\n\nclass ChamferDist(nn.Module):\n    def __init__(self):\n        super(ChamferDist, self).__init__()\n\n    def forward(self, input1, input2):\n        return ChamferFunction.apply(input1, input2)\n"
  },
  {
    "path": "models/layers/gbottleneck.py",
    "content": "import torch.nn as nn\nimport torch.nn.functional as F\n\nfrom models.layers.gconv import GConv\n\n\nclass GResBlock(nn.Module):\n\n    def __init__(self, in_dim, hidden_dim, adj_mat, activation=None):\n        super(GResBlock, self).__init__()\n\n        self.conv1 = GConv(in_features=in_dim, out_features=hidden_dim, adj_mat=adj_mat)\n        self.conv2 = GConv(in_features=hidden_dim, out_features=in_dim, adj_mat=adj_mat)\n        self.activation = F.relu if activation else None\n\n    def forward(self, inputs):\n        x = self.conv1(inputs)\n        if self.activation:\n            x = self.activation(x)\n        x = self.conv2(x)\n        if self.activation:\n            x = self.activation(x)\n\n        return (inputs + x) * 0.5\n\n\nclass GBottleneck(nn.Module):\n\n    def __init__(self, block_num, in_dim, hidden_dim, out_dim, adj_mat, activation=None):\n        super(GBottleneck, self).__init__()\n\n        resblock_layers = [GResBlock(in_dim=hidden_dim, hidden_dim=hidden_dim, adj_mat=adj_mat, activation=activation)\n                           for _ in range(block_num)]\n        self.blocks = nn.Sequential(*resblock_layers)\n        self.conv1 = GConv(in_features=in_dim, out_features=hidden_dim, adj_mat=adj_mat)\n        self.conv2 = GConv(in_features=hidden_dim, out_features=out_dim, adj_mat=adj_mat)\n        self.activation = F.relu if activation else None\n\n    def forward(self, inputs):\n        x = self.conv1(inputs)\n        if self.activation:\n            x = self.activation(x)\n        x_hidden = self.blocks(x)\n        x_out = self.conv2(x_hidden)\n\n        return x_out, x_hidden\n"
  },
  {
    "path": "models/layers/gconv.py",
    "content": "import math\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.tensor import dot\n\n\nclass GConv(nn.Module):\n    \"\"\"Simple GCN layer\n\n    Similar to https://arxiv.org/abs/1609.02907\n    \"\"\"\n\n    def __init__(self, in_features, out_features, adj_mat, bias=True):\n        super(GConv, self).__init__()\n        self.in_features = in_features\n        self.out_features = out_features\n\n        self.adj_mat = nn.Parameter(adj_mat, requires_grad=False)\n        self.weight = nn.Parameter(torch.zeros((in_features, out_features), dtype=torch.float))\n        # Following https://github.com/Tong-ZHAO/Pixel2Mesh-Pytorch/blob/a0ae88c4a42eef6f8f253417b97df978db842708/model/gcn_layers.py#L45\n        # This seems to be different from the original implementation of P2M\n        self.loop_weight = nn.Parameter(torch.zeros((in_features, out_features), dtype=torch.float))\n        if bias:\n            self.bias = nn.Parameter(torch.zeros((out_features,), dtype=torch.float))\n        else:\n            self.register_parameter('bias', None)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.xavier_uniform_(self.weight.data)\n        nn.init.xavier_uniform_(self.loop_weight.data)\n\n    def forward(self, inputs):\n        support = torch.matmul(inputs, self.weight)\n        support_loop = torch.matmul(inputs, self.loop_weight)\n        output = dot(self.adj_mat, support, True) + support_loop\n        if self.bias is not None:\n            ret = output + self.bias\n        else:\n            ret = output\n        return ret\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n"
  },
  {
    "path": "models/layers/gpooling.py",
    "content": "import torch\nimport torch.nn as nn\nimport numpy as np\n\n\nclass GUnpooling(nn.Module):\n    \"\"\"Graph Pooling layer, aims to add additional vertices to the graph.\n    The middle point of each edges are added, and its feature is simply\n    the average of the two edge vertices.\n    Three middle points are connected in each triangle.\n    \"\"\"\n\n    def __init__(self, unpool_idx):\n        super(GUnpooling, self).__init__()\n        self.unpool_idx = unpool_idx\n        # save dim info\n        self.in_num = torch.max(unpool_idx).item()\n        self.out_num = self.in_num + len(unpool_idx)\n\n    def forward(self, inputs):\n        new_features = inputs[:, self.unpool_idx].clone()\n        new_vertices = 0.5 * new_features.sum(2)\n        output = torch.cat([inputs, new_vertices], 1)\n\n        return output\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_num) + ' -> ' \\\n               + str(self.out_num) + ')'"
  },
  {
    "path": "models/layers/gprojection.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn import Threshold\n\n\nclass GProjection(nn.Module):\n    \"\"\"\n    Graph Projection layer, which pool 2D features to mesh\n\n    The layer projects a vertex of the mesh to the 2D image and use\n    bi-linear interpolation to get the corresponding feature.\n    \"\"\"\n\n    def __init__(self, mesh_pos, camera_f, camera_c, bound=0, tensorflow_compatible=False):\n        super(GProjection, self).__init__()\n        self.mesh_pos, self.camera_f, self.camera_c = mesh_pos, camera_f, camera_c\n        self.threshold = None\n        self.bound = 0\n        self.tensorflow_compatible = tensorflow_compatible\n        if self.bound != 0:\n            self.threshold = Threshold(bound, bound)\n\n    def bound_val(self, x):\n        \"\"\"\n        given x, return min(threshold, x), in case threshold is not None\n        \"\"\"\n        if self.bound < 0:\n            return -self.threshold(-x)\n        elif self.bound > 0:\n            return self.threshold(x)\n        return x\n\n    @staticmethod\n    def image_feature_shape(img):\n        return np.array([img.size(-1), img.size(-2)])\n\n    def project_tensorflow(self, x, y, img_size, img_feat):\n        x = torch.clamp(x, min=0, max=img_size[1] - 1)\n        y = torch.clamp(y, min=0, max=img_size[0] - 1)\n\n        # it's tedious and contains bugs...\n        # when x1 = x2, the area is 0, therefore it won't be processed\n        # keep it here to align with tensorflow version\n        x1, x2 = torch.floor(x).long(), torch.ceil(x).long()\n        y1, y2 = torch.floor(y).long(), torch.ceil(y).long()\n\n        Q11 = img_feat[:, x1, y1].clone()\n        Q12 = img_feat[:, x1, y2].clone()\n        Q21 = img_feat[:, x2, y1].clone()\n        Q22 = img_feat[:, x2, y2].clone()\n\n        weights = torch.mul(x2.float() - x, y2.float() - y)\n        Q11 = torch.mul(weights.unsqueeze(-1), torch.transpose(Q11, 0, 1))\n\n        weights = torch.mul(x2.float() - x, y - y1.float())\n        Q12 = torch.mul(weights.unsqueeze(-1), torch.transpose(Q12, 0, 1))\n\n        weights = torch.mul(x - x1.float(), y2.float() - y)\n        Q21 = torch.mul(weights.unsqueeze(-1), torch.transpose(Q21, 0, 1))\n\n        weights = torch.mul(x - x1.float(), y - y1.float())\n        Q22 = torch.mul(weights.unsqueeze(-1), torch.transpose(Q22, 0, 1))\n\n        output = Q11 + Q21 + Q12 + Q22\n        return output\n\n    def forward(self, resolution, img_features, inputs):\n        half_resolution = (resolution - 1) / 2\n        camera_c_offset = np.array(self.camera_c) - half_resolution\n        # map to [-1, 1]\n        # not sure why they render to negative x\n        positions = inputs + torch.tensor(self.mesh_pos, device=inputs.device, dtype=torch.float)\n        w = -self.camera_f[0] * (positions[:, :, 0] / self.bound_val(positions[:, :, 2])) + camera_c_offset[0]\n        h = self.camera_f[1] * (positions[:, :, 1] / self.bound_val(positions[:, :, 2])) + camera_c_offset[1]\n\n        if self.tensorflow_compatible:\n            # to align with tensorflow\n            # this is incorrect, I believe\n            w += half_resolution[0]\n            h += half_resolution[1]\n\n        else:\n            # directly do clamping\n            w /= half_resolution[0]\n            h /= half_resolution[1]\n\n            # clamp to [-1, 1]\n            w = torch.clamp(w, min=-1, max=1)\n            h = torch.clamp(h, min=-1, max=1)\n\n        feats = [inputs]\n        for img_feature in img_features:\n            feats.append(self.project(resolution, img_feature, torch.stack([w, h], dim=-1)))\n\n        output = torch.cat(feats, 2)\n\n        return output\n\n    def project(self, img_shape, img_feat, sample_points):\n        \"\"\"\n        :param img_shape: raw image shape\n        :param img_feat: [batch_size x channel x h x w]\n        :param sample_points: [batch_size x num_points x 2], in range [-1, 1]\n        :return: [batch_size x num_points x feat_dim]\n        \"\"\"\n        if self.tensorflow_compatible:\n            feature_shape = self.image_feature_shape(img_feat)\n            points_w = sample_points[:, :, 0] / (img_shape[0] / feature_shape[0])\n            points_h = sample_points[:, :, 1] / (img_shape[1] / feature_shape[1])\n            output = torch.stack([self.project_tensorflow(points_h[i], points_w[i],\n                                                          feature_shape, img_feat[i]) for i in range(img_feat.size(0))], 0)\n        else:\n            output = F.grid_sample(img_feat, sample_points.unsqueeze(1))\n            output = torch.transpose(output.squeeze(2), 1, 2)\n\n        return output\n"
  },
  {
    "path": "models/losses/classifier.py",
    "content": "import torch\nimport torch.nn as nn\n\n\nclass CrossEntropyLoss(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.cross_entropy = nn.CrossEntropyLoss().cuda()\n\n    def forward(self, outputs, targets):\n        labels = targets[\"labels\"]\n        loss = self.cross_entropy(outputs, labels)\n        _, predicted = torch.max(outputs.data, 1)\n        total = labels.size(0)\n        correct = (predicted == labels).sum().item()\n        return loss, {\"loss\": loss, \"acc\": correct / total}\n"
  },
  {
    "path": "models/losses/p2m.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom models.layers.chamfer_wrapper import ChamferDist\n\n\nclass P2MLoss(nn.Module):\n    def __init__(self, options, ellipsoid):\n        super().__init__()\n        self.options = options\n        self.l1_loss = nn.L1Loss(reduction='mean')\n        self.l2_loss = nn.MSELoss(reduction='mean')\n        self.chamfer_dist = ChamferDist()\n        self.laplace_idx = nn.ParameterList([\n            nn.Parameter(idx, requires_grad=False) for idx in ellipsoid.laplace_idx])\n        self.edges = nn.ParameterList([\n            nn.Parameter(edges, requires_grad=False) for edges in ellipsoid.edges])\n\n    def edge_regularization(self, pred, edges):\n        \"\"\"\n        :param pred: batch_size * num_points * 3\n        :param edges: num_edges * 2\n        :return:\n        \"\"\"\n        return self.l2_loss(pred[:, edges[:, 0]], pred[:, edges[:, 1]]) * pred.size(-1)\n\n    @staticmethod\n    def laplace_coord(inputs, lap_idx):\n        \"\"\"\n        :param inputs: nodes Tensor, size (n_pts, n_features = 3)\n        :param lap_idx: laplace index matrix Tensor, size (n_pts, 10)\n        for each vertex, the laplace vector shows: [neighbor_index * 8, self_index, neighbor_count]\n\n        :returns\n        The laplacian coordinates of input with respect to edges as in lap_idx\n        \"\"\"\n\n        indices = lap_idx[:, :-2]\n        invalid_mask = indices < 0\n        all_valid_indices = indices.clone()\n        all_valid_indices[invalid_mask] = 0  # do this to avoid negative indices\n\n        vertices = inputs[:, all_valid_indices]\n        vertices[:, invalid_mask] = 0\n        neighbor_sum = torch.sum(vertices, 2)\n        neighbor_count = lap_idx[:, -1].float()\n        laplace = inputs - neighbor_sum / neighbor_count[None, :, None]\n\n        return laplace\n\n    def laplace_regularization(self, input1, input2, block_idx):\n        \"\"\"\n        :param input1: vertices tensor before deformation\n        :param input2: vertices after the deformation\n        :param block_idx: idx to select laplace index matrix tensor\n        :return:\n\n        if different than 1 then adds a move loss as in the original TF code\n        \"\"\"\n\n        lap1 = self.laplace_coord(input1, self.laplace_idx[block_idx])\n        lap2 = self.laplace_coord(input2, self.laplace_idx[block_idx])\n        laplace_loss = self.l2_loss(lap1, lap2) * lap1.size(-1)\n        move_loss = self.l2_loss(input1, input2) * input1.size(-1) if block_idx > 0 else 0\n        return laplace_loss, move_loss\n\n    def normal_loss(self, gt_normal, indices, pred_points, adj_list):\n        edges = F.normalize(pred_points[:, adj_list[:, 0]] - pred_points[:, adj_list[:, 1]], dim=2)\n        nearest_normals = torch.stack([t[i] for t, i in zip(gt_normal, indices.long())])\n        normals = F.normalize(nearest_normals[:, adj_list[:, 0]], dim=2)\n        cosine = torch.abs(torch.sum(edges * normals, 2))\n        return torch.mean(cosine)\n\n    def image_loss(self, gt_img, pred_img):\n        rect_loss = F.binary_cross_entropy(pred_img, gt_img)\n        return rect_loss\n\n    def forward(self, outputs, targets):\n        \"\"\"\n        :param outputs: outputs from P2MModel\n        :param targets: targets from input\n        :return: loss, loss_summary (dict)\n        \"\"\"\n\n        chamfer_loss, edge_loss, normal_loss, lap_loss, move_loss = 0., 0., 0., 0., 0.\n        lap_const = [0.2, 1., 1.]\n\n        gt_coord, gt_normal, gt_images = targets[\"points\"], targets[\"normals\"], targets[\"images\"]\n        pred_coord, pred_coord_before_deform = outputs[\"pred_coord\"], outputs[\"pred_coord_before_deform\"]\n        image_loss = 0.\n        if outputs[\"reconst\"] is not None and self.options.weights.reconst != 0:\n            image_loss = self.image_loss(gt_images, outputs[\"reconst\"])\n\n        for i in range(3):\n            dist1, dist2, idx1, idx2 = self.chamfer_dist(gt_coord, pred_coord[i])\n            chamfer_loss += self.options.weights.chamfer[i] * (torch.mean(dist1) +\n                                                               self.options.weights.chamfer_opposite * torch.mean(dist2))\n            normal_loss += self.normal_loss(gt_normal, idx2, pred_coord[i], self.edges[i])\n            edge_loss += self.edge_regularization(pred_coord[i], self.edges[i])\n            lap, move = self.laplace_regularization(pred_coord_before_deform[i],\n                                                                   pred_coord[i], i)\n            lap_loss += lap_const[i] * lap\n            move_loss += lap_const[i] * move\n\n        loss = chamfer_loss + image_loss * self.options.weights.reconst + \\\n               self.options.weights.laplace * lap_loss + \\\n               self.options.weights.move * move_loss + \\\n               self.options.weights.edge * edge_loss + \\\n               self.options.weights.normal * normal_loss\n\n        loss = loss * self.options.weights.constant\n\n        return loss, {\n            \"loss\": loss,\n            \"loss_chamfer\": chamfer_loss,\n            \"loss_edge\": edge_loss,\n            \"loss_laplace\": lap_loss,\n            \"loss_move\": move_loss,\n            \"loss_normal\": normal_loss,\n        }\n"
  },
  {
    "path": "models/p2m.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom models.backbones import get_backbone\nfrom models.layers.gbottleneck import GBottleneck\nfrom models.layers.gconv import GConv\nfrom models.layers.gpooling import GUnpooling\nfrom models.layers.gprojection import GProjection\n\n\nclass P2MModel(nn.Module):\n\n    def __init__(self, options, ellipsoid, camera_f, camera_c, mesh_pos):\n        super(P2MModel, self).__init__()\n\n        self.hidden_dim = options.hidden_dim\n        self.coord_dim = options.coord_dim\n        self.last_hidden_dim = options.last_hidden_dim\n        self.init_pts = nn.Parameter(ellipsoid.coord, requires_grad=False)\n        self.gconv_activation = options.gconv_activation\n\n        self.nn_encoder, self.nn_decoder = get_backbone(options)\n        self.features_dim = self.nn_encoder.features_dim + self.coord_dim\n\n        self.gcns = nn.ModuleList([\n            GBottleneck(6, self.features_dim, self.hidden_dim, self.coord_dim,\n                        ellipsoid.adj_mat[0], activation=self.gconv_activation),\n            GBottleneck(6, self.features_dim + self.hidden_dim, self.hidden_dim, self.coord_dim,\n                        ellipsoid.adj_mat[1], activation=self.gconv_activation),\n            GBottleneck(6, self.features_dim + self.hidden_dim, self.hidden_dim, self.last_hidden_dim,\n                        ellipsoid.adj_mat[2], activation=self.gconv_activation)\n        ])\n\n        self.unpooling = nn.ModuleList([\n            GUnpooling(ellipsoid.unpool_idx[0]),\n            GUnpooling(ellipsoid.unpool_idx[1])\n        ])\n\n        # if options.align_with_tensorflow:\n        #     self.projection = GProjection\n        # else:\n        #     self.projection = GProjection\n        self.projection = GProjection(mesh_pos, camera_f, camera_c, bound=options.z_threshold,\n                                      tensorflow_compatible=options.align_with_tensorflow)\n\n        self.gconv = GConv(in_features=self.last_hidden_dim, out_features=self.coord_dim,\n                           adj_mat=ellipsoid.adj_mat[2])\n\n    def forward(self, img):\n        batch_size = img.size(0)\n        img_feats = self.nn_encoder(img)\n        img_shape = self.projection.image_feature_shape(img)\n\n        init_pts = self.init_pts.data.unsqueeze(0).expand(batch_size, -1, -1)\n        # GCN Block 1\n        x = self.projection(img_shape, img_feats, init_pts)\n        x1, x_hidden = self.gcns[0](x)\n\n        # before deformation 2\n        x1_up = self.unpooling[0](x1)\n\n        # GCN Block 2\n        x = self.projection(img_shape, img_feats, x1)\n        x = self.unpooling[0](torch.cat([x, x_hidden], 2))\n        # after deformation 2\n        x2, x_hidden = self.gcns[1](x)\n\n        # before deformation 3\n        x2_up = self.unpooling[1](x2)\n\n        # GCN Block 3\n        x = self.projection(img_shape, img_feats, x2)\n        x = self.unpooling[1](torch.cat([x, x_hidden], 2))\n        x3, _ = self.gcns[2](x)\n        if self.gconv_activation:\n            x3 = F.relu(x3)\n        # after deformation 3\n        x3 = self.gconv(x3)\n\n        if self.nn_decoder is not None:\n            reconst = self.nn_decoder(img_feats)\n        else:\n            reconst = None\n\n        return {\n            \"pred_coord\": [x1, x2, x3],\n            \"pred_coord_before_deform\": [init_pts, x1_up, x2_up],\n            \"reconst\": reconst\n        }\n"
  },
  {
    "path": "options.py",
    "content": "import os\nimport pprint\nfrom argparse import ArgumentParser\nfrom datetime import datetime\n\nimport numpy as np\nimport yaml\nfrom easydict import EasyDict as edict\nfrom tensorboardX import SummaryWriter\n\nfrom logger import create_logger\n\noptions = edict()\n\noptions.name = 'p2m'\noptions.version = None\noptions.num_workers = 1\noptions.num_gpus = 1\noptions.pin_memory = True\n\noptions.log_dir = \"logs\"\noptions.log_level = \"info\"\noptions.summary_dir = \"summary\"\noptions.checkpoint_dir = \"checkpoints\"\noptions.checkpoint = None\n\noptions.dataset = edict()\noptions.dataset.name = \"shapenet\"\noptions.dataset.subset_train = \"train_small\"\noptions.dataset.subset_eval = \"test_small\"\noptions.dataset.camera_f = [248., 248.]\noptions.dataset.camera_c = [111.5, 111.5]\noptions.dataset.mesh_pos = [0., 0., -0.8]\noptions.dataset.normalization = True\noptions.dataset.num_classes = 13\n\noptions.dataset.shapenet = edict()\noptions.dataset.shapenet.num_points = 3000\noptions.dataset.shapenet.resize_with_constant_border = False\n\noptions.dataset.predict = edict()\noptions.dataset.predict.folder = \"/tmp\"\n\noptions.model = edict()\noptions.model.name = \"pixel2mesh\"\noptions.model.hidden_dim = 192\noptions.model.last_hidden_dim = 192\noptions.model.coord_dim = 3\noptions.model.backbone = \"vgg16\"\noptions.model.gconv_activation = True\n# provide a boundary for z, so that z will never be equal to 0, on denominator\n# if z is greater than 0, it will never be less than z;\n# if z is less than 0, it will never be greater than z.\noptions.model.z_threshold = 0\n# align with original tensorflow model\n# please follow experiments/tensorflow.yml\noptions.model.align_with_tensorflow = False\n\noptions.loss = edict()\noptions.loss.weights = edict()\noptions.loss.weights.normal = 1.6e-4\noptions.loss.weights.edge = 0.3\noptions.loss.weights.laplace = 0.5\noptions.loss.weights.move = 0.1\noptions.loss.weights.constant = 1.\noptions.loss.weights.chamfer = [1., 1., 1.]\noptions.loss.weights.chamfer_opposite = 1.\noptions.loss.weights.reconst = 0.\n\noptions.train = edict()\noptions.train.num_epochs = 50\noptions.train.batch_size = 4\noptions.train.summary_steps = 50\noptions.train.checkpoint_steps = 10000\noptions.train.test_epochs = 1\noptions.train.use_augmentation = True\noptions.train.shuffle = True\n\noptions.test = edict()\noptions.test.dataset = []\noptions.test.summary_steps = 50\noptions.test.batch_size = 4\noptions.test.shuffle = False\noptions.test.weighted_mean = False\n\noptions.optim = edict()\noptions.optim.name = \"adam\"\noptions.optim.adam_beta1 = 0.9\noptions.optim.sgd_momentum = 0.9\noptions.optim.lr = 5.0E-5\noptions.optim.wd = 1.0E-6\noptions.optim.lr_step = [30, 45]\noptions.optim.lr_factor = 0.1\n\n\ndef _update_dict(full_key, val, d):\n    for vk, vv in val.items():\n        if vk not in d:\n            raise ValueError(\"{}.{} does not exist in options\".format(full_key, vk))\n        if isinstance(vv, list):\n            d[vk] = np.array(vv)\n        elif isinstance(vv, dict):\n            _update_dict(full_key + \".\" + vk, vv, d[vk])\n        else:\n            d[vk] = vv\n\n\ndef _update_options(options_file):\n    # do scan twice\n    # in the first round, MODEL.NAME is located so that we can initialize MODEL.EXTRA\n    # in the second round, we update everything\n\n    with open(options_file) as f:\n        options_dict = yaml.safe_load(f)\n        # do a dfs on `BASED_ON` options files\n        if \"based_on\" in options_dict:\n            for base_options in options_dict[\"based_on\"]:\n                _update_options(os.path.join(os.path.dirname(options_file), base_options))\n            options_dict.pop(\"based_on\")\n        _update_dict(\"\", options_dict, options)\n\n\ndef update_options(options_file):\n    _update_options(options_file)\n\n\ndef gen_options(options_file):\n    def to_dict(ed):\n        ret = dict(ed)\n        for k, v in ret.items():\n            if isinstance(v, edict):\n                ret[k] = to_dict(v)\n            elif isinstance(v, np.ndarray):\n                ret[k] = v.tolist()\n        return ret\n\n    cfg = to_dict(options)\n\n    with open(options_file, 'w') as f:\n        yaml.safe_dump(dict(cfg), f, default_flow_style=False)\n\n\ndef slugify(filename):\n    filename = os.path.relpath(filename, \".\")\n    if filename.startswith(\"experiments/\"):\n        filename = filename[len(\"experiments/\"):]\n    return os.path.splitext(filename)[0].lower().replace(\"/\", \"_\").replace(\".\", \"_\")\n\n\ndef reset_options(options, args, phase='train'):\n    if hasattr(args, \"batch_size\") and args.batch_size:\n        options.train.batch_size = options.test.batch_size = args.batch_size\n    if hasattr(args, \"version\") and args.version:\n        options.version = args.version\n    if hasattr(args, \"num_epochs\") and args.num_epochs:\n        options.train.num_epochs = args.num_epochs\n    if hasattr(args, \"checkpoint\") and args.checkpoint:\n        options.checkpoint = args.checkpoint\n    if hasattr(args, \"folder\") and args.folder:\n        options.dataset.predict.folder = args.folder\n    if hasattr(args, \"gpus\") and args.gpus:\n        options.num_gpus = args.gpus\n    if hasattr(args, \"shuffle\") and args.shuffle:\n        options.train.shuffle = options.test.shuffle = True\n\n    options.name = args.name\n\n    if options.version is None:\n        prefix = \"\"\n        if args.options:\n            prefix = slugify(args.options) + \"_\"\n        options.version = prefix + datetime.now().strftime('%m%d%H%M%S')  # ignore %Y\n    options.log_dir = os.path.join(options.log_dir, options.name)\n    print('=> creating {}'.format(options.log_dir))\n    os.makedirs(options.log_dir, exist_ok=True)\n\n    options.checkpoint_dir = os.path.join(options.checkpoint_dir, options.name, options.version)\n    print('=> creating {}'.format(options.checkpoint_dir))\n    os.makedirs(options.checkpoint_dir, exist_ok=True)\n\n    options.summary_dir = os.path.join(options.summary_dir, options.name, options.version)\n    print('=> creating {}'.format(options.summary_dir))\n    os.makedirs(options.summary_dir, exist_ok=True)\n\n    logger = create_logger(options, phase=phase)\n    options_text = pprint.pformat(vars(options))\n    logger.info(options_text)\n\n    print('=> creating summary writer')\n    writer = SummaryWriter(options.summary_dir)\n\n    return logger, writer\n\n\nif __name__ == \"__main__\":\n    parser = ArgumentParser(\"Read options and freeze\")\n    parser.add_argument(\"--input\", type=str, required=True)\n    parser.add_argument(\"--output\", type=str, required=True)\n    args = parser.parse_args()\n    update_options(args.input)\n    gen_options(args.output)\n"
  },
  {
    "path": "slurm/eval.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nif [[ $# -lt 4 ]] ; then\n    echo 'too few arguments supplied'\n    exit 1\nfi\n\nPARTITION=$1\nNAME=$2\nOPTIONS=$3\nCHECKPOINT=$4\n\nsrun -p ${PARTITION} \\\n    --job-name=MeshEval \\\n    --gres=gpu:8 \\\n    --ntasks=1 \\\n    --kill-on-bad-exit=1 \\\n    python entrypoint_eval.py --name ${NAME} --options ${OPTIONS} --checkpoint ${CHECKPOINT} &\n"
  },
  {
    "path": "slurm/train.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nif [[ $# -lt 3 ]] ; then\n    echo 'too few arguments supplied'\n    exit 1\nfi\n\nPARTITION=$1\nNAME=$2\nOPTIONS=$3\n\nsrun -p ${PARTITION} \\\n    --job-name=Mesh \\\n    --gres=gpu:8 \\\n    --ntasks=1 \\\n    --kill-on-bad-exit=1 \\\n    python entrypoint_train.py --name ${NAME} --options ${OPTIONS} &\n"
  },
  {
    "path": "slurm/train_checkpoint.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nif [[ $# -lt 4 ]] ; then\n    echo 'too few arguments supplied'\n    exit 1\nfi\n\nPARTITION=$1\nNAME=$2\nOPTIONS=$3\nCHECKPOINT=$4\n\nsrun -p ${PARTITION} \\\n    --job-name=Mesh \\\n    --gres=gpu:8 \\\n    --ntasks=1 \\\n    --kill-on-bad-exit=1 \\\n    python entrypoint_train.py --name ${NAME} --options ${OPTIONS} --checkpoint ${CHECKPOINT} &\n"
  },
  {
    "path": "slurm/train_checkpoint_1gpu.sh",
    "content": "#!/usr/bin/env bash\n\nset -x\n\nif [[ $# -lt 4 ]] ; then\n    echo 'too few arguments supplied'\n    exit 1\nfi\n\nPARTITION=$1\nNAME=$2\nOPTIONS=$3\nCHECKPOINT=$4\n\nsrun -p ${PARTITION} \\\n    --job-name=Mesh \\\n    --gres=gpu:1 \\\n    --ntasks=1 \\\n    --kill-on-bad-exit=1 \\\n    python entrypoint_train.py --name ${NAME} --options ${OPTIONS} --checkpoint ${CHECKPOINT} &\n"
  },
  {
    "path": "test.py",
    "content": "import torch\nimport torch.nn as nn\n\nfrom models.layers.chamfer_wrapper import ChamferDist\n\n\ndef test():\n    torch.manual_seed(42)\n    chamfer = ChamferDist()\n    dense = nn.Linear(6, 3)\n    dense.cuda()\n    optimizer = torch.optim.Adam(dense.parameters(), 1e-3)\n    a = torch.rand(4, 5, 6).cuda()\n    b = torch.rand(4, 8, 3).cuda()\n    c = torch.rand(4, 5, 6).cuda()\n    for i in range(30000):\n        a_out = dense(a)\n        d1, d2, i1, i2 = chamfer(a_out, b)\n        loss = d1.mean() + d2.mean()\n\n        c_out = dense(a)\n        d1, d2, i1, i2 = chamfer(c_out, b)\n\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n        print(loss)\n\n\ntest()"
  },
  {
    "path": "utils/average_meter.py",
    "content": "from collections import Iterable\n\nimport torch\n\nimport numpy as np\n\n\n# noinspection PyAttributeOutsideInit\nclass AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\"\"\"\n\n    def __init__(self, multiplier=1.0):\n        self.multiplier = multiplier\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        if isinstance(val, torch.Tensor):\n            val = val.cpu().numpy()\n        if isinstance(val, Iterable):\n            val = np.array(val)\n            self.update(np.mean(np.array(val)), n=val.size)\n        else:\n            self.val = self.multiplier * val\n            self.sum += self.multiplier * val * n\n            self.count += n\n            self.avg = self.sum / self.count if self.count != 0 else 0\n\n    def __str__(self):\n        return \"%.6f (%.6f)\" % (self.val, self.avg)\n"
  },
  {
    "path": "utils/demo_selection/select_demo_images.py",
    "content": "import json\nimport os\nimport random\nimport shutil\n\nwith open(\"datasets/data/shapenet/meta/shapenet.json\") as fp:\n    labels_map = json.load(fp)\n\nwith open(\"datasets/data/shapenet/meta/test_tf.txt\") as fp:\n    lines = [line.strip() for line in fp.readlines()]\n\nfor entry in labels_map.values():\n    file_list = list(filter(lambda x: (entry[\"id\"] + \"/\") in x, lines))\n    chosen = random.choice(file_list)\n    file_location = os.path.join(\"datasets/data/shapenet/data_tf\",\n                                 chosen[len(\"Data/ShapeNetP2M/\"):-4] + \".png\")\n    shutil.copyfile(file_location, \"datasets/examples/%s.png\" % entry[\"name\"].split(\",\")[0])\n"
  },
  {
    "path": "utils/mesh.py",
    "content": "import os\nimport pickle\n\nimport numpy as np\nimport torch\nimport trimesh\nfrom scipy.sparse import coo_matrix\n\nimport config\n\n\ndef torch_sparse_tensor(indices, value, size):\n    coo = coo_matrix((value, (indices[:, 0], indices[:, 1])), shape=size)\n    values = coo.data\n    indices = np.vstack((coo.row, coo.col))\n\n    i = torch.tensor(indices, dtype=torch.long)\n    v = torch.tensor(values, dtype=torch.float)\n    shape = coo.shape\n\n    return torch.sparse.FloatTensor(i, v, shape)\n\n\nclass Ellipsoid(object):\n\n    def __init__(self, mesh_pos, file=config.ELLIPSOID_PATH):\n        with open(file, \"rb\") as fp:\n            fp_info = pickle.load(fp, encoding='latin1')\n\n        # shape: n_pts * 3\n        self.coord = torch.tensor(fp_info[0]) - torch.tensor(mesh_pos, dtype=torch.float)\n\n        # edges & faces & lap_idx\n        # edge: num_edges * 2\n        # faces: num_faces * 4\n        # laplace_idx: num_pts * 10\n        self.edges, self.laplace_idx = [], []\n\n        for i in range(3):\n            self.edges.append(torch.tensor(fp_info[1 + i][1][0], dtype=torch.long))\n            self.laplace_idx.append(torch.tensor(fp_info[7][i], dtype=torch.long))\n\n        # unpool index\n        # num_pool_edges * 2\n        # pool_01: 462 * 2, pool_12: 1848 * 2\n        self.unpool_idx = [torch.tensor(fp_info[4][i], dtype=torch.long) for i in range(2)]\n\n        # loops and adjacent edges\n        self.adj_mat = []\n        for i in range(1, 4):\n            # 0: np.array, 2D, pos\n            # 1: np.array, 1D, vals\n            # 2: tuple - shape, n * n\n            adj_mat = torch_sparse_tensor(*fp_info[i][1])\n            self.adj_mat.append(adj_mat)\n\n        ellipsoid_dir = os.path.dirname(file)\n        self.faces = []\n        self.obj_fmt_faces = []\n        # faces: f * 3, original ellipsoid, and two after deformations\n        for i in range(1, 4):\n            face_file = os.path.join(ellipsoid_dir, \"face%d.obj\" % i)\n            faces = np.loadtxt(face_file, dtype='|S32')\n            self.obj_fmt_faces.append(faces)\n            self.faces.append(torch.tensor(faces[:, 1:].astype(np.int) - 1))\n"
  },
  {
    "path": "utils/migrations/delete_unnecessary_keys.py",
    "content": "from argparse import ArgumentParser\n\nimport torch\n\nparser = ArgumentParser()\nparser.add_argument(\"--input\", type=str, required=True)\nparser.add_argument(\"--output\", type=str, required=True)\nargs = parser.parse_args()\n\ndata = torch.load(args.input)\ncompressed = dict()\ncompressed[\"model\"] = data[\"model\"]\ntorch.save(compressed, args.output)\n"
  },
  {
    "path": "utils/migrations/extract_vgg_weights.py",
    "content": "import torch\n\nfrom models.classifier import Classifier\nfrom options import options\n\n\noptions.model.backbone = \"vgg16\"\nmodel = Classifier(options.model, 1000)\nstate_dict = torch.load(\"checkpoints/debug/migration/400400_000080.pt\")\nmodel.load_state_dict(state_dict[\"model\"])\ntorch.save(model.nn_encoder.state_dict(), \"checkpoints/debug/migration/vgg16-p2m.pth\")\n"
  },
  {
    "path": "utils/migrations/from_p2m_pytorch.py",
    "content": "import re\n\nimport torch\n\n\ncheckpoint = torch.load(\"checkpoints/debug/20190705192654/000001_000001.pt\")\npretrained = torch.load(\"checkpoints/pretrained/network_4.pth\")\n\nweights = checkpoint[\"model\"]\n\nfor k in weights.keys():\n    match = k\n    match = re.sub(\"gcns\\.(\\d)\", \"GCN_\\\\1\", match)\n    match = re.sub(\"conv(\\d)\\.weight\", \"conv\\\\1.weight_2\", match)\n    match = re.sub(\"conv(\\d)\\.loop_weight\", \"conv\\\\1.weight_1\", match)\n    match = re.sub(\"gconv\\.weight\", \"GConv.weight_2\", match)\n    match = re.sub(\"gconv\\.loop_weight\", \"GConv.weight_1\", match)\n    match = re.sub(\"gconv\\.\", \"GConv.\", match)\n    if match not in pretrained:\n        print(k, match)\n    else:\n        weights[k] = pretrained[match]\ntorch.save(checkpoint, \"checkpoints/debug/migration/network_4.pt\")\n"
  },
  {
    "path": "utils/migrations/official_config_pytorch_256.txt",
    "content": "nn_encoder.conv0_1.weight torch.Size([16, 3, 3, 3])\nnn_encoder.conv0_1.bias torch.Size([16])\nnn_encoder.conv0_2.weight torch.Size([16, 16, 3, 3])\nnn_encoder.conv0_2.bias torch.Size([16])\nnn_encoder.conv1_1.weight torch.Size([32, 16, 3, 3])\nnn_encoder.conv1_1.bias torch.Size([32])\nnn_encoder.conv1_2.weight torch.Size([32, 32, 3, 3])\nnn_encoder.conv1_2.bias torch.Size([32])\nnn_encoder.conv1_3.weight torch.Size([32, 32, 3, 3])\nnn_encoder.conv1_3.bias torch.Size([32])\nnn_encoder.conv2_1.weight torch.Size([64, 32, 3, 3])\nnn_encoder.conv2_1.bias torch.Size([64])\nnn_encoder.conv2_2.weight torch.Size([64, 64, 3, 3])\nnn_encoder.conv2_2.bias torch.Size([64])\nnn_encoder.conv2_3.weight torch.Size([64, 64, 3, 3])\nnn_encoder.conv2_3.bias torch.Size([64])\nnn_encoder.conv3_1.weight torch.Size([128, 64, 3, 3])\nnn_encoder.conv3_1.bias torch.Size([128])\nnn_encoder.conv3_2.weight torch.Size([128, 128, 3, 3])\nnn_encoder.conv3_2.bias torch.Size([128])\nnn_encoder.conv3_3.weight torch.Size([128, 128, 3, 3])\nnn_encoder.conv3_3.bias torch.Size([128])\nnn_encoder.conv4_1.weight torch.Size([256, 128, 5, 5])\nnn_encoder.conv4_1.bias torch.Size([256])\nnn_encoder.conv4_2.weight torch.Size([256, 256, 3, 3])\nnn_encoder.conv4_2.bias torch.Size([256])\nnn_encoder.conv4_3.weight torch.Size([256, 256, 3, 3])\nnn_encoder.conv4_3.bias torch.Size([256])\nnn_encoder.conv5_1.weight torch.Size([512, 256, 5, 5])\nnn_encoder.conv5_1.bias torch.Size([512])\nnn_encoder.conv5_2.weight torch.Size([512, 512, 3, 3])\nnn_encoder.conv5_2.bias torch.Size([512])\nnn_encoder.conv5_3.weight torch.Size([512, 512, 3, 3])\nnn_encoder.conv5_3.bias torch.Size([512])\nnn_encoder.conv5_4.weight torch.Size([512, 512, 3, 3])\nnn_encoder.conv5_4.bias torch.Size([512])\ngcns.0.conv1.loop_weight torch.Size([963, 256])\ngcns.0.conv1.weight torch.Size([963, 256])\ngcns.0.conv1.bias torch.Size([256])\ngcns.0.blocks.0.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.0.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.0.conv1.bias torch.Size([256])\ngcns.0.blocks.0.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.0.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.0.conv2.bias torch.Size([256])\ngcns.0.blocks.1.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.1.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.1.conv1.bias torch.Size([256])\ngcns.0.blocks.1.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.1.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.1.conv2.bias torch.Size([256])\ngcns.0.blocks.2.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.2.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.2.conv1.bias torch.Size([256])\ngcns.0.blocks.2.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.2.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.2.conv2.bias torch.Size([256])\ngcns.0.blocks.3.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.3.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.3.conv1.bias torch.Size([256])\ngcns.0.blocks.3.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.3.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.3.conv2.bias torch.Size([256])\ngcns.0.blocks.4.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.4.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.4.conv1.bias torch.Size([256])\ngcns.0.blocks.4.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.4.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.4.conv2.bias torch.Size([256])\ngcns.0.blocks.5.conv1.loop_weight torch.Size([256, 256])\ngcns.0.blocks.5.conv1.weight torch.Size([256, 256])\ngcns.0.blocks.5.conv1.bias torch.Size([256])\ngcns.0.blocks.5.conv2.loop_weight torch.Size([256, 256])\ngcns.0.blocks.5.conv2.weight torch.Size([256, 256])\ngcns.0.blocks.5.conv2.bias torch.Size([256])\ngcns.0.conv2.loop_weight torch.Size([256, 3])\ngcns.0.conv2.weight torch.Size([256, 3])\ngcns.0.conv2.bias torch.Size([3])\ngcns.1.conv1.loop_weight torch.Size([1219, 256])\ngcns.1.conv1.weight torch.Size([1219, 256])\ngcns.1.conv1.bias torch.Size([256])\ngcns.1.blocks.0.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.0.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.0.conv1.bias torch.Size([256])\ngcns.1.blocks.0.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.0.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.0.conv2.bias torch.Size([256])\ngcns.1.blocks.1.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.1.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.1.conv1.bias torch.Size([256])\ngcns.1.blocks.1.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.1.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.1.conv2.bias torch.Size([256])\ngcns.1.blocks.2.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.2.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.2.conv1.bias torch.Size([256])\ngcns.1.blocks.2.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.2.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.2.conv2.bias torch.Size([256])\ngcns.1.blocks.3.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.3.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.3.conv1.bias torch.Size([256])\ngcns.1.blocks.3.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.3.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.3.conv2.bias torch.Size([256])\ngcns.1.blocks.4.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.4.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.4.conv1.bias torch.Size([256])\ngcns.1.blocks.4.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.4.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.4.conv2.bias torch.Size([256])\ngcns.1.blocks.5.conv1.loop_weight torch.Size([256, 256])\ngcns.1.blocks.5.conv1.weight torch.Size([256, 256])\ngcns.1.blocks.5.conv1.bias torch.Size([256])\ngcns.1.blocks.5.conv2.loop_weight torch.Size([256, 256])\ngcns.1.blocks.5.conv2.weight torch.Size([256, 256])\ngcns.1.blocks.5.conv2.bias torch.Size([256])\ngcns.1.conv2.loop_weight torch.Size([256, 3])\ngcns.1.conv2.weight torch.Size([256, 3])\ngcns.1.conv2.bias torch.Size([3])\ngcns.2.conv1.loop_weight torch.Size([1219, 256])\ngcns.2.conv1.weight torch.Size([1219, 256])\ngcns.2.conv1.bias torch.Size([256])\ngcns.2.blocks.0.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.0.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.0.conv1.bias torch.Size([256])\ngcns.2.blocks.0.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.0.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.0.conv2.bias torch.Size([256])\ngcns.2.blocks.1.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.1.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.1.conv1.bias torch.Size([256])\ngcns.2.blocks.1.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.1.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.1.conv2.bias torch.Size([256])\ngcns.2.blocks.2.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.2.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.2.conv1.bias torch.Size([256])\ngcns.2.blocks.2.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.2.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.2.conv2.bias torch.Size([256])\ngcns.2.blocks.3.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.3.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.3.conv1.bias torch.Size([256])\ngcns.2.blocks.3.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.3.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.3.conv2.bias torch.Size([256])\ngcns.2.blocks.4.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.4.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.4.conv1.bias torch.Size([256])\ngcns.2.blocks.4.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.4.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.4.conv2.bias torch.Size([256])\ngcns.2.blocks.5.conv1.loop_weight torch.Size([256, 256])\ngcns.2.blocks.5.conv1.weight torch.Size([256, 256])\ngcns.2.blocks.5.conv1.bias torch.Size([256])\ngcns.2.blocks.5.conv2.loop_weight torch.Size([256, 256])\ngcns.2.blocks.5.conv2.weight torch.Size([256, 256])\ngcns.2.blocks.5.conv2.bias torch.Size([256])\ngcns.2.conv2.loop_weight torch.Size([256, 256])\ngcns.2.conv2.weight torch.Size([256, 256])\ngcns.2.conv2.bias torch.Size([256])\ngconv.loop_weight torch.Size([256, 3])\ngconv.weight torch.Size([256, 3])\ngconv.bias torch.Size([3])\n"
  },
  {
    "path": "utils/migrations/official_config_tensorflow_256.txt",
    "content": "gcn/Conv2D/W:0 (3, 3, 3, 16)\ngcn/Conv2D/b:0 (16,)\ngcn/Conv2D_1/W:0 (3, 3, 16, 16)\ngcn/Conv2D_1/b:0 (16,)\ngcn/Conv2D_2/W:0 (3, 3, 16, 32)\ngcn/Conv2D_2/b:0 (32,)\ngcn/Conv2D_3/W:0 (3, 3, 32, 32)\ngcn/Conv2D_3/b:0 (32,)\ngcn/Conv2D_4/W:0 (3, 3, 32, 32)\ngcn/Conv2D_4/b:0 (32,)\ngcn/Conv2D_5/W:0 (3, 3, 32, 64)\ngcn/Conv2D_5/b:0 (64,)\ngcn/Conv2D_6/W:0 (3, 3, 64, 64)\ngcn/Conv2D_6/b:0 (64,)\ngcn/Conv2D_7/W:0 (3, 3, 64, 64)\ngcn/Conv2D_7/b:0 (64,)\ngcn/Conv2D_8/W:0 (3, 3, 64, 128)\ngcn/Conv2D_8/b:0 (128,)\ngcn/Conv2D_9/W:0 (3, 3, 128, 128)\ngcn/Conv2D_9/b:0 (128,)\ngcn/Conv2D_10/W:0 (3, 3, 128, 128)\ngcn/Conv2D_10/b:0 (128,)\ngcn/Conv2D_11/W:0 (5, 5, 128, 256)\ngcn/Conv2D_11/b:0 (256,)\ngcn/Conv2D_12/W:0 (3, 3, 256, 256)\ngcn/Conv2D_12/b:0 (256,)\ngcn/Conv2D_13/W:0 (3, 3, 256, 256)\ngcn/Conv2D_13/b:0 (256,)\ngcn/Conv2D_14/W:0 (5, 5, 256, 512)\ngcn/Conv2D_14/b:0 (512,)\ngcn/Conv2D_15/W:0 (3, 3, 512, 512)\ngcn/Conv2D_15/b:0 (512,)\ngcn/Conv2D_16/W:0 (3, 3, 512, 512)\ngcn/Conv2D_16/b:0 (512,)\ngcn/Conv2D_17/W:0 (3, 3, 512, 512)\ngcn/Conv2D_17/b:0 (512,)\ngcn/graphconvolution_1_vars/weights_0:0 (963, 256)\ngcn/graphconvolution_1_vars/weights_1:0 (963, 256)\ngcn/graphconvolution_1_vars/bias:0 (256,)\ngcn/graphconvolution_2_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_2_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_2_vars/bias:0 (256,)\ngcn/graphconvolution_3_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_3_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_3_vars/bias:0 (256,)\ngcn/graphconvolution_4_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_4_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_4_vars/bias:0 (256,)\ngcn/graphconvolution_5_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_5_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_5_vars/bias:0 (256,)\ngcn/graphconvolution_6_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_6_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_6_vars/bias:0 (256,)\ngcn/graphconvolution_7_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_7_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_7_vars/bias:0 (256,)\ngcn/graphconvolution_8_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_8_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_8_vars/bias:0 (256,)\ngcn/graphconvolution_9_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_9_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_9_vars/bias:0 (256,)\ngcn/graphconvolution_10_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_10_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_10_vars/bias:0 (256,)\ngcn/graphconvolution_11_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_11_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_11_vars/bias:0 (256,)\ngcn/graphconvolution_12_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_12_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_12_vars/bias:0 (256,)\ngcn/graphconvolution_13_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_13_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_13_vars/bias:0 (256,)\ngcn/graphconvolution_14_vars/weights_0:0 (256, 3)\ngcn/graphconvolution_14_vars/weights_1:0 (256, 3)\ngcn/graphconvolution_14_vars/bias:0 (3,)\ngcn/graphconvolution_15_vars/weights_0:0 (1219, 256)\ngcn/graphconvolution_15_vars/weights_1:0 (1219, 256)\ngcn/graphconvolution_15_vars/bias:0 (256,)\ngcn/graphconvolution_16_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_16_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_16_vars/bias:0 (256,)\ngcn/graphconvolution_17_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_17_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_17_vars/bias:0 (256,)\ngcn/graphconvolution_18_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_18_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_18_vars/bias:0 (256,)\ngcn/graphconvolution_19_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_19_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_19_vars/bias:0 (256,)\ngcn/graphconvolution_20_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_20_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_20_vars/bias:0 (256,)\ngcn/graphconvolution_21_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_21_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_21_vars/bias:0 (256,)\ngcn/graphconvolution_22_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_22_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_22_vars/bias:0 (256,)\ngcn/graphconvolution_23_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_23_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_23_vars/bias:0 (256,)\ngcn/graphconvolution_24_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_24_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_24_vars/bias:0 (256,)\ngcn/graphconvolution_25_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_25_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_25_vars/bias:0 (256,)\ngcn/graphconvolution_26_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_26_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_26_vars/bias:0 (256,)\ngcn/graphconvolution_27_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_27_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_27_vars/bias:0 (256,)\ngcn/graphconvolution_28_vars/weights_0:0 (256, 3)\ngcn/graphconvolution_28_vars/weights_1:0 (256, 3)\ngcn/graphconvolution_28_vars/bias:0 (3,)\ngcn/graphconvolution_29_vars/weights_0:0 (1219, 256)\ngcn/graphconvolution_29_vars/weights_1:0 (1219, 256)\ngcn/graphconvolution_29_vars/bias:0 (256,)\ngcn/graphconvolution_30_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_30_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_30_vars/bias:0 (256,)\ngcn/graphconvolution_31_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_31_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_31_vars/bias:0 (256,)\ngcn/graphconvolution_32_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_32_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_32_vars/bias:0 (256,)\ngcn/graphconvolution_33_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_33_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_33_vars/bias:0 (256,)\ngcn/graphconvolution_34_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_34_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_34_vars/bias:0 (256,)\ngcn/graphconvolution_35_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_35_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_35_vars/bias:0 (256,)\ngcn/graphconvolution_36_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_36_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_36_vars/bias:0 (256,)\ngcn/graphconvolution_37_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_37_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_37_vars/bias:0 (256,)\ngcn/graphconvolution_38_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_38_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_38_vars/bias:0 (256,)\ngcn/graphconvolution_39_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_39_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_39_vars/bias:0 (256,)\ngcn/graphconvolution_40_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_40_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_40_vars/bias:0 (256,)\ngcn/graphconvolution_41_vars/weights_0:0 (256, 256)\ngcn/graphconvolution_41_vars/weights_1:0 (256, 256)\ngcn/graphconvolution_41_vars/bias:0 (256,)\ngcn/graphconvolution_42_vars/weights_0:0 (256, 128)\ngcn/graphconvolution_42_vars/weights_1:0 (256, 128)\ngcn/graphconvolution_42_vars/bias:0 (128,)\ngcn/graphconvolution_43_vars/weights_0:0 (128, 3)\ngcn/graphconvolution_43_vars/weights_1:0 (128, 3)\ngcn/graphconvolution_43_vars/bias:0 (3,)\n"
  },
  {
    "path": "utils/migrations/official_model_converter.py",
    "content": "import pickle\nimport torch\nimport numpy as np\n\n\nwith open(\"checkpoints/debug/migration/p2m-tensorflow.pkl\", \"rb\") as f:\n    official = pickle.load(f)\nfor k, v in official.items():\n    print(k, v.shape)\n\nwith open(\"checkpoints/debug/host_template_256/000001_000001.pt\", \"rb\") as f:\n    host = torch.load(f)\nfor k, v in host[\"model\"].items():\n    print(k, v.shape)\n\nwith open(\"utils/migrations/official_config_pytorch_256.txt\", \"r\") as f:\n    pt_names = [line.split()[0] for line in f.readlines()]\nwith open(\"utils/migrations/official_config_tensorflow_256.txt\", \"r\") as f:\n    tf_names = [line.split()[0] for line in f.readlines()]\nfor pt, tf in zip(pt_names, tf_names):\n    if host[\"model\"][pt].shape != official[tf].shape:\n        data = np.transpose(official[tf], (3, 2, 0, 1))\n    else:\n        data = official[tf]\n    print(pt, tf, host[\"model\"][pt].data.shape, data.shape)\n    host[\"model\"][pt].data = torch.from_numpy(data)\n\ntorch.save(host, \"checkpoints/debug/migration/network_official.pt\")"
  },
  {
    "path": "utils/migrations/tensorflow_to_pkl.py",
    "content": "import pickle\n\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\n\nnn_distance_module = tf.load_op_library('tf_ops/libtf_nndistance.so')\n\n\ndef nn_distance(xyz1, xyz2):\n    '''\n    Computes the distance of nearest neighbors for a pair of point clouds\n    input: xyz1: (batch_size,#points_1,3)  the first point cloud\n    input: xyz2: (batch_size,#points_2,3)  the second point cloud\n    output: dist1: (batch_size,#point_1)   distance from first to second\n    output: idx1:  (batch_size,#point_1)   nearest neighbor from first to second\n    output: dist2: (batch_size,#point_2)   distance from second to first\n    output: idx2:  (batch_size,#point_2)   nearest neighbor from second to first\n    '''\n    return nn_distance_module.nn_distance(xyz1, xyz2)\n\n\n@ops.RegisterGradient('NnDistance')\ndef _nn_distance_grad(op, grad_dist1, grad_idx1, grad_dist2, grad_idx2):\n    xyz1 = op.inputs[0]\n    xyz2 = op.inputs[1]\n    idx1 = op.outputs[1]\n    idx2 = op.outputs[3]\n    return nn_distance_module.nn_distance_grad(xyz1, xyz2, grad_dist1, idx1, grad_dist2, idx2)\n\n\npickle_format = dict()\n\nwith tf.Session() as sess:\n    new_saver = tf.train.import_meta_graph('checkpoint/gcn.ckpt.meta')\n    what = new_saver.restore(sess, 'checkpoint/gcn.ckpt')\n    all_vars = tf.get_collection(ops.GraphKeys.GLOBAL_VARIABLES)\n    for v in all_vars:\n        try:\n            v_ = sess.run(v)\n            pickle_format[v.name] = v_\n        except:\n            pass\n    with open(\"result.pkl\", \"wb\") as f:\n        pickle.dump(pickle_format, f)\n"
  },
  {
    "path": "utils/migrations/validate_dataset_all.py",
    "content": "import os\nimport sys\n\nimport requests\nfrom tqdm import tqdm\n\n\ndef go(file_path, subset):\n    shapenet_root = \"datasets/data/shapenet\"\n    with open(file_path, \"r\") as f, open(os.path.join(shapenet_root, \"meta\", subset + \"_all.txt\"), \"w\") as g:\n        for line in tqdm(f.readlines()):\n            _, _, label, filename, _, index = line.strip().split(\"/\")\n            converted = label + \"_\" + filename + \"_\" + index\n            file_path = os.path.join(shapenet_root, \"data\", label + \"/\" + filename + \"_\" + index)\n            if not os.path.exists(file_path):\n                print(\"fail! \" + file_path)\n                continue\n            print(converted, file=g)\n\n\ngo(sys.argv[1], \"train\")\ngo(sys.argv[2], \"test\")"
  },
  {
    "path": "utils/tensor.py",
    "content": "\"\"\"\nHelper functions that have not yet been implemented in pytorch\n\"\"\"\n\nimport torch\n\n\ndef recursive_detach(t):\n    if isinstance(t, torch.Tensor):\n        return t.detach()\n    elif isinstance(t, list):\n        return [recursive_detach(x) for x in t]\n    elif isinstance(t, dict):\n        return {k: recursive_detach(v) for k, v in t.items()}\n    else:\n        return t\n\n\ndef batch_mm(matrix, batch):\n    \"\"\"\n    https://github.com/pytorch/pytorch/issues/14489\n    \"\"\"\n    # TODO: accelerate this with batch operations\n    return torch.stack([matrix.mm(b) for b in batch], dim=0)\n\n\ndef dot(x, y, sparse=False):\n    \"\"\"Wrapper for torch.matmul (sparse vs dense).\"\"\"\n    if sparse:\n        return batch_mm(x, y)\n    else:\n        return torch.matmul(x, y)\n"
  },
  {
    "path": "utils/vis/renderer.py",
    "content": "import cv2\nimport neural_renderer as nr\nimport numpy as np\nimport torch\n\n\ndef _process_render_result(img, height, width):\n    if isinstance(img, torch.Tensor):\n        img = img.cpu().numpy()\n    if img.ndim == 2:\n        # assuming single channel image\n        img = np.expand_dims(img, axis=0)\n    if img.shape[-1] == 3:\n        # assuming [height, width, rgb]\n        img = np.moveaxis(img, -1, 0)\n    # return 3 * width * height or width * height, in range [0, 1]\n    return np.clip(img[:height, :width], 0, 1)\n\n\ndef _mix_render_result_with_image(rgb, alpha, image):\n    alpha = np.expand_dims(alpha, 0)\n    return alpha * rgb + (1 - alpha) * image\n\n\nclass MeshRenderer(object):\n\n    def __init__(self, camera_f, camera_c, mesh_pos):\n        self.colors = {'pink': np.array([.9, .7, .7]),\n                       'light_blue': np.array([0.65098039, 0.74117647, 0.85882353]),\n                       'light_green': np.array([165., 216., 168.]) / 255,\n                       'purple': np.array([216., 193., 165.]) / 255,\n                       'orange': np.array([216., 165., 213.]) / 255,\n                       'light_yellow': np.array([213., 216., 165.]) / 255,\n                       }\n        self.camera_f, self.camera_c, self.mesh_pos = camera_f, camera_c, mesh_pos\n        self.renderer = nr.Renderer(camera_mode='projection',\n                                    light_intensity_directional=.8,\n                                    light_intensity_ambient=.3,\n                                    background_color=[1., 1., 1.],\n                                    light_direction=[0., 0., -1.])\n\n    def _render_mesh(self, vertices: np.ndarray, faces: np.ndarray, width, height,\n                     camera_k, camera_dist_coeffs, rvec, tvec, color=None):\n        # render a square image, then crop\n        img_size = max(height, width)\n\n        # This is not thread safe!\n        self.renderer.image_size = img_size\n\n        vertices = torch.tensor(vertices, dtype=torch.float32)\n        faces = torch.tensor(faces, dtype=torch.int32)\n\n        if color is None:\n            color = 'light_blue'\n        color = self.colors[color]\n        texture_size = 2\n        textures = torch.tensor(color, dtype=torch.float32) \\\n            .repeat(faces.size(0), texture_size, texture_size, texture_size, 1)\n\n        camera_k = torch.tensor(camera_k, dtype=torch.float32)\n        rotmat = torch.tensor(cv2.Rodrigues(rvec)[0], dtype=torch.float32)\n        tvec = torch.tensor(tvec, dtype=torch.float32)\n        camera_dist_coeffs = torch.tensor(camera_dist_coeffs, dtype=torch.float32)\n\n        rgb, _, alpha = self.renderer.render(vertices.unsqueeze(0).cuda(),\n                                             faces.unsqueeze(0).cuda(),\n                                             textures.unsqueeze(0).cuda(),\n                                             K=camera_k.unsqueeze(0).cuda(),\n                                             R=rotmat.unsqueeze(0).cuda(),\n                                             t=tvec.unsqueeze(0).cuda(),\n                                             dist_coeffs=camera_dist_coeffs.unsqueeze(0).cuda(),\n                                             orig_size=img_size)\n        # use the extra dimension of alpha for broadcasting\n        alpha = _process_render_result(alpha[0], height, width)\n        rgb = _process_render_result(rgb[0], height, width)\n\n        return rgb, alpha\n\n    def _render_pointcloud(self, vertices: np.ndarray, width, height,\n                           camera_k, camera_dist_coeffs, rvec, tvec, color=None):\n        if color is None:\n            color = 'pink'\n        color = self.colors[color]\n\n        # return pointcloud\n        vertices_2d = cv2.projectPoints(np.expand_dims(vertices, -1),\n                                        rvec, tvec, camera_k, camera_dist_coeffs)[0]\n        vertices_2d = np.reshape(vertices_2d, (-1, 2))\n        alpha = np.zeros((height, width, 3), np.float)\n        whiteboard = np.ones((3, height, width), np.float)\n        if np.isnan(vertices_2d).any():\n            return whiteboard, alpha\n        for x, y in vertices_2d:\n            cv2.circle(alpha, (int(x), int(y)), radius=1, color=(1., 1., 1.), thickness=-1)\n        rgb = _process_render_result(alpha * color[None, None, :], height, width)\n        alpha = _process_render_result(alpha[:, :, 0], height, width)\n        rgb = _mix_render_result_with_image(rgb, alpha[0], whiteboard)\n        return rgb, alpha\n\n    def visualize_reconstruction(self, gt_coord, coord, faces, image, mesh_only=False, **kwargs):\n        camera_k = np.array([[self.camera_f[0], 0, self.camera_c[0]],\n                             [0, self.camera_f[1], self.camera_c[1]],\n                             [0, 0, 1]])\n        # inverse y and z, equivalent to inverse x, but gives positive z\n        rvec = np.array([np.pi, 0., 0.], dtype=np.float32)\n        tvec = np.zeros(3, dtype=np.float32)\n        dist_coeffs = np.zeros(5, dtype=np.float32)\n        mesh, _ = self._render_mesh(coord, faces, image.shape[2], image.shape[1],\n                                    camera_k, dist_coeffs, rvec, tvec, **kwargs)\n        if mesh_only:\n            return mesh\n\n        gt_pc, _ = self._render_pointcloud(gt_coord, image.shape[2], image.shape[1],\n                                           camera_k, dist_coeffs, rvec, tvec, **kwargs)\n        pred_pc, _ = self._render_pointcloud(coord, image.shape[2], image.shape[1],\n                                             camera_k, dist_coeffs, rvec, tvec, **kwargs)\n        return np.concatenate((image, gt_pc, pred_pc, mesh), 2)\n\n    def p2m_batch_visualize(self, batch_input, batch_output, faces, atmost=3):\n        \"\"\"\n        Every thing is tensor for now, needs to move to cpu and convert to numpy\n        \"\"\"\n        batch_size = min(batch_input[\"images_orig\"].size(0), atmost)\n        images_stack = []\n        mesh_pos = np.array(self.mesh_pos)\n        for i in range(batch_size):\n            image = batch_input[\"images_orig\"][i].cpu().numpy()\n            gt_points = batch_input[\"points\"][i].cpu().numpy() + mesh_pos\n            for j in range(3):\n                for k in ([\"pred_coord_before_deform\", \"pred_coord\"] if j == 0 else [\"pred_coord\"]):\n                    coord = batch_output[k][j][i].cpu().numpy() + mesh_pos\n                    images_stack.append(self.visualize_reconstruction(gt_points, coord, faces[j].cpu().numpy(), image))\n        return torch.from_numpy(np.concatenate(images_stack, 1))\n"
  }
]