[
  {
    "path": ".gitignore",
    "content": ".vscode\n__pycache__\nlog\n.idea\ndata/pcn\n"
  },
  {
    "path": "README.md",
    "content": "# PCN: Point Completion Network\n\n## Introduction\n\n![PCN](images/network.png)\n\nThis is implementation of PCN——Point Completion Network in pytorch. PCN is an autoencoder for point cloud completion. As for the details of the paper, please refer to [arXiv](https://arxiv.org/pdf/1808.00671.pdf).\n\n## Environment\n\n* Ubuntu 18.04 LTS\n* Python 3.7.9\n* PyTorch 1.7.0\n* CUDA 10.1.243\n\n## Prerequisite\n\nCompile for cd and emd:\n\n```shell\ncd extensions/chamfer_distance\npython setup.py install\ncd ../earth_movers_distance\npython setup.py install\n```\n\n**Hint**: Don't compile on Windows platform.\n\nAs for other modules, please install by:\n\n```shell\npip install -r requirements.txt\n```\n\n## Dataset\n\nPlease reference `render` and `sample` to create your own dataset. Also, we decompressed all `.lmdb` data from [PCN](https://drive.google.com/drive/folders/1M_lJN14Ac1RtPtEQxNlCV9e8pom3U6Pa) data into `.ply` data which has smaller volume 8.1G and upload it into Google Drive. Here is the shared link: [Google Drive](https://drive.google.com/file/d/1OvvRyx02-C_DkzYiJ5stpin0mnXydHQ7/view?usp=sharing).\n\n## Training\n\nIn order to train the model, please use script:\n\n```shell\npython train.py --exp_name PCN_16384 --lr 0.0001 --epochs 400 --batch_size 32 --coarse_loss cd --num_workers 8\n```\n\nIf you want to use emd to calculate the distances between coarse point clouds, please use script:\n\n```shell\npython train.py --exp_name PCN_16384 --lr 0.0001 --epochs 400 --batch_size 32 --coarse_loss emd --num_workers 8\n```\n\n## Testing\n\nIn order to test the model, please use follow script:\n\n```shell\npython test.py --exp_name PCN_16384 --ckpt_path <path of pretrained model> --batch_size 32 --num_workers 8\n```\n\nBecause of the computation cost for calculating emd for 16384 points, I split out the emd's evaluation. The parameter `--emd` is used for testing emd. 
The parameter `--novel` selects the novel test split, whose categories are unseen during training. The parameter `--save` saves each prediction as a `.ply` file and renders the result as a `.png` image.\n\n## Pretrained Model\n\nThe pretrained model is in `checkpoint/`.\n\n## Results\n\nI trained the model on an Nvidia 1080 Ti GPU with L1 Chamfer Distance for 400 epochs, using an initial learning rate of 0.0001 decayed by 0.7 every 50 epochs and a batch size of 32. The best model is the one with the minimum L1 CD on the validation data.\n\n### Quantitative Result\n\nThe threshold for F-Score is 0.01.\n\n#### Seen Categories\n\nCategory | L1_CD(1e-3) | L2_CD(1e-4) | EMD(1e-3) | F-Score(%)\n-- | -- | -- | -- | --\nAirplane | 6.0028 | 1.7323 | 10.5922 | 86.2954\nCabinet | 11.2092 | 4.7351 | 27.1505 | 61.6697\nCar | 9.1304 | 2.7157 | 14.3661 | 70.5874\nChair | 12.0340 | 5.8717 | 22.4904 | 58.2958\nLamp | 12.6754 | 7.5891 | 58.7799 | 57.8894\nSofa | 12.8218 | 6.4572 | 19.2891 | 53.4009\nTable | 9.8840 | 4.5669 | 23.7691 | 70.9750\nVessel | 10.1603 | 4.2766 | 17.9761 | 66.6521\n**Average** | 10.4897 | 4.7431 | 24.3017 | 65.7207\n\n#### Unseen Categories\n\nCategory | L1_CD(1e-3) | L2_CD(1e-4) | EMD(1e-3) | F-Score(%)\n-- | -- | -- | -- | --\nBus       | 10.5110 | 4.4648  | 17.0274 | 66.9774\nBed       | 24.9320 | 32.4809 | 42.7974 | 32.2265\nBookshelf | 15.8186 | 13.1783 | 28.5608 | 50.0337\nBench     | 12.1345 | 7.3033  | 12.7497 | 62.4376\nGuitar    | 11.4964 | 5.9601  | 28.4223 | 59.4976\nMotorbike | 15.3426 | 8.7723  | 21.8634 | 44.7431\nSkateboard| 13.1909 | 7.9711  | 17.9910 | 58.4427\nPistol    | 17.4897 | 15.5062 | 33.8937 | 45.6073\n**Average**  | 15.1145 | 11.9546 | 25.4132 | 52.4958\n\n### Qualitative Result\n\n#### Seen Categories\n\n![seen](images/seen_categories.png)\n\n#### Unseen Categories\n\n![unseen](images/unseen_categories.png)\n\n## Citation\n\n* [PCN: Point Completion Network](https://arxiv.org/pdf/1808.00671.pdf)\n* [PCN's official Tensorflow 
implementation](https://github.com/wentaoyuan/pcn)\n"
  },
  {
    "path": "data/README.md",
    "content": "# data\n\nPlease download `PCN.zip` from Cloud and unzip it here.\n\n```shell\nunzip PCN.zip\n```"
  },
  {
    "path": "dataset/__init__.py",
    "content": "from dataset.shapenet import ShapeNet\n"
  },
  {
    "path": "dataset/shapenet.py",
    "content": "import sys\nsys.path.append('.')\n\nimport os\nimport random\n\nimport torch\nimport torch.utils.data as data\nimport numpy as np\nimport open3d as o3d\n\n\nclass ShapeNet(data.Dataset):\n    \"\"\"\n    ShapeNet dataset in \"PCN: Point Completion Network\". It contains 28974 training\n    samples while each complete samples corresponds to 8 viewpoint partial scans, 800\n    validation samples and 1200 testing samples.\n    \"\"\"\n    \n    def __init__(self, dataroot, split, category):\n        assert split in ['train', 'valid', 'test', 'test_novel'], \"split error value!\"\n\n        self.cat2id = {\n            # seen categories\n            \"airplane\"  : \"02691156\",  # plane\n            \"cabinet\"   : \"02933112\",  # dresser\n            \"car\"       : \"02958343\",\n            \"chair\"     : \"03001627\",\n            \"lamp\"      : \"03636649\",\n            \"sofa\"      : \"04256520\",\n            \"table\"     : \"04379243\",\n            \"vessel\"    : \"04530566\",  # boat\n            \n            # alis for some seen categories\n            \"boat\"      : \"04530566\",  # vessel\n            \"couch\"     : \"04256520\",  # sofa\n            \"dresser\"   : \"02933112\",  # cabinet\n            \"airplane\"  : \"02691156\",  # airplane\n            \"watercraft\": \"04530566\",  # boat\n\n            # unseen categories\n            \"bus\"       : \"02924116\",\n            \"bed\"       : \"02818832\",\n            \"bookshelf\" : \"02871439\",\n            \"bench\"     : \"02828884\",\n            \"guitar\"    : \"03467517\",\n            \"motorbike\" : \"03790512\",\n            \"skateboard\": \"04225987\",\n            \"pistol\"    : \"03948459\",\n        }\n\n        # self.id2cat = {cat_id: cat for cat, cat_id in self.cat2id.items()}\n\n        self.dataroot = dataroot\n        self.split = split\n        self.category = category\n\n        self.partial_paths, self.complete_paths = self._load_data()\n    \n  
  def __getitem__(self, index):\n        if self.split == 'train':\n            partial_path = self.partial_paths[index].format(random.randint(0, 7))\n        else:\n            partial_path = self.partial_paths[index]\n        complete_path = self.complete_paths[index]\n\n        partial_pc = self.random_sample(self.read_point_cloud(partial_path), 2048)\n        complete_pc = self.random_sample(self.read_point_cloud(complete_path), 16384)\n\n        return torch.from_numpy(partial_pc), torch.from_numpy(complete_pc)\n\n    def __len__(self):\n        return len(self.complete_paths)\n\n    def _load_data(self):\n        with open(os.path.join(self.dataroot, '{}.list'.format(self.split)), 'r') as f:\n            lines = f.read().splitlines()\n\n        if self.category != 'all':\n            lines = list(filter(lambda x: x.startswith(self.cat2id[self.category]), lines))\n        \n        partial_paths, complete_paths = list(), list()\n\n        for line in lines:\n            category, model_id = line.split('/')\n            if self.split == 'train':\n                partial_paths.append(os.path.join(self.dataroot, self.split, 'partial', category, model_id + '_{}.ply'))\n            else:\n                partial_paths.append(os.path.join(self.dataroot, self.split, 'partial', category, model_id + '.ply'))\n            complete_paths.append(os.path.join(self.dataroot, self.split, 'complete', category, model_id + '.ply'))\n        \n        return partial_paths, complete_paths\n    \n    def read_point_cloud(self, path):\n        # load a .ply file and return its points as an (N, 3) float32 array\n        pc = o3d.io.read_point_cloud(path)\n        return np.array(pc.points, np.float32)\n    \n    def random_sample(self, pc, n):\n        # randomly pick n points; sample with replacement if the cloud has fewer than n\n        idx = np.random.permutation(pc.shape[0])\n        if idx.shape[0] < n:\n            idx = np.concatenate([idx, np.random.randint(pc.shape[0], size=n-pc.shape[0])])\n        return pc[idx[:n]]\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer3D.cu",
    "content": "\n#include <stdio.h>\n#include <ATen/ATen.h>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n#include <vector>\n\n\n\n__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){\n\tconst int batch=512;\n\t__shared__ float buf[batch*3];\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tfor (int k2=0;k2<m;k2+=batch){\n\t\t\tint end_k=min(m,k2+batch)-k2;\n\t\t\tfor (int j=threadIdx.x;j<end_k*3;j+=blockDim.x){\n\t\t\t\tbuf[j]=xyz2[(i*m+k2)*3+j];\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t\tfor (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){\n\t\t\t\tfloat x1=xyz[(i*n+j)*3+0];\n\t\t\t\tfloat y1=xyz[(i*n+j)*3+1];\n\t\t\t\tfloat z1=xyz[(i*n+j)*3+2];\n\t\t\t\tint best_i=0;\n\t\t\t\tfloat best=0;\n\t\t\t\tint end_ka=end_k-(end_k&3);\n\t\t\t\tif (end_ka==batch){\n\t\t\t\t\tfor (int k=0;k<batch;k+=4){\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+0]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+1]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+2]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (k==0 || d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+3]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+4]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+5]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+6]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+7]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+8]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+9]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+10]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+11]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif 
(d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+3;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}else{\n\t\t\t\t\tfor (int k=0;k<end_ka;k+=4){\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+0]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+1]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+2]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (k==0 || d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+3]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+4]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+5]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+6]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+7]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+8]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfloat x2=buf[k*3+9]-x1;\n\t\t\t\t\t\t\tfloat y2=buf[k*3+10]-y1;\n\t\t\t\t\t\t\tfloat z2=buf[k*3+11]-z1;\n\t\t\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\t\t\tif (d<best){\n\t\t\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\t\t\tbest_i=k+k2+3;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (int k=end_ka;k<end_k;k++){\n\t\t\t\t\tfloat x2=buf[k*3+0]-x1;\n\t\t\t\t\tfloat y2=buf[k*3+1]-y1;\n\t\t\t\t\tfloat z2=buf[k*3+2]-z1;\n\t\t\t\t\tfloat d=x2*x2+y2*y2+z2*z2;\n\t\t\t\t\tif (k==0 || d<best){\n\t\t\t\t\t\tbest=d;\n\t\t\t\t\t\tbest_i=k+k2;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (k2==0 || result[(i*n+j)]>best){\n\t\t\t\t\tresult[(i*n+j)]=best;\n\t\t\t\t\tresult_i[(i*n+j)]=best_i;\n\t\t\t\t}\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t}\n\t}\n}\n// int chamfer_cuda_forward(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i, cudaStream_t stream){\nint 
chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2){\n\n\tconst auto batch_size = xyz1.size(0);\n\tconst auto n = xyz1.size(1); //num_points point cloud A\n\tconst auto m = xyz2.size(1); //num_points point cloud B\n\n\tNmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data<float>(), m, xyz2.data<float>(), dist1.data<float>(), idx1.data<int>());\n\tNmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data<float>(), n, xyz1.data<float>(), dist2.data<float>(), idx2.data<int>());\n\n\tcudaError_t err = cudaGetLastError();\n\t  if (err != cudaSuccess) {\n\t    printf(\"error in nnd updateOutput: %s\\n\", cudaGetErrorString(err));\n\t    //THError(\"aborting\");\n\t    return 0;\n\t  }\n\t  return 1;\n\n\n}\n__global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tfor (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){\n\t\t\tfloat x1=xyz1[(i*n+j)*3+0];\n\t\t\tfloat y1=xyz1[(i*n+j)*3+1];\n\t\t\tfloat z1=xyz1[(i*n+j)*3+2];\n\t\t\tint j2=idx1[i*n+j];\n\t\t\tfloat x2=xyz2[(i*m+j2)*3+0];\n\t\t\tfloat y2=xyz2[(i*m+j2)*3+1];\n\t\t\tfloat z2=xyz2[(i*m+j2)*3+2];\n\t\t\tfloat g=grad_dist1[i*n+j]*2;\n\t\t\tatomicAdd(&(grad_xyz1[(i*n+j)*3+0]),g*(x1-x2));\n\t\t\tatomicAdd(&(grad_xyz1[(i*n+j)*3+1]),g*(y1-y2));\n\t\t\tatomicAdd(&(grad_xyz1[(i*n+j)*3+2]),g*(z1-z2));\n\t\t\tatomicAdd(&(grad_xyz2[(i*m+j2)*3+0]),-(g*(x1-x2)));\n\t\t\tatomicAdd(&(grad_xyz2[(i*m+j2)*3+1]),-(g*(y1-y2)));\n\t\t\tatomicAdd(&(grad_xyz2[(i*m+j2)*3+2]),-(g*(z1-z2)));\n\t\t}\n\t}\n}\n// int chamfer_cuda_backward(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2, cudaStream_t stream){\nint chamfer_cuda_backward(at::Tensor 
xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2){\n\t// cudaMemset(grad_xyz1,0,b*n*3*4);\n\t// cudaMemset(grad_xyz2,0,b*m*3*4);\n\t\n\tconst auto batch_size = xyz1.size(0);\n\tconst auto n = xyz1.size(1); //num_points point cloud A\n\tconst auto m = xyz2.size(1); //num_points point cloud B\n\n\tNmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data<float>(),m,xyz2.data<float>(),graddist1.data<float>(),idx1.data<int>(),gradxyz1.data<float>(),gradxyz2.data<float>());\n\tNmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,m,xyz2.data<float>(),n,xyz1.data<float>(),graddist2.data<float>(),idx2.data<int>(),gradxyz2.data<float>(),gradxyz1.data<float>());\n\t\n\tcudaError_t err = cudaGetLastError();\n\t  if (err != cudaSuccess) {\n\t    printf(\"error in nnd get grad: %s\\n\", cudaGetErrorString(err));\n\t    //THError(\"aborting\");\n\t    return 0;\n\t  }\n\t  return 1;\n\t\n}\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_3D.egg-info/PKG-INFO",
    "content": "Metadata-Version: 2.1\nName: chamfer-3D\nVersion: 0.0.0\nSummary: UNKNOWN\nHome-page: UNKNOWN\nLicense: UNKNOWN\nPlatform: UNKNOWN\n\nUNKNOWN\n\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_3D.egg-info/SOURCES.txt",
    "content": "chamfer3D.cu\nchamfer_cuda.cpp\nsetup.py\nchamfer_3D.egg-info/PKG-INFO\nchamfer_3D.egg-info/SOURCES.txt\nchamfer_3D.egg-info/dependency_links.txt\nchamfer_3D.egg-info/top_level.txt"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_3D.egg-info/dependency_links.txt",
    "content": "\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_3D.egg-info/top_level.txt",
    "content": "chamfer_3D\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_cuda.cpp",
    "content": "#include <torch/torch.h>\n#include <vector>\n\n///TMP\n//#include \"common.h\"\n/// NOT TMP\n\t\n\nint chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2);\n\n\nint chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2);\n\n\n\n\nint chamfer_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2) {\n    return chamfer_cuda_forward(xyz1, xyz2, dist1, dist2, idx1, idx2);\n}\n\n\nint chamfer_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, \n\t\t\t\t\t  at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2) {\n\n    return chamfer_cuda_backward(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2);\n}\n\n\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"forward\", &chamfer_forward, \"chamfer forward (CUDA)\");\n  m.def(\"backward\", &chamfer_backward, \"chamfer backward (CUDA)\");\n}\n"
  },
  {
    "path": "extensions/chamfer_distance/chamfer_distance.py",
    "content": "import importlib\nimport os\n\nimport torch\nfrom torch import nn\nfrom torch.autograd import Function\n\n\nchamfer_found = importlib.find_loader(\"chamfer_3D\") is not None\nif not chamfer_found:\n    ## Cool trick from https://github.com/chrdiller\n    print(\"Jitting Chamfer 3D\")\n\n    from torch.utils.cpp_extension import load\n    chamfer_3D = load(name=\"chamfer_3D\",\n          sources=[\n              \"/\".join(os.path.abspath(__file__).split('/')[:-1] + [\"chamfer_cuda.cpp\"]),\n              \"/\".join(os.path.abspath(__file__).split('/')[:-1] + [\"chamfer3D.cu\"]),\n              ])\n    # print(\"Loaded JIT 3D CUDA chamfer distance\")\n\nelse:\n    import chamfer_3D\n    # print(\"Loaded compiled 3D CUDA chamfer distance\")\n\n\n# Chamfer's distance module @thibaultgroueix\n# GPU tensors only\nclass chamfer_3DFunction(Function):\n    @staticmethod\n    def forward(ctx, xyz1, xyz2):\n        \"\"\"\n        xyz1: (B, N, 3)\n        xyz2: (B, M, 3)\n        \"\"\"\n        batchsize, n, _ = xyz1.size()\n        _, m, _ = xyz2.size()\n        device = xyz1.device\n\n        dist1 = torch.zeros(batchsize, n)\n        dist2 = torch.zeros(batchsize, m)\n\n        idx1 = torch.zeros(batchsize, n).type(torch.IntTensor)\n        idx2 = torch.zeros(batchsize, m).type(torch.IntTensor)\n\n        dist1 = dist1.to(device)\n        dist2 = dist2.to(device)\n        idx1 = idx1.to(device)\n        idx2 = idx2.to(device)\n        torch.cuda.set_device(device)\n\n        chamfer_3D.forward(xyz1, xyz2, dist1, dist2, idx1, idx2)\n        ctx.save_for_backward(xyz1, xyz2, idx1, idx2)\n        return dist1, dist2, idx1, idx2\n\n    @staticmethod\n    def backward(ctx, graddist1, graddist2, gradidx1, gradidx2):\n        xyz1, xyz2, idx1, idx2 = ctx.saved_tensors\n        graddist1 = graddist1.contiguous()\n        graddist2 = graddist2.contiguous()\n        device = graddist1.device\n\n        gradxyz1 = torch.zeros(xyz1.size())\n        gradxyz2 = 
torch.zeros(xyz2.size())\n\n        gradxyz1 = gradxyz1.to(device)\n        gradxyz2 = gradxyz2.to(device)\n        chamfer_3D.backward(\n            xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2\n        )\n        return gradxyz1, gradxyz2\n\n\nclass ChamferDistance(nn.Module):\n    def __init__(self):\n        super(ChamferDistance, self).__init__()\n\n    def forward(self, input1, input2):\n        \"\"\"\n        input1: (B, N, 3)\n        input2: (B, M, 3)\n        \"\"\"\n        dist1, dist2, _, _ = chamfer_3DFunction.apply(input1, input2)\n        return dist1, dist2\n"
  },
  {
    "path": "extensions/chamfer_distance/setup.py",
    "content": "from setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\n\nsetup(\n    name='chamfer_3D',\n    ext_modules=[\n        CUDAExtension('chamfer_3D', [\n            \"/\".join(__file__.split('/')[:-1] + ['chamfer_cuda.cpp']),\n            \"/\".join(__file__.split('/')[:-1] + ['chamfer3D.cu']),\n        ]),\n    ],\n    cmdclass={\n        'build_ext': BuildExtension\n    })\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd.cpp",
    "content": "#ifndef _EMD\n#define _EMD\n\n#include <vector>\n#include <torch/extension.h>\n\n//CUDA declarations\nat::Tensor ApproxMatchForward(\n    const at::Tensor xyz1,\n    const at::Tensor xyz2);\n\nat::Tensor MatchCostForward(\n    const at::Tensor xyz1,\n    const at::Tensor xyz2,\n    const at::Tensor match);\n\nstd::vector<at::Tensor> MatchCostBackward(\n    const at::Tensor grad_cost,\n    const at::Tensor xyz1,\n    const at::Tensor xyz2,\n    const at::Tensor match);\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"approxmatch_forward\", &ApproxMatchForward,\"ApproxMatch forward (CUDA)\");\n  m.def(\"matchcost_forward\", &MatchCostForward,\"MatchCost forward (CUDA)\");\n  m.def(\"matchcost_backward\", &MatchCostBackward,\"MatchCost backward (CUDA)\");\n}\n\n#endif\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd.py",
    "content": "import torch\nimport torch.nn as nn\nimport emd_cuda\n\n\nclass EarthMoverDistanceFunction(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, xyz1, xyz2):\n        xyz1 = xyz1.contiguous()\n        xyz2 = xyz2.contiguous()\n        assert xyz1.is_cuda and xyz2.is_cuda, \"Only support cuda currently.\"\n        match = emd_cuda.approxmatch_forward(xyz1, xyz2)\n        cost = emd_cuda.matchcost_forward(xyz1, xyz2, match)\n        ctx.save_for_backward(xyz1, xyz2, match)\n        return cost\n\n    @staticmethod\n    def backward(ctx, grad_cost):\n        xyz1, xyz2, match = ctx.saved_tensors\n        grad_cost = grad_cost.contiguous()\n        grad_xyz1, grad_xyz2 = emd_cuda.matchcost_backward(grad_cost, xyz1, xyz2, match)\n        return grad_xyz1, grad_xyz2\n\n\nclass EarthMoverDistance(nn.Module):\n    def __init__(self):\n        super().__init__()\n    \n    def forward(self, xyz1, xyz2):\n        \"\"\"\n        Args:\n            xyz1 (torch.Tensor): (b, N1, 3)\n            xyz2 (torch.Tensor): (b, N2, 3)\n\n        Returns:\n            cost (torch.Tensor): (b)\n        \"\"\"\n        if xyz1.dim() == 2:\n            xyz1 = xyz1.unsqueeze(0)\n        if xyz2.dim() == 2:\n            xyz2 = xyz2.unsqueeze(0)\n        cost = EarthMoverDistanceFunction.apply(xyz1, xyz2)\n        return cost\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd_cuda.egg-info/PKG-INFO",
    "content": "Metadata-Version: 2.1\nName: emd-cuda\nVersion: 0.0.0\nSummary: UNKNOWN\nHome-page: UNKNOWN\nLicense: UNKNOWN\nPlatform: UNKNOWN\n\nUNKNOWN\n\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd_cuda.egg-info/SOURCES.txt",
    "content": "emd.cpp\nemd_kernel.cu\nsetup.py\nemd_cuda.egg-info/PKG-INFO\nemd_cuda.egg-info/SOURCES.txt\nemd_cuda.egg-info/dependency_links.txt\nemd_cuda.egg-info/top_level.txt"
  },
  {
    "path": "extensions/earth_movers_distance/emd_cuda.egg-info/dependency_links.txt",
    "content": "\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd_cuda.egg-info/top_level.txt",
    "content": "emd_cuda\n"
  },
  {
    "path": "extensions/earth_movers_distance/emd_kernel.cu",
    "content": "/**********************************\n * Original Author: Haoqiang Fan\n * Modified by: Kaichun Mo\n *********************************/\n\n#ifndef _EMD_KERNEL\n#define _EMD_KERNEL\n\n#include <cmath>\n#include <vector>\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>  // at::cuda::getApplyGrid\n#include <THC/THC.h>\n\n#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x \" must be a CUDA tensor\")\n#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x \" must be contiguous\")\n#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)\n\n\n/********************************\n* Forward kernel for approxmatch\n*********************************/\n\ntemplate<typename scalar_t>\n__global__ void approxmatch(int b,int n,int m,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,scalar_t * __restrict__ match,scalar_t * temp){\n\tscalar_t * remainL=temp+blockIdx.x*(n+m)*2, * remainR=temp+blockIdx.x*(n+m)*2+n,*ratioL=temp+blockIdx.x*(n+m)*2+n+m,*ratioR=temp+blockIdx.x*(n+m)*2+n+m+n;\n\tscalar_t multiL,multiR;\n\tif (n>=m){\n\t\tmultiL=1;\n\t\tmultiR=n/m;\n\t}else{\n\t\tmultiL=m/n;\n\t\tmultiR=1;\n\t}\n\tconst int Block=1024;\n\t__shared__ scalar_t buf[Block*4];\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tfor (int j=threadIdx.x;j<n*m;j+=blockDim.x)\n\t\t\tmatch[i*n*m+j]=0;\n\t\tfor (int j=threadIdx.x;j<n;j+=blockDim.x)\n\t\t\tremainL[j]=multiL;\n\t\tfor (int j=threadIdx.x;j<m;j+=blockDim.x)\n\t\t\tremainR[j]=multiR;\n\t\t__syncthreads();\n\t\tfor (int j=7;j>=-2;j--){\n\t\t\tscalar_t level=-powf(4.0f,j);\n\t\t\tif (j==-2){\n\t\t\t\tlevel=0;\n\t\t\t}\n\t\t\tfor (int k0=0;k0<n;k0+=blockDim.x){\n\t\t\t\tint k=k0+threadIdx.x;\n\t\t\t\tscalar_t x1=0,y1=0,z1=0;\n\t\t\t\tif (k<n){\n\t\t\t\t\tx1=xyz1[i*n*3+k*3+0];\n\t\t\t\t\ty1=xyz1[i*n*3+k*3+1];\n\t\t\t\t\tz1=xyz1[i*n*3+k*3+2];\n\t\t\t\t}\n\t\t\t\tscalar_t suml=1e-9f;\n\t\t\t\tfor (int l0=0;l0<m;l0+=Block){\n\t\t\t\t\tint lend=min(m,l0+Block)-l0;\n\t\t\t\t\tfor (int 
l=threadIdx.x;l<lend;l+=blockDim.x){\n\t\t\t\t\t\tscalar_t x2=xyz2[i*m*3+l0*3+l*3+0];\n\t\t\t\t\t\tscalar_t y2=xyz2[i*m*3+l0*3+l*3+1];\n\t\t\t\t\t\tscalar_t z2=xyz2[i*m*3+l0*3+l*3+2];\n\t\t\t\t\t\tbuf[l*4+0]=x2;\n\t\t\t\t\t\tbuf[l*4+1]=y2;\n\t\t\t\t\t\tbuf[l*4+2]=z2;\n\t\t\t\t\t\tbuf[l*4+3]=remainR[l0+l];\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t\tfor (int l=0;l<lend;l++){\n\t\t\t\t\t\tscalar_t x2=buf[l*4+0];\n\t\t\t\t\t\tscalar_t y2=buf[l*4+1];\n\t\t\t\t\t\tscalar_t z2=buf[l*4+2];\n\t\t\t\t\t\tscalar_t d=level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1));\n\t\t\t\t\t\tscalar_t w=__expf(d)*buf[l*4+3];\n\t\t\t\t\t\tsuml+=w;\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t}\n\t\t\t\tif (k<n)\n\t\t\t\t\tratioL[k]=remainL[k]/suml;\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t\tfor (int l0=0;l0<m;l0+=blockDim.x){\n\t\t\t\tint l=l0+threadIdx.x;\n\t\t\t\tscalar_t x2=0,y2=0,z2=0;\n\t\t\t\tif (l<m){\n\t\t\t\t\tx2=xyz2[i*m*3+l*3+0];\n\t\t\t\t\ty2=xyz2[i*m*3+l*3+1];\n\t\t\t\t\tz2=xyz2[i*m*3+l*3+2];\n\t\t\t\t}\n\t\t\t\tscalar_t sumr=0;\n\t\t\t\tfor (int k0=0;k0<n;k0+=Block){\n\t\t\t\t\tint kend=min(n,k0+Block)-k0;\n\t\t\t\t\tfor (int k=threadIdx.x;k<kend;k+=blockDim.x){\n\t\t\t\t\t\tbuf[k*4+0]=xyz1[i*n*3+k0*3+k*3+0];\n\t\t\t\t\t\tbuf[k*4+1]=xyz1[i*n*3+k0*3+k*3+1];\n\t\t\t\t\t\tbuf[k*4+2]=xyz1[i*n*3+k0*3+k*3+2];\n\t\t\t\t\t\tbuf[k*4+3]=ratioL[k0+k];\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t\tfor (int k=0;k<kend;k++){\n\t\t\t\t\t\tscalar_t x1=buf[k*4+0];\n\t\t\t\t\t\tscalar_t y1=buf[k*4+1];\n\t\t\t\t\t\tscalar_t z1=buf[k*4+2];\n\t\t\t\t\t\tscalar_t w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*buf[k*4+3];\n\t\t\t\t\t\tsumr+=w;\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t}\n\t\t\t\tif (l<m){\n\t\t\t\t\tsumr*=remainR[l];\n\t\t\t\t\tscalar_t consumption=fminf(remainR[l]/(sumr+1e-9f),1.0f);\n\t\t\t\t\tratioR[l]=consumption*remainR[l];\n\t\t\t\t\tremainR[l]=fmaxf(0.0f,remainR[l]-sumr);\n\t\t\t\t}\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t\tfor 
(int k0=0;k0<n;k0+=blockDim.x){\n\t\t\t\tint k=k0+threadIdx.x;\n\t\t\t\tscalar_t x1=0,y1=0,z1=0;\n\t\t\t\tif (k<n){\n\t\t\t\t\tx1=xyz1[i*n*3+k*3+0];\n\t\t\t\t\ty1=xyz1[i*n*3+k*3+1];\n\t\t\t\t\tz1=xyz1[i*n*3+k*3+2];\n\t\t\t\t}\n\t\t\t\tscalar_t suml=0;\n\t\t\t\tfor (int l0=0;l0<m;l0+=Block){\n\t\t\t\t\tint lend=min(m,l0+Block)-l0;\n\t\t\t\t\tfor (int l=threadIdx.x;l<lend;l+=blockDim.x){\n\t\t\t\t\t\tbuf[l*4+0]=xyz2[i*m*3+l0*3+l*3+0];\n\t\t\t\t\t\tbuf[l*4+1]=xyz2[i*m*3+l0*3+l*3+1];\n\t\t\t\t\t\tbuf[l*4+2]=xyz2[i*m*3+l0*3+l*3+2];\n\t\t\t\t\t\tbuf[l*4+3]=ratioR[l0+l];\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t\tscalar_t rl=ratioL[k];\n\t\t\t\t\tif (k<n){\n\t\t\t\t\t\tfor (int l=0;l<lend;l++){\n\t\t\t\t\t\t\tscalar_t x2=buf[l*4+0];\n\t\t\t\t\t\t\tscalar_t y2=buf[l*4+1];\n\t\t\t\t\t\t\tscalar_t z2=buf[l*4+2];\n\t\t\t\t\t\t\tscalar_t w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*rl*buf[l*4+3];\n\t\t\t\t\t\t\tmatch[i*n*m+(l0+l)*n+k]+=w;\n\t\t\t\t\t\t\tsuml+=w;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\t__syncthreads();\n\t\t\t\t}\n\t\t\t\tif (k<n)\n\t\t\t\t\tremainL[k]=fmaxf(0.0f,remainL[k]-suml);\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t}\n\t}\n}\n\n//void approxmatchLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,scalar_t * match,scalar_t * temp){\n//\tapproxmatch<<<32,512>>>(b,n,m,xyz1,xyz2,match,temp);\n//}\n\n/* ApproxMatch forward interface\nInput:\n  xyz1: (B, N1, 3)  # dataset_points\n  xyz2: (B, N2, 3)  # query_points\nOutput:\n  match: (B, N2, N1)\n*/\nat::Tensor ApproxMatchForward(\n    const at::Tensor xyz1,\n    const at::Tensor xyz2){\n  const auto b = xyz1.size(0);\n  const auto n = xyz1.size(1);\n  const auto m = xyz2.size(1);\n\n  CHECK_EQ(xyz2.size(0), b);\n  CHECK_EQ(xyz1.size(2), 3);\n  CHECK_EQ(xyz2.size(2), 3);\n  CHECK_INPUT(xyz1);\n  CHECK_INPUT(xyz2);\n\n  auto match = at::zeros({b, m, n}, xyz1.type());\n  auto temp = at::zeros({b, (n+m)*2}, xyz1.type());\n\n  
AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), \"ApproxMatchForward\", ([&] {\n        approxmatch<scalar_t><<<32,512>>>(b, n, m, xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), temp.data<scalar_t>());\n  }));\n  THCudaCheck(cudaGetLastError());\n\n  return match;\n}\n\n\n/********************************\n* Forward kernel for matchcost\n*********************************/\n\ntemplate<typename scalar_t>\n__global__ void matchcost(int b,int n,int m,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ out){\n\t__shared__ scalar_t allsum[512];\n\tconst int Block=1024;\n\t__shared__ scalar_t buf[Block*3];\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tscalar_t subsum=0;\n\t\tfor (int k0=0;k0<n;k0+=blockDim.x){\n\t\t\tint k=k0+threadIdx.x;\n\t\t\tscalar_t x1=0,y1=0,z1=0;\n\t\t\tif (k<n){\n\t\t\t\tx1=xyz1[i*n*3+k*3+0];\n\t\t\t\ty1=xyz1[i*n*3+k*3+1];\n\t\t\t\tz1=xyz1[i*n*3+k*3+2];\n\t\t\t}\n\t\t\tfor (int l0=0;l0<m;l0+=Block){\n\t\t\t\tint lend=min(m,l0+Block)-l0;\n\t\t\t\tfor (int l=threadIdx.x;l<lend*3;l+=blockDim.x)\n\t\t\t\t\tbuf[l]=xyz2[i*m*3+l0*3+l];\n\t\t\t\t__syncthreads();\n\t\t\t\tif (k<n){\n\t\t\t\t\tfor (int l=0;l<lend;l++){\n\t\t\t\t\t\tscalar_t x2=buf[l*3+0];\n\t\t\t\t\t\tscalar_t y2=buf[l*3+1];\n\t\t\t\t\t\tscalar_t z2=buf[l*3+2];\n\t\t\t\t\t\tscalar_t d=(x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1);\n\t\t\t\t\t\tsubsum+=d*match[i*n*m+(l0+l)*n+k];\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t__syncthreads();\n\t\t\t}\n\t\t}\n\t\tallsum[threadIdx.x]=subsum;\n\t\tfor (int j=1;j<blockDim.x;j<<=1){\n\t\t\t__syncthreads();\n\t\t\tif ((threadIdx.x&j)==0 && threadIdx.x+j<blockDim.x){\n\t\t\t\tallsum[threadIdx.x]+=allsum[threadIdx.x+j];\n\t\t\t}\n\t\t}\n\t\tif (threadIdx.x==0)\n\t\t\tout[i]=allsum[0];\n\t\t__syncthreads();\n\t}\n}\n\n//void matchcostLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,const scalar_t * match,scalar_t * 
out){\n//\tmatchcost<<<32,512>>>(b,n,m,xyz1,xyz2,match,out);\n//}\n\n/* MatchCost forward interface\nInput:\n  xyz1: (B, N1, 3)  # dataset_points\n  xyz2: (B, N2, 3)  # query_points\n  match: (B, N2, N1)\nOutput:\n  cost: (B)\n*/\nat::Tensor MatchCostForward(\n    const at::Tensor xyz1,\n    const at::Tensor xyz2,\n    const at::Tensor match){\n  const auto b = xyz1.size(0);\n  const auto n = xyz1.size(1);\n  const auto m = xyz2.size(1);\n\n  CHECK_EQ(xyz2.size(0), b);\n  CHECK_EQ(xyz1.size(2), 3);\n  CHECK_EQ(xyz2.size(2), 3);\n  CHECK_INPUT(xyz1);\n  CHECK_INPUT(xyz2);\n\n  auto cost = at::zeros({b}, xyz1.type());\n\n  AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), \"MatchCostForward\", ([&] {\n        matchcost<scalar_t><<<32,512>>>(b, n, m, xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), cost.data<scalar_t>());\n  }));\n  THCudaCheck(cudaGetLastError());\n\n  return cost;\n}\n\n\n/********************************\n* matchcostgrad2 kernel\n*********************************/\n\ntemplate<typename scalar_t>\n__global__ void matchcostgrad2(int b,int n,int m,const scalar_t * __restrict__ grad_cost,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ grad2){\n\t__shared__ scalar_t sum_grad[256*3];\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tint kbeg=m*blockIdx.y/gridDim.y;\n\t\tint kend=m*(blockIdx.y+1)/gridDim.y;\n\t\tfor (int k=kbeg;k<kend;k++){\n\t\t\tscalar_t x2=xyz2[(i*m+k)*3+0];\n\t\t\tscalar_t y2=xyz2[(i*m+k)*3+1];\n\t\t\tscalar_t z2=xyz2[(i*m+k)*3+2];\n\t\t\tscalar_t subsumx=0,subsumy=0,subsumz=0;\n\t\t\tfor (int j=threadIdx.x;j<n;j+=blockDim.x){\n\t\t\t\tscalar_t x1=x2-xyz1[(i*n+j)*3+0];\n\t\t\t\tscalar_t y1=y2-xyz1[(i*n+j)*3+1];\n\t\t\t\tscalar_t z1=z2-xyz1[(i*n+j)*3+2];\n\t\t\t\tscalar_t 
d=match[i*n*m+k*n+j]*2;\n\t\t\t\tsubsumx+=x1*d;\n\t\t\t\tsubsumy+=y1*d;\n\t\t\t\tsubsumz+=z1*d;\n\t\t\t}\n\t\t\tsum_grad[threadIdx.x*3+0]=subsumx;\n\t\t\tsum_grad[threadIdx.x*3+1]=subsumy;\n\t\t\tsum_grad[threadIdx.x*3+2]=subsumz;\n\t\t\tfor (int j=1;j<blockDim.x;j<<=1){\n\t\t\t\t__syncthreads();\n\t\t\t\tint j1=threadIdx.x;\n\t\t\t\tint j2=threadIdx.x+j;\n\t\t\t\tif ((j1&j)==0 && j2<blockDim.x){\n\t\t\t\t\tsum_grad[j1*3+0]+=sum_grad[j2*3+0];\n\t\t\t\t\tsum_grad[j1*3+1]+=sum_grad[j2*3+1];\n\t\t\t\t\tsum_grad[j1*3+2]+=sum_grad[j2*3+2];\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (threadIdx.x==0){\n\t\t\t\tgrad2[(i*m+k)*3+0]=sum_grad[0]*grad_cost[i];\n\t\t\t\tgrad2[(i*m+k)*3+1]=sum_grad[1]*grad_cost[i];\n\t\t\t\tgrad2[(i*m+k)*3+2]=sum_grad[2]*grad_cost[i];\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t}\n\t}\n}\n\n/********************************\n* matchcostgrad1 kernel\n*********************************/\n\ntemplate<typename scalar_t>\n__global__ void matchcostgrad1(int b,int n,int m,const scalar_t * __restrict__ grad_cost,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ grad1){\n\tfor (int i=blockIdx.x;i<b;i+=gridDim.x){\n\t\tfor (int l=threadIdx.x;l<n;l+=blockDim.x){\n\t\t\tscalar_t x1=xyz1[i*n*3+l*3+0];\n\t\t\tscalar_t y1=xyz1[i*n*3+l*3+1];\n\t\t\tscalar_t z1=xyz1[i*n*3+l*3+2];\n\t\t\tscalar_t dx=0,dy=0,dz=0;\n\t\t\tfor (int k=0;k<m;k++){\n\t\t\t\tscalar_t x2=xyz2[i*m*3+k*3+0];\n\t\t\t\tscalar_t y2=xyz2[i*m*3+k*3+1];\n\t\t\t\tscalar_t z2=xyz2[i*m*3+k*3+2];\n\t\t\t\tscalar_t d=match[i*n*m+k*n+l]*2;\n\t\t\t\tdx+=(x1-x2)*d;\n\t\t\t\tdy+=(y1-y2)*d;\n\t\t\t\tdz+=(z1-z2)*d;\n\t\t\t}\n\t\t\tgrad1[i*n*3+l*3+0]=dx*grad_cost[i];\n\t\t\tgrad1[i*n*3+l*3+1]=dy*grad_cost[i];\n\t\t\tgrad1[i*n*3+l*3+2]=dz*grad_cost[i];\n\t\t}\n\t}\n}\n\n//void matchcostgradLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,const scalar_t * match,scalar_t * grad1,scalar_t * 
grad2){\n//\tmatchcostgrad1<<<32,512>>>(b,n,m,xyz1,xyz2,match,grad1);\n//\tmatchcostgrad2<<<dim3(32,32),256>>>(b,n,m,xyz1,xyz2,match,grad2);\n//}\n\n\n/* MatchCost backward interface\nInput:\n  grad_cost: (B)    # gradients on cost\n  xyz1: (B, N1, 3)  # dataset_points\n  xyz2: (B, N2, 3)  # query_points\n  match: (B, N2, N1)\nOutput:\n  grad1: (B, N1, 3)\n  grad2: (B, N2, 3)\n*/\nstd::vector<at::Tensor> MatchCostBackward(\n    const at::Tensor grad_cost,\n    const at::Tensor xyz1,\n    const at::Tensor xyz2,\n    const at::Tensor match){\n  const auto b = xyz1.size(0);\n  const auto n = xyz1.size(1);\n  const auto m = xyz2.size(1);\n\n  CHECK_EQ(xyz2.size(0), b);\n  CHECK_EQ(xyz1.size(2), 3);\n  CHECK_EQ(xyz2.size(2), 3);\n  CHECK_INPUT(xyz1);\n  CHECK_INPUT(xyz2);\n\n  auto grad1 = at::zeros({b, n, 3}, xyz1.type());\n  auto grad2 = at::zeros({b, m, 3}, xyz1.type());\n\n  AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), \"MatchCostBackward\", ([&] {\n        matchcostgrad1<scalar_t><<<32,512>>>(b, n, m, grad_cost.data<scalar_t>(), xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), grad1.data<scalar_t>());\n        matchcostgrad2<scalar_t><<<dim3(32,32),256>>>(b, n, m, grad_cost.data<scalar_t>(), xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), grad2.data<scalar_t>());\n  }));\n  THCudaCheck(cudaGetLastError());\n\n  return std::vector<at::Tensor>({grad1, grad2});\n}\n\n#endif\n"
  },
  {
    "path": "extensions/earth_movers_distance/setup.py",
    "content": "from setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\n\nsetup(\n    name='emd_cuda',\n    ext_modules=[\n        CUDAExtension(\n            name='emd_cuda',\n            sources=[\n                'emd.cpp',\n                'emd_kernel.cu',\n            ],\n            # extra_compile_args={'cxx': ['-g'], 'nvcc': ['-O2']}\n        ),\n    ],\n    cmdclass={\n        'build_ext': BuildExtension\n    })\n"
  },
  {
    "path": "metrics/loss.py",
    "content": "import torch\n\nfrom extensions.chamfer_distance.chamfer_distance import ChamferDistance\nfrom extensions.earth_movers_distance.emd import EarthMoverDistance\n\n\nCD = ChamferDistance()\nEMD = EarthMoverDistance()\n\n\ndef cd_loss_L1(pcs1, pcs2):\n    \"\"\"\n    L1 Chamfer Distance.\n\n    Args:\n        pcs1 (torch.tensor): (B, N, 3)\n        pcs2 (torch.tensor): (B, M, 3)\n    \"\"\"\n    dist1, dist2 = CD(pcs1, pcs2)\n    dist1 = torch.sqrt(dist1)\n    dist2 = torch.sqrt(dist2)\n    return (torch.mean(dist1) + torch.mean(dist2)) / 2.0\n\n\ndef cd_loss_L2(pcs1, pcs2):\n    \"\"\"\n    L2 Chamfer Distance.\n\n    Args:\n        pcs1 (torch.tensor): (B, N, 3)\n        pcs2 (torch.tensor): (B, M, 3)\n    \"\"\"\n    dist1, dist2 = CD(pcs1, pcs2)\n    return torch.mean(dist1) + torch.mean(dist2)\n\n\ndef emd_loss(pcs1, pcs2):\n    \"\"\"\n    EMD Loss.\n\n    Args:\n        xyz1 (torch.Tensor): (b, N, 3)\n        xyz2 (torch.Tensor): (b, N, 3)\n    \"\"\"\n    dists = EMD(pcs1, pcs2)\n    return torch.mean(dists)\n"
  },
  {
    "path": "metrics/metric.py",
    "content": "import torch\nimport open3d as o3d\n\nfrom extensions.chamfer_distance.chamfer_distance import ChamferDistance\nfrom extensions.earth_movers_distance.emd import EarthMoverDistance\n\n\nCD = ChamferDistance()\nEMD = EarthMoverDistance()\n\n\ndef l2_cd(pcs1, pcs2):\n    dist1, dist2 = CD(pcs1, pcs2)\n    dist1 = torch.mean(dist1, dim=1)\n    dist2 = torch.mean(dist2, dim=1)\n    return torch.sum(dist1 + dist2)\n\n\ndef l1_cd(pcs1, pcs2):\n    dist1, dist2 = CD(pcs1, pcs2)\n    dist1 = torch.mean(torch.sqrt(dist1), 1)\n    dist2 = torch.mean(torch.sqrt(dist2), 1)\n    return torch.sum(dist1 + dist2) / 2\n\n\ndef emd(pcs1, pcs2):\n    dists = EMD(pcs1, pcs2)\n    return torch.sum(dists)\n\n\ndef f_score(pred, gt, th=0.01):\n    \"\"\"\n    References: https://github.com/lmb-freiburg/what3d/blob/master/util.py\n\n    Args:\n        pred (np.ndarray): (N1, 3)\n        gt   (np.ndarray): (N2, 3)\n        th   (float): a distance threshhold\n    \"\"\"\n    pred = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pred))\n    gt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(gt))\n\n    dist1 = pred.compute_point_cloud_distance(gt)\n    dist2 = gt.compute_point_cloud_distance(pred)\n\n    recall = float(sum(d < th for d in dist2)) / float(len(dist2))\n    precision = float(sum(d < th for d in dist1)) / float(len(dist1))\n    return 2 * recall * precision / (recall + precision) if recall + precision else 0\n"
  },
  {
    "path": "models/__init__.py",
    "content": "from models.pcn import PCN\n"
  },
  {
    "path": "models/pcn.py",
    "content": "import torch\nimport torch.nn as nn\n\n\nclass PCN(nn.Module):\n    \"\"\"\n    \"PCN: Point Cloud Completion Network\"\n    (https://arxiv.org/pdf/1808.00671.pdf)\n\n    Attributes:\n        num_dense:  16384\n        latent_dim: 1024\n        grid_size:  4\n        num_coarse: 1024\n    \"\"\"\n\n    def __init__(self, num_dense=16384, latent_dim=1024, grid_size=4):\n        super().__init__()\n\n        self.num_dense = num_dense\n        self.latent_dim = latent_dim\n        self.grid_size = grid_size\n\n        assert self.num_dense % self.grid_size ** 2 == 0\n\n        self.num_coarse = self.num_dense // (self.grid_size ** 2)\n\n        self.first_conv = nn.Sequential(\n            nn.Conv1d(3, 128, 1),\n            nn.BatchNorm1d(128),\n            nn.ReLU(inplace=True),\n            nn.Conv1d(128, 256, 1)\n        )\n\n        self.second_conv = nn.Sequential(\n            nn.Conv1d(512, 512, 1),\n            nn.BatchNorm1d(512),\n            nn.ReLU(inplace=True),\n            nn.Conv1d(512, self.latent_dim, 1)\n        )\n\n        self.mlp = nn.Sequential(\n            nn.Linear(self.latent_dim, 1024),\n            nn.ReLU(inplace=True),\n            nn.Linear(1024, 1024),\n            nn.ReLU(inplace=True),\n            nn.Linear(1024, 3 * self.num_coarse)\n        )\n\n        self.final_conv = nn.Sequential(\n            nn.Conv1d(1024 + 3 + 2, 512, 1),\n            nn.BatchNorm1d(512),\n            nn.ReLU(inplace=True),\n            nn.Conv1d(512, 512, 1),\n            nn.BatchNorm1d(512),\n            nn.ReLU(inplace=True),\n            nn.Conv1d(512, 3, 1)\n        )\n        a = torch.linspace(-0.05, 0.05, steps=self.grid_size, dtype=torch.float).view(1, self.grid_size).expand(self.grid_size, self.grid_size).reshape(1, -1)\n        b = torch.linspace(-0.05, 0.05, steps=self.grid_size, dtype=torch.float).view(self.grid_size, 1).expand(self.grid_size, self.grid_size).reshape(1, -1)\n        \n        self.folding_seed = 
torch.cat([a, b], dim=0).view(1, 2, self.grid_size ** 2).cuda()  # (1, 2, S)\n\n    def forward(self, xyz):\n        B, N, _ = xyz.shape\n        \n        # encoder\n        feature = self.first_conv(xyz.transpose(2, 1))                                       # (B,  256, N)\n        feature_global = torch.max(feature, dim=2, keepdim=True)[0]                          # (B,  256, 1)\n        feature = torch.cat([feature_global.expand(-1, -1, N), feature], dim=1)              # (B,  512, N)\n        feature = self.second_conv(feature)                                                  # (B, 1024, N)\n        feature_global = torch.max(feature,dim=2,keepdim=False)[0]                           # (B, 1024)\n        \n        # decoder\n        coarse = self.mlp(feature_global).reshape(-1, self.num_coarse, 3)                    # (B, num_coarse, 3), coarse point cloud\n        point_feat = coarse.unsqueeze(2).expand(-1, -1, self.grid_size ** 2, -1)             # (B, num_coarse, S, 3)\n        point_feat = point_feat.reshape(-1, self.num_dense, 3).transpose(2, 1)               # (B, 3, num_fine)\n\n        seed = self.folding_seed.unsqueeze(2).expand(B, -1, self.num_coarse, -1)             # (B, 2, num_coarse, S)\n        seed = seed.reshape(B, -1, self.num_dense)                                           # (B, 2, num_fine)\n\n        feature_global = feature_global.unsqueeze(2).expand(-1, -1, self.num_dense)          # (B, 1024, num_fine)\n        feat = torch.cat([feature_global, seed, point_feat], dim=1)                          # (B, 1024+2+3, num_fine)\n    \n        fine = self.final_conv(feat) + point_feat                                            # (B, 3, num_fine), fine point cloud\n\n        return coarse.contiguous(), fine.transpose(1, 2).contiguous()\n"
  },
  {
    "path": "render/README.md",
    "content": "# render\n\n## Description\n\n`process_exr.py` and `render_depth.py` are used for generating the partial point cloud from CAD model.\n\nIn order to run the `render_depth.py`, you need to install [Blender](https://www.blender.org/) firstly. After complete installing, you can use this command to render the depth images:\n\n```bash\nblender -b -P render_depth.py [ShapeNet directory] [model list] [output directory] [num scans per model]\n```\n\nThe images will be stored in OpenEXR format. The version of blender I used is `2.9.1`.\n\nIn order to run the `process_exr.py`, you need to install `imath`、`OpenEXR` and `open3d-python`. These are third python modules, you can install with `pip`. The command to generate partial point clouds from `.exr` is:\n\n```bash\npython3 process_exr.py [model list] [intrinsics file] [output directory] [num scans per model]\n```\n\nThe version of Python should not be too high. I use the version of `3.7.9`.\n\n## Example\n\nComplete point cloud:\n\n<img src=\"../images/ground_truth.png\" width=\"300px\"/>\n\nPartial point clouds:\n\n<img src=\"../images/partial1.png\" width=\"300px\"/>\n<img src=\"../images/partial2.png\" width=\"300px\"/>\n<img src=\"../images/partial3.png\" width=\"300px\"/>\n<img src=\"../images/partial4.png\" width=\"300px\"/>\n<img src=\"../images/partial5.png\" width=\"300px\"/>\n<img src=\"../images/partial6.png\" width=\"300px\"/>\n<img src=\"../images/partial7.png\" width=\"300px\"/>\n<img src=\"../images/partial8.png\" width=\"300px\"/>\n"
  },
  {
    "path": "render/partial.sh",
    "content": "#!/bin/bash\necho \"Begin to generate exr files\"\n\nfor ((i=1; i<=21; i++)); do\n    blender -b -P render_depth.py \"/media/rico/BACKUP/Dataset/ShapeNetForPCN\" \"../dataset/car_split/split${i}.list\" \"/home/rico/Workspace/Dataset/partials/partial${i}\" 8\ndone\n\necho \"Done\"\n"
  },
  {
    "path": "render/process_exr.py",
    "content": "'''\nMIT License\n\nCopyright (c) 2018 Wentao Yuan\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'''\n\nimport Imath\nimport OpenEXR\nimport argparse\nimport array\nimport numpy as np\nimport os\nfrom open3d import *\n\n\ndef read_exr(exr_path, height, width):\n    file = OpenEXR.InputFile(exr_path)\n    depth_arr = array.array('f', file.channel('R', Imath.PixelType(Imath.PixelType.FLOAT)))\n    depth = np.array(depth_arr).reshape((height, width))\n    depth[depth < 0] = 0\n    depth[np.isinf(depth)] = 0\n    return depth\n\n\ndef depth2pcd(depth, intrinsics, pose):\n    inv_K = np.linalg.inv(intrinsics)\n    inv_K[2, 2] = -1\n    depth = np.flipud(depth)\n    y, x = np.where(depth > 0)\n    # image coordinates -> camera coordinates\n    points = np.dot(inv_K, np.stack([x, y, np.ones_like(x)], 0) * depth[y, x])\n    # camera coordinates -> world coordinates\n    points = np.dot(pose, np.concatenate([points, np.ones((1, points.shape[1]))], 0)).T[:, :3]\n    return points\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('list_file')\n    parser.add_argument('intrinsics_file')\n    parser.add_argument('output_dir')\n    parser.add_argument('num_scans', type=int)\n    args = parser.parse_args()\n\n    with open(args.list_file) as file:\n        model_list = file.read().splitlines()\n    intrinsics = np.loadtxt(args.intrinsics_file)\n    width = int(intrinsics[0, 2] * 2)\n    height = int(intrinsics[1, 2] * 2)\n\n    for model_id in model_list:\n        depth_dir = os.path.join(args.output_dir, 'depth', model_id)\n        pcd_dir = os.path.join(args.output_dir, 'pcd', model_id)\n        os.makedirs(depth_dir, exist_ok=True)\n        os.makedirs(pcd_dir, exist_ok=True)\n        for i in range(args.num_scans):\n            exr_path = os.path.join(args.output_dir, 'exr', model_id, '%d.exr' % i)\n            pose_path = os.path.join(args.output_dir, 'pose', model_id, '%d.txt' % i)\n\n            depth = read_exr(exr_path, height, width)\n            depth_img = Image(np.uint16(depth * 1000))\n            write_image(os.path.join(depth_dir, '%d.png' % i), depth_img)\n\n            pose = np.loadtxt(pose_path)\n            points = depth2pcd(depth, intrinsics, pose)\n            pcd = PointCloud()\n            pcd.points = Vector3dVector(points)\n            write_point_cloud(os.path.join(pcd_dir, '%d.pcd' % i), pcd)\n"
  },
  {
    "path": "render/render_depth.py",
    "content": "'''\nMIT License\n\nCopyright (c) 2018 Wentao Yuan\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'''\n\nimport bpy\nimport mathutils\nimport numpy as np\nimport os\nimport sys\nimport time\n\n\ndef random_pose():\n    angle_x = np.random.uniform() * 2 * np.pi\n    angle_y = np.random.uniform() * 2 * np.pi\n    angle_z = np.random.uniform() * 2 * np.pi\n    Rx = np.array([[1, 0, 0],\n                   [0, np.cos(angle_x), -np.sin(angle_x)],\n                   [0, np.sin(angle_x), np.cos(angle_x)]])\n    Ry = np.array([[np.cos(angle_y), 0, np.sin(angle_y)],\n                   [0, 1, 0],\n                   [-np.sin(angle_y), 0, np.cos(angle_y)]])\n    Rz = np.array([[np.cos(angle_z), -np.sin(angle_z), 0],\n                   [np.sin(angle_z), np.cos(angle_z), 0],\n                   [0, 0, 1]])\n    R = np.dot(Rz, np.dot(Ry, Rx))\n    # Set camera pointing to the origin and 1 unit away from the origin\n    t = np.expand_dims(R[:, 2], 1)\n    pose = 
np.concatenate([np.concatenate([R, t], 1), [[0, 0, 0, 1]]], 0)\n    return pose\n\n\ndef setup_blender(width, height, focal_length):\n    # camera\n    camera = bpy.data.objects['Camera']\n    camera.data.angle = np.arctan(width / 2 / focal_length) * 2\n\n    # render layer\n    scene = bpy.context.scene\n    scene.render.filepath = 'buffer'\n    scene.render.image_settings.color_depth = '16'\n    scene.render.resolution_percentage = 100\n    scene.render.resolution_x = width\n    scene.render.resolution_y = height\n\n    # compositor nodes\n    scene.use_nodes = True\n    tree = scene.node_tree\n    rl = tree.nodes.new('CompositorNodeRLayers')\n    output = tree.nodes.new('CompositorNodeOutputFile')\n    output.base_path = ''\n    output.format.file_format = 'OPEN_EXR'\n    tree.links.new(rl.outputs['Depth'], output.inputs[0])\n\n    # remove default cube\n    bpy.data.objects['Cube'].select = True\n    bpy.ops.object.delete()\n\n    return scene, camera, output\n\n\nif __name__ == '__main__':\n    model_dir = sys.argv[-4]\n    list_path = sys.argv[-3]\n    output_dir = sys.argv[-2]\n    num_scans = int(sys.argv[-1])\n\n    width = 160\n    height = 120\n    focal = 100\n    scene, camera, output = setup_blender(width, height, focal)\n    intrinsics = np.array([[focal, 0, width / 2], [0, focal, height / 2], [0, 0, 1]])\n\n    with open(os.path.join(list_path)) as file:\n        model_list = [line.strip() for line in file]\n    open('blender.log', 'w+').close()\n    os.system('rm -rf %s' % output_dir)\n    os.makedirs(output_dir)\n    np.savetxt(os.path.join(output_dir, 'intrinsics.txt'), intrinsics, '%f')\n\n    for model_id in model_list:\n        start = time.time()\n        exr_dir = os.path.join(output_dir, 'exr', model_id)\n        pose_dir = os.path.join(output_dir, 'pose', model_id)\n        os.makedirs(exr_dir)\n        os.makedirs(pose_dir)\n\n        # Redirect output to log file\n        old_os_out = os.dup(1)\n        os.close(1)\n        
os.open('blender.log', os.O_WRONLY)\n\n        # Import mesh model\n        model_path = os.path.join(model_dir, model_id, 'model.obj')\n        bpy.ops.import_scene.obj(filepath=model_path)\n\n        # Rotate model by 90 degrees around x-axis (z-up => y-up) to match ShapeNet's coordinates\n        bpy.ops.transform.rotate(value=-np.pi / 2, axis=(1, 0, 0))\n\n        # Render\n        for i in range(num_scans):\n            scene.frame_set(i)\n            pose = random_pose()\n            camera.matrix_world = mathutils.Matrix(pose)\n            output.file_slots[0].path = os.path.join(exr_dir, '#.exr')\n            bpy.ops.render.render(write_still=True)\n            np.savetxt(os.path.join(pose_dir, '%d.txt' % i), pose, '%f')\n\n        # Clean up\n        bpy.ops.object.delete()\n        for m in bpy.data.meshes:\n            bpy.data.meshes.remove(m)\n        for m in bpy.data.materials:\n            m.user_clear()\n            bpy.data.materials.remove(m)\n\n        # Show time\n        os.close(1)\n        os.dup(old_os_out)\n        os.close(old_os_out)\n        print('%s done, time=%.4f sec' % (model_id, time.time() - start))\n"
  },
  {
    "path": "requirements.txt",
    "content": "open3d\nmatplotlib\ntensorboardX\n"
  },
  {
    "path": "sample/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8 FATAL_ERROR)\n\nproject(sample)\n\nfind_package(PCL 1.2 REQUIRED)\n\ninclude_directories(${PCL_INCLUDE_DIRS})\nlink_directories(${PCL_LIBRARY_DIRS})\nadd_definitions(${PCL_DEFINITIONS})\n\nadd_executable (mesh_sampling mesh_sampling.cpp)\ntarget_link_libraries (mesh_sampling ${PCL_LIBRARIES})\n"
  },
  {
    "path": "sample/README.md",
    "content": "# Sample\n\n`mesh_sampling.cpp` is used to sample point clouds uniformly from CAD models. In order to compile it, you have to install:\n\n* CMake\n* PCL\n* VTK\n\n## CMake\n\nUse this command to install CMake:\n\n```bash\nsudo apt-get update\nsudo apt-get install cmake\n```\n\n## PCL\n\nI used the latest version. You can use these commands to install it:\n\n```bash\nsudo apt-get update\nsudo apt-get install git build-essential linux-libc-dev\nsudo apt-get install cmake cmake-gui\nsudo apt-get install libusb-1.0-0-dev libusb-dev libudev-dev\nsudo apt-get install mpi-default-dev openmpi-bin openmpi-common\nsudo apt-get install libflann1.9 libflann-dev\nsudo apt-get install libeigen3-dev\nsudo apt-get install libboost-all-dev\nsudo apt-get install libqhull* libgtest-dev\nsudo apt-get install freeglut3-dev pkg-config\nsudo apt-get install libxmu-dev libxi-dev\nsudo apt-get install mono-complete\nsudo apt-get install openjdk-8-jdk openjdk-8-jre\n\ngit clone https://github.com/PointCloudLibrary/pcl.git\ncd pcl\nmkdir build && cd build\ncmake ..\nmake -j4\nsudo make install\n```\n\n## VTK\n\nThe version of VTK is `8.2.0`. You can download it from the [website](https://vtk.org/download/) and use the commands below to install it:\n\n```bash\ntar -xzvf VTK-8.2.0.tar.gz\ncd VTK-8.2.0/\n```\n\nBefore compiling, you need to edit the file `IO/Geometry/vtkOBJReader.cxx`. In line 859, add the following code:\n\n```C++\n// Here we turn off texturing and/or normals\nif (n_tcoord_pts == 0)\n{\n    hasTCoords = false;\n}\nif (n_normal_pts == 0)\n{\n    hasNormals = false;\n}\n```\n\nContinue to build:\n\n```bash\nmkdir build && cd build\ncmake ..\nmake -j4\nsudo make install\n```\n\n## Compile\n\nIn order to use the script, you need to compile it:\n\n```bash\ncd sample\nmkdir build && cd build\ncmake ..\nmake\n```\n\nThen you will get an executable file `mesh_sampling` in the `build` directory. You can use `mesh_sampling -h` for help. I've provided a prebuilt `mesh_sampling`, but some of its command-line options are problematic: the `-n_samples` option does not seem to work.\n\n## Example\n\nCAD model and sampled point cloud:\n\n<img src=\"../images/cad.png\" width=\"300px\"/>\n\n<img src=\"../images/ground_truth.png\" width=\"300px\"/>\n"
  },
  {
    "path": "sample/mesh_sampling.cpp",
    "content": "/*\n * Software License Agreement (BSD License)\n *\n *  Point Cloud Library (PCL) - www.pointclouds.org\n *  Copyright (c) 2010-2011, Willow Garage, Inc.\n *\n *  All rights reserved.\n *\n *  Redistribution and use in source and binary forms, with or without\n *  modification, are permitted provided that the following conditions\n *  are met:\n *\n *   * Redistributions of source code must retain the above copyright\n *     notice, this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above\n *     copyright notice, this list of conditions and the following\n *     disclaimer in the documentation and/or other materials provided\n *     with the distribution.\n *   * Neither the name of the copyright holder(s) nor the names of its\n *     contributors may be used to endorse or promote products derived\n *     from this software without specific prior written permission.\n *\n *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *  \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n *  FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n *  COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n *  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n *  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n *  LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n *  CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n *  LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n *  ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n *  POSSIBILITY OF SUCH DAMAGE.\n *\n * Modified by Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018\n */\n\n#include <pcl/visualization/pcl_visualizer.h>\n#include <pcl/io/pcd_io.h>\n#include <pcl/io/vtk_lib_io.h>\n#include <pcl/common/transforms.h>\n#include <vtkVersion.h>\n#include <vtkPLYReader.h>\n#include <vtkOBJReader.h>\n#include <vtkTriangle.h>\n#include <vtkTriangleFilter.h>\n#include <vtkPolyDataMapper.h>\n#include <pcl/filters/voxel_grid.h>\n#include <pcl/console/print.h>\n#include <pcl/console/parse.h>\n\ninline double\nuniform_deviate(int seed)\n{\n  double ran = seed * (1.0 / (RAND_MAX + 1.0));\n  return ran;\n}\n\ninline void\nrandomPointTriangle(float a1, float a2, float a3, float b1, float b2, float b3, float c1, float c2, float c3,\n                    Eigen::Vector4f &p)\n{\n  float r1 = static_cast<float>(uniform_deviate(rand()));\n  float r2 = static_cast<float>(uniform_deviate(rand()));\n  float r1sqr = std::sqrt(r1);\n  float OneMinR1Sqr = (1 - r1sqr);\n  float OneMinR2 = (1 - r2);\n  a1 *= OneMinR1Sqr;\n  a2 *= OneMinR1Sqr;\n  a3 *= OneMinR1Sqr;\n  b1 *= OneMinR2;\n  b2 *= OneMinR2;\n  b3 *= OneMinR2;\n  c1 = r1sqr * (r2 * c1 + b1) + a1;\n  c2 = r1sqr * (r2 * c2 + b2) + a2;\n  c3 = r1sqr * (r2 * c3 + b3) + a3;\n  p[0] = c1;\n  p[1] = c2;\n  p[2] = c3;\n  p[3] = 0;\n}\n\ninline void\nrandPSurface(vtkPolyData *polydata, std::vector<double> *cumulativeAreas, double totalArea, Eigen::Vector4f &p, bool calcNormal, Eigen::Vector3f &n)\n{\n  
float r = static_cast<float>(uniform_deviate(rand()) * totalArea);\n\n  std::vector<double>::iterator low = std::lower_bound(cumulativeAreas->begin(), cumulativeAreas->end(), r);\n  vtkIdType el = vtkIdType(low - cumulativeAreas->begin());\n\n  double A[3], B[3], C[3];\n  vtkIdType npts = 0;\n  vtkIdType *ptIds = NULL;\n  polydata->GetCellPoints(el, npts, ptIds);\n  polydata->GetPoint(ptIds[0], A);\n  polydata->GetPoint(ptIds[1], B);\n  polydata->GetPoint(ptIds[2], C);\n  if (calcNormal)\n  {\n    // OBJ: Vertices are stored in a counter-clockwise order by default\n    Eigen::Vector3f v1 = Eigen::Vector3f(A[0], A[1], A[2]) - Eigen::Vector3f(C[0], C[1], C[2]);\n    Eigen::Vector3f v2 = Eigen::Vector3f(B[0], B[1], B[2]) - Eigen::Vector3f(C[0], C[1], C[2]);\n    n = v1.cross(v2);\n    n.normalize();\n  }\n  randomPointTriangle(float(A[0]), float(A[1]), float(A[2]),\n                      float(B[0]), float(B[1]), float(B[2]),\n                      float(C[0]), float(C[1]), float(C[2]), p);\n}\n\nvoid uniform_sampling(vtkSmartPointer<vtkPolyData> polydata, size_t n_samples, bool calc_normal, pcl::PointCloud<pcl::PointNormal> &cloud_out)\n{\n  polydata->BuildCells();\n  vtkSmartPointer<vtkCellArray> cells = polydata->GetPolys();\n\n  double p1[3], p2[3], p3[3], totalArea = 0;\n  std::vector<double> cumulativeAreas(cells->GetNumberOfCells(), 0);\n  size_t i = 0;\n  vtkIdType npts = 0, *ptIds = NULL;\n  for (cells->InitTraversal(); cells->GetNextCell(npts, ptIds); i++)\n  {\n    polydata->GetPoint(ptIds[0], p1);\n    polydata->GetPoint(ptIds[1], p2);\n    polydata->GetPoint(ptIds[2], p3);\n    totalArea += vtkTriangle::TriangleArea(p1, p2, p3);\n    cumulativeAreas[i] = totalArea;\n  }\n\n  cloud_out.points.resize(n_samples);\n  cloud_out.width = static_cast<uint32_t>(n_samples);\n  cloud_out.height = 1;\n\n  for (i = 0; i < n_samples; i++)\n  {\n    Eigen::Vector4f p;\n    Eigen::Vector3f n;\n    randPSurface(polydata, &cumulativeAreas, totalArea, p, calc_normal, n);\n  
  cloud_out.points[i].x = p[0];\n    cloud_out.points[i].y = p[1];\n    cloud_out.points[i].z = p[2];\n    if (calc_normal)\n    {\n      cloud_out.points[i].normal_x = n[0];\n      cloud_out.points[i].normal_y = n[1];\n      cloud_out.points[i].normal_z = n[2];\n    }\n  }\n}\n\nusing namespace pcl;\nusing namespace pcl::io;\nusing namespace pcl::console;\n\nconst int default_number_samples = 100000;\nconst float default_leaf_size = 0.01f;\n\nvoid printHelp(int, char **argv)\n{\n  print_error(\"Syntax is: %s input.{ply,obj} output.pcd <options>\\n\", argv[0]);\n  print_info(\"  where options are:\\n\");\n  print_info(\"                -n_samples X   = number of samples (default: \");\n  print_value(\"%d\", default_number_samples);\n  print_info(\")\\n\");\n  print_info(\n      \"                -leaf_size X   = the XYZ leaf size for the VoxelGrid -- for data reduction (default: \");\n  print_value(\"%f\", default_leaf_size);\n  print_info(\" m)\\n\");\n  print_info(\"                -write_normals = flag to write normals to the output pcd\\n\");\n  print_info(\n      \"                -no_vis_result = flag to stop visualizing the generated pcd\\n\");\n  print_info(\n      \"                -no_vox_filter = flag to stop downsampling the generated pcd\\n\");\n}\n\n/* ---[ */\nint main(int argc, char **argv)\n{\n  if (argc < 3)\n  {\n    printHelp(argc, argv);\n    return (-1);\n  }\n\n  // Parse command line arguments\n  int SAMPLE_POINTS_ = default_number_samples;\n  parse_argument(argc, argv, \"-n_samples\", SAMPLE_POINTS_);\n  float leaf_size = default_leaf_size;\n  parse_argument(argc, argv, \"-leaf_size\", leaf_size);\n  bool vis_result = !find_switch(argc, argv, \"-no_vis_result\");\n  bool vox_filter = !find_switch(argc, argv, \"-no_vox_filter\");\n  const bool write_normals = find_switch(argc, argv, \"-write_normals\");\n\n  std::vector<int> pcd_file_indices = parse_file_extension_argument(argc, argv, \".pcd\");\n  std::vector<int> ply_file_indices = 
parse_file_extension_argument(argc, argv, \".ply\");\n  std::vector<int> obj_file_indices = parse_file_extension_argument(argc, argv, \".obj\");\n  if (pcd_file_indices.size() != 1)\n  {\n    print_error(\"Need a single output PCD file to continue.\\n\");\n    return (-1);\n  }\n  if (ply_file_indices.size() != 1 && obj_file_indices.size() != 1)\n  {\n    print_error(\"Need a single input PLY/OBJ file to continue.\\n\");\n    return (-1);\n  }\n\n  vtkSmartPointer<vtkPolyData> polydata1 = vtkSmartPointer<vtkPolyData>::New();\n  if (ply_file_indices.size() == 1)\n  {\n    pcl::PolygonMesh mesh;\n    pcl::io::loadPolygonFilePLY(argv[ply_file_indices[0]], mesh);\n    pcl::io::mesh2vtk(mesh, polydata1);\n  }\n  else if (obj_file_indices.size() == 1)\n  {\n    print_info(\"Convert %s to a point cloud using uniform sampling.\\n\", argv[obj_file_indices[0]]);\n    vtkSmartPointer<vtkOBJReader> readerQuery = vtkSmartPointer<vtkOBJReader>::New();\n    readerQuery->SetFileName(argv[obj_file_indices[0]]);\n    readerQuery->Update();\n    polydata1 = readerQuery->GetOutput();\n  }\n\n  //make sure that the polygons are triangles!\n  vtkSmartPointer<vtkTriangleFilter> triangleFilter = vtkSmartPointer<vtkTriangleFilter>::New();\n#if VTK_MAJOR_VERSION < 6\n  triangleFilter->SetInput(polydata1);\n#else\n  triangleFilter->SetInputData(polydata1);\n#endif\n  triangleFilter->Update();\n\n  vtkSmartPointer<vtkPolyDataMapper> triangleMapper = vtkSmartPointer<vtkPolyDataMapper>::New();\n  triangleMapper->SetInputConnection(triangleFilter->GetOutputPort());\n  triangleMapper->Update();\n  polydata1 = triangleMapper->GetInput();\n\n  bool INTER_VIS = false;\n\n  if (INTER_VIS)\n  {\n    visualization::PCLVisualizer vis;\n    vis.addModelFromPolyData(polydata1, \"mesh1\", 0);\n    vis.setRepresentationToSurfaceForAllActors();\n    vis.spin();\n  }\n\n  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_1(new pcl::PointCloud<pcl::PointNormal>);\n  uniform_sampling(polydata1, SAMPLE_POINTS_, 
write_normals, *cloud_1);\n\n  if (INTER_VIS)\n  {\n    visualization::PCLVisualizer vis_sampled;\n    vis_sampled.addPointCloud<pcl::PointNormal>(cloud_1);\n    if (write_normals)\n      vis_sampled.addPointCloudNormals<pcl::PointNormal>(cloud_1, 1, 0.02f, \"cloud_normals\");\n    vis_sampled.spin();\n  }\n\n  pcl::PointCloud<pcl::PointNormal>::Ptr cloud(new pcl::PointCloud<pcl::PointNormal>);\n\n  // Voxelgrid\n  if (vox_filter)\n  {\n    VoxelGrid<PointNormal> grid_;\n    grid_.setInputCloud(cloud_1);\n    grid_.setLeafSize(leaf_size, leaf_size, leaf_size);\n    grid_.filter(*cloud);\n  }\n  else\n  {\n    *cloud = *cloud_1;\n  }\n\n  if (vis_result)\n  {\n    visualization::PCLVisualizer vis3(\"VOXELIZED SAMPLES CLOUD\");\n    vis3.addPointCloud<pcl::PointNormal>(cloud);\n    if (write_normals)\n      vis3.addPointCloudNormals<pcl::PointNormal>(cloud, 1, 0.02f, \"cloud_normals\");\n    vis3.spin();\n  }\n\n  if (!write_normals)\n  {\n    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_xyz(new pcl::PointCloud<pcl::PointXYZ>);\n    // Strip uninitialized normals from cloud:\n    pcl::copyPointCloud(*cloud, *cloud_xyz);\n    savePCDFileASCII(argv[pcd_file_indices[0]], *cloud_xyz);\n  }\n  else\n  {\n    savePCDFileASCII(argv[pcd_file_indices[0]], *cloud);\n  }\n}\n"
  },
  {
    "path": "test.py",
    "content": "import os\nimport argparse\n\nimport numpy as np\nimport open3d as o3d\nimport torch\nimport torch.utils.data as Data\n\nfrom models import PCN\nfrom dataset import ShapeNet\nfrom visualization import plot_pcd_one_view\nfrom metrics.metric import l1_cd, l2_cd, emd, f_score\n\n\nCATEGORIES_PCN       = ['airplane', 'cabinet', 'car', 'chair', 'lamp', 'sofa', 'table', 'vessel']\nCATEGORIES_PCN_NOVEL = ['bus', 'bed', 'bookshelf', 'bench', 'guitar', 'motorbike', 'skateboard', 'pistol']\n\n\ndef make_dir(dir_path):\n    if not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n\n\ndef export_ply(filename, points):\n    pc = o3d.geometry.PointCloud()\n    pc.points = o3d.utility.Vector3dVector(points)\n    o3d.io.write_point_cloud(filename, pc, write_ascii=True)\n\n\ndef test_single_category(category, model, params, save=True):\n    if save:\n        cat_dir = os.path.join(params.result_dir, category)\n        image_dir = os.path.join(cat_dir, 'image')\n        output_dir = os.path.join(cat_dir, 'output')\n        make_dir(cat_dir)\n        make_dir(image_dir)\n        make_dir(output_dir)\n\n    test_dataset = ShapeNet('/media/server/new/datasets/PCN', 'test_novel' if params.novel else 'test', category)\n    test_dataloader = Data.DataLoader(test_dataset, batch_size=params.batch_size, shuffle=False)\n\n    index = 1\n    total_l1_cd, total_l2_cd, total_f_score = 0.0, 0.0, 0.0\n    with torch.no_grad():\n        for p, c in test_dataloader:\n            p = p.to(params.device)\n            c = c.to(params.device)\n            _, c_ = model(p)\n            total_l1_cd += l1_cd(c_, c).item()\n            total_l2_cd += l2_cd(c_, c).item()\n            for i in range(len(c)):\n                input_pc = p[i].detach().cpu().numpy()\n                output_pc = c_[i].detach().cpu().numpy()\n                gt_pc = c[i].detach().cpu().numpy()\n                total_f_score += f_score(output_pc, gt_pc)\n                if save:\n                    
plot_pcd_one_view(os.path.join(image_dir, '{:03d}.png'.format(index)), [input_pc, output_pc, gt_pc], ['Input', 'Output', 'GT'], xlim=(-0.35, 0.35), ylim=(-0.35, 0.35), zlim=(-0.35, 0.35))\n                    export_ply(os.path.join(output_dir, '{:03d}.ply'.format(index)), output_pc)\n                index += 1\n    \n    avg_l1_cd = total_l1_cd / len(test_dataset)\n    avg_l2_cd = total_l2_cd / len(test_dataset)\n    avg_f_score = total_f_score / len(test_dataset)\n\n    return avg_l1_cd, avg_l2_cd, avg_f_score\n\n\ndef test(params, save=False):\n    if save:\n        make_dir(params.result_dir)\n\n    print(params.exp_name)\n\n    # load pretrained model\n    model = PCN(16384, 1024, 4).to(params.device)\n    model.load_state_dict(torch.load(params.ckpt_path))\n    model.eval()\n\n    print('\\033[33m{:20s}{:20s}{:20s}{:20s}\\033[0m'.format('Category', 'L1_CD(1e-3)', 'L2_CD(1e-4)', 'FScore-0.01(%)'))\n    print('\\033[33m{:20s}{:20s}{:20s}{:20s}\\033[0m'.format('--------', '-----------', '-----------', '--------------'))\n\n    if params.category == 'all':\n        if params.novel:\n            categories = CATEGORIES_PCN_NOVEL\n        else:\n            categories = CATEGORIES_PCN\n        \n        l1_cds, l2_cds, fscores = list(), list(), list()\n        for category in categories:\n            avg_l1_cd, avg_l2_cd, avg_f_score = test_single_category(category, model, params, save)\n            print('{:20s}{:<20.4f}{:<20.4f}{:<20.4f}'.format(category.title(), 1e3 * avg_l1_cd, 1e4 * avg_l2_cd, 1e2 * avg_f_score))\n            l1_cds.append(avg_l1_cd)\n            l2_cds.append(avg_l2_cd)\n            fscores.append(avg_f_score)\n        \n        print('\\033[33m{:20s}{:20s}{:20s}{:20s}\\033[0m'.format('--------', '-----------', '-----------', '--------------'))\n        print('\\033[32m{:20s}{:<20.4f}{:<20.4f}{:<20.4f}\\033[0m'.format('Average', np.mean(l1_cds) * 1e3, np.mean(l2_cds) * 1e4, np.mean(fscores) * 1e2))\n    else:\n        avg_l1_cd, avg_l2_cd, 
avg_f_score = test_single_category(params.category, model, params, save)\n        print('{:20s}{:<20.4f}{:<20.4f}{:<20.4f}'.format(params.category.title(), 1e3 * avg_l1_cd, 1e4 * avg_l2_cd, 1e2 * avg_f_score))\n\n\ndef test_single_category_emd(category, model, params):\n    test_dataset = ShapeNet('/media/server/new/datasets/PCN', 'test_novel' if params.novel else 'test', category)\n    test_dataloader = Data.DataLoader(test_dataset, batch_size=params.batch_size, shuffle=False)\n\n    total_emd = 0.0\n    with torch.no_grad():\n        for p, c in test_dataloader:\n            p = p.to(params.device)\n            c = c.to(params.device)\n            _, c_ = model(p)\n            total_emd += emd(c_, c).item()\n        \n    avg_emd = total_emd / len(test_dataset) / c_.shape[1]\n    return avg_emd\n\n\ndef test_emd(params):\n    print(params.exp_name)\n\n    # load pretrained model\n    model = PCN(16384, 1024, 4).to(params.device)\n    model.load_state_dict(torch.load(params.ckpt_path))\n    model.eval()\n\n    print('\\033[33m{:20s}{:20s}\\033[0m'.format('Category', 'EMD(1e-3)'))\n    print('\\033[33m{:20s}{:20s}\\033[0m'.format('--------', '---------'))\n\n    if params.category == 'all':\n        if params.novel:\n            categories = CATEGORIES_PCN_NOVEL\n        else:\n            categories = CATEGORIES_PCN\n        \n        emds = list()\n        for category in categories:\n            avg_emd = test_single_category_emd(category, model, params)\n            print('{:20s}{:<20.4f}'.format(category.title(), 1e3 * avg_emd))\n            emds.append(avg_emd)\n        \n        print('\\033[33m{:20s}{:20s}\\033[0m'.format('--------', '---------'))\n        print('\\033[32m{:20s}{:<20.4f}\\033[0m'.format('Average', np.mean(emds) * 1e3))\n    else:\n        avg_emd = test_single_category_emd(params.category, model, params)\n        print('{:20s}{:<20.4f}'.format(params.category.title(), 1e3 * avg_emd))\n\n\nif __name__ == '__main__':\n    parser = 
argparse.ArgumentParser('Point Cloud Completion Testing')\n    parser.add_argument('--exp_name', type=str, help='Tag of experiment')\n    parser.add_argument('--result_dir', type=str, default='results', help='Results directory')\n    parser.add_argument('--ckpt_path', type=str, help='The path of the pretrained model')\n    parser.add_argument('--category', type=str, default='all', help='Category of point clouds')\n    parser.add_argument('--batch_size', type=int, default=1, help='Batch size for data loader')\n    parser.add_argument('--num_workers', type=int, default=6, help='Num workers for data loader')\n    parser.add_argument('--device', type=str, default='cuda:0', help='Device for testing')\n    parser.add_argument('--save', action='store_true', help='Save test results')\n    parser.add_argument('--novel', action='store_true', help='Test on unseen (novel) categories')\n    parser.add_argument('--emd', action='store_true', help='Whether to evaluate EMD')\n    params = parser.parse_args()\n\n    if not params.emd:\n        test(params, params.save)\n    else:\n        test_emd(params)\n"
  },
  {
    "path": "train.py",
    "content": "import argparse\nimport os\nimport datetime\nimport random\n\nimport torch\nimport torch.optim as Optim\n\nfrom torch.utils.data.dataloader import DataLoader\nfrom tensorboardX import SummaryWriter\n\nfrom dataset import ShapeNet\nfrom models import PCN\nfrom metrics.metric import l1_cd\nfrom metrics.loss import cd_loss_L1, emd_loss\nfrom visualization import plot_pcd_one_view\n\n\ndef make_dir(dir_path):\n    if not os.path.exists(dir_path):\n        os.makedirs(dir_path)\n\n\ndef log(fd,  message, time=True):\n    if time:\n        message = ' ==> '.join([datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), message])\n    fd.write(message + '\\n')\n    fd.flush()\n    print(message)\n\n\ndef prepare_logger(params):\n    # prepare logger directory\n    make_dir(params.log_dir)\n    make_dir(os.path.join(params.log_dir, params.exp_name))\n\n    logger_path = os.path.join(params.log_dir, params.exp_name, params.category)\n    ckpt_dir = os.path.join(params.log_dir, params.exp_name, params.category, 'checkpoints')\n    epochs_dir = os.path.join(params.log_dir, params.exp_name, params.category, 'epochs')\n\n    make_dir(logger_path)\n    make_dir(ckpt_dir)\n    make_dir(epochs_dir)\n\n    logger_file = os.path.join(params.log_dir, params.exp_name, params.category, 'logger.log')\n    log_fd = open(logger_file, 'a')\n\n    log(log_fd, \"Experiment: {}\".format(params.exp_name), False)\n    log(log_fd, \"Logger directory: {}\".format(logger_path), False)\n    log(log_fd, str(params), False)\n\n    train_writer = SummaryWriter(os.path.join(logger_path, 'train'))\n    val_writer = SummaryWriter(os.path.join(logger_path, 'val'))\n\n    return ckpt_dir, epochs_dir, log_fd, train_writer, val_writer\n\n\ndef train(params):\n    torch.backends.cudnn.benchmark = True\n\n    ckpt_dir, epochs_dir, log_fd, train_writer, val_writer = prepare_logger(params)\n\n    log(log_fd, 'Loading Data...')\n\n    train_dataset = ShapeNet('data/PCN', 'train', params.category)\n  
  val_dataset = ShapeNet('data/PCN', 'valid', params.category)\n\n    train_dataloader = DataLoader(train_dataset, batch_size=params.batch_size, shuffle=True, num_workers=params.num_workers)\n    val_dataloader = DataLoader(val_dataset, batch_size=params.batch_size, shuffle=False, num_workers=params.num_workers)\n    log(log_fd, \"Dataset loaded!\")\n\n    # model\n    model = PCN(num_dense=16384, latent_dim=1024, grid_size=4).to(params.device)\n\n    # optimizer\n    optimizer = Optim.Adam(model.parameters(), lr=params.lr, betas=(0.9, 0.999))\n    lr_schedual = Optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.7)\n\n    step = len(train_dataloader) // params.log_frequency\n\n    # load pretrained model and optimizer\n    if params.ckpt_path is not None:\n        model.load_state_dict(torch.load(params.ckpt_path))\n\n    # training\n    best_cd_l1 = 1e8\n    best_epoch_l1 = -1\n    train_step, val_step = 0, 0\n    for epoch in range(1, params.epochs + 1):\n        # hyperparameter alpha\n        if train_step < 10000:\n            alpha = 0.01\n        elif train_step < 20000:\n            alpha = 0.1\n        elif train_step < 50000:\n            alpha = 0.5\n        else:\n            alpha = 1.0\n\n        # training\n        model.train()\n        for i, (p, c) in enumerate(train_dataloader):\n            p, c = p.to(params.device), c.to(params.device)\n\n            optimizer.zero_grad()\n\n            # forward propagation\n            coarse_pred, dense_pred = model(p)\n            \n            # loss function\n            if params.coarse_loss == 'cd':\n                loss1 = cd_loss_L1(coarse_pred, c)\n            elif params.coarse_loss == 'emd':\n                coarse_c = c[:, :1024, :]\n                loss1 = emd_loss(coarse_pred, coarse_c)\n            else:\n                raise ValueError('Not implemented loss {}'.format(params.coarse_loss))\n                \n            loss2 = cd_loss_L1(dense_pred, c)\n            loss = loss1 + 
alpha * loss2\n\n            # back propagation\n            loss.backward()\n            optimizer.step()\n\n            if (i + 1) % step == 0:\n                log(log_fd, \"Training Epoch [{:03d}/{:03d}] - Iteration [{:03d}/{:03d}]: coarse loss = {:.6f}, dense l1 cd = {:.6f}, total loss = {:.6f}\"\n                    .format(epoch, params.epochs, i + 1, len(train_dataloader), loss1.item() * 1e3, loss2.item() * 1e3, loss.item() * 1e3))\n            \n            train_writer.add_scalar('coarse', loss1.item(), train_step)\n            train_writer.add_scalar('dense', loss2.item(), train_step)\n            train_writer.add_scalar('total', loss.item(), train_step)\n            train_step += 1\n        \n        lr_schedual.step()\n\n        # evaluation\n        model.eval()\n        total_cd_l1 = 0.0\n        with torch.no_grad():\n            rand_iter = random.randint(0, len(val_dataloader) - 1)  # for visualization\n\n            for i, (p, c) in enumerate(val_dataloader):\n                p, c = p.to(params.device), c.to(params.device)\n                coarse_pred, dense_pred = model(p)\n                total_cd_l1 += l1_cd(dense_pred, c).item()\n\n                # save into image\n                if rand_iter == i:\n                    index = random.randint(0, dense_pred.shape[0] - 1)\n                    plot_pcd_one_view(os.path.join(epochs_dir, 'epoch_{:03d}.png'.format(epoch)),\n                                      [p[index].detach().cpu().numpy(), coarse_pred[index].detach().cpu().numpy(), dense_pred[index].detach().cpu().numpy(), c[index].detach().cpu().numpy()],\n                                      ['Input', 'Coarse', 'Dense', 'Ground Truth'], xlim=(-0.35, 0.35), ylim=(-0.35, 0.35), zlim=(-0.35, 0.35))\n            \n            total_cd_l1 /= len(val_dataset)\n            val_writer.add_scalar('l1_cd', total_cd_l1, val_step)\n            val_step += 1\n\n            log(log_fd, \"Validate Epoch [{:03d}/{:03d}]: L1 Chamfer Distance = 
{:.6f}\".format(epoch, params.epochs, total_cd_l1 * 1e3))\n        \n        if total_cd_l1 < best_cd_l1:\n            best_epoch_l1 = epoch\n            best_cd_l1 = total_cd_l1\n            torch.save(model.state_dict(), os.path.join(ckpt_dir, 'best_l1_cd.pth'))\n            \n    log(log_fd, 'Best l1 cd model in epoch {}, the minimum l1 cd is {}'.format(best_epoch_l1, best_cd_l1 * 1e3))\n    log_fd.close()\n    \n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser('PCN')\n    parser.add_argument('--exp_name', type=str, help='Tag of experiment')\n    parser.add_argument('--log_dir', type=str, default='log', help='Logger directory')\n    parser.add_argument('--ckpt_path', type=str, default=None, help='The path of pretrained model')\n    parser.add_argument('--lr', type=float, default=0.0001, help='Learning rate')\n    parser.add_argument('--category', type=str, default='all', help='Category of point clouds')\n    parser.add_argument('--epochs', type=int, default=200, help='Epochs of training')\n    parser.add_argument('--batch_size', type=int, default=32, help='Batch size for data loader')\n    parser.add_argument('--coarse_loss', type=str, default='cd', help='loss function for coarse point cloud')\n    parser.add_argument('--num_workers', type=int, default=6, help='num_workers for data loader')\n    parser.add_argument('--device', type=str, default='cuda:0', help='device for training')\n    parser.add_argument('--log_frequency', type=int, default=10, help='Logger frequency in every epoch')\n    parser.add_argument('--save_frequency', type=int, default=10, help='Model saving frequency')\n    params = parser.parse_args()\n    \n    train(params)\n"
  },
  {
    "path": "visualization/__init__.py",
    "content": "from visualization.visualization import plot_pcd_one_view, o3d_visualize_pc\n"
  },
  {
    "path": "visualization/visualization.py",
    "content": "import open3d as o3d\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\ndef o3d_visualize_pc(pc):\n    point_cloud = o3d.geometry.PointCloud()\n    point_cloud.points = o3d.utility.Vector3dVector(pc)\n    o3d.visualization.draw_geometries([point_cloud])\n\n\ndef plot_pcd_one_view(filename, pcds, titles, suptitle='', sizes=None, cmap='Reds', zdir='y',\n                         xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), zlim=(-0.5, 0.5)):\n    if sizes is None:\n        sizes = [0.5 for i in range(len(pcds))]\n    fig = plt.figure(figsize=(len(pcds) * 3 * 1.4, 3 * 1.4))\n    elev = 30  # 水平倾斜\n    azim = -45  # 旋转\n    for j, (pcd, size) in enumerate(zip(pcds, sizes)):\n        color = pcd[:, 0]\n        ax = fig.add_subplot(1, len(pcds), j + 1, projection='3d')\n        ax.view_init(elev, azim)\n        ax.scatter(pcd[:, 0], pcd[:, 1], pcd[:, 2], zdir=zdir, c=color, s=size, cmap=cmap, vmin=-1.0, vmax=0.5)\n        ax.set_title(titles[j])\n        ax.set_axis_off()\n        ax.set_xlim(xlim)\n        ax.set_ylim(ylim)\n        ax.set_zlim(zlim)\n    plt.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=0.9, wspace=0.1, hspace=0.1)\n    plt.suptitle(suptitle)\n    fig.savefig(filename)\n    plt.close(fig)\n"
  }
]