Repository: qinglew/PCN-PyTorch
Branch: master
Commit: 3b3d1ce97d92
Files: 40
Total size: 26.3 MB

Directory structure:
PCN-PyTorch/

├── .gitignore
├── README.md
├── checkpoint/
│   └── best_l1_cd.pth
├── data/
│   └── README.md
├── dataset/
│   ├── __init__.py
│   └── shapenet.py
├── extensions/
│   ├── chamfer_distance/
│   │   ├── chamfer3D.cu
│   │   ├── chamfer_3D.egg-info/
│   │   │   ├── PKG-INFO
│   │   │   ├── SOURCES.txt
│   │   │   ├── dependency_links.txt
│   │   │   └── top_level.txt
│   │   ├── chamfer_cuda.cpp
│   │   ├── chamfer_distance.py
│   │   └── setup.py
│   └── earth_movers_distance/
│       ├── emd.cpp
│       ├── emd.py
│       ├── emd_cuda.egg-info/
│       │   ├── PKG-INFO
│       │   ├── SOURCES.txt
│       │   ├── dependency_links.txt
│       │   └── top_level.txt
│       ├── emd_kernel.cu
│       └── setup.py
├── metrics/
│   ├── loss.py
│   └── metric.py
├── models/
│   ├── __init__.py
│   └── pcn.py
├── render/
│   ├── README.md
│   ├── blender.log
│   ├── partial.sh
│   ├── process_exr.py
│   └── render_depth.py
├── requirements.txt
├── sample/
│   ├── CMakeLists.txt
│   ├── README.md
│   ├── mesh_sampling
│   └── mesh_sampling.cpp
├── test.py
├── train.py
└── visualization/
    ├── __init__.py
    └── visualization.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.vscode
__pycache__
log
.idea
data/pcn


================================================
FILE: README.md
================================================
# PCN: Point Completion Network

## Introduction

![PCN](images/network.png)

This is a PyTorch implementation of PCN (Point Completion Network), an autoencoder for point cloud completion. For details, please refer to the paper on [arXiv](https://arxiv.org/pdf/1808.00671.pdf).

## Environment

* Ubuntu 18.04 LTS
* Python 3.7.9
* PyTorch 1.7.0
* CUDA 10.1.243

## Prerequisite

Compile the CUDA extensions for Chamfer Distance (CD) and Earth Mover's Distance (EMD):

```shell
cd extensions/chamfer_distance
python setup.py install
cd ../earth_movers_distance
python setup.py install
```

**Hint**: Do not compile these extensions on Windows.

Install the remaining dependencies with:

```shell
pip install -r requirements.txt
```

## Dataset

Please refer to `render` and `sample` to create your own dataset. We also decompressed all `.lmdb` data from the original [PCN](https://drive.google.com/drive/folders/1M_lJN14Ac1RtPtEQxNlCV9e8pom3U6Pa) dataset into `.ply` files, which take up less space (8.1 GB), and uploaded them to Google Drive. Here is the shared link: [Google Drive](https://drive.google.com/file/d/1OvvRyx02-C_DkzYiJ5stpin0mnXydHQ7/view?usp=sharing).

## Training

To train the model, run:

```shell
python train.py --exp_name PCN_16384 --lr 0.0001 --epochs 400 --batch_size 32 --coarse_loss cd --num_workers 8
```

To compute the loss on the coarse point clouds with EMD instead, run:

```shell
python train.py --exp_name PCN_16384 --lr 0.0001 --epochs 400 --batch_size 32 --coarse_loss emd --num_workers 8
```

## Testing

To test the model, run:

```shell
python test.py --exp_name PCN_16384 --ckpt_path <path of pretrained model> --batch_size 32 --num_workers 8
```

Because computing EMD on 16384 points is expensive, its evaluation is split out behind the `--emd` flag. The `--novel` flag evaluates on the novel test set, which contains categories unseen during training. The `--save` flag saves each prediction as a `.ply` file and renders the result as a `.png` image.

## Pretrained Model

The pretrained model is in `checkpoint/`.

## Results

I trained the model on an Nvidia GTX 1080 Ti with the L1 Chamfer Distance for 400 epochs, using an initial learning rate of 0.0001 decayed by a factor of 0.7 every 50 epochs and a batch size of 32. The best model is the one with the minimum L1 CD on the validation data.
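The step-decay schedule described above (the exact scheduler lives in `train.py`, which uses these hyperparameters) can be sketched as a one-liner:

```python
def decayed_lr(epoch, base_lr=1e-4, gamma=0.7, step=50):
    """Learning rate after step decay: base_lr * gamma ** (epoch // step)."""
    return base_lr * gamma ** (epoch // step)

assert decayed_lr(49) == 1e-4             # no decay during the first 50 epochs
assert decayed_lr(120) == 1e-4 * 0.7**2   # two decay steps by epoch 120
```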

### Quantitative Result

The threshold for F-Score is 0.01.
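As a reference for how these metrics fit together, here is a brute-force NumPy sketch of L1 CD, L2 CD, and F-Score on a pair of small clouds. The averaging conventions (mean over both directions) are assumptions here; `metrics/metric.py` contains the repository's exact definitions:

```python
import numpy as np

def chamfer_and_fscore(pred, gt, threshold=0.01):
    """L1/L2 Chamfer Distance and F-Score between (N, 3) and (M, 3) clouds."""
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)  # (N, M) squared dists
    d_p2g = np.sqrt(d2.min(axis=1))  # pred -> gt nearest-neighbor distances
    d_g2p = np.sqrt(d2.min(axis=0))  # gt -> pred nearest-neighbor distances
    l1_cd = 0.5 * (d_p2g.mean() + d_g2p.mean())
    l2_cd = 0.5 * ((d_p2g ** 2).mean() + (d_g2p ** 2).mean())
    precision = (d_p2g < threshold).mean()  # fraction of predictions near gt
    recall = (d_g2p < threshold).mean()     # fraction of gt covered by predictions
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return l1_cd, l2_cd, f
```

Identical clouds give zero CD and an F-Score of 1; clouds farther apart than the threshold give an F-Score of 0.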

#### Seen Categories

Category | L1_CD(1e-3) | L2_CD(1e-4) | EMD(1e-3) | F-Score(%)
-- | -- | -- | -- | --
Airplane | 6.0028 | 1.7323 | 10.5922 | 86.2954
Cabinet | 11.2092 | 4.7351 | 27.1505 | 61.6697
Car | 9.1304 | 2.7157 | 14.3661 | 70.5874
Chair | 12.0340 | 5.8717 | 22.4904 | 58.2958
Lamp | 12.6754 | 7.5891 | 58.7799 | 57.8894
Sofa | 12.8218 | 6.4572 | 19.2891 | 53.4009
Table | 9.8840 | 4.5669 | 23.7691 | 70.9750
Vessel | 10.1603 | 4.2766 | 17.9761 | 66.6521
**Average** | 10.4897 | 4.7431 | 24.3017 | 65.7207

#### Unseen Categories

Category | L1_CD(1e-3) | L2_CD(1e-4) | EMD(1e-3) | F-Score(%)
-- | -- | -- | -- | --
Bus       | 10.5110 | 4.4648  | 17.0274 | 66.9774
Bed       | 24.9320 | 32.4809 | 42.7974 | 32.2265
Bookshelf | 15.8186 | 13.1783 | 28.5608 | 50.0337
Bench     | 12.1345 | 7.3033  | 12.7497 | 62.4376
Guitar    | 11.4964 | 5.9601  | 28.4223 | 59.4976
Motorbike | 15.3426 | 8.7723  | 21.8634 | 44.7431
Skateboard| 13.1909 | 7.9711  | 17.9910 | 58.4427
Pistol    | 17.4897 | 15.5062 | 33.8937 | 45.6073
**Average**  | 15.1145 | 11.9546 | 25.4132 | 52.4958

### Qualitative Result

#### Seen Categories

![seen](images/seen_categories.png)

#### Unseen Categories

![unseen](images/unseen_categories.png)

## Citation

* [PCN: Point Completion Network](https://arxiv.org/pdf/1808.00671.pdf)
* [PCN's official Tensorflow implementation](https://github.com/wentaoyuan/pcn)


================================================
FILE: checkpoint/best_l1_cd.pth
================================================
[File too large to display: 26.2 MB]

================================================
FILE: data/README.md
================================================
# data

Please download `PCN.zip` (see the shared Google Drive link in the top-level README) and unzip it here.

```shell
unzip PCN.zip
```

================================================
FILE: dataset/__init__.py
================================================
from dataset.shapenet import ShapeNet


================================================
FILE: dataset/shapenet.py
================================================
import sys
sys.path.append('.')

import os
import random

import torch
import torch.utils.data as data
import numpy as np
import open3d as o3d


class ShapeNet(data.Dataset):
    """
    ShapeNet dataset in "PCN: Point Completion Network". It contains 28974 training
    samples while each complete samples corresponds to 8 viewpoint partial scans, 800
    validation samples and 1200 testing samples.
    """
    
    def __init__(self, dataroot, split, category):
        assert split in ['train', 'valid', 'test', 'test_novel'], "split error value!"

        self.cat2id = {
            # seen categories
            "airplane"  : "02691156",  # plane
            "cabinet"   : "02933112",  # dresser
            "car"       : "02958343",
            "chair"     : "03001627",
            "lamp"      : "03636649",
            "sofa"      : "04256520",
            "table"     : "04379243",
            "vessel"    : "04530566",  # boat
            
            # aliases for some seen categories
            "boat"      : "04530566",  # vessel
            "couch"     : "04256520",  # sofa
            "dresser"   : "02933112",  # cabinet
            "plane"     : "02691156",  # airplane
            "watercraft": "04530566",  # vessel

            # unseen categories
            "bus"       : "02924116",
            "bed"       : "02818832",
            "bookshelf" : "02871439",
            "bench"     : "02828884",
            "guitar"    : "03467517",
            "motorbike" : "03790512",
            "skateboard": "04225987",
            "pistol"    : "03948459",
        }

        # self.id2cat = {cat_id: cat for cat, cat_id in self.cat2id.items()}

        self.dataroot = dataroot
        self.split = split
        self.category = category

        self.partial_paths, self.complete_paths = self._load_data()
    
    def __getitem__(self, index):
        if self.split == 'train':
            partial_path = self.partial_paths[index].format(random.randint(0, 7))
        else:
            partial_path = self.partial_paths[index]
        complete_path = self.complete_paths[index]

        partial_pc = self.random_sample(self.read_point_cloud(partial_path), 2048)
        complete_pc = self.random_sample(self.read_point_cloud(complete_path), 16384)

        return torch.from_numpy(partial_pc), torch.from_numpy(complete_pc)

    def __len__(self):
        return len(self.complete_paths)

    def _load_data(self):
        with open(os.path.join(self.dataroot, '{}.list').format(self.split), 'r') as f:
            lines = f.read().splitlines()

        if self.category != 'all':
            lines = list(filter(lambda x: x.startswith(self.cat2id[self.category]), lines))
        
        partial_paths, complete_paths = list(), list()

        for line in lines:
            category, model_id = line.split('/')
            if self.split == 'train':
                partial_paths.append(os.path.join(self.dataroot, self.split, 'partial', category, model_id + '_{}.ply'))
            else:
                partial_paths.append(os.path.join(self.dataroot, self.split, 'partial', category, model_id + '.ply'))
            complete_paths.append(os.path.join(self.dataroot, self.split, 'complete', category, model_id + '.ply'))
        
        return partial_paths, complete_paths
    
    def read_point_cloud(self, path):
        pc = o3d.io.read_point_cloud(path)
        return np.array(pc.points, np.float32)
    
    def random_sample(self, pc, n):
        idx = np.random.permutation(pc.shape[0])
        if idx.shape[0] < n:
            idx = np.concatenate([idx, np.random.randint(pc.shape[0], size=n-pc.shape[0])])
        return pc[idx[:n]]
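A quick standalone check of `random_sample`'s behavior (NumPy only, not part of the file above): when the cloud is larger than `n` it subsamples, and when it is smaller it pads by repeating randomly chosen points.

```python
import numpy as np

def random_sample(pc, n):
    # Same logic as ShapeNet.random_sample: shuffle the indices, then pad
    # with randomly repeated points when the cloud has fewer than n points.
    idx = np.random.permutation(pc.shape[0])
    if idx.shape[0] < n:
        idx = np.concatenate([idx, np.random.randint(pc.shape[0], size=n - pc.shape[0])])
    return pc[idx[:n]]

small = np.random.rand(100, 3)
large = np.random.rand(5000, 3)
assert random_sample(small, 2048).shape == (2048, 3)  # padded by resampling
assert random_sample(large, 2048).shape == (2048, 3)  # subsampled
```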


================================================
FILE: extensions/chamfer_distance/chamfer3D.cu
================================================

#include <stdio.h>
#include <ATen/ATen.h>

#include <cuda.h>
#include <cuda_runtime.h>

#include <vector>



__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){
	const int batch=512;
	__shared__ float buf[batch*3];
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int k2=0;k2<m;k2+=batch){
			int end_k=min(m,k2+batch)-k2;
			for (int j=threadIdx.x;j<end_k*3;j+=blockDim.x){
				buf[j]=xyz2[(i*m+k2)*3+j];
			}
			__syncthreads();
			for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
				float x1=xyz[(i*n+j)*3+0];
				float y1=xyz[(i*n+j)*3+1];
				float z1=xyz[(i*n+j)*3+2];
				int best_i=0;
				float best=0;
				int end_ka=end_k-(end_k&3);
				if (end_ka==batch){
					for (int k=0;k<batch;k+=4){
						{
							float x2=buf[k*3+0]-x1;
							float y2=buf[k*3+1]-y1;
							float z2=buf[k*3+2]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (k==0 || d<best){
								best=d;
								best_i=k+k2;
							}
						}
						{
							float x2=buf[k*3+3]-x1;
							float y2=buf[k*3+4]-y1;
							float z2=buf[k*3+5]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+1;
							}
						}
						{
							float x2=buf[k*3+6]-x1;
							float y2=buf[k*3+7]-y1;
							float z2=buf[k*3+8]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+2;
							}
						}
						{
							float x2=buf[k*3+9]-x1;
							float y2=buf[k*3+10]-y1;
							float z2=buf[k*3+11]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+3;
							}
						}
					}
				}else{
					for (int k=0;k<end_ka;k+=4){
						{
							float x2=buf[k*3+0]-x1;
							float y2=buf[k*3+1]-y1;
							float z2=buf[k*3+2]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (k==0 || d<best){
								best=d;
								best_i=k+k2;
							}
						}
						{
							float x2=buf[k*3+3]-x1;
							float y2=buf[k*3+4]-y1;
							float z2=buf[k*3+5]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+1;
							}
						}
						{
							float x2=buf[k*3+6]-x1;
							float y2=buf[k*3+7]-y1;
							float z2=buf[k*3+8]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+2;
							}
						}
						{
							float x2=buf[k*3+9]-x1;
							float y2=buf[k*3+10]-y1;
							float z2=buf[k*3+11]-z1;
							float d=x2*x2+y2*y2+z2*z2;
							if (d<best){
								best=d;
								best_i=k+k2+3;
							}
						}
					}
				}
				for (int k=end_ka;k<end_k;k++){
					float x2=buf[k*3+0]-x1;
					float y2=buf[k*3+1]-y1;
					float z2=buf[k*3+2]-z1;
					float d=x2*x2+y2*y2+z2*z2;
					if (k==0 || d<best){
						best=d;
						best_i=k+k2;
					}
				}
				if (k2==0 || result[(i*n+j)]>best){
					result[(i*n+j)]=best;
					result_i[(i*n+j)]=best_i;
				}
			}
			__syncthreads();
		}
	}
}
// int chamfer_cuda_forward(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i, cudaStream_t stream){
int chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2){

	const auto batch_size = xyz1.size(0);
	const auto n = xyz1.size(1); //num_points point cloud A
	const auto m = xyz2.size(1); //num_points point cloud B

	NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data<float>(), m, xyz2.data<float>(), dist1.data<float>(), idx1.data<int>());
	NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data<float>(), n, xyz1.data<float>(), dist2.data<float>(), idx2.data<int>());

	cudaError_t err = cudaGetLastError();
	  if (err != cudaSuccess) {
	    printf("error in nnd updateOutput: %s\n", cudaGetErrorString(err));
	    //THError("aborting");
	    return 0;
	  }
	  return 1;


}
__global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
			float x1=xyz1[(i*n+j)*3+0];
			float y1=xyz1[(i*n+j)*3+1];
			float z1=xyz1[(i*n+j)*3+2];
			int j2=idx1[i*n+j];
			float x2=xyz2[(i*m+j2)*3+0];
			float y2=xyz2[(i*m+j2)*3+1];
			float z2=xyz2[(i*m+j2)*3+2];
			float g=grad_dist1[i*n+j]*2;
			atomicAdd(&(grad_xyz1[(i*n+j)*3+0]),g*(x1-x2));
			atomicAdd(&(grad_xyz1[(i*n+j)*3+1]),g*(y1-y2));
			atomicAdd(&(grad_xyz1[(i*n+j)*3+2]),g*(z1-z2));
			atomicAdd(&(grad_xyz2[(i*m+j2)*3+0]),-(g*(x1-x2)));
			atomicAdd(&(grad_xyz2[(i*m+j2)*3+1]),-(g*(y1-y2)));
			atomicAdd(&(grad_xyz2[(i*m+j2)*3+2]),-(g*(z1-z2)));
		}
	}
}
// int chamfer_cuda_backward(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2, cudaStream_t stream){
int chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2){
	// cudaMemset(grad_xyz1,0,b*n*3*4);
	// cudaMemset(grad_xyz2,0,b*m*3*4);
	
	const auto batch_size = xyz1.size(0);
	const auto n = xyz1.size(1); //num_points point cloud A
	const auto m = xyz2.size(1); //num_points point cloud B

	NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data<float>(),m,xyz2.data<float>(),graddist1.data<float>(),idx1.data<int>(),gradxyz1.data<float>(),gradxyz2.data<float>());
	NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,m,xyz2.data<float>(),n,xyz1.data<float>(),graddist2.data<float>(),idx2.data<int>(),gradxyz2.data<float>(),gradxyz1.data<float>());
	
	cudaError_t err = cudaGetLastError();
	  if (err != cudaSuccess) {
	    printf("error in nnd get grad: %s\n", cudaGetErrorString(err));
	    //THError("aborting");
	    return 0;
	  }
	  return 1;
	
}


================================================
FILE: extensions/chamfer_distance/chamfer_3D.egg-info/PKG-INFO
================================================
Metadata-Version: 2.1
Name: chamfer-3D
Version: 0.0.0
Summary: UNKNOWN
Home-page: UNKNOWN
License: UNKNOWN
Platform: UNKNOWN

UNKNOWN



================================================
FILE: extensions/chamfer_distance/chamfer_3D.egg-info/SOURCES.txt
================================================
chamfer3D.cu
chamfer_cuda.cpp
setup.py
chamfer_3D.egg-info/PKG-INFO
chamfer_3D.egg-info/SOURCES.txt
chamfer_3D.egg-info/dependency_links.txt
chamfer_3D.egg-info/top_level.txt

================================================
FILE: extensions/chamfer_distance/chamfer_3D.egg-info/dependency_links.txt
================================================



================================================
FILE: extensions/chamfer_distance/chamfer_3D.egg-info/top_level.txt
================================================
chamfer_3D


================================================
FILE: extensions/chamfer_distance/chamfer_cuda.cpp
================================================
#include <torch/torch.h>
#include <vector>

///TMP
//#include "common.h"
/// NOT TMP
	

int chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2);


int chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2);




int chamfer_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2) {
    return chamfer_cuda_forward(xyz1, xyz2, dist1, dist2, idx1, idx2);
}


int chamfer_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, 
					  at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2) {

    return chamfer_cuda_backward(xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2);
}



PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &chamfer_forward, "chamfer forward (CUDA)");
  m.def("backward", &chamfer_backward, "chamfer backward (CUDA)");
}


================================================
FILE: extensions/chamfer_distance/chamfer_distance.py
================================================
import importlib
import os

import torch
from torch import nn
from torch.autograd import Function


chamfer_found = importlib.find_loader("chamfer_3D") is not None
if not chamfer_found:
    ## Cool trick from https://github.com/chrdiller
    print("Jitting Chamfer 3D")

    from torch.utils.cpp_extension import load
    chamfer_3D = load(name="chamfer_3D",
          sources=[
              "/".join(os.path.abspath(__file__).split('/')[:-1] + ["chamfer_cuda.cpp"]),
              "/".join(os.path.abspath(__file__).split('/')[:-1] + ["chamfer3D.cu"]),
              ])
    # print("Loaded JIT 3D CUDA chamfer distance")

else:
    import chamfer_3D
    # print("Loaded compiled 3D CUDA chamfer distance")


# Chamfer's distance module @thibaultgroueix
# GPU tensors only
class chamfer_3DFunction(Function):
    @staticmethod
    def forward(ctx, xyz1, xyz2):
        """
        xyz1: (B, N, 3)
        xyz2: (B, M, 3)
        """
        batchsize, n, _ = xyz1.size()
        _, m, _ = xyz2.size()
        device = xyz1.device

        dist1 = torch.zeros(batchsize, n)
        dist2 = torch.zeros(batchsize, m)

        idx1 = torch.zeros(batchsize, n).type(torch.IntTensor)
        idx2 = torch.zeros(batchsize, m).type(torch.IntTensor)

        dist1 = dist1.to(device)
        dist2 = dist2.to(device)
        idx1 = idx1.to(device)
        idx2 = idx2.to(device)
        torch.cuda.set_device(device)

        chamfer_3D.forward(xyz1, xyz2, dist1, dist2, idx1, idx2)
        ctx.save_for_backward(xyz1, xyz2, idx1, idx2)
        return dist1, dist2, idx1, idx2

    @staticmethod
    def backward(ctx, graddist1, graddist2, gradidx1, gradidx2):
        xyz1, xyz2, idx1, idx2 = ctx.saved_tensors
        graddist1 = graddist1.contiguous()
        graddist2 = graddist2.contiguous()
        device = graddist1.device

        gradxyz1 = torch.zeros(xyz1.size())
        gradxyz2 = torch.zeros(xyz2.size())

        gradxyz1 = gradxyz1.to(device)
        gradxyz2 = gradxyz2.to(device)
        chamfer_3D.backward(
            xyz1, xyz2, gradxyz1, gradxyz2, graddist1, graddist2, idx1, idx2
        )
        return gradxyz1, gradxyz2


class ChamferDistance(nn.Module):
    def __init__(self):
        super(ChamferDistance, self).__init__()

    def forward(self, input1, input2):
        """
        input1: (B, N, 3)
        input2: (B, M, 3)
        """
        dist1, dist2, _, _ = chamfer_3DFunction.apply(input1, input2)
        return dist1, dist2
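For sanity-checking the extension on small inputs, a brute-force CPU counterpart of `chamfer_3DFunction.forward` could look like the sketch below (illustrative only, not part of the repository; it materializes the full O(N×M) distance matrix):

```python
import numpy as np

def chamfer_reference(xyz1, xyz2):
    """Squared nearest-neighbor distances and indices in both directions
    for a single pair of clouds, matching the CUDA kernel's outputs."""
    d2 = ((xyz1[:, None, :] - xyz2[None, :, :]) ** 2).sum(-1)  # (N, M)
    return d2.min(1), d2.min(0), d2.argmin(1), d2.argmin(0)
```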


================================================
FILE: extensions/chamfer_distance/setup.py
================================================
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension


setup(
    name='chamfer_3D',
    ext_modules=[
        CUDAExtension('chamfer_3D', [
            "/".join(__file__.split('/')[:-1] + ['chamfer_cuda.cpp']),
            "/".join(__file__.split('/')[:-1] + ['chamfer3D.cu']),
        ]),
    ],
    cmdclass={
        'build_ext': BuildExtension
    })


================================================
FILE: extensions/earth_movers_distance/emd.cpp
================================================
#ifndef _EMD
#define _EMD

#include <vector>
#include <torch/extension.h>

//CUDA declarations
at::Tensor ApproxMatchForward(
    const at::Tensor xyz1,
    const at::Tensor xyz2);

at::Tensor MatchCostForward(
    const at::Tensor xyz1,
    const at::Tensor xyz2,
    const at::Tensor match);

std::vector<at::Tensor> MatchCostBackward(
    const at::Tensor grad_cost,
    const at::Tensor xyz1,
    const at::Tensor xyz2,
    const at::Tensor match);

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("approxmatch_forward", &ApproxMatchForward,"ApproxMatch forward (CUDA)");
  m.def("matchcost_forward", &MatchCostForward,"MatchCost forward (CUDA)");
  m.def("matchcost_backward", &MatchCostBackward,"MatchCost backward (CUDA)");
}

#endif


================================================
FILE: extensions/earth_movers_distance/emd.py
================================================
import torch
import torch.nn as nn
import emd_cuda


class EarthMoverDistanceFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, xyz1, xyz2):
        xyz1 = xyz1.contiguous()
        xyz2 = xyz2.contiguous()
        assert xyz1.is_cuda and xyz2.is_cuda, "Only support cuda currently."
        match = emd_cuda.approxmatch_forward(xyz1, xyz2)
        cost = emd_cuda.matchcost_forward(xyz1, xyz2, match)
        ctx.save_for_backward(xyz1, xyz2, match)
        return cost

    @staticmethod
    def backward(ctx, grad_cost):
        xyz1, xyz2, match = ctx.saved_tensors
        grad_cost = grad_cost.contiguous()
        grad_xyz1, grad_xyz2 = emd_cuda.matchcost_backward(grad_cost, xyz1, xyz2, match)
        return grad_xyz1, grad_xyz2


class EarthMoverDistance(nn.Module):
    def __init__(self):
        super().__init__()
    
    def forward(self, xyz1, xyz2):
        """
        Args:
            xyz1 (torch.Tensor): (b, N1, 3)
            xyz2 (torch.Tensor): (b, N2, 3)

        Returns:
            cost (torch.Tensor): (b)
        """
        if xyz1.dim() == 2:
            xyz1 = xyz1.unsqueeze(0)
        if xyz2.dim() == 2:
            xyz2 = xyz2.unsqueeze(0)
        cost = EarthMoverDistanceFunction.apply(xyz1, xyz2)
        return cost
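The CUDA kernel computes an approximate matching; for tiny equal-size clouds, an exact reference cost can be obtained by brute force over permutations (an illustrative sketch only, exponential in N, using squared distances as in `MatchCostForward`):

```python
import numpy as np
from itertools import permutations

def emd_exact(xyz1, xyz2):
    """Exact minimum one-to-one matching cost between two tiny equal-size
    clouds. Demo only -- do not use beyond ~8 points."""
    n = len(xyz1)
    return min(
        sum(float(((xyz1[i] - xyz2[p]) ** 2).sum()) for i, p in enumerate(perm))
        for perm in permutations(range(n))
    )
```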


================================================
FILE: extensions/earth_movers_distance/emd_cuda.egg-info/PKG-INFO
================================================
Metadata-Version: 2.1
Name: emd-cuda
Version: 0.0.0
Summary: UNKNOWN
Home-page: UNKNOWN
License: UNKNOWN
Platform: UNKNOWN

UNKNOWN



================================================
FILE: extensions/earth_movers_distance/emd_cuda.egg-info/SOURCES.txt
================================================
emd.cpp
emd_kernel.cu
setup.py
emd_cuda.egg-info/PKG-INFO
emd_cuda.egg-info/SOURCES.txt
emd_cuda.egg-info/dependency_links.txt
emd_cuda.egg-info/top_level.txt

================================================
FILE: extensions/earth_movers_distance/emd_cuda.egg-info/dependency_links.txt
================================================



================================================
FILE: extensions/earth_movers_distance/emd_cuda.egg-info/top_level.txt
================================================
emd_cuda


================================================
FILE: extensions/earth_movers_distance/emd_kernel.cu
================================================
/**********************************
 * Original Author: Haoqiang Fan
 * Modified by: Kaichun Mo
 *********************************/

#ifndef _EMD_KERNEL
#define _EMD_KERNEL

#include <cmath>
#include <vector>

#include <ATen/ATen.h>
#include <ATen/cuda/CUDAApplyUtils.cuh>  // at::cuda::getApplyGrid
#include <THC/THC.h>

#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)


/********************************
* Forward kernel for approxmatch
*********************************/

template<typename scalar_t>
__global__ void approxmatch(int b,int n,int m,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,scalar_t * __restrict__ match,scalar_t * temp){
	scalar_t * remainL=temp+blockIdx.x*(n+m)*2, * remainR=temp+blockIdx.x*(n+m)*2+n,*ratioL=temp+blockIdx.x*(n+m)*2+n+m,*ratioR=temp+blockIdx.x*(n+m)*2+n+m+n;
	scalar_t multiL,multiR;
	if (n>=m){
		multiL=1;
		multiR=n/m;
	}else{
		multiL=m/n;
		multiR=1;
	}
	const int Block=1024;
	__shared__ scalar_t buf[Block*4];
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int j=threadIdx.x;j<n*m;j+=blockDim.x)
			match[i*n*m+j]=0;
		for (int j=threadIdx.x;j<n;j+=blockDim.x)
			remainL[j]=multiL;
		for (int j=threadIdx.x;j<m;j+=blockDim.x)
			remainR[j]=multiR;
		__syncthreads();
		for (int j=7;j>=-2;j--){
			scalar_t level=-powf(4.0f,j);
			if (j==-2){
				level=0;
			}
			for (int k0=0;k0<n;k0+=blockDim.x){
				int k=k0+threadIdx.x;
				scalar_t x1=0,y1=0,z1=0;
				if (k<n){
					x1=xyz1[i*n*3+k*3+0];
					y1=xyz1[i*n*3+k*3+1];
					z1=xyz1[i*n*3+k*3+2];
				}
				scalar_t suml=1e-9f;
				for (int l0=0;l0<m;l0+=Block){
					int lend=min(m,l0+Block)-l0;
					for (int l=threadIdx.x;l<lend;l+=blockDim.x){
						scalar_t x2=xyz2[i*m*3+l0*3+l*3+0];
						scalar_t y2=xyz2[i*m*3+l0*3+l*3+1];
						scalar_t z2=xyz2[i*m*3+l0*3+l*3+2];
						buf[l*4+0]=x2;
						buf[l*4+1]=y2;
						buf[l*4+2]=z2;
						buf[l*4+3]=remainR[l0+l];
					}
					__syncthreads();
					for (int l=0;l<lend;l++){
						scalar_t x2=buf[l*4+0];
						scalar_t y2=buf[l*4+1];
						scalar_t z2=buf[l*4+2];
						scalar_t d=level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1));
						scalar_t w=__expf(d)*buf[l*4+3];
						suml+=w;
					}
					__syncthreads();
				}
				if (k<n)
					ratioL[k]=remainL[k]/suml;
			}
			__syncthreads();
			for (int l0=0;l0<m;l0+=blockDim.x){
				int l=l0+threadIdx.x;
				scalar_t x2=0,y2=0,z2=0;
				if (l<m){
					x2=xyz2[i*m*3+l*3+0];
					y2=xyz2[i*m*3+l*3+1];
					z2=xyz2[i*m*3+l*3+2];
				}
				scalar_t sumr=0;
				for (int k0=0;k0<n;k0+=Block){
					int kend=min(n,k0+Block)-k0;
					for (int k=threadIdx.x;k<kend;k+=blockDim.x){
						buf[k*4+0]=xyz1[i*n*3+k0*3+k*3+0];
						buf[k*4+1]=xyz1[i*n*3+k0*3+k*3+1];
						buf[k*4+2]=xyz1[i*n*3+k0*3+k*3+2];
						buf[k*4+3]=ratioL[k0+k];
					}
					__syncthreads();
					for (int k=0;k<kend;k++){
						scalar_t x1=buf[k*4+0];
						scalar_t y1=buf[k*4+1];
						scalar_t z1=buf[k*4+2];
						scalar_t w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*buf[k*4+3];
						sumr+=w;
					}
					__syncthreads();
				}
				if (l<m){
					sumr*=remainR[l];
					scalar_t consumption=fminf(remainR[l]/(sumr+1e-9f),1.0f);
					ratioR[l]=consumption*remainR[l];
					remainR[l]=fmaxf(0.0f,remainR[l]-sumr);
				}
			}
			__syncthreads();
			for (int k0=0;k0<n;k0+=blockDim.x){
				int k=k0+threadIdx.x;
				scalar_t x1=0,y1=0,z1=0;
				if (k<n){
					x1=xyz1[i*n*3+k*3+0];
					y1=xyz1[i*n*3+k*3+1];
					z1=xyz1[i*n*3+k*3+2];
				}
				scalar_t suml=0;
				for (int l0=0;l0<m;l0+=Block){
					int lend=min(m,l0+Block)-l0;
					for (int l=threadIdx.x;l<lend;l+=blockDim.x){
						buf[l*4+0]=xyz2[i*m*3+l0*3+l*3+0];
						buf[l*4+1]=xyz2[i*m*3+l0*3+l*3+1];
						buf[l*4+2]=xyz2[i*m*3+l0*3+l*3+2];
						buf[l*4+3]=ratioR[l0+l];
					}
					__syncthreads();
					scalar_t rl=ratioL[k];
					if (k<n){
						for (int l=0;l<lend;l++){
							scalar_t x2=buf[l*4+0];
							scalar_t y2=buf[l*4+1];
							scalar_t z2=buf[l*4+2];
							scalar_t w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*rl*buf[l*4+3];
							match[i*n*m+(l0+l)*n+k]+=w;
							suml+=w;
						}
					}
					__syncthreads();
				}
				if (k<n)
					remainL[k]=fmaxf(0.0f,remainL[k]-suml);
			}
			__syncthreads();
		}
	}
}

//void approxmatchLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,scalar_t * match,scalar_t * temp){
//	approxmatch<<<32,512>>>(b,n,m,xyz1,xyz2,match,temp);
//}

/* ApproxMatch forward interface
Input:
  xyz1: (B, N1, 3)  # dataset_points
  xyz2: (B, N2, 3)  # query_points
Output:
  match: (B, N2, N1)
*/
at::Tensor ApproxMatchForward(
    const at::Tensor xyz1,
    const at::Tensor xyz2){
  const auto b = xyz1.size(0);
  const auto n = xyz1.size(1);
  const auto m = xyz2.size(1);

  CHECK_EQ(xyz2.size(0), b);
  CHECK_EQ(xyz1.size(2), 3);
  CHECK_EQ(xyz2.size(2), 3);
  CHECK_INPUT(xyz1);
  CHECK_INPUT(xyz2);

  auto match = at::zeros({b, m, n}, xyz1.type());
  auto temp = at::zeros({b, (n+m)*2}, xyz1.type());

  AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), "ApproxMatchForward", ([&] {
        approxmatch<scalar_t><<<32,512>>>(b, n, m, xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), temp.data<scalar_t>());
  }));
  THCudaCheck(cudaGetLastError());

  return match;
}


/********************************
* Forward kernel for matchcost
*********************************/

template<typename scalar_t>
__global__ void matchcost(int b,int n,int m,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ out){
	__shared__ scalar_t allsum[512];
	const int Block=1024;
	__shared__ scalar_t buf[Block*3];
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		scalar_t subsum=0;
		for (int k0=0;k0<n;k0+=blockDim.x){
			int k=k0+threadIdx.x;
			scalar_t x1=0,y1=0,z1=0;
			if (k<n){
				x1=xyz1[i*n*3+k*3+0];
				y1=xyz1[i*n*3+k*3+1];
				z1=xyz1[i*n*3+k*3+2];
			}
			for (int l0=0;l0<m;l0+=Block){
				int lend=min(m,l0+Block)-l0;
				for (int l=threadIdx.x;l<lend*3;l+=blockDim.x)
					buf[l]=xyz2[i*m*3+l0*3+l];
				__syncthreads();
				if (k<n){
					for (int l=0;l<lend;l++){
						scalar_t x2=buf[l*3+0];
						scalar_t y2=buf[l*3+1];
						scalar_t z2=buf[l*3+2];
						scalar_t d=(x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1);
						subsum+=d*match[i*n*m+(l0+l)*n+k];
					}
				}
				__syncthreads();
			}
		}
		allsum[threadIdx.x]=subsum;
		for (int j=1;j<blockDim.x;j<<=1){
			__syncthreads();
			if ((threadIdx.x&j)==0 && threadIdx.x+j<blockDim.x){
				allsum[threadIdx.x]+=allsum[threadIdx.x+j];
			}
		}
		if (threadIdx.x==0)
			out[i]=allsum[0];
		__syncthreads();
	}
}

//void matchcostLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,const scalar_t * match,scalar_t * out){
//	matchcost<<<32,512>>>(b,n,m,xyz1,xyz2,match,out);
//}

/* MatchCost forward interface
Input:
  xyz1: (B, N1, 3)  # dataset_points
  xyz2: (B, N2, 3)  # query_points
  match: (B, N2, N1)
Output:
  cost: (B)
*/
at::Tensor MatchCostForward(
    const at::Tensor xyz1,
    const at::Tensor xyz2,
    const at::Tensor match){
  const auto b = xyz1.size(0);
  const auto n = xyz1.size(1);
  const auto m = xyz2.size(1);

  CHECK_EQ(xyz2.size(0), b);
  CHECK_EQ(xyz1.size(2), 3);
  CHECK_EQ(xyz2.size(2), 3);
  CHECK_INPUT(xyz1);
  CHECK_INPUT(xyz2);

  auto cost = at::zeros({b}, xyz1.type());

  AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), "MatchCostForward", ([&] {
        matchcost<scalar_t><<<32,512>>>(b, n, m, xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), cost.data<scalar_t>());
  }));
  THCudaCheck(cudaGetLastError());

  return cost;
}
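
For reference, the quantity `MatchCostForward` computes can be sketched in NumPy. This is a hypothetical brute-force helper (not part of the repository), useful only for checking the CUDA kernel on tiny inputs:

```python
import numpy as np

def match_cost_reference(xyz1, xyz2, match):
    """Sum of squared pairwise distances weighted by the (soft) matching,
    per batch element.  Mirrors MatchCostForward:
    xyz1: (B, N1, 3), xyz2: (B, N2, 3), match: (B, N2, N1) -> (B,)"""
    diff = xyz2[:, :, None, :] - xyz1[:, None, :, :]  # (B, N2, N1, 3)
    d = np.sum(diff * diff, axis=-1)                  # squared distances
    return np.sum(match * d, axis=(1, 2))             # weighted sum per batch
```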


/********************************
* matchcostgrad2 kernel
*********************************/

template<typename scalar_t>
__global__ void matchcostgrad2(int b,int n,int m,const scalar_t * __restrict__ grad_cost,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ grad2){
	__shared__ scalar_t sum_grad[256*3];
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		int kbeg=m*blockIdx.y/gridDim.y;
		int kend=m*(blockIdx.y+1)/gridDim.y;
		for (int k=kbeg;k<kend;k++){
			scalar_t x2=xyz2[(i*m+k)*3+0];
			scalar_t y2=xyz2[(i*m+k)*3+1];
			scalar_t z2=xyz2[(i*m+k)*3+2];
			scalar_t subsumx=0,subsumy=0,subsumz=0;
			for (int j=threadIdx.x;j<n;j+=blockDim.x){
				scalar_t x1=x2-xyz1[(i*n+j)*3+0];
				scalar_t y1=y2-xyz1[(i*n+j)*3+1];
				scalar_t z1=z2-xyz1[(i*n+j)*3+2];
				scalar_t d=match[i*n*m+k*n+j]*2;
				subsumx+=x1*d;
				subsumy+=y1*d;
				subsumz+=z1*d;
			}
			sum_grad[threadIdx.x*3+0]=subsumx;
			sum_grad[threadIdx.x*3+1]=subsumy;
			sum_grad[threadIdx.x*3+2]=subsumz;
			for (int j=1;j<blockDim.x;j<<=1){
				__syncthreads();
				int j1=threadIdx.x;
				int j2=threadIdx.x+j;
				if ((j1&j)==0 && j2<blockDim.x){
					sum_grad[j1*3+0]+=sum_grad[j2*3+0];
					sum_grad[j1*3+1]+=sum_grad[j2*3+1];
					sum_grad[j1*3+2]+=sum_grad[j2*3+2];
				}
			}
			if (threadIdx.x==0){
				grad2[(i*m+k)*3+0]=sum_grad[0]*grad_cost[i];
				grad2[(i*m+k)*3+1]=sum_grad[1]*grad_cost[i];
				grad2[(i*m+k)*3+2]=sum_grad[2]*grad_cost[i];
			}
			__syncthreads();
		}
	}
}

/********************************
* matchcostgrad1 kernel
*********************************/

template<typename scalar_t>
__global__ void matchcostgrad1(int b,int n,int m,const scalar_t * __restrict__ grad_cost,const scalar_t * __restrict__ xyz1,const scalar_t * __restrict__ xyz2,const scalar_t * __restrict__ match,scalar_t * __restrict__ grad1){
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int l=threadIdx.x;l<n;l+=blockDim.x){
			scalar_t x1=xyz1[i*n*3+l*3+0];
			scalar_t y1=xyz1[i*n*3+l*3+1];
			scalar_t z1=xyz1[i*n*3+l*3+2];
			scalar_t dx=0,dy=0,dz=0;
			for (int k=0;k<m;k++){
				scalar_t x2=xyz2[i*m*3+k*3+0];
				scalar_t y2=xyz2[i*m*3+k*3+1];
				scalar_t z2=xyz2[i*m*3+k*3+2];
				scalar_t d=match[i*n*m+k*n+l]*2;
				dx+=(x1-x2)*d;
				dy+=(y1-y2)*d;
				dz+=(z1-z2)*d;
			}
			grad1[i*n*3+l*3+0]=dx*grad_cost[i];
			grad1[i*n*3+l*3+1]=dy*grad_cost[i];
			grad1[i*n*3+l*3+2]=dz*grad_cost[i];
		}
	}
}

//void matchcostgradLauncher(int b,int n,int m,const scalar_t * xyz1,const scalar_t * xyz2,const scalar_t * match,scalar_t * grad1,scalar_t * grad2){
//	matchcostgrad1<<<32,512>>>(b,n,m,xyz1,xyz2,match,grad1);
//	matchcostgrad2<<<dim3(32,32),256>>>(b,n,m,xyz1,xyz2,match,grad2);
//}


/* MatchCost backward interface
Input:
  grad_cost: (B)    # gradients on cost
  xyz1: (B, N1, 3)  # dataset_points
  xyz2: (B, N2, 3)  # query_points
  match: (B, N2, N1)
Output:
  grad1: (B, N1, 3)
  grad2: (B, N2, 3)
*/
std::vector<at::Tensor> MatchCostBackward(
    const at::Tensor grad_cost,
    const at::Tensor xyz1,
    const at::Tensor xyz2,
    const at::Tensor match){
  const auto b = xyz1.size(0);
  const auto n = xyz1.size(1);
  const auto m = xyz2.size(1);

  CHECK_EQ(xyz2.size(0), b);
  CHECK_EQ(xyz1.size(2), 3);
  CHECK_EQ(xyz2.size(2), 3);
  CHECK_INPUT(xyz1);
  CHECK_INPUT(xyz2);

  auto grad1 = at::zeros({b, n, 3}, xyz1.type());
  auto grad2 = at::zeros({b, m, 3}, xyz1.type());

  AT_DISPATCH_FLOATING_TYPES(xyz1.scalar_type(), "MatchCostBackward", ([&] {
        matchcostgrad1<scalar_t><<<32,512>>>(b, n, m, grad_cost.data<scalar_t>(), xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), grad1.data<scalar_t>());
        matchcostgrad2<scalar_t><<<dim3(32,32),256>>>(b, n, m, grad_cost.data<scalar_t>(), xyz1.data<scalar_t>(), xyz2.data<scalar_t>(), match.data<scalar_t>(), grad2.data<scalar_t>());
  }));
  THCudaCheck(cudaGetLastError());

  return std::vector<at::Tensor>({grad1, grad2});
}

#endif


================================================
FILE: extensions/earth_movers_distance/setup.py
================================================
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension


setup(
    name='emd_cuda',
    ext_modules=[
        CUDAExtension(
            name='emd_cuda',
            sources=[
                'emd.cpp',
                'emd_kernel.cu',
            ],
            # extra_compile_args={'cxx': ['-g'], 'nvcc': ['-O2']}
        ),
    ],
    cmdclass={
        'build_ext': BuildExtension
    })


================================================
FILE: metrics/loss.py
================================================
import torch

from extensions.chamfer_distance.chamfer_distance import ChamferDistance
from extensions.earth_movers_distance.emd import EarthMoverDistance


CD = ChamferDistance()
EMD = EarthMoverDistance()


def cd_loss_L1(pcs1, pcs2):
    """
    L1 Chamfer Distance.

    Args:
        pcs1 (torch.tensor): (B, N, 3)
        pcs2 (torch.tensor): (B, M, 3)
    """
    dist1, dist2 = CD(pcs1, pcs2)
    dist1 = torch.sqrt(dist1)
    dist2 = torch.sqrt(dist2)
    return (torch.mean(dist1) + torch.mean(dist2)) / 2.0


def cd_loss_L2(pcs1, pcs2):
    """
    L2 Chamfer Distance.

    Args:
        pcs1 (torch.tensor): (B, N, 3)
        pcs2 (torch.tensor): (B, M, 3)
    """
    dist1, dist2 = CD(pcs1, pcs2)
    return torch.mean(dist1) + torch.mean(dist2)


def emd_loss(pcs1, pcs2):
    """
    EMD Loss.

    Args:
        pcs1 (torch.Tensor): (B, N, 3)
        pcs2 (torch.Tensor): (B, N, 3)
    """
    dists = EMD(pcs1, pcs2)
    return torch.mean(dists)
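
As a sanity check for the compiled extension, the L1 Chamfer distance above can be reproduced with a brute-force NumPy computation. This is a hypothetical helper (not part of the repository), feasible only for small point counts:

```python
import numpy as np

def cd_l1_reference(pcs1, pcs2):
    """Brute-force L1 Chamfer distance mirroring cd_loss_L1:
    average nearest-neighbor distance in each direction, halved.
    pcs1: (B, N, 3), pcs2: (B, M, 3) -> scalar"""
    diff = pcs1[:, :, None, :] - pcs2[:, None, :, :]  # (B, N, M, 3)
    d = np.sqrt(np.sum(diff * diff, axis=-1))         # (B, N, M) distances
    dist1 = d.min(axis=2)                             # pcs1 -> pcs2 nearest
    dist2 = d.min(axis=1)                             # pcs2 -> pcs1 nearest
    return (dist1.mean() + dist2.mean()) / 2.0
```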


================================================
FILE: metrics/metric.py
================================================
import torch
import open3d as o3d

from extensions.chamfer_distance.chamfer_distance import ChamferDistance
from extensions.earth_movers_distance.emd import EarthMoverDistance


CD = ChamferDistance()
EMD = EarthMoverDistance()


def l2_cd(pcs1, pcs2):
    """Batch-summed L2 Chamfer distance. pcs1: (B, N, 3), pcs2: (B, M, 3)."""
    dist1, dist2 = CD(pcs1, pcs2)
    dist1 = torch.mean(dist1, dim=1)
    dist2 = torch.mean(dist2, dim=1)
    return torch.sum(dist1 + dist2)


def l1_cd(pcs1, pcs2):
    """Batch-summed L1 Chamfer distance. pcs1: (B, N, 3), pcs2: (B, M, 3)."""
    dist1, dist2 = CD(pcs1, pcs2)
    dist1 = torch.mean(torch.sqrt(dist1), 1)
    dist2 = torch.mean(torch.sqrt(dist2), 1)
    return torch.sum(dist1 + dist2) / 2


def emd(pcs1, pcs2):
    """Batch-summed Earth Mover's Distance. pcs1, pcs2: (B, N, 3)."""
    dists = EMD(pcs1, pcs2)
    return torch.sum(dists)


def f_score(pred, gt, th=0.01):
    """
    References: https://github.com/lmb-freiburg/what3d/blob/master/util.py

    Args:
        pred (np.ndarray): (N1, 3)
        gt   (np.ndarray): (N2, 3)
        th   (float): a distance threshold
    """
    pred = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pred))
    gt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(gt))

    dist1 = pred.compute_point_cloud_distance(gt)
    dist2 = gt.compute_point_cloud_distance(pred)

    recall = float(sum(d < th for d in dist2)) / float(len(dist2))
    precision = float(sum(d < th for d in dist1)) / float(len(dist1))
    return 2 * recall * precision / (recall + precision) if recall + precision else 0
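
`f_score` relies on open3d only for the nearest-neighbor queries; for small clouds the same metric can be sketched in plain NumPy. This is a hypothetical helper (not part of the repository):

```python
import numpy as np

def f_score_reference(pred, gt, th=0.01):
    """F-score at threshold th, matching f_score above:
    precision from pred->gt distances, recall from gt->pred.
    pred: (N1, 3), gt: (N2, 3) -> float"""
    diff = pred[:, None, :] - gt[None, :, :]   # (N1, N2, 3)
    d = np.sqrt(np.sum(diff * diff, axis=-1))  # pairwise distances
    precision = np.mean(d.min(axis=1) < th)    # fraction of pred near gt
    recall = np.mean(d.min(axis=0) < th)       # fraction of gt near pred
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```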


================================================
FILE: models/__init__.py
================================================
from models.pcn import PCN


================================================
FILE: models/pcn.py
================================================
import torch
import torch.nn as nn


class PCN(nn.Module):
    """
    "PCN: Point Cloud Completion Network"
    (https://arxiv.org/pdf/1808.00671.pdf)

    Attributes:
        num_dense:  16384
        latent_dim: 1024
        grid_size:  4
        num_coarse: 1024
    """

    def __init__(self, num_dense=16384, latent_dim=1024, grid_size=4):
        super().__init__()

        self.num_dense = num_dense
        self.latent_dim = latent_dim
        self.grid_size = grid_size

        assert self.num_dense % self.grid_size ** 2 == 0

        self.num_coarse = self.num_dense // (self.grid_size ** 2)

        self.first_conv = nn.Sequential(
            nn.Conv1d(3, 128, 1),
            nn.BatchNorm1d(128),
            nn.ReLU(inplace=True),
            nn.Conv1d(128, 256, 1)
        )

        self.second_conv = nn.Sequential(
            nn.Conv1d(512, 512, 1),
            nn.BatchNorm1d(512),
            nn.ReLU(inplace=True),
            nn.Conv1d(512, self.latent_dim, 1)
        )

        self.mlp = nn.Sequential(
            nn.Linear(self.latent_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 3 * self.num_coarse)
        )

        self.final_conv = nn.Sequential(
            nn.Conv1d(1024 + 3 + 2, 512, 1),
            nn.BatchNorm1d(512),
            nn.ReLU(inplace=True),
            nn.Conv1d(512, 512, 1),
            nn.BatchNorm1d(512),
            nn.ReLU(inplace=True),
            nn.Conv1d(512, 3, 1)
        )
        a = torch.linspace(-0.05, 0.05, steps=self.grid_size, dtype=torch.float).view(1, self.grid_size).expand(self.grid_size, self.grid_size).reshape(1, -1)
        b = torch.linspace(-0.05, 0.05, steps=self.grid_size, dtype=torch.float).view(self.grid_size, 1).expand(self.grid_size, self.grid_size).reshape(1, -1)
        
        self.folding_seed = torch.cat([a, b], dim=0).view(1, 2, self.grid_size ** 2).cuda()  # (1, 2, S)

    def forward(self, xyz):
        B, N, _ = xyz.shape
        
        # encoder
        feature = self.first_conv(xyz.transpose(2, 1))                                       # (B,  256, N)
        feature_global = torch.max(feature, dim=2, keepdim=True)[0]                          # (B,  256, 1)
        feature = torch.cat([feature_global.expand(-1, -1, N), feature], dim=1)              # (B,  512, N)
        feature = self.second_conv(feature)                                                  # (B, 1024, N)
        feature_global = torch.max(feature,dim=2,keepdim=False)[0]                           # (B, 1024)
        
        # decoder
        coarse = self.mlp(feature_global).reshape(-1, self.num_coarse, 3)                    # (B, num_coarse, 3), coarse point cloud
        point_feat = coarse.unsqueeze(2).expand(-1, -1, self.grid_size ** 2, -1)             # (B, num_coarse, S, 3)
        point_feat = point_feat.reshape(-1, self.num_dense, 3).transpose(2, 1)               # (B, 3, num_fine)

        seed = self.folding_seed.unsqueeze(2).expand(B, -1, self.num_coarse, -1)             # (B, 2, num_coarse, S)
        seed = seed.reshape(B, -1, self.num_dense)                                           # (B, 2, num_fine)

        feature_global = feature_global.unsqueeze(2).expand(-1, -1, self.num_dense)          # (B, 1024, num_fine)
        feat = torch.cat([feature_global, seed, point_feat], dim=1)                          # (B, 1024+2+3, num_fine)
    
        fine = self.final_conv(feat) + point_feat                                            # (B, 3, num_fine), fine point cloud

        return coarse.contiguous(), fine.transpose(1, 2).contiguous()
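
The folding seed used above is a fixed S×S 2D grid of offsets, one copy per coarse point. Its construction (with `grid_size=4`, as in the model) can be illustrated in NumPy; this is an explanatory sketch, not part of the repository:

```python
import numpy as np

grid_size = 4
# 1D coordinates in [-0.05, 0.05], expanded to an S*S grid of 2D seeds.
coords = np.linspace(-0.05, 0.05, grid_size)
a = np.tile(coords[None, :], (grid_size, 1)).reshape(1, -1)  # x varies fastest
b = np.tile(coords[:, None], (1, grid_size)).reshape(1, -1)  # y varies slowest
seed = np.concatenate([a, b], axis=0)  # (2, S*S): one 2D offset per fine point
```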


================================================
FILE: render/README.md
================================================
# render

## Description

`process_exr.py` and `render_depth.py` are used to generate partial point clouds from CAD models.

To run `render_depth.py`, you first need to install [Blender](https://www.blender.org/). Once it is installed, you can render the depth images with:

```bash
blender -b -P render_depth.py [ShapeNet directory] [model list] [output directory] [num scans per model]
```

The images are stored in OpenEXR format. The Blender version I used is `2.9.1`.

To run `process_exr.py`, you need to install `imath`, `OpenEXR` and `open3d-python`. These are third-party Python modules and can be installed with `pip`. The command to generate partial point clouds from the `.exr` files is:

```bash
python3 process_exr.py [model list] [intrinsics file] [output directory] [num scans per model]
```

The Python version should not be too new; I used `3.7.9`.

## Example

Complete point cloud:

<img src="../images/ground_truth.png" width="300px"/>

Partial point clouds:

<img src="../images/partial1.png" width="300px"/>
<img src="../images/partial2.png" width="300px"/>
<img src="../images/partial3.png" width="300px"/>
<img src="../images/partial4.png" width="300px"/>
<img src="../images/partial5.png" width="300px"/>
<img src="../images/partial6.png" width="300px"/>
<img src="../images/partial7.png" width="300px"/>
<img src="../images/partial8.png" width="300px"/>


================================================
FILE: render/blender.log
================================================
Progress:   0.00%
(  0.0000 sec |   0.0000 sec) Importing OBJ '/media/rico/BACKUP/Dataset/ShapeNetForPCN/02958343/167ec61fc29df46460593c98e3e63028/model.obj'...
Progress:   0.00%
  (  0.0008 sec |   0.0008 sec) Parsing OBJ file...
Progress:   0.00%
    (  0.8401 sec |   0.8393 sec) Done, loading materials and images...
Progress:  33.33%
    (  1.0985 sec |   1.0977 sec) Done, building geometries (verts:49443 faces:89741 materials: 79 smoothgroups:0) ...
Progress:  66.67%
    (  3.4071 sec |   3.4063 sec) Done.
Progress:  66.67%
Progress: 100.00%
  (  3.4072 sec |   3.4072 sec) Finished importing: '/media/rico/BACKUP/Dataset/ShapeNetForPCN/02958343/167ec61fc29df46460593c98e3e63028/model.obj'
Progress: 100.00%
Progress: 100.00%

Fra:0 Mem:179.97M (0.00M, Peak 179.98M) | Time:00:00.00 | Preparing Scene data
Fra:0 Mem:329.52M (0.00M, Peak 329.89M) | Time:00:00.09 | Preparing Scene data
Fra:0 Mem:329.52M (0.00M, Peak 329.89M) | Time:00:00.09 | Creating Shadowbuffers
Fra:0 Mem:329.52M (0.00M, Peak 329.89M) | Time:00:00.09 | Raytree.. preparing
Fra:0 Mem:341.84M (0.00M, Peak 341.84M) | Time:00:00.09 | Raytree.. building
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Raytree finished
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Creating Environment maps
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Caching Point Densities
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Loading voxel datasets
Fra:0 Mem:341.14M (0.00M, Peak 360.38M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:0 Mem:341.15M (0.00M, Peak 360.38M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:0 Mem:341.15M (0.00M, Peak 360.38M) | Time:00:00.22 | Volume preprocessing
Fra:0 Mem:341.15M (0.00M, Peak 360.38M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:0 Mem:341.15M (0.00M, Peak 360.38M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:0 Mem:343.08M (0.00M, Peak 360.38M) | Time:00:00.22 | Scene, Part 6-6
Fra:0 Mem:343.01M (0.00M, Peak 360.38M) | Time:00:00.22 | Scene, Part 5-6
Fra:0 Mem:342.82M (0.00M, Peak 360.38M) | Time:00:00.22 | Scene, Part 3-6
Fra:0 Mem:349.62M (0.00M, Peak 360.38M) | Time:00:00.24 | Scene, Part 4-6
Fra:0 Mem:348.73M (0.00M, Peak 360.38M) | Time:00:00.24 | Scene, Part 2-6
Fra:0 Mem:348.44M (0.00M, Peak 360.38M) | Time:00:00.25 | Scene, Part 1-6
Fra:0 Mem:186.83M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing
Fra:0 Mem:186.83M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing | Determining resolution
Fra:0 Mem:186.83M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing | Initializing execution
Fra:0 Mem:187.27M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:0 Mem:187.27M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:0 Mem:187.27M (0.00M, Peak 360.38M) | Time:00:00.26 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/0.exr
Fra:0 Mem:187.19M (0.00M, Peak 360.38M) | Time:00:00.26 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.27 (Saving: 00:00.00)

Fra:1 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:1 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:1 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:1 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:1 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:1 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:1 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:1 Mem:349.59M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 3-6
Fra:1 Mem:349.29M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 4-6
Fra:1 Mem:348.74M (0.00M, Peak 367.46M) | Time:00:00.24 | Scene, Part 2-6
Fra:1 Mem:348.46M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 1-6
Fra:1 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing
Fra:1 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Determining resolution
Fra:1 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Initializing execution
Fra:1 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:1 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:1 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/1.exr
Fra:1 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.26 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.26 (Saving: 00:00.00)

Fra:2 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:2 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:2 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:2 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:2 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:2 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:2 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:2 Mem:349.59M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 3-6
Fra:2 Mem:349.29M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 4-6
Fra:2 Mem:348.86M (0.00M, Peak 367.46M) | Time:00:00.24 | Scene, Part 1-6
Fra:2 Mem:348.44M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 2-6
Fra:2 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing
Fra:2 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Determining resolution
Fra:2 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Initializing execution
Fra:2 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:2 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:2 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/2.exr
Fra:2 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.26 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.26 (Saving: 00:00.00)

Fra:3 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:3 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:3 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:3 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Raytree finished
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Creating Environment maps
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Caching Point Densities
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Loading voxel datasets
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Volume preprocessing
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:3 Mem:348.24M (0.00M, Peak 367.47M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:3 Mem:350.79M (0.00M, Peak 367.47M) | Time:00:00.22 | Scene, Part 6-6
Fra:3 Mem:350.77M (0.00M, Peak 367.47M) | Time:00:00.22 | Scene, Part 5-6
Fra:3 Mem:349.43M (0.00M, Peak 367.47M) | Time:00:00.22 | Scene, Part 3-6
Fra:3 Mem:349.27M (0.00M, Peak 367.47M) | Time:00:00.22 | Scene, Part 4-6
Fra:3 Mem:348.87M (0.00M, Peak 367.47M) | Time:00:00.24 | Scene, Part 2-6
Fra:3 Mem:348.44M (0.00M, Peak 367.47M) | Time:00:00.25 | Scene, Part 1-6
Fra:3 Mem:186.83M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing
Fra:3 Mem:186.83M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing | Determining resolution
Fra:3 Mem:186.83M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing | Initializing execution
Fra:3 Mem:187.27M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:3 Mem:187.27M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:3 Mem:187.27M (0.00M, Peak 367.47M) | Time:00:00.27 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/3.exr
Fra:3 Mem:187.19M (0.00M, Peak 367.47M) | Time:00:00.27 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.27 (Saving: 00:00.00)

Fra:4 Mem:187.03M (0.00M, Peak 187.04M) | Time:00:00.00 | Preparing Scene data
Fra:4 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:4 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:4 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:4 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:4 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:4 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:4 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:4 Mem:349.71M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 3-6
Fra:4 Mem:349.17M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 4-6
Fra:4 Mem:348.89M (0.00M, Peak 367.46M) | Time:00:00.24 | Scene, Part 2-6
Fra:4 Mem:348.46M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 1-6
Fra:4 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing
Fra:4 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Determining resolution
Fra:4 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Initializing execution
Fra:4 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:4 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:4 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/4.exr
Fra:4 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.27 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.27 (Saving: 00:00.00)

Fra:5 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:5 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:5 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:5 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.09 | Raytree.. building
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:5 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:5 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:5 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:5 Mem:349.71M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 3-6
Fra:5 Mem:349.17M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 4-6
Fra:5 Mem:348.89M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 2-6
Fra:5 Mem:348.46M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 1-6
Fra:5 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing
Fra:5 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Determining resolution
Fra:5 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Initializing execution
Fra:5 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:5 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:5 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/5.exr
Fra:5 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.26 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.27 (Saving: 00:00.00)

Fra:6 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:6 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:6 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:6 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:6 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:6 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:6 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:6 Mem:349.71M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 3-6
Fra:6 Mem:349.29M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 4-6
Fra:6 Mem:348.89M (0.00M, Peak 367.46M) | Time:00:00.24 | Scene, Part 2-6
Fra:6 Mem:348.46M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 1-6
Fra:6 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing
Fra:6 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Determining resolution
Fra:6 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Initializing execution
Fra:6 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:6 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | Tile 1-1
Fra:6 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.27 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/6.exr
Fra:6 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.27 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.27 (Saving: 00:00.00)

Fra:7 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Preparing Scene data
Fra:7 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Creating Shadowbuffers
Fra:7 Mem:336.58M (0.00M, Peak 336.95M) | Time:00:00.09 | Raytree.. preparing
Fra:7 Mem:348.90M (0.00M, Peak 348.90M) | Time:00:00.10 | Raytree.. building
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Raytree finished
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Creating Environment maps
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Caching Point Densities
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Loading voxel datasets
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Volume preprocessing
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:7 Mem:348.22M (0.00M, Peak 367.46M) | Time:00:00.22 | Sce: Scene Ve:93605 Fa:89708 La:1
Fra:7 Mem:350.16M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 6-6
Fra:7 Mem:350.08M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 5-6
Fra:7 Mem:349.71M (0.00M, Peak 367.46M) | Time:00:00.22 | Scene, Part 3-6
Fra:7 Mem:349.29M (0.00M, Peak 367.46M) | Time:00:00.23 | Scene, Part 4-6
Fra:7 Mem:348.87M (0.00M, Peak 367.46M) | Time:00:00.24 | Scene, Part 2-6
Fra:7 Mem:348.46M (0.00M, Peak 367.46M) | Time:00:00.25 | Scene, Part 1-6
Fra:7 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing
Fra:7 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Determining resolution
Fra:7 Mem:186.83M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Initializing execution
Fra:7 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:7 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | Tile 1-1
Fra:7 Mem:187.27M (0.00M, Peak 367.46M) | Time:00:00.26 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/167ec61fc29df46460593c98e3e63028/7.exr
Fra:7 Mem:187.19M (0.00M, Peak 367.46M) | Time:00:00.26 | Sce: Scene Ve:93605 Fa:89708 La:1
Saved: 'buffer.png'
 Time: 00:00.26 (Saving: 00:00.00)

ved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/6d7e8fa77d384c07e4d0922154a19a3f/7.exr
Fra:7 Mem:203.64M (0.00M, Peak 569.00M) | Time:00:00.42 | Sce: Scene Ve:150090 Fa:132950 La:1
Saved: 'buffer.png'
 Time: 00:00.42 (Saving: 00:00.00)

k 1088.52M) | Time:00:02.09 | Compositing | Tile 1-1
Fra:7 Mem:365.11M (0.00M, Peak 1088.52M) | Time:00:02.09 | Compositing | Tile 1-1
Fra:7 Mem:365.11M (0.00M, Peak 1088.52M) | Time:00:02.09 | Compositing | De-initializing execution
Saved: /home/rico/Workspace/Dataset/partials/partial21/exr/02958343/b8dd449dd857e7f19b58a6529594c9d/7.exr
Fra:7 Mem:365.03M (0.00M, Peak 1088.52M) | Time:00:02.09 | Sce: Scene Ve:621636 Fa:702574 La:1
Saved: 'buffer.png'
 Time: 00:02.09 (Saving: 00:00.00)



================================================
FILE: render/partial.sh
================================================
#!/bin/bash
echo "Begin to generate exr files"

for ((i=1; i<=21; i++)); do
    blender -b -P render_depth.py "/media/rico/BACKUP/Dataset/ShapeNetForPCN" "../dataset/car_split/split${i}.list" "/home/rico/Workspace/Dataset/partials/partial${i}" 8
done

echo "Done"


================================================
FILE: render/process_exr.py
================================================
'''
MIT License

Copyright (c) 2018 Wentao Yuan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''

import Imath
import OpenEXR
import argparse
import array
import numpy as np
import os
from open3d import *


def read_exr(exr_path, height, width):
    file = OpenEXR.InputFile(exr_path)
    depth_arr = array.array('f', file.channel('R', Imath.PixelType(Imath.PixelType.FLOAT)))
    depth = np.array(depth_arr).reshape((height, width))
    depth[depth < 0] = 0
    depth[np.isinf(depth)] = 0
    return depth


def depth2pcd(depth, intrinsics, pose):
    inv_K = np.linalg.inv(intrinsics)
    inv_K[2, 2] = -1
    depth = np.flipud(depth)
    y, x = np.where(depth > 0)
    # image coordinates -> camera coordinates
    points = np.dot(inv_K, np.stack([x, y, np.ones_like(x)], 0) * depth[y, x])
    # camera coordinates -> world coordinates
    points = np.dot(pose, np.concatenate([points, np.ones((1, points.shape[1]))], 0)).T[:, :3]
    return points


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('list_file')
    parser.add_argument('intrinsics_file')
    parser.add_argument('output_dir')
    parser.add_argument('num_scans', type=int)
    args = parser.parse_args()

    with open(args.list_file) as file:
        model_list = file.read().splitlines()
    intrinsics = np.loadtxt(args.intrinsics_file)
    width = int(intrinsics[0, 2] * 2)
    height = int(intrinsics[1, 2] * 2)

    for model_id in model_list:
        depth_dir = os.path.join(args.output_dir, 'depth', model_id)
        pcd_dir = os.path.join(args.output_dir, 'pcd', model_id)
        os.makedirs(depth_dir, exist_ok=True)
        os.makedirs(pcd_dir, exist_ok=True)
        for i in range(args.num_scans):
            exr_path = os.path.join(args.output_dir, 'exr', model_id, '%d.exr' % i)
            pose_path = os.path.join(args.output_dir, 'pose', model_id, '%d.txt' % i)

            depth = read_exr(exr_path, height, width)
            depth_img = Image(np.uint16(depth * 1000))
            write_image(os.path.join(depth_dir, '%d.png' % i), depth_img)

            pose = np.loadtxt(pose_path)
            points = depth2pcd(depth, intrinsics, pose)
            pcd = PointCloud()
            pcd.points = Vector3dVector(points)
            write_point_cloud(os.path.join(pcd_dir, '%d.pcd' % i), pcd)


================================================
FILE: render/render_depth.py
================================================
'''
MIT License

Copyright (c) 2018 Wentao Yuan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''

import bpy
import mathutils
import numpy as np
import os
import sys
import time


def random_pose():
    angle_x = np.random.uniform() * 2 * np.pi
    angle_y = np.random.uniform() * 2 * np.pi
    angle_z = np.random.uniform() * 2 * np.pi
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(angle_x), -np.sin(angle_x)],
                   [0, np.sin(angle_x), np.cos(angle_x)]])
    Ry = np.array([[np.cos(angle_y), 0, np.sin(angle_y)],
                   [0, 1, 0],
                   [-np.sin(angle_y), 0, np.cos(angle_y)]])
    Rz = np.array([[np.cos(angle_z), -np.sin(angle_z), 0],
                   [np.sin(angle_z), np.cos(angle_z), 0],
                   [0, 0, 1]])
    R = np.dot(Rz, np.dot(Ry, Rx))
    # Set camera pointing to the origin and 1 unit away from the origin
    t = np.expand_dims(R[:, 2], 1)
    pose = np.concatenate([np.concatenate([R, t], 1), [[0, 0, 0, 1]]], 0)
    return pose


def setup_blender(width, height, focal_length):
    # camera
    camera = bpy.data.objects['Camera']
    camera.data.angle = np.arctan(width / 2 / focal_length) * 2

    # render layer
    scene = bpy.context.scene
    scene.render.filepath = 'buffer'
    scene.render.image_settings.color_depth = '16'
    scene.render.resolution_percentage = 100
    scene.render.resolution_x = width
    scene.render.resolution_y = height

    # compositor nodes
    scene.use_nodes = True
    tree = scene.node_tree
    rl = tree.nodes.new('CompositorNodeRLayers')
    output = tree.nodes.new('CompositorNodeOutputFile')
    output.base_path = ''
    output.format.file_format = 'OPEN_EXR'
    tree.links.new(rl.outputs['Depth'], output.inputs[0])

    # remove default cube
    bpy.data.objects['Cube'].select = True
    bpy.ops.object.delete()

    return scene, camera, output


if __name__ == '__main__':
    model_dir = sys.argv[-4]
    list_path = sys.argv[-3]
    output_dir = sys.argv[-2]
    num_scans = int(sys.argv[-1])

    width = 160
    height = 120
    focal = 100
    scene, camera, output = setup_blender(width, height, focal)
    intrinsics = np.array([[focal, 0, width / 2], [0, focal, height / 2], [0, 0, 1]])

    with open(os.path.join(list_path)) as file:
        model_list = [line.strip() for line in file]
    open('blender.log', 'w+').close()
    os.system('rm -rf %s' % output_dir)
    os.makedirs(output_dir)
    np.savetxt(os.path.join(output_dir, 'intrinsics.txt'), intrinsics, '%f')

    for model_id in model_list:
        start = time.time()
        exr_dir = os.path.join(output_dir, 'exr', model_id)
        pose_dir = os.path.join(output_dir, 'pose', model_id)
        os.makedirs(exr_dir)
        os.makedirs(pose_dir)

        # Redirect output to log file
        old_os_out = os.dup(1)
        os.close(1)
        os.open('blender.log', os.O_WRONLY)

        # Import mesh model
        model_path = os.path.join(model_dir, model_id, 'model.obj')
        bpy.ops.import_scene.obj(filepath=model_path)

        # Rotate model by 90 degrees around x-axis (z-up => y-up) to match ShapeNet's coordinates
        bpy.ops.transform.rotate(value=-np.pi / 2, axis=(1, 0, 0))

        # Render
        for i in range(num_scans):
            scene.frame_set(i)
            pose = random_pose()
            camera.matrix_world = mathutils.Matrix(pose)
            output.file_slots[0].path = os.path.join(exr_dir, '#.exr')
            bpy.ops.render.render(write_still=True)
            np.savetxt(os.path.join(pose_dir, '%d.txt' % i), pose, '%f')

        # Clean up
        bpy.ops.object.delete()
        for m in bpy.data.meshes:
            bpy.data.meshes.remove(m)
        for m in bpy.data.materials:
            m.user_clear()
            bpy.data.materials.remove(m)

        # Show time
        os.close(1)
        os.dup(old_os_out)
        os.close(old_os_out)
        print('%s done, time=%.4f sec' % (model_id, time.time() - start))


================================================
FILE: requirements.txt
================================================
open3d
matplotlib
tensorboardX


================================================
FILE: sample/CMakeLists.txt
================================================
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)

project(sample)

find_package(PCL 1.2 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable (mesh_sampling mesh_sampling.cpp)
target_link_libraries (mesh_sampling ${PCL_LIBRARIES})


================================================
FILE: sample/README.md
================================================
# Sample

`mesh_sampling.cpp` is used to sample point clouds uniformly from a CAD model. To compile it, you need to install:

* CMake
* PCL
* VTK

## CMake

Use this command to install CMake:

```bash
sudo apt-get update
sudo apt-get install cmake
```

## PCL

I used the latest version. You can install it with these commands:

```bash
sudo apt-get update  
sudo apt-get install git build-essential linux-libc-dev
sudo apt-get install cmake cmake-gui
sudo apt-get install libusb-1.0-0-dev libusb-dev libudev-dev
sudo apt-get install mpi-default-dev openmpi-bin openmpi-common 
sudo apt-get install libflann1.9 libflann-dev
sudo apt-get install libeigen3-dev 
sudo apt-get install libboost-all-dev
sudo apt-get install libqhull* libgtest-dev
sudo apt-get install freeglut3-dev pkg-config
sudo apt-get install libxmu-dev libxi-dev
sudo apt-get install mono-complete
sudo apt-get install openjdk-8-jdk openjdk-8-jre

git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
mkdir build && cd build
cmake ..
make -j4
sudo make install
```

## VTK

The VTK version used is `8.2.0`. You can download it from the [website](https://vtk.org/download/) and install it with the commands below:

```bash
tar -xzvf VTK-8.2.0.tar.gz
cd VTK-8.2.0/
```

Before compiling, you need to edit the file `IO/Geometry/vtkOBJReader.cxx`. At line 859, add the following code:

```C++
// Here we turn off texturing and/or normals
if (n_tcoord_pts == 0)
{
    hasTCoords = false;
}
if (n_normal_pts == 0)
{
    hasNormals = false;
}
```

Continue to build:

```bash
mkdir build && cd build
cmake ..
make -j4
sudo make install
```

## Compile

To use `mesh_sampling`, you need to compile it first:

```bash
cd sample
mkdir build && cd build
cmake ..
make
```

This produces an executable file `mesh_sampling` in the `build` directory; run `mesh_sampling -h` for help. A prebuilt `mesh_sampling` binary is also provided. However, there are some problems with the command's options: the `-n_samples` option does not seem to work.
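As a concrete example, an invocation might look like the following sketch. The paths are hypothetical; the options correspond to those printed by `printHelp` in `mesh_sampling.cpp` (`-n_samples`, `-leaf_size`, `-no_vis_result`, `-no_vox_filter`, `-write_normals`):

```bash
# Hypothetical input/output paths -- adjust to your data.
MODEL=model.obj
OUT=model.pcd

# Sample ~100000 points, downsample with a 0.005 m voxel grid,
# and skip the interactive visualization window.
if [ -x ./mesh_sampling ]; then
    ./mesh_sampling "$MODEL" "$OUT" -n_samples 100000 -leaf_size 0.005 -no_vis_result
fi
```

Note that without `-no_vox_filter` the final point count is determined by the voxel-grid leaf size rather than by `-n_samples` alone.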

## Example

CAD model and sampled point cloud :

<img src="../images/cad.png" width="300px"/>

<img src="../images/ground_truth.png" width="300px"/>


================================================
FILE: sample/mesh_sampling.cpp
================================================
/*
 * Software License Agreement (BSD License)
 *
 *  Point Cloud Library (PCL) - www.pointclouds.org
 *  Copyright (c) 2010-2011, Willow Garage, Inc.
 *
 *  All rights reserved.
 *
 *  Redistribution and use in source and binary forms, with or without
 *  modification, are permitted provided that the following conditions
 *  are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above
 *     copyright notice, this list of conditions and the following
 *     disclaimer in the documentation and/or other materials provided
 *     with the distribution.
 *   * Neither the name of the copyright holder(s) nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 *  FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 *  COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 *  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 *  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 *  LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 *  CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 *  LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
 *  ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 *  POSSIBILITY OF SUCH DAMAGE.
 *
 * Modified by Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018
 */

#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/vtk_lib_io.h>
#include <pcl/common/transforms.h>
#include <vtkVersion.h>
#include <vtkPLYReader.h>
#include <vtkOBJReader.h>
#include <vtkTriangle.h>
#include <vtkTriangleFilter.h>
#include <vtkPolyDataMapper.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/console/print.h>
#include <pcl/console/parse.h>

inline double
uniform_deviate(int seed)
{
  double ran = seed * (1.0 / (RAND_MAX + 1.0));
  return ran;
}

inline void
randomPointTriangle(float a1, float a2, float a3, float b1, float b2, float b3, float c1, float c2, float c3,
                    Eigen::Vector4f &p)
{
  float r1 = static_cast<float>(uniform_deviate(rand()));
  float r2 = static_cast<float>(uniform_deviate(rand()));
  float r1sqr = std::sqrt(r1);
  float OneMinR1Sqr = (1 - r1sqr);
  float OneMinR2 = (1 - r2);
  a1 *= OneMinR1Sqr;
  a2 *= OneMinR1Sqr;
  a3 *= OneMinR1Sqr;
  b1 *= OneMinR2;
  b2 *= OneMinR2;
  b3 *= OneMinR2;
  c1 = r1sqr * (r2 * c1 + b1) + a1;
  c2 = r1sqr * (r2 * c2 + b2) + a2;
  c3 = r1sqr * (r2 * c3 + b3) + a3;
  p[0] = c1;
  p[1] = c2;
  p[2] = c3;
  p[3] = 0;
}

inline void
randPSurface(vtkPolyData *polydata, std::vector<double> *cumulativeAreas, double totalArea, Eigen::Vector4f &p, bool calcNormal, Eigen::Vector3f &n)
{
  float r = static_cast<float>(uniform_deviate(rand()) * totalArea);

  std::vector<double>::iterator low = std::lower_bound(cumulativeAreas->begin(), cumulativeAreas->end(), r);
  vtkIdType el = vtkIdType(low - cumulativeAreas->begin());

  double A[3], B[3], C[3];
  vtkIdType npts = 0;
  vtkIdType *ptIds = NULL;
  polydata->GetCellPoints(el, npts, ptIds);
  polydata->GetPoint(ptIds[0], A);
  polydata->GetPoint(ptIds[1], B);
  polydata->GetPoint(ptIds[2], C);
  if (calcNormal)
  {
    // OBJ: Vertices are stored in a counter-clockwise order by default
    Eigen::Vector3f v1 = Eigen::Vector3f(A[0], A[1], A[2]) - Eigen::Vector3f(C[0], C[1], C[2]);
    Eigen::Vector3f v2 = Eigen::Vector3f(B[0], B[1], B[2]) - Eigen::Vector3f(C[0], C[1], C[2]);
    n = v1.cross(v2);
    n.normalize();
  }
  randomPointTriangle(float(A[0]), float(A[1]), float(A[2]),
                      float(B[0]), float(B[1]), float(B[2]),
                      float(C[0]), float(C[1]), float(C[2]), p);
}

void uniform_sampling(vtkSmartPointer<vtkPolyData> polydata, size_t n_samples, bool calc_normal, pcl::PointCloud<pcl::PointNormal> &cloud_out)
{
  polydata->BuildCells();
  vtkSmartPointer<vtkCellArray> cells = polydata->GetPolys();

  double p1[3], p2[3], p3[3], totalArea = 0;
  std::vector<double> cumulativeAreas(cells->GetNumberOfCells(), 0);
  size_t i = 0;
  vtkIdType npts = 0, *ptIds = NULL;
  for (cells->InitTraversal(); cells->GetNextCell(npts, ptIds); i++)
  {
    polydata->GetPoint(ptIds[0], p1);
    polydata->GetPoint(ptIds[1], p2);
    polydata->GetPoint(ptIds[2], p3);
    totalArea += vtkTriangle::TriangleArea(p1, p2, p3);
    cumulativeAreas[i] = totalArea;
  }

  cloud_out.points.resize(n_samples);
  cloud_out.width = static_cast<uint32_t>(n_samples);
  cloud_out.height = 1;

  for (i = 0; i < n_samples; i++)
  {
    Eigen::Vector4f p;
    Eigen::Vector3f n;
    randPSurface(polydata, &cumulativeAreas, totalArea, p, calc_normal, n);
    cloud_out.points[i].x = p[0];
    cloud_out.points[i].y = p[1];
    cloud_out.points[i].z = p[2];
    if (calc_normal)
    {
      cloud_out.points[i].normal_x = n[0];
      cloud_out.points[i].normal_y = n[1];
      cloud_out.points[i].normal_z = n[2];
    }
  }
}

using namespace pcl;
using namespace pcl::io;
using namespace pcl::console;

const int default_number_samples = 100000;
const float default_leaf_size = 0.01f;

void printHelp(int, char **argv)
{
  print_error("Syntax is: %s input.{ply,obj} output.pcd <options>\n", argv[0]);
  print_info("  where options are:\n");
  print_info("                -n_samples X   = number of samples (default: ");
  print_value("%d", default_number_samples);
  print_info(")\n");
  print_info(
      "                -leaf_size X   = the XYZ leaf size for the VoxelGrid -- for data reduction (default: ");
  print_value("%f", default_leaf_size);
  print_info(" m)\n");
  print_info("                -write_normals = flag to write normals to the output pcd\n");
  print_info(
      "                -no_vis_result = flag to stop visualizing the generated pcd\n");
  print_info(
      "                -no_vox_filter = flag to stop downsampling the generated pcd\n");
}

/* ---[ */
int main(int argc, char **argv)
{
  if (argc < 3)
  {
    printHelp(argc, argv);
    return (-1);
  }

  // Parse command line arguments
  int SAMPLE_POINTS_ = default_number_samples;
  parse_argument(argc, argv, "-n_samples", SAMPLE_POINTS_);
  float leaf_size = default_leaf_size;
  parse_argument(argc, argv, "-leaf_size", leaf_size);
  bool vis_result = !find_switch(argc, argv, "-no_vis_result");
  bool vox_filter = !find_switch(argc, argv, "-no_vox_filter");
  const bool write_normals = find_switch(argc, argv, "-write_normals");

  std::vector<int> pcd_file_indices = parse_file_extension_argument(argc, argv, ".pcd");
  std::vector<int> ply_file_indices = parse_file_extension_argument(argc, argv, ".ply");
  std::vector<int> obj_file_indices = parse_file_extension_argument(argc, argv, ".obj");
  if (pcd_file_indices.size() != 1)
  {
    print_error("Need a single output PCD file to continue.\n");
    return (-1);
  }
  if (ply_file_indices.size() != 1 && obj_file_indices.size() != 1)
  {
    print_error("Need a single input PLY/OBJ file to continue.\n");
    return (-1);
  }

  vtkSmartPointer<vtkPolyData> polydata1 = vtkSmartPointer<vtkPolyData>::New();
  if (ply_file_indices.size() == 1)
  {
    pcl::PolygonMesh mesh;
    pcl::io::loadPolygonFilePLY(argv[ply_file_indices[0]], mesh);
    pcl::io::mesh2vtk(mesh, polydata1);
  }
  else if (obj_file_indices.size() == 1)
  {
    print_info("Convert %s to a point cloud using uniform sampling.\n", argv[obj_file_indices[0]]);
    vtkSmartPointer<vtkOBJReader> readerQuery = vtkSmartPointer<vtkOBJReader>::New();
    readerQuery->SetFileName(argv[obj_file_indices[0]]);
    readerQuery->Update();
    polydata1 = readerQuery->GetOutput();
  }

  //make sure that the polygons are triangles!
  vtkSmartPointer<vtkTriangleFilter> triangleFilter = vtkSmartPointer<vtkTriangleFilter>::New();
#if VTK_MAJOR_VERSION < 6
  triangleFilter->SetInput(polydata1);
#else
  triangleFilter->SetInputData(polydata1);
#endif
  triangleFilter->Update();

  vtkSmartPointer<vtkPolyDataMapper> triangleMapper = vtkSmartPointer<vtkPolyDataMapper>::New();
  triangleMapper->SetInputConnection(triangleFilter->GetOutputPort());
  triangleMapper->Update();
  polydata1 = triangleMapper->GetInput();

  bool INTER_VIS = false;

  if (INTER_VIS)
  {
    visualization::PCLVisualizer vis;
    vis.addModelFromPolyData(polydata1, "mesh1", 0);
    vis.setRepresentationToSurfaceForAllActors();
    vis.spin();
  }

  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_1(new pcl::PointCloud<pcl::PointNormal>);
  uniform_sampling(polydata1, SAMPLE_POINTS_, write_normals, *cloud_1);

  if (INTER_VIS)
  {
    visualization::PCLVisualizer vis_sampled;
    vis_sampled.addPointCloud<pcl::PointNormal>(cloud_1);
    if (write_normals)
      vis_sampled.addPointCloudNormals<pcl::PointNormal>(cloud_1, 1, 0.02f, "cloud_normals");
    vis_sampled.spin();
  }

  pcl::PointCloud<pcl::PointNormal>::Ptr cloud(new pcl::PointCloud<pcl::PointNormal>);

  // Voxelgrid
  if (vox_filter)
  {
    VoxelGrid<PointNormal> grid_;
    grid_.setInputCloud(cloud_1);
    grid_.setLeafSize(leaf_size, leaf_size, leaf_size);
    grid_.filter(*cloud);
  }
  else
  {
    *cloud = *cloud_1;
  }

  if (vis_result)
  {
    visualization::PCLVisualizer vis3("VOXELIZED SAMPLES CLOUD");
    vis3.addPointCloud<pcl::PointNormal>(cloud);
    if (write_normals)
      vis3.addPointCloudNormals<pcl::PointNormal>(cloud, 1, 0.02f, "cloud_normals");
    vis3.spin();
  }

  if (!write_normals)
  {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_xyz(new pcl::PointCloud<pcl::PointXYZ>);
    // Strip uninitialized normals from cloud:
    pcl::copyPointCloud(*cloud, *cloud_xyz);
    savePCDFileASCII(argv[pcd_file_indices[0]], *cloud_xyz);
  }
  else
  {
    savePCDFileASCII(argv[pcd_file_indices[0]], *cloud);
  }
}


================================================
FILE: test.py
================================================
import os
import argparse

import numpy as np
import open3d as o3d
import torch
import torch.utils.data as Data

from models import PCN
from dataset import ShapeNet
from visualization import plot_pcd_one_view
from metrics.metric import l1_cd, l2_cd, emd, f_score


CATEGORIES_PCN       = ['airplane', 'cabinet', 'car', 'chair', 'lamp', 'sofa', 'table', 'vessel']
CATEGORIES_PCN_NOVEL = ['bus', 'bed', 'bookshelf', 'bench', 'guitar', 'motorbike', 'skateboard', 'pistol']


def make_dir(dir_path):
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)


def export_ply(filename, points):
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(points)
    o3d.io.write_point_cloud(filename, pc, write_ascii=True)


def test_single_category(category, model, params, save=True):
    if save:
        cat_dir = os.path.join(params.result_dir, category)
        image_dir = os.path.join(cat_dir, 'image')
        output_dir = os.path.join(cat_dir, 'output')
        make_dir(cat_dir)
        make_dir(image_dir)
        make_dir(output_dir)

    test_dataset = ShapeNet('/media/server/new/datasets/PCN', 'test_novel' if params.novel else 'test', category)
    test_dataloader = Data.DataLoader(test_dataset, batch_size=params.batch_size, shuffle=False)

    index = 1
    total_l1_cd, total_l2_cd, total_f_score = 0.0, 0.0, 0.0
    with torch.no_grad():
        for p, c in test_dataloader:
            p = p.to(params.device)
            c = c.to(params.device)
            _, c_ = model(p)
            total_l1_cd += l1_cd(c_, c).item()
            total_l2_cd += l2_cd(c_, c).item()
            for i in range(len(c)):
                input_pc = p[i].detach().cpu().numpy()
                output_pc = c_[i].detach().cpu().numpy()
                gt_pc = c[i].detach().cpu().numpy()
                total_f_score += f_score(output_pc, gt_pc)
                if save:
                    plot_pcd_one_view(os.path.join(image_dir, '{:03d}.png'.format(index)), [input_pc, output_pc, gt_pc], ['Input', 'Output', 'GT'], xlim=(-0.35, 0.35), ylim=(-0.35, 0.35), zlim=(-0.35, 0.35))
                    export_ply(os.path.join(output_dir, '{:03d}.ply'.format(index)), output_pc)
                index += 1
    
    avg_l1_cd = total_l1_cd / len(test_dataset)
    avg_l2_cd = total_l2_cd / len(test_dataset)
    avg_f_score = total_f_score / len(test_dataset)

    return avg_l1_cd, avg_l2_cd, avg_f_score


def test(params, save=False):
    if save:
        make_dir(params.result_dir)

    print(params.exp_name)

    # load pretrained model
    model = PCN(16384, 1024, 4).to(params.device)
    model.load_state_dict(torch.load(params.ckpt_path))
    model.eval()

    print('\033[33m{:20s}{:20s}{:20s}{:20s}\033[0m'.format('Category', 'L1_CD(1e-3)', 'L2_CD(1e-4)', 'FScore-0.01(%)'))
    print('\033[33m{:20s}{:20s}{:20s}{:20s}\033[0m'.format('--------', '-----------', '-----------', '--------------'))

    if params.category == 'all':
        if params.novel:
            categories = CATEGORIES_PCN_NOVEL
        else:
            categories = CATEGORIES_PCN
        
        l1_cds, l2_cds, fscores = list(), list(), list()
        for category in categories:
            avg_l1_cd, avg_l2_cd, avg_f_score = test_single_category(category, model, params, save)
            print('{:20s}{:<20.4f}{:<20.4f}{:<20.4f}'.format(category.title(), 1e3 * avg_l1_cd, 1e4 * avg_l2_cd, 1e2 * avg_f_score))
            l1_cds.append(avg_l1_cd)
            l2_cds.append(avg_l2_cd)
            fscores.append(avg_f_score)
        
        print('\033[33m{:20s}{:20s}{:20s}{:20s}\033[0m'.format('--------', '-----------', '-----------', '--------------'))
        print('\033[32m{:20s}{:<20.4f}{:<20.4f}{:<20.4f}\033[0m'.format('Average', np.mean(l1_cds) * 1e3, np.mean(l2_cds) * 1e4, np.mean(fscores) * 1e2))
    else:
        avg_l1_cd, avg_l2_cd, avg_f_score = test_single_category(params.category, model, params, save)
        print('{:20s}{:<20.4f}{:<20.4f}{:<20.4f}'.format(params.category.title(), 1e3 * avg_l1_cd, 1e4 * avg_l2_cd, 1e2 * avg_f_score))


def test_single_category_emd(category, model, params):
    test_dataset = ShapeNet('/media/server/new/datasets/PCN', 'test_novel' if params.novel else 'test', category)
    test_dataloader = Data.DataLoader(test_dataset, batch_size=params.batch_size, shuffle=False)

    total_emd = 0.0
    with torch.no_grad():
        for p, c in test_dataloader:
            p = p.to(params.device)
            c = c.to(params.device)
            _, c_ = model(p)
            total_emd += emd(c_, c).item()
        
    avg_emd = total_emd / len(test_dataset) / c_.shape[1]
    return avg_emd


def test_emd(params):
    print(params.exp_name)

    # load pretrained model
    model = PCN(16384, 1024, 4).to(params.device)
    model.load_state_dict(torch.load(params.ckpt_path))
    model.eval()

    print('\033[33m{:20s}{:20s}\033[0m'.format('Category', 'EMD(1e-3)'))
    print('\033[33m{:20s}{:20s}\033[0m'.format('--------', '---------'))

    if params.category == 'all':
        if params.novel:
            categories = CATEGORIES_PCN_NOVEL
        else:
            categories = CATEGORIES_PCN
        
        emds = list()
        for category in categories:
            avg_emd = test_single_category_emd(category, model, params)
            print('{:20s}{:<20.4f}'.format(category.title(), 1e3 * avg_emd))
            emds.append(avg_emd)
        
        print('\033[33m{:20s}{:20s}\033[0m'.format('--------', '---------'))
        print('\033[32m{:20s}{:<20.4f}\033[0m'.format('Average', np.mean(emds) * 1e3))
    else:
        avg_emd = test_single_category_emd(params.category, model, params)
        print('{:20s}{:<20.4f}'.format(params.category.title(), 1e3 * avg_emd))


if __name__ == '__main__':
    parser = argparse.ArgumentParser('Point Cloud Completion Testing')
    parser.add_argument('--exp_name', type=str, help='Tag of experiment')
    parser.add_argument('--result_dir', type=str, default='results', help='Results directory')
    parser.add_argument('--ckpt_path', type=str, help='The path of pretrained model.')
    parser.add_argument('--category', type=str, default='all', help='Category of point clouds')
    parser.add_argument('--batch_size', type=int, default=1, help='Batch size for data loader')
    parser.add_argument('--num_workers', type=int, default=6, help='Num workers for data loader')
    parser.add_argument('--device', type=str, default='cuda:0', help='Device for testing')
    parser.add_argument('--save', action='store_true', help='Save test results')
    parser.add_argument('--novel', action='store_true', help='Test on unseen (novel) categories')
    parser.add_argument('--emd', action='store_true', help='Whether to evaluate EMD')
    params = parser.parse_args()

    if not params.emd:
        test(params, params.save)
    else:
        test_emd(params)


================================================
FILE: train.py
================================================
import argparse
import os
import datetime
import random

import torch
import torch.optim as Optim

from torch.utils.data.dataloader import DataLoader
from tensorboardX import SummaryWriter

from dataset import ShapeNet
from models import PCN
from metrics.metric import l1_cd
from metrics.loss import cd_loss_L1, emd_loss
from visualization import plot_pcd_one_view


def make_dir(dir_path):
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)


def log(fd, message, time=True):
    if time:
        message = ' ==> '.join([datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), message])
    fd.write(message + '\n')
    fd.flush()
    print(message)


def prepare_logger(params):
    # prepare logger directory
    make_dir(params.log_dir)
    make_dir(os.path.join(params.log_dir, params.exp_name))

    logger_path = os.path.join(params.log_dir, params.exp_name, params.category)
    ckpt_dir = os.path.join(params.log_dir, params.exp_name, params.category, 'checkpoints')
    epochs_dir = os.path.join(params.log_dir, params.exp_name, params.category, 'epochs')

    make_dir(logger_path)
    make_dir(ckpt_dir)
    make_dir(epochs_dir)

    logger_file = os.path.join(params.log_dir, params.exp_name, params.category, 'logger.log')
    log_fd = open(logger_file, 'a')

    log(log_fd, "Experiment: {}".format(params.exp_name), False)
    log(log_fd, "Logger directory: {}".format(logger_path), False)
    log(log_fd, str(params), False)

    train_writer = SummaryWriter(os.path.join(logger_path, 'train'))
    val_writer = SummaryWriter(os.path.join(logger_path, 'val'))

    return ckpt_dir, epochs_dir, log_fd, train_writer, val_writer


def train(params):
    torch.backends.cudnn.benchmark = True

    ckpt_dir, epochs_dir, log_fd, train_writer, val_writer = prepare_logger(params)

    log(log_fd, 'Loading Data...')

    train_dataset = ShapeNet('data/PCN', 'train', params.category)
    val_dataset = ShapeNet('data/PCN', 'valid', params.category)

    train_dataloader = DataLoader(train_dataset, batch_size=params.batch_size, shuffle=True, num_workers=params.num_workers)
    val_dataloader = DataLoader(val_dataset, batch_size=params.batch_size, shuffle=False, num_workers=params.num_workers)
    log(log_fd, "Dataset loaded!")

    # model
    model = PCN(num_dense=16384, latent_dim=1024, grid_size=4).to(params.device)

    # optimizer
    optimizer = Optim.Adam(model.parameters(), lr=params.lr, betas=(0.9, 0.999))
    lr_schedual = Optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.7)

    step = len(train_dataloader) // params.log_frequency

    # load pretrained model and optimizer
    if params.ckpt_path is not None:
        model.load_state_dict(torch.load(params.ckpt_path))

    # training
    best_cd_l1 = 1e8
    best_epoch_l1 = -1
    train_step, val_step = 0, 0
    for epoch in range(1, params.epochs + 1):
        # hyperparameter alpha
        if train_step < 10000:
            alpha = 0.01
        elif train_step < 20000:
            alpha = 0.1
        elif train_step < 50000:
            alpha = 0.5
        else:
            alpha = 1.0

        # training
        model.train()
        for i, (p, c) in enumerate(train_dataloader):
            p, c = p.to(params.device), c.to(params.device)

            optimizer.zero_grad()

            # forward propagation
            coarse_pred, dense_pred = model(p)
            
            # loss function
            if params.coarse_loss == 'cd':
                loss1 = cd_loss_L1(coarse_pred, c)
            elif params.coarse_loss == 'emd':
                coarse_c = c[:, :1024, :]
                loss1 = emd_loss(coarse_pred, coarse_c)
            else:
                raise ValueError('Not implemented loss {}'.format(params.coarse_loss))
                
            loss2 = cd_loss_L1(dense_pred, c)
            loss = loss1 + alpha * loss2

            # back propagation
            loss.backward()
            optimizer.step()

            if (i + 1) % step == 0:
                log(log_fd, "Training Epoch [{:03d}/{:03d}] - Iteration [{:03d}/{:03d}]: coarse loss = {:.6f}, dense l1 cd = {:.6f}, total loss = {:.6f}"
                    .format(epoch, params.epochs, i + 1, len(train_dataloader), loss1.item() * 1e3, loss2.item() * 1e3, loss.item() * 1e3))
            
            train_writer.add_scalar('coarse', loss1.item(), train_step)
            train_writer.add_scalar('dense', loss2.item(), train_step)
            train_writer.add_scalar('total', loss.item(), train_step)
            train_step += 1
        
        lr_schedual.step()

        # evaluation
        model.eval()
        total_cd_l1 = 0.0
        with torch.no_grad():
            rand_iter = random.randint(0, len(val_dataloader) - 1)  # for visualization

            for i, (p, c) in enumerate(val_dataloader):
                p, c = p.to(params.device), c.to(params.device)
                coarse_pred, dense_pred = model(p)
                total_cd_l1 += l1_cd(dense_pred, c).item()

                # save into image
                if rand_iter == i:
                    index = random.randint(0, dense_pred.shape[0] - 1)
                    plot_pcd_one_view(os.path.join(epochs_dir, 'epoch_{:03d}.png'.format(epoch)),
                                      [p[index].detach().cpu().numpy(), coarse_pred[index].detach().cpu().numpy(), dense_pred[index].detach().cpu().numpy(), c[index].detach().cpu().numpy()],
                                      ['Input', 'Coarse', 'Dense', 'Ground Truth'], xlim=(-0.35, 0.35), ylim=(-0.35, 0.35), zlim=(-0.35, 0.35))
            
            total_cd_l1 /= len(val_dataset)
            val_writer.add_scalar('l1_cd', total_cd_l1, val_step)
            val_step += 1

            log(log_fd, "Validate Epoch [{:03d}/{:03d}]: L1 Chamfer Distance = {:.6f}".format(epoch, params.epochs, total_cd_l1 * 1e3))
        
        if total_cd_l1 < best_cd_l1:
            best_epoch_l1 = epoch
            best_cd_l1 = total_cd_l1
            torch.save(model.state_dict(), os.path.join(ckpt_dir, 'best_l1_cd.pth'))
            
    log(log_fd, 'Best l1 cd model saved at epoch {}; minimum l1 cd = {:.6f}'.format(best_epoch_l1, best_cd_l1 * 1e3))
    log_fd.close()
    

if __name__ == '__main__':
    parser = argparse.ArgumentParser('PCN')
    parser.add_argument('--exp_name', type=str, help='Tag of experiment')
    parser.add_argument('--log_dir', type=str, default='log', help='Logger directory')
    parser.add_argument('--ckpt_path', type=str, default=None, help='The path of pretrained model')
    parser.add_argument('--lr', type=float, default=0.0001, help='Learning rate')
    parser.add_argument('--category', type=str, default='all', help='Category of point clouds')
    parser.add_argument('--epochs', type=int, default=200, help='Epochs of training')
    parser.add_argument('--batch_size', type=int, default=32, help='Batch size for data loader')
    parser.add_argument('--coarse_loss', type=str, default='cd', help='loss function for coarse point cloud')
    parser.add_argument('--num_workers', type=int, default=6, help='num_workers for data loader')
    parser.add_argument('--device', type=str, default='cuda:0', help='device for training')
    parser.add_argument('--log_frequency', type=int, default=10, help='Logger frequency in every epoch')
    parser.add_argument('--save_frequency', type=int, default=10, help='Model saving frequency')
    params = parser.parse_args()
    
    train(params)

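The `alpha` ladder at the top of each epoch in `train()` gradually shifts the loss from the coarse output toward the dense output. Its behavior can be isolated as a pure function (thresholds copied from the code above; the function name is illustrative, not part of the repo):

```python
def alpha_schedule(train_step):
    """Weight on the dense-reconstruction loss in train():
    loss = coarse_loss + alpha * dense_loss, with alpha ramping up
    at the global-step thresholds used in the training loop above."""
    if train_step < 10000:
        return 0.01
    elif train_step < 20000:
        return 0.1
    elif train_step < 50000:
        return 0.5
    return 1.0

for s in (0, 10000, 20000, 50000):
    print(s, alpha_schedule(s))
```

Note that the loop evaluates `alpha` once per epoch from the current `train_step`, so within an epoch the weight is constant even if a threshold is crossed mid-epoch.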

================================================
FILE: visualization/__init__.py
================================================
from visualization.visualization import plot_pcd_one_view, o3d_visualize_pc


================================================
FILE: visualization/visualization.py
================================================
import open3d as o3d
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D


def o3d_visualize_pc(pc):
    point_cloud = o3d.geometry.PointCloud()
    point_cloud.points = o3d.utility.Vector3dVector(pc)
    o3d.visualization.draw_geometries([point_cloud])


def plot_pcd_one_view(filename, pcds, titles, suptitle='', sizes=None, cmap='Reds', zdir='y',
                         xlim=(-0.5, 0.5), ylim=(-0.5, 0.5), zlim=(-0.5, 0.5)):
    if sizes is None:
        sizes = [0.5 for i in range(len(pcds))]
    fig = plt.figure(figsize=(len(pcds) * 3 * 1.4, 3 * 1.4))
    elev = 30  # elevation angle
    azim = -45  # azimuth (rotation about the vertical axis)
    for j, (pcd, size) in enumerate(zip(pcds, sizes)):
        color = pcd[:, 0]
        ax = fig.add_subplot(1, len(pcds), j + 1, projection='3d')
        ax.view_init(elev, azim)
        ax.scatter(pcd[:, 0], pcd[:, 1], pcd[:, 2], zdir=zdir, c=color, s=size, cmap=cmap, vmin=-1.0, vmax=0.5)
        ax.set_title(titles[j])
        ax.set_axis_off()
        ax.set_xlim(xlim)
        ax.set_ylim(ylim)
        ax.set_zlim(zlim)
    plt.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=0.9, wspace=0.1, hspace=0.1)
    plt.suptitle(suptitle)
    fig.savefig(filename)
    plt.close(fig)
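`plot_pcd_one_view` colors each point by its x-coordinate (`color = pcd[:, 0]`) with a fixed `vmin=-1.0, vmax=0.5` window, so colors are comparable across panels. A sketch of the normalization matplotlib applies before looking up the colormap (values are clipped here for simplicity; matplotlib's `Normalize` does not clip by default):

```python
import numpy as np

def normalize_colors(x, vmin=-1.0, vmax=0.5):
    """Map scalar color values to [0, 1], mimicking the effect of the
    vmin/vmax arguments passed to ax.scatter in plot_pcd_one_view.
    Illustrative sketch only; not a function from this repo."""
    return np.clip((x - vmin) / (vmax - vmin), 0.0, 1.0)

xs = np.array([-1.0, -0.25, 0.5])
normalize_colors(xs)  # maps the window endpoints to 0 and 1, midpoint to 0.5
```

Because the point clouds are normalized to roughly [-0.35, 0.35] (see the axis limits used in `train.py`), this fixed window keeps the hue stable from epoch to epoch in the saved images.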
SYMBOL INDEX (55 symbols across 14 files)

FILE: dataset/shapenet.py
  class ShapeNet (line 13) | class ShapeNet(data.Dataset):
    method __init__ (line 20) | def __init__(self, dataroot, split, category):
    method __getitem__ (line 60) | def __getitem__(self, index):
    method __len__ (line 72) | def __len__(self):
    method _load_data (line 75) | def _load_data(self):
    method read_point_cloud (line 94) | def read_point_cloud(self, path):
    method random_sample (line 98) | def random_sample(self, pc, n):

FILE: extensions/chamfer_distance/chamfer_cuda.cpp
  function chamfer_forward (line 17) | int chamfer_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, ...
  function chamfer_backward (line 22) | int chamfer_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxy...
  function PYBIND11_MODULE (line 30) | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {

FILE: extensions/chamfer_distance/chamfer_distance.py
  class chamfer_3DFunction (line 29) | class chamfer_3DFunction(Function):
    method forward (line 31) | def forward(ctx, xyz1, xyz2):
    method backward (line 57) | def backward(ctx, graddist1, graddist2, gradidx1, gradidx2):
  class ChamferDistance (line 74) | class ChamferDistance(nn.Module):
    method __init__ (line 75) | def __init__(self):
    method forward (line 78) | def forward(self, input1, input2):

FILE: extensions/earth_movers_distance/emd.cpp
  function PYBIND11_MODULE (line 23) | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {

FILE: extensions/earth_movers_distance/emd.py
  class EarthMoverDistanceFunction (line 6) | class EarthMoverDistanceFunction(torch.autograd.Function):
    method forward (line 8) | def forward(ctx, xyz1, xyz2):
    method backward (line 18) | def backward(ctx, grad_cost):
  class EarthMoverDistance (line 25) | class EarthMoverDistance(nn.Module):
    method __init__ (line 26) | def __init__(self):
    method forward (line 29) | def forward(self, xyz1, xyz2):

FILE: metrics/loss.py
  function cd_loss_L1 (line 11) | def cd_loss_L1(pcs1, pcs2):
  function cd_loss_L2 (line 25) | def cd_loss_L2(pcs1, pcs2):
  function emd_loss (line 37) | def emd_loss(pcs1, pcs2):

FILE: metrics/metric.py
  function l2_cd (line 12) | def l2_cd(pcs1, pcs2):
  function l1_cd (line 19) | def l1_cd(pcs1, pcs2):
  function emd (line 26) | def emd(pcs1, pcs2):
  function f_score (line 31) | def f_score(pred, gt, th=0.01):

FILE: models/pcn.py
  class PCN (line 5) | class PCN(nn.Module):
    method __init__ (line 17) | def __init__(self, num_dense=16384, latent_dim=1024, grid_size=4):
    method forward (line 64) | def forward(self, xyz):

FILE: render/process_exr.py
  function read_exr (line 34) | def read_exr(exr_path, height, width):
  function depth2pcd (line 43) | def depth2pcd(depth, intrinsics, pose):

FILE: render/render_depth.py
  function random_pose (line 33) | def random_pose():
  function setup_blender (line 53) | def setup_blender(width, height, focal_length):

FILE: sample/mesh_sampling.cpp
  function uniform_deviate (line 53) | inline double
  function randomPointTriangle (line 60) | inline void
  function randPSurface (line 84) | inline void
  function uniform_sampling (line 112) | void uniform_sampling(vtkSmartPointer<vtkPolyData> polydata, size_t n_sa...
  function printHelp (line 158) | void printHelp(int, char **argv)
  function main (line 177) | int main(int argc, char **argv)

FILE: test.py
  function make_dir (line 19) | def make_dir(dir_path):
  function export_ply (line 24) | def export_ply(filename, points):
  function test_single_category (line 30) | def test_single_category(category, model, params, save=True):
  function test (line 68) | def test(params, save=False):
  function test_single_category_emd (line 103) | def test_single_category_emd(category, model, params):
  function test_emd (line 119) | def test_emd(params):

FILE: train.py
  function make_dir (line 19) | def make_dir(dir_path):
  function log (line 24) | def log(fd,  message, time=True):
  function prepare_logger (line 32) | def prepare_logger(params):
  function train (line 58) | def train(params):

FILE: visualization/visualization.py
  function o3d_visualize_pc (line 6) | def o3d_visualize_pc(pc):
  function plot_pcd_one_view (line 12) | def plot_pcd_one_view(filename, pcds, titles, suptitle='', sizes=None, c...