Repository: hansen7/OcCo
Branch: master
Commit: 4a236a600916
Files: 112
Total size: 915.5 KB
Directory structure:
gitextract_3k9761vh/
├── .gitignore
├── LICENSE
├── OcCo_TF/
│ ├── .gitignore
│ ├── Requirements_TF.txt
│ ├── cls_models/
│ │ ├── __init__.py
│ │ ├── dgcnn_cls.py
│ │ ├── pcn_cls.py
│ │ └── pointnet_cls.py
│ ├── completion_models/
│ │ ├── __init__.py
│ │ ├── dgcnn_cd.py
│ │ ├── dgcnn_emd.py
│ │ ├── pcn_cd.py
│ │ ├── pcn_emd.py
│ │ ├── pointnet_cd.py
│ │ └── pointnet_emd.py
│ ├── docker/
│ │ ├── .dockerignore
│ │ └── Dockerfile_TF
│ ├── pc_distance/
│ │ ├── __init__.py
│ │ ├── makefile
│ │ ├── tf_approxmatch.cpp
│ │ ├── tf_approxmatch.cu
│ │ ├── tf_approxmatch.py
│ │ ├── tf_nndistance.cpp
│ │ ├── tf_nndistance.cu
│ │ └── tf_nndistance.py
│ ├── readme.md
│ ├── train_cls.py
│ ├── train_cls_dgcnn_torchloader.py
│ ├── train_cls_torchloader.py
│ ├── train_completion.py
│ └── utils/
│ ├── Dataset_Assign.py
│ ├── EarlyStoppingCriterion.py
│ ├── ModelNetDataLoader.py
│ ├── Train_Logger.py
│ ├── __init__.py
│ ├── check_num_point.py
│ ├── check_scale.py
│ ├── data_util.py
│ ├── io_util.py
│ ├── pc_util.py
│ ├── tf_util.py
│ ├── transfer_pretrained_w.py
│ ├── transform_nets.py
│ └── visu_util.py
├── OcCo_Torch/
│ ├── Requirements_Torch.txt
│ ├── bash_template/
│ │ ├── train_cls_template.sh
│ │ ├── train_completion_template.sh
│ │ ├── train_jigsaw_template.sh
│ │ ├── train_partseg_template.sh
│ │ ├── train_semseg_template.sh
│ │ └── train_svm_template.sh
│ ├── chamfer_distance/
│ │ ├── __init__.py
│ │ ├── chamfer_distance.cpp
│ │ ├── chamfer_distance.cu
│ │ ├── chamfer_distance.py
│ │ └── readme.md
│ ├── docker/
│ │ ├── .dockerignore
│ │ ├── Dockerfile_Torch
│ │ ├── build_docker_torch.sh
│ │ └── launch_docker_torch.sh
│ ├── models/
│ │ ├── dgcnn_cls.py
│ │ ├── dgcnn_jigsaw.py
│ │ ├── dgcnn_occo.py
│ │ ├── dgcnn_partseg.py
│ │ ├── dgcnn_semseg.py
│ │ ├── dgcnn_util.py
│ │ ├── pcn_cls.py
│ │ ├── pcn_jigsaw.py
│ │ ├── pcn_occo.py
│ │ ├── pcn_partseg.py
│ │ ├── pcn_semseg.py
│ │ ├── pcn_util.py
│ │ ├── pointnet_cls.py
│ │ ├── pointnet_jigsaw.py
│ │ ├── pointnet_occo.py
│ │ ├── pointnet_partseg.py
│ │ ├── pointnet_semseg.py
│ │ └── pointnet_util.py
│ ├── readme.md
│ ├── train_cls.py
│ ├── train_completion.py
│ ├── train_jigsaw.py
│ ├── train_partseg.py
│ ├── train_semseg.py
│ ├── train_svm.py
│ └── utils/
│ ├── 3DPC_Data_Gen.py
│ ├── Dataset_Loc.py
│ ├── Inference_Timer.py
│ ├── LMDB_DataFlow.py
│ ├── LMDB_Writer.py
│ ├── ModelNetDataLoader.py
│ ├── PC_Augmentation.py
│ ├── S3DISDataLoader.py
│ ├── ShapeNetDataLoader.py
│ ├── TSNE_Visu.py
│ ├── Torch_Utility.py
│ ├── TrainLogger.py
│ ├── Visu_Utility.py
│ ├── __init__.py
│ ├── collect_indoor3d_data.py
│ ├── gen_indoor3d_h5.py
│ ├── indoor3d_util.py
│ └── lmdb2hdf5.py
├── readme.md
├── render/
│ ├── Depth_Renderer.py
│ ├── EXR_Process.py
│ ├── ModelNet_Flist.txt
│ ├── PC_Normalisation.py
│ └── readme.md
└── sample/
├── CMakeLists.txt
├── mesh_sampling.cpp
└── readme.md
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
results/*/plots
*/log/
demo/
*/para_restored.txt
*/pc_distance/__pycache__
.idea/
.DS_Store
__pycache__/
*.py[cod]
*$py.class
*/*.sh
*/data/*
/render/dump
*.ipynb
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2020 Hanchen Wang
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: OcCo_TF/.gitignore
================================================
# others code
results/*/plots
log/
demo/
demo_data/
para_restored.txt
pc_distance/__pycache__
# Byte-compiled / optimized / DLL files
.idea/
.DS_Store
__pycache__/
*.py[cod]
*$py.class
*.sh
*/*.sh
data/*
/render/dump*
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
================================================
FILE: OcCo_TF/Requirements_TF.txt
================================================
# Originally Designed for Docker Environment, TensorFlow 1.12.0/1.15.0, Python 3.7, CUDA 10.0
lmdb>=0.9
numpy>=1.14.0
h5py >= 2.10.0
msgpack==0.5.6
pyarrow>=0.10.0
open3d>=0.9.0.0
tensorpack>=0.8.9
matplotlib>=2.1.0
tensorflow==1.15.0  # TF1.x graph-mode APIs (tf.placeholder, tf.Session) are used throughout; 2.x is incompatible
open3d-python==0.7.0.0
================================================
FILE: OcCo_TF/cls_models/__init__.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
================================================
FILE: OcCo_TF/cls_models/dgcnn_cls.py
================================================
# Author: Hanchen Wang (hw501@cam.ac.uk)
# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/models/dgcnn.py
import sys, pdb, tensorflow as tf
sys.path.append('../')
from utils import tf_util
from train_cls_dgcnn_torchloader import NUM_CLASSES, BATCH_SIZE, NUM_POINT


class Model:
    def __init__(self, inputs, npts, labels, is_training, **kwargs):
        self.__dict__.update(kwargs)  # provides self.bn_decay
        self.knn = 20
        self.is_training = is_training
        self.features = self.create_encoder(inputs)
        self.pred = self.create_decoder(self.features)
        self.loss = self.create_loss(self.pred, labels)

    @staticmethod
    def get_graph_feature(x, k):
        """Torch: get_graph_feature = TF: adj_matrix + nn_idx + edge_feature"""
        adj_matrix = tf_util.pairwise_distance(x)
        nn_idx = tf_util.knn(adj_matrix, k=k)
        x = tf_util.get_edge_feature(x, nn_idx=nn_idx, k=k)
        return x

    def create_encoder(self, point_cloud):
        point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
        '''Previous solution the author provided:'''
        # point_cloud_transformed = point_cloud
        # adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
        # nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        # x = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
        x = self.get_graph_feature(point_cloud, self.knn)
        x = tf_util.conv2d(x, 64, [1, 1],
                           padding='VALID', stride=[1, 1],
                           bn=True, bias=False, is_training=self.is_training,
                           activation_fn=tf.nn.leaky_relu, scope='conv1', bn_decay=self.bn_decay)
        x1 = tf.reduce_max(x, axis=-2, keep_dims=True)
        x = self.get_graph_feature(x1, self.knn)
        x = tf_util.conv2d(x, 64, [1, 1],
                           padding='VALID', stride=[1, 1],
                           bn=True, bias=False, is_training=self.is_training,
                           activation_fn=tf.nn.leaky_relu, scope='conv2', bn_decay=self.bn_decay)
        x2 = tf.reduce_max(x, axis=-2, keep_dims=True)
        x = self.get_graph_feature(x2, self.knn)
        x = tf_util.conv2d(x, 128, [1, 1],
                           padding='VALID', stride=[1, 1],
                           bn=True, bias=False, is_training=self.is_training,
                           activation_fn=tf.nn.leaky_relu, scope='conv3', bn_decay=self.bn_decay)
        x3 = tf.reduce_max(x, axis=-2, keep_dims=True)
        x = self.get_graph_feature(x3, self.knn)
        x = tf_util.conv2d(x, 256, [1, 1],
                           padding='VALID', stride=[1, 1],
                           bn=True, bias=False, is_training=self.is_training,
                           activation_fn=tf.nn.leaky_relu, scope='conv4', bn_decay=self.bn_decay)
        x4 = tf.reduce_max(x, axis=-2, keep_dims=True)
        x = tf_util.conv2d(tf.concat([x1, x2, x3, x4], axis=-1), 1024, [1, 1],
                           padding='VALID', stride=[1, 1],
                           bn=True, bias=False, is_training=self.is_training,
                           activation_fn=tf.nn.leaky_relu, scope='agg', bn_decay=self.bn_decay)
        x1 = tf.reduce_max(x, axis=1, keep_dims=True)
        x2 = tf.reduce_mean(x, axis=1, keep_dims=True)
        # pdb.set_trace()
        features = tf.reshape(tf.concat([x1, x2], axis=-1), [BATCH_SIZE, -1])
        return features

    def create_decoder(self, features):
        """fully connected layers for classification with dropout"""
        with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
            # self.linear1 = nn.Linear(args.emb_dims*2, 512, bias=False)
            features = tf_util.fully_connected(features, 512, bn=True, bias=False,
                                               activation_fn=tf.nn.leaky_relu,
                                               scope='linear1', is_training=self.is_training)
            features = tf_util.dropout(features, keep_prob=0.5, scope='dp1', is_training=self.is_training)
            # self.linear2 = nn.Linear(512, 256)
            features = tf_util.fully_connected(features, 256, bn=True, bias=True,
                                               activation_fn=tf.nn.leaky_relu,
                                               scope='linear2', is_training=self.is_training)
            features = tf_util.dropout(features, keep_prob=0.5, scope='dp2', is_training=self.is_training)
            # self.linear3 = nn.Linear(256, output_channels)
            pred = tf_util.fully_connected(features, NUM_CLASSES, bn=False, bias=True,
                                           activation_fn=None,
                                           scope='linear3', is_training=self.is_training)
        return pred

    @staticmethod
    def create_loss(pred, label, smoothing=True):
        # Torch reference:
        # if smoothing:
        #     eps = 0.2
        #     n_class = pred.size(1)
        #
        #     one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)
        #     one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
        #     log_prb = F.log_softmax(pred, dim=1)
        #
        #     loss = -(one_hot * log_prb).sum(dim=1).mean()
        if smoothing:
            eps = 0.2
            one_hot = tf.one_hot(indices=label, depth=NUM_CLASSES)
            # tf.print(one_hot, output_stream=sys.stderr)  # not working
            one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (NUM_CLASSES - 1)
            log_prb = tf.nn.log_softmax(logits=pred, axis=1)
            cls_loss = -tf.reduce_mean(tf.reduce_sum(one_hot * log_prb, axis=1))
        else:
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
            cls_loss = tf.reduce_mean(loss)
        tf.summary.scalar('classification loss', cls_loss)
        return cls_loss


if __name__ == '__main__':
    batch_size, num_cls = 16, NUM_CLASSES
    lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
    is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
    global_step = tf.Variable(0, trainable=False, name='global_step')
    inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
    npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
    labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
    learning_rate = tf.train.exponential_decay(base_lr, global_step, lr_decay_steps, lr_decay_rate,
                                               staircase=True, name='lr')
    learning_rate = tf.maximum(learning_rate, lr_clip)
    model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=None)  # the encoder reads self.bn_decay via kwargs
    trainer = tf.train.AdamOptimizer(learning_rate)
    train_op = trainer.minimize(model.loss, global_step)
    print('\n\n\n==========')
    print('pred', model.pred)
    print('loss', model.loss)
    # seems different from what the paper has claimed:
    saver = tf.train.Saver()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    config.log_device_placement = True
    sess = tf.Session(config=config)
    # Init weights
    init = tf.global_variables_initializer()
    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite the randomly initialised parameters
    for idx, var in enumerate(tf.trainable_variables()):
        print(idx, var)
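The smoothed cross-entropy in `create_loss` above can be sanity-checked outside the TF graph. This NumPy sketch (an editorial illustration, not part of the repository) reproduces the same arithmetic: soften the one-hot targets, take a stable log-softmax, and average the negative inner products:

```python
import numpy as np

def smoothed_ce(logits, labels, num_classes, eps=0.2):
    # one-hot targets, softened: the true class gets 1 - eps,
    # the remaining num_classes - 1 classes share eps equally
    one_hot = np.eye(num_classes)[labels]
    one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (num_classes - 1)
    # numerically stable log-softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    log_prb = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(np.sum(one_hot * log_prb, axis=1))
```

With uniform logits the loss equals `log(num_classes)` for any `eps`, since each softened target row still sums to one.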
================================================
FILE: OcCo_TF/cls_models/pcn_cls.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import sys, tensorflow as tf
sys.path.append('../')
from utils.tf_util import mlp_conv, point_maxpool, point_unpool, fully_connected, dropout
from train_cls import NUM_CLASSES
# NUM_CLASSES = 40


class Model:
    def __init__(self, inputs, npts, labels, is_training, **kwargs):
        self.is_training = is_training
        self.features = self.create_encoder(inputs, npts)
        self.pred = self.create_decoder(self.features)
        self.loss = self.create_loss(self.pred, labels)

    def create_encoder(self, inputs, npts):
        """mini-PointNet encoder"""
        with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
            features = mlp_conv(inputs, [128, 256])
            features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
            features = tf.concat([features, features_global], axis=2)
        with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
            features = mlp_conv(features, [512, 1024])
            features = point_maxpool(features, npts)
        return features

    def create_decoder(self, features):
        """fully connected layers for classification with dropout"""
        with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
            features = fully_connected(features, 512, bn=True, scope='fc1', is_training=self.is_training)
            features = dropout(features, keep_prob=0.7, scope='dp1', is_training=self.is_training)
            features = fully_connected(features, 256, bn=True, scope='fc2', is_training=self.is_training)
            features = dropout(features, keep_prob=0.7, scope='dp2', is_training=self.is_training)
            pred = fully_connected(features, NUM_CLASSES, activation_fn=None, scope='fc3',
                                   is_training=self.is_training)
        return pred

    def create_loss(self, pred, label):
        """pred: B * NUM_CLASSES, label: B"""
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
        cls_loss = tf.reduce_mean(loss)
        tf.summary.scalar('classification loss', cls_loss)
        return cls_loss


if __name__ == '__main__':
    batch_size, num_cls = 16, NUM_CLASSES
    lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
    is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
    global_step = tf.Variable(0, trainable=False, name='global_step')
    inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
    npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
    labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
    learning_rate = tf.train.exponential_decay(base_lr, global_step,
                                               lr_decay_steps, lr_decay_rate,
                                               staircase=True, name='lr')
    learning_rate = tf.maximum(learning_rate, lr_clip)
    # model_module = importlib.import_module('./pcn_cls', './')
    model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl)
    trainer = tf.train.AdamOptimizer(learning_rate)
    train_op = trainer.minimize(model.loss, global_step)
    print('\n\n\n==========')
    print('pred', model.pred)
    print('loss', model.loss)
    # seems different from what the paper has claimed:
    saver = tf.train.Saver()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    config.log_device_placement = True
    sess = tf.Session(config=config)
    # Init variables
    init = tf.global_variables_initializer()
    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite the randomly initialised parameters
    for idx, var in enumerate(tf.trainable_variables()):
        print(idx, var)
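PCN's encoder packs variable-sized clouds into one `(1, N_total, 3)` tensor and pools each cloud's segment separately, given `npts`. A NumPy sketch of the semantics of `point_maxpool` followed by `point_unpool` (illustrative only; the real TF ops live in `utils/tf_util.py`):

```python
import numpy as np

def maxpool_unpool(features, npts):
    """features: (N_total, C) points of all clouds concatenated along axis 0;
    npts: number of points per cloud. Returns each point's per-cloud global
    max feature, broadcast back to every point of that cloud."""
    out, start = [], 0
    for n in npts:
        seg_max = features[start:start + n].max(axis=0, keepdims=True)  # (1, C) pool
        out.append(np.repeat(seg_max, n, axis=0))                       # unpool
        start += n
    return np.concatenate(out, axis=0)  # (N_total, C)
```

Concatenating this broadcast global feature with the per-point features is what `encoder_0` does before the second MLP stage.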
================================================
FILE: OcCo_TF/cls_models/pointnet_cls.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import sys, os
import tensorflow as tf
BASE_DIR = os.path.dirname(__file__)
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(BASE_DIR, '../utils'))
from utils.tf_util import fully_connected, dropout, conv2d, max_pool2d
from train_cls import NUM_CLASSES, BATCH_SIZE, NUM_POINT
from utils.transform_nets import input_transform_net, feature_transform_net


class Model:
    def __init__(self, inputs, npts, labels, is_training, **kwargs):
        self.__dict__.update(kwargs)  # provides self.bn_decay
        self.is_training = is_training
        self.features = self.create_encoder(inputs, npts)
        self.pred = self.create_decoder(self.features)
        self.loss = self.create_loss(self.pred, labels)

    def create_encoder(self, inputs, npts):
        """PointNet encoder"""
        inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
        with tf.variable_scope('transform_net1') as sc:
            transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
        point_cloud_transformed = tf.matmul(inputs, transform)
        input_image = tf.expand_dims(point_cloud_transformed, -1)
        net = conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
                     scope='conv1', padding='VALID', stride=[1, 1],
                     bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
        net = conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
                     scope='conv2', padding='VALID', stride=[1, 1],
                     bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
        with tf.variable_scope('transform_net2') as sc:
            transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
        net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
        net_transformed = tf.expand_dims(net_transformed, [2])
        '''conv2d with a [1, 1] kernel and unit stride is
        equivalent to a shared (pointwise) MLP'''
        # use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
        net = conv2d(net_transformed, 64, [1, 1],
                     scope='conv3', padding='VALID', stride=[1, 1],
                     bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
        net = conv2d(net, 128, [1, 1],
                     padding='VALID', stride=[1, 1],
                     bn=True, is_training=self.is_training,
                     scope='conv4', bn_decay=self.bn_decay)
        net = conv2d(net, 1024, [1, 1],
                     padding='VALID', stride=[1, 1],
                     bn=True, is_training=self.is_training,
                     scope='conv5', bn_decay=self.bn_decay)
        net = max_pool2d(net, [NUM_POINT, 1],
                         padding='VALID', scope='maxpool')
        features = tf.reshape(net, [BATCH_SIZE, -1])
        return features

    def create_decoder(self, features):
        """fully connected layers for classification with dropout"""
        with tf.variable_scope('decoder_cls', reuse=tf.AUTO_REUSE):
            features = fully_connected(features, 512, bn=True, scope='fc1', is_training=self.is_training)
            features = dropout(features, keep_prob=0.7, scope='dp1', is_training=self.is_training)
            features = fully_connected(features, 256, bn=True, scope='fc2', is_training=self.is_training)
            features = dropout(features, keep_prob=0.7, scope='dp2', is_training=self.is_training)
            pred = fully_connected(features, NUM_CLASSES, activation_fn=None, scope='fc3',
                                   is_training=self.is_training)
        return pred

    def create_loss(self, pred, label):
        """pred: B * NUM_CLASSES, label: B"""
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
        cls_loss = tf.reduce_mean(loss)
        tf.summary.scalar('classification loss', cls_loss)
        return cls_loss


if __name__ == '__main__':
    batch_size, num_cls = BATCH_SIZE, NUM_CLASSES
    lr_clip, base_lr, lr_decay_steps, lr_decay_rate = 1e-6, 1e-4, 50000, .7
    is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
    global_step = tf.Variable(0, trainable=False, name='global_step')
    inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
    npts_pl = tf.placeholder(tf.int32, (batch_size,), 'num_points')
    labels_pl = tf.placeholder(tf.int32, (batch_size,), 'ground_truths')
    learning_rate = tf.train.exponential_decay(base_lr, global_step,
                                               lr_decay_steps, lr_decay_rate,
                                               staircase=True, name='lr')
    learning_rate = tf.maximum(learning_rate, lr_clip)
    # model_module = importlib.import_module('./pcn_cls', './')
    model = Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=None)  # the encoder reads self.bn_decay via kwargs
    trainer = tf.train.AdamOptimizer(learning_rate)
    train_op = trainer.minimize(model.loss, global_step)
    print('\n\n\n==========')
    print('pred', model.pred)
    print('loss', model.loss)
    # seems different from what the paper has claimed:
    saver = tf.train.Saver()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    config.log_device_placement = True
    sess = tf.Session(config=config)
    # Init variables
    init = tf.global_variables_initializer()
    sess.run(init, {is_training_pl: True})  # restoring a checkpoint would overwrite the randomly initialised parameters
    for idx, var in enumerate(tf.trainable_variables()):
        print(idx, var)
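The learning-rate schedule used in these `__main__` blocks, staircase exponential decay clipped at `lr_clip`, reduces to a one-liner. A dependency-free sketch with the same constants (an editorial illustration mirroring `tf.train.exponential_decay(..., staircase=True)` followed by `tf.maximum`):

```python
def decayed_lr(step, base_lr=1e-4, decay_steps=50000, decay_rate=0.7, lr_clip=1e-6):
    """Staircase exponential decay with a floor: the rate drops by a factor
    of decay_rate every decay_steps steps, never going below lr_clip."""
    lr = base_lr * decay_rate ** (step // decay_steps)  # integer division -> staircase
    return max(lr, lr_clip)
```

After 13 decay intervals (650k steps here) the raw rate falls below `1e-6` and the clip takes over.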
================================================
FILE: OcCo_TF/completion_models/__init__.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
================================================
FILE: OcCo_TF/completion_models/dgcnn_cd.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
# author: Hanchen Wang
import os, sys, tensorflow as tf
BASE_DIR = os.path.dirname(__file__)
sys.path.append(BASE_DIR)
sys.path.append('../')
sys.path.append(os.path.join(BASE_DIR, '../utils'))
from utils import tf_util
from utils.transform_nets import input_transform_net_dgcnn
from train_completion import BATCH_SIZE, NUM_POINT
# BATCH_SIZE = 8  # otherwise set to 8
# NUM_POINT = 2048  # 3000


class Model:
    def __init__(self, inputs, npts, gt, alpha, **kwargs):
        self.knn = 20
        self.__dict__.update(kwargs)  # provides self.bn_decay and self.is_training
        self.num_output_points = 16384  # 1024 * 16
        self.num_coarse = 1024
        self.grid_size = 4
        self.grid_scale = 0.05
        self.num_fine = self.grid_size ** 2 * self.num_coarse
        self.features = self.create_encoder(inputs, npts)
        self.coarse, self.fine = self.create_decoder(self.features)
        self.loss, self.update = self.create_loss(gt, alpha)
        self.outputs = self.fine
        self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
        self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']

    def create_encoder(self, point_cloud, npts):
        point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
        adj_matrix = tf_util.pairwise_distance(point_cloud)
        nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        edge_feature = tf_util.get_edge_feature(point_cloud, nn_idx=nn_idx, k=self.knn)
        with tf.variable_scope('transform_net1') as sc:
            transform = input_transform_net_dgcnn(edge_feature, self.is_training, self.bn_decay, K=3)
        point_cloud_transformed = tf.matmul(point_cloud, transform)
        adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
        nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        edge_feature = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
        net = tf_util.conv2d(edge_feature, 64, [1, 1],
                             padding='VALID', stride=[1, 1],
                             bn=True, is_training=self.is_training,
                             scope='dgcnn1', bn_decay=self.bn_decay)
        net = tf.reduce_max(net, axis=-2, keep_dims=True)
        net1 = net
        adj_matrix = tf_util.pairwise_distance(net)
        nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
        net = tf_util.conv2d(edge_feature, 64, [1, 1],
                             padding='VALID', stride=[1, 1],
                             bn=True, is_training=self.is_training,
                             scope='dgcnn2', bn_decay=self.bn_decay)
        net = tf.reduce_max(net, axis=-2, keep_dims=True)
        net2 = net
        adj_matrix = tf_util.pairwise_distance(net)
        nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
        net = tf_util.conv2d(edge_feature, 64, [1, 1],
                             padding='VALID', stride=[1, 1],
                             bn=True, is_training=self.is_training,
                             scope='dgcnn3', bn_decay=self.bn_decay)
        net = tf.reduce_max(net, axis=-2, keep_dims=True)
        net3 = net
        adj_matrix = tf_util.pairwise_distance(net)
        nn_idx = tf_util.knn(adj_matrix, k=self.knn)
        edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
        net = tf_util.conv2d(edge_feature, 128, [1, 1],
                             padding='VALID', stride=[1, 1],
                             bn=True, is_training=self.is_training,
                             scope='dgcnn4', bn_decay=self.bn_decay)
        net = tf.reduce_max(net, axis=-2, keep_dims=True)
        net4 = net
        net = tf_util.conv2d(tf.concat([net1, net2, net3, net4], axis=-1), 1024, [1, 1],
                             padding='VALID', stride=[1, 1],
                             bn=True, is_training=self.is_training,
                             scope='agg', bn_decay=self.bn_decay)
        net = tf.reduce_max(net, axis=1, keep_dims=True)
        features = tf.reshape(net, [BATCH_SIZE, -1])
        return features

    def create_decoder(self, features):
        with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
            coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
            coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
        with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
            grid = tf.meshgrid(tf.linspace(-0.05, 0.05, self.grid_size),
                               tf.linspace(-0.05, 0.05, self.grid_size))
            grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
            grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
            point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
            point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
            global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
            feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
            center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
            center = tf.reshape(center, [-1, self.num_fine, 3])
            fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
        return coarse, fine

    def create_loss(self, gt, alpha):
        loss_coarse = tf_util.chamfer(self.coarse, gt)
        tf_util.add_train_summary('train/coarse_loss', loss_coarse)
        update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
        loss_fine = tf_util.chamfer(self.fine, gt)
        tf_util.add_train_summary('train/fine_loss', loss_fine)
        update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
        loss = loss_coarse + alpha * loss_fine
        tf_util.add_train_summary('train/loss', loss)
        update_loss = tf_util.add_valid_summary('valid/loss', loss)
        return loss, [update_coarse, update_fine, update_loss]
================================================
FILE: OcCo_TF/completion_models/dgcnn_emd.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
# author: Hanchen Wang
import os, sys, tensorflow as tf
BASE_DIR = os.path.dirname(__file__)
sys.path.append(BASE_DIR)
sys.path.append('../')
sys.path.append(os.path.join(BASE_DIR, '../utils'))
from utils import tf_util
from utils.transform_nets import input_transform_net_dgcnn
from train_completion import BATCH_SIZE, NUM_POINT
# BATCH_SIZE = 8 # otherwise set to 8
# NUM_POINT = 2048 # 3000
class Model:
def __init__(self, inputs, npts, gt, alpha, **kwargs):
self.knn = 20
self.__dict__.update(kwargs) # batch_decay and is_training
self.num_output_points = 16384 # 1024 * 16
self.num_coarse = 1024
self.grid_size = 4
self.grid_scale = 0.05
self.num_fine = self.grid_size ** 2 * self.num_coarse
self.features = self.create_encoder(inputs, npts)
self.coarse, self.fine = self.create_decoder(self.features)
self.loss, self.update = self.create_loss(gt, alpha)
self.outputs = self.fine
self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
def create_encoder(self, point_cloud, npts):
point_cloud = tf.reshape(point_cloud, (BATCH_SIZE, NUM_POINT, 3))
adj_matrix = tf_util.pairwise_distance(point_cloud)
nn_idx = tf_util.knn(adj_matrix, k=self.knn)
edge_feature = tf_util.get_edge_feature(point_cloud, nn_idx=nn_idx, k=self.knn)
with tf.variable_scope('transform_net1') as sc:
transform = input_transform_net_dgcnn(edge_feature, self.is_training, self.bn_decay, K=3)
point_cloud_transformed = tf.matmul(point_cloud, transform)
adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)
nn_idx = tf_util.knn(adj_matrix, k=self.knn)
edge_feature = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=self.knn)
net = tf_util.conv2d(edge_feature, 64, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='dgcnn1', bn_decay=self.bn_decay)
net = tf.reduce_max(net, axis=-2, keep_dims=True)
net1 = net
adj_matrix = tf_util.pairwise_distance(net)
nn_idx = tf_util.knn(adj_matrix, k=self.knn)
edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
net = tf_util.conv2d(edge_feature, 64, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='dgcnn2', bn_decay=self.bn_decay)
net = tf.reduce_max(net, axis=-2, keep_dims=True)
net2 = net
adj_matrix = tf_util.pairwise_distance(net)
nn_idx = tf_util.knn(adj_matrix, k=self.knn)
edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
net = tf_util.conv2d(edge_feature, 64, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='dgcnn3', bn_decay=self.bn_decay)
net = tf.reduce_max(net, axis=-2, keep_dims=True)
net3 = net
adj_matrix = tf_util.pairwise_distance(net)
nn_idx = tf_util.knn(adj_matrix, k=self.knn)
edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=self.knn)
net = tf_util.conv2d(edge_feature, 128, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='dgcnn4', bn_decay=self.bn_decay)
net = tf.reduce_max(net, axis=-2, keep_dims=True)
net4 = net
net = tf_util.conv2d(tf.concat([net1, net2, net3, net4], axis=-1), 1024, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='agg', bn_decay=self.bn_decay)
net = tf.reduce_max(net, axis=1, keep_dims=True)
features = tf.reshape(net, [BATCH_SIZE, -1])
return features
def create_decoder(self, features):
with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
grid = tf.meshgrid(tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size), tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size))
grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
center = tf.reshape(center, [-1, self.num_fine, 3])
fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
return coarse, fine
def create_loss(self, gt, alpha):
gt_ds = gt[:, :self.coarse.shape[1], :]
loss_coarse = tf_util.earth_mover(self.coarse, gt_ds)
tf_util.add_train_summary('train/coarse_loss', loss_coarse)
update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
loss_fine = tf_util.chamfer(self.fine, gt)
tf_util.add_train_summary('train/fine_loss', loss_fine)
update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
loss = loss_coarse + alpha * loss_fine
tf_util.add_train_summary('train/loss', loss)
update_loss = tf_util.add_valid_summary('valid/loss', loss)
return loss, [update_coarse, update_fine, update_loss]
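The folding step in `create_decoder` above expands each of the 1,024 coarse points into a `grid_size x grid_size` local patch, so `num_fine = grid_size ** 2 * num_coarse = 16384`. A minimal NumPy sketch of just this tiling bookkeeping (standalone, with a made-up batch size and zero placeholders in place of the learned coarse points and features):

```python
import numpy as np

# Shape bookkeeping for the folding decoder, mirroring create_decoder above.
# Hypothetical standalone sketch: no learned MLPs, only the tiling geometry.
batch, num_coarse, grid_size, feat_dim = 2, 1024, 4, 1024
num_fine = grid_size ** 2 * num_coarse  # 16384

coarse = np.zeros((batch, num_coarse, 3))    # stands in for the coarse output
features = np.zeros((batch, feat_dim))       # stands in for the global feature

# 2D folding grid in [-0.05, 0.05]^2, one copy per coarse point
lin = np.linspace(-0.05, 0.05, grid_size)
gx, gy = np.meshgrid(lin, lin)
grid = np.stack([gx, gy], axis=-1).reshape(1, -1, 2)           # (1, 16, 2)
grid_feat = np.tile(grid, (batch, num_coarse, 1))              # (B, 16384, 2)

# each coarse point repeated over its 16 grid cells
point_feat = np.repeat(coarse, grid_size ** 2, axis=1)         # (B, 16384, 3)

# global feature broadcast to every fine point
global_feat = np.tile(features[:, None, :], (1, num_fine, 1))  # (B, 16384, 1024)

feat = np.concatenate([grid_feat, point_feat, global_feat], axis=2)
```

In the model, `feat` is what the final `mlp_conv(feat, [512, 512, 3])` consumes; adding the repeated coarse centers back gives the fine output.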
================================================
FILE: OcCo_TF/completion_models/pcn_cd.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd.py
import pdb, tensorflow as tf
from utils.tf_util import mlp, mlp_conv, point_maxpool, point_unpool, chamfer, \
add_train_summary, add_valid_summary
class Model:
def __init__(self, inputs, npts, gt, alpha, **kwargs):
self.__dict__.update(kwargs)  # bn_decay and is_training
self.num_coarse = 1024
self.grid_size = 4
self.grid_scale = 0.05
self.num_fine = self.grid_size ** 2 * self.num_coarse
self.features = self.create_encoder(inputs, npts)
self.coarse, self.fine = self.create_decoder(self.features)
self.loss, self.update = self.create_loss(self.coarse, self.fine, gt, alpha)
self.outputs = self.fine
self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
def create_encoder(self, inputs, npts):
with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
features = mlp_conv(inputs, [128, 256])
features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
features = tf.concat([features, features_global], axis=2)
with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
features = mlp_conv(features, [512, 1024])
features = point_maxpool(features, npts)
return features
def create_decoder(self, features):
with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
grid = tf.meshgrid(tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size), tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size))
grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
center = tf.reshape(center, [-1, self.num_fine, 3])
fine = mlp_conv(feat, [512, 512, 3]) + center
return coarse, fine
def create_loss(self, coarse, fine, gt, alpha):
# print('coarse shape:', coarse.shape)
# print('fine shape:', fine.shape)
# print('gt shape:', gt.shape)
loss_coarse = chamfer(coarse, gt)
add_train_summary('train/coarse_loss', loss_coarse)
update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
loss_fine = chamfer(fine, gt)
add_train_summary('train/fine_loss', loss_fine)
update_fine = add_valid_summary('valid/fine_loss', loss_fine)
loss = loss_coarse + alpha * loss_fine
add_train_summary('train/loss', loss)
update_loss = add_valid_summary('valid/loss', loss)
return loss, [update_coarse, update_fine, update_loss]
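`pcn_cd` supervises both the coarse and fine outputs with the Chamfer distance. As a reference, here is a brute-force NumPy sketch of one common symmetric formulation (the exact reduction inside `tf_util.chamfer` / the `tf_nndistance` op may differ, e.g. squared vs. unsquared distances):

```python
import numpy as np

def chamfer_np(a, b):
    """Brute-force symmetric Chamfer distance between two point sets.

    a: (n, 3), b: (m, 3). One common formulation: mean nearest-neighbour
    distance in each direction, summed. O(n * m) memory; reference only.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pts = np.array([[0., 0., 0.], [1., 0., 0.]])
assert chamfer_np(pts, pts) == 0.0  # identical sets have zero distance
```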
================================================
FILE: OcCo_TF/completion_models/pcn_emd.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
# Author: Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018
import tensorflow as tf
from utils.tf_util import mlp_conv, point_maxpool, point_unpool, mlp, add_train_summary, \
add_valid_summary, earth_mover, chamfer
class Model:
def __init__(self, inputs, npts, gt, alpha, **kwargs):
self.num_coarse = 1024
self.grid_size = 4
self.grid_scale = 0.05
self.num_fine = self.grid_size ** 2 * self.num_coarse
self.features = self.create_encoder(inputs, npts)
self.coarse, self.fine = self.create_decoder(self.features)
self.loss, self.update = self.create_loss(self.coarse, self.fine, gt, alpha)
self.outputs = self.fine
self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
def create_encoder(self, inputs, npts):
with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
features = mlp_conv(inputs, [128, 256])
features_global = point_unpool(point_maxpool(features, npts, keepdims=True), npts)
features = tf.concat([features, features_global], axis=2)
with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
features = mlp_conv(features, [512, 1024])
features = point_maxpool(features, npts)
return features
def create_decoder(self, features):
with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
x = tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size)
y = tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size)
grid = tf.meshgrid(x, y)
grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
center = tf.reshape(center, [-1, self.num_fine, 3])
fine = mlp_conv(feat, [512, 512, 3]) + center
return coarse, fine
def create_loss(self, coarse, fine, gt, alpha):
gt_ds = gt[:, :coarse.shape[1], :]
loss_coarse = earth_mover(coarse, gt_ds)
add_train_summary('train/coarse_loss', loss_coarse)
update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
loss_fine = chamfer(fine, gt)
add_train_summary('train/fine_loss', loss_fine)
update_fine = add_valid_summary('valid/fine_loss', loss_fine)
loss = loss_coarse + alpha * loss_fine
add_train_summary('train/loss', loss)
update_loss = add_valid_summary('valid/loss', loss)
return loss, [update_coarse, update_fine, update_loss]
================================================
FILE: OcCo_TF/completion_models/pointnet_cd.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import os, sys, tensorflow as tf
BASE_DIR = os.path.dirname(__file__)
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(BASE_DIR, '../utils'))
sys.path.append('../')
from utils.tf_util import conv2d, mlp, mlp_conv, chamfer, add_valid_summary, add_train_summary, max_pool2d
from utils.transform_nets import input_transform_net, feature_transform_net
from train_completion import BATCH_SIZE, NUM_POINT
class Model:
def __init__(self, inputs, npts, gt, alpha, **kwargs):
self.__dict__.update(kwargs)  # bn_decay and is_training
self.num_output_points = 16384 # 1024 * 16
self.num_coarse = 1024
self.grid_size = 4
self.grid_scale = 0.05
self.num_fine = self.grid_size ** 2 * self.num_coarse
self.features = self.create_encoder(inputs, npts)
self.coarse, self.fine = self.create_decoder(self.features)
self.loss, self.update = self.create_loss(gt, alpha)
self.outputs = self.fine
self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
def create_encoder(self, inputs, npts):
# with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
# features = mlp_conv(inputs, [128, 256])
# features_global = tf.reduce_max(features, axis=1, keep_dims=True, name='maxpool_0')
# features = tf.concat([features, tf.tile(features_global, [1, tf.shape(inputs)[1], 1])], axis=2)
# with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
# features = mlp_conv(features, [512, 1024])
# features = tf.reduce_max(features, axis=1, name='maxpool_1')
# end_points = {}
# if DATASET =='modelnet40':
inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
with tf.variable_scope('transform_net1') as sc:
transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
point_cloud_transformed = tf.matmul(inputs, transform)
input_image = tf.expand_dims(point_cloud_transformed, -1)
net = conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
scope='conv1', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
net = conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
scope='conv2', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
with tf.variable_scope('transform_net2') as sc:
transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
# end_points['transform'] = transform
net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
net_transformed = tf.expand_dims(net_transformed, [2])
'''a conv2d with a [1, 1] kernel and a [1, 1] stride is
equivalent to a shared MLP applied independently to each point'''
# use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
net = conv2d(net_transformed, 64, [1, 1],
scope='conv3', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
net = conv2d(net, 128, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='conv4', bn_decay=self.bn_decay)
net = conv2d(net, 1024, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='conv5', bn_decay=self.bn_decay)
net = max_pool2d(net, [NUM_POINT, 1],
padding='VALID', scope='maxpool')
features = tf.reshape(net, [BATCH_SIZE, -1])
return features
def create_decoder(self, features):
with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
coarse = mlp(features, [1024, 1024, self.num_coarse * 3])
coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
grid = tf.meshgrid(tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size),
tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size))
grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
center = tf.reshape(center, [-1, self.num_fine, 3])
fine = mlp_conv(feat, [512, 512, 3]) + center
return coarse, fine
def create_loss(self, gt, alpha):
loss_coarse = chamfer(self.coarse, gt)
add_train_summary('train/coarse_loss', loss_coarse)
update_coarse = add_valid_summary('valid/coarse_loss', loss_coarse)
loss_fine = chamfer(self.fine, gt)
add_train_summary('train/fine_loss', loss_fine)
update_fine = add_valid_summary('valid/fine_loss', loss_fine)
loss = loss_coarse + alpha * loss_fine
add_train_summary('train/loss', loss)
update_loss = add_valid_summary('valid/loss', loss)
return loss, [update_coarse, update_fine, update_loss]
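Throughout these completion models the partial input clouds arrive packed into a single `(1, total_points, 3)` tensor, with `npts` listing the per-cloud point counts; `tf.split(inputs[0], npts, axis=0)` in `visualize_ops` recovers the individual clouds. A standalone NumPy sketch of the same unpacking, with made-up sizes (note `np.split` takes cumulative boundaries where `tf.split` takes a size list):

```python
import numpy as np

# Batch of 3 partial clouds with 3, 5, and 2 points, packed into one tensor
npts = [3, 5, 2]
packed = np.arange(sum(npts) * 3, dtype=np.float32).reshape(1, -1, 3)

# np.split wants cumulative split boundaries, unlike tf.split's size list
bounds = np.cumsum(npts)[:-1]          # [3, 8]
clouds = np.split(packed[0], bounds, axis=0)
```

This packed representation is what lets the PCN-style encoders handle inputs with varying point counts in one forward pass.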
================================================
FILE: OcCo_TF/completion_models/pointnet_emd.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import os, sys, tensorflow as tf
BASE_DIR = os.path.dirname(__file__)
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(BASE_DIR, '../utils'))
sys.path.append('../')
from utils import tf_util
from utils.transform_nets import input_transform_net, feature_transform_net
from train_completion import BATCH_SIZE, NUM_POINT
# BATCH_SIZE = 8 # otherwise set to 8
# NUM_POINT = 2048 # 3000
class Model:
def __init__(self, inputs, npts, gt, alpha, **kwargs):
self.__dict__.update(kwargs)  # bn_decay and is_training
self.num_output_points = 16384 # 1024 * 16
self.num_coarse = 1024
self.grid_size = 4
self.grid_scale = 0.05
self.num_fine = self.grid_size ** 2 * self.num_coarse
self.features = self.create_encoder(inputs, npts)
self.coarse, self.fine = self.create_decoder(self.features)
self.loss, self.update = self.create_loss(gt, alpha)
self.outputs = self.fine
self.visualize_ops = [tf.split(inputs[0], npts, axis=0), self.coarse, self.fine, gt]
self.visualize_titles = ['input', 'coarse output', 'fine output', 'ground truth']
def create_encoder(self, inputs, npts):
# with tf.variable_scope('encoder_0', reuse=tf.AUTO_REUSE):
# features = mlp_conv(inputs, [128, 256])
# features_global = tf.reduce_max(features, axis=1, keep_dims=True, name='maxpool_0')
# features = tf.concat([features, tf.tile(features_global, [1, tf.shape(inputs)[1], 1])], axis=2)
# with tf.variable_scope('encoder_1', reuse=tf.AUTO_REUSE):
# features = mlp_conv(features, [512, 1024])
# features = tf.reduce_max(features, axis=1, name='maxpool_1')
# end_points = {}
inputs = tf.reshape(inputs, (BATCH_SIZE, NUM_POINT, 3))
with tf.variable_scope('transform_net1') as sc:
transform = input_transform_net(inputs, self.is_training, self.bn_decay, K=3)
point_cloud_transformed = tf.matmul(inputs, transform)
input_image = tf.expand_dims(point_cloud_transformed, -1)
net = tf_util.conv2d(inputs=input_image, num_output_channels=64, kernel_size=[1, 3],
scope='conv1', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
net = tf_util.conv2d(inputs=net, num_output_channels=64, kernel_size=[1, 1],
scope='conv2', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
with tf.variable_scope('transform_net2') as sc:
transform = feature_transform_net(net, self.is_training, self.bn_decay, K=64)
# end_points['transform'] = transform
net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
net_transformed = tf.expand_dims(net_transformed, [2])
'''a conv2d with a [1, 1] kernel and a [1, 1] stride is
equivalent to a shared MLP applied independently to each point'''
# use_xavier=True, stddev=1e-3, weight_decay=0.0, activation_fn=tf.nn.relu,
net = tf_util.conv2d(net_transformed, 64, [1, 1],
scope='conv3', padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training, bn_decay=self.bn_decay)
net = tf_util.conv2d(net, 128, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='conv4', bn_decay=self.bn_decay)
net = tf_util.conv2d(net, 1024, [1, 1],
padding='VALID', stride=[1, 1],
bn=True, is_training=self.is_training,
scope='conv5', bn_decay=self.bn_decay)
net = tf_util.max_pool2d(net, [NUM_POINT, 1],
padding='VALID', scope='maxpool')
features = tf.reshape(net, [BATCH_SIZE, -1])
return features
def create_decoder(self, features):
with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
coarse = tf_util.mlp(features, [1024, 1024, self.num_coarse * 3])
coarse = tf.reshape(coarse, [-1, self.num_coarse, 3])
with tf.variable_scope('folding', reuse=tf.AUTO_REUSE):
grid = tf.meshgrid(tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size), tf.linspace(-self.grid_scale, self.grid_scale, self.grid_size))
grid = tf.expand_dims(tf.reshape(tf.stack(grid, axis=2), [-1, 2]), 0)
grid_feat = tf.tile(grid, [features.shape[0], self.num_coarse, 1])
point_feat = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
point_feat = tf.reshape(point_feat, [-1, self.num_fine, 3])
global_feat = tf.tile(tf.expand_dims(features, 1), [1, self.num_fine, 1])
feat = tf.concat([grid_feat, point_feat, global_feat], axis=2)
center = tf.tile(tf.expand_dims(coarse, 2), [1, 1, self.grid_size ** 2, 1])
center = tf.reshape(center, [-1, self.num_fine, 3])
fine = tf_util.mlp_conv(feat, [512, 512, 3]) + center
return coarse, fine
def create_loss(self, gt, alpha):
gt_ds = gt[:, :self.coarse.shape[1], :]
loss_coarse = tf_util.earth_mover(self.coarse, gt_ds)
# loss_coarse = earth_mover(coarse, gt_ds)
tf_util.add_train_summary('train/coarse_loss', loss_coarse)
update_coarse = tf_util.add_valid_summary('valid/coarse_loss', loss_coarse)
loss_fine = tf_util.chamfer(self.fine, gt)
tf_util.add_train_summary('train/fine_loss', loss_fine)
update_fine = tf_util.add_valid_summary('valid/fine_loss', loss_fine)
loss = loss_coarse + alpha * loss_fine
tf_util.add_train_summary('train/loss', loss)
update_loss = tf_util.add_valid_summary('valid/loss', loss)
return loss, [update_coarse, update_fine, update_loss]
================================================
FILE: OcCo_TF/docker/.dockerignore
================================================
../data/
../log/
================================================
FILE: OcCo_TF/docker/Dockerfile_TF
================================================
FROM tensorflow/tensorflow:1.12.0-gpu-py3
WORKDIR /workspace/OcCo_TF
RUN mkdir /home/hcw
RUN chmod -R 777 /home/hcw
RUN chmod 777 /usr/bin
RUN chmod 777 /bin
RUN chmod 777 /usr/local/
RUN apt-get -y update
RUN apt-get -y install vim screen libgl1-mesa-glx
COPY ./Requirements_TF.txt /workspace/OcCo_TF
RUN pip install -r ./Requirements_TF.txt
COPY ./pc_distance /workspace/OcCo_TF/pc_distance
# RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
# RUN apt-get install wget
# RUN wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
# RUN yes|apt -y install ./cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
# RUN wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
# RUN apt -y install ./nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
# RUN apt-get update
# Install the NVIDIA driver
# Issue with driver install requires creating /usr/lib/nvidia
# RUN mkdir /usr/lib/nvidia
# RUN apt-get -y -o Dpkg::Options::="--force-overwrite" install --no-install-recommends nvidia-410
# Reboot. Check that GPUs are visible using the command: nvidia-smi
# Install CUDA and tools. Include optional NCCL 2.x
# RUN apt install -y --allow-downgrades cuda9.0 cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 \
# cuda-cusolver-9-0 cuda-cusparse-9-0 libcudnn7=7.2.1.38-1+cuda9.0 \
# libnccl2=2.2.13-1+cuda9.0 cuda-command-line-tools-9-0
# Optional: Install the TensorRT runtime (must be after CUDA install)
# RUN apt update
# RUN apt -y install libnvinfer4=4.1.2-1+cuda9.0
WORKDIR /workspace/OcCo_TF/pc_distance
RUN make
RUN chmod -R 777 /workspace/OcCo_TF/pc_distance
# RUN ln -s /usr/local/cuda/lib64/libcudart.so.10.0 /usr/local/cuda/lib64/libcudart.so.9.0
RUN ln -s /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so.1
RUN mkdir -p /usr/local/nvidia/lib
RUN cp /usr/local/lib/python3.5/dist-packages/tensorflow/libtensorflow_framework.so /usr/local/nvidia/lib/libtensorflow_framework.so.1
RUN useradd hcw
USER hcw
WORKDIR /workspace/OcCo_TF
================================================
FILE: OcCo_TF/pc_distance/__init__.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
================================================
FILE: OcCo_TF/pc_distance/makefile
================================================
cuda_inc = /usr/local/cuda-9.0/include/
cuda_lib = /usr/local/cuda-9.0/lib64/
nvcc = /usr/local/cuda-9.0/bin/nvcc
tf_inc = /usr/local/lib/python3.5/dist-packages/tensorflow/include
tf_lib = /usr/local/lib/python3.5/dist-packages/tensorflow
all: tf_nndistance_so.so tf_approxmatch_so.so
tf_nndistance.cu.o: tf_nndistance.cu
$(nvcc) tf_nndistance.cu -o tf_nndistance.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
tf_nndistance_so.so: tf_nndistance.cpp tf_nndistance.cu.o
g++ tf_nndistance.cpp tf_nndistance.cu.o -o tf_nndistance_so.so \
-I $(cuda_inc) -I $(tf_inc) -L $(cuda_lib) -lcudart -L $(tf_lib) -ltensorflow_framework \
-shared -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -fPIC -O2
tf_approxmatch.cu.o: tf_approxmatch.cu
$(nvcc) tf_approxmatch.cu -o tf_approxmatch.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
tf_approxmatch_so.so: tf_approxmatch.cpp tf_approxmatch.cu.o
g++ -shared $(CPPFLAGS) tf_approxmatch.cpp tf_approxmatch.cu.o -o tf_approxmatch_so.so \
-I $(cuda_inc) -I $(tf_inc) -L $(cuda_lib) -lcudart -L $(tf_lib) -ltensorflow_framework \
-shared -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -fPIC -O2
clean:
rm -rf *.o *.so
================================================
FILE: OcCo_TF/pc_distance/tf_approxmatch.cpp
================================================
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <algorithm>
#include <vector>
#include <math.h>
using namespace tensorflow;
REGISTER_OP("ApproxMatch")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("match: float32");
REGISTER_OP("MatchCost")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("cost: float32");
REGISTER_OP("MatchCostGrad")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("grad1: float32")
.Output("grad2: float32");
void approxmatch_cpu(int b,int n,int m,const float * xyz1,const float * xyz2,float * match){
for (int i=0;i<b;i++){
int factorl=std::max(n,m)/n;
int factorr=std::max(n,m)/m;
std::vector<double> saturatedl(n,double(factorl)),saturatedr(m,double(factorr));
std::vector<double> weight(n*m);
for (int j=0;j<n*m;j++)
match[j]=0;
for (int j=8;j>=-2;j--){
//printf("i=%d j=%d\n",i,j);
double level=-powf(4.0,j);
if (j==-2)
level=0;
for (int k=0;k<n;k++){
double x1=xyz1[k*3+0];
double y1=xyz1[k*3+1];
double z1=xyz1[k*3+2];
for (int l=0;l<m;l++){
double x2=xyz2[l*3+0];
double y2=xyz2[l*3+1];
double z2=xyz2[l*3+2];
weight[k*m+l]=expf(level*((x1-x2)*(x1-x2)+(y1-y2)*(y1-y2)+(z1-z2)*(z1-z2)))*saturatedr[l];
}
}
std::vector<double> ss(m,1e-9);
for (int k=0;k<n;k++){
double s=1e-9;
for (int l=0;l<m;l++){
s+=weight[k*m+l];
}
for (int l=0;l<m;l++){
weight[k*m+l]=weight[k*m+l]/s*saturatedl[k];
}
for (int l=0;l<m;l++)
ss[l]+=weight[k*m+l];
}
for (int l=0;l<m;l++){
double s=ss[l];
double r=std::min(saturatedr[l]/s,1.0);
ss[l]=r;
}
std::vector<double> ss2(m,0);
for (int k=0;k<n;k++){
double s=0;
for (int l=0;l<m;l++){
weight[k*m+l]*=ss[l];
s+=weight[k*m+l];
ss2[l]+=weight[k*m+l];
}
saturatedl[k]=std::max(saturatedl[k]-s,0.0);
}
for (int k=0;k<n*m;k++)
match[k]+=weight[k];
for (int l=0;l<m;l++){
saturatedr[l]=std::max(saturatedr[l]-ss2[l],0.0);
}
}
xyz1+=n*3;
xyz2+=m*3;
match+=n*m;
}
}
void matchcost_cpu(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * cost){
for (int i=0;i<b;i++){
double s=0;
for (int j=0;j<n;j++)
for (int k=0;k<m;k++){
float x1=xyz1[j*3+0];
float y1=xyz1[j*3+1];
float z1=xyz1[j*3+2];
float x2=xyz2[k*3+0];
float y2=xyz2[k*3+1];
float z2=xyz2[k*3+2];
float d=sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1))*match[j*m+k];
s+=d;
}
cost[0]=s;
xyz1+=n*3;
xyz2+=m*3;
match+=n*m;
cost+=1;
}
}
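`matchcost_cpu` above accumulates, per batch element, the Euclidean distance of every point pair weighted by the soft assignment `match`. A NumPy reference of the same computation (here `match` is indexed as `(b, n, m)` to mirror the `match[j*m+k]` indexing of the CPU loop; the op's registered output shape is `(b, m, n)`):

```python
import numpy as np

def match_cost_np(xyz1, xyz2, match):
    """NumPy reference for matchcost_cpu above.

    xyz1: (b, n, 3), xyz2: (b, m, 3), match: (b, n, m) soft assignment.
    Returns the per-batch cost: sum over all point pairs of the Euclidean
    distance weighted by the corresponding match weight.
    """
    d = np.linalg.norm(xyz1[:, :, None, :] - xyz2[:, None, :, :], axis=-1)  # (b, n, m)
    return (d * match).sum(axis=(1, 2))
```

With a `match` produced by `approxmatch_cpu`, this is the (approximate) Earth Mover's Distance used as the coarse loss in the `*_emd` models.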
void matchcostgrad_cpu(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * grad1,float * grad2){
for (int i=0;i<b;i++){
for (int j=0;j<n;j++)
grad1[j*3+0]=0;
for (int j=0;j<m;j++){
float sx=0,sy=0,sz=0;
for (int k=0;k<n;k++){
float x2=xyz2[j*3+0];
float y2=xyz2[j*3+1];
float z2=xyz2[j*3+2];
float x1=xyz1[k*3+0];
float y1=xyz1[k*3+1];
float z1=xyz1[k*3+2];
float d=std::max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);
float dx=match[k*m+j]*((x2-x1)/d);
float dy=match[k*m+j]*((y2-y1)/d);
float dz=match[k*m+j]*((z2-z1)/d);
grad1[k*3+0]-=dx;
grad1[k*3+1]-=dy;
grad1[k*3+2]-=dz;
sx+=dx;
sy+=dy;
sz+=dz;
}
grad2[j*3+0]=sx;
grad2[j*3+1]=sy;
grad2[j*3+2]=sz;
}
xyz1+=n*3;
xyz2+=m*3;
match+=n*m;
grad1+=n*3;
grad2+=m*3;
}
}
void approxmatchLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,float * match,float * temp);
void matchcostLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * out);
void matchcostgradLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * grad1,float * grad2);
class ApproxMatchGpuOp: public OpKernel{
public:
explicit ApproxMatchGpuOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
//OP_REQUIRES(context,n<=4096,errors::InvalidArgument("ApproxMatch handles at most 4096 dataset points"));
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
//OP_REQUIRES(context,m<=1024,errors::InvalidArgument("ApproxMatch handles at most 1024 query points"));
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
Tensor * match_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,n},&match_tensor));
auto match_flat=match_tensor->flat<float>();
float * match=&(match_flat(0));
Tensor temp_tensor;
OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{b,(n+m)*2},&temp_tensor));
auto temp_flat=temp_tensor.flat<float>();
float * temp=&(temp_flat(0));
approxmatchLauncher(b,n,m,xyz1,xyz2,match,temp);
}
};
REGISTER_KERNEL_BUILDER(Name("ApproxMatch").Device(DEVICE_GPU), ApproxMatchGpuOp);
class ApproxMatchOp: public OpKernel{
public:
explicit ApproxMatchOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
//OP_REQUIRES(context,n<=4096,errors::InvalidArgument("ApproxMatch handles at most 4096 dataset points"));
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
//OP_REQUIRES(context,m<=1024,errors::InvalidArgument("ApproxMatch handles at most 1024 query points"));
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
Tensor * match_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,n},&match_tensor));
auto match_flat=match_tensor->flat<float>();
float * match=&(match_flat(0));
approxmatch_cpu(b,n,m,xyz1,xyz2,match);
}
};
REGISTER_KERNEL_BUILDER(Name("ApproxMatch").Device(DEVICE_CPU), ApproxMatchOp);
class MatchCostGpuOp: public OpKernel{
public:
explicit MatchCostGpuOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
const Tensor& match_tensor=context->input(2);
OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
auto match_flat=match_tensor.flat<float>();
const float * match=&(match_flat(0));
Tensor * cost_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b},&cost_tensor));
auto cost_flat=cost_tensor->flat<float>();
float * cost=&(cost_flat(0));
matchcostLauncher(b,n,m,xyz1,xyz2,match,cost);
}
};
REGISTER_KERNEL_BUILDER(Name("MatchCost").Device(DEVICE_GPU), MatchCostGpuOp);
class MatchCostOp: public OpKernel{
public:
explicit MatchCostOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCost expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
const Tensor& match_tensor=context->input(2);
OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCost expects (batch_size,#query,#dataset) match shape"));
auto match_flat=match_tensor.flat<float>();
const float * match=&(match_flat(0));
Tensor * cost_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b},&cost_tensor));
auto cost_flat=cost_tensor->flat<float>();
float * cost=&(cost_flat(0));
matchcost_cpu(b,n,m,xyz1,xyz2,match,cost);
}
};
REGISTER_KERNEL_BUILDER(Name("MatchCost").Device(DEVICE_CPU), MatchCostOp);
class MatchCostGradGpuOp: public OpKernel{
public:
explicit MatchCostGradGpuOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
const Tensor& match_tensor=context->input(2);
OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCostGrad expects (batch_size,#query,#dataset) match shape"));
auto match_flat=match_tensor.flat<float>();
const float * match=&(match_flat(0));
Tensor * grad1_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad1_tensor));
auto grad1_flat=grad1_tensor->flat<float>();
float * grad1=&(grad1_flat(0));
Tensor * grad2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad2_tensor));
auto grad2_flat=grad2_tensor->flat<float>();
float * grad2=&(grad2_flat(0));
matchcostgradLauncher(b,n,m,xyz1,xyz2,match,grad1,grad2);
}
};
REGISTER_KERNEL_BUILDER(Name("MatchCostGrad").Device(DEVICE_GPU), MatchCostGradGpuOp);
class MatchCostGradOp: public OpKernel{
public:
explicit MatchCostGradOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz1 shape"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&(xyz1_flat(0));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("MatchCostGrad expects (batch_size,num_points,3) xyz2 shape, and batch_size must match"));
int m=xyz2_tensor.shape().dim_size(1);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&(xyz2_flat(0));
const Tensor& match_tensor=context->input(2);
OP_REQUIRES(context,match_tensor.dims()==3 && match_tensor.shape().dim_size(0)==b && match_tensor.shape().dim_size(1)==m && match_tensor.shape().dim_size(2)==n,errors::InvalidArgument("MatchCostGrad expects (batch_size,#query,#dataset) match shape"));
auto match_flat=match_tensor.flat<float>();
const float * match=&(match_flat(0));
Tensor * grad1_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad1_tensor));
auto grad1_flat=grad1_tensor->flat<float>();
float * grad1=&(grad1_flat(0));
Tensor * grad2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad2_tensor));
auto grad2_flat=grad2_tensor->flat<float>();
float * grad2=&(grad2_flat(0));
matchcostgrad_cpu(b,n,m,xyz1,xyz2,match,grad1,grad2);
}
};
REGISTER_KERNEL_BUILDER(Name("MatchCostGrad").Device(DEVICE_CPU), MatchCostGradOp);
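The CPU and GPU kernels registered above compute the same quantity. As a sanity reference (a hypothetical NumPy helper, not part of this extension), the cost returned by the MatchCost op is the per-batch sum of Euclidean distances between matched pairs, weighted by the `(batch, m, n)` match tensor:

```python
import numpy as np

def match_cost_np(xyz1, xyz2, match):
    """NumPy sketch of the MatchCost op.
    xyz1: (b, n, 3), xyz2: (b, m, 3), match: (b, m, n) -> cost: (b,)"""
    # pairwise distances laid out as (batch, m, n), mirroring the match tensor
    d = np.linalg.norm(xyz2[:, :, None, :] - xyz1[:, None, :, :], axis=-1)
    return (d * match).sum(axis=(1, 2))
```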
================================================
FILE: OcCo_TF/pc_distance/tf_approxmatch.cu
================================================
// Approximate matching by deterministic annealing: several passes with
// temperature level=-4^j (j=7..-1, then level=0) alternately re-weight the
// unassigned mass on each side and accumulate the transport plan into match.
__global__ void approxmatch(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,float * __restrict__ match,float * temp){
float * remainL=temp+blockIdx.x*(n+m)*2, * remainR=temp+blockIdx.x*(n+m)*2+n,*ratioL=temp+blockIdx.x*(n+m)*2+n+m,*ratioR=temp+blockIdx.x*(n+m)*2+n+m+n;
float multiL,multiR;
if (n>=m){
multiL=1;
multiR=n/m;
}else{
multiL=m/n;
multiR=1;
}
const int Block=1024;
__shared__ float buf[Block*4];
for (int i=blockIdx.x;i<b;i+=gridDim.x){
for (int j=threadIdx.x;j<n*m;j+=blockDim.x)
match[i*n*m+j]=0;
for (int j=threadIdx.x;j<n;j+=blockDim.x)
remainL[j]=multiL;
for (int j=threadIdx.x;j<m;j+=blockDim.x)
remainR[j]=multiR;
__syncthreads();
for (int j=7;j>=-2;j--){
float level=-powf(4.0f,j);
if (j==-2){
level=0;
}
for (int k0=0;k0<n;k0+=blockDim.x){
int k=k0+threadIdx.x;
float x1=0,y1=0,z1=0;
if (k<n){
x1=xyz1[i*n*3+k*3+0];
y1=xyz1[i*n*3+k*3+1];
z1=xyz1[i*n*3+k*3+2];
}
float suml=1e-9f;
for (int l0=0;l0<m;l0+=Block){
int lend=min(m,l0+Block)-l0;
for (int l=threadIdx.x;l<lend;l+=blockDim.x){
float x2=xyz2[i*m*3+l0*3+l*3+0];
float y2=xyz2[i*m*3+l0*3+l*3+1];
float z2=xyz2[i*m*3+l0*3+l*3+2];
buf[l*4+0]=x2;
buf[l*4+1]=y2;
buf[l*4+2]=z2;
buf[l*4+3]=remainR[l0+l];
}
__syncthreads();
for (int l=0;l<lend;l++){
float x2=buf[l*4+0];
float y2=buf[l*4+1];
float z2=buf[l*4+2];
float d=level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1));
float w=__expf(d)*buf[l*4+3];
suml+=w;
}
__syncthreads();
}
if (k<n)
ratioL[k]=remainL[k]/suml;
}
/*for (int k=threadIdx.x;k<n;k+=gridDim.x){
float x1=xyz1[i*n*3+k*3+0];
float y1=xyz1[i*n*3+k*3+1];
float z1=xyz1[i*n*3+k*3+2];
float suml=1e-9f;
for (int l=0;l<m;l++){
float x2=xyz2[i*m*3+l*3+0];
float y2=xyz2[i*m*3+l*3+1];
float z2=xyz2[i*m*3+l*3+2];
float w=expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*remainR[l];
suml+=w;
}
ratioL[k]=remainL[k]/suml;
}*/
__syncthreads();
for (int l0=0;l0<m;l0+=blockDim.x){
int l=l0+threadIdx.x;
float x2=0,y2=0,z2=0;
if (l<m){
x2=xyz2[i*m*3+l*3+0];
y2=xyz2[i*m*3+l*3+1];
z2=xyz2[i*m*3+l*3+2];
}
float sumr=0;
for (int k0=0;k0<n;k0+=Block){
int kend=min(n,k0+Block)-k0;
for (int k=threadIdx.x;k<kend;k+=blockDim.x){
buf[k*4+0]=xyz1[i*n*3+k0*3+k*3+0];
buf[k*4+1]=xyz1[i*n*3+k0*3+k*3+1];
buf[k*4+2]=xyz1[i*n*3+k0*3+k*3+2];
buf[k*4+3]=ratioL[k0+k];
}
__syncthreads();
for (int k=0;k<kend;k++){
float x1=buf[k*4+0];
float y1=buf[k*4+1];
float z1=buf[k*4+2];
float w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*buf[k*4+3];
sumr+=w;
}
__syncthreads();
}
if (l<m){
sumr*=remainR[l];
float consumption=fminf(remainR[l]/(sumr+1e-9f),1.0f);
ratioR[l]=consumption*remainR[l];
remainR[l]=fmaxf(0.0f,remainR[l]-sumr);
}
}
/*for (int l=threadIdx.x;l<m;l+=blockDim.x){
float x2=xyz2[i*m*3+l*3+0];
float y2=xyz2[i*m*3+l*3+1];
float z2=xyz2[i*m*3+l*3+2];
float sumr=0;
for (int k=0;k<n;k++){
float x1=xyz1[i*n*3+k*3+0];
float y1=xyz1[i*n*3+k*3+1];
float z1=xyz1[i*n*3+k*3+2];
float w=expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*ratioL[k];
sumr+=w;
}
sumr*=remainR[l];
float consumption=fminf(remainR[l]/(sumr+1e-9f),1.0f);
ratioR[l]=consumption*remainR[l];
remainR[l]=fmaxf(0.0f,remainR[l]-sumr);
}*/
__syncthreads();
for (int k0=0;k0<n;k0+=blockDim.x){
int k=k0+threadIdx.x;
float x1=0,y1=0,z1=0;
if (k<n){
x1=xyz1[i*n*3+k*3+0];
y1=xyz1[i*n*3+k*3+1];
z1=xyz1[i*n*3+k*3+2];
}
float suml=0;
for (int l0=0;l0<m;l0+=Block){
int lend=min(m,l0+Block)-l0;
for (int l=threadIdx.x;l<lend;l+=blockDim.x){
buf[l*4+0]=xyz2[i*m*3+l0*3+l*3+0];
buf[l*4+1]=xyz2[i*m*3+l0*3+l*3+1];
buf[l*4+2]=xyz2[i*m*3+l0*3+l*3+2];
buf[l*4+3]=ratioR[l0+l];
}
__syncthreads();
float rl=ratioL[k];
if (k<n){
for (int l=0;l<lend;l++){
float x2=buf[l*4+0];
float y2=buf[l*4+1];
float z2=buf[l*4+2];
float w=__expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*rl*buf[l*4+3];
match[i*n*m+(l0+l)*n+k]+=w;
suml+=w;
}
}
__syncthreads();
}
if (k<n)
remainL[k]=fmaxf(0.0f,remainL[k]-suml);
}
/*for (int k=threadIdx.x;k<n;k+=blockDim.x){
float x1=xyz1[i*n*3+k*3+0];
float y1=xyz1[i*n*3+k*3+1];
float z1=xyz1[i*n*3+k*3+2];
float suml=0;
for (int l=0;l<m;l++){
float x2=xyz2[i*m*3+l*3+0];
float y2=xyz2[i*m*3+l*3+1];
float z2=xyz2[i*m*3+l*3+2];
float w=expf(level*((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)))*ratioL[k]*ratioR[l];
match[i*n*m+l*n+k]+=w;
suml+=w;
}
remainL[k]=fmaxf(0.0f,remainL[k]-suml);
}*/
__syncthreads();
}
}
}
void approxmatchLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,float * match,float * temp){
approxmatch<<<32,512>>>(b,n,m,xyz1,xyz2,match,temp);
}
// Per-batch matching cost: sum of Euclidean distances weighted by match,
// reduced across the block through a shared-memory binary tree.
__global__ void matchcost(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,const float * __restrict__ match,float * __restrict__ out){
__shared__ float allsum[512];
const int Block=1024;
__shared__ float buf[Block*3];
for (int i=blockIdx.x;i<b;i+=gridDim.x){
float subsum=0;
for (int k0=0;k0<n;k0+=blockDim.x){
int k=k0+threadIdx.x;
float x1=0,y1=0,z1=0;
if (k<n){
x1=xyz1[i*n*3+k*3+0];
y1=xyz1[i*n*3+k*3+1];
z1=xyz1[i*n*3+k*3+2];
}
for (int l0=0;l0<m;l0+=Block){
int lend=min(m,l0+Block)-l0;
for (int l=threadIdx.x;l<lend*3;l+=blockDim.x)
buf[l]=xyz2[i*m*3+l0*3+l];
__syncthreads();
if (k<n){
for (int l=0;l<lend;l++){
float x2=buf[l*3+0];
float y2=buf[l*3+1];
float z2=buf[l*3+2];
float d=sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1));
subsum+=d*match[i*n*m+(l0+l)*n+k];
}
}
__syncthreads();
}
}
allsum[threadIdx.x]=subsum;
for (int j=1;j<blockDim.x;j<<=1){
__syncthreads();
if ((threadIdx.x&j)==0 && threadIdx.x+j<blockDim.x){
allsum[threadIdx.x]+=allsum[threadIdx.x+j];
}
}
if (threadIdx.x==0)
out[i]=allsum[0];
__syncthreads();
}
}
void matchcostLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * out){
matchcost<<<32,512>>>(b,n,m,xyz1,xyz2,match,out);
}
// Gradient of the matching cost w.r.t. xyz2: each point's contribution is
// accumulated per thread and then reduced across the block in shared memory.
__global__ void matchcostgrad2(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,const float * __restrict__ match,float * __restrict__ grad2){
__shared__ float sum_grad[256*3];
for (int i=blockIdx.x;i<b;i+=gridDim.x){
int kbeg=m*blockIdx.y/gridDim.y;
int kend=m*(blockIdx.y+1)/gridDim.y;
for (int k=kbeg;k<kend;k++){
float x2=xyz2[(i*m+k)*3+0];
float y2=xyz2[(i*m+k)*3+1];
float z2=xyz2[(i*m+k)*3+2];
float subsumx=0,subsumy=0,subsumz=0;
for (int j=threadIdx.x;j<n;j+=blockDim.x){
float x1=x2-xyz1[(i*n+j)*3+0];
float y1=y2-xyz1[(i*n+j)*3+1];
float z1=z2-xyz1[(i*n+j)*3+2];
float d=match[i*n*m+k*n+j]*rsqrtf(fmaxf(x1*x1+y1*y1+z1*z1,1e-20f));
subsumx+=x1*d;
subsumy+=y1*d;
subsumz+=z1*d;
}
sum_grad[threadIdx.x*3+0]=subsumx;
sum_grad[threadIdx.x*3+1]=subsumy;
sum_grad[threadIdx.x*3+2]=subsumz;
for (int j=1;j<blockDim.x;j<<=1){
__syncthreads();
int j1=threadIdx.x;
int j2=threadIdx.x+j;
if ((j1&j)==0 && j2<blockDim.x){
sum_grad[j1*3+0]+=sum_grad[j2*3+0];
sum_grad[j1*3+1]+=sum_grad[j2*3+1];
sum_grad[j1*3+2]+=sum_grad[j2*3+2];
}
}
if (threadIdx.x==0){
grad2[(i*m+k)*3+0]=sum_grad[0];
grad2[(i*m+k)*3+1]=sum_grad[1];
grad2[(i*m+k)*3+2]=sum_grad[2];
}
__syncthreads();
}
}
}
// Gradient of the matching cost w.r.t. xyz1: one thread per point of xyz1,
// looping over all m candidate matches in xyz2.
__global__ void matchcostgrad1(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,const float * __restrict__ match,float * __restrict__ grad1){
for (int i=blockIdx.x;i<b;i+=gridDim.x){
for (int l=threadIdx.x;l<n;l+=blockDim.x){
float x1=xyz1[i*n*3+l*3+0];
float y1=xyz1[i*n*3+l*3+1];
float z1=xyz1[i*n*3+l*3+2];
float dx=0,dy=0,dz=0;
for (int k=0;k<m;k++){
float x2=xyz2[i*m*3+k*3+0];
float y2=xyz2[i*m*3+k*3+1];
float z2=xyz2[i*m*3+k*3+2];
float d=match[i*n*m+k*n+l]*rsqrtf(fmaxf((x1-x2)*(x1-x2)+(y1-y2)*(y1-y2)+(z1-z2)*(z1-z2),1e-20f));
dx+=(x1-x2)*d;
dy+=(y1-y2)*d;
dz+=(z1-z2)*d;
}
grad1[i*n*3+l*3+0]=dx;
grad1[i*n*3+l*3+1]=dy;
grad1[i*n*3+l*3+2]=dz;
}
}
}
void matchcostgradLauncher(int b,int n,int m,const float * xyz1,const float * xyz2,const float * match,float * grad1,float * grad2){
matchcostgrad1<<<32,512>>>(b,n,m,xyz1,xyz2,match,grad1);
matchcostgrad2<<<dim3(32,32),256>>>(b,n,m,xyz1,xyz2,match,grad2);
}
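For reference, the gradient that `matchcostgrad1` produces can be written in NumPy as follows (a hypothetical sketch; the kernel's `rsqrtf(fmaxf(d2,1e-20))` guard corresponds to flooring the distance at 1e-10):

```python
import numpy as np

def match_cost_grad1_np(xyz1, xyz2, match, eps=1e-10):
    """NumPy sketch of matchcostgrad1: d||x1 - x2||/dx1 weighted by match,
    summed over the m points of xyz2.  Returns grad1 of shape (b, n, 3)."""
    diff = xyz1[:, None, :, :] - xyz2[:, :, None, :]       # (b, m, n, 3)
    dist = np.maximum(np.linalg.norm(diff, axis=-1), eps)  # (b, m, n)
    return (match[..., None] * diff / dist[..., None]).sum(axis=1)
```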
================================================
FILE: OcCo_TF/pc_distance/tf_approxmatch.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import tensorflow as tf
from tensorflow.python.framework import ops  # needed for shape/gradient registration below
import os.path as osp
base_dir = osp.dirname(osp.abspath(__file__))
approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))
def approx_match(xyz1, xyz2):
"""
:param xyz1: batch_size * #dataset_points * 3
:param xyz2: batch_size * #query_points * 3
:return:
match : batch_size * #query_points * #dataset_points
"""
return approxmatch_module.approx_match(xyz1, xyz2)
ops.NoGradient('ApproxMatch')
# @tf.RegisterShape('ApproxMatch')
@ops.RegisterShape('ApproxMatch')
def _approx_match_shape(op):
shape1 = op.inputs[0].get_shape().with_rank(3)
shape2 = op.inputs[1].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0], shape2.dims[1], shape1.dims[1]])]
def match_cost(xyz1, xyz2, match):
"""
:param xyz1: batch_size * #dataset_points * 3
:param xyz2: batch_size * #query_points * 3
:param match: batch_size * #query_points * #dataset_points
:return: cost : (batch_size,)
"""
return approxmatch_module.match_cost(xyz1, xyz2, match)
# @tf.RegisterShape('MatchCost')
@ops.RegisterShape('MatchCost')
def _match_cost_shape(op):
shape1 = op.inputs[0].get_shape().with_rank(3)
# shape2 = op.inputs[1].get_shape().with_rank(3)
# shape3 = op.inputs[2].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0]])]
@tf.RegisterGradient('MatchCost')
def _match_cost_grad(op,grad_cost):
xyz1 = op.inputs[0]
xyz2 = op.inputs[1]
match = op.inputs[2]
grad_1, grad_2 = approxmatch_module.match_cost_grad(xyz1, xyz2, match)
return [grad_1 * tf.expand_dims(tf.expand_dims(grad_cost, 1), 2),
grad_2 * tf.expand_dims(tf.expand_dims(grad_cost, 1), 2), None]
if __name__ == '__main__':
alpha = 0.5
beta = 2.0
# import bestmatch
import numpy as np
# import math
import random
import cv2
# import tf_nndistance
npoint = 100
with tf.device('/gpu:2'):
pt_in = tf.placeholder(tf.float32, shape=(1, npoint * 4, 3))
mypoints = tf.Variable(np.random.randn(1, npoint, 3).astype('float32'))
match = approx_match(pt_in, mypoints)
loss = tf.reduce_sum(match_cost(pt_in, mypoints, match))
# match=approx_match(mypoints,pt_in)
# loss=tf.reduce_sum(match_cost(mypoints,pt_in,match))
# distf,_,distb,_=tf_nndistance.nn_distance(pt_in,mypoints)
# loss=tf.reduce_sum((distf+1e-9)**0.5)*0.5+tf.reduce_sum((distb+1e-9)**0.5)*0.5
# loss=tf.reduce_max((distf+1e-9)**0.5)*0.5*npoint+tf.reduce_max((distb+1e-9)**0.5)*0.5*npoint
optimizer = tf.train.GradientDescentOptimizer(1e-4).minimize(loss)
with tf.Session('') as sess:
# sess.run(tf.initialize_all_variables())
sess.run(tf.global_variables_initializer())
while True:
meanloss = 0
meantrueloss = 0
for i in range(1001):
# phi=np.random.rand(4*npoint)*math.pi*2
# tpoints=(np.hstack([np.cos(phi)[:,None],np.sin(phi)[:,None],(phi*0)[:,None]])*random.random())[None,:,:]
# tpoints=((np.random.rand(400)-0.5)[:,None]*[0,2,0]+[(random.random()-0.5)*2,0,0]).astype('float32')[None,:,:]
tpoints = np.hstack([np.linspace(-1, 1, 400)[:, None],
(random.random() * 2 * np.linspace(1,0,400)**2)[:, None],
np.zeros((400,1))])[None, :, :]
trainloss, _ = sess.run([loss, optimizer], feed_dict={pt_in: tpoints.astype('float32')})
trainloss, trainmatch = sess.run([loss, match], feed_dict={pt_in: tpoints.astype('float32')})
# trainmatch=trainmatch.transpose((0,2,1))
print('trainloss: %f'%trainloss)
show = np.zeros((400,400,3), dtype='uint8')^255
trainmypoints = sess.run(mypoints)
''' === visualisation ===
for i in range(len(tpoints[0])):
u = np.random.choice(range(len(trainmypoints[0])), p=trainmatch[0].T[i])
cv2.line(show,
(int(tpoints[0][i,1]*100+200),int(tpoints[0][i,0]*100+200)),
(int(trainmypoints[0][u,1]*100+200),int(trainmypoints[0][u,0]*100+200)),
cv2.cv.CV_RGB(0,255,0))
for x, y, z in tpoints[0]:
cv2.circle(show, (int(y*100+200), int(x*100+200)), 2, cv2.cv.CV_RGB(255, 0, 0))
for x, y, z in trainmypoints[0]:
cv2.circle(show, (int(y*100+200),int(x*100+200)), 3, cv2.cv.CV_RGB(0, 0, 255))
'''
cost = ((tpoints[0][:, None, :] - np.repeat(trainmypoints[0][None, :, :], 4, axis=1))**2).sum(axis=2)**0.5
# trueloss=bestmatch.bestmatch(cost)[0]
print(trainloss) # approximate loss (exact bestmatch cost is commented out above)
# cv2.imshow('show', show)
cmd = cv2.waitKey(10) % 256
if cmd == ord('q'):
break
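The semantics of `approx_match` can be imitated in plain NumPy with a Sinkhorn-style normalisation (a hypothetical sketch, not the op's exact annealing schedule): repeatedly rescale `exp(-d^2/T)` until it is approximately doubly stochastic, giving a soft assignment of shape `(batch, #query, #dataset)`:

```python
import numpy as np

def soft_match_np(xyz1, xyz2, temperature=2.0, iters=500):
    """Sinkhorn-style stand-in for approx_match.
    xyz1: (b, n, 3) dataset points, xyz2: (b, m, 3) query points.
    Returns a soft assignment of shape (b, m, n)."""
    d2 = ((xyz2[:, :, None, :] - xyz1[:, None, :, :]) ** 2).sum(-1)  # (b, m, n)
    match = np.exp(-d2 / temperature)
    for _ in range(iters):
        match /= match.sum(axis=2, keepdims=True)  # each query row sums to 1
        match /= match.sum(axis=1, keepdims=True)  # each dataset column sums to 1
    return match
```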
================================================
FILE: OcCo_TF/pc_distance/tf_nndistance.cpp
================================================
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
REGISTER_OP("NnDistance")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("dist1: float32")
.Output("idx1: int32")
.Output("dist2: float32")
.Output("idx2: int32");
REGISTER_OP("NnDistanceGrad")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("grad_dist1: float32")
.Input("idx1: int32")
.Input("grad_dist2: float32")
.Input("idx2: int32")
.Output("grad_xyz1: float32")
.Output("grad_xyz2: float32");
using namespace tensorflow;
// Exhaustive nearest-neighbour search: for each of the n points in xyz1,
// record the squared distance to (and index of) its closest point in xyz2.
static void nnsearch(int b,int n,int m,const float * xyz1,const float * xyz2,float * dist,int * idx){
for (int i=0;i<b;i++){
for (int j=0;j<n;j++){
float x1=xyz1[(i*n+j)*3+0];
float y1=xyz1[(i*n+j)*3+1];
float z1=xyz1[(i*n+j)*3+2];
double best=0;
int besti=0;
for (int k=0;k<m;k++){
float x2=xyz2[(i*m+k)*3+0]-x1;
float y2=xyz2[(i*m+k)*3+1]-y1;
float z2=xyz2[(i*m+k)*3+2]-z1;
double d=x2*x2+y2*y2+z2*z2;
if (k==0 || d<best){
best=d;
besti=k;
}
}
dist[i*n+j]=best;
idx[i*n+j]=besti;
}
}
}
class NnDistanceOp : public OpKernel{
public:
explicit NnDistanceOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz1 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz1"));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz2 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz2"));
int m=xyz2_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistance expects xyz1 and xyz2 have same batch size"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&xyz1_flat(0);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&xyz2_flat(0);
Tensor * dist1_tensor=NULL;
Tensor * idx1_tensor=NULL;
Tensor * dist2_tensor=NULL;
Tensor * idx2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n},&dist1_tensor));
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,n},&idx1_tensor));
auto dist1_flat=dist1_tensor->flat<float>();
auto idx1_flat=idx1_tensor->flat<int>();
OP_REQUIRES_OK(context,context->allocate_output(2,TensorShape{b,m},&dist2_tensor));
OP_REQUIRES_OK(context,context->allocate_output(3,TensorShape{b,m},&idx2_tensor));
auto dist2_flat=dist2_tensor->flat<float>();
auto idx2_flat=idx2_tensor->flat<int>();
float * dist1=&(dist1_flat(0));
int * idx1=&(idx1_flat(0));
float * dist2=&(dist2_flat(0));
int * idx2=&(idx2_flat(0));
nnsearch(b,n,m,xyz1,xyz2,dist1,idx1);
nnsearch(b,m,n,xyz2,xyz1,dist2,idx2);
}
};
REGISTER_KERNEL_BUILDER(Name("NnDistance").Device(DEVICE_CPU), NnDistanceOp);
class NnDistanceGradOp : public OpKernel{
public:
explicit NnDistanceGradOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
const Tensor& xyz2_tensor=context->input(1);
const Tensor& grad_dist1_tensor=context->input(2);
const Tensor& idx1_tensor=context->input(3);
const Tensor& grad_dist2_tensor=context->input(4);
const Tensor& idx2_tensor=context->input(5);
OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz1 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz1"));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz2 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz2"));
int m=xyz2_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistanceGrad expects xyz1 and xyz2 have same batch size"));
OP_REQUIRES(context,grad_dist1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires grad_dist1 be of shape(batch,#points)"));
OP_REQUIRES(context,idx1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires idx1 be of shape(batch,#points)"));
OP_REQUIRES(context,grad_dist2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires grad_dist2 be of shape(batch,#points)"));
OP_REQUIRES(context,idx2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires idx2 be of shape(batch,#points)"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&xyz1_flat(0);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&xyz2_flat(0);
auto idx1_flat=idx1_tensor.flat<int>();
const int * idx1=&idx1_flat(0);
auto idx2_flat=idx2_tensor.flat<int>();
const int * idx2=&idx2_flat(0);
auto grad_dist1_flat=grad_dist1_tensor.flat<float>();
const float * grad_dist1=&grad_dist1_flat(0);
auto grad_dist2_flat=grad_dist2_tensor.flat<float>();
const float * grad_dist2=&grad_dist2_flat(0);
Tensor * grad_xyz1_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad_xyz1_tensor));
Tensor * grad_xyz2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad_xyz2_tensor));
auto grad_xyz1_flat=grad_xyz1_tensor->flat<float>();
float * grad_xyz1=&grad_xyz1_flat(0);
auto grad_xyz2_flat=grad_xyz2_tensor->flat<float>();
float * grad_xyz2=&grad_xyz2_flat(0);
for (int i=0;i<b*n*3;i++)
grad_xyz1[i]=0;
for (int i=0;i<b*m*3;i++)
grad_xyz2[i]=0;
for (int i=0;i<b;i++){
for (int j=0;j<n;j++){
float x1=xyz1[(i*n+j)*3+0];
float y1=xyz1[(i*n+j)*3+1];
float z1=xyz1[(i*n+j)*3+2];
int j2=idx1[i*n+j];
float x2=xyz2[(i*m+j2)*3+0];
float y2=xyz2[(i*m+j2)*3+1];
float z2=xyz2[(i*m+j2)*3+2];
float g=grad_dist1[i*n+j]*2;
grad_xyz1[(i*n+j)*3+0]+=g*(x1-x2);
grad_xyz1[(i*n+j)*3+1]+=g*(y1-y2);
grad_xyz1[(i*n+j)*3+2]+=g*(z1-z2);
grad_xyz2[(i*m+j2)*3+0]-=(g*(x1-x2));
grad_xyz2[(i*m+j2)*3+1]-=(g*(y1-y2));
grad_xyz2[(i*m+j2)*3+2]-=(g*(z1-z2));
}
for (int j=0;j<m;j++){
float x1=xyz2[(i*m+j)*3+0];
float y1=xyz2[(i*m+j)*3+1];
float z1=xyz2[(i*m+j)*3+2];
int j2=idx2[i*m+j];
float x2=xyz1[(i*n+j2)*3+0];
float y2=xyz1[(i*n+j2)*3+1];
float z2=xyz1[(i*n+j2)*3+2];
float g=grad_dist2[i*m+j]*2;
grad_xyz2[(i*m+j)*3+0]+=g*(x1-x2);
grad_xyz2[(i*m+j)*3+1]+=g*(y1-y2);
grad_xyz2[(i*m+j)*3+2]+=g*(z1-z2);
grad_xyz1[(i*n+j2)*3+0]-=(g*(x1-x2));
grad_xyz1[(i*n+j2)*3+1]-=(g*(y1-y2));
grad_xyz1[(i*n+j2)*3+2]-=(g*(z1-z2));
}
}
}
};
REGISTER_KERNEL_BUILDER(Name("NnDistanceGrad").Device(DEVICE_CPU), NnDistanceGradOp);
void NmDistanceKernelLauncher(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i);
class NnDistanceGpuOp : public OpKernel{
public:
explicit NnDistanceGpuOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
const Tensor& xyz2_tensor=context->input(1);
OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz1 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz1"));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistance requires xyz2 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistance only accepts 3d point set xyz2"));
int m=xyz2_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistance expects xyz1 and xyz2 have same batch size"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&xyz1_flat(0);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&xyz2_flat(0);
Tensor * dist1_tensor=NULL;
Tensor * idx1_tensor=NULL;
Tensor * dist2_tensor=NULL;
Tensor * idx2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n},&dist1_tensor));
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,n},&idx1_tensor));
auto dist1_flat=dist1_tensor->flat<float>();
auto idx1_flat=idx1_tensor->flat<int>();
OP_REQUIRES_OK(context,context->allocate_output(2,TensorShape{b,m},&dist2_tensor));
OP_REQUIRES_OK(context,context->allocate_output(3,TensorShape{b,m},&idx2_tensor));
auto dist2_flat=dist2_tensor->flat<float>();
auto idx2_flat=idx2_tensor->flat<int>();
float * dist1=&(dist1_flat(0));
int * idx1=&(idx1_flat(0));
float * dist2=&(dist2_flat(0));
int * idx2=&(idx2_flat(0));
NmDistanceKernelLauncher(b,n,xyz1,m,xyz2,dist1,idx1,dist2,idx2);
}
};
REGISTER_KERNEL_BUILDER(Name("NnDistance").Device(DEVICE_GPU), NnDistanceGpuOp);
void NmDistanceGradKernelLauncher(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2);
class NnDistanceGradGpuOp : public OpKernel{
public:
explicit NnDistanceGradGpuOp(OpKernelConstruction* context):OpKernel(context){}
void Compute(OpKernelContext * context)override{
const Tensor& xyz1_tensor=context->input(0);
const Tensor& xyz2_tensor=context->input(1);
const Tensor& grad_dist1_tensor=context->input(2);
const Tensor& idx1_tensor=context->input(3);
const Tensor& grad_dist2_tensor=context->input(4);
const Tensor& idx2_tensor=context->input(5);
OP_REQUIRES(context,xyz1_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz1 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz1"));
int b=xyz1_tensor.shape().dim_size(0);
int n=xyz1_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.dims()==3,errors::InvalidArgument("NnDistanceGrad requires xyz2 be of shape (batch,#points,3)"));
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(2)==3,errors::InvalidArgument("NnDistanceGrad only accepts 3d point set xyz2"));
int m=xyz2_tensor.shape().dim_size(1);
OP_REQUIRES(context,xyz2_tensor.shape().dim_size(0)==b,errors::InvalidArgument("NnDistanceGrad expects xyz1 and xyz2 have same batch size"));
OP_REQUIRES(context,grad_dist1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires grad_dist1 be of shape(batch,#points)"));
OP_REQUIRES(context,idx1_tensor.shape()==(TensorShape{b,n}),errors::InvalidArgument("NnDistanceGrad requires idx1 be of shape(batch,#points)"));
OP_REQUIRES(context,grad_dist2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires grad_dist2 be of shape(batch,#points)"));
OP_REQUIRES(context,idx2_tensor.shape()==(TensorShape{b,m}),errors::InvalidArgument("NnDistanceGrad requires idx2 be of shape(batch,#points)"));
auto xyz1_flat=xyz1_tensor.flat<float>();
const float * xyz1=&xyz1_flat(0);
auto xyz2_flat=xyz2_tensor.flat<float>();
const float * xyz2=&xyz2_flat(0);
auto idx1_flat=idx1_tensor.flat<int>();
const int * idx1=&idx1_flat(0);
auto idx2_flat=idx2_tensor.flat<int>();
const int * idx2=&idx2_flat(0);
auto grad_dist1_flat=grad_dist1_tensor.flat<float>();
const float * grad_dist1=&grad_dist1_flat(0);
auto grad_dist2_flat=grad_dist2_tensor.flat<float>();
const float * grad_dist2=&grad_dist2_flat(0);
Tensor * grad_xyz1_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&grad_xyz1_tensor));
Tensor * grad_xyz2_tensor=NULL;
OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,m,3},&grad_xyz2_tensor));
auto grad_xyz1_flat=grad_xyz1_tensor->flat<float>();
float * grad_xyz1=&grad_xyz1_flat(0);
auto grad_xyz2_flat=grad_xyz2_tensor->flat<float>();
float * grad_xyz2=&grad_xyz2_flat(0);
NmDistanceGradKernelLauncher(b,n,xyz1,m,xyz2,grad_dist1,idx1,grad_dist2,idx2,grad_xyz1,grad_xyz2);
}
};
REGISTER_KERNEL_BUILDER(Name("NnDistanceGrad").Device(DEVICE_GPU), NnDistanceGradGpuOp);
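The forward pass of the NnDistance op reduces to a brute-force squared-distance search in both directions, which can be stated as a hypothetical NumPy reference (mirroring `nnsearch()` above):

```python
import numpy as np

def nn_distance_np(xyz1, xyz2):
    """NumPy sketch of NnDistance: squared nearest-neighbour distances and
    indices in both directions.  xyz1: (b, n, 3), xyz2: (b, m, 3)."""
    d2 = ((xyz1[:, :, None, :] - xyz2[:, None, :, :]) ** 2).sum(-1)  # (b, n, m)
    return (d2.min(axis=2), d2.argmin(axis=2).astype(np.int32),
            d2.min(axis=1), d2.argmin(axis=1).astype(np.int32))
```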
================================================
FILE: OcCo_TF/pc_distance/tf_nndistance.cu
================================================
#if GOOGLE_CUDA
#define EIGEN_USE_GPU
// #include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
// Brute-force nearest neighbour from xyz to xyz2: xyz2 is tiled through
// shared memory in chunks of 512 points and the distance loop is unrolled x4.
__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){
const int batch=512;
__shared__ float buf[batch*3];
for (int i=blockIdx.x;i<b;i+=gridDim.x){
for (int k2=0;k2<m;k2+=batch){
int end_k=min(m,k2+batch)-k2;
for (int j=threadIdx.x;j<end_k*3;j+=blockDim.x){
buf[j]=xyz2[(i*m+k2)*3+j];
}
__syncthreads();
for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
float x1=xyz[(i*n+j)*3+0];
float y1=xyz[(i*n+j)*3+1];
float z1=xyz[(i*n+j)*3+2];
int best_i=0;
float best=0;
int end_ka=end_k-(end_k&3);
if (end_ka==batch){
for (int k=0;k<batch;k+=4){
{
float x2=buf[k*3+0]-x1;
float y2=buf[k*3+1]-y1;
float z2=buf[k*3+2]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (k==0 || d<best){
best=d;
best_i=k+k2;
}
}
{
float x2=buf[k*3+3]-x1;
float y2=buf[k*3+4]-y1;
float z2=buf[k*3+5]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+1;
}
}
{
float x2=buf[k*3+6]-x1;
float y2=buf[k*3+7]-y1;
float z2=buf[k*3+8]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+2;
}
}
{
float x2=buf[k*3+9]-x1;
float y2=buf[k*3+10]-y1;
float z2=buf[k*3+11]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+3;
}
}
}
}else{
for (int k=0;k<end_ka;k+=4){
{
float x2=buf[k*3+0]-x1;
float y2=buf[k*3+1]-y1;
float z2=buf[k*3+2]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (k==0 || d<best){
best=d;
best_i=k+k2;
}
}
{
float x2=buf[k*3+3]-x1;
float y2=buf[k*3+4]-y1;
float z2=buf[k*3+5]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+1;
}
}
{
float x2=buf[k*3+6]-x1;
float y2=buf[k*3+7]-y1;
float z2=buf[k*3+8]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+2;
}
}
{
float x2=buf[k*3+9]-x1;
float y2=buf[k*3+10]-y1;
float z2=buf[k*3+11]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (d<best){
best=d;
best_i=k+k2+3;
}
}
}
}
for (int k=end_ka;k<end_k;k++){
float x2=buf[k*3+0]-x1;
float y2=buf[k*3+1]-y1;
float z2=buf[k*3+2]-z1;
float d=x2*x2+y2*y2+z2*z2;
if (k==0 || d<best){
best=d;
best_i=k+k2;
}
}
if (k2==0 || result[(i*n+j)]>best){
result[(i*n+j)]=best;
result_i[(i*n+j)]=best_i;
}
}
__syncthreads();
}
}
}
void NmDistanceKernelLauncher(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i){
NmDistanceKernel<<<dim3(32,16,1),512>>>(b,n,xyz,m,xyz2,result,result_i);
NmDistanceKernel<<<dim3(32,16,1),512>>>(b,m,xyz2,n,xyz,result2,result2_i);
}
__global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){
for (int i=blockIdx.x;i<b;i+=gridDim.x){
for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
float x1=xyz1[(i*n+j)*3+0];
float y1=xyz1[(i*n+j)*3+1];
float z1=xyz1[(i*n+j)*3+2];
int j2=idx1[i*n+j];
float x2=xyz2[(i*m+j2)*3+0];
float y2=xyz2[(i*m+j2)*3+1];
float z2=xyz2[(i*m+j2)*3+2];
float g=grad_dist1[i*n+j]*2;
atomicAdd(&(grad_xyz1[(i*n+j)*3+0]),g*(x1-x2));
atomicAdd(&(grad_xyz1[(i*n+j)*3+1]),g*(y1-y2));
atomicAdd(&(grad_xyz1[(i*n+j)*3+2]),g*(z1-z2));
atomicAdd(&(grad_xyz2[(i*m+j2)*3+0]),-(g*(x1-x2)));
atomicAdd(&(grad_xyz2[(i*m+j2)*3+1]),-(g*(y1-y2)));
atomicAdd(&(grad_xyz2[(i*m+j2)*3+2]),-(g*(z1-z2)));
}
}
}
void NmDistanceGradKernelLauncher(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2){
cudaMemset(grad_xyz1,0,b*n*3*sizeof(float));
cudaMemset(grad_xyz2,0,b*m*3*sizeof(float));
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(b,n,xyz1,m,xyz2,grad_dist1,idx1,grad_xyz1,grad_xyz2);
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(b,m,xyz2,n,xyz1,grad_dist2,idx2,grad_xyz2,grad_xyz1);
}
#endif
================================================
FILE: OcCo_TF/pc_distance/tf_nndistance.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
"""Scripts for Chamfer Distance"""
import os, tensorflow as tf
from tensorflow.python.framework import ops
os.environ["LD_LIBRARY_PATH"] = "/usr/local/lib/python3.5/dist-packages/tensorflow/"
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
nn_distance_module = tf.load_op_library(os.path.join(BASE_DIR, 'tf_nndistance_so.so'))
def nn_distance(xyz1, xyz2):
"""
Computes the distance of nearest neighbors for a pair of point clouds
input: xyz1: (batch_size,#points_1,3) the first point cloud
input: xyz2: (batch_size,#points_2,3) the second point cloud
output: dist1: (batch_size,#point_1) distance from first to second
output: idx1: (batch_size,#point_1) nearest neighbor from first to second
output: dist2: (batch_size,#point_2) distance from second to first
output: idx2: (batch_size,#point_2) nearest neighbor from second to first
"""
return nn_distance_module.nn_distance(xyz1, xyz2)
@ops.RegisterGradient('NnDistance')
def _nn_distance_grad(op, grad_dist1, grad_idx1, grad_dist2, grad_idx2):
xyz1 = op.inputs[0]
xyz2 = op.inputs[1]
idx1 = op.outputs[1]
idx2 = op.outputs[3]
return nn_distance_module.nn_distance_grad(xyz1, xyz2, grad_dist1, idx1, grad_dist2, idx2)
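# Note (for reference, mirroring NmDistanceGradKernel in tf_nndistance.cu):
# since the op returns squared distances,
#   d(dist1[i, j]) / d(xyz1[i, j]) = 2 * (xyz1[i, j] - xyz2[i, idx1[i, j]])
# and the same quantity with the opposite sign is accumulated (via atomicAdd)
# into the matched point of xyz2; grad_dist2/idx2 contribute symmetrically.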
if __name__ == '__main__':
import random, numpy as np
random.seed(100)
np.random.seed(100)
''' === test code ==='''
with tf.Session('') as sess:
xyz1 = np.random.randn(32, 16384, 3).astype('float32')
xyz2 = np.random.randn(32, 1024, 3).astype('float32')
# with tf.device('/gpu:0'):
if True:
inp1 = tf.Variable(xyz1)
inp2 = tf.constant(xyz2)
reta, retb, retc, retd = nn_distance(inp1, inp2)
loss = tf.reduce_mean(reta) + tf.reduce_mean(retc)
train = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(loss)
sess.run(tf.global_variables_initializer())
best = 1e100
for i in range(1):
# loss_val, _ = sess.run([loss, train])
loss_val = sess.run(loss)  # do not rebind the `loss` tensor inside the loop
best = min(best, loss_val)
print(i, loss_val, best)
================================================
FILE: OcCo_TF/readme.md
================================================
## OcCo in TensorFlow
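
This folder builds a custom `nn_distance` op (see `pc_distance/tf_nndistance.*`) that returns, for each point, the squared distance to (and the index of) its nearest neighbour in the other cloud. As a minimal sketch of what the compiled op computes, here is a NumPy reference for a single un-batched pair of clouds (an illustration only, not the CUDA implementation):

```python
import numpy as np

def nn_distance_reference(xyz1, xyz2):
    """Squared nearest-neighbour distances between two point clouds.

    xyz1: (n, 3), xyz2: (m, 3)
    returns (dist1, idx1, dist2, idx2), matching the op's per-point outputs.
    """
    # pairwise squared distances, shape (n, m)
    d = ((xyz1[:, None, :] - xyz2[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1), d.argmin(axis=1), d.min(axis=0), d.argmin(axis=0)
```

A symmetric Chamfer-style loss then combines both directions, e.g. the test block in `tf_nndistance.py` uses `tf.reduce_mean(dist1) + tf.reduce_mean(dist2)`.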
================================================
FILE: OcCo_TF/train_cls.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
import os, sys, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
from termcolor import colored
from utils.Dataset_Assign import Dataset_Assign
from utils.EarlyStoppingCriterion import EarlyStoppingCriterion
from utils.tf_util import add_train_summary, get_bn_decay, get_learning_rate
from utils.io_util import shuffle_data, loadh5DataFile
from utils.pc_util import rotate_point_cloud, jitter_point_cloud, random_point_dropout, \
random_scale_point_cloud, random_shift_point_cloud
# from utils.transfer_pretrained_w import load_pretrained_var
parser = argparse.ArgumentParser()
''' === Basic Learning Settings === '''
parser.add_argument('--gpu', type=int, default=1)
parser.add_argument('--log_dir', default='log/log_cls/pointnet_cls')
parser.add_argument('--model', default='pointnet_cls')
parser.add_argument('--epoch', type=int, default=200)
parser.add_argument('--restore', action='store_true')
parser.add_argument('--restore_path', default='log/pointnet_cls')
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--num_point', type=int, default=1024)
parser.add_argument('--base_lr', type=float, default=0.001)
parser.add_argument('--lr_clip', type=float, default=1e-5)
parser.add_argument('--decay_steps', type=int, default=20)
parser.add_argument('--decay_rate', type=float, default=0.7)
# parser.add_argument('--verbose', type=bool, default=True)
parser.add_argument('--dataset', type=str, default='modelnet40')
parser.add_argument('--partial', action='store_true')
parser.add_argument('--filename', type=str, default='')
parser.add_argument('--data_bn', action='store_true')
''' === Data Augmentation Settings === '''
parser.add_argument('--data_aug', action='store_true')
parser.add_argument('--just_save', action='store_true') # pretrained encoder restoration
parser.add_argument('--patience', type=int, default=200)  # early stopping patience; 200 effectively disables it
parser.add_argument('--fewshot', action='store_true')
args = parser.parse_args()
NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
BATCH_SIZE = args.batch_size
NUM_POINT = args.num_point
BASE_LR = args.base_lr
LR_CLIP = args.lr_clip
DECAY_RATE = args.decay_rate
# DECAY_STEP = args.decay_steps
DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
BN_INIT_DECAY = 0.5
BN_DECAY_RATE = 0.5
BN_DECAY_STEP = float(DECAY_STEP)
BN_DECAY_CLIP = 0.99
LOG_DIR = args.log_dir
BEST_EVAL_ACC = 0
os.makedirs(LOG_DIR, exist_ok=True)
LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'a+')
def log_string(out_str):
LOG_FOUT.write(out_str + '\n')
LOG_FOUT.flush()
print(out_str)
def train(args):
log_string('\n\n' + '=' * 44)
log_string('Start Training, Time: %s' % datetime.datetime.now())
log_string('=' * 44 + '\n\n')
is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
global_step = tf.Variable(0, trainable=False, name='global_step') # will be used in defining train_op
inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
bn_decay = get_bn_decay(global_step, BN_INIT_DECAY, BATCH_SIZE, BN_DECAY_STEP, BN_DECAY_RATE, BN_DECAY_CLIP)
# model_module = importlib.import_module('.%s' % args.model, 'cls_models')
# MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
''' === To fix issues when running on woma === '''
ldic = locals()
exec('from cls_models.%s import Model' % args.model, globals(), ldic)
MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
pred, loss = MODEL.pred, MODEL.loss
tf.summary.scalar('loss', loss)
# useful information in displaying during training
correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)
tf.summary.scalar('accuracy', accuracy)
learning_rate = get_learning_rate(global_step, BASE_LR, BATCH_SIZE, DECAY_STEP, DECAY_RATE, LR_CLIP)
add_train_summary('learning_rate', learning_rate)
trainer = tf.train.AdamOptimizer(learning_rate)
train_op = trainer.minimize(MODEL.loss, global_step)
saver = tf.train.Saver()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
# config.log_device_placement = True
sess = tf.Session(config=config)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)
val_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'val'))
# Init variables
init = tf.global_variables_initializer()
log_string('\nModel Parameters have been Initialized\n')
sess.run(init, {is_training_pl: True}) # restore will cover the random initialized parameters
# to save the randomized variables
if not args.restore and args.just_save:
save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
print(colored('random initialised model saved at %s' % save_path, 'white', 'on_blue'))
print(colored('just save the model, now exit', 'white', 'on_red'))
sys.exit()
'''current solution: first load pretrained head, assemble with output layers then save as a checkpoint'''
# to partially load the saved head from:
# if args.load_pretrained_head:
# sess.close()
# load_pretrained_head(args.pretrained_head_path, os.path.join(LOG_DIR, 'model.ckpt'), None, args.verbose)
# print('shared variables have been restored from ', args.pretrained_head_path)
#
# sess = tf.Session(config=config)
# log_string('\nModel Parameters has been Initialized\n')
# sess.run(init, {is_training_pl: True})
# saver.restore(sess, tf.train.latest_checkpoint(LOG_DIR))
# log_string('\nModel Parameters have been restored with pretrained weights from %s' % args.pretrained_head_path)
if args.restore:
# load_pretrained_var(args.restore_path, os.path.join(LOG_DIR, "model.ckpt"), args.verbose)
saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
log_string('\n')
log_string(colored('Model Parameters have been restored from %s' % args.restore_path, 'white', 'on_red'))
for arg in sorted(vars(args)):
print(arg + ': ' + str(getattr(args, arg)) + '\n') # log of arguments
os.system('cp cls_models/%s.py %s' % (args.model, LOG_DIR)) # bkp of model def
os.system('cp train_cls.py %s' % LOG_DIR) # bkp of train procedure
train_start = time.time()
ops = {'pointclouds_pl': inputs_pl,
'labels_pl': labels_pl,
'is_training_pl': is_training_pl,
'npts_pl': npts_pl,
'pred': pred,
'loss': loss,
'train_op': train_op,
'merged': merged,
'step': global_step}
ESC = EarlyStoppingCriterion(patience=args.patience)
for epoch in range(args.epoch):
log_string('\n\n')
log_string(colored('**** EPOCH %03d ****' % epoch, 'grey', 'on_green'))
sys.stdout.flush()
'''=== training the model ==='''
train_one_epoch(sess, ops, train_writer)
'''=== evaluating the model ==='''
eval_mean_loss, eval_acc, eval_cls_acc = eval_one_epoch(sess, ops, val_writer)
'''=== check whether to early stop ==='''
early_stop, save_checkpoint = ESC.step(eval_acc, epoch=epoch)
if save_checkpoint:
save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
log_string(colored('model saved at %s' % save_path, 'white', 'on_blue'))
if early_stop:
break
log_string('total time: %s' % datetime.timedelta(seconds=time.time() - train_start))
log_string('stop epoch: %d, best eval acc: %f' % (ESC.best_epoch, ESC.best_dev_score))
sess.close()
def train_one_epoch(sess, ops, train_writer):
is_training = True
total_correct, total_seen, loss_sum = 0, 0, 0
train_file_idxs = np.arange(0, len(TRAIN_FILES))
np.random.shuffle(train_file_idxs)
for fn in range(len(TRAIN_FILES)):
current_data, current_label = loadh5DataFile(TRAIN_FILES[train_file_idxs[fn]])
current_data = current_data[:, :NUM_POINT, :]
current_data, current_label, _ = shuffle_data(current_data, np.squeeze(current_label))
current_label = np.squeeze(current_label)
file_size = current_data.shape[0]
num_batches = file_size // BATCH_SIZE
for batch_idx in range(num_batches):
start_idx = batch_idx * BATCH_SIZE
end_idx = (batch_idx + 1) * BATCH_SIZE
feed_data = current_data[start_idx:end_idx, :, :]
if args.data_aug:
feed_data = random_point_dropout(feed_data)
feed_data[:, :, 0:3] = random_scale_point_cloud(feed_data[:, :, 0:3])
feed_data[:, :, 0:3] = random_shift_point_cloud(feed_data[:, :, 0:3])
feed_dict = {
ops['pointclouds_pl']: feed_data.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
ops['is_training_pl']: is_training}
summary, step, _, loss_val, pred_val = sess.run([
ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
train_writer.add_summary(summary, step)
pred_val = np.argmax(pred_val, 1)
correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
total_correct += correct
total_seen += BATCH_SIZE
loss_sum += loss_val
log_string('\n=== training ===')
log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
# log_string('mean batch loss: %f' % (loss_sum / num_batches))
log_string('accuracy: %f' % (total_correct / float(total_seen)))
def eval_one_epoch(sess, ops, val_writer):
is_training = False
total_correct, total_seen, loss_sum = 0, 0, 0
total_seen_class = [0 for _ in range(NUM_CLASSES)]
total_correct_class = [0 for _ in range(NUM_CLASSES)]
for fn in VALID_FILES:
current_data, current_label = loadh5DataFile(fn)
current_data = current_data[:, :NUM_POINT, :]
file_size = current_data.shape[0]
num_batches = file_size // BATCH_SIZE
for batch_idx in range(num_batches):
start_idx, end_idx = batch_idx * BATCH_SIZE, (batch_idx + 1) * BATCH_SIZE
feed_dict = {
ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :].reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
ops['is_training_pl']: is_training}
summary, step, loss_val, pred_val = sess.run(
[ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
val_writer.add_summary(summary, step)
pred_val = np.argmax(pred_val, 1)
correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
total_correct += correct
total_seen += BATCH_SIZE
loss_sum += (loss_val * BATCH_SIZE)
for i in range(start_idx, end_idx):
l = int(current_label.reshape(-1)[i])
total_seen_class[l] += 1
total_correct_class[l] += (pred_val[i - start_idx] == l)
eval_mean_loss = loss_sum / float(total_seen)
eval_acc = total_correct / float(total_seen)
eval_cls_acc = np.mean(np.array(total_correct_class) / np.array(total_seen_class, dtype=float))
log_string('\n=== evaluating ===')
log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
log_string('eval mean loss: %f' % eval_mean_loss)
log_string('eval accuracy: %f' % eval_acc)
log_string('eval avg class acc: %f' % eval_cls_acc)
global BEST_EVAL_ACC
if eval_acc > BEST_EVAL_ACC:
BEST_EVAL_ACC = eval_acc
log_string('best eval accuracy: %f' % BEST_EVAL_ACC)
return eval_mean_loss, eval_acc, eval_cls_acc
if __name__ == '__main__':
print('Now Using GPU:%d to train the model' % args.gpu)
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
train(args)
LOG_FOUT.close()
================================================
FILE: OcCo_TF/train_cls_dgcnn_torchloader.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
# Ref: https://github.com/hansen7/NRS_3D/blob/master/train_dgcnn_cls.py
import os, sys, pdb, shutil, argparse, numpy as np, tensorflow as tf
from tqdm import tqdm
from termcolor import colored
from utils.Train_Logger import TrainLogger
from utils.Dataset_Assign import Dataset_Assign
# from utils.tf_util import get_bn_decay, get_lr_dgcnn
# from utils.io_util import shuffle_data, loadh5DataFile
# from utils.transfer_pretrained_w import load_pretrained_var
from utils.pc_util import random_point_dropout, random_scale_point_cloud, random_shift_point_cloud
from utils.ModelNetDataLoader import General_CLSDataLoader_HDF5
from torch.utils.data import DataLoader
def parse_args():
parser = argparse.ArgumentParser(description='DGCNN Point Cloud Recognition Training Configuration')
parser.add_argument('--gpu', type=str, default='0')
parser.add_argument('--log_dir', default='occo_dgcnn_cls')
parser.add_argument('--model', default='dgcnn_cls')
parser.add_argument('--epoch', type=int, default=250)
parser.add_argument('--restore', action='store_true')
parser.add_argument('--restore_path', type=str, default='')
parser.add_argument('--batch_size', type=int, default=24)
parser.add_argument('--num_points', type=int, default=1024)
parser.add_argument('--base_lr', type=float, default=0.001)
# parser.add_argument('--decay_steps', type=int, default=20)
# parser.add_argument('--decay_rate', type=float, default=0.7)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--dataset', type=str, default='modelnet40')
parser.add_argument('--filename', type=str, default='')
parser.add_argument('--data_bn', action='store_true')
parser.add_argument('--partial', action='store_true')
parser.add_argument('--data_aug', action='store_true')
parser.add_argument('--just_save', action='store_true')  # used only when restoring a pretrained encoder
parser.add_argument('--fewshot', action='store_true')
return parser.parse_args()
args = parse_args()
DATA_PATH = 'data/modelnet40_normal_resampled/'
NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
BATCH_SIZE, NUM_POINT = args.batch_size, args.num_points
# DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
TRAIN_DATASET = General_CLSDataLoader_HDF5(file_list=TRAIN_FILES, num_point=1024)
TEST_DATASET = General_CLSDataLoader_HDF5(file_list=VALID_FILES, num_point=1024)
trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=BATCH_SIZE, shuffle=True, num_workers=4, drop_last=True)
testDataLoader = DataLoader(TEST_DATASET, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
# reduce the num_workers if the loaded data are huge, ref: https://github.com/pytorch/pytorch/issues/973
def main(args):
MyLogger = TrainLogger(args, name=args.model.upper(), subfold='log_cls')
shutil.copy(os.path.join('cls_models', '%s.py' % args.model), MyLogger.log_dir)
shutil.copy(os.path.abspath(__file__), MyLogger.log_dir)
# is_training_pl -> to decide whether to apply batch normalisation
is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
global_step = tf.Variable(0, trainable=False, name='global_step')
inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
# bn_decay = get_bn_decay(batch=global_step, bn_init_decay=0.5, batch_size=args.batch_size,
# bn_decay_step=DECAY_STEP, bn_decay_rate=0.5, bn_decay_clip=0.99)
bn_decay = 0.9
# See "BatchNorm1d" in https://pytorch.org/docs/stable/nn.html
''' === fix issues of importlib when running on some servers (i.e., woma) === '''
# model_module = importlib.import_module('.%s' % args.model_type, 'cls_models')
# MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
ldic = locals()
exec('from cls_models.%s import Model' % args.model, globals(), ldic)
MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
pred, loss = MODEL.pred, MODEL.loss
tf.summary.scalar('loss', loss)
correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(args.batch_size)
tf.summary.scalar('accuracy', accuracy)
''' === Learning Rate === '''
def get_lr_dgcnn(args, global_step, alpha):
learning_rate = tf.train.cosine_decay(
learning_rate=100 * args.base_lr, # Base Learning Rate, 0.1
global_step=global_step, # Training Step Index
decay_steps=NUM_TRAINOBJECTS//BATCH_SIZE * args.epoch, # Total Training Step
alpha=alpha # Fraction of the Minimum Value of the Set lr
)
# learning_rate = tf.maximum(learning_rate, args.base_lr)
return learning_rate
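# For reference, tf.train.cosine_decay (without warm restarts) computes
#   lr(step) = lr0 * (alpha + (1 - alpha) * 0.5 * (1 + cos(pi * step / decay_steps)))
# so with lr0 = 100 * base_lr = 0.1 and alpha = 0.01 the rate anneals
# smoothly from 0.1 at step 0 down to 0.001 at the end of training.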
learning_rate = get_lr_dgcnn(args, global_step, alpha=0.01)
tf.summary.scalar('learning rate', learning_rate)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt, args.epoch, eta_min=args.lr)
# doc: https://pytorch.org/docs/stable/optim.html
# doc: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/cosine_decay
''' === Optimiser === '''
# trainer = tf.train.GradientDescentOptimizer(learning_rate)
trainer = tf.train.MomentumOptimizer(learning_rate, momentum=args.momentum)
# equivalent to torch.optim.SGD
# doc: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/MomentumOptimizer
# another alternative is to use keras
# trainer = tf.keras.optimizers.SGD(learning_rate, momentum=args.momentum)
# doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/keras/optimizers/SGD
# opt = torch.optim.SGD(model.parameters(), lr=args.lr * 100, momentum=args.momentum, weight_decay=1e-4)
train_op = trainer.minimize(loss=MODEL.loss, global_step=global_step)
saver = tf.train.Saver()
# ref: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto
config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# config.allow_soft_placement = True # Uncomment it if GPU option is not available
# config.log_device_placement = True # Uncomment it if you want device placements to be logged
sess = tf.Session(config=config)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(os.path.join(MyLogger.experiment_dir, 'runs', 'train'), sess.graph)
val_writer = tf.summary.FileWriter(os.path.join(MyLogger.experiment_dir, 'runs', 'valid'), sess.graph)
# Initialise all the variables of the models
init = tf.global_variables_initializer()
sess.run(init, {is_training_pl: True})
# to save the randomized initialised models then exit
if args.just_save:
save_path = saver.save(sess, os.path.join(MyLogger.checkpoints_dir, "model.ckpt"))
print(colored('random initialised model saved at %s' % save_path, 'white', 'on_blue'))
print(colored('just save the model, now exit', 'white', 'on_red'))
sys.exit()
'''current solution: first load pretrained encoder,
assemble with randomly initialised FC layers then save to the checkpoint'''
if args.restore:
saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
MyLogger.logger.info('Model Parameters have been Restored')
ops = {'pointclouds_pl': inputs_pl,
'labels_pl': labels_pl,
'is_training_pl': is_training_pl,
'npts_pl': npts_pl,
'pred': pred,
'loss': loss,
'train_op': train_op,
'merged': merged,
'step': global_step}
for epoch in range(args.epoch):
'''=== training the model ==='''
train_one_epoch(sess, ops, MyLogger, train_writer)
'''=== evaluating the model ==='''
save_checkpoint = eval_one_epoch(sess, ops, MyLogger, val_writer)
'''=== check whether to store the checkpoints ==='''
if save_checkpoint:
save_path = saver.save(sess, os.path.join(MyLogger.savepath, "model.ckpt"))
MyLogger.logger.info('model saved at %s' % MyLogger.savepath)
sess.close()
MyLogger.train_summary()
def train_one_epoch(sess, ops, MyLogger, train_writer):
is_training = True
MyLogger.epoch_init(training=is_training)
for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
# pdb.set_trace()
points, target = points.numpy(), target.numpy()
if args.data_aug:
points = random_point_dropout(points)
points[:, :, 0:3] = random_scale_point_cloud(points[:, :, 0:3])
points[:, :, 0:3] = random_shift_point_cloud(points[:, :, 0:3])
feed_dict = {
ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: target.reshape(BATCH_SIZE, ),
ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
ops['is_training_pl']: is_training}
summary, step, _, loss, pred = sess.run([
ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
train_writer.add_summary(summary, step)
# pdb.set_trace()
MyLogger.step_update(np.argmax(pred, 1), target.reshape(BATCH_SIZE, ), loss)
MyLogger.epoch_summary(writer=None, training=is_training)
return None
def eval_one_epoch(sess, ops, MyLogger, val_writer):
is_training = False
MyLogger.epoch_init(training=is_training)
for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
# pdb.set_trace()
points, target = points.numpy(), target.numpy()
feed_dict = {
ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: target.reshape(BATCH_SIZE, ),
ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
ops['is_training_pl']: is_training}
summary, step, loss_val, pred_val = sess.run(
[ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
val_writer.add_summary(summary, step)
# pdb.set_trace()
MyLogger.step_update(np.argmax(pred_val, 1), target.reshape(BATCH_SIZE, ), loss_val)
MyLogger.epoch_summary(writer=None, training=is_training)
return MyLogger.save_model
if __name__ == '__main__':
print('Now Using GPU:%s to train the model' % args.gpu)
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
main(args)
================================================
FILE: OcCo_TF/train_cls_torchloader.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import os, sys, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
from tqdm import tqdm
from termcolor import colored
from utils.Dataset_Assign import Dataset_Assign
from utils.io_util import shuffle_data, loadh5DataFile
from utils.EarlyStoppingCriterion import EarlyStoppingCriterion
from utils.tf_util import add_train_summary, get_bn_decay, get_learning_rate
from utils.pc_util import rotate_point_cloud, jitter_point_cloud, random_point_dropout, \
random_scale_point_cloud, random_shift_point_cloud
# from utils.transfer_pretrained_w import load_pretrained_var
from utils.ModelNetDataLoader import General_CLSDataLoader_HDF5
from torch.utils.data import DataLoader
parser = argparse.ArgumentParser()
''' === Basic Learning Settings === '''
parser.add_argument('--gpu', type=int, default=1)
parser.add_argument('--log_dir', default='log/log_cls/pointnet_cls')
parser.add_argument('--model', default='pointnet_cls')
parser.add_argument('--epoch', type=int, default=200)
parser.add_argument('--restore', action='store_true')
parser.add_argument('--restore_path', default='log/pointnet_cls')
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--num_point', type=int, default=1024)
parser.add_argument('--base_lr', type=float, default=0.001)
parser.add_argument('--lr_clip', type=float, default=1e-5)
parser.add_argument('--decay_steps', type=int, default=20)
parser.add_argument('--decay_rate', type=float, default=0.7)
# parser.add_argument('--verbose', type=bool, default=True)
parser.add_argument('--dataset', type=str, default='modelnet40')
parser.add_argument('--partial', action='store_true')
parser.add_argument('--filename', type=str, default='')
parser.add_argument('--data_bn', action='store_true')
''' === Data Augmentation Settings === '''
parser.add_argument('--data_aug', action='store_true')
parser.add_argument('--just_save', action='store_true') # pretrained encoder restoration
parser.add_argument('--patience', type=int, default=200)  # early stopping patience; 200 effectively disables it
parser.add_argument('--fewshot', action='store_true')
args = parser.parse_args()
DATA_PATH = 'data/modelnet40_normal_resampled/'
NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES = Dataset_Assign(
dataset=args.dataset, fname=args.filename, partial=args.partial, bn=args.data_bn, few_shot=args.fewshot)
TRAIN_DATASET = General_CLSDataLoader_HDF5(root=DATA_PATH, file_list=TRAIN_FILES, num_point=1024)
TEST_DATASET = General_CLSDataLoader_HDF5(root=DATA_PATH, file_list=VALID_FILES, num_point=1024)
trainDataLoader = DataLoader(TRAIN_DATASET, batch_size=args.batch_size, shuffle=True, num_workers=4, drop_last=True)
testDataLoader = DataLoader(TEST_DATASET, batch_size=args.batch_size, shuffle=False, num_workers=4, drop_last=True)
BATCH_SIZE = args.batch_size
NUM_POINT = args.num_point
BASE_LR = args.base_lr
LR_CLIP = args.lr_clip
DECAY_RATE = args.decay_rate
# DECAY_STEP = args.decay_steps
DECAY_STEP = NUM_TRAINOBJECTS//BATCH_SIZE * args.decay_steps
BN_INIT_DECAY = 0.5
BN_DECAY_RATE = 0.5
BN_DECAY_STEP = float(DECAY_STEP)
BN_DECAY_CLIP = 0.99
LOG_DIR = args.log_dir
BEST_EVAL_ACC = 0
os.makedirs(LOG_DIR, exist_ok=True)
LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'a+')
def log_string(out_str):
LOG_FOUT.write(out_str + '\n')
LOG_FOUT.flush()
print(out_str)
def train(args):
log_string('\n\n' + '=' * 50)
log_string('Start Training, Time: %s' % datetime.datetime.now())
log_string('=' * 50 + '\n\n')
is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
global_step = tf.Variable(0, trainable=False, name='global_step') # will be used in defining train_op
inputs_pl = tf.placeholder(tf.float32, (1, None, 3), 'inputs')
labels_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'labels')
npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
bn_decay = get_bn_decay(global_step, BN_INIT_DECAY, BATCH_SIZE, BN_DECAY_STEP, BN_DECAY_RATE, BN_DECAY_CLIP)
# model_module = importlib.import_module('.%s' % args.model, 'cls_models')
# MODEL = model_module.Model(inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
''' === To fix issues when running on woma === '''
ldic = locals()
exec('from cls_models.%s import Model' % args.model, globals(), ldic)
MODEL = ldic['Model'](inputs_pl, npts_pl, labels_pl, is_training_pl, bn_decay=bn_decay)
pred, loss = MODEL.pred, MODEL.loss
tf.summary.scalar('loss', loss)
# pdb.set_trace()
# useful information in displaying during training
correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))
accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)
tf.summary.scalar('accuracy', accuracy)
learning_rate = get_learning_rate(global_step, BASE_LR, BATCH_SIZE, DECAY_STEP, DECAY_RATE, LR_CLIP)
add_train_summary('learning_rate', learning_rate)
trainer = tf.train.AdamOptimizer(learning_rate)
train_op = trainer.minimize(MODEL.loss, global_step)
saver = tf.train.Saver()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
# config.log_device_placement = True
sess = tf.Session(config=config)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)
val_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'val'))
# Init variables
init = tf.global_variables_initializer()
log_string('\nModel Parameters have been Initialized\n')
sess.run(init, {is_training_pl: True}) # restore will cover the random initialized parameters
# to save the randomized variables
if not args.restore and args.just_save:
save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
print(colored('random initialised model saved at %s' % save_path, 'white', 'on_blue'))
print(colored('just save the model, now exit', 'white', 'on_red'))
sys.exit()
'''current solution: first load pretrained head, assemble with output layers then save as a checkpoint'''
# to partially load the saved head from:
# if args.load_pretrained_head:
# sess.close()
# load_pretrained_head(args.pretrained_head_path, os.path.join(LOG_DIR, 'model.ckpt'), None, args.verbose)
# print('shared variables have been restored from ', args.pretrained_head_path)
#
# sess = tf.Session(config=config)
# log_string('\nModel Parameters has been Initialized\n')
# sess.run(init, {is_training_pl: True})
# saver.restore(sess, tf.train.latest_checkpoint(LOG_DIR))
# log_string('\nModel Parameters have been restored with pretrained weights from %s' % args.pretrained_head_path)
if args.restore:
# load_pretrained_var(args.restore_path, os.path.join(LOG_DIR, "model.ckpt"), args.verbose)
saver.restore(sess, tf.train.latest_checkpoint(args.restore_path))
log_string('\n')
log_string(colored('Model Parameters have been restored from %s' % args.restore_path, 'white', 'on_red'))
for arg in sorted(vars(args)):
print(arg + ': ' + str(getattr(args, arg)) + '\n') # log of arguments
os.system('cp cls_models/%s.py %s' % (args.model, LOG_DIR)) # bkp of model def
os.system('cp train_cls.py %s' % LOG_DIR) # bkp of train procedure
train_start = time.time()
ops = {'pointclouds_pl': inputs_pl,
'labels_pl': labels_pl,
'is_training_pl': is_training_pl,
'npts_pl': npts_pl,
'pred': pred,
'loss': loss,
'train_op': train_op,
'merged': merged,
'step': global_step}
ESC = EarlyStoppingCriterion(patience=args.patience)
for epoch in range(args.epoch):
log_string('\n\n')
log_string(colored('**** EPOCH %03d ****' % epoch, 'grey', 'on_green'))
sys.stdout.flush()
'''=== training the model ==='''
train_one_epoch(sess, ops, train_writer)
'''=== evaluating the model ==='''
eval_mean_loss, eval_acc, eval_cls_acc = eval_one_epoch(sess, ops, val_writer)
'''=== check whether to early stop ==='''
early_stop, save_checkpoint = ESC.step(eval_acc, epoch=epoch)
if save_checkpoint:
save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
log_string(colored('model saved at %s' % save_path, 'white', 'on_blue'))
if early_stop:
break
log_string('total time: %s' % datetime.timedelta(seconds=time.time() - train_start))
log_string('stop epoch: %d, best eval acc: %f' % (ESC.best_epoch + 1, ESC.best_dev_score))
sess.close()
def train_one_epoch(sess, ops, train_writer):
is_training = True
total_correct, total_seen, loss_sum = 0, 0, 0
for points, target in tqdm(trainDataLoader, total=len(trainDataLoader), smoothing=0.9):
# pdb.set_trace()
points, target = points.numpy(), target.numpy()
if args.data_aug:
points = random_point_dropout(points)
points[:, :, 0:3] = random_scale_point_cloud(points[:, :, 0:3])
points[:, :, 0:3] = random_shift_point_cloud(points[:, :, 0:3])
feed_dict = {
ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: target.reshape(BATCH_SIZE, ),
ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
ops['is_training_pl']: is_training}
summary, step, _, loss_val, pred_val = sess.run([
ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
train_writer.add_summary(summary, step)
pred_val = np.argmax(pred_val, 1)
correct = np.sum(pred_val == target.reshape(BATCH_SIZE, ))
total_correct += correct
total_seen += BATCH_SIZE
# loss_sum += loss_val
# train_file_idxs = np.arange(0, len(TRAIN_FILES))
# np.random.shuffle(train_file_idxs)
#
# for fn in range(len(TRAIN_FILES)):
# current_data, current_label = loadh5DataFile(TRAIN_FILES[train_file_idxs[fn]])
# current_data = current_data[:, :NUM_POINT, :]
# current_data, current_label, _ = shuffle_data(current_data, np.squeeze(current_label))
# current_label = np.squeeze(current_label)
#
# file_size = current_data.shape[0]
# num_batches = file_size // BATCH_SIZE
#
# for batch_idx in range(num_batches):
# start_idx = batch_idx * BATCH_SIZE
# end_idx = (batch_idx + 1) * BATCH_SIZE
# feed_data = current_data[start_idx:end_idx, :, :]
#
# if args.data_aug:
# feed_data = random_point_dropout(feed_data)
# feed_data[:, :, 0:3] = random_scale_point_cloud(feed_data[:, :, 0:3])
# feed_data[:, :, 0:3] = random_shift_point_cloud(feed_data[:, :, 0:3])
#
# feed_dict = {
# ops['pointclouds_pl']: feed_data.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
# ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
# ops['npts_pl']: [NUM_POINT] * BATCH_SIZE,
# ops['is_training_pl']: is_training}
#
# summary, step, _, loss_val, pred_val = sess.run([
# ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
# train_writer.add_summary(summary, step)
#
# pred_val = np.argmax(pred_val, 1)
# correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
# total_correct += correct
# total_seen += BATCH_SIZE
# loss_sum += loss_val
log_string('\n=== training ===')
log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
# log_string('mean batch loss: %f' % (loss_sum / num_batches))
log_string('accuracy: %f' % (total_correct / float(total_seen)))
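train_one_epoch flattens each (BATCH_SIZE, NUM_POINT, 3) batch into a single (1, BATCH_SIZE * NUM_POINT, 3) cloud plus a per-object point count for npts_pl. A minimal NumPy sketch of this packing and its inverse (toy sizes, not the script defaults):

```python
import numpy as np

BATCH_SIZE, NUM_POINT = 4, 8  # toy sizes for illustration

# a (B, N, 3) batch, as produced by the torch dataloader
points = np.random.rand(BATCH_SIZE, NUM_POINT, 3).astype(np.float32)

# pack: PCN-style models consume one flat cloud plus per-object counts
packed = points.reshape([1, BATCH_SIZE * NUM_POINT, 3])
npts = np.array([NUM_POINT] * BATCH_SIZE)

# unpack: split the flat cloud back into the original objects
objects = np.split(packed[0], npts.cumsum()[:-1])

assert packed.shape == (1, BATCH_SIZE * NUM_POINT, 3)
assert all(np.allclose(o, p) for o, p in zip(objects, points))
```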
def eval_one_epoch(sess, ops, val_writer):
is_training = False
total_correct, total_seen, loss_sum = 0, 0, 0
total_seen_class = [0 for _ in range(NUM_CLASSES)]
total_correct_class = [0 for _ in range(NUM_CLASSES)]
for points, target in tqdm(testDataLoader, total=len(testDataLoader), smoothing=0.9):
# pdb.set_trace()
points, target = points.numpy(), target.numpy()
feed_dict = {
ops['pointclouds_pl']: points.reshape([1, BATCH_SIZE * NUM_POINT, 3]),
ops['labels_pl']: target.reshape(BATCH_SIZE, ),
ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
ops['is_training_pl']: is_training}
summary, step, loss_val, pred_val = sess.run(
[ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
val_writer.add_summary(summary, step)
pred_val = np.argmax(pred_val, 1)
correct = np.sum(pred_val == target.reshape(BATCH_SIZE, ))
total_correct += correct
total_seen += BATCH_SIZE
loss_sum += (loss_val * BATCH_SIZE)
for i, l in enumerate(target):
# l = int(target.reshape(-1)[i])
# pdb.set_trace()
total_seen_class[int(l)] += 1
total_correct_class[int(l)] += (int(pred_val[i]) == int(l))
# for fn in VALID_FILES:
# current_data, current_label = loadh5DataFile(fn)
# current_data = current_data[:, :NUM_POINT, :]
# file_size = current_data.shape[0]
# num_batches = file_size // BATCH_SIZE
#
# for batch_idx in range(num_batches):
# start_idx, end_idx = batch_idx * BATCH_SIZE, (batch_idx + 1) * BATCH_SIZE
#
# feed_dict = {
# ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :].reshape([1, BATCH_SIZE * NUM_POINT, 3]),
# ops['labels_pl']: current_label[start_idx:end_idx].reshape(BATCH_SIZE, ),
# ops['npts_pl']: np.array([NUM_POINT] * BATCH_SIZE),
# ops['is_training_pl']: is_training}
#
# summary, step, loss_val, pred_val = sess.run(
# [ops['merged'], ops['step'], ops['loss'], ops['pred']], feed_dict=feed_dict)
# val_writer.add_summary(summary, step)
# pred_val = np.argmax(pred_val, 1)
# correct = np.sum(pred_val == current_label[start_idx:end_idx].reshape(BATCH_SIZE, ))
# total_correct += correct
# total_seen += BATCH_SIZE
# loss_sum += (loss_val * BATCH_SIZE)
#
# for i in range(start_idx, end_idx):
# l = int(current_label.reshape(-1)[i])
# total_seen_class[l] += 1
# total_correct_class[l] += (pred_val[i - start_idx] == l)
eval_mean_loss = loss_sum / float(total_seen)
eval_acc = total_correct / float(total_seen)
eval_cls_acc = np.mean(np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))
log_string('\n=== evaluating ===')
log_string('total correct: %d, total_seen: %d' % (total_correct, total_seen))
log_string('eval mean loss: %f' % eval_mean_loss)
log_string('eval accuracy: %f' % eval_acc)
log_string('eval avg class acc: %f' % eval_cls_acc)
global BEST_EVAL_ACC
if eval_acc > BEST_EVAL_ACC:
BEST_EVAL_ACC = eval_acc
log_string('best eval accuracy: %f' % BEST_EVAL_ACC)
return eval_mean_loss, eval_acc, eval_cls_acc
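eval_one_epoch reports two metrics: overall (instance) accuracy, and the mean of per-class accuracies, which weights each class equally regardless of class frequency. A self-contained NumPy sketch with made-up predictions:

```python
import numpy as np

# toy predictions over NUM_CLASSES = 3 (hypothetical values)
labels = np.array([0, 0, 0, 1, 1, 2])
preds  = np.array([0, 0, 1, 1, 1, 0])

NUM_CLASSES = 3
seen    = np.bincount(labels, minlength=NUM_CLASSES)
correct = np.bincount(labels[preds == labels], minlength=NUM_CLASSES)

overall_acc = (preds == labels).mean()     # weighted by class frequency
mean_class_acc = np.mean(correct / seen)   # each class counts equally

assert np.isclose(overall_acc, 4 / 6)
assert np.isclose(mean_class_acc, (2/3 + 2/2 + 0/1) / 3)
```

Note the per-class mean is lower here because the single class-2 object is misclassified and still contributes a full third of the average.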
if __name__ == '__main__':
print('Now Using GPU:%d to train the model' % args.gpu)
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
train(args)
LOG_FOUT.close()
================================================
FILE: OcCo_TF/train_completion.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
import os, pdb, time, argparse, datetime, importlib, numpy as np, tensorflow as tf
from utils import lmdb_dataflow, add_train_summary, plot_pcd_three_views
from termcolor import colored
parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=str, default='1')
parser.add_argument('--lmdb_train', default='data/modelnet/train.lmdb')
parser.add_argument('--lmdb_valid', default='data/modelnet/test.lmdb')
parser.add_argument('--log_dir', type=str, default='')
parser.add_argument('--model_type', default='pcn_cd')
parser.add_argument('--restore', action='store_true')
parser.add_argument('--restore_path', default='log/pcn_cd')
parser.add_argument('--batch_size', type=int, default=16)
parser.add_argument('--num_gt_points', type=int, default=16384)
parser.add_argument('--base_lr', type=float, default=1e-4)
parser.add_argument('--lr_decay', action='store_true')
parser.add_argument('--lr_decay_steps', type=int, default=50000)
parser.add_argument('--lr_decay_rate', type=float, default=0.7)
parser.add_argument('--lr_clip', type=float, default=1e-6)
parser.add_argument('--max_step', type=int, default=3000000)
parser.add_argument('--epoch', type=int, default=50)
parser.add_argument('--steps_per_print', type=int, default=100)
parser.add_argument('--steps_per_eval', type=int, default=1000)
parser.add_argument('--steps_per_visu', type=int, default=3456)
parser.add_argument('--epochs_per_save', type=int, default=5)
parser.add_argument('--visu_freq', type=int, default=10)
parser.add_argument('--store_grad', action='store_true')
parser.add_argument('--num_input_points', type=int, default=1024)
parser.add_argument('--dataset', default='modelnet40')
args = parser.parse_args()
BATCH_SIZE = args.batch_size
NUM_POINT = args.num_input_points
NUM_GT_POINT = args.num_gt_points
DECAY_STEP = args.lr_decay_steps
DATASET = args.dataset
BN_INIT_DECAY = 0.5
BN_DECAY_DECAY_RATE = 0.5
BN_DECAY_DECAY_STEP = float(DECAY_STEP)
BN_DECAY_CLIP = 0.99
def get_bn_decay(batch):
bn_momentum = tf.train.exponential_decay(
BN_INIT_DECAY,
batch * BATCH_SIZE,
BN_DECAY_DECAY_STEP,
BN_DECAY_DECAY_RATE,
staircase=True)
bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)
return bn_decay
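get_bn_decay follows the PointNet batch-norm momentum schedule: the momentum decays in a staircase and the resulting decay factor is clipped at 0.99. The same arithmetic in plain NumPy (constants copied from the script defaults; bn_decay_at is an illustrative helper, not part of the script):

```python
import numpy as np

BN_INIT_DECAY, BN_DECAY_DECAY_RATE = 0.5, 0.5
BN_DECAY_DECAY_STEP, BN_DECAY_CLIP = 50000.0, 0.99
BATCH_SIZE = 16  # script default

def bn_decay_at(step):
    # staircase exponential decay of the momentum, then the clipped complement
    bn_momentum = BN_INIT_DECAY * BN_DECAY_DECAY_RATE ** np.floor(
        step * BATCH_SIZE / BN_DECAY_DECAY_STEP)
    return min(BN_DECAY_CLIP, 1 - bn_momentum)

assert bn_decay_at(0) == 0.5         # momentum starts at 0.5
assert bn_decay_at(10 ** 9) == 0.99  # decay saturates at the clip value
```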
def vary2fix(inputs, npts):
inputs_ls = np.split(inputs[0], npts.cumsum())
ret_inputs = np.zeros((1, BATCH_SIZE * NUM_POINT, 3), dtype=np.float32)
ret_npts = npts.copy()
for idx, obj in enumerate(inputs_ls[:-1]):
if len(obj) <= NUM_POINT:
select_idx = np.concatenate([
np.arange(len(obj)), np.random.choice(len(obj), NUM_POINT - len(obj))])
else:
select_idx = np.arange(len(obj))
np.random.shuffle(select_idx)
select_idx = select_idx[:NUM_POINT]  # subsample down to the fixed number of points
ret_inputs[0][idx * NUM_POINT:(idx + 1) * NUM_POINT] = obj[select_idx].copy()
ret_npts[idx] = NUM_POINT
return ret_inputs, ret_npts
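vary2fix resamples each variable-sized ShapeNet cloud to exactly NUM_POINT points: small objects are padded by re-drawing existing points, large ones are randomly subsampled. A per-object sketch of that rule (to_fixed_size is a hypothetical helper, with a toy NUM_POINT):

```python
import numpy as np

NUM_POINT = 8  # toy target size, not the script default of 1024

def to_fixed_size(obj, num_point=NUM_POINT):
    # pad by re-drawing points when too small, subsample when too large
    rng = np.random.default_rng(0)
    if len(obj) <= num_point:
        extra = rng.choice(len(obj), num_point - len(obj))
        idx = np.concatenate([np.arange(len(obj)), extra])
    else:
        idx = rng.permutation(len(obj))[:num_point]
    return obj[idx]

small, large = np.random.rand(5, 3), np.random.rand(20, 3)
assert to_fixed_size(small).shape == (NUM_POINT, 3)
assert to_fixed_size(large).shape == (NUM_POINT, 3)
```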
def train(args):
is_training_pl = tf.placeholder(tf.bool, shape=(), name='is_training')
global_step = tf.Variable(0, trainable=False, name='global_step')
alpha = tf.train.piecewise_constant(global_step, [10000, 20000, 50000],
[0.01, 0.1, 0.5, 1.0], 'alpha_op')
# for ModelNet, it is with Fixed Number of Input Points
# for ShapeNet, it is with Varying Number of Input Points
inputs_pl = tf.placeholder(tf.float32, (1, BATCH_SIZE * NUM_POINT, 3), 'inputs')
npts_pl = tf.placeholder(tf.int32, (BATCH_SIZE,), 'num_points')
gt_pl = tf.placeholder(tf.float32, (BATCH_SIZE, args.num_gt_points, 3), 'ground_truths')
add_train_summary('alpha', alpha)
bn_decay = get_bn_decay(global_step)
add_train_summary('bn_decay', bn_decay)
model_module = importlib.import_module('.%s' % args.model_type, 'completion_models')
model = model_module.Model(inputs_pl, npts_pl, gt_pl, alpha,
bn_decay=bn_decay, is_training=is_training_pl)
# Another Solution instead of importlib:
# ldic = locals()
# exec('from completion_models.%s import Model' % args.model_type, globals(), ldic)
# model = ldic['Model'](inputs_pl, npts_pl, gt_pl, alpha,
# bn_decay=bn_decay, is_training=is_training_pl)
if args.lr_decay:
learning_rate = tf.train.exponential_decay(args.base_lr, global_step,
args.lr_decay_steps, args.lr_decay_rate,
staircase=True, name='lr')
learning_rate = tf.maximum(learning_rate, args.lr_clip)
add_train_summary('learning_rate', learning_rate)
else:
learning_rate = tf.constant(args.base_lr, name='lr')
trainer = tf.train.AdamOptimizer(learning_rate)
train_op = trainer.minimize(model.loss, global_step)
saver = tf.train.Saver(max_to_keep=10)
''' from the PCN paper:
All our models are trained using the Adam optimizer
with an initial learning rate of 0.0001 for 50 epochs
and a batch size of 32. The learning rate is decayed by 0.7 every 50K iterations.
'''
if args.store_grad:
grads_and_vars = trainer.compute_gradients(model.loss)
for g, v in grads_and_vars:
tf.summary.histogram(v.name, v, collections=['train_summary'])
tf.summary.histogram(v.name + '_grad', g, collections=['train_summary'])
train_summary = tf.summary.merge_all('train_summary')
valid_summary = tf.summary.merge_all('valid_summary')
# the input number of points for the partial observed data is not a fixed number
df_train, num_train = lmdb_dataflow(
args.lmdb_train, args.batch_size,
args.num_input_points, args.num_gt_points, is_training=True)
train_gen = df_train.get_data()
df_valid, num_valid = lmdb_dataflow(
args.lmdb_valid, args.batch_size,
args.num_input_points, args.num_gt_points, is_training=False)
valid_gen = df_valid.get_data()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
sess = tf.Session(config=config)
if args.restore:
saver.restore(sess, tf.train.latest_checkpoint(args.log_dir))
writer = tf.summary.FileWriter(args.log_dir)
else:
sess.run(tf.global_variables_initializer())
if os.path.exists(args.log_dir):
delete_key = input(colored('%s exists. Delete? [y/n]' % args.log_dir, 'white', 'on_red'))
if delete_key == 'y' or delete_key == "yes":
os.system('rm -rf %s/*' % args.log_dir)
os.makedirs(os.path.join(args.log_dir, 'plots'))
else:
os.makedirs(os.path.join(args.log_dir, 'plots'))
with open(os.path.join(args.log_dir, 'args.txt'), 'w') as log:
for arg in sorted(vars(args)):
log.write(arg + ': ' + str(getattr(args, arg)) + '\n')
os.system('cp completion_models/%s.py %s' % (args.model_type, args.log_dir)) # bkp of model scripts
os.system('cp train_completion.py %s' % args.log_dir) # bkp of train procedure
writer = tf.summary.FileWriter(args.log_dir, sess.graph) # GOOD habit
log_fout = open(os.path.join(args.log_dir, 'log_train.txt'), 'a+')
for arg in sorted(vars(args)):
log_fout.write(arg + ': ' + str(getattr(args, arg)) + '\n')
log_fout.flush()
total_time = 0
train_start = time.time()
init_step = sess.run(global_step)
for step in range(init_step + 1, args.max_step + 1):
epoch = step * args.batch_size // num_train + 1
ids, inputs, npts, gt = next(train_gen)
if epoch > args.epoch:
break
if DATASET == 'shapenet8':
inputs, npts = vary2fix(inputs, npts)
start = time.time()
feed_dict = {inputs_pl: inputs, npts_pl: npts, gt_pl: gt, is_training_pl: True}
_, loss, summary = sess.run([train_op, model.loss, train_summary], feed_dict=feed_dict)
total_time += time.time() - start
writer.add_summary(summary, step)
if step % args.steps_per_print == 0:
print('epoch %d step %d loss %.8f - time per batch %.4f' %
(epoch, step, loss, total_time / args.steps_per_print))
total_time = 0
if step % args.steps_per_eval == 0:
print(colored('Testing...', 'grey', 'on_green'))
num_eval_steps = num_valid // args.batch_size
total_loss, total_time = 0, 0
sess.run(tf.local_variables_initializer())
for i in range(num_eval_steps):
start = time.time()
_, inputs, npts, gt = next(valid_gen)
if DATASET == 'shapenet8':
inputs, npts = vary2fix(inputs, npts)
feed_dict = {inputs_pl: inputs, npts_pl: npts, gt_pl: gt, is_training_pl: False}
loss, _ = sess.run([model.loss, model.update], feed_dict=feed_dict)
total_loss += loss
total_time += time.time() - start
summary = sess.run(valid_summary, feed_dict={is_training_pl: False})
writer.add_summary(summary, step)
print(colored('epoch %d step %d loss %.8f - time per batch %.4f' %
(epoch, step, total_loss / num_eval_steps, total_time / num_eval_steps),
'grey', 'on_green'))
total_time = 0
if step % args.steps_per_visu == 0:
all_pcds = sess.run(model.visualize_ops, feed_dict=feed_dict)
for i in range(0, args.batch_size, args.visu_freq):
plot_path = os.path.join(args.log_dir, 'plots',
'epoch_%d_step_%d_%s.png' % (epoch, step, ids[i]))
pcds = [x[i] for x in all_pcds]
plot_pcd_three_views(plot_path, pcds, model.visualize_titles)
if (epoch % args.epochs_per_save == 0) and \
not os.path.exists(os.path.join(args.log_dir, 'model-%d.meta' % epoch)):
saver.save(sess, os.path.join(args.log_dir, 'model'), epoch)
print(colored('Epoch:%d, Model saved at %s' % (epoch, args.log_dir), 'white', 'on_blue'))
print('Total time', datetime.timedelta(seconds=time.time() - train_start))
sess.close()
if __name__ == '__main__':
print('Now Using GPU:%s to train the model' % args.gpu)
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
train(args)
================================================
FILE: OcCo_TF/utils/Dataset_Assign.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import h5py
def Dataset_Assign(dataset, fname, partial=True, bn=False, few_shot=False):
def fetch_files(filelist):
return [item.strip() for item in open(filelist).readlines()]
def loadh5DataFile(PathtoFile):
f = h5py.File(PathtoFile, 'r')
return f['data'][:], f['label'][:]
dataset = dataset.lower()
if dataset == 'shapenet8':
NUM_CLASSES = 8
if partial:
NUM_TRAINOBJECTS = 231792
TRAIN_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/train_file.txt')
VALID_FILES = fetch_files('./data/shapenet/hdf5_partial_1024/valid_file.txt')
else:
raise ValueError("For ShapeNet we are only interested in recognition of partial objects")
elif dataset == 'shapenet10':
# Number of Training Objects: 17378, Number of Test Objects: 2492
NUM_CLASSES, NUM_TRAINOBJECTS = 10, 17378
TRAIN_FILES = fetch_files('./data/ShapeNet10/Cleaned/train_file.txt')
VALID_FILES = fetch_files('./data/ShapeNet10/Cleaned/test_file.txt')
elif dataset == 'modelnet40':
'''We find that using the resampled data from PointNet++
(https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip)
increases the accuracy slightly; however, for a fair comparison we use the
original data provided by PointNet: https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'''
NUM_CLASSES = 40
if partial:
NUM_TRAINOBJECTS = 98430
TRAIN_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/train_file.txt')
VALID_FILES = fetch_files('./data/modelnet40_pcn/hdf5_partial_1024/test_file.txt')
else:
VALID_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/test_files.txt')
if few_shot:
TRAIN_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/few_labels/%s.h5' % fname)
data, _ = loadh5DataFile('./data/modelnet40_ply_hdf5_2048/few_labels/%s.h5' % fname)
NUM_TRAINOBJECTS = len(data)
else:
NUM_TRAINOBJECTS = 9843
TRAIN_FILES = fetch_files('./data/modelnet40_ply_hdf5_2048/train_files.txt')
elif dataset == 'scannet10':
NUM_CLASSES, NUM_TRAINOBJECTS = 10, 6110
TRAIN_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/train_file.txt')
VALID_FILES = fetch_files('./data/ScanNet10/ScanNet_Cleaned/test_file.txt')
elif dataset == 'scanobjectnn':
NUM_CLASSES = 15
if bn:
TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/training_objectdataset' + fname + '.h5']
VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split/test_objectdataset' + fname + '.h5']
data, _ = loadh5DataFile('./data/ScanNetObjectNN/h5_files/main_split/training_objectdataset' + fname + '.h5')
NUM_TRAINOBJECTS = len(data)
else:
TRAIN_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/training_objectdataset' + fname + '.h5']
VALID_FILES = ['./data/ScanNetObjectNN/h5_files/main_split_nobg/test_objectdataset' + fname + '.h5']
data, _ = loadh5DataFile('./data/ScanNetObjectNN/h5_files/main_split_nobg/training_objectdataset' + fname + '.h5')
NUM_TRAINOBJECTS = len(data)
else:
raise ValueError('dataset does not exist')
return NUM_CLASSES, NUM_TRAINOBJECTS, TRAIN_FILES, VALID_FILES
================================================
FILE: OcCo_TF/utils/EarlyStoppingCriterion.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
class EarlyStoppingCriterion(object):
"""
adapted from https://github.com/facebookresearch/hgnn/blob/master/utils/EarlyStoppingCriterion.py
Arguments:
patience (int): the maximum number of epochs with no improvement before early stopping takes place
mode (str, 'max' or 'min'): whether the score should be maximised or minimised
min_delta (float, optional): minimum change in the score to qualify as an improvement (default: 0.0)
"""
def __init__(self, patience=10, mode='max', min_delta=0.0):
assert patience >= 0
assert mode in {'min', 'max'}
assert min_delta >= 0.0
self.patience = patience
self.mode = mode
self.min_delta = min_delta
self._count = 0
self.best_dev_score = None
self.best_test_score = None
self.best_epoch = None
self.is_improved = None
def step(self, cur_dev_score, epoch):
"""
Checks whether training should stop given the current score.
Arguments:
cur_dev_score (float): the current development score
epoch (int): the current epoch
Output:
(bool, bool): whether to early stop, and whether to save a checkpoint
"""
save_checkpoint = False
if self.best_dev_score is None:
self.best_dev_score = cur_dev_score
self.best_epoch = epoch
save_checkpoint = True
return False, save_checkpoint
else:
if self.mode == 'max':
self.is_improved = (cur_dev_score > self.best_dev_score + self.min_delta)
else:
self.is_improved = (cur_dev_score < self.best_dev_score - self.min_delta)
if self.is_improved:
self._count = 0
self.best_dev_score = cur_dev_score
self.best_epoch = epoch
save_checkpoint = True
else:
self._count += 1
return self._count >= self.patience, save_checkpoint
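A quick illustration of the criterion's behaviour in 'max' mode, using a stripped-down stand-in (min_delta omitted) rather than the class above:

```python
# a minimal stand-in mirroring the criterion's 'max' mode, for illustration only
class _MiniESC:
    def __init__(self, patience):
        self.patience, self.count = patience, 0
        self.best, self.best_epoch = None, None

    def step(self, score, epoch):
        save = False
        if self.best is None or score > self.best:
            # improvement: reset the counter and mark a checkpoint
            self.best, self.best_epoch, self.count, save = score, epoch, 0, True
        else:
            self.count += 1
        return self.count >= self.patience, save

esc = _MiniESC(patience=2)
stops = [esc.step(s, e)[0] for e, s in enumerate([0.1, 0.3, 0.2, 0.25, 0.29])]
# no improvement over 0.3 after epoch 1 -> patience (2) exhausted at epoch 3
assert stops == [False, False, False, True, True]
assert esc.best_epoch == 1
```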
================================================
FILE: OcCo_TF/utils/ModelNetDataLoader.py
================================================
import os, torch, h5py, warnings, numpy as np
from torch.utils.data import Dataset
warnings.filterwarnings('ignore')
def pc_normalize(pc):
centroid = np.mean(pc, axis=0)
pc = pc - centroid
m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
pc = pc / m
return pc
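pc_normalize shifts the cloud to its centroid and rescales it into the unit sphere. The same operation written vectorised, with assertions on the two invariants it guarantees:

```python
import numpy as np

rng = np.random.default_rng(0)
pc = rng.normal(size=(1024, 3)) * 5.0 + 2.0  # arbitrary offset and scale

centroid = pc.mean(axis=0)
pc = (pc - centroid) / np.max(np.linalg.norm(pc - centroid, axis=1))

assert np.allclose(pc.mean(axis=0), 0.0, atol=1e-6)        # centred
assert np.isclose(np.linalg.norm(pc, axis=1).max(), 1.0)   # unit sphere
```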
def farthest_point_sample(point, npoint):
"""
Input:
point: point cloud data, [N, D]
npoint: number of samples
Return:
point: farthest-sampled point cloud, [npoint, D]
"""
N, D = point.shape
xyz = point[:, :3]
centroids = np.zeros((npoint,))
distance = np.ones((N,)) * 1e10
farthest = np.random.randint(0, N)
for i in range(npoint):
centroids[i] = farthest
centroid = xyz[farthest, :]
dist = np.sum((xyz - centroid) ** 2, -1)
mask = dist < distance
distance[mask] = dist[mask]
farthest = np.argmax(distance, -1)
point = point[centroids.astype(np.int32)]
return point
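A deterministic check of the greedy farthest-point-sampling idea: fps below is a compact re-sketch with a fixed start index (the loader above starts from a random point), applied to ten collinear points so the selected indices are predictable:

```python
import numpy as np

def fps(xyz, npoint, start=0):
    """Greedy farthest point sampling with a fixed start index."""
    N = len(xyz)
    picked = np.zeros(npoint, dtype=np.int64)
    dist = np.full(N, np.inf)  # distance to the nearest picked point so far
    farthest = start
    for i in range(npoint):
        picked[i] = farthest
        d = np.sum((xyz - xyz[farthest]) ** 2, axis=-1)
        dist = np.minimum(dist, d)
        farthest = int(np.argmax(dist))
    return picked

# ten points on a line: FPS spreads the samples across the extent
line = np.stack([np.arange(10.0), np.zeros(10), np.zeros(10)], axis=1)
assert fps(line, 3).tolist() == [0, 9, 4]
```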
class ModelNetDataLoader(Dataset):
def __init__(self, root, npoint=1024, split='train', uniform=False, normal_channel=True, cache_size=15000):
self.root = root
self.npoints = npoint
self.uniform = uniform
self.catfile = os.path.join(self.root, 'modelnet40_shape_names.txt')
self.cat = [line.rstrip() for line in open(self.catfile)]
self.classes = dict(zip(self.cat, range(len(self.cat))))
self.normal_channel = normal_channel
shape_ids = {'train': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_train.txt'))],
'test': [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_test.txt'))]}
assert (split == 'train' or split == 'test')
shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids[split]]
# list of (shape_name, shape_txt_file_path) tuple
self.datapath = [(shape_names[i], os.path.join(self.root, shape_names[i], shape_ids[split][i]) + '.txt') for i
in range(len(shape_ids[split]))]
print('The size of %s data is %d' % (split, len(self.datapath)))
self.cache_size = cache_size # how many data points to cache in memory
self.cache = {} # from index to (point_set, cls) tuple
def __len__(self):
return len(self.datapath)
def _get_item(self, index):
if index in self.cache:
point_set, cls = self.cache[index]
else:
fn = self.datapath[index]
cls = self.classes[self.datapath[index][0]]
cls = np.array([cls]).astype(np.int32)
point_set = np.loadtxt(fn[1], delimiter=',').astype(np.float32)
if self.uniform:
point_set = farthest_point_sample(point_set, self.npoints)
else:
point_set = point_set[0:self.npoints, :]
point_set[:, 0:3] = pc_normalize(point_set[:, 0:3])
if not self.normal_channel:
point_set = point_set[:, 0:3]
if len(self.cache) < self.cache_size:
self.cache[index] = (point_set, cls)
return point_set, cls
def __getitem__(self, index):
return self._get_item(index)
class General_CLSDataLoader_HDF5(Dataset):
def __init__(self, file_list, num_point=1024):
# self.root = root
self.num_point = num_point
self.file_list = file_list
self.points_list = np.zeros((1, num_point, 3))
self.labels_list = np.zeros((1,))
for file in self.file_list:
# pdb.set_trace()
# file = os.path.join(root, file)
# pdb.set_trace()
data, label = self.loadh5DataFile(file)
self.points_list = np.concatenate([self.points_list,
data[:, :self.num_point, :]], axis=0)
self.labels_list = np.concatenate([self.labels_list, label.ravel()], axis=0)
self.points_list = self.points_list[1:]
self.labels_list = self.labels_list[1:]
assert len(self.points_list) == len(self.labels_list)
print('Number of Objects: ', len(self.labels_list))
@staticmethod
def loadh5DataFile(PathtoFile):
f = h5py.File(PathtoFile, 'r')
return f['data'][:], f['label'][:]
def __len__(self):
return len(self.points_list)
def __getitem__(self, index):
point_xyz = self.points_list[index][:, 0:3]
point_label = self.labels_list[index].astype(np.int32)
return point_xyz, point_label
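The loader above seeds points_list with a dummy zero row and concatenates inside the loop, which re-allocates the growing array once per file. An equivalent pattern gathers chunks in a list and concatenates once; sketched here with in-memory stand-ins for the (data, label) pairs read from HDF5:

```python
import numpy as np

# stand-ins for per-file (data, label) pairs read from HDF5
chunks = [(np.random.rand(5, 1024, 3), np.arange(5)),
          (np.random.rand(3, 1024, 3), np.arange(3))]

points, labels = [], []
for data, label in chunks:
    points.append(data)
    labels.append(label.ravel())

points = np.concatenate(points, axis=0)  # one allocation instead of one per file
labels = np.concatenate(labels, axis=0)

assert points.shape == (8, 1024, 3)
assert labels.shape == (8,)
```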
class ModelNetJigsawDataLoader(Dataset):
def __init__(self, root=r'./data/modelnet40_ply_hdf5_2048/jigsaw',
n_points=1024, split='train', k=3):
self.npoints = n_points
self.root = root
self.split = split
assert split in ['train', 'test']
if self.split == 'train':
self.file_list = [d for d in os.listdir(root) if d.find('train') != -1]
else:
self.file_list = [d for d in os.listdir(root) if d.find('test') != -1]
self.points_list = np.zeros((1, n_points, 3))
self.labels_list = np.zeros((1, n_points))
for file in self.file_list:
file = os.path.join(root, file)
data, label = self.loadh5DataFile(file)
# data = np.load(root + file)
self.points_list = np.concatenate([self.points_list, data], axis=0) # .append(data)
self.labels_list = np.concatenate([self.labels_list, label], axis=0)
# self.labels_list.append(label)
self.points_list = self.points_list[1:]
self.labels_list = self.labels_list[1:]
assert len(self.points_list) == len(self.labels_list)
print('Number of %s Objects: %d' % (self.split, len(self.labels_list)))
# just use the average weights
self.labelweights = np.ones(k ** 3)
# pdb.set_trace()
@staticmethod
def loadh5DataFile(PathtoFile):
f = h5py.File(PathtoFile, 'r')
return f['data'][:], f['label'][:]
def __getitem__(self, index):
point_set = self.points_list[index][:, 0:3]
semantic_seg = self.labels_list[index].astype(np.int32)
# sample_weight = self.labelweights[semantic_seg]
# return point_set, semantic_seg, sample_weight
return point_set, semantic_seg
def __len__(self):
return len(self.points_list)
if __name__ == '__main__':
data = ModelNetDataLoader('/data/modelnet40_normal_resampled/', split='train', uniform=False, normal_channel=True, )
DataLoader = torch.utils.data.DataLoader(data, batch_size=12, shuffle=True)
for point, label in DataLoader:
print(point.shape)
print(label.shape)
================================================
FILE: OcCo_TF/utils/Train_Logger.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
import os, logging, datetime, numpy as np, sklearn.metrics as metrics
from pathlib import Path
class TrainLogger:
def __init__(self, args, name='Model', subfold='cls', cls2name=None):
self.step = 1
self.epoch = 1
self.args = args
self.name = name
self.sf = subfold
self.make_logdir()
self.logger_setup()
self.epoch_init()
self.save_model = False
self.cls2name = cls2name
self.best_instance_acc, self.best_class_acc = 0., 0.
self.best_instance_epoch, self.best_class_epoch = 0, 0
self.savepath = str(self.checkpoints_dir) + '/best_model.pth'
def logger_setup(self):
self.logger = logging.getLogger(self.name)
self.logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler = logging.FileHandler(os.path.join(self.log_dir, 'train_log.txt'))
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(formatter)
# ref: https://stackoverflow.com/a/53496263/12525201
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# logging.getLogger('').addHandler(console) # this is root logger
self.logger.addHandler(console)
self.logger.addHandler(file_handler)
self.logger.info('PARAMETER ...')
self.logger.info(self.args)
self.logger.removeHandler(console)
def make_logdir(self):
timestr = str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M'))
experiment_dir = Path('./log/')
experiment_dir.mkdir(exist_ok=True)
experiment_dir = experiment_dir.joinpath(self.sf)
experiment_dir.mkdir(exist_ok=True)
if self.args.log_dir is None:
self.experiment_dir = experiment_dir.joinpath(timestr)
else:
self.experiment_dir = experiment_dir.joinpath(self.args.log_dir)
self.experiment_dir.mkdir(exist_ok=True)
self.checkpoints_dir = self.experiment_dir.joinpath('checkpoints/')
self.checkpoints_dir.mkdir(exist_ok=True)
self.log_dir = self.experiment_dir.joinpath('logs/')
self.log_dir.mkdir(exist_ok=True)
self.experiment_dir.joinpath('runs').mkdir(exist_ok=True)
def epoch_init(self, training=True):
self.loss, self.count, self.pred, self.gt = 0., 0., [], []
if training:
self.logger.info('\nEpoch %d/%d:' % (self.epoch, self.args.epoch))
def step_update(self, pred, gt, loss, training=True):
if training:
self.step += 1
self.gt.append(gt)
self.pred.append(pred)
batch_size = len(pred)
self.count += batch_size
self.loss += loss * batch_size
def cls_epoch_update(self, training=True):
self.save_model = False
self.gt = np.concatenate(self.gt)
self.pred = np.concatenate(self.pred)
instance_acc = metrics.accuracy_score(self.gt, self.pred)
class_acc = metrics.balanced_accuracy_score(self.gt, self.pred)
if instance_acc > self.best_instance_acc and not training:
self.best_instance_acc = instance_acc
self.best_instance_epoch = self.epoch
self.save_model = True
if class_acc > self.best_class_acc and not training:
self.best_class_acc = class_acc
self.best_class_epoch = self.epoch
if not training:
self.epoch += 1
return instance_acc, class_acc
def seg_epoch_update(self, training=True):
self.save_model = False
self.gt = np.concatenate(self.gt)
self.pred = np.concatenate(self.pred)
instance_acc = metrics.accuracy_score(self.gt, self.pred)
if instance_acc > self.best_instance_acc and not training:
self.best_instance_acc = instance_acc
self.best_instance_epoch = self.epoch
self.save_model = True
if not training:
self.epoch += 1
return instance_acc
def epoch_summary(self, writer=None, training=True):
instance_acc, class_acc = self.cls_epoch_update(training)
if training:
if writer is not None:
writer.add_scalar('Train Class Accuracy', class_acc, self.step)
writer.add_scalar('Train Instance Accuracy', instance_acc, self.step)
self.logger.info('Train Instance Accuracy: %.3f, Class Accuracy: %.3f' % (instance_acc, class_acc))
else:
if writer is not None:
writer.add_scalar('Test Class Accuracy', class_acc, self.step)
writer.add_scalar('Test Instance Accuracy', instance_acc, self.step)
self.logger.info('Test Instance Accuracy: %.3f, Class Accuracy: %.3f' % (instance_acc, class_acc))
self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
self.best_instance_acc, self.best_instance_epoch))
self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
self.best_class_acc, self.best_class_epoch))
if self.save_model:
self.logger.info('Saving the Model Params to %s' % self.savepath)
def train_summary(self):
self.logger.info('\n\nEnd of Training...')
self.logger.info('Best Instance Accuracy: %.3f at Epoch %d ' % (
self.best_instance_acc, self.best_instance_epoch))
self.logger.info('Best Class Accuracy: %.3f at Epoch %d' % (
self.best_class_acc, self.best_class_epoch))
def update_from_checkpoints(self, checkpoint):
self.logger.info('Use Pre-Trained Weights')
self.step = checkpoint['step']
self.epoch = checkpoint['epoch']
self.best_instance_epoch, self.best_instance_acc = checkpoint['epoch'], checkpoint['instance_acc']
self.best_class_epoch, self.best_class_acc = checkpoint['best_class_epoch'], checkpoint['best_class_acc']
self.logger.info('Best Class Acc {:.3f} at Epoch {}'.format(self.best_class_acc, self.best_class_epoch))
self.logger.info('Best Instance Acc {:.3f} at Epoch {}'.format(self.best_instance_acc, self.best_instance_epoch))
def update_from_checkpoints_tf(self, checkpoint):
self.logger.info('Use Pre-Trained Weights')
self.step = checkpoint['step']
self.epoch = checkpoint['epoch']
self.best_instance_epoch, self.best_instance_acc = checkpoint['epoch'], checkpoint['instance_acc']
self.best_class_epoch, self.best_class_acc = checkpoint['best_class_epoch'], checkpoint['best_class_acc']
self.logger.info('Best Class Acc {:.3f} at Epoch {}'.format(self.best_class_acc, self.best_class_epoch))
self.logger.info('Best Instance Acc {:.3f} at Epoch {}'.format(self.best_instance_acc, self.best_instance_epoch))
================================================
FILE: OcCo_TF/utils/__init__.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
#
from tf_util import *
from pc_util import *
from io_util import *
from data_util import *
from visu_util import *
================================================
FILE: OcCo_TF/utils/check_num_point.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
# Author: Hanchen Wang, hw501@cam.ac.uk
import numpy as np
import os, json, argparse
from data_util import lmdb_dataflow
from io_util import read_pcd
from tqdm import tqdm
MODELNET40_PATH = r"../render/dump_modelnet_normalised_"
SCANNET10_PATH = r"../data/ScanNet10"
SHAPENET8_PATH = r"../data/shapenet"
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', type=str, default='modelnet40', help="modelnet40, shapenet8 or scannet10")
args = parser.parse_args()
os.system("mkdir -p ./dump_sum_points")
if args.dataset == 'modelnet40':
shape_names = open(r'../render/shape_names.txt').read().splitlines()
file_ = open(r'../render/ModelNet_flist_normalised.txt').read().splitlines()
print("=== ModelNet40 ===\n")
for t in ['train', 'test']:
# for res in ['fine', 'middle', 'coarse', 'supercoarse']:
for res in ['supercoarse']:
sum_dict = {}
for shape in shape_names:
sum_dict[shape] = np.zeros(3, dtype=np.int32)  # num of objects, num of points, average
model_list = [_file for _file in file_ if t in _file]
for model_id in tqdm(model_list):
model_name = model_id.split('/')[0]
for i in range(10):
partial_pc = read_pcd(os.path.join(MODELNET40_PATH + res, 'pcd', model_id + '_%d.pcd' % i))
sum_dict[model_name][1] += len(partial_pc)
sum_dict[model_name][0] += 1
sum_dict[model_name][2] = sum_dict[model_name][1]/sum_dict[model_name][0]
f = open("./dump_sum_points/modelnet40_%s_%s.txt" % (t, res), "w+")
for key in sum_dict.keys():
f.writelines([key, str(sum_dict[key]), '\n'])
f.close()
print("=== ModelNet40 %s %s Done ===\n" % (t, res))
elif args.dataset == 'shapenet8':
print("\n\n=== ShapeNet8 ===\n")
for t in ['train', 'valid']:
sum_dict = json.loads(open(os.path.join(SHAPENET8_PATH, 'keys.json')).read())
for key in sum_dict.keys():
sum_dict[key] = np.zeros(3) # num of objects, num of points, average
# the data stored in the lmdb files is with varying number of points
df, num = lmdb_dataflow(lmdb_path=os.path.join(SHAPENET8_PATH, '%s.lmdb' % t),
batch_size=1, input_size=1000000, output_size=1, is_training=False)
data_gen = df.get_data()
for _ in tqdm(range(num)):
ids, _, npts, _ = next(data_gen)
model_name = ids[0][:8]
sum_dict[model_name][1] += npts[0]
sum_dict[model_name][0] += 1
sum_dict[model_name][2] = sum_dict[model_name][1] / sum_dict[model_name][0]
f = open("./dump_sum_points/shapenet8_%s.json" % t, "w+")
for key in sum_dict.keys():
f.writelines([key, str(sum_dict[key]), '\n'])
# f.write(json.dumps(sum_dict))
f.close()
print("=== ShapeNet8 %s Done ===\n" % t)
elif args.dataset == 'scannet10':
print("\n\n=== ScanNet10 is not ready yet ===\n")
else:
raise ValueError('Assigned dataset does not exist.')
================================================
FILE: OcCo_TF/utils/check_scale.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import numpy as np
import os, open3d, sys
from data_util import lmdb_dataflow  # used by the PCN section below
LOG_F = open(r'./scale_sum_modelnet40raw.txt', 'w+')
open3d.utility.set_verbosity_level(open3d.utility.VerbosityLevel.Error)
def log_string(msg):
print(msg)
LOG_F.writelines(msg + '\n')
if __name__ == "__main__":
lmdb_f = r'./data/shapenet/train.lmdb'
modelnet_raw_path = r'./data/modelnet40_raw/'
shapenet_raw_path = r'./data/ShapeNet_raw/'
modelnet40_pn_processed_f = r'./data/'
off_set, max_radius = 0, 0
'''=== ModelNet40 ==='''
log_string('=== ModelNet40 Raw ===\n\n\n')
for root, dirs, files in os.walk(modelnet_raw_path):
for name in files:
if '.ply' in name:
mesh = open3d.io.read_triangle_mesh(os.path.join(root, name))
off_set_bias = (mesh.get_center()**2).sum()
if off_set_bias > off_set:
off_set = off_set_bias
log_string('update offset: %f by %s' % (off_set, os.path.join(root, name)))
radius_bias = (np.asarray(mesh.vertices)**2).sum(axis=1).max()
if radius_bias > max_radius:
max_radius = radius_bias
log_string('update max radius: %f by %s' %(max_radius, os.path.join(root, name)))
log_string('\n\n\n=== sum for ModelNet40 ===')
log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
sys.exit('finish computing ModelNet40')
'''=== ShapeNetCore ==='''
log_string('=== now on ShapeNetCorev2 ===\n\n\n')
for root, dirs, files in os.walk(shapenet_raw_path):
for name in files:
if '.obj' in name:
mesh = open3d.io.read_triangle_mesh(os.path.join(root, name))
off_set_bias = (mesh.get_center()**2).sum()
if off_set_bias > off_set:
off_set = off_set_bias
log_string('update offset: %f by %s' % (off_set, os.path.join(root, name)))
radius_bias = (np.asarray(mesh.vertices)**2).sum(axis=1).max()
if radius_bias > max_radius:
max_radius = radius_bias
log_string('update max radius: %f by %s' %(max_radius, os.path.join(root, name)))
log_string('\n\n\n=== sum for ShapeNetCorev2 ===')
log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
sys.exit('finish computing ShapeNetCorev2')
'''=== PCN ==='''
log_string('===now on PCN cleaned subset of ShapeNet===\n\n\n')
df_train, num_train = lmdb_dataflow(lmdb_path = lmdb_f, batch_size=1,
input_size=3000, output_size=16384, is_training=True)
train_gen = df_train.get_data()
for idx in range(231792):
ids, _, _, gt = next(train_gen)
off_set_bias = (gt.mean(axis=1)**2).sum()
if off_set_bias > off_set:
off_set = off_set_bias
log_string('update offset: %f by %d, %s' % (off_set, idx, ids))
radius_bias = (gt**2).sum(axis=2).max()
if radius_bias > max_radius:
max_radius = radius_bias
log_string('update max radius: %f by %d, %s' %(max_radius, idx, ids))
log_string('\n\n\n===for PCN cleaned subset of ShapeNet===')
log_string('===offset:%f, radius:%f===\n\n\n'%(off_set, max_radius))
================================================
FILE: OcCo_TF/utils/data_util.py
================================================
# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk
# Ref:
import numpy as np, tensorflow as tf
from tensorpack import dataflow
def resample_pcd(pcd, n):
"""drop or duplicate points so that input of each object has exactly n points"""
idx = np.random.permutation(pcd.shape[0])
if idx.shape[0] < n:
idx = np.concatenate([idx, np.random.randint(pcd.shape[0], size=n-pcd.shape[0])])
return pcd[idx[:n]]
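A quick self-contained check of the resampling behaviour (the function is re-defined here so the snippet runs standalone; the point counts are illustrative):

```python
import numpy as np

def resample_pcd(pcd, n):
    """Drop or duplicate points so that the output has exactly n points."""
    idx = np.random.permutation(pcd.shape[0])
    if idx.shape[0] < n:
        idx = np.concatenate([idx, np.random.randint(pcd.shape[0], size=n - pcd.shape[0])])
    return pcd[idx[:n]]

cloud = np.random.rand(1500, 3)      # 1500 points with xyz coordinates
down = resample_pcd(cloud, 1024)     # subsample: random permutation, keep first n
up = resample_pcd(cloud, 2048)       # pad: duplicate randomly chosen points
print(down.shape, up.shape)          # (1024, 3) (2048, 3)
```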
class PreprocessData(dataflow.ProxyDataFlow):
def __init__(self, ds, input_size, output_size):
# ds: the upstream dataflow being wrapped
super(PreprocessData, self).__init__(ds)
self.input_size = input_size
self.output_size = output_size
def get_data(self):
for id, input, gt in self.ds.get_data():
input = resample_pcd(input, self.input_size)
gt = resample_pcd(gt, self.output_size)
yield id, input, gt
class BatchData(dataflow.ProxyDataFlow):
def __init__(self, ds, batch_size, input_size, gt_size, remainder=False, use_list=False):
super(BatchData, self).__init__(ds)
self.batch_size = batch_size
self.input_size = input_size
self.gt_size = gt_size
self.remainder = remainder
self.use_list = use_list
def __len__(self):
"""get the number of batches"""
ds_size = len(self.ds)
div = ds_size // self.batch_size
rem = ds_size % self.batch_size
if rem == 0:
return div
return div + int(self.remainder) # int(False) == 0
def __iter__(self):
"""generating data in batches"""
holder = []
for data in self.ds:
holder.append(data)
if len(holder) == self.batch_size:
yield self._aggregate_batch(holder, self.use_list)
del holder[:]  # clear the list in place, ready for the next batch
if self.remainder and len(holder) > 0:
yield self._aggregate_batch(holder, self.use_list)
def _aggregate_batch(self, data_holder, use_list=False):
"""
Concatenate input points along the 0-th dimension
Stack all other data along the 0-th dimension
"""
ids = np.stack([x[0] for x in data_holder])
inputs = [resample_pcd(x[1], self.input_size) if x[1].shape[0] > self.input_size else x[1]
for x in data_holder]
inputs = np.expand_dims(np.concatenate([x for x in inputs]), 0).astype(np.float32)
npts = np.stack([x[1].shape[0] if x[1].shape[0] < self.input_size else self.input_size
for x in data_holder]).astype(np.int32)
gts = np.stack([resample_pcd(x[2], self.gt_size) for x in data_holder]).astype(np.float32)
return ids, inputs, npts, gts
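`_aggregate_batch` packs clouds of varying size PCN-style: the inputs are concatenated along the point axis into a single (1, sum_of_points, 3) tensor, while `npts` records each cloud's length so the encoder can split them apart again. A minimal NumPy sketch of that packing (sizes are illustrative, and simple truncation stands in for the `resample_pcd` call used above):

```python
import numpy as np

# two clouds with different point counts
clouds = [np.random.rand(900, 3), np.random.rand(1200, 3)]
input_size = 1000

# clouds larger than input_size are cut down (the real code resamples)
trimmed = [c[:input_size] if c.shape[0] > input_size else c for c in clouds]

# concatenate along the point axis, add a leading batch dimension of 1
inputs = np.expand_dims(np.concatenate(trimmed), 0).astype(np.float32)
# per-cloud point counts, used to split the packed tensor downstream
npts = np.array([min(c.shape[0], input_size) for c in clouds], dtype=np.int32)
print(inputs.shape, npts.tolist())  # (1, 1900, 3) [900, 1000]
```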
def lmdb_dataflow(lmdb_path, batch_size, input_size, output_size, is_training, test_speed=False):
"""load LMDB files, then generate batches??"""
df = dataflow.LMDBSerializer.load(lmdb_path, shuffle=False)
size = df.size()
if is_training:
df = dataflow.LocallyShuffleData(df, buffer_size=2000) # buffer_size
df = dataflow.PrefetchData(df, nr_prefetch=500, nr_proc=1) # multiprocess the data
df = BatchData(df, batch_size, input_size, output_size)
if is_training:
df = dataflow.PrefetchDataZMQ(df, nr_proc=8)
df = dataflow.RepeatedData(df, -1)
if test_speed:
dataflow.TestDataSpeed(df, size=1000).start()
df.reset_state()
return df, size
def get_queued_data(generator, dtypes, shapes, queue_capacity=10):
assert len(dtypes) == len(shapes), 'dtypes and shapes must have the same length'
queue = tf.FIFOQueue(queue_capacity, dtypes, shapes)
placeholders = [tf.placeholder(dtype, shape) for dtype, shape in zip(dtypes, shapes)]
enqueue_op = queue.enqueue(placeholders)
close_op = queue.close(cancel_pending_enqueues=True)
feed_fn = lambda: {placeholder: value for placeholder, value in zip(placeholders, next(generator))}
queue_runner = tf.contrib.training.FeedingQueueRunner(
queue, [enqueue_op], close_op, feed_fns=[feed_fn])
tf.train.add_queue_runner(queue_runner)
return queue.dequeue()
================================================
FILE: OcCo_TF/utils/io_util.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import h5py, numpy as np
from open3d.open3d.geometry import PointCloud
from open3d.open3d.utility import Vector3dVector
from open3d.open3d.io import read_point_cloud, write_point_cloud
def read_pcd(filename):
pcd = read_point_cloud(filename)
return np.array(pcd.points)
def save_pcd(filename, points):
pcd = PointCloud()
pcd.points = Vector3dVector(points)
write_point_cloud(filename, pcd)
def shuffle_data(data, labels):
""" Shuffle data and labels """
idx = np.arange(len(labels))
np.random.shuffle(idx)
return data[idx, ...], labels[idx], idx
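A standalone usage sketch of `shuffle_data`, checking the key property that data and labels are permuted together:

```python
import numpy as np

def shuffle_data(data, labels):
    """Shuffle data and labels with a shared permutation."""
    idx = np.arange(len(labels))
    np.random.shuffle(idx)
    return data[idx, ...], labels[idx], idx

data = np.arange(12).reshape(4, 3)    # four 3-d "points"
labels = np.array([10, 11, 12, 13])
sd, sl, idx = shuffle_data(data, labels)
# the same permutation is applied to both arrays, so pairs survive
print(np.array_equal(sd, data[idx]), np.array_equal(sl, labels[idx]))  # True True
```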
def loadh5DataFile(PathtoFile):
f = h5py.File(PathtoFile, 'r')
return f['data'][:], f['label'][:]
def getDataFiles(list_filename):
return [line.rstrip() for line in open(list_filename)]
def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):
h5_fout = h5py.File(h5_filename, 'w')  # explicit mode; recent h5py requires it
h5_fout.create_dataset(
name='data', data=data,
compression='gzip', compression_opts=4,
dtype=data_dtype)
h5_fout.create_dataset(
name='label', data=label,
compression='gzip', compression_opts=1,
dtype=label_dtype)
h5_fout.close()
================================================
FILE: OcCo_TF/utils/pc_util.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import numpy as np
def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):
""" Randomly jitter points. jittering is per point.
Input:
BxNx3 array, original batch of point clouds
Return:
BxNx3 array, jittered batch of point clouds
"""
B, N, C = batch_data.shape
assert(clip > 0)
jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)
jittered_data += batch_data
return jittered_data
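The jitter is bounded by `clip`, which is easy to verify on an all-zero batch (function re-defined so the snippet runs standalone):

```python
import numpy as np

def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):
    """Add per-point Gaussian noise, clipped to [-clip, clip]."""
    B, N, C = batch_data.shape
    noise = np.clip(sigma * np.random.randn(B, N, C), -clip, clip)
    return batch_data + noise

batch = np.zeros((2, 1024, 3))        # zero cloud isolates the noise itself
jittered = jitter_point_cloud(batch)
# every displacement is bounded by the clip value
print(jittered.shape, bool(np.abs(jittered).max() <= 0.05))  # (2, 1024, 3) True
```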
def rotate_point_cloud(batch_data):
""" Randomly rotate the point clouds to argument the dataset
rotation is per shape based along up direction
Input:
BxNx3 array, original batch of point clouds
Return:
BxNx3 array, rotated batch of point clouds
"""
rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
for k in range(batch_data.shape[0]):
rotation_angle = np.random.uniform() * 2 * np.pi
cosval = np.cos(rotation_angle)
sinval = np.sin(rotation_angle)
rotation_matrix = np.array([[cosval, 0, sinval],
[0, 1, 0],
[-sinval, 0, cosval]])
shape_pc = batch_data[k, ...]
rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
return rotated_data
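The rotation matrix used above is orthonormal, so a useful invariant to test any rotation augmentation against is that point norms are unchanged:

```python
import numpy as np

theta = np.random.uniform() * 2 * np.pi
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, 0, s],
              [0, 1, 0],
              [-s, 0, c]])           # same up-axis (y) rotation as above
pts = np.random.rand(1024, 3)
rotated = pts @ R
# R is orthonormal (R @ R.T == I), so norms are preserved
print(np.allclose(R @ R.T, np.eye(3)),
      np.allclose(np.linalg.norm(pts, axis=1),
                  np.linalg.norm(rotated, axis=1)))  # True True
```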
def rotate_point_cloud_by_angle(batch_data, rotation_angle):
""" Rotate the point cloud along up direction with certain angle.
Input:
BxNx3 array, original batch of point clouds
Return:
BxNx3 array, rotated batch of point clouds
"""
rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
for k in range(batch_data.shape[0]):
# rotation_angle = np.random.uniform() * 2 * np.pi
cosval = np.cos(rotation_angle)
sinval = np.sin(rotation_angle)
rotation_matrix = np.array([[cosval, 0, sinval],
[0, 1, 0],
[-sinval, 0, cosval]])
shape_pc = batch_data[k, ...]
rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
return rotated_data
def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
""" batch_pc: BxNx3 """
for b in range(batch_pc.shape[0]):
# np.random.random() -> Return random floats in the half-open interval [0.0, 1.0).
dropout_ratio = np.random.random() * max_dropout_ratio # 0 ~ 0.875
drop_idx = np.where(np.random.random((batch_pc.shape[1])) <= dropout_ratio)[0]
if len(drop_idx) > 0:
batch_pc[b, drop_idx, :] = batch_pc[b, 0, :] # set to the first point
return batch_pc
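Note that `random_point_dropout` keeps the array shape fixed: "dropped" points are overwritten with the cloud's first point rather than removed, so downstream batching still works. A standalone sketch of that behaviour:

```python
import numpy as np

def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
    """batch_pc: BxNx3; overwrites a random subset of points in place."""
    for b in range(batch_pc.shape[0]):
        dropout_ratio = np.random.random() * max_dropout_ratio
        drop_idx = np.where(np.random.random(batch_pc.shape[1]) <= dropout_ratio)[0]
        if len(drop_idx) > 0:
            batch_pc[b, drop_idx, :] = batch_pc[b, 0, :]  # overwrite, don't delete
    return batch_pc

batch = np.random.rand(2, 512, 3)
out = random_point_dropout(batch)
print(out.shape)  # (2, 512, 3) -- shape unchanged after "dropout"
```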
def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
""" Randomly scale the point cloud. Scale is per point cloud.
Input:
BxNx3 array, original batch of point clouds
Return:
BxNx3 array, scaled batch of point clouds
"""
B, N, C = batch_data.shape
scales = np.random.uniform(scale_low, scale_high, B)
for batch_index in range(B):
batch_data[batch_index, :, :] *= scales[batch_index]
return batch_data
def random_shift_point_cloud(batch_data, shift_range=0.1):
""" Randomly shift point cloud. Shift is per point cloud.
Input:
BxNx3 array, original batch of point clouds
Return:
BxNx3 array, shifted batch of point clouds
"""
B, N, C = batch_data.shape
shifts = np.random.uniform(-shift_range, shift_range, (B, 3))
for batch_index in range(B):
batch_data[batch_index, :, :] += shifts[batch_index, :]
return batch_data
================================================
FILE: OcCo_TF/utils/tf_util.py
================================================
# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com
import tensorflow as tf
try:
from pc_distance import tf_nndistance, tf_approxmatch
except ImportError:
# the compiled CUDA ops are only needed for the completion losses below
pass
'''mlp and conv1d with stride 1 are different'''
def mlp(features, layer_dims, bn=None, bn_params=None):
# doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/fully_connected
for i, num_outputs in enumerate(layer_dims[:-1]):
features = tf.contrib.layers.fully_connected(
features, num_outputs,
normalizer_fn=bn,
normalizer_params=bn_params,
scope='fc_%d' % i)
outputs = tf.contrib.layers.fully_connected(
features, layer_dims[-1],
activation_fn=None,
scope='fc_%d' % (len(layer_dims) - 1))
return outputs
def mlp_conv(inputs, layer_dims, bn=None, bn_params=None):
# doc: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/conv1d
for i, num_out_channel in enumerate(layer_dims[:-1]):
inputs = tf.contrib.layers.conv1d(
inputs, num_out_channel,
kernel_size=1,
normalizer_fn=bn,
normalizer_params=bn_params,
scope='conv_%d' % i)
# kernel size -> single value for all spatial dimensions
# the size of filter should be (1, 3)
outputs = tf.contrib.layers.conv1d(
inputs, layer_dims[-1],
kernel_size=1,
activation_fn=None,
scope='conv_%d' % (len(layer_dims) - 1))
return outputs
def point_maxpool(inputs, npts, keepdims=False):
# number of points, number of channels -> get the maximum value along the number of channels
outputs = [tf.reduce_max(f, axis=1, keepdims=keepdims) for f in tf.split(inputs, npts, axis=1)]
return tf.concat(outputs, axis=0)
def point_unpool(inputs, npts):
inputs = tf.split(inputs, inputs.shape[0], axis=0)
outputs = [tf.tile(f, [1, npts[i], 1]) for i, f in enumerate(inputs)]
return tf.concat(outputs, axis=1)
def chamfer(pcd1, pcd2):
"""Normalised Chamfer Distance"""
dist1, _, dist2, _ = tf_nndistance.nn_distance(pcd1, pcd2)
dist1 = tf.reduce_mean(tf.sqrt(dist1))
dist2 = tf.reduce_mean(tf.sqrt(dist2))
return (dist1 + dist2) / 2
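For reference, the same normalised Chamfer distance can be sketched in NumPy for a single (unbatched) pair of clouds; this is a brute-force O(N1*N2) pairwise computation, whereas the CUDA op above performs the nearest-neighbour search on the GPU:

```python
import numpy as np

def chamfer_np(pcd1, pcd2):
    """Mean nearest-neighbour distance in both directions, averaged."""
    # pairwise squared distances, shape (N1, N2)
    d2 = ((pcd1[:, None, :] - pcd2[None, :, :]) ** 2).sum(-1)
    dist1 = np.sqrt(d2.min(axis=1)).mean()   # pcd1 -> pcd2
    dist2 = np.sqrt(d2.min(axis=0)).mean()   # pcd2 -> pcd1
    return (dist1 + dist2) / 2

a = np.array([[0., 0., 0.], [1., 0., 0.]])
print(chamfer_np(a, a))                  # 0.0 for identical clouds
print(chamfer_np(a, a + [0., 1., 0.]))   # 1.0: every point is exactly 1 away
```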
def earth_mover(pcd1, pcd2):
"""Normalised Earth Mover Distance"""
assert pcd1.shape[1] == pcd2.shape[1] # has the same number of points
num_points = tf.cast(pcd1.shape[1], tf.float32)
match = tf_approxmatch.approx_match(pcd1, pcd2)
cost = tf_approxmatch.match_cost(pcd1, pcd2, match)
return tf.reduce_mean(cost / num_points)
def add_train_summary(name, value):
tf.summary.scalar(name, value, collections=['train_summary'])
def add_valid_summary(name, value):
avg, update = tf.metrics.mean(value)
tf.summary.scalar(name, avg, collections=['valid_summary'])
return update
''' === borrow from PointNet === '''
def _variable_on_cpu(name, shape, initializer, use_fp16=False):
"""Helper to create a Variable stored on CPU memory.
Args:
name: name of the variable
shape: list of ints
initializer: initializer for Variable
use_fp16: use 16 bit float or 32 bit float
Returns:
Variable Tensor
"""
with tf.device('/cpu:0'):
dtype = tf.float16 if use_fp16 else tf.float32
var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
return var
def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):
"""Helper to create an initialized Variable with weight decay.
Note that the Variable is initialized with a truncated normal distribution.
A weight decay is added only if one is specified.
Args:
name: name of the variable
shape: list of ints
stddev: standard deviation of a truncated Gaussian
wd: add L2Loss weight decay multiplied by this float. If None, weight
decay is not added for this Variable.
use_xavier: bool, whether to use xavier initializer
Returns:
Variable Tensor
"""
if use_xavier:
initializer = tf.contrib.layers.xavier_initializer()
else:
initializer = tf.truncated_normal_initializer(stddev=stddev)
var = _variable_on_cpu(name, shape, initializer)
if wd is not None:
weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
tf.add_to_collection('losses', weight_decay)
return var
def batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay):
""" Batch normalization on convolutional maps and beyond...
Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
Args:
inputs: Tensor, k-D input ... x C could be BC or BHWC or BDHWC
is_training: boolean tf.Variable, true indicates training phase
scope: string, variable scope
moments_dims: a list of ints, indicating dimensions for moments calculation
bn_decay: float or float tensor variable, controlling moving average weight
Return:
normed: batch-normalized maps
"""
with tf.variable_scope(scope) as sc:
num_channels = inputs.get_shape()[-1].value
beta = tf.Variable(tf.constant(0.0, shape=[num_channels]),
name='beta', trainable=True)
gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]),
name='gamma', trainable=True)
batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments') # basically just mean and variance
decay = bn_decay if bn_decay is not None else 0.9
ema = tf.train.ExponentialMovingAverage(decay=decay)
# Operator that maintains moving averages of variables.
ema_apply_op = tf.cond(is_training,
lambda: ema.apply([batch_mean, batch_var]),
lambda: tf.no_op())
# Update moving average and return current batch's avg and var.
def mean_var_with_update():
with tf.control_dependencies([ema_apply_op]):
return tf.identity(batch_mean), tf.identity(batch_var)
# ema.average returns the Variable holding the average of var.
mean, var = tf.cond(is_training,
mean_var_with_update,
lambda: (ema.average(batch_mean), ema.average(batch_var)))
normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)
'''tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)'''
return normed
SYMBOL INDEX (478 symbols across 74 files)
FILE: OcCo_TF/cls_models/dgcnn_cls.py
class Model (line 10) | class Model:
method __init__ (line 11) | def __init__(self, inputs, npts, labels, is_training, **kwargs):
method get_graph_feature (line 20) | def get_graph_feature(x, k):
method create_encoder (line 27) | def create_encoder(self, point_cloud):
method create_decoder (line 75) | def create_decoder(self, features):
method create_loss (line 98) | def create_loss(pred, label, smoothing=True):
FILE: OcCo_TF/cls_models/pcn_cls.py
class Model (line 10) | class Model:
method __init__ (line 11) | def __init__(self, inputs, npts, labels, is_training, **kwargs):
method create_encoder (line 17) | def create_encoder(self, inputs, npts):
method create_decoder (line 29) | def create_decoder(self, features):
method create_loss (line 43) | def create_loss(self, pred, label):
FILE: OcCo_TF/cls_models/pointnet_cls.py
class Model (line 13) | class Model:
method __init__ (line 14) | def __init__(self, inputs, npts, labels, is_training, **kwargs):
method create_encoder (line 21) | def create_encoder(self, inputs, npts):
method create_decoder (line 65) | def create_decoder(self, features):
method create_loss (line 79) | def create_loss(self, pred, label):
FILE: OcCo_TF/completion_models/dgcnn_cd.py
class Model (line 18) | class Model:
method __init__ (line 19) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 34) | def create_encoder(self, point_cloud, npts):
method create_decoder (line 100) | def create_decoder(self, features):
method create_loss (line 123) | def create_loss(self, gt, alpha):
FILE: OcCo_TF/completion_models/dgcnn_emd.py
class Model (line 18) | class Model:
method __init__ (line 19) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 34) | def create_encoder(self, point_cloud, npts):
method create_decoder (line 100) | def create_decoder(self, features):
method create_loss (line 123) | def create_loss(self, gt, alpha):
FILE: OcCo_TF/completion_models/pcn_cd.py
class Model (line 9) | class Model:
method __init__ (line 10) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 23) | def create_encoder(self, inputs, npts):
method create_decoder (line 33) | def create_decoder(self, features):
method create_loss (line 56) | def create_loss(self, coarse, fine, gt, alpha):
FILE: OcCo_TF/completion_models/pcn_emd.py
class Model (line 10) | class Model:
method __init__ (line 11) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 23) | def create_encoder(self, inputs, npts):
method create_decoder (line 33) | def create_decoder(self, features):
method create_loss (line 58) | def create_loss(self, coarse, fine, gt, alpha):
FILE: OcCo_TF/completion_models/pointnet_cd.py
class Model (line 13) | class Model:
method __init__ (line 14) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 28) | def create_encoder(self, inputs, npts):
method create_decoder (line 82) | def create_decoder(self, features):
method create_loss (line 106) | def create_loss(self, gt, alpha):
FILE: OcCo_TF/completion_models/pointnet_emd.py
class Model (line 16) | class Model:
method __init__ (line 17) | def __init__(self, inputs, npts, gt, alpha, **kwargs):
method create_encoder (line 31) | def create_encoder(self, inputs, npts):
method create_decoder (line 83) | def create_decoder(self, features):
method create_loss (line 106) | def create_loss(self, gt, alpha):
FILE: OcCo_TF/pc_distance/tf_approxmatch.cpp
function approxmatch_cpu (line 23) | void approxmatch_cpu(int b,int n,int m,const float * xyz1,const float * ...
function matchcost_cpu (line 85) | void matchcost_cpu(int b,int n,int m,const float * xyz1,const float * xy...
function matchcostgrad_cpu (line 106) | void matchcostgrad_cpu(int b,int n,int m,const float * xyz1,const float ...
class ApproxMatchGpuOp (line 145) | class ApproxMatchGpuOp: public OpKernel{
method ApproxMatchGpuOp (line 147) | explicit ApproxMatchGpuOp(OpKernelConstruction* context):OpKernel(cont...
method Compute (line 148) | void Compute(OpKernelContext * context)override{
class ApproxMatchOp (line 175) | class ApproxMatchOp: public OpKernel{
method ApproxMatchOp (line 177) | explicit ApproxMatchOp(OpKernelConstruction* context):OpKernel(context){}
method Compute (line 178) | void Compute(OpKernelContext * context)override{
class MatchCostGpuOp (line 201) | class MatchCostGpuOp: public OpKernel{
method MatchCostGpuOp (line 203) | explicit MatchCostGpuOp(OpKernelConstruction* context):OpKernel(contex...
method Compute (line 204) | void Compute(OpKernelContext * context)override{
class MatchCostOp (line 231) | class MatchCostOp: public OpKernel{
method MatchCostOp (line 233) | explicit MatchCostOp(OpKernelConstruction* context):OpKernel(context){}
method Compute (line 234) | void Compute(OpKernelContext * context)override{
class MatchCostGradGpuOp (line 262) | class MatchCostGradGpuOp: public OpKernel{
method MatchCostGradGpuOp (line 264) | explicit MatchCostGradGpuOp(OpKernelConstruction* context):OpKernel(co...
method Compute (line 265) | void Compute(OpKernelContext * context)override{
class MatchCostGradOp (line 296) | class MatchCostGradOp: public OpKernel{
method MatchCostGradOp (line 298) | explicit MatchCostGradOp(OpKernelConstruction* context):OpKernel(conte...
method Compute (line 299) | void Compute(OpKernelContext * context)override{
FILE: OcCo_TF/pc_distance/tf_approxmatch.py
function approx_match (line 11) | def approx_match(xyz1, xyz2):
function _approx_match_shape (line 25) | def _approx_match_shape(op):
function match_cost (line 31) | def match_cost(xyz1, xyz2, match):
function _match_cost_shape (line 43) | def _match_cost_shape(op):
function _match_cost_grad (line 51) | def _match_cost_grad(op,grad_cost):
FILE: OcCo_TF/pc_distance/tf_nndistance.cpp
function nnsearch (line 21) | static void nnsearch(int b,int n,int m,const float * xyz1,const float * ...
class NnDistanceOp (line 45) | class NnDistanceOp : public OpKernel{
method NnDistanceOp (line 47) | explicit NnDistanceOp(OpKernelConstruction* context):OpKernel(context){}
method Compute (line 48) | void Compute(OpKernelContext * context)override{
class NnDistanceGradOp (line 84) | class NnDistanceGradOp : public OpKernel{
method NnDistanceGradOp (line 86) | explicit NnDistanceGradOp(OpKernelConstruction* context):OpKernel(cont...
method Compute (line 87) | void Compute(OpKernelContext * context)override{
class NnDistanceGpuOp (line 169) | class NnDistanceGpuOp : public OpKernel{
method NnDistanceGpuOp (line 171) | explicit NnDistanceGpuOp(OpKernelConstruction* context):OpKernel(conte...
method Compute (line 172) | void Compute(OpKernelContext * context)override{
class NnDistanceGradGpuOp (line 209) | class NnDistanceGradGpuOp : public OpKernel{
method NnDistanceGradGpuOp (line 211) | explicit NnDistanceGradGpuOp(OpKernelConstruction* context):OpKernel(c...
method Compute (line 212) | void Compute(OpKernelContext * context)override{
FILE: OcCo_TF/pc_distance/tf_nndistance.py
function nn_distance (line 10) | def nn_distance(xyz1, xyz2):
function _nn_distance_grad (line 24) | def _nn_distance_grad(op, grad_dist1, grad_idx1, grad_dist2, grad_idx2):
FILE: OcCo_TF/train_cls.py
function log_string (line 62) | def log_string(out_str):
function train (line 68) | def train(args):
function train_one_epoch (line 186) | def train_one_epoch(sess, ops, train_writer):
function eval_one_epoch (line 234) | def eval_one_epoch(sess, ops, val_writer):
FILE: OcCo_TF/train_cls_dgcnn_torchloader.py
function parse_args (line 15) | def parse_args():
function main (line 56) | def main(args):
function train_one_epoch (line 174) | def train_one_epoch(sess, ops, MyLogger, train_writer):
function eval_one_epoch (line 205) | def eval_one_epoch(sess, ops, MyLogger, val_writer):
FILE: OcCo_TF/train_cls_torchloader.py
function log_string (line 70) | def log_string(out_str):
function train (line 76) | def train(args):
function train_one_epoch (line 195) | def train_one_epoch(sess, ops, train_writer):
function eval_one_epoch (line 268) | def eval_one_epoch(sess, ops, val_writer):
FILE: OcCo_TF/train_completion.py
function get_bn_decay (line 48) | def get_bn_decay(batch):
function vary2fix (line 59) | def vary2fix(inputs, npts):
function train (line 77) | def train(args):
FILE: OcCo_TF/utils/Dataset_Assign.py
function Dataset_Assign (line 5) | def Dataset_Assign(dataset, fname, partial=True, bn=False, few_shot=False):
FILE: OcCo_TF/utils/EarlyStoppingCriterion.py
class EarlyStoppingCriterion (line 3) | class EarlyStoppingCriterion(object):
method __init__ (line 12) | def __init__(self, patience=10, mode='max', min_delta=0.0):
method step (line 26) | def step(self, cur_dev_score, epoch):
FILE: OcCo_TF/utils/ModelNetDataLoader.py
function pc_normalize (line 7) | def pc_normalize(pc):
function farthest_point_sample (line 15) | def farthest_point_sample(point, npoint):
class ModelNetDataLoader (line 39) | class ModelNetDataLoader(Dataset):
method __init__ (line 40) | def __init__(self, root, npoint=1024, split='train', uniform=False, no...
method __len__ (line 63) | def __len__(self):
method _get_item (line 66) | def _get_item(self, index):
method __getitem__ (line 89) | def __getitem__(self, index):
class General_CLSDataLoader_HDF5 (line 93) | class General_CLSDataLoader_HDF5(Dataset):
method __init__ (line 94) | def __init__(self, file_list, num_point=1024):
method loadh5DataFile (line 116) | def loadh5DataFile(PathtoFile):
method __len__ (line 120) | def __len__(self):
method __getitem__ (line 123) | def __getitem__(self, index):
class ModelNetJigsawDataLoader (line 131) | class ModelNetJigsawDataLoader(Dataset):
method __init__ (line 132) | def __init__(self, root=r'./data/modelnet40_ply_hdf5_2048/jigsaw',
method loadh5DataFile (line 164) | def loadh5DataFile(PathtoFile):
method __getitem__ (line 168) | def __getitem__(self, index):
method __len__ (line 177) | def __len__(self):
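The `farthest_point_sample` entry above follows the standard greedy FPS used throughout PointNet-style loaders. As a rough reference for what that signature likely computes (this is an illustrative NumPy sketch, not the repo's implementation; the starting index is fixed here, whereas implementations often randomize it):

```python
import numpy as np

def farthest_point_sample(points, npoint):
    """Greedy farthest point sampling.

    points: (N, 3) array; returns (npoint, 3) subset that greedily
    maximizes the minimum pairwise distance to already-chosen points.
    """
    n = points.shape[0]
    chosen = np.zeros(npoint, dtype=np.int64)
    min_dist = np.full(n, np.inf)   # distance to nearest chosen point so far
    idx = 0                          # deterministic start; often randomized
    for i in range(npoint):
        chosen[i] = idx
        d = ((points - points[idx]) ** 2).sum(axis=-1)
        min_dist = np.minimum(min_dist, d)
        idx = int(min_dist.argmax())  # next pick: farthest from chosen set
    return points[chosen]
```

For four collinear points this picks the two extremes, which is the behavior the loaders rely on to spread samples over the shape.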
FILE: OcCo_TF/utils/Train_Logger.py
class TrainLogger (line 7) | class TrainLogger:
method __init__ (line 9) | def __init__(self, args, name='Model', subfold='cls', cls2name=None):
method logger_setup (line 24) | def logger_setup(self):
method make_logdir (line 42) | def make_logdir(self):
method epoch_init (line 62) | def epoch_init(self, training=True):
method step_update (line 67) | def step_update(self, pred, gt, loss, training=True):
method cls_epoch_update (line 76) | def cls_epoch_update(self, training=True):
method seg_epoch_update (line 95) | def seg_epoch_update(self, training=True):
method epoch_summary (line 109) | def epoch_summary(self, writer=None, training=True):
method train_summary (line 129) | def train_summary(self):
method update_from_checkpoints (line 136) | def update_from_checkpoints(self, checkpoint):
method update_from_checkpoints_tf (line 145) | def update_from_checkpoints_tf(self, checkpoint):
FILE: OcCo_TF/utils/check_scale.py
function log_string (line 9) | def log_string(msg):
FILE: OcCo_TF/utils/data_util.py
function resample_pcd (line 8) | def resample_pcd(pcd, n):
class PreprocessData (line 16) | class PreprocessData(dataflow.ProxyDataFlow):
method __init__ (line 17) | def __init__(self, ds, input_size, output_size):
method get_data (line 23) | def get_data(self):
class BatchData (line 30) | class BatchData(dataflow.ProxyDataFlow):
method __init__ (line 31) | def __init__(self, ds, batch_size, input_size, gt_size, remainder=Fals...
method __len__ (line 39) | def __len__(self):
method __iter__ (line 48) | def __iter__(self):
method _aggregate_batch (line 59) | def _aggregate_batch(self, data_holder, use_list=False):
function lmdb_dataflow (line 74) | def lmdb_dataflow(lmdb_path, batch_size, input_size, output_size, is_tra...
function get_queued_data (line 91) | def get_queued_data(generator, dtypes, shapes, queue_capacity=10):
FILE: OcCo_TF/utils/io_util.py
function read_pcd (line 9) | def read_pcd(filename):
function save_pcd (line 14) | def save_pcd(filename, points):
function shuffle_data (line 20) | def shuffle_data(data, labels):
function loadh5DataFile (line 27) | def loadh5DataFile(PathtoFile):
function getDataFiles (line 32) | def getDataFiles(list_filename):
function save_h5 (line 36) | def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='u...
FILE: OcCo_TF/utils/pc_util.py
function jitter_point_cloud (line 6) | def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):
function rotate_point_cloud (line 20) | def rotate_point_cloud(batch_data):
function rotate_point_cloud_by_angle (line 41) | def rotate_point_cloud_by_angle(batch_data, rotation_angle):
function random_point_dropout (line 61) | def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
function random_scale_point_cloud (line 72) | def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
function random_shift_point_cloud (line 86) | def random_shift_point_cloud(batch_data, shift_range=0.1):
FILE: OcCo_TF/utils/tf_util.py
function mlp (line 12) | def mlp(features, layer_dims, bn=None, bn_params=None):
function mlp_conv (line 27) | def mlp_conv(inputs, layer_dims, bn=None, bn_params=None):
function point_maxpool (line 46) | def point_maxpool(inputs, npts, keepdims=False):
function point_unpool (line 52) | def point_unpool(inputs, npts):
function chamfer (line 58) | def chamfer(pcd1, pcd2):
function earth_mover (line 66) | def earth_mover(pcd1, pcd2):
function add_train_summary (line 75) | def add_train_summary(name, value):
function add_valid_summary (line 79) | def add_valid_summary(name, value):
function _variable_on_cpu (line 88) | def _variable_on_cpu(name, shape, initializer, use_fp16=False):
function _variable_with_weight_decay (line 104) | def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):
function batch_norm_template (line 132) | def batch_norm_template(inputs, is_training, scope, moments_dims, bn_dec...
function batch_norm_for_fc (line 176) | def batch_norm_for_fc(inputs, is_training, bn_decay, scope):
function fully_connected (line 190) | def fully_connected(inputs,
function max_pool2d (line 231) | def max_pool2d(inputs,
function dropout (line 264) | def dropout(inputs,
function conv2d (line 288) | def conv2d(inputs,
function batch_norm_for_conv2d (line 364) | def batch_norm_for_conv2d(inputs, is_training, bn_decay, scope):
function batch_norm_template (line 369) | def batch_norm_template(inputs, is_training, scope, moments_dims, bn_dec...
function pairwise_distance (line 417) | def pairwise_distance(point_cloud):
function knn (line 439) | def knn(adj_matrix, k=20):
function get_edge_feature (line 453) | def get_edge_feature(point_cloud, nn_idx, k=20):
function get_learning_rate (line 489) | def get_learning_rate(batch, base_lr, batch_size, decay_step, decay_rate...
function get_lr_dgcnn (line 500) | def get_lr_dgcnn(batch, base_lr, batch_size, decay_step, alpha):
function get_bn_decay (line 509) | def get_bn_decay(batch, bn_init_decay, batch_size, bn_decay_step, bn_dec...
FILE: OcCo_TF/utils/transfer_pretrained_w.py
function load_para_from_saved_model (line 7) | def load_para_from_saved_model(model_path, verbose=False):
function intersec_saved_var (line 24) | def intersec_saved_var(model_path1, model_path2, verbose=False):
function load_pretrained_var (line 40) | def load_pretrained_var(source_model_path, target_model_path, verbose=Fa...
FILE: OcCo_TF/utils/transform_nets.py
function input_transform_net_dgcnn (line 7) | def input_transform_net_dgcnn(edge_feature, is_training, bn_decay=None, ...
function input_transform_net (line 56) | def input_transform_net(point_cloud, is_training, bn_decay=None, K=3):
function feature_transform_net (line 113) | def feature_transform_net(inputs, is_training, bn_decay=None, K=64):
FILE: OcCo_TF/utils/visu_util.py
function plot_pcd_three_views (line 11) | def plot_pcd_three_views(filename, pcds, titles, suptitle='', sizes=None...
FILE: OcCo_Torch/chamfer_distance/chamfer_distance.cpp
function chamfer_distance_forward_cuda (line 27) | void chamfer_distance_forward_cuda(
function chamfer_distance_backward_cuda (line 41) | void chamfer_distance_backward_cuda(
function nnsearch (line 59) | void nnsearch(
function chamfer_distance_forward (line 90) | void chamfer_distance_forward(
function chamfer_distance_backward (line 114) | void chamfer_distance_backward(
function PYBIND11_MODULE (line 180) | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
FILE: OcCo_Torch/chamfer_distance/chamfer_distance.py
class ChamferDistanceFunction (line 10) | class ChamferDistanceFunction(torch.autograd.Function):
method forward (line 12) | def forward(ctx, xyz1, xyz2):
method backward (line 37) | def backward(ctx, graddist1, graddist2):
class ChamferDistance (line 56) | class ChamferDistance(nn.Module):
method forward (line 57) | def forward(self, xyz1, xyz2):
class get_model (line 61) | class get_model(nn.Module):
method __init__ (line 62) | def __init__(self, channel=3):
method forward (line 67) | def forward(self, x):
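The `chamfer_distance` extension above wraps a custom CUDA kernel, but the quantity it computes can be stated compactly. A minimal NumPy reference (illustrative only; the repo's kernel returns per-point distances and indices for the backward pass, while this sketch reduces straight to the symmetric scalar):

```python
import numpy as np

def chamfer_naive(xyz1, xyz2):
    """Symmetric Chamfer distance between point sets (N, 3) and (M, 3).

    For each point, take the squared distance to its nearest neighbor in
    the other set, then average both directions and sum them.
    """
    diff = xyz1[:, None, :] - xyz2[None, :, :]   # (N, M, 3) pairwise offsets
    sq = (diff ** 2).sum(axis=-1)                # (N, M) squared distances
    return sq.min(axis=1).mean() + sq.min(axis=0).mean()
```

Identical clouds give exactly zero, which is a handy sanity check when validating a compiled kernel against a dense reference.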
FILE: OcCo_Torch/models/dgcnn_cls.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 9) | def __init__(self, args, num_channel=3, num_class=40, **kwargs):
method forward (line 42) | def forward(self, x):
class get_loss (line 75) | class get_loss(torch.nn.Module):
method __init__ (line 76) | def __init__(self):
method cal_loss (line 80) | def cal_loss(pred, gold, smoothing=True):
method forward (line 96) | def forward(self, pred, target):
FILE: OcCo_Torch/models/dgcnn_jigsaw.py
class get_model (line 8) | class get_model(nn.Module):
method __init__ (line 9) | def __init__(self, args, num_class, **kwargs):
method forward (line 50) | def forward(self, x):
class get_loss (line 89) | class get_loss(torch.nn.Module):
method __init__ (line 90) | def __init__(self):
method cal_loss (line 94) | def cal_loss(pred, gold, smoothing=False):
method forward (line 111) | def forward(self, pred, target):
FILE: OcCo_Torch/models/dgcnn_occo.py
class get_model (line 11) | class get_model(nn.Module):
method __init__ (line 12) | def __init__(self, **kwargs):
method build_grid (line 61) | def build_grid(self, batch_size):
method tile (line 69) | def tile(self, tensor, multiples):
method expand_dims (line 89) | def expand_dims(tensor, dim):
method forward (line 94) | def forward(self, x):
class get_loss (line 140) | class get_loss(nn.Module):
method __init__ (line 141) | def __init__(self):
method dist_cd (line 145) | def dist_cd(pc1, pc2):
method forward (line 150) | def forward(self, coarse, fine, gt, alpha):
FILE: OcCo_Torch/models/dgcnn_partseg.py
class get_model (line 9) | class get_model(nn.Module):
method __init__ (line 10) | def __init__(self, args, part_num=50, num_channel=3, **kwargs):
method forward (line 61) | def forward(self, x, l):
class get_loss (line 111) | class get_loss(nn.Module):
method __init__ (line 112) | def __init__(self):
method cal_loss (line 116) | def cal_loss(pred, gold, smoothing=False):
method forward (line 133) | def forward(self, pred, target):
FILE: OcCo_Torch/models/dgcnn_semseg.py
class get_model (line 9) | class get_model(nn.Module):
method __init__ (line 10) | def __init__(self, args, num_class, num_channel=9, **kwargs):
method forward (line 50) | def forward(self, x):
class get_loss (line 83) | class get_loss(nn.Module):
method __init__ (line 84) | def __init__(self):
method cal_loss (line 88) | def cal_loss(pred, gold, smoothing=False):
method forward (line 105) | def forward(self, pred, target):
FILE: OcCo_Torch/models/dgcnn_util.py
function knn (line 6) | def knn(x, k):
function get_graph_feature (line 14) | def get_graph_feature(x, k=20, idx=None, extra_dim=False):
class T_Net (line 38) | class T_Net(nn.Module):
method __init__ (line 41) | def __init__(self, channel=3, k=3):
method forward (line 67) | def forward(self, x):
class encoder (line 86) | class encoder(nn.Module):
method __init__ (line 87) | def __init__(self, channel=3, **kwargs):
method forward (line 111) | def forward(self, x):
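The `knn` helper above underpins DGCNN's `get_graph_feature` (edge features are gathered from each point's k nearest neighbors). A dense NumPy sketch of the neighbor-index step, under the assumption that neighbors are ranked by squared Euclidean distance with the point itself included, as in the common DGCNN formulation:

```python
import numpy as np

def knn_indices(x, k):
    """k-nearest-neighbor indices for each row of x.

    x: (N, D) array; returns (N, k) integer indices, nearest first
    (each point's own index appears first, since self-distance is 0).
    """
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)  # (N, N)
    return np.argsort(sq, axis=1)[:, :k]
```

The repo's Torch version does the same ranking on-device with a matrix-multiply expansion of the squared distances rather than an explicit (N, N, D) difference tensor.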
FILE: OcCo_Torch/models/pcn_cls.py
class get_model (line 6) | class get_model(nn.Module):
method __init__ (line 7) | def __init__(self, num_class=40, num_channel=3, **kwargs):
method forward (line 19) | def forward(self, x):
class get_loss (line 32) | class get_loss(nn.Module):
method __init__ (line 33) | def __init__(self):
method forward (line 36) | def forward(self, pred, target):
FILE: OcCo_Torch/models/pcn_jigsaw.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 8) | def __init__(self, num_class, num_channel=3, **kwargs):
method forward (line 20) | def forward(self, x):
class get_loss (line 33) | class get_loss(nn.Module):
method __init__ (line 34) | def __init__(self):
method forward (line 37) | def forward(self, pred, target, trans_feat, weight):
FILE: OcCo_Torch/models/pcn_occo.py
class get_model (line 12) | class get_model(nn.Module):
method __init__ (line 13) | def __init__(self, **kwargs):
method build_grid (line 47) | def build_grid(self, batch_size):
method tile (line 55) | def tile(self, tensor, multiples):
method expand_dims (line 75) | def expand_dims(tensor, dim):
method forward (line 81) | def forward(self, x):
class get_loss (line 106) | class get_loss(nn.Module):
method __init__ (line 107) | def __init__(self):
method dist_cd (line 111) | def dist_cd(pc1, pc2):
method forward (line 116) | def forward(self, coarse, fine, gt, alpha):
FILE: OcCo_Torch/models/pcn_partseg.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 8) | def __init__(self, part_num=50, num_channel=3, **kwargs):
method forward (line 21) | def forward(self, point_cloud, label):
class get_loss (line 33) | class get_loss(nn.Module):
method __init__ (line 34) | def __init__(self):
method forward (line 37) | def forward(self, pred, target):
FILE: OcCo_Torch/models/pcn_semseg.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 8) | def __init__(self, num_class, num_channel=9, **kwargs):
method forward (line 20) | def forward(self, x):
class get_loss (line 33) | class get_loss(nn.Module):
method __init__ (line 34) | def __init__(self):
method forward (line 37) | def forward(self, pred, target):
FILE: OcCo_Torch/models/pcn_util.py
class PCNEncoder (line 5) | class PCNEncoder(nn.Module):
method __init__ (line 6) | def __init__(self, global_feat=False, channel=3):
method forward (line 16) | def forward(self, x):
class PCNPartSegEncoder (line 38) | class PCNPartSegEncoder(nn.Module):
method __init__ (line 39) | def __init__(self, channel=3):
method forward (line 47) | def forward(self, x, label):
class encoder (line 70) | class encoder(nn.Module):
method __init__ (line 71) | def __init__(self, num_channel=3, **kwargs):
method forward (line 75) | def forward(self, x):
FILE: OcCo_Torch/models/pointnet_cls.py
class get_model (line 8) | class get_model(nn.Module):
method __init__ (line 9) | def __init__(self, num_class=40, num_channel=3, **kwargs):
method forward (line 20) | def forward(self, x):
class get_loss (line 29) | class get_loss(nn.Module):
method __init__ (line 30) | def __init__(self, mat_diff_loss_scale=0.001):
method forward (line 34) | def forward(self, pred, target, trans_feat):
FILE: OcCo_Torch/models/pointnet_jigsaw.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 8) | def __init__(self, num_class, num_channel=3, **kwargs):
method forward (line 22) | def forward(self, x):
class get_loss (line 35) | class get_loss(nn.Module):
method __init__ (line 36) | def __init__(self, mat_diff_loss_scale=0.001):
method forward (line 40) | def forward(self, pred, target, trans_feat):
FILE: OcCo_Torch/models/pointnet_occo.py
class get_model (line 14) | class get_model(nn.Module):
method __init__ (line 15) | def __init__(self, **kwargs):
method build_grid (line 43) | def build_grid(self, batch_size):
method tile (line 51) | def tile(self, tensor, multiples):
method expand_dims (line 71) | def expand_dims(tensor, dim):
method forward (line 76) | def forward(self, x):
class get_loss (line 99) | class get_loss(nn.Module):
method __init__ (line 100) | def __init__(self):
method dist_cd (line 104) | def dist_cd(pc1, pc2):
method forward (line 109) | def forward(self, coarse, fine, gt, alpha):
FILE: OcCo_Torch/models/pointnet_partseg.py
class get_model (line 8) | class get_model(nn.Module):
method __init__ (line 9) | def __init__(self, part_num=50, num_channel=3, **kwargs):
method forward (line 23) | def forward(self, point_cloud, label):
class get_loss (line 37) | class get_loss(nn.Module):
method __init__ (line 38) | def __init__(self, mat_diff_loss_scale=0.001):
method forward (line 42) | def forward(self, pred, target, trans_feat):
FILE: OcCo_Torch/models/pointnet_semseg.py
class get_model (line 7) | class get_model(nn.Module):
method __init__ (line 8) | def __init__(self, num_class=13, num_channel=9, **kwargs):
method forward (line 23) | def forward(self, x):
class get_loss (line 36) | class get_loss(nn.Module):
method __init__ (line 37) | def __init__(self, mat_diff_loss_scale=0.001):
method forward (line 41) | def forward(self, pred, target, trans_feat):
FILE: OcCo_Torch/models/pointnet_util.py
function feature_transform_regularizer (line 8) | def feature_transform_regularizer(trans):
class STN3d (line 18) | class STN3d(nn.Module):
method __init__ (line 19) | def __init__(self, channel):
method forward (line 35) | def forward(self, x):
class STNkd (line 54) | class STNkd(nn.Module):
method __init__ (line 55) | def __init__(self, k=64):
method forward (line 73) | def forward(self, x):
class PointNetEncoder (line 93) | class PointNetEncoder(nn.Module):
method __init__ (line 94) | def __init__(self, global_feat=True, feature_transform=False,
method forward (line 111) | def forward(self, x):
class PointNetPartSegEncoder (line 153) | class PointNetPartSegEncoder(nn.Module):
method __init__ (line 154) | def __init__(self, feature_transform=True, channel=3):
method forward (line 173) | def forward(self, point_cloud, label):
class encoder (line 207) | class encoder(nn.Module):
method __init__ (line 208) | def __init__(self, num_channel=3, **kwargs):
method forward (line 212) | def forward(self, x):
class detailed_encoder (line 217) | class detailed_encoder(nn.Module):
method __init__ (line 218) | def __init__(self, num_channel=3, **kwargs):
method forward (line 224) | def forward(self, x):
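`feature_transform_regularizer` above is PointNet's orthogonality penalty on the learned feature transform. A NumPy sketch of the usual form, ||I − A·Aᵀ||_F averaged over the batch (illustrative; the repo's version operates on Torch tensors):

```python
import numpy as np

def feature_transform_regularizer(trans):
    """Orthogonality penalty for a batch of (K, K) transforms.

    trans: (B, K, K); returns mean Frobenius norm of (A @ A.T - I),
    which is zero iff every transform in the batch is orthogonal.
    """
    k = trans.shape[1]
    prod = trans @ trans.transpose(0, 2, 1)        # (B, K, K)
    return np.linalg.norm(prod - np.eye(k), axis=(1, 2)).mean()
```

An identity batch scores exactly zero; scaling the transforms away from orthogonality grows the penalty, which is what keeps the T-Net's output well-conditioned during training.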
FILE: OcCo_Torch/train_cls.py
function parse_args (line 20) | def parse_args():
function main (line 54) | def main(args):
FILE: OcCo_Torch/train_completion.py
function parse_args (line 18) | def parse_args():
function main (line 50) | def main(args):
FILE: OcCo_Torch/train_jigsaw.py
function parse_args (line 17) | def parse_args():
function main (line 43) | def main(args):
FILE: OcCo_Torch/train_partseg.py
function parse_args (line 39) | def parse_args():
function main (line 68) | def main(args):
FILE: OcCo_Torch/train_semseg.py
function parse_args (line 28) | def parse_args():
function main (line 55) | def main(args):
FILE: OcCo_Torch/train_svm.py
function parse_args (line 18) | def parse_args():
FILE: OcCo_Torch/utils/3DPC_Data_Gen.py
function pc_ssl_3djigsaw_gen (line 12) | def pc_ssl_3djigsaw_gen(pc_xyz, k=2, edge_len=1):
function loadh5DataFile (line 60) | def loadh5DataFile(PathtoFile):
function reduce2fix (line 65) | def reduce2fix(pc, n_points=1024):
FILE: OcCo_Torch/utils/Dataset_Loc.py
function Dataset_Loc (line 5) | def Dataset_Loc(dataset, fname, partial=True, bn=False, few_shot=False):
FILE: OcCo_Torch/utils/Inference_Timer.py
class Inference_Timer (line 5) | class Inference_Timer:
method __init__ (line 6) | def __init__(self, args):
method update_args (line 24) | def update_args(self):
method single_step (line 27) | def single_step(self, model, data):
method update_single_epoch (line 38) | def update_single_epoch(self, logger):
FILE: OcCo_Torch/utils/LMDB_DataFlow.py
function resample_pcd (line 9) | def resample_pcd(pcd, n):
class PreprocessData (line 17) | class PreprocessData(dataflow.ProxyDataFlow):
method __init__ (line 19) | def __init__(self, ds, input_size, output_size):
method get_data (line 24) | def get_data(self):
class BatchData (line 31) | class BatchData(dataflow.ProxyDataFlow):
method __init__ (line 32) | def __init__(self, ds, batch_size, input_size, gt_size, remainder=Fals...
method __len__ (line 40) | def __len__(self):
method __iter__ (line 49) | def __iter__(self):
method _aggregate_batch (line 60) | def _aggregate_batch(self, data_holder, use_list=False):
function lmdb_dataflow (line 75) | def lmdb_dataflow(lmdb_path, batch_size, input_size, output_size, is_tra...
FILE: OcCo_Torch/utils/LMDB_Writer.py
function sample_from_mesh (line 8) | def sample_from_mesh(filename, num_samples=16384):
class pcd_df (line 13) | class pcd_df(DataFlow):
method __init__ (line 14) | def __init__(self, model_list, num_scans, partial_dir, complete_dir, n...
method size (line 21) | def size(self):
method read_pcd (line 25) | def read_pcd(filename):
method get_data (line 29) | def get_data(self):
FILE: OcCo_Torch/utils/ModelNetDataLoader.py
function pc_normalize (line 8) | def pc_normalize(pc):
function farthest_point_sample (line 16) | def farthest_point_sample(point, npoint):
class ModelNetDataLoader (line 40) | class ModelNetDataLoader(Dataset):
method __init__ (line 41) | def __init__(self, root, npoint=1024, split='train', uniform=False, no...
method __len__ (line 63) | def __len__(self):
method _get_item (line 66) | def _get_item(self, index):
method __getitem__ (line 89) | def __getitem__(self, index):
class General_CLSDataLoader_HDF5 (line 93) | class General_CLSDataLoader_HDF5(Dataset):
method __init__ (line 94) | def __init__(self, file_list, num_point=1024):
method loadh5DataFile (line 112) | def loadh5DataFile(PathtoFile):
method __len__ (line 116) | def __len__(self):
method __getitem__ (line 119) | def __getitem__(self, index):
class ModelNetJigsawDataLoader (line 125) | class ModelNetJigsawDataLoader(Dataset):
method __init__ (line 126) | def __init__(self, root=r'./data/modelnet40_ply_hdf5_2048/jigsaw',
method loadh5DataFile (line 156) | def loadh5DataFile(PathtoFile):
method __getitem__ (line 160) | def __getitem__(self, index):
method __len__ (line 165) | def __len__(self):
FILE: OcCo_Torch/utils/PC_Augmentation.py
function pc_normalize (line 12) | def pc_normalize(pc):
function farthest_point_sample (line 21) | def farthest_point_sample(point, npoint):
function random_shift_point_cloud (line 39) | def random_shift_point_cloud(batch_data, shift_range=0.1):
function random_scale_point_cloud (line 48) | def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):
function random_point_dropout (line 57) | def random_point_dropout(batch_pc, max_dropout_ratio=0.875):
function translate_pointcloud_dgcnn (line 67) | def translate_pointcloud_dgcnn(pointcloud):
function jitter_pointcloud_dgcnn (line 75) | def jitter_pointcloud_dgcnn(pointcloud, sigma=0.01, clip=0.02):
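`pc_normalize` above is the standard unit-sphere normalization applied before the augmentations in this file. A minimal sketch of what that signature conventionally does (center on the centroid, then scale by the farthest point's norm; illustrative, not copied from the repo):

```python
import numpy as np

def pc_normalize(pc):
    """Center a point cloud and scale it into the unit sphere.

    pc: (N, 3) array; returns the cloud with zero centroid and
    maximum point norm equal to 1.
    """
    pc = pc - pc.mean(axis=0)                     # translate centroid to origin
    scale = np.sqrt((pc ** 2).sum(axis=1)).max()  # radius of farthest point
    return pc / scale
```

Normalizing first makes the scale/shift/jitter augmentations that follow comparable across shapes of different physical sizes.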
FILE: OcCo_Torch/utils/S3DISDataLoader.py
class S3DISDataset_HDF5 (line 20) | class S3DISDataset_HDF5(Dataset):
method __init__ (line 23) | def __init__(self, root='data/indoor3d_sem_seg_hdf5_data', split='trai...
method getDataFiles (line 56) | def getDataFiles(list_filename):
method loadh5DataFile (line 60) | def loadh5DataFile(PathtoFile):
method __getitem__ (line 64) | def __getitem__(self, index):
method __len__ (line 70) | def __len__(self):
class S3DISDataset (line 74) | class S3DISDataset(Dataset):
method __init__ (line 76) | def __init__(self, root, block_points=4096, split='train', test_area=5...
method __getitem__ (line 120) | def __getitem__(self, index):
method __len__ (line 158) | def __len__(self):
class S3DISDatasetWholeScene (line 162) | class S3DISDatasetWholeScene:
method __init__ (line 163) | def __init__(self, root, block_points=8192, split='val', test_area=5, ...
method __getitem__ (line 198) | def __getitem__(self, index):
method __len__ (line 240) | def __len__(self):
class ScannetDatasetWholeScene_evaluation (line 244) | class ScannetDatasetWholeScene_evaluation:
method __init__ (line 246) | def __init__(self, root=root, block_points=8192, split='test', test_ar...
method chunks (line 286) | def chunks(l, n):
method split_data (line 292) | def split_data(data, idx):
method nearest_dist (line 299) | def nearest_dist(block_center, block_center_list):
method __getitem__ (line 306) | def __getitem__(self, index):
method __len__ (line 401) | def __len__(self):
FILE: OcCo_Torch/utils/ShapeNetDataLoader.py
class PartNormalDataset (line 9) | class PartNormalDataset(Dataset):
method __init__ (line 13) | def __init__(self, root, num_point=2048, split='train', use_normal=Fal...
method read_fns (line 67) | def read_fns(path):
method __getitem__ (line 72) | def __getitem__(self, index):
method __len__ (line 90) | def __len__(self):
FILE: OcCo_Torch/utils/TSNE_Visu.py
function parse_args (line 15) | def parse_args():
FILE: OcCo_Torch/utils/Torch_Utility.py
function seed_torch (line 6) | def seed_torch(seed=1029):
function copy_parameters (line 17) | def copy_parameters(model, pretrained, verbose=True):
function weights_init (line 36) | def weights_init(m):
function bn_momentum_adjust (line 48) | def bn_momentum_adjust(m, momentum):
FILE: OcCo_Torch/utils/TrainLogger.py
class TrainLogger (line 7) | class TrainLogger:
method __init__ (line 9) | def __init__(self, args, name='model', subfold='cls', filename='train_...
method setup (line 24) | def setup(self, filename='train_log'):
method mkdir (line 42) | def mkdir(self):
method epoch_init (line 61) | def epoch_init(self, training=True):
method step_update (line 66) | def step_update(self, pred, gt, loss, training=True):
method epoch_update (line 75) | def epoch_update(self, training=True, mode='cls'):
method epoch_summary (line 102) | def epoch_summary(self, writer=None, training=True, mode='cls'):
method calculate_IoU (line 130) | def calculate_IoU(self):
method train_summary (line 141) | def train_summary(self, mode='cls'):
method update_from_checkpoints (line 152) | def update_from_checkpoints(self, checkpoint):
FILE: OcCo_Torch/utils/Visu_Utility.py
function plot_pcd_three_views (line 12) | def plot_pcd_three_views(filename, pcds, titles, suptitle='', sizes=None...
FILE: OcCo_Torch/utils/gen_indoor3d_h5.py
function insert_batch (line 39) | def insert_batch(data, label, last_batch=False):
FILE: OcCo_Torch/utils/indoor3d_util.py
function collect_point_label (line 36) | def collect_point_label(anno_path, out_filename, file_format='txt'):
function data_to_obj (line 79) | def data_to_obj(data, name='example.obj', no_wall=True):
function point_label_to_obj (line 90) | def point_label_to_obj(input_filename, out_filename, label_color=True, e...
function sample_data (line 120) | def sample_data(data, num_sample):
function sample_data_label (line 138) | def sample_data_label(data, label, num_sample):
function room2blocks (line 145) | def room2blocks(data, label, num_point, block_size=1.0, stride=1.0,
function room2blocks_plus (line 216) | def room2blocks_plus(data_label, num_point, block_size, stride,
function room2blocks_wrapper (line 228) | def room2blocks_wrapper(data_label_filename, num_point, block_size=1.0, ...
function room2blocks_plus_normalized (line 241) | def room2blocks_plus_normalized(data_label, num_point, block_size, stride,
function room2blocks_wrapper_normalized (line 268) | def room2blocks_wrapper_normalized(data_label_filename, num_point, block...
function room2samples (line 281) | def room2samples(data, label, sample_num_point):
function room2samples_plus_normalized (line 318) | def room2samples_plus_normalized(data_label, num_point):
function room2samples_wrapper_normalized (line 344) | def room2samples_wrapper_normalized(data_label_filename, num_point):
function collect_bounding_box (line 359) | def collect_bounding_box(anno_path, out_filename):
function bbox_label_to_obj (line 402) | def bbox_label_to_obj(input_filename, out_filename_prefix, easy_view=Fal...
function bbox_label_to_obj_room (line 466) | def bbox_label_to_obj_room(input_filename, out_filename_prefix, easy_vie...
function collect_point_bounding_box (line 546) | def collect_point_bounding_box(anno_path, out_filename, file_format):
FILE: OcCo_Torch/utils/lmdb2hdf5.py
function fix2len (line 8) | def fix2len(point_cloud, fix_length):
FILE: render/Depth_Renderer.py
function random_pose (line 8) | def random_pose():
function setup_blender (line 32) | def setup_blender(width, height, focal_length):
FILE: render/EXR_Process.py
function read_exr (line 11) | def read_exr(exr_path, height, width):
function depth2pcd (line 21) | def depth2pcd(depth, intrinsics, pose):
FILE: sample/mesh_sampling.cpp
function uniform_deviate (line 53) | inline double
function randomPointTriangle (line 60) | inline void
function randPSurface (line 84) | inline void
function uniform_sampling (line 112) | void
function printHelp (line 159) | void
function main (line 179) | int
Condensed preview — 112 files, each showing path, character count, and a content snippet (977K chars of structured content in total).
[
{
"path": ".gitignore",
"chars": 1928,
"preview": "# Byte-compiled / optimized / DLL files\nresults/*/plots\n*/log/\ndemo/\n*/para_restored.txt\n*/pc_distance/__pycache__\n.idea"
},
{
"path": "LICENSE",
"chars": 1069,
"preview": "MIT License\n\nCopyright (c) 2020 Hanchen Wang\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
},
{
"path": "OcCo_TF/.gitignore",
"chars": 1944,
"preview": "# others code\nresults/*/plots\nlog/\ndemo/\ndemo_data/\npara_restored.txt\npc_distance/__pycache__\n\n# Byte-compiled / optimiz"
},
{
"path": "OcCo_TF/Requirements_TF.txt",
"chars": 258,
"preview": "# Originally Designed for Docker Environment, TensorFlow 1.12.0/1.15.0, Python 3.7, CUDA 10.0\n\nlmdb>=0.9\nnumpy>=1.14.0\nh"
},
{
"path": "OcCo_TF/cls_models/__init__.py",
"chars": 66,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n"
},
{
"path": "OcCo_TF/cls_models/dgcnn_cls.py",
"chars": 7520,
"preview": "# Author: Hanchen Wang (hw501@cam.ac.uk)\n# Ref: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/models/dgcnn.p"
},
{
"path": "OcCo_TF/cls_models/pcn_cls.py",
"chars": 3838,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport sys, tensorflow as tf\nsys.path.append('../')\nfr"
},
{
"path": "OcCo_TF/cls_models/pointnet_cls.py",
"chars": 5274,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport sys, os\nimport tensorflow as tf\nBASE_DIR = os.p"
},
{
"path": "OcCo_TF/completion_models/__init__.py",
"chars": 66,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n"
},
{
"path": "OcCo_TF/completion_models/dgcnn_cd.py",
"chars": 6223,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n# author: Hanchen Wang\n\nimport os, sys, tensorflow as "
},
{
"path": "OcCo_TF/completion_models/dgcnn_emd.py",
"chars": 6278,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n# author: Hanchen Wang\n\nimport os, sys, tensorflow as "
},
{
"path": "OcCo_TF/completion_models/pcn_cd.py",
"chars": 3417,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd"
},
{
"path": "OcCo_TF/completion_models/pcn_emd.py",
"chars": 3370,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n# Author: Wentao Yuan (wyuan1@cs.cmu.edu) 05/31/2018\n"
},
{
"path": "OcCo_TF/completion_models/pointnet_cd.py",
"chars": 5801,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport os, sys, tensorflow as tf\nBASE_DIR = os.path.di"
},
{
"path": "OcCo_TF/completion_models/pointnet_emd.py",
"chars": 6142,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport os, sys, tensorflow as tf\nBASE_DIR = os.path.di"
},
{
"path": "OcCo_TF/docker/.dockerignore",
"chars": 17,
"preview": "../data/\n../log/\n"
},
{
"path": "OcCo_TF/docker/Dockerfile_TF",
"chars": 2271,
"preview": "FROM tensorflow/tensorflow:1.12.0-gpu-py3\n\nWORKDIR /workspace/OcCo_TF\nRUN mkdir /home/hcw\nRUN chmod -R 777 /home/hcw\nRUN"
},
{
"path": "OcCo_TF/pc_distance/__init__.py",
"chars": 53,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n"
},
{
"path": "OcCo_TF/pc_distance/makefile",
"chars": 1160,
"preview": "cuda_inc = /usr/local/cuda-9.0/include/\ncuda_lib = /usr/local/cuda-9.0/lib64/\nnvcc = /usr/local/cuda-9.0/bin/nvcc\ntf_inc"
},
{
"path": "OcCo_TF/pc_distance/tf_approxmatch.cpp",
"chars": 14387,
"preview": "#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include <algorithm>\n#include"
},
{
"path": "OcCo_TF/pc_distance/tf_approxmatch.cu",
"chars": 9025,
"preview": "__global__ void approxmatch(int b,int n,int m,const float * __restrict__ xyz1,const float * __restrict__ xyz2,float * __"
},
{
"path": "OcCo_TF/pc_distance/tf_approxmatch.py",
"chars": 4478,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport tensorflow as tf\nfrom tensorflow.python.framewo"
},
{
"path": "OcCo_TF/pc_distance/tf_nndistance.cpp",
"chars": 13072,
"preview": "#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\nREGISTER_OP(\"NnDistance\")\n\t.I"
},
{
"path": "OcCo_TF/pc_distance/tf_nndistance.cu",
"chars": 4573,
"preview": "#if GOOGLE_CUDA\n#define EIGEN_USE_GPU\n// #include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n\n__global__ void N"
},
{
"path": "OcCo_TF/pc_distance/tf_nndistance.py",
"chars": 2189,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\"\"\"Scripts for Chamfer Distance\"\"\"\nimport os, tensorflow as tf\nfrom"
},
{
"path": "OcCo_TF/readme.md",
"chars": 25,
"preview": "## OcCo in TensorFlow\n\n\n\n"
},
{
"path": "OcCo_TF/train_cls.py",
"chars": 12885,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, sys, pdb, time, argparse, datetime, importlib, numpy as"
},
{
"path": "OcCo_TF/train_cls_dgcnn_torchloader.py",
"chars": 10219,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n# Ref: https://github.com/hansen7/NRS_3D/blob/master/t"
},
{
"path": "OcCo_TF/train_cls_torchloader.py",
"chars": 14581,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport os, sys, pdb, time, argparse, datetime, importl"
},
{
"path": "OcCo_TF/train_completion.py",
"chars": 10576,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, pdb, time, argparse, datetime, importlib, numpy as np, "
},
{
"path": "OcCo_TF/utils/Dataset_Assign.py",
"chars": 3573,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport h5py\n\ndef Dataset_Assign(dataset, fname, partia"
},
{
"path": "OcCo_TF/utils/EarlyStoppingCriterion.py",
"chars": 2116,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nclass EarlyStoppingCriterion(object):\n \"\"\"\n adapted from htt"
},
{
"path": "OcCo_TF/utils/ModelNetDataLoader.py",
"chars": 6721,
"preview": "import os, torch, h5py, warnings, numpy as np\nfrom torch.utils.data import Dataset\n\nwarnings.filterwarnings('ignore')\n\n\n"
},
{
"path": "OcCo_TF/utils/Train_Logger.py",
"chars": 7005,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, logging, datetime, numpy as np, sklearn.metrics as metr"
},
{
"path": "OcCo_TF/utils/__init__.py",
"chars": 168,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n#\nfrom tf_util import *\nfrom pc_util import *\nfrom io_util import *"
},
{
"path": "OcCo_TF/utils/check_num_point.py",
"chars": 2983,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\n# Author: Hanchen Wang, hw501@cam.ac.uk\n\nimport numpy "
},
{
"path": "OcCo_TF/utils/check_scale.py",
"chars": 3376,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport numpy as np\nimport os, open3d, sys\n\nLOG_F = ope"
},
{
"path": "OcCo_TF/utils/data_util.py",
"chars": 4150,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref:\n\nimport numpy as np, tensorflow as tf\nfrom tensorpack impor"
},
{
"path": "OcCo_TF/utils/io_util.py",
"chars": 1266,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport h5py, numpy as np\nfrom open3d.open3d.geometry i"
},
{
"path": "OcCo_TF/utils/pc_util.py",
"chars": 3715,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport numpy as np\n\n\ndef jitter_point_cloud(batch_data"
},
{
"path": "OcCo_TF/utils/tf_util.py",
"chars": 19791,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport tensorflow as tf\ntry:\n from pc_distance impo"
},
{
"path": "OcCo_TF/utils/transfer_pretrained_w.py",
"chars": 3596,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport os, argparse, tensorflow as tf\nfrom tensorflow."
},
{
"path": "OcCo_TF/utils/transform_nets.py",
"chars": 7267,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n\nimport os, sys, numpy as np, tensorflow as tf\nimport t"
},
{
"path": "OcCo_TF/utils/visu_util.py",
"chars": 1857,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n# Original Author: Wentao Yuan (wyuan1@cs.cmu.edu) 05/"
},
{
"path": "OcCo_Torch/Requirements_Torch.txt",
"chars": 337,
"preview": "# Originally Designed for Docker Environment:\n# PyTorch 1.3.0, Python 3.6, CUDA 10.1\n# install PyTorch first if not use "
},
{
"path": "OcCo_Torch/bash_template/train_cls_template.sh",
"chars": 1128,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# training pointnet on ModelNet40, from scratch\npython train_cls.py \\\n\t--gpu 0 \\\n\t--model p"
},
{
"path": "OcCo_Torch/bash_template/train_completion_template.sh",
"chars": 404,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# train pointnet-occo model on ModelNet, from scratch\npython train_completion.py \\\n\t--gpu 0"
},
{
"path": "OcCo_Torch/bash_template/train_jigsaw_template.sh",
"chars": 444,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# train pointnet_jigsaw on ModelNet40, from scratch\npython train_jigsaw.py \\\n\t--gpu 0 \\\n\t--"
},
{
"path": "OcCo_Torch/bash_template/train_partseg_template.sh",
"chars": 1122,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# training pointnet on ShapeNetPart, from scratch\npython train_partseg.py \\\n\t--gpu 0 \\\n\t--n"
},
{
"path": "OcCo_Torch/bash_template/train_semseg_template.sh",
"chars": 1007,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# train pointnet_semseg on 6-fold cv of S3DIS, from scratch\nfor area in $(seq 1 1 6)\ndo\npyt"
},
{
"path": "OcCo_Torch/bash_template/train_svm_template.sh",
"chars": 786,
"preview": "#!/usr/bin/env bash\n\ncd ../\n\n# fit a linear svm on ModelNet40 encoded by OcCo PointNet\npython train_svm.py \\\n\t--gpu 0 \\\n"
},
{
"path": "OcCo_Torch/chamfer_distance/__init__.py",
"chars": 46,
"preview": "from .chamfer_distance import ChamferDistance\n"
},
{
"path": "OcCo_Torch/chamfer_distance/chamfer_distance.cpp",
"chars": 6260,
"preview": "#include <torch/torch.h>\n\n// CUDA forward declarations\nint ChamferDistanceKernelLauncher(\n const int b, const int n,\n"
},
{
"path": "OcCo_Torch/chamfer_distance/chamfer_distance.cu",
"chars": 5070,
"preview": "#include <ATen/ATen.h>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n__global__ \nvoid ChamferDistanceKernel(\n\tint b,\n\tin"
},
{
"path": "OcCo_Torch/chamfer_distance/chamfer_distance.py",
"chars": 2600,
"preview": "# Ref: https://github.com/chrdiller/pyTorchChamferDistance\nimport os, torch, torch.nn as nn\nfrom torch.utils.cpp_extens"
},
{
"path": "OcCo_Torch/chamfer_distance/readme.md",
"chars": 1372,
"preview": "# Chamfer Distance for PyTorch\n\nThis is an implementation of the Chamfer Distance as a module for PyTorch. It is written"
},
{
"path": "OcCo_Torch/docker/.dockerignore",
"chars": 27,
"preview": "*/data\n*/log\n*/__pycache__\n"
},
{
"path": "OcCo_Torch/docker/Dockerfile_Torch",
"chars": 844,
"preview": "# https://github.com/pytorch/pytorch/issues/31171#issuecomment-565887573\nFROM pytorch/pytorch:1.3-cuda10.1-cudnn7-devel\n"
},
{
"path": "OcCo_Torch/docker/build_docker_torch.sh",
"chars": 70,
"preview": "#!/bin/bash\ndocker build ../ --rm -t occo_torch -f ./Dockerfile_Torch\n"
},
{
"path": "OcCo_Torch/docker/launch_docker_torch.sh",
"chars": 476,
"preview": "#!/bin/bash\n\ndocker run -it \\\n\t--rm \\\n\t--shm-size=1g \\\n\t--runtime=nvidia \\\n\t--ulimit memlock=-1 \\\n\t--ulimit stack=671088"
},
{
"path": "OcCo_Torch/models/dgcnn_cls.py",
"chars": 3828,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/mode"
},
{
"path": "OcCo_Torch/models/dgcnn_jigsaw.py",
"chars": 4324,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model."
},
{
"path": "OcCo_Torch/models/dgcnn_occo.py",
"chars": 6187,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd"
},
{
"path": "OcCo_Torch/models/dgcnn_partseg.py",
"chars": 5233,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model."
},
{
"path": "OcCo_Torch/models/dgcnn_semseg.py",
"chars": 4184,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/model."
},
{
"path": "OcCo_Torch/models/dgcnn_util.py",
"chars": 5133,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/mode"
},
{
"path": "OcCo_Torch/models/pcn_cls.py",
"chars": 1050,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport pdb, torch, torch.nn as nn, torch.nn.functional as F\nfrom p"
},
{
"path": "OcCo_Torch/models/pcn_jigsaw.py",
"chars": 1461,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom pcn_ut"
},
{
"path": "OcCo_Torch/models/pcn_occo.py",
"chars": 4752,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/models/pcn_cd"
},
{
"path": "OcCo_Torch/models/pcn_partseg.py",
"chars": 1516,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom pcn_ut"
},
{
"path": "OcCo_Torch/models/pcn_semseg.py",
"chars": 1441,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom pcn_ut"
},
{
"path": "OcCo_Torch/models/pcn_util.py",
"chars": 2695,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\n\nclass PCNE"
},
{
"path": "OcCo_Torch/models/pointnet_cls.py",
"chars": 1305,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/m"
},
{
"path": "OcCo_Torch/models/pointnet_jigsaw.py",
"chars": 1849,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom pointn"
},
{
"path": "OcCo_Torch/models/pointnet_occo.py",
"chars": 4347,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/krrish94/chamferdist\n# Ref: https://git"
},
{
"path": "OcCo_Torch/models/pointnet_partseg.py",
"chars": 1825,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/m"
},
{
"path": "OcCo_Torch/models/pointnet_semseg.py",
"chars": 1843,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport torch, torch.nn as nn, torch.nn.functional as F\nfrom pointn"
},
{
"path": "OcCo_Torch/models/pointnet_util.py",
"chars": 7576,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/fxia22/pointnet.pytorch/pointnet/model.p"
},
{
"path": "OcCo_Torch/readme.md",
"chars": 20,
"preview": "## OcCo in PyTorch\n\n"
},
{
"path": "OcCo_Torch/train_cls.py",
"chars": 10440,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/main"
},
{
"path": "OcCo_Torch/train_completion.py",
"chars": 12376,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/train.py\n# R"
},
{
"path": "OcCo_Torch/train_jigsaw.py",
"chars": 8072,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# ref: https://github.com/AnTao97/dgcnn.pytorch/blob/master/main_s"
},
{
"path": "OcCo_Torch/train_partseg.py",
"chars": 15573,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/m"
},
{
"path": "OcCo_Torch/train_semseg.py",
"chars": 10958,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/"
},
{
"path": "OcCo_Torch/train_svm.py",
"chars": 5452,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://scikit-learn.org/stable/modules/svm.html\n# Ref: ht"
},
{
"path": "OcCo_Torch/utils/3DPC_Data_Gen.py",
"chars": 3156,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n# Generating Training Data of 3D Point Cloud for 3D Ji"
},
{
"path": "OcCo_Torch/utils/Dataset_Loc.py",
"chars": 2690,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hc.wang96@gmail.com\n# Modify the path w.r.t your own settings\n\n\ndef Datase"
},
{
"path": "OcCo_Torch/utils/Inference_Timer.py",
"chars": 1521,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, torch, time, numpy as np\n\nclass Inference_Timer:\n de"
},
{
"path": "OcCo_Torch/utils/LMDB_DataFlow.py",
"chars": 3505,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/data_"
},
{
"path": "OcCo_Torch/utils/LMDB_Writer.py",
"chars": 2234,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk\n\nimport os, argparse, numpy as np\nfrom tensorpack import Da"
},
{
"path": "OcCo_Torch/utils/ModelNetDataLoader.py",
"chars": 6366,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\nimport os, torch, h5py, warnings, numpy as np\nfrom torch.utils.data"
},
{
"path": "OcCo_Torch/utils/PC_Augmentation.py",
"chars": 2867,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport numpy as np\n\n\"\"\"\n\t========================================="
},
{
"path": "OcCo_Torch/utils/S3DISDataLoader.py",
"chars": 18549,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, sys, h5py, numpy as np\nfrom torch.utils.data import Dat"
},
{
"path": "OcCo_Torch/utils/ShapeNetDataLoader.py",
"chars": 4349,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/yanx27/Pointnet_Pointnet2_pytorc"
},
{
"path": "OcCo_Torch/utils/TSNE_Visu.py",
"chars": 3851,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://scikit-learn.org/stable/modules/generated/sklearn.m"
},
{
"path": "OcCo_Torch/utils/Torch_Utility.py",
"chars": 1782,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/pytorch/pytorch/issues/7068#issuecomment"
},
{
"path": "OcCo_Torch/utils/TrainLogger.py",
"chars": 7405,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, logging, datetime, numpy as np, sklearn.metrics as metr"
},
{
"path": "OcCo_Torch/utils/Visu_Utility.py",
"chars": 1944,
"preview": "# Copyright (c) 2020. Author: Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/visu_"
},
{
"path": "OcCo_Torch/utils/__init__.py",
"chars": 54,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\n"
},
{
"path": "OcCo_Torch/utils/collect_indoor3d_data.py",
"chars": 1109,
"preview": "# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/collect_indoor3d_data.py\n\nimport os, sys, indoor3d_ut"
},
{
"path": "OcCo_Torch/utils/gen_indoor3d_h5.py",
"chars": 4123,
"preview": "# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/gen_indoor3d_h5.py\n\nimport os, sys, h5py, indoor3d_ut"
},
{
"path": "OcCo_Torch/utils/indoor3d_util.py",
"chars": 25568,
"preview": "# Ref: https://github.com/charlesq34/pointnet/blob/master/sem_seg/indoor3d_util.py\nimport os, sys, glob, numpy as np\n\nB"
},
{
"path": "OcCo_Torch/utils/lmdb2hdf5.py",
"chars": 4652,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, h5py, json, argparse, numpy as np\nfrom LMDB_DataFlow im"
},
{
"path": "readme.md",
"chars": 14259,
"preview": "## OcCo: Unsupervised Point Cloud Pre-training via Occlusion Completion\nThis repository is the official implementation o"
},
{
"path": "render/Depth_Renderer.py",
"chars": 5553,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/render/render"
},
{
"path": "render/EXR_Process.py",
"chars": 3942,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n# Ref: https://github.com/wentaoyuan/pcn/blob/master/render/proces"
},
{
"path": "render/ModelNet_Flist.txt",
"chars": 438328,
"preview": "mantel/train/mantel_0029_normalised\nmantel/train/mantel_0037_normalised\nmantel/train/mantel_0210_normalised\nmantel/train"
},
{
"path": "render/PC_Normalisation.py",
"chars": 990,
"preview": "# Copyright (c) 2020. Hanchen Wang, hw501@cam.ac.uk\n\nimport os, open3d, numpy as np\n\nFile_ = open('ModelNet_flist_short"
},
{
"path": "render/readme.md",
"chars": 1810,
"preview": "This directory contains code that generates partial point clouds objects. \n\nTo start with:\n\n1. Download and Install [Ble"
},
{
"path": "sample/CMakeLists.txt",
"chars": 386,
"preview": "cmake_minimum_required(VERSION 3.0)\n\nproject(mesh_sampling)\n\nfind_package(PCL 1.7 REQUIRED)\ninclude_directories(${PCL_IN"
},
{
"path": "sample/mesh_sampling.cpp",
"chars": 10268,
"preview": "/*\n * Software License Agreement (BSD License)\n *\n * Point Cloud Library (PCL) - www.pointclouds.org\n * Copyright (c) "
},
{
"path": "sample/readme.md",
"chars": 467,
"preview": "[Optional] This directory contains code for a command line tool that uniformly samples a point cloud on a mesh. It is a "
}
]
About this extraction
This page contains the full source code of the hansen7/OcCo GitHub repository, extracted and formatted as plain text: 112 files (915.5 KB, approximately 273.2k tokens), plus a symbol index of 478 extracted functions, classes, methods, constants, and types.