[
  {
    "path": ".gitignore",
    "content": "*__pycache__/\ndata/modelnet40_ply_hdf5_2048\ndata/ModelNet40\ndata/modelnet40_c\nruns/\npretrained/\ncor_exp/\n*.out\n/output\n\n\n# Created by https://www.toptal.com/developers/gitignore/api/python,cuda,zsh,c++\n# Edit at https://www.toptal.com/developers/gitignore?templates=python,cuda,zsh,c++\n\n### C++ ###\n# Prerequisites\n*.d\n\n# Compiled Object files\n*.slo\n*.lo\n*.o\n*.obj\n\n# Precompiled Headers\n*.gch\n*.pch\n\n# Compiled Dynamic libraries\n*.so\n*.dylib\n*.dll\n\n# Fortran module files\n*.mod\n*.smod\n\n# Compiled Static libraries\n*.lai\n*.la\n*.a\n*.lib\n\n# Executables\n*.exe\n*.out\n*.app\n\n### CUDA ###\n*.i\n*.ii\n*.gpu\n*.ptx\n*.cubin\n*.fatbin\n\n### Python ###\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# 
pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n### Zsh ###\n# Zsh compiled script + zrecompile backup\n*.zwc\n*.zwc.old\n\n# Zsh completion-optimization dumpfile\n*zcompdump*\n\n# Zsh zcalc history\n.zcalc_history\n\n# A popular plugin manager's files\n._zinit\n.zinit_lstupd\n\n# zdharma/zshelldoc tool's files\nzsdoc/data\n\n# robbyrussell/oh-my-zsh/plugins/per-directory-history plugin's files\n# (when set-up to store the history in the local directory)\n.directory_history\n\n# MichaelAquilina/zsh-autoswitch-virtualenv plugin's files\n# (for Zsh plugins using Python)\n\n# Zunit tests' output\n/tests/_output/*\n!/tests/_output/.gitkeep\n\n# End of https://www.toptal.com/developers/gitignore/api/python,cuda,zsh,c++\n"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"PyGeM\"]\n\tpath = PyGeM\n\turl = https://github.com/mathLab/PyGeM.git\n[submodule \"visualize/mitsuba2\"]\n\tpath = visualize/mitsuba2\n\turl = https://github.com/mitsuba-renderer/mitsuba2\n"
  },
  {
    "path": "CurveNet/README.md",
    "content": "# CurveNet\nOfficial implementation of \"Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis\", ICCV 2021\n\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/walk-in-the-cloud-learning-curves-for-point/3d-point-cloud-classification-on-modelnet40)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40?p=walk-in-the-cloud-learning-curves-for-point)  \n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/walk-in-the-cloud-learning-curves-for-point/3d-part-segmentation-on-shapenet-part)](https://paperswithcode.com/sota/3d-part-segmentation-on-shapenet-part?p=walk-in-the-cloud-learning-curves-for-point)\n\nPaper: https://arxiv.org/abs/2105.01288\n\n![CurveNet](./poster3.png)\n\n## Requirements\n- Python>=3.7\n- PyTorch>=1.2\n- Packages: glob, h5py, sklearn\n\n## Contents\n- [Point Cloud Classification](#point-cloud-classification)\n- [Point Cloud Part Segmentation](#point-cloud-part-segmentation)\n- [Point Cloud Normal Estimation](#point-cloud-normal-estimation)\n\n**NOTE:** Please change your current directory to ```core/``` before executing the following commands.\n\n## Point Cloud Classification\n### Data\n\nThe ModelNet40 dataset is primarily used for the classification experiments. On your first run, the program will automatically download the data if it is not in ```data/```. Or, you can manually download the [official data](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) and unzip it into ```data/```. \n\nAlternatively, you can place your downloaded data anywhere you like, and link the path to ```DATA_DIR``` in ```core/data.py```. 
Otherwise, the download will still be automatically triggered.\n\n### Train\n\nTrain with our default settings (same as in the paper):\n\n``` \npython3 main_cls.py --exp_name=curvenet_cls_1\n```\n\nTrain with customized settings via the flags: ```--lr```, ```--scheduler```, ```--batch_size```.\n\nAlternatively, you can directly modify ```core/start_cls.sh``` and simply run:\n\n```\n./start_cls.sh\n```\n\n**NOTE:** Our reported model achieves **93.8%/94.2%** accuracy (see sections below). However, due to randomness, reproducing the best result may require repeated training runs. Hence, we also provide a benchmark result averaged over 5 runs with different random seeds: **93.65%** accuracy.\n\n<!-- **NOTE:** Due to randomness, the results could be slightly different than the one reported in our paper. We repeated 5 runs with different random seeds, and got an average of **93.65%** classification accuracy. -->\n\n### Evaluation\n\nEvaluate without voting:\n``` \npython3 main_cls.py --exp_name=curvenet_cls_1 --eval=True --model_path=PATH_TO_YOUR_MODEL\n```\n\nAlternatively, you can directly modify ```core/test_cls.sh``` and simply run:\n``` \n./test_cls.sh\n```\n\nFor voting, we used the ```voting_evaluate_cls.py``` script provided in [RSCNN](https://github.com/Yochengliu/Relation-Shape-CNN). Please refer to their license for usage.\n\n### Evaluation with our pretrained model:\n\nPlease download our pretrained model ```cls/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).\n\nAnd then run:\n\n``` \npython3 main_cls.py --exp_name=curvenet_cls_pretrained --eval=True --model_path=PATH_TO_PRETRAINED/cls/models/model.t7\n```\n\n&nbsp;\n## Point Cloud Part Segmentation\n### Data\n\nThe ShapeNet Part dataset is primarily used for the part segmentation experiments. On your first run, the program will automatically download the data if it is not in ```data/```. 
Or, you can manually download the [official data](https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip) and unzip it into ```data/```. \n\nAlternatively, you can place your downloaded data anywhere you like, and link the path to ```DATA_DIR``` in ```core/data.py```. Otherwise, the download will still be automatically triggered.\n\n### Train\n\nTrain with our default settings (same as in the paper):\n\n``` \npython3 main_partseg.py --exp_name=curvenet_seg_1\n```\n\nTrain with customized settings via the flags: ```--lr```, ```--scheduler```, ```--batch_size```.\n\nAlternatively, you can directly modify ```core/start_part.sh``` and simply run:\n\n```\n./start_part.sh\n```\n\n**NOTE:** Our reported model achieves **86.6%/86.8%** mIoU (see sections below). However, due to randomness, reproducing the best result may require repeated training runs. Hence, we also provide a benchmark result averaged over 5 runs with different random seeds: **86.46** mIoU.\n\n<!-- **NOTE:** Due to randomness, the results could be slightly different than the one reported in our paper. We repeated 5 runs with different random seeds, and got an average of **86.46** mIoU. -->\n\n### Evaluation\n\nEvaluate without voting:\n``` \npython3 main_partseg.py --exp_name=curvenet_seg_1 --eval=True --model_path=PATH_TO_YOUR_MODEL\n```\n\nAlternatively, you can directly modify ```core/test_part.sh``` and simply run:\n``` \n./test_part.sh\n```\n\nFor voting, we used the ```voting_evaluate_partseg.py``` script provided in [RSCNN](https://github.com/Yochengliu/Relation-Shape-CNN). 
Please refer to their license for usage.\n\n### Evaluation with our pretrained model:\n\nPlease download our pretrained model ```partseg/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).\n\nAnd then run:\n\n``` \npython3 main_partseg.py --exp_name=curvenet_seg_pretrained --eval=True --model_path=PATH_TO_PRETRAINED/partseg/models/model.t7\n```\n\n&nbsp;\n## Point Cloud Normal Estimation\n\n### Data\n\nThe ModelNet40 dataset is used for the normal estimation experiments. We have preprocessed the raw ModelNet40 dataset into ```.h5``` files. Each point cloud instance contains 2048 randomly sampled points and point-to-point normal ground truths. \n\nPlease download our processed data [here](https://drive.google.com/file/d/1j6lB3ZOF0_x_l9bqdchAxIYBi7Devie8/view?usp=sharing) and place it in ```data/```, or specify the data root path in ```core/data.py```.\n\n### Train\n\nTrain with our default settings (same as in the paper):\n\n``` \npython3 main_normal.py --exp_name=curvenet_normal_1\n```\n\nTrain with customized settings via the flags: ```--multiplier```, ```--lr```, ```--scheduler```, ```--batch_size```.\n\nAlternatively, you can directly modify ```core/start_normal.sh``` and simply run:\n\n```\n./start_normal.sh\n```\n\n### Evaluation\n\nEvaluate without voting:\n``` \npython3 main_normal.py --exp_name=curvenet_normal_1 --eval=True --model_path=PATH_TO_YOUR_MODEL\n```\n\nAlternatively, you can directly modify ```core/test_normal.sh``` and simply run:\n``` \n./test_normal.sh\n```\n\n### Evaluation with our pretrained model:\n\nPlease download our pretrained model ```normal/``` at [google drive](https://drive.google.com/drive/folders/1kX-zIipyzB0iMaopcijzdTRuHeTzfTSz?usp=sharing).\n\nAnd then run:\n\n``` \npython3 main_normal.py --exp_name=curvenet_normal_pretrained --eval=True --model_path=PATH_TO_PRETRAINED/normal/models/model.t7\n```\n\n## Citation  \n\nIf you find this repo useful in your work 
or research, please cite:  \n\n```\n@InProceedings{Xiang_2021_ICCV,\n    author    = {Xiang, Tiange and Zhang, Chaoyi and Song, Yang and Yu, Jianhui and Cai, Weidong},\n    title     = {Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis},\n    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},\n    month     = {October},\n    year      = {2021},\n    pages     = {915-924}\n}\n```\n\n## Acknowledgement\n\nOur code borrows a lot from:\n- [DGCNN](https://github.com/WangYueFt/dgcnn)\n- [DGCNN.pytorch](https://github.com/AnTao97/dgcnn.pytorch)\n- [CloserLook3D](https://github.com/zeliu98/CloserLook3D)\n"
  },
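The `--lr` and `--scheduler` flags above map onto `CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)` in `core/main_cls.py`. As a rough sketch (not part of the repo), the closed-form rule that `--scheduler=cos` follows is shown below; `base_lr=0.1` assumes the SGD branch (`args.lr * 100` with the default `--lr 0.001`):

```python
import math

def cosine_annealed_lr(epoch, total_epochs, base_lr=0.1, eta_min=1e-3):
    """Closed-form cosine annealing: decays from base_lr at epoch 0
    down to eta_min at epoch total_epochs.

    base_lr=0.1 mirrors main_cls.py's SGD setup (args.lr * 100 with the
    default --lr 0.001); eta_min=1e-3 matches its CosineAnnealingLR call.
    """
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / total_epochs)) / 2

# Learning rate decays smoothly from 0.1 down to 0.001 over 200 epochs
# (the default --epochs), one value per epoch.
schedule = [cosine_annealed_lr(e, 200) for e in range(201)]
```

This is the same formula PyTorch's `CosineAnnealingLR` documents, evaluated per epoch, so it can be used to sanity-check logged learning rates.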
  {
    "path": "CurveNet/core/data.py",
    "content": "\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: data.py\n@Time: 2018/10/13 6:21 PM\n\nModified by \n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@Time: 2021/1/21 3:10 PM\n\"\"\"\n\n\nimport os\nimport sys\nimport glob\nimport h5py\nimport numpy as np\nimport torch\nfrom torch.utils.data import Dataset\n\n\n# change this to your data root\nDATA_DIR = '../data/'\n\ndef download_modelnet40():\n    if not os.path.exists(DATA_DIR):\n        os.mkdir(DATA_DIR)\n    if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n        os.mkdir(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048'))\n        www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n        zipfile = os.path.basename(www)\n        os.system('wget %s --no-check-certificate; unzip %s' % (www, zipfile))\n        os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n        os.system('rm %s' % (zipfile))\n\n\ndef download_shapenetpart():\n    if not os.path.exists(DATA_DIR):\n        os.mkdir(DATA_DIR)\n    if not os.path.exists(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data')):\n        os.mkdir(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data'))\n        www = 'https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip'\n        zipfile = os.path.basename(www)\n        os.system('wget %s --no-check-certificate; unzip %s' % (www, zipfile))\n        os.system('mv %s %s' % (zipfile[:-4], os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data')))\n        os.system('rm %s' % (zipfile))\n\n\ndef load_data_normal(partition):\n    # open read-only: the file is never modified here\n    f = h5py.File(os.path.join(DATA_DIR, 'modelnet40_normal', 'normal_%s.h5'%partition), 'r')\n    data = f['xyz'][:].astype('float32')\n    label = f['normal'][:].astype('float32')\n    f.close()\n    return data, label\n\n\ndef load_data_cls(partition):\n    download_modelnet40()\n    all_data = []\n    all_label = []\n    for h5_name in glob.glob(os.path.join(DATA_DIR, 
'modelnet40*hdf5_2048', '*%s*.h5'%partition)):\n        f = h5py.File(h5_name, 'r')\n        data = f['data'][:].astype('float32')\n        label = f['label'][:].astype('int64')\n        f.close()\n        all_data.append(data)\n        all_label.append(label)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    return all_data, all_label\n\n\ndef load_data_partseg(partition):\n    download_shapenetpart()\n    all_data = []\n    all_label = []\n    all_seg = []\n    if partition == 'trainval':\n        file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*train*.h5')) \\\n               + glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*val*.h5'))\n    else:\n        file = glob.glob(os.path.join(DATA_DIR, 'shapenet_part_seg_hdf5_data', 'hdf5_data', '*%s*.h5'%partition))\n    for h5_name in file:\n        f = h5py.File(h5_name, 'r')\n        data = f['data'][:].astype('float32')\n        label = f['label'][:].astype('int64')\n        seg = f['pid'][:].astype('int64')\n        f.close()\n        all_data.append(data)\n        all_label.append(label)\n        all_seg.append(seg)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    all_seg = np.concatenate(all_seg, axis=0)\n    return all_data, all_label, all_seg\n\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n\n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\n\ndef jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\ndef rotate_pointcloud(pointcloud):\n    theta = np.pi*2 * np.random.uniform()\n    rotation_matrix = 
np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])\n    pointcloud[:,[0,2]] = pointcloud[:,[0,2]].dot(rotation_matrix) # random rotation (x,z)\n    return pointcloud\n\n\nclass ModelNet40(Dataset):\n    def __init__(self, num_points, partition='train'):\n        self.data, self.label = load_data_cls(partition)\n        self.num_points = num_points\n        self.partition = partition        \n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.partition == 'train':\n            pointcloud = translate_pointcloud(pointcloud)\n            #pointcloud = rotate_pointcloud(pointcloud)\n            np.random.shuffle(pointcloud)\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\nclass ModelNetNormal(Dataset):\n    def __init__(self, num_points, partition='train'):\n        self.data, self.label = load_data_normal(partition)\n        self.num_points = num_points\n        self.partition = partition\n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item][:self.num_points]\n        if self.partition == 'train':\n            #pointcloud = translate_pointcloud(pointcloud)\n            idx = np.arange(0, pointcloud.shape[0], dtype=np.int64)\n            np.random.shuffle(idx)\n            pointcloud = self.data[item][idx]\n            label = self.label[item][idx]\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\nclass ShapeNetPart(Dataset):\n    def __init__(self, num_points=2048, partition='train', class_choice=None):\n        self.data, self.label, self.seg = load_data_partseg(partition)\n        self.cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4, \n                       'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9, \n                       'motor': 10, 'mug': 11, 
'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}\n        self.seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]\n        self.index_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]\n        self.num_points = num_points\n        self.partition = partition        \n        self.class_choice = class_choice\n\n        if self.class_choice != None:\n            id_choice = self.cat2id[self.class_choice]\n            indices = (self.label == id_choice).squeeze()\n            self.data = self.data[indices]\n            self.label = self.label[indices]\n            self.seg = self.seg[indices]\n            self.seg_num_all = self.seg_num[id_choice]\n            self.seg_start_index = self.index_start[id_choice]\n        else:\n            self.seg_num_all = 50\n            self.seg_start_index = 0\n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        seg = self.seg[item][:self.num_points]\n        if self.partition == 'trainval':\n            pointcloud = translate_pointcloud(pointcloud)\n            indices = list(range(pointcloud.shape[0]))\n            np.random.shuffle(indices)\n            pointcloud = pointcloud[indices]\n            seg = seg[indices]\n        return pointcloud, label, seg\n\n    def __len__(self):\n        return self.data.shape[0]\n"
  },
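The training-time augmentation in `core/data.py` above rescales each axis by a random factor in [2/3, 3/2] and shifts it by a random offset in [-0.2, 0.2]. A self-contained sketch of that transform (numpy assumed available), handy for sanity-checking the output bounds on a toy cloud:

```python
import numpy as np

def translate_pointcloud(pointcloud):
    """Anisotropic scale-and-shift, as in CurveNet's core/data.py:
    each axis is scaled by a factor in [2/3, 3/2] and shifted by an
    offset in [-0.2, 0.2], then cast back to float32."""
    xyz1 = np.random.uniform(low=2.0 / 3.0, high=3.0 / 2.0, size=[3])  # per-axis scale
    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])             # per-axis shift
    return (pointcloud * xyz1 + xyz2).astype('float32')

# Toy cloud with every point at (1, 1, 1): after augmentation, each
# coordinate must land in [2/3 - 0.2, 3/2 + 0.2].
cloud = np.ones((1024, 3), dtype='float32')
aug = translate_pointcloud(cloud)
```

Because the scale is drawn per axis rather than once per cloud, the augmentation is anisotropic: it stretches shapes differently along x, y, and z.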
  {
    "path": "CurveNet/core/main_cls.py",
    "content": "\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: main_cls.py\n@Time: 2018/10/13 10:39 PM\n\nModified by \n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nfrom __future__ import print_function\nimport os\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR\nfrom data import ModelNet40\nfrom models.curvenet_cls import CurveNet\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom util import cal_loss, IOStream\nimport sklearn.metrics as metrics\n\n\ndef _init_():\n    # fix random seed\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    torch.cuda.manual_seed(seed)\n    torch.set_printoptions(10)\n    torch.backends.cudnn.benchmark = False\n    torch.backends.cudnn.deterministic = True\n    os.environ['PYTHONHASHSEED'] = str(seed)\n\n    # prepare file structures\n    if not os.path.exists('../checkpoints'):\n        os.makedirs('../checkpoints')\n    if not os.path.exists('../checkpoints/'+args.exp_name):\n        os.makedirs('../checkpoints/'+args.exp_name)\n    if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):\n        os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')\n    os.system('cp main_cls.py ../checkpoints/'+args.exp_name+'/main_cls.py.backup')\n    os.system('cp models/curvenet_cls.py ../checkpoints/'+args.exp_name+'/curvenet_cls.py.backup')\n\ndef train(args, io):\n    train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=8,\n                              batch_size=args.batch_size, shuffle=True, drop_last=True)\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,\n                             batch_size=args.test_batch_size, shuffle=False, drop_last=False)\n\n    device = 
torch.device(\"cuda\" if args.cuda else \"cpu\")\n    io.cprint(\"Let's use \" + str(torch.cuda.device_count()) + \" GPUs!\")\n    \n    # create model\n    model = CurveNet().to(device)\n    model = nn.DataParallel(model)\n\n    if args.use_sgd:\n        io.cprint(\"Use SGD\")\n        opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)\n    else:\n        io.cprint(\"Use Adam\")\n        opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)\n\n    if args.scheduler == 'cos':\n        scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)\n    elif args.scheduler == 'step':\n        scheduler = MultiStepLR(opt, [120, 160], gamma=0.1)\n    \n    criterion = cal_loss\n\n    best_test_acc = 0\n    for epoch in range(args.epochs):\n        ####################\n        # Train\n        ####################\n        train_loss = 0.0\n        count = 0.0\n        model.train()\n        train_pred = []\n        train_true = []\n        for data, label in train_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            opt.zero_grad()\n            logits = model(data)\n            loss = criterion(logits, label)\n            loss.backward()\n            torch.nn.utils.clip_grad_norm_(model.parameters(), 1)\n            opt.step()\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            train_loss += loss.item() * batch_size\n            train_true.append(label.cpu().numpy())\n            train_pred.append(preds.detach().cpu().numpy())\n        if args.scheduler == 'cos':\n            scheduler.step()\n        elif args.scheduler == 'step':\n            if opt.param_groups[0]['lr'] > 1e-5:\n                scheduler.step()\n            if opt.param_groups[0]['lr'] < 1e-5:\n                for param_group in opt.param_groups:\n                    param_group['lr'] = 
1e-5\n\n        train_true = np.concatenate(train_true)\n        train_pred = np.concatenate(train_pred)\n        outstr = 'Train %d, loss: %.6f, train acc: %.6f' % (epoch, train_loss*1.0/count,\n                                                                metrics.accuracy_score(\n                                                                    train_true, train_pred))\n        io.cprint(outstr)\n\n        ####################\n        # Test\n        ####################\n        test_loss = 0.0\n        count = 0.0\n        model.eval()\n        test_pred = []\n        test_true = []\n        for data, label in test_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            logits = model(data)\n            loss = criterion(logits, label)\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            test_loss += loss.item() * batch_size\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n        test_true = np.concatenate(test_true)\n        test_pred = np.concatenate(test_pred)\n        test_acc = metrics.accuracy_score(test_true, test_pred)\n        outstr = 'Test %d, loss: %.6f, test acc: %.6f' % (epoch, test_loss*1.0/count, test_acc)\n        io.cprint(outstr)\n        if test_acc >= best_test_acc:\n            best_test_acc = test_acc\n            torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)\n        io.cprint('best: %.3f' % best_test_acc)\n\ndef test(args, io):\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),\n                             batch_size=args.test_batch_size, shuffle=False, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    #Try to load models\n    model = CurveNet().to(device)\n    model = nn.DataParallel(model)\n    
model.load_state_dict(torch.load(args.model_path))\n\n    model = model.eval()\n    test_acc = 0.0\n    count = 0.0\n    test_true = []\n    test_pred = []\n    for data, label in test_loader:\n\n        data, label = data.to(device), label.to(device).squeeze()\n        data = data.permute(0, 2, 1)\n        batch_size = data.size()[0]\n        logits = model(data)\n        preds = logits.max(dim=1)[1]\n        test_true.append(label.cpu().numpy())\n        test_pred.append(preds.detach().cpu().numpy())\n    test_true = np.concatenate(test_true)\n    test_pred = np.concatenate(test_pred)\n    test_acc = metrics.accuracy_score(test_true, test_pred)\n    outstr = 'Test :: test acc: %.6f'%(test_acc)\n    io.cprint(outstr)\n\n\nif __name__ == \"__main__\":\n    # Training settings\n    parser = argparse.ArgumentParser(description='Point Cloud Recognition')\n    parser.add_argument('--exp_name', type=str, default='exp', metavar='N',\n                        help='Name of the experiment')\n    parser.add_argument('--dataset', type=str, default='modelnet40', metavar='N',\n                        choices=['modelnet40'])\n    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--epochs', type=int, default=200, metavar='N',\n                        help='number of epochs to train')\n    parser.add_argument('--use_sgd', type=bool, default=True,\n                        help='Use SGD')\n    parser.add_argument('--lr', type=float, default=0.001, metavar='LR',\n                        help='learning rate (default: 0.001, 0.1 if using sgd)')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--scheduler', type=str, default='cos', 
metavar='N',\n                        choices=['cos', 'step'],\n                        help='Scheduler to use, [cos, step]')\n    parser.add_argument('--no_cuda', type=bool, default=False,\n                        help='enables CUDA training')\n    parser.add_argument('--eval', type=bool,  default=False,\n                        help='evaluate the model')\n    parser.add_argument('--num_points', type=int, default=1024,\n                        help='num of points to use')\n    parser.add_argument('--model_path', type=str, default='', metavar='N',\n                        help='Pretrained model path')\n    args = parser.parse_args()\n\n    seed = np.random.randint(1, 10000)\n\n    _init_()\n\n    if args.eval:\n        io = IOStream('../checkpoints/' + args.exp_name + '/eval.log')\n    else:\n        io = IOStream('../checkpoints/' + args.exp_name + '/run.log')\n    io.cprint(str(args))\n    io.cprint('random seed is: ' + str(seed))\n    \n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n    \n    if args.cuda:\n        io.cprint(\n            'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')\n    else:\n        io.cprint('Using CPU')\n\n    if not args.eval:\n        train(args, io)\n    else:\n        with torch.no_grad():\n            test(args, io)\n"
  },
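A gotcha in `main_cls.py`'s CLI above: flags such as `--use_sgd` and `--eval` are declared with `type=bool`, and since `bool('False')` is `True` in Python, any non-empty value (even `False`) enables them. This is why the README passes `--eval=True` and simply omits the flag otherwise. A minimal sketch of a `str2bool` converter (a hypothetical helper, not in this repo) that parses the text instead:

```python
import argparse

def str2bool(value):
    """Parse common true/false spellings. argparse's type=bool would call
    bool() on the raw string, treating any non-empty value as True."""
    if isinstance(value, bool):
        return value
    if value.lower() in ('true', 't', 'yes', '1'):
        return True
    if value.lower() in ('false', 'f', 'no', '0'):
        return False
    raise argparse.ArgumentTypeError('boolean value expected, got %r' % value)

parser = argparse.ArgumentParser()
parser.add_argument('--eval', type=str2bool, default=False)

# With type=bool this would come back True; str2bool reads the text.
args = parser.parse_args(['--eval=False'])
```

Swapping the converter in would keep the README's `--eval=True` commands working while making `--eval=False` behave as written.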
  {
    "path": "CurveNet/core/main_normal.py",
    "content": "\"\"\"\n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@File: main_normal.py\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\n\nfrom __future__ import print_function\nimport os\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR\nfrom data import ModelNetNormal\nfrom models.curvenet_normal import CurveNet\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom util import IOStream\n\n\ndef _init_():\n    # fix random seed\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    torch.cuda.manual_seed(seed)\n    torch.set_printoptions(10)\n    torch.backends.cudnn.benchmark = False\n    torch.backends.cudnn.deterministic = True\n    os.environ['PYTHONHASHSEED'] = str(seed)\n\n    # prepare file structures\n    if not os.path.exists('../checkpoints'):\n        os.makedirs('../checkpoints')\n    if not os.path.exists('../checkpoints/'+args.exp_name):\n        os.makedirs('../checkpoints/'+args.exp_name)\n    if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):\n        os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')\n    os.system('cp main_normal.py ../checkpoints/'+args.exp_name+'/main_normal.py.backup')\n    os.system('cp models/curvenet_normal.py ../checkpoints/'+args.exp_name+'/curvenet_normal.py.backup')\n\ndef train(args, io):\n    train_loader = DataLoader(ModelNetNormal(args.num_points, partition='train'), \n                              num_workers=8, batch_size=args.batch_size, shuffle=True, drop_last=True)\n    test_loader = DataLoader(ModelNetNormal(args.num_points, partition='test'), \n                             num_workers=8, batch_size=args.test_batch_size, shuffle=False, drop_last=False)\n    \n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    # create model\n    model = 
CurveNet(args.multiplier).to(device)\n    model = nn.DataParallel(model)\n    io.cprint(\"Let's use \" + str(torch.cuda.device_count()) + \" GPUs!\")\n\n    if args.use_sgd:\n        io.cprint(\"Use SGD\")\n        opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)\n    else:\n        io.cprint(\"Use Adam\")\n        opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)\n\n    if args.scheduler == 'cos':\n        scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)\n    elif args.scheduler == 'step':\n        scheduler = MultiStepLR(opt, [140, 180], gamma=0.1)\n\n    criterion = torch.nn.CosineEmbeddingLoss()\n\n    best_test_loss = 99\n    for epoch in range(args.epochs):\n        ####################\n        # Train\n        ####################\n        train_loss = 0.0\n        count = 0.0\n        model.train()\n        for data, seg in train_loader:\n            data, seg = data.to(device), seg.to(device)\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            opt.zero_grad()\n            seg_pred = model(data)\n            seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n            #print(seg_pred.shape, seg.shape)\n            loss = criterion(seg_pred.view(-1, 3), seg.view(-1,3).squeeze(), torch.tensor(1).cuda())\n            loss.backward()\n            torch.nn.utils.clip_grad_norm_(model.parameters(), 1)\n            opt.step()\n            count += batch_size\n            train_loss += loss.item() * batch_size\n\n        if args.scheduler == 'cos':\n            scheduler.step()\n        elif args.scheduler == 'step':\n            if opt.param_groups[0]['lr'] > 1e-5:\n                scheduler.step()\n            if opt.param_groups[0]['lr'] < 1e-5:\n                for param_group in opt.param_groups:\n                    param_group['lr'] = 1e-5\n\n        outstr = 'Train %d, loss: %.6f' % (epoch, train_loss/count)\n        
io.cprint(outstr)\n\n        ####################\n        # Test\n        ####################\n        test_loss = 0.0\n        count = 0.0\n        model.eval()\n        for data, seg in test_loader:\n            data, seg = data.to(device), seg.to(device)\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            seg_pred = model(data)\n            seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n            \n            loss = criterion(seg_pred.view(-1, 3), seg.view(-1,3).squeeze(), torch.tensor(1, device=device))\n            count += batch_size\n            test_loss += loss.item() * batch_size\n        \n        if test_loss*1.0/count <= best_test_loss:\n            best_test_loss = test_loss*1.0/count\n            torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)\n        outstr = 'Test %d, loss: %.6f, best loss %.6f' % (epoch, test_loss/count, best_test_loss)\n        io.cprint(outstr)\n\ndef test(args, io):\n    test_loader = DataLoader(ModelNetNormal(args.num_points, partition='test'),\n                             batch_size=args.test_batch_size, shuffle=False, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    # Try to load models\n    model = CurveNet(args.multiplier).to(device)\n    model = nn.DataParallel(model)\n    model.load_state_dict(torch.load(args.model_path))\n\n    criterion = torch.nn.CosineEmbeddingLoss()\n    \n    model = model.eval()\n    test_loss = 0.0\n    count = 0\n    for data, seg in test_loader:\n        data, seg = data.to(device), seg.to(device)\n        #print(data.shape, seg.shape)\n        data = data.permute(0, 2, 1)\n        batch_size = data.size()[0]\n        seg_pred = model(data)\n        seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n        loss = criterion(seg_pred.view(-1, 3), seg.view(-1,3).squeeze(), torch.tensor(1, device=device))\n        count += batch_size\n        test_loss += loss.item() * batch_size\n    
outstr = 'Test :: test loss: %.6f' % (test_loss*1.0/count)\n    io.cprint(outstr)\n\n\nif __name__ == \"__main__\":\n    # Training settings\n    parser = argparse.ArgumentParser(description='Point Cloud Normal Estimation')\n    parser.add_argument('--exp_name', type=str, default='exp', metavar='N',\n                        help='Name of the experiment')\n    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--epochs', type=int, default=200, metavar='N',\n                        help='number of epochs to train')\n    parser.add_argument('--use_sgd', type=bool, default=True,\n                        help='Use SGD')\n    parser.add_argument('--lr', type=float, default=0.0005, metavar='LR',\n                        help='learning rate')\n    parser.add_argument('--multiplier', type=float, default=2.0, metavar='MP',\n                        help='network expansion multiplier')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--scheduler', type=str, default='cos', metavar='N',\n                        choices=['cos', 'step'],\n                        help='Scheduler to use, [cos, step]')\n    parser.add_argument('--no_cuda', type=bool, default=False,\n                        help='disables CUDA training')\n    parser.add_argument('--eval', type=bool, default=False,\n                        help='evaluate the model')\n    parser.add_argument('--num_points', type=int, default=1024,\n                        help='num of points to use')\n    parser.add_argument('--model_path', type=str, default='', metavar='N',\n                        help='Pretrained model path')\n    args = parser.parse_args()\n\n    seed = 
np.random.randint(1, 10000)\n\n    _init_()\n\n    io = IOStream('../checkpoints/' + args.exp_name + '/run.log')\n    io.cprint(str(args))\n    io.cprint('random seed is: ' + str(seed))\n\n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n    if args.cuda:\n        io.cprint(\n            'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')\n    else:\n        io.cprint('Using CPU')\n\n    if not args.eval:\n        train(args, io)\n    else:\n        with torch.no_grad():\n            test(args, io)\n"
  },
  {
    "path": "CurveNet/core/main_partseg.py",
    "content": "\"\"\"\n@Author: An Tao\n@Contact: ta19@mails.tsinghua.edu.cn\n@File: main_partseg.py\n@Time: 2019/12/31 11:17 AM\n\nModified by \n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\n\nfrom __future__ import print_function\nimport os\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.optim.lr_scheduler import CosineAnnealingLR, StepLR, MultiStepLR\nfrom data import ShapeNetPart\nfrom models.curvenet_seg import CurveNet\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom util import cal_loss, IOStream\nimport sklearn.metrics as metrics\n\nseg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]\nindex_start = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]\n\ndef _init_():\n    # fix random seed\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    torch.cuda.manual_seed(seed)\n    torch.set_printoptions(10)\n    torch.backends.cudnn.benchmark = False\n    torch.backends.cudnn.deterministic = True\n    os.environ['PYTHONHASHSEED'] = str(seed)\n\n    # prepare file structures\n    if not os.path.exists('../checkpoints'):\n        os.makedirs('../checkpoints')\n    if not os.path.exists('../checkpoints/'+args.exp_name):\n        os.makedirs('../checkpoints/'+args.exp_name)\n    if not os.path.exists('../checkpoints/'+args.exp_name+'/'+'models'):\n        os.makedirs('../checkpoints/'+args.exp_name+'/'+'models')\n    os.system('cp main_partseg.py ../checkpoints/'+args.exp_name+'/main_partseg.py.backup')\n    os.system('cp models/curvenet_seg.py ../checkpoints/'+args.exp_name+'/curvenet_seg.py.backup')\n\ndef calculate_shape_IoU(pred_np, seg_np, label, class_choice, eva=False):\n    label = label.squeeze()\n    shape_ious = []\n    category = {}\n    for shape_idx in range(seg_np.shape[0]):\n        if not class_choice:\n            start_index = 
index_start[label[shape_idx]]\n            num = seg_num[label[shape_idx]]\n            parts = range(start_index, start_index + num)\n        else:\n            parts = range(seg_num[label[0]])\n        part_ious = []\n        for part in parts:\n            I = np.sum(np.logical_and(pred_np[shape_idx] == part, seg_np[shape_idx] == part))\n            U = np.sum(np.logical_or(pred_np[shape_idx] == part, seg_np[shape_idx] == part))\n            if U == 0:\n                iou = 1  # If the union of groundtruth and prediction points is empty, then count part IoU as 1\n            else:\n                iou = I / float(U)\n            part_ious.append(iou)\n        shape_ious.append(np.mean(part_ious))\n        if label[shape_idx] not in category:\n            category[label[shape_idx]] = [shape_ious[-1]]\n        else:\n            category[label[shape_idx]].append(shape_ious[-1])\n\n    if eva:\n        return shape_ious, category\n    else:\n        return shape_ious\n\ndef train(args, io):\n    train_dataset = ShapeNetPart(partition='trainval', num_points=args.num_points, class_choice=args.class_choice)\n    if (len(train_dataset) < 100):\n        drop_last = False\n    else:\n        drop_last = True\n    train_loader = DataLoader(train_dataset, num_workers=8, batch_size=args.batch_size, shuffle=True, drop_last=drop_last)\n    test_loader = DataLoader(ShapeNetPart(partition='test', num_points=args.num_points, class_choice=args.class_choice), \n                            num_workers=8, batch_size=args.test_batch_size, shuffle=False, drop_last=False)\n    \n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n    io.cprint(\"Let's use \" + str(torch.cuda.device_count()) + \" GPUs!\")\n\n    seg_num_all = train_loader.dataset.seg_num_all\n    seg_start_index = train_loader.dataset.seg_start_index\n\n    # create model\n    model = CurveNet().to(device)\n    model = nn.DataParallel(model)\n\n    if args.use_sgd:\n        print(\"Use SGD\")\n        opt = 
optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)\n    else:\n        print(\"Use Adam\")\n        opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)\n\n    if args.scheduler == 'cos':\n        scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=1e-3)\n    elif args.scheduler == 'step':\n        scheduler = MultiStepLR(opt, [140, 180], gamma=0.1)\n    criterion = cal_loss\n\n    best_test_iou = 0\n    for epoch in range(args.epochs):\n        ####################\n        # Train\n        ####################\n        train_loss = 0.0\n        count = 0.0\n        model.train()\n        train_true_cls = []\n        train_pred_cls = []\n        train_true_seg = []\n        train_pred_seg = []\n        train_label_seg = []\n        for data, label, seg in train_loader:\n            seg = seg - seg_start_index\n            label_one_hot = np.zeros((label.shape[0], 16))\n            for idx in range(label.shape[0]):\n                label_one_hot[idx, label[idx]] = 1\n            label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))\n            data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            opt.zero_grad()\n            seg_pred = model(data, label_one_hot)\n            seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n            loss = criterion(seg_pred.view(-1, seg_num_all), seg.view(-1,1).squeeze())\n            loss.backward()\n            torch.nn.utils.clip_grad_norm_(model.parameters(), 1)\n            opt.step()\n            pred = seg_pred.max(dim=2)[1]               # (batch_size, num_points)\n            count += batch_size\n            train_loss += loss.item() * batch_size\n            seg_np = seg.cpu().numpy()                  # (batch_size, num_points)\n            pred_np = pred.detach().cpu().numpy()       # (batch_size, num_points)\n            
train_true_cls.append(seg_np.reshape(-1))       # (batch_size * num_points)\n            train_pred_cls.append(pred_np.reshape(-1))      # (batch_size * num_points)\n            train_true_seg.append(seg_np)\n            train_pred_seg.append(pred_np)\n            train_label_seg.append(label.reshape(-1))\n        if args.scheduler == 'cos':\n            scheduler.step()\n        elif args.scheduler == 'step':\n            if opt.param_groups[0]['lr'] > 1e-5:\n                scheduler.step()\n            if opt.param_groups[0]['lr'] < 1e-5:\n                for param_group in opt.param_groups:\n                    param_group['lr'] = 1e-5\n        train_true_cls = np.concatenate(train_true_cls)\n        train_pred_cls = np.concatenate(train_pred_cls)\n        train_acc = metrics.accuracy_score(train_true_cls, train_pred_cls)\n        avg_per_class_acc = metrics.balanced_accuracy_score(train_true_cls, train_pred_cls)\n        train_true_seg = np.concatenate(train_true_seg, axis=0)\n        train_pred_seg = np.concatenate(train_pred_seg, axis=0)\n        train_label_seg = np.concatenate(train_label_seg)\n        train_ious = calculate_shape_IoU(train_pred_seg, train_true_seg, train_label_seg, args.class_choice)\n        outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f, train iou: %.6f' % (epoch, \n                                                                                                  train_loss*1.0/count,\n                                                                                                  train_acc,\n                                                                                                  avg_per_class_acc,\n                                                                                                  np.mean(train_ious))\n        io.cprint(outstr)\n\n        ####################\n        # Test\n        ####################\n        test_loss = 0.0\n        count = 0.0\n        model.eval()\n        
test_true_cls = []\n        test_pred_cls = []\n        test_true_seg = []\n        test_pred_seg = []\n        test_label_seg = []\n        for data, label, seg in test_loader:\n            seg = seg - seg_start_index\n            label_one_hot = np.zeros((label.shape[0], 16))\n            for idx in range(label.shape[0]):\n                label_one_hot[idx, label[idx]] = 1\n            label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))\n            data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            seg_pred = model(data, label_one_hot)\n            seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n            loss = criterion(seg_pred.view(-1, seg_num_all), seg.view(-1,1).squeeze())\n            pred = seg_pred.max(dim=2)[1]\n            count += batch_size\n            test_loss += loss.item() * batch_size\n            seg_np = seg.cpu().numpy()\n            pred_np = pred.detach().cpu().numpy()\n            test_true_cls.append(seg_np.reshape(-1))\n            test_pred_cls.append(pred_np.reshape(-1))\n            test_true_seg.append(seg_np)\n            test_pred_seg.append(pred_np)\n            test_label_seg.append(label.reshape(-1))\n        test_true_cls = np.concatenate(test_true_cls)\n        test_pred_cls = np.concatenate(test_pred_cls)\n        test_acc = metrics.accuracy_score(test_true_cls, test_pred_cls)\n        avg_per_class_acc = metrics.balanced_accuracy_score(test_true_cls, test_pred_cls)\n        test_true_seg = np.concatenate(test_true_seg, axis=0)\n        test_pred_seg = np.concatenate(test_pred_seg, axis=0)\n        test_label_seg = np.concatenate(test_label_seg)\n        test_ious = calculate_shape_IoU(test_pred_seg, test_true_seg, test_label_seg, args.class_choice)\n        outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f, test iou: %.6f, best iou %.6f' % (epoch,\n            
                                                                                  test_loss*1.0/count,\n                                                                                              test_acc,\n                                                                                              avg_per_class_acc,\n                                                                                              np.mean(test_ious), best_test_iou)\n        io.cprint(outstr)\n        if np.mean(test_ious) >= best_test_iou:\n            best_test_iou = np.mean(test_ious)\n            torch.save(model.state_dict(), '../checkpoints/%s/models/model.t7' % args.exp_name)\n\n\ndef test(args, io):\n    test_loader = DataLoader(ShapeNetPart(partition='test', num_points=args.num_points, class_choice=args.class_choice),\n                             batch_size=args.test_batch_size, shuffle=True, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    #Try to load models\n    seg_start_index = test_loader.dataset.seg_start_index\n    model = CurveNet().to(device)\n    model = nn.DataParallel(model)\n    model.load_state_dict(torch.load(args.model_path))\n\n    model = model.eval()\n    test_acc = 0.0\n    test_true_cls = []\n    test_pred_cls = []\n    test_true_seg = []\n    test_pred_seg = []\n    test_label_seg = []\n    category = {}\n    for data, label, seg in test_loader:\n        seg = seg - seg_start_index\n        label_one_hot = np.zeros((label.shape[0], 16))\n        for idx in range(label.shape[0]):\n            label_one_hot[idx, label[idx]] = 1\n        label_one_hot = torch.from_numpy(label_one_hot.astype(np.float32))\n        data, label_one_hot, seg = data.to(device), label_one_hot.to(device), seg.to(device)\n        data = data.permute(0, 2, 1)\n        seg_pred = model(data, label_one_hot)\n        seg_pred = seg_pred.permute(0, 2, 1).contiguous()\n        pred = seg_pred.max(dim=2)[1]\n        seg_np = seg.cpu().numpy()\n    
    pred_np = pred.detach().cpu().numpy()\n        test_true_cls.append(seg_np.reshape(-1))\n        test_pred_cls.append(pred_np.reshape(-1))\n        test_true_seg.append(seg_np)\n        test_pred_seg.append(pred_np)\n        test_label_seg.append(label.reshape(-1))\n\n    test_true_cls = np.concatenate(test_true_cls)\n    test_pred_cls = np.concatenate(test_pred_cls)\n    test_acc = metrics.accuracy_score(test_true_cls, test_pred_cls)\n    avg_per_class_acc = metrics.balanced_accuracy_score(test_true_cls, test_pred_cls)\n    test_true_seg = np.concatenate(test_true_seg, axis=0)\n    test_pred_seg = np.concatenate(test_pred_seg, axis=0)\n    test_label_seg = np.concatenate(test_label_seg)\n    test_ious,category = calculate_shape_IoU(test_pred_seg, test_true_seg, test_label_seg, args.class_choice, eva=True)\n    outstr = 'Test :: test acc: %.6f, test avg acc: %.6f, test iou: %.6f' % (test_acc,\n                                                                             avg_per_class_acc,\n                                                                             np.mean(test_ious))\n    io.cprint(outstr)\n    results = []\n    for key in category.keys():\n        results.append((int(key), np.mean(category[key]), len(category[key])))\n    results.sort(key=lambda x:x[0])\n    for re in results:\n        io.cprint('idx: %d mIoU: %.3f num: %d' % (re[0], re[1], re[2]))\n\n\nif __name__ == \"__main__\":\n    # Training settings\n    parser = argparse.ArgumentParser(description='Point Cloud Part Segmentation')\n    parser.add_argument('--exp_name', type=str, default='exp', metavar='N',\n                        help='Name of the experiment')\n    parser.add_argument('--dataset', type=str, default='shapenetpart', metavar='N',\n                        choices=['shapenetpart'])\n    parser.add_argument('--class_choice', type=str, default=None, metavar='N',\n                        choices=['airplane', 'bag', 'cap', 'car', 'chair',\n                                 
'earphone', 'guitar', 'knife', 'lamp', 'laptop', \n                                 'motor', 'mug', 'pistol', 'rocket', 'skateboard', 'table'])\n    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',\n                        help='Size of batch')\n    parser.add_argument('--epochs', type=int, default=200, metavar='N',\n                        help='number of epochs to train')\n    parser.add_argument('--use_sgd', type=bool, default=True,\n                        help='Use SGD')\n    parser.add_argument('--lr', type=float, default=0.0005, metavar='LR',\n                        help='learning rate (default: 0.0005, multiplied by 100 when using SGD)')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--scheduler', type=str, default='step', metavar='N',\n                        choices=['cos', 'step'],\n                        help='Scheduler to use, [cos, step]')\n    parser.add_argument('--no_cuda', type=bool, default=False,\n                        help='disables CUDA training')\n    parser.add_argument('--eval', type=bool, default=False,\n                        help='evaluate the model')\n    parser.add_argument('--num_points', type=int, default=2048,\n                        help='num of points to use')\n    parser.add_argument('--model_path', type=str, default='', metavar='N',\n                        help='Pretrained model path')\n    args = parser.parse_args()\n\n    seed = np.random.randint(1, 10000)\n\n    _init_()\n\n    if args.eval:\n        io = IOStream('../checkpoints/' + args.exp_name + '/eval.log')\n    else:\n        io = IOStream('../checkpoints/' + args.exp_name + '/run.log')\n    io.cprint(str(args))\n    io.cprint('random seed is: ' + str(seed))\n\n    args.cuda = not args.no_cuda 
and torch.cuda.is_available()\n\n    if args.cuda:\n        io.cprint(\n            'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')\n    else:\n        io.cprint('Using CPU')\n\n    if not args.eval:\n        train(args, io)\n    else:\n        with torch.no_grad():\n            test(args, io)\n"
  },
  {
    "path": "CurveNet/core/models/curvenet_cls.py",
    "content": "\"\"\"\n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@File: curvenet_cls.py\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom .curvenet_util import *\n\n\ncurve_config = {\n        'default': [[100, 5], [100, 5], None, None],\n        'long':  [[10, 30], None,  None,  None]\n    }\n\nclass CurveNet(nn.Module):\n    def __init__(self, num_classes=40, k=20, setting='default'):\n        super(CurveNet, self).__init__()\n\n        assert setting in curve_config\n\n        additional_channel = 32\n        self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)\n\n        # encoder\n        self.cic11 = CIC(npoint=1024, radius=0.05, k=k, in_channels=additional_channel, output_channels=64, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][0])\n        self.cic12 = CIC(npoint=1024, radius=0.05, k=k, in_channels=64, output_channels=64, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][0])\n        \n        self.cic21 = CIC(npoint=1024, radius=0.05, k=k, in_channels=64, output_channels=128, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][1])\n        self.cic22 = CIC(npoint=1024, radius=0.1, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][1])\n\n        self.cic31 = CIC(npoint=256, radius=0.1, k=k, in_channels=128, output_channels=256, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][2])\n        self.cic32 = CIC(npoint=256, radius=0.2, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4, mlp_num=1, curve_config=curve_config[setting][2])\n\n        self.cic41 = CIC(npoint=64, radius=0.2, k=k, in_channels=256, output_channels=512, bottleneck_ratio=2, mlp_num=1, curve_config=curve_config[setting][3])\n        self.cic42 = CIC(npoint=64, radius=0.4, k=k, in_channels=512, output_channels=512, bottleneck_ratio=4, mlp_num=1, 
curve_config=curve_config[setting][3])\n\n        self.conv0 = nn.Sequential(\n            nn.Conv1d(512, 1024, kernel_size=1, bias=False),\n            nn.BatchNorm1d(1024),\n            nn.ReLU(inplace=True))\n        self.conv1 = nn.Linear(1024 * 2, 512, bias=False)\n        self.conv2 = nn.Linear(512, num_classes)\n        self.bn1 = nn.BatchNorm1d(512)\n        self.dp1 = nn.Dropout(p=0.5)\n\n    def forward(self, xyz):\n        l0_points = self.lpfa(xyz, xyz)\n\n        l1_xyz, l1_points = self.cic11(xyz, l0_points)\n        l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)\n\n        l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)\n        l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)\n\n        l3_xyz, l3_points = self.cic31(l2_xyz, l2_points)\n        l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)\n \n        l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)\n        l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)\n\n        x = self.conv0(l4_points)\n        x_max = F.adaptive_max_pool1d(x, 1)\n        x_avg = F.adaptive_avg_pool1d(x, 1)\n        \n        x = torch.cat((x_max, x_avg), dim=1).squeeze(-1)\n        x = F.relu(self.bn1(self.conv1(x).unsqueeze(-1)), inplace=True).squeeze(-1)\n        x = self.dp1(x)\n        x = self.conv2(x)\n        return x\n"
  },
  {
    "path": "CurveNet/core/models/curvenet_normal.py",
    "content": "\"\"\"\n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@File: curvenet_normal.py\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom .curvenet_util import *\n\n\ncurve_config = {\n        'default': [[100, 5], [100, 5], None, None]\n    }\n\nclass CurveNet(nn.Module):\n    def __init__(self, num_classes=3, k=20, multiplier=1.0, setting='default'):\n        super(CurveNet, self).__init__()\n\n        assert setting in curve_config\n\n        additional_channel = 64\n        channels = [128, 256, 512, 1024]\n        channels = [int(c * multiplier) for c in channels]\n        \n        self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)\n\n        # encoder\n        self.cic11 = CIC(npoint=1024, radius=0.1, k=k, in_channels=additional_channel, output_channels=channels[0], bottleneck_ratio=2, curve_config=curve_config[setting][0])\n        self.cic12 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0], output_channels=channels[0], bottleneck_ratio=4, curve_config=curve_config[setting][0])\n        \n        self.cic21 = CIC(npoint=256, radius=0.2, k=k, in_channels=channels[0], output_channels=channels[1], bottleneck_ratio=2, curve_config=curve_config[setting][1])\n        self.cic22 = CIC(npoint=256, radius=0.2, k=k, in_channels=channels[1], output_channels=channels[1], bottleneck_ratio=4, curve_config=curve_config[setting][1])\n\n        self.cic31 = CIC(npoint=64, radius=0.4, k=k, in_channels=channels[1], output_channels=channels[2], bottleneck_ratio=2, curve_config=curve_config[setting][2])\n        self.cic32 = CIC(npoint=64, radius=0.4, k=k, in_channels=channels[2], output_channels=channels[2], bottleneck_ratio=4, curve_config=curve_config[setting][2])\n\n        self.cic41 = CIC(npoint=16, radius=0.8, k=15, in_channels=channels[2], output_channels=channels[3], bottleneck_ratio=2, curve_config=curve_config[setting][3])\n        self.cic42 = CIC(npoint=16, 
radius=0.8, k=15, in_channels=channels[3], output_channels=channels[3], bottleneck_ratio=4, curve_config=curve_config[setting][3])\n        #self.cic43 = CIC(npoint=16, radius=0.8, k=15, in_channels=2048, output_channels=2048, bottleneck_ratio=4, curve_config=curve_config[setting][3])\n        # decoder\n        self.fp3 = PointNetFeaturePropagation(in_channel=channels[3] + channels[2], mlp=[channels[2], channels[2]], att=[channels[3], channels[3]//2, channels[3]//8])\n        self.up_cic4 = CIC(npoint=64, radius=0.8, k=k, in_channels=channels[2], output_channels=channels[2], bottleneck_ratio=4)\n\n        self.fp2 = PointNetFeaturePropagation(in_channel=channels[2] + channels[1], mlp=[channels[1], channels[1]], att=[channels[2], channels[2]//2, channels[2]//8])\n        self.up_cic3 = CIC(npoint=256, radius=0.4, k=k, in_channels=channels[1], output_channels=channels[1], bottleneck_ratio=4)\n\n        self.fp1 = PointNetFeaturePropagation(in_channel=channels[1] + channels[0], mlp=[channels[0], channels[0]], att=[channels[1], channels[1]//2, channels[1]//8])\n        self.up_cic2 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0]+3, output_channels=channels[0], bottleneck_ratio=4)\n        self.up_cic1 = CIC(npoint=1024, radius=0.1, k=k, in_channels=channels[0], output_channels=channels[0], bottleneck_ratio=4)\n\n        self.point_conv = nn.Sequential(\n            nn.Conv2d(9, additional_channel, kernel_size=1, bias=False),\n            nn.BatchNorm2d(additional_channel),\n            nn.LeakyReLU(negative_slope=0.2, inplace=True))\n\n        self.conv1 = nn.Conv1d(channels[0], num_classes, 1)\n\n    def forward(self, xyz):\n        l0_points = self.lpfa(xyz, xyz)\n\n        l1_xyz, l1_points = self.cic11(xyz, l0_points)\n        l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)\n\n        l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)\n        l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)\n\n        l3_xyz, l3_points = self.cic31(l2_xyz, 
l2_points)\n        l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)\n \n        l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)\n        l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)\n        #l4_xyz, l4_points = self.cic43(l4_xyz, l4_points)\n\n        l3_points = self.fp3(l3_xyz, l4_xyz, l3_points, l4_points)\n        l3_xyz, l3_points = self.up_cic4(l3_xyz, l3_points)\n        l2_points = self.fp2(l2_xyz, l3_xyz, l2_points, l3_points)\n        l2_xyz, l2_points = self.up_cic3(l2_xyz, l2_points)\n        l1_points = self.fp1(l1_xyz, l2_xyz, l1_points, l2_points)\n\n        x = torch.cat((l1_xyz, l1_points), dim=1)\n\n        xyz, x = self.up_cic2(l1_xyz, x)\n        xyz, x = self.up_cic1(xyz, x)\n\n        x = self.conv1(x)\n        return x\n"
  },
  {
    "path": "CurveNet/core/models/curvenet_seg.py",
    "content": "\"\"\"\n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@File: curvenet_seg.py\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom .curvenet_util import *\n\n\ncurve_config = {\n        'default': [[100, 5], [100, 5], None, None, None]\n    }\n\nclass CurveNet(nn.Module):\n    def __init__(self, num_classes=50, category=16, k=32, setting='default'):\n        super(CurveNet, self).__init__()\n\n        assert setting in curve_config\n\n        additional_channel = 32\n        self.lpfa = LPFA(9, additional_channel, k=k, mlp_num=1, initial=True)\n\n        # encoder\n        self.cic11 = CIC(npoint=2048, radius=0.2, k=k, in_channels=additional_channel, output_channels=64, bottleneck_ratio=2, curve_config=curve_config[setting][0])\n        self.cic12 = CIC(npoint=2048, radius=0.2, k=k, in_channels=64, output_channels=64, bottleneck_ratio=4, curve_config=curve_config[setting][0])\n\n        self.cic21 = CIC(npoint=512, radius=0.4, k=k, in_channels=64, output_channels=128, bottleneck_ratio=2, curve_config=curve_config[setting][1])\n        self.cic22 = CIC(npoint=512, radius=0.4, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4, curve_config=curve_config[setting][1])\n\n        self.cic31 = CIC(npoint=128, radius=0.8, k=k, in_channels=128, output_channels=256, bottleneck_ratio=2, curve_config=curve_config[setting][2])\n        self.cic32 = CIC(npoint=128, radius=0.8, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4, curve_config=curve_config[setting][2])\n\n        self.cic41 = CIC(npoint=32, radius=1.2, k=31, in_channels=256, output_channels=512, bottleneck_ratio=2, curve_config=curve_config[setting][3])\n        self.cic42 = CIC(npoint=32, radius=1.2, k=31, in_channels=512, output_channels=512, bottleneck_ratio=4, curve_config=curve_config[setting][3])\n\n        self.cic51 = CIC(npoint=8, radius=2.0, k=7, in_channels=512, output_channels=1024, bottleneck_ratio=2, 
curve_config=curve_config[setting][4])\n        self.cic52 = CIC(npoint=8, radius=2.0, k=7, in_channels=1024, output_channels=1024, bottleneck_ratio=4, curve_config=curve_config[setting][4])\n        self.cic53 = CIC(npoint=8, radius=2.0, k=7, in_channels=1024, output_channels=1024, bottleneck_ratio=4, curve_config=curve_config[setting][4])\n\n        # decoder\n        self.fp4 = PointNetFeaturePropagation(in_channel=1024 + 512, mlp=[512, 512], att=[1024, 512, 256])\n        self.up_cic5 = CIC(npoint=32, radius=1.2, k=31, in_channels=512, output_channels=512, bottleneck_ratio=4)\n\n        self.fp3 = PointNetFeaturePropagation(in_channel=512 + 256, mlp=[256, 256], att=[512, 256, 128])\n        self.up_cic4 = CIC(npoint=128, radius=0.8, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4)\n\n        self.fp2 = PointNetFeaturePropagation(in_channel=256 + 128, mlp=[128, 128], att=[256, 128, 64])\n        self.up_cic3 = CIC(npoint=512, radius=0.4, k=k, in_channels=128, output_channels=128, bottleneck_ratio=4)\n\n        self.fp1 = PointNetFeaturePropagation(in_channel=128 + 64, mlp=[64, 64], att=[128, 64, 32])\n        self.up_cic2 = CIC(npoint=2048, radius=0.2, k=k, in_channels=128+64+64+category+3, output_channels=256, bottleneck_ratio=4)\n        self.up_cic1 = CIC(npoint=2048, radius=0.2, k=k, in_channels=256, output_channels=256, bottleneck_ratio=4)\n        \n\n        self.global_conv2 = nn.Sequential(\n            nn.Conv1d(1024, 128, kernel_size=1, bias=False),\n            nn.BatchNorm1d(128),\n            nn.LeakyReLU(negative_slope=0.2))\n        self.global_conv1 = nn.Sequential(\n            nn.Conv1d(512, 64, kernel_size=1, bias=False),\n            nn.BatchNorm1d(64),\n            nn.LeakyReLU(negative_slope=0.2))\n\n        self.conv1 = nn.Conv1d(256, 256, 1, bias=False)\n        self.bn1 = nn.BatchNorm1d(256)\n        self.drop1 = nn.Dropout(0.5)\n        self.conv2 = nn.Conv1d(256, num_classes, 1)\n        self.se = 
nn.Sequential(nn.AdaptiveAvgPool1d(1),\n                                nn.Conv1d(256, 256//8, 1, bias=False),\n                                nn.BatchNorm1d(256//8),\n                                nn.LeakyReLU(negative_slope=0.2),\n                                nn.Conv1d(256//8, 256, 1, bias=False),\n                                nn.Sigmoid())\n                                \n    def forward(self, xyz, l=None):\n        batch_size = xyz.size(0)\n\n        l0_points = self.lpfa(xyz, xyz)\n\n        l1_xyz, l1_points = self.cic11(xyz, l0_points)\n        l1_xyz, l1_points = self.cic12(l1_xyz, l1_points)\n\n        l2_xyz, l2_points = self.cic21(l1_xyz, l1_points)\n        l2_xyz, l2_points = self.cic22(l2_xyz, l2_points)\n\n        l3_xyz, l3_points = self.cic31(l2_xyz, l2_points)\n        l3_xyz, l3_points = self.cic32(l3_xyz, l3_points)\n \n        l4_xyz, l4_points = self.cic41(l3_xyz, l3_points)\n        l4_xyz, l4_points = self.cic42(l4_xyz, l4_points)\n\n        l5_xyz, l5_points = self.cic51(l4_xyz, l4_points)\n        l5_xyz, l5_points = self.cic52(l5_xyz, l5_points)\n        l5_xyz, l5_points = self.cic53(l5_xyz, l5_points)\n\n        # global features\n        emb1 = self.global_conv1(l4_points)\n        emb1 = emb1.max(dim=-1, keepdim=True)[0] # bs, 64, 1\n        emb2 = self.global_conv2(l5_points)\n        emb2 = emb2.max(dim=-1, keepdim=True)[0] # bs, 128, 1\n\n        # Feature Propagation layers\n        l4_points = self.fp4(l4_xyz, l5_xyz, l4_points, l5_points)\n        l4_xyz, l4_points = self.up_cic5(l4_xyz, l4_points)\n\n        l3_points = self.fp3(l3_xyz, l4_xyz, l3_points, l4_points)\n        l3_xyz, l3_points = self.up_cic4(l3_xyz, l3_points)\n\n        l2_points = self.fp2(l2_xyz, l3_xyz, l2_points, l3_points)\n        l2_xyz, l2_points = self.up_cic3(l2_xyz, l2_points)\n\n        l1_points = self.fp1(l1_xyz, l2_xyz, l1_points, l2_points)\n\n        if l is not None:\n            l = l.view(batch_size, -1, 1)\n            emb = 
torch.cat((emb1, emb2, l), dim=1) # bs, 64 + 128 + 16, 1\n        l = emb.expand(-1, -1, xyz.size(-1))\n        x = torch.cat((l1_xyz, l1_points, l), dim=1)\n\n        xyz, x = self.up_cic2(l1_xyz, x)\n        xyz, x = self.up_cic1(xyz, x)\n\n        x = F.leaky_relu(self.bn1(self.conv1(x)), 0.2, inplace=True)\n        se = self.se(x)\n        x = x * se\n        x = self.drop1(x)\n        x = self.conv2(x)\n        return x\n"
  },
  {
    "path": "CurveNet/core/models/curvenet_util.py",
    "content": "\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: pointnet_util.py\n@Time: 2018/10/13 10:39 PM\n\nModified by \n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom time import time\nimport numpy as np\n\nfrom .walk import Walk\n\n\ndef knn(x, k):\n    k = k + 1\n    inner = -2 * torch.matmul(x.transpose(2, 1), x)\n    xx = torch.sum(x**2, dim=1, keepdim=True)\n    pairwise_distance = -xx - inner - xx.transpose(2, 1)\n\n    idx = pairwise_distance.topk(k=k, dim=-1)[1]  # (batch_size, num_points, k)\n    return idx\n\ndef normal_knn(x, k):\n    inner = -2 * torch.matmul(x.transpose(2, 1), x)\n    xx = torch.sum(x**2, dim=1, keepdim=True)\n    pairwise_distance = -xx - inner - xx.transpose(2, 1)\n\n    idx = pairwise_distance.topk(k=k, dim=-1)[1]  # (batch_size, num_points, k)\n    return idx\n\ndef pc_normalize(pc):\n    l = pc.shape[0]\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n    pc = pc / m\n    return pc\n\ndef square_distance(src, dst):\n    \"\"\"\n    Calculate Euclid distance between each two points.\n    \"\"\"\n    B, N, _ = src.shape\n    _, M, _ = dst.shape\n    dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))\n    dist += torch.sum(src ** 2, -1).view(B, N, 1)\n    dist += torch.sum(dst ** 2, -1).view(B, 1, M)\n    return dist\n\ndef index_points(points, idx):\n    \"\"\"\n\n    Input:\n        points: input points data, [B, N, C]\n        idx: sample index data, [B, S]\n    Return:\n        new_points:, indexed points data, [B, S, C]\n    \"\"\"\n    device = points.device\n    B = points.shape[0]\n    view_shape = list(idx.shape)\n    view_shape[1:] = [1] * (len(view_shape) - 1)\n    repeat_shape = list(idx.shape)\n    repeat_shape[0] = 1\n    batch_indices = torch.arange(B, 
dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)\n    new_points = points[batch_indices, idx, :]\n    return new_points\n\n\ndef farthest_point_sample(xyz, npoint):\n    \"\"\"\n    Input:\n        xyz: pointcloud data, [B, N, 3]\n        npoint: number of samples\n    Return:\n        centroids: sampled pointcloud index, [B, npoint]\n    \"\"\"\n    device = xyz.device\n    B, N, C = xyz.shape\n    centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)\n    distance = torch.ones(B, N).to(device) * 1e10\n    farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device) * 0\n    batch_indices = torch.arange(B, dtype=torch.long).to(device)\n    for i in range(npoint):\n        centroids[:, i] = farthest\n        centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)\n        dist = torch.sum((xyz - centroid) ** 2, -1)\n        mask = dist < distance\n        distance[mask] = dist[mask]\n        farthest = torch.max(distance, -1)[1]\n    return centroids\n\ndef query_ball_point(radius, nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        radius: local region radius\n        nsample: max sample number in local region\n        xyz: all points, [B, N, 3]\n        new_xyz: query points, [B, S, 3]\n    Return:\n        group_idx: grouped points index, [B, S, nsample]\n    \"\"\"\n    device = xyz.device\n    B, N, C = xyz.shape\n    _, S, _ = new_xyz.shape\n    group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])\n    sqrdists = square_distance(new_xyz, xyz)\n    group_idx[sqrdists > radius ** 2] = N\n    group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]\n    group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])\n    mask = group_idx == N\n    group_idx[mask] = group_first[mask]\n    return group_idx\n\ndef sample_and_group(npoint, radius, nsample, xyz, points, returnfps=False):\n    \"\"\"\n    Input:\n        npoint:\n        radius:\n        nsample:\n        xyz: input points 
position data, [B, N, 3]\n        points: input points data, [B, N, D]\n    Return:\n        new_xyz: sampled points position data, [B, npoint, nsample, 3]\n        new_points: sampled points data, [B, npoint, nsample, 3+D]\n    \"\"\"\n    new_xyz = index_points(xyz, farthest_point_sample(xyz, npoint))\n    torch.cuda.empty_cache()\n\n    idx = query_ball_point(radius, nsample, xyz, new_xyz)\n    torch.cuda.empty_cache()\n\n    new_points = index_points(points, idx)\n    torch.cuda.empty_cache()\n\n    if returnfps:\n        return new_xyz, new_points, idx\n    else:\n        return new_xyz, new_points\n\nclass Attention_block(nn.Module):\n    '''\n    Used in attention U-Net.\n    '''\n    def __init__(self,F_g,F_l,F_int):\n        super(Attention_block,self).__init__()\n        self.W_g = nn.Sequential(\n            nn.Conv1d(F_g, F_int, kernel_size=1,stride=1,padding=0,bias=True),\n            nn.BatchNorm1d(F_int)\n            )\n\n        self.W_x = nn.Sequential(\n            nn.Conv1d(F_l, F_int, kernel_size=1,stride=1,padding=0,bias=True),\n            nn.BatchNorm1d(F_int)\n        )\n\n        self.psi = nn.Sequential(\n            nn.Conv1d(F_int, 1, kernel_size=1,stride=1,padding=0,bias=True),\n            nn.BatchNorm1d(1),\n            nn.Sigmoid()\n        )\n\n    def forward(self,g,x):\n        g1 = self.W_g(g)\n        x1 = self.W_x(x)\n        psi = F.leaky_relu(g1+x1, negative_slope=0.2)\n        psi = self.psi(psi)\n\n        return psi, 1. 
- psi\n\n\nclass LPFA(nn.Module):\n    def __init__(self, in_channel, out_channel, k, mlp_num=2, initial=False):\n        super(LPFA, self).__init__()\n        self.k = k\n        self.device = torch.device('cuda')\n        self.initial = initial\n\n        if not initial:\n            self.xyz2feature = nn.Sequential(\n                        nn.Conv2d(9, in_channel, kernel_size=1, bias=False),\n                        nn.BatchNorm2d(in_channel))\n\n        self.mlp = []\n        for _ in range(mlp_num):\n            self.mlp.append(nn.Sequential(nn.Conv2d(in_channel, out_channel, 1, bias=False),\n                                 nn.BatchNorm2d(out_channel),\n                                 nn.LeakyReLU(0.2)))\n            in_channel = out_channel\n        self.mlp = nn.Sequential(*self.mlp)        \n\n    def forward(self, x, xyz, idx=None):\n        x = self.group_feature(x, xyz, idx)\n        x = self.mlp(x)\n\n        if self.initial:\n            x = x.max(dim=-1, keepdim=False)[0]\n        else:\n            x = x.mean(dim=-1, keepdim=False)\n\n        return x\n\n    def group_feature(self, x, xyz, idx):\n        batch_size, num_dims, num_points = x.size()\n\n        if idx is None:\n            idx = knn(xyz, k=self.k)[:,:,:self.k]  # (batch_size, num_points, k)\n\n        idx_base = torch.arange(0, batch_size, device=self.device).view(-1, 1, 1) * num_points\n        idx = idx + idx_base\n        idx = idx.view(-1)\n\n        xyz = xyz.transpose(2, 1).contiguous() # bs, n, 3\n        point_feature = xyz.view(batch_size * num_points, -1)[idx, :]\n        point_feature = point_feature.view(batch_size, num_points, self.k, -1)  # bs, n, k, 3\n        points = xyz.view(batch_size, num_points, 1, 3).expand(-1, -1, self.k, -1)  # bs, n, k, 3\n\n        point_feature = torch.cat((points, point_feature, point_feature - points),\n                                dim=3).permute(0, 3, 1, 2).contiguous()\n\n        if self.initial:\n            return point_feature\n\n 
       x = x.transpose(2, 1).contiguous() # bs, n, c\n        feature = x.view(batch_size * num_points, -1)[idx, :]\n        feature = feature.view(batch_size, num_points, self.k, num_dims)  #bs, n, k, c\n        x = x.view(batch_size, num_points, 1, num_dims)\n        feature = feature - x\n\n        feature = feature.permute(0, 3, 1, 2).contiguous()\n        point_feature = self.xyz2feature(point_feature)  #bs, c, n, k\n        feature = F.leaky_relu(feature + point_feature, 0.2)\n        return feature #bs, c, n, k\n\n\nclass PointNetFeaturePropagation(nn.Module):\n    def __init__(self, in_channel, mlp, att=None):\n        super(PointNetFeaturePropagation, self).__init__()\n        self.mlp_convs = nn.ModuleList()\n        self.mlp_bns = nn.ModuleList()\n        last_channel = in_channel\n        self.att = None\n        if att is not None:\n            self.att = Attention_block(F_g=att[0],F_l=att[1],F_int=att[2])\n        \n        for out_channel in mlp:\n            self.mlp_convs.append(nn.Conv1d(last_channel, out_channel, 1))\n            self.mlp_bns.append(nn.BatchNorm1d(out_channel))\n            last_channel = out_channel\n\n    def forward(self, xyz1, xyz2, points1, points2):\n        \"\"\"\n        Input:\n            xyz1: input points position data, [B, C, N]\n            xyz2: sampled input points position data, [B, C, S], skipped xyz\n            points1: input points data, [B, D, N]\n            points2: input points data, [B, D, S], skipped features\n        Return:\n            new_points: upsampled points data, [B, D', N]\n        \"\"\"\n        xyz1 = xyz1.permute(0, 2, 1)\n        xyz2 = xyz2.permute(0, 2, 1)\n\n        points2 = points2.permute(0, 2, 1)\n        B, N, C = xyz1.shape\n        _, S, _ = xyz2.shape\n\n        if S == 1:\n            interpolated_points = points2.repeat(1, N, 1)\n        else:\n            dists = square_distance(xyz1, xyz2)\n            dists, idx = dists.sort(dim=-1)\n            dists, idx = dists[:, :, 
:3], idx[:, :, :3]  # [B, N, 3]\n\n            dist_recip = 1.0 / (dists + 1e-8)\n            norm = torch.sum(dist_recip, dim=2, keepdim=True)\n            weight = dist_recip / norm\n            interpolated_points = torch.sum(index_points(points2, idx) * weight.view(B, N, 3, 1), dim=2)\n\n        # skip attention\n        if self.att is not None:\n           psix, psig = self.att(interpolated_points.permute(0, 2, 1), points1)\n           points1 = points1 * psix\n           \n        if points1 is not None:\n            points1 = points1.permute(0, 2, 1)\n            new_points = torch.cat([points1, interpolated_points], dim=-1)\n        else:\n            new_points = interpolated_points\n\n        new_points = new_points.permute(0, 2, 1)\n\n        for i, conv in enumerate(self.mlp_convs):\n            bn = self.mlp_bns[i]\n            new_points = F.leaky_relu(bn(conv(new_points)), 0.2)\n\n        return new_points\n\n\nclass CIC(nn.Module):\n    def __init__(self, npoint, radius, k, in_channels, output_channels, bottleneck_ratio=2, mlp_num=2, curve_config=None):\n        super(CIC, self).__init__()\n        self.in_channels = in_channels\n        self.output_channels = output_channels\n        self.bottleneck_ratio = bottleneck_ratio\n        self.radius = radius\n        self.k = k\n        self.npoint = npoint\n\n        planes = in_channels // bottleneck_ratio\n\n        self.use_curve = curve_config is not None\n        if self.use_curve:\n            self.curveaggregation = CurveAggregation(planes)\n            self.curvegrouping = CurveGrouping(planes, k, curve_config[0], curve_config[1])\n\n        self.conv1 = nn.Sequential(\n            nn.Conv1d(in_channels,\n                      planes,\n                      kernel_size=1,\n                      bias=False),\n            nn.BatchNorm1d(in_channels // bottleneck_ratio),\n            nn.LeakyReLU(negative_slope=0.2, inplace=True))\n\n        self.conv2 = nn.Sequential(\n            
nn.Conv1d(planes, output_channels, kernel_size=1, bias=False),\n            nn.BatchNorm1d(output_channels))\n\n        if in_channels != output_channels:\n            self.shortcut = nn.Sequential(\n                nn.Conv1d(in_channels,\n                          output_channels,\n                          kernel_size=1,\n                          bias=False),\n                nn.BatchNorm1d(output_channels))\n\n        self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n        self.maxpool = MaskedMaxPool(npoint, radius, k)\n\n        self.lpfa = LPFA(planes, planes, k, mlp_num=mlp_num, initial=False)\n\n    def forward(self, xyz, x):\n \n        # max pool\n        if xyz.size(-1) != self.npoint:\n            xyz, x = self.maxpool(\n                xyz.transpose(1, 2).contiguous(), x)\n            xyz = xyz.transpose(1, 2)\n\n        shortcut = x\n        x = self.conv1(x)  # bs, c', n\n\n        idx = knn(xyz, self.k)\n\n        if self.use_curve:\n            # curve grouping\n            curves = self.curvegrouping(x, xyz, idx[:,:,1:]) # avoid self-loop\n\n            # curve aggregation\n            x = self.curveaggregation(x, curves)\n\n        x = self.lpfa(x, xyz, idx=idx[:,:,:self.k]) #bs, c', n, k\n\n        x = self.conv2(x)  # bs, c, n\n\n        if self.in_channels != self.output_channels:\n            shortcut = self.shortcut(shortcut)\n\n        x = self.relu(x + shortcut)\n\n        return xyz, x\n\n\nclass CurveAggregation(nn.Module):\n    def __init__(self, in_channel):\n        super(CurveAggregation, self).__init__()\n        self.in_channel = in_channel\n        mid_feature = in_channel // 2\n        self.conva = nn.Conv1d(in_channel,\n                               mid_feature,\n                               kernel_size=1,\n                               bias=False)\n        self.convb = nn.Conv1d(in_channel,\n                               mid_feature,\n                               kernel_size=1,\n                            
   bias=False)\n        self.convc = nn.Conv1d(in_channel,\n                               mid_feature,\n                               kernel_size=1,\n                               bias=False)\n        self.convn = nn.Conv1d(mid_feature,\n                               mid_feature,\n                               kernel_size=1,\n                               bias=False)\n        self.convl = nn.Conv1d(mid_feature,\n                               mid_feature,\n                               kernel_size=1,\n                               bias=False)\n        self.convd = nn.Sequential(\n            nn.Conv1d(mid_feature * 2,\n                      in_channel,\n                      kernel_size=1,\n                      bias=False),\n            nn.BatchNorm1d(in_channel))\n        self.line_conv_att = nn.Conv2d(in_channel,\n                                       1,\n                                       kernel_size=1,\n                                       bias=False)\n\n    def forward(self, x, curves):\n        curves_att = self.line_conv_att(curves)  # bs, 1, c_n, c_l\n\n        curver_inter = torch.sum(curves * F.softmax(curves_att, dim=-1), dim=-1)  #bs, c, c_n\n        curves_intra = torch.sum(curves * F.softmax(curves_att, dim=-2), dim=-2)  #bs, c, c_l\n\n        curver_inter = self.conva(curver_inter) # bs, mid, n\n        curves_intra = self.convb(curves_intra) # bs, mid ,n\n\n        x_logits = self.convc(x).transpose(1, 2).contiguous()\n        x_inter = F.softmax(torch.bmm(x_logits, curver_inter), dim=-1) # bs, n, c_n\n        x_intra = F.softmax(torch.bmm(x_logits, curves_intra), dim=-1) # bs, l, c_l\n        \n\n        curver_inter = self.convn(curver_inter).transpose(1, 2).contiguous()\n        curves_intra = self.convl(curves_intra).transpose(1, 2).contiguous()\n\n        x_inter = torch.bmm(x_inter, curver_inter)\n        x_intra = torch.bmm(x_intra, curves_intra)\n\n        curve_features = torch.cat((x_inter, x_intra),dim=-1).transpose(1, 
2).contiguous()\n        x = x + self.convd(curve_features)\n\n        return F.leaky_relu(x, negative_slope=0.2)\n\n\nclass CurveGrouping(nn.Module):\n    def __init__(self, in_channel, k, curve_num, curve_length):\n        super(CurveGrouping, self).__init__()\n        self.curve_num = curve_num\n        self.curve_length = curve_length\n        self.in_channel = in_channel\n        self.k = k\n\n        self.att = nn.Conv1d(in_channel, 1, kernel_size=1, bias=False)\n\n        self.walk = Walk(in_channel, k, curve_num, curve_length)\n\n    def forward(self, x, xyz, idx):\n        # starting point selection in self attention style\n        x_att = torch.sigmoid(self.att(x))\n        x = x * x_att\n\n        _, start_index = torch.topk(x_att,\n                                    self.curve_num,\n                                    dim=2,\n                                    sorted=False)\n        # squeeze(1) rather than squeeze() so the batch dim survives when bs == 1\n        start_index = start_index.squeeze(1).unsqueeze(2)\n\n        curves = self.walk(xyz, x, idx, start_index)  #bs, c, c_n, c_l\n        \n        return curves\n\n\nclass MaskedMaxPool(nn.Module):\n    def __init__(self, npoint, radius, k):\n        super(MaskedMaxPool, self).__init__()\n        self.npoint = npoint\n        self.radius = radius\n        self.k = k\n\n    def forward(self, xyz, features):\n        sub_xyz, neighborhood_features = sample_and_group(self.npoint, self.radius, self.k, xyz, features.transpose(1,2))\n\n        neighborhood_features = neighborhood_features.permute(0, 3, 1, 2).contiguous()\n        sub_features = F.max_pool2d(\n            neighborhood_features, kernel_size=[1, neighborhood_features.shape[3]]\n        )  # bs, c, n, 1\n        sub_features = torch.squeeze(sub_features, -1)  # bs, c, n\n        return sub_xyz, sub_features\n"
  },
  {
    "path": "CurveNet/core/models/walk.py",
    "content": "\"\"\"\n@Author: Tiange Xiang\n@Contact: txia7609@uni.sydney.edu.au\n@File: walk.py\n@Time: 2021/01/21 3:10 PM\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef batched_index_select(input, dim, index):\n\tviews = [input.shape[0]] + \\\n\t\t[1 if i != dim else -1 for i in range(1, len(input.shape))]\n\texpanse = list(input.shape)\n\texpanse[0] = -1\n\texpanse[dim] = -1\n\tindex = index.view(views).expand(expanse)\n\treturn torch.gather(input, dim, index)\n\ndef gumbel_softmax(logits, dim, temperature=1):\n    \"\"\"\n    ST-Gumbel-softmax w/o random Gumbel sampling\n    input: [*, n_class]\n    return: flatten --> [*, n_class], a one-hot vector\n    \"\"\"\n    y = F.softmax(logits / temperature, dim=dim)\n\n    shape = y.size()\n    _, ind = y.max(dim=-1)\n    y_hard = torch.zeros_like(y).view(-1, shape[-1])\n    y_hard.scatter_(1, ind.view(-1, 1), 1)\n    y_hard = y_hard.view(*shape)\n\n    # straight-through estimator: hard one-hot forward, soft gradients backward\n    y_hard = (y_hard - y).detach() + y\n    return y_hard\n\nclass Walk(nn.Module):\n    '''\n    Walk in the cloud\n    '''\n    def __init__(self, in_channel, k, curve_num, curve_length):\n        super(Walk, self).__init__()\n        self.curve_num = curve_num\n        self.curve_length = curve_length\n        self.k = k\n\n        self.agent_mlp = nn.Sequential(\n            nn.Conv2d(in_channel * 2,\n                        1,\n                        kernel_size=1,\n                        bias=False), nn.BatchNorm2d(1))\n        self.momentum_mlp = nn.Sequential(\n            nn.Conv1d(in_channel * 2,\n                        2,\n                        kernel_size=1,\n                        bias=False), nn.BatchNorm1d(2))\n\n    def crossover_suppression(self, cur, neighbor, bn, n, k):\n        # cur: bs*n, 3\n        # neighbor: bs*n, 3, k\n        neighbor = neighbor.detach()\n        cur = cur.unsqueeze(-1).detach()\n        dot = torch.bmm(cur.transpose(1,2), neighbor) # bs*n, 1, k\n        
norm1 = torch.norm(cur, dim=1, keepdim=True)\n        norm2 = torch.norm(neighbor, dim=1, keepdim=True)\n        divider = torch.clamp(norm1 * norm2, min=1e-8)\n        ans = torch.div(dot, divider).squeeze() # bs*n, k\n\n        # normalize to [0, 1]\n        ans = 1. + ans\n        ans = torch.clamp(ans, 0., 1.0)\n\n        return ans.detach()\n\n    def forward(self, xyz, x, adj, cur):\n        bn, c, tot_points = x.size()\n\n        # raw point coordinates\n        xyz = xyz.transpose(1,2).contiguous() # bs, n, 3\n\n        # point features\n        x = x.transpose(1,2).contiguous() # bs, n, c\n\n        flatten_x = x.view(bn * tot_points, -1)\n        batch_offset = torch.arange(0, bn, device=x.device).detach() * tot_points\n\n        # indices of neighbors for the starting points\n        tmp_adj = (adj + batch_offset.view(-1,1,1)).view(adj.size(0)*adj.size(1),-1) #bs, n, k\n    \n        # batch flattened indices for the starting points\n        flatten_cur = (cur + batch_offset.view(-1,1,1)).view(-1)\n\n        curves = []\n\n        # one step at a time\n        for step in range(self.curve_length):\n\n            if step == 0:\n                # get starting point features using flattened indices\n                starting_points = flatten_x[flatten_cur, :].contiguous()\n                pre_feature = starting_points.view(bn, self.curve_num, -1, 1).transpose(1,2) # bs, c, n, 1\n            else:\n                # dynamic momentum\n                cat_feature = torch.cat((cur_feature.squeeze(-1), pre_feature.squeeze(-1)),dim=1)\n                att_feature = F.softmax(self.momentum_mlp(cat_feature),dim=1).view(bn, 1, self.curve_num, 2) # bs, 1, n, 2\n                cat_feature = torch.cat((cur_feature, pre_feature),dim=-1) # bs, c, n, 2\n                \n                # update curve descriptor\n                pre_feature = torch.sum(cat_feature * att_feature, dim=-1, keepdim=True) # bs, c, n, 1\n                pre_feature_cos = pre_feature.transpose(1,2).contiguous().view(bn * self.curve_num, -1)\n\n            pick_idx = tmp_adj[flatten_cur] # bs*n, k\n            \n            # get the neighbors of current points\n            pick_values = flatten_x[pick_idx.view(-1),:]\n\n            # reshape to fit crossover suppression below\n            pick_values_cos = pick_values.view(bn * self.curve_num, self.k, c)\n            pick_values = pick_values_cos.view(bn, self.curve_num, self.k, c)\n            pick_values_cos = pick_values_cos.transpose(1,2).contiguous()\n            \n            pick_values = pick_values.permute(0,3,1,2) # bs, c, n, k\n\n            pre_feature_expand = pre_feature.expand_as(pick_values)\n            \n            # concat current point features with curve descriptors\n            pre_feature_expand = torch.cat((pick_values, pre_feature_expand),dim=1)\n            \n            # which node to pick next?\n            pre_feature_expand = self.agent_mlp(pre_feature_expand) # bs, 1, n, k\n\n            if step != 0:\n                # crossover suppression\n                d = self.crossover_suppression(cur_feature_cos - pre_feature_cos,\n                                               pick_values_cos - cur_feature_cos.unsqueeze(-1), \n                                               bn, self.curve_num, self.k)\n                d = d.view(bn, self.curve_num, self.k).unsqueeze(1) # bs, 1, n, k\n                pre_feature_expand = torch.mul(pre_feature_expand, d)\n\n            pre_feature_expand = gumbel_softmax(pre_feature_expand, -1) #bs, 1, n, k\n\n            cur_feature = torch.sum(pick_values * pre_feature_expand, dim=-1, keepdim=True) # bs, c, n, 1\n\n            cur_feature_cos = cur_feature.transpose(1,2).contiguous().view(bn * self.curve_num, c)\n\n            cur = torch.argmax(pre_feature_expand, dim=-1).view(-1, 1) # bs * n, 1\n\n            flatten_cur = batched_index_select(pick_idx, 1, cur).squeeze() # bs * n\n\n            # collect curve progress\n     
       curves.append(cur_feature)\n\n        return torch.cat(curves,dim=-1)\n"
  },
  {
    "path": "CurveNet/core/util.py",
    "content": "\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: util\n@Time: 4/5/19 3:47 PM\n\"\"\"\n\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1)\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n\n\nclass IOStream():\n    def __init__(self, path):\n        self.f = open(path, 'a')\n\n    def cprint(self, text):\n        print(text)\n        self.f.write(text+'\\n')\n        self.f.flush()\n\n    def close(self):\n        self.f.close()\n"
  },
  {
    "path": "GDANet/README.md",
    "content": "# Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud. \nThis repository is built for the paper:\n\n__Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud (_AAAI2021_)__ [[arXiv](https://arxiv.org/abs/2012.10921)]\n<br>\nby [Mutian Xu*](https://mutianxu.github.io/), [Junhao Zhang*](https://junhaozhang98.github.io/), Zhipeng Zhou, Mingye Xu, Xiaojuan Qi and Yu Qiao.\n\n\n## Overview\nGeometry-Disentangled Attention Network for 3D object point cloud classification and segmentation (GDANet):\n<img src = './imgs/GDANet.jpg' width = 800>\n\n## Citation\nIf you find the code or trained models useful, please consider citing:\n\n    @misc{xu2021learning,\n      title={Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud}, \n      author={Mutian Xu and Junhao Zhang and Zhipeng Zhou and Mingye Xu and Xiaojuan Qi and Yu Qiao},\n      year={2021},\n      eprint={2012.10921},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n    }\n\n\n## Installation\n\n\n### Requirements\n* Linux (tested on Ubuntu 14.04/16.04)\n* Python 3.5+\n* PyTorch 1.0+\n\n### Dataset\n* Create the folder to symlink the data later:\n    \n    `mkdir -p data`\n    \n* __Object Classification__: \n\n    Download and unzip [ModelNet40](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) (415M), then symlink the path to it as follows (you can alternatively modify the path [here](https://github.com/mutianxu/GDANet/blob/main/util/data_util.py#L12)):\n    \n    `ln -s /path to modelnet40/modelnet40_ply_hdf5_2048 data`\n    \n* __Shape Part Segmentation__:\n    \n    Download and unzip [ShapeNet Part](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) (674M), then symlink the path to it as follows (you can alternatively modify the path 
[here](https://github.com/mutianxu/GDANet/blob/main/util/data_util.py#L70)):\n    \n    `ln -s /path to shapenet part/shapenetcore_partanno_segmentation_benchmark_v0_normal data`\n\n## Usage\n\n### Object Classification on ModelNet40\n* Train:\n \n    `python main_cls.py`\n\n* Test:\n    * Run the voting evaluation script; with voting you should reach an accuracy of 93.8%:\n    \n        `python voting_eval_modelnet.py --model_path 'pretrained/GDANet_ModelNet40_93.4.t7'`\n    \n    * You can also directly evaluate our pretrained model without voting to get an accuracy of 93.4%:\n    \n        `python main.py --eval True --model_path 'pretrained/GDANet_ModelNet40_93.4.t7'`\n    \n### Shape Part Segmentation on ShapeNet Part\n* Train:\n    * Training from scratch:\n\n        `python main_ptseg.py`\n   \n    * If you want to resume training from a checkpoint, specify `resume` in the args:\n\n        `python main_ptseg.py --resume True`\n\n* Test:\n\n    You can choose to test the model with the best instance mIoU, class mIoU or accuracy, by specifying `model_type` in the args:\n    \n    * `python main_ptseg.py --model_type 'ins_iou'` (best instance mIoU, default)\n    \n    * `python main_ptseg.py --model_type 'cls_iou'` (best class mIoU)\n    \n    * `python main_ptseg.py --model_type 'acc'` (best accuracy)\n\n\n## Other information\n\nPlease contact Mutian Xu (mino1018@outlook.com) or Junhao Zhang (junhaozhang98@gmail.com) for further discussion.\n\n## Acknowledgement\nThis code is partially borrowed from [DGCNN](https://github.com/WangYueFt/dgcnn) and [PointNet++](https://github.com/charlesq34/pointnet2)."
  },
  {
    "path": "GDANet/model/GDANet_cls.py",
    "content": "import torch.nn as nn\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom .util.GDANet_util import local_operator, GDM, SGCAM\r\n\r\n\r\nclass GDANET(nn.Module):\r\n    def __init__(self, number_class=40):\r\n        super(GDANET, self).__init__()\r\n\r\n        self.bn1 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn11 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn12 = nn.BatchNorm1d(64, momentum=0.1)\r\n\r\n        self.bn2 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn21 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn22 = nn.BatchNorm1d(64, momentum=0.1)\r\n\r\n        self.bn3 = nn.BatchNorm2d(128, momentum=0.1)\r\n        self.bn31 = nn.BatchNorm2d(128, momentum=0.1)\r\n        self.bn32 = nn.BatchNorm1d(128, momentum=0.1)\r\n\r\n        self.bn4 = nn.BatchNorm1d(512, momentum=0.1)\r\n\r\n        self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=True),\r\n                                   self.bn1)\r\n        self.conv11 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),\r\n                                    self.bn11)\r\n        self.conv12 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),\r\n                                    self.bn12)\r\n\r\n        self.conv2 = nn.Sequential(nn.Conv2d(67 * 2, 64, kernel_size=1, bias=True),\r\n                                   self.bn2)\r\n        self.conv21 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),\r\n                                    self.bn21)\r\n        self.conv22 = nn.Sequential(nn.Conv1d(64 * 2, 64, kernel_size=1, bias=True),\r\n                                    self.bn22)\r\n\r\n        self.conv3 = nn.Sequential(nn.Conv2d(131 * 2, 128, kernel_size=1, bias=True),\r\n                                   self.bn3)\r\n        self.conv31 = nn.Sequential(nn.Conv2d(128, 128, kernel_size=1, bias=True),\r\n                                    self.bn31)\r\n        self.conv32 = nn.Sequential(nn.Conv1d(128, 128, 
kernel_size=1, bias=True),\r\n                                    self.bn32)\r\n\r\n        self.conv4 = nn.Sequential(nn.Conv1d(256, 512, kernel_size=1, bias=True),\r\n                                   self.bn4)\r\n\r\n        self.SGCAM_1s = SGCAM(64)\r\n        self.SGCAM_1g = SGCAM(64)\r\n        self.SGCAM_2s = SGCAM(64)\r\n        self.SGCAM_2g = SGCAM(64)\r\n\r\n        self.linear1 = nn.Linear(1024, 512, bias=True)\r\n        self.bn6 = nn.BatchNorm1d(512)\r\n        self.dp1 = nn.Dropout(p=0.4)\r\n        self.linear2 = nn.Linear(512, 256, bias=True)\r\n        self.bn7 = nn.BatchNorm1d(256)\r\n        self.dp2 = nn.Dropout(p=0.4)\r\n        self.linear3 = nn.Linear(256, number_class, bias=True)\r\n\r\n    def forward(self, x):\r\n        B, C, N = x.size()\r\n        ###############\r\n        \"\"\"block 1\"\"\"\r\n        # Local operator:\r\n        x1 = local_operator(x, k=30)\r\n        x1 = F.relu(self.conv1(x1))\r\n        x1 = F.relu(self.conv11(x1))\r\n        x1 = x1.max(dim=-1, keepdim=False)[0]\r\n\r\n        # Geometry-Disentangle Module:\r\n        x1s, x1g = GDM(x1, M=256)\r\n\r\n        # Sharp-Gentle Complementary Attention Module:\r\n        y1s = self.SGCAM_1s(x1, x1s.transpose(2, 1))\r\n        y1g = self.SGCAM_1g(x1, x1g.transpose(2, 1))\r\n        z1 = torch.cat([y1s, y1g], 1)\r\n        z1 = F.relu(self.conv12(z1))\r\n        ###############\r\n        \"\"\"block 2\"\"\"\r\n        x1t = torch.cat((x, z1), dim=1)\r\n        x2 = local_operator(x1t, k=30)\r\n        x2 = F.relu(self.conv2(x2))\r\n        x2 = F.relu(self.conv21(x2))\r\n        x2 = x2.max(dim=-1, keepdim=False)[0]\r\n\r\n        x2s, x2g = GDM(x2, M=256)\r\n\r\n        y2s = self.SGCAM_2s(x2, x2s.transpose(2, 1))\r\n        y2g = self.SGCAM_2g(x2, x2g.transpose(2, 1))\r\n        z2 = torch.cat([y2s, y2g], 1)\r\n        z2 = F.relu(self.conv22(z2))\r\n        ###############\r\n        x2t = torch.cat((x1t, z2), dim=1)\r\n        x3 = local_operator(x2t, k=30)\r\n   
     x3 = F.relu(self.conv3(x3))\r\n        x3 = F.relu(self.conv31(x3))\r\n        x3 = x3.max(dim=-1, keepdim=False)[0]\r\n        z3 = F.relu(self.conv32(x3))\r\n        ###############\r\n        x = torch.cat((z1, z2, z3), dim=1)\r\n        x = F.relu(self.conv4(x))\r\n        x11 = F.adaptive_max_pool1d(x, 1).view(B, -1)\r\n        x22 = F.adaptive_avg_pool1d(x, 1).view(B, -1)\r\n        x = torch.cat((x11, x22), 1)\r\n\r\n        x = F.relu(self.bn6(self.linear1(x)))\r\n        x = self.dp1(x)\r\n        x = F.relu(self.bn7(self.linear2(x)))\r\n        x = self.dp2(x)\r\n        x = self.linear3(x)\r\n        return x\r\n"
  },
  {
    "path": "GDANet/model/GDANet_ptseg.py",
    "content": "import torch.nn as nn\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom util.GDANet_util import local_operator_withnorm, local_operator, GDM, SGCAM\r\n\r\n\r\nclass GDANet(nn.Module):\r\n    def __init__(self, num_classes):\r\n        super(GDANet, self).__init__()\r\n\r\n        self.bn1 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn11 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn12 = nn.BatchNorm1d(64, momentum=0.1)\r\n\r\n        self.bn2 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn21 = nn.BatchNorm2d(64, momentum=0.1)\r\n        self.bn22 = nn.BatchNorm1d(64, momentum=0.1)\r\n\r\n        self.bn3 = nn.BatchNorm2d(128, momentum=0.1)\r\n        self.bn31 = nn.BatchNorm2d(128, momentum=0.1)\r\n        self.bn32 = nn.BatchNorm1d(128, momentum=0.1)\r\n\r\n        self.bn4 = nn.BatchNorm1d(512, momentum=0.1)\r\n        self.bnc = nn.BatchNorm1d(64, momentum=0.1)\r\n\r\n        self.bn5 = nn.BatchNorm1d(256, momentum=0.1)\r\n        self.bn6 = nn.BatchNorm1d(256, momentum=0.1)\r\n        self.bn7 = nn.BatchNorm1d(128, momentum=0.1)\r\n\r\n        self.conv1 = nn.Sequential(nn.Conv2d(9, 64, kernel_size=1, bias=True),\r\n                                   self.bn1)\r\n        self.conv11 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),\r\n                                    self.bn11)\r\n        self.conv12 = nn.Sequential(nn.Conv1d(64*2, 64, kernel_size=1, bias=True),\r\n                                    self.bn12)\r\n\r\n        self.conv2 = nn.Sequential(nn.Conv2d(67 * 2, 64, kernel_size=1, bias=True),\r\n                                   self.bn2)\r\n        self.conv21 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1, bias=True),\r\n                                    self.bn21)\r\n        self.conv22 = nn.Sequential(nn.Conv1d(64*2, 64, kernel_size=1, bias=True),\r\n                                    self.bn22)\r\n\r\n        self.conv3 = nn.Sequential(nn.Conv2d(131 * 2, 128, kernel_size=1, 
bias=True),\r\n                                   self.bn3)\r\n        self.conv31 = nn.Sequential(nn.Conv2d(128, 128, kernel_size=1, bias=True),\r\n                                    self.bn31)\r\n        self.conv32 = nn.Sequential(nn.Conv1d(128, 128, kernel_size=1, bias=True),\r\n                                    self.bn32)\r\n\r\n        self.conv4 = nn.Sequential(nn.Conv1d(256, 512, kernel_size=1, bias=True),\r\n                                   self.bn4)\r\n        self.convc = nn.Sequential(nn.Conv1d(16, 64, kernel_size=1, bias=True),\r\n                                   self.bnc)\r\n\r\n        self.conv5 = nn.Sequential(nn.Conv1d(256 + 512 + 64, 256, kernel_size=1, bias=True),\r\n                                   self.bn5)\r\n        self.dp1 = nn.Dropout(0.4)\r\n        self.conv6 = nn.Sequential(nn.Conv1d(256, 256, kernel_size=1, bias=True),\r\n                                   self.bn6)\r\n        self.dp2 = nn.Dropout(0.4)\r\n        self.conv7 = nn.Sequential(nn.Conv1d(256, 128, kernel_size=1, bias=True),\r\n                                   self.bn7)\r\n        self.conv8 = nn.Conv1d(128, num_classes, kernel_size=1, bias=True)\r\n\r\n        self.SGCAM_1s = SGCAM(64)\r\n        self.SGCAM_1g = SGCAM(64)\r\n        self.SGCAM_2s = SGCAM(64)\r\n        self.SGCAM_2g = SGCAM(64)\r\n\r\n    def forward(self, x, norm_plt, cls_label):\r\n        B, C, N = x.size()\r\n        ###############\r\n        \"\"\"block 1\"\"\"\r\n        x1 = local_operator_withnorm(x, norm_plt, k=30)\r\n        x1 = F.relu(self.conv1(x1))\r\n        x1 = F.relu(self.conv11(x1))\r\n        x1 = x1.max(dim=-1, keepdim=False)[0]\r\n        x1h, x1l = GDM(x1, M=512)\r\n\r\n        x1h = self.SGCAM_1s(x1, x1h.transpose(2, 1))\r\n        x1l = self.SGCAM_1g(x1, x1l.transpose(2, 1))\r\n        x1 = torch.cat([x1h, x1l], 1)\r\n        x1 = F.relu(self.conv12(x1))\r\n        ###############\r\n        \"\"\"block 2\"\"\"\r\n        x1t = torch.cat((x, x1), dim=1)\r\n        x2 = 
local_operator(x1t, k=30)\r\n        x2 = F.relu(self.conv2(x2))\r\n        x2 = F.relu(self.conv21(x2))\r\n        x2 = x2.max(dim=-1, keepdim=False)[0]\r\n        x2h, x2l = GDM(x2, M=512)\r\n\r\n        x2h = self.SGCAM_2s(x2, x2h.transpose(2, 1))\r\n        x2l = self.SGCAM_2g(x2, x2l.transpose(2, 1))\r\n        x2 = torch.cat([x2h, x2l], 1)\r\n        x2 = F.relu(self.conv22(x2))\r\n        ###############\r\n        x2t = torch.cat((x1t, x2), dim=1)\r\n        x3 = local_operator(x2t, k=30)\r\n        x3 = F.relu(self.conv3(x3))\r\n        x3 = F.relu(self.conv31(x3))\r\n        x3 = x3.max(dim=-1, keepdim=False)[0]\r\n        x3 = F.relu(self.conv32(x3))\r\n        ###############\r\n        xx = torch.cat((x1, x2, x3), dim=1)\r\n\r\n        xc = F.relu(self.conv4(xx))\r\n        xc = F.adaptive_max_pool1d(xc, 1).view(B, -1)\r\n\r\n        cls_label = cls_label.view(B, 16, 1)\r\n        cls_label = F.relu(self.convc(cls_label))\r\n        cls = torch.cat((xc.view(B, 512, 1), cls_label), dim=1)\r\n        cls = cls.repeat(1, 1, N)\r\n\r\n        x = torch.cat((xx, cls), dim=1)\r\n        x = F.relu(self.conv5(x))\r\n        x = self.dp1(x)\r\n        x = F.relu(self.conv6(x))\r\n        x = self.dp2(x)\r\n        x = F.relu(self.conv7(x))\r\n        x = self.conv8(x)\r\n        x = F.log_softmax(x, dim=1)\r\n        x = x.permute(0, 2, 1)  # b,n,50\r\n\r\n        return x\r\n\r\n"
  },
  {
    "path": "GDANet/model/__init__.py",
    "content": ""
  },
  {
    "path": "GDANet/model/util/GDANet_util.py",
    "content": "import torch\nfrom torch import nn\n\n\ndef knn(x, k):\n    inner = -2*torch.matmul(x.transpose(2, 1), x)\n    xx = torch.sum(x**2, dim=1, keepdim=True)\n    pairwise_distance = -xx - inner - xx.transpose(2, 1)\n \n    idx = pairwise_distance.topk(k=k, dim=-1)[1]   # (batch_size, num_points, k)\n    return idx, pairwise_distance\n\n\ndef local_operator(x, k):\n    batch_size = x.size(0)\n    num_points = x.size(2)\n    x = x.view(batch_size, -1, num_points)\n    idx, _ = knn(x, k=k)\n    device = torch.device('cuda')\n\n    idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points\n\n    idx = idx + idx_base\n\n    idx = idx.view(-1)\n\n    _, num_dims, _ = x.size()\n\n    x = x.transpose(2, 1).contiguous()\n\n    neighbor = x.view(batch_size * num_points, -1).contiguous()[idx, :]\n\n    neighbor = neighbor.view(batch_size, num_points, k, num_dims).contiguous()\n\n    x = x.view(batch_size, num_points, 1, num_dims).contiguous().repeat(1, 1, k, 1)\n\n    feature = torch.cat((neighbor-x, neighbor), dim=3).permute(0, 3, 1, 2).contiguous()  # local and global all in\n\n    return feature\n\n\ndef local_operator_withnorm(x, norm_plt, k):\n    batch_size = x.size(0)\n    num_points = x.size(2)\n    x = x.view(batch_size, -1, num_points)\n    norm_plt = norm_plt.view(batch_size, -1, num_points)\n    idx, _ = knn(x, k=k)  # (batch_size, num_points, k)\n    device = torch.device('cuda')\n\n    idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points\n\n    idx = idx + idx_base\n\n    idx = idx.view(-1)\n\n    _, num_dims, _ = x.size()\n\n    x = x.transpose(2, 1).contiguous()\n    norm_plt = norm_plt.transpose(2, 1).contiguous()\n\n    neighbor = x.view(batch_size * num_points, -1)[idx, :]\n    neighbor_norm = norm_plt.view(batch_size * num_points, -1)[idx, :]\n\n    neighbor = neighbor.view(batch_size, num_points, k, num_dims)\n    neighbor_norm = neighbor_norm.view(batch_size, num_points, k, num_dims)\n\n    
x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)\n\n    feature = torch.cat((neighbor-x, neighbor, neighbor_norm), dim=3).permute(0, 3, 1, 2)  # 3c\n\n    return feature\n\n\ndef GDM(x, M):\n    \"\"\"\n    Geometry-Disentangle Module\n    M: number of disentangled points in both sharp and gentle variation components\n    \"\"\"\n    k = 64  # number of neighbors to decide the range of j in Eq.(5)\n    tau = 0.2  # threshold in Eq.(2)\n    sigma = 2  # parameters of f (Gaussian function in Eq.(2))\n    ###############\n    \"\"\"Graph Construction:\"\"\"\n    device = torch.device('cuda')\n    batch_size = x.size(0)\n    num_points = x.size(2)\n    x = x.view(batch_size, -1, num_points)\n\n    idx, p = knn(x, k=k)  # p: -[(x1-x2)^2+...]\n\n    # here we add a tau\n    p1 = torch.abs(p)\n    p1 = torch.sqrt(p1)\n    mask = p1 < tau\n\n    # here we add a sigma\n    p = p / (sigma * sigma)\n    w = torch.exp(p)  # b,n,n\n    w = torch.mul(mask.float(), w)\n\n    b = 1/torch.sum(w, dim=1)\n    b = b.reshape(batch_size, num_points, 1).repeat(1, 1, num_points)\n    c = torch.eye(num_points, num_points, device=device)\n    c = c.expand(batch_size, num_points, num_points)\n    D = b * c  # b,n,n\n\n    A = torch.matmul(D, w)  # normalized adjacency matrix A_hat\n\n    # Get Aij in a local area:\n    idx2 = idx.view(batch_size * num_points, -1)\n    idx_base2 = torch.arange(0, batch_size * num_points, device=device).view(-1, 1) * num_points\n    idx2 = idx2 + idx_base2\n\n    idx2 = idx2.reshape(batch_size * num_points, k)[:, 1:k]\n    idx2 = idx2.reshape(batch_size * num_points * (k - 1))\n    idx2 = idx2.view(-1)\n\n    A = A.view(-1).contiguous()\n    A = A[idx2].reshape(batch_size, num_points, k - 1).contiguous()  # Aij: b,n,k\n    ###############\n    \"\"\"Disentangling Point Clouds into Sharp(xs) and Gentle(xg) Variation Components:\"\"\"\n    idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points\n    idx = idx + 
idx_base\n    idx = idx.reshape(batch_size * num_points, k)[:, 1:k]\n    idx = idx.reshape(batch_size * num_points * (k - 1))\n\n    _, num_dims, _ = x.size()\n\n    x = x.transpose(2, 1).contiguous()  # b,n,c\n    neighbor = x.view(batch_size * num_points, -1).contiguous()[idx, :]\n    neighbor = neighbor.view(batch_size, num_points, k - 1, num_dims).contiguous()  # b,n,k,c\n    A = A.reshape(batch_size, num_points, k - 1, 1).contiguous()  # b,n,k,1\n    n = A.mul(neighbor)  # b,n,k,c\n    n = torch.sum(n, dim=2)  # b,n,c\n\n    pai = torch.norm(x - n, dim=-1).pow(2)  # Eq.(5)\n    pais = pai.topk(k=M, dim=-1)[1]  # first M points as the sharp variation component\n    paig = (-pai).topk(k=M, dim=-1)[1]  # last M points as the gentle variation component\n\n    pai_base = torch.arange(0, batch_size, device=device).view(-1, 1) * num_points\n    indices = (pais + pai_base).view(-1)\n    indiceg = (paig + pai_base).view(-1)\n\n    xs = x.view(batch_size * num_points, -1).contiguous()[indices, :]\n    xg = x.view(batch_size * num_points, -1).contiguous()[indiceg, :]\n\n    xs = xs.view(batch_size, M, -1).contiguous()  # b,M,c\n    xg = xg.view(batch_size, M, -1).contiguous()  # b,M,c\n\n    return xs, xg\n\n\nclass SGCAM(nn.Module):\n    \"\"\"Sharp-Gentle Complementary Attention Module:\"\"\"\n    def __init__(self, in_channels, inter_channels=None, bn_layer=True):\n        super(SGCAM, self).__init__()\n\n        self.in_channels = in_channels\n        self.inter_channels = inter_channels\n\n        if self.inter_channels is None:\n            self.inter_channels = in_channels // 2\n            if self.inter_channels == 0:\n                self.inter_channels = 1\n\n        conv_nd = nn.Conv1d\n        bn = nn.BatchNorm1d\n\n        self.g = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,\n                         kernel_size=1, stride=1, padding=0)\n\n        if bn_layer:\n            self.W = nn.Sequential(\n                
conv_nd(in_channels=self.inter_channels, out_channels=self.in_channels,\n                        kernel_size=1, stride=1, padding=0),\n                bn(self.in_channels)\n            )\n            nn.init.constant_(self.W[1].weight, 0)\n            nn.init.constant_(self.W[1].bias, 0)\n        else:\n            self.W = conv_nd(in_channels=self.inter_channels, out_channels=self.in_channels,\n                             kernel_size=1, stride=1, padding=0)\n            nn.init.constant_(self.W.weight, 0)\n            nn.init.constant_(self.W.bias, 0)\n\n        self.theta = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,\n                             kernel_size=1, stride=1, padding=0)\n\n        self.phi = conv_nd(in_channels=self.in_channels, out_channels=self.inter_channels,\n                           kernel_size=1, stride=1, padding=0)\n\n    def forward(self, x, x_2):\n        batch_size = x.size(0)\n\n        g_x = self.g(x_2).view(batch_size, self.inter_channels, -1).contiguous()\n        g_x = g_x.permute(0, 2, 1).contiguous()\n\n        theta_x = self.theta(x).view(batch_size, self.inter_channels, -1).contiguous()\n        theta_x = theta_x.permute(0, 2, 1).contiguous()\n        phi_x = self.phi(x_2).view(batch_size, self.inter_channels, -1).contiguous()\n        W = torch.matmul(theta_x, phi_x)  # Attention Matrix\n        N = W.size(-1)\n        W_div_C = W / N\n\n        y = torch.matmul(W_div_C, g_x)\n        y = y.permute(0, 2, 1).contiguous()\n        y = y.view(batch_size, self.inter_channels, *x.size()[2:]).contiguous()\n        W_y = self.W(y)\n        y = W_y + x\n\n        return y\n\n"
  },
  {
    "path": "GDANet/model/util/__init__.py",
    "content": ""
  },
  {
    "path": "GDANet/model/util/data_util.py",
    "content": "import glob\nimport h5py\nimport numpy as np\nfrom torch.utils.data import Dataset\nimport os\nimport json\n\n\ndef load_data(partition):\n    all_data = []\n    all_label = []\n    for h5_name in glob.glob('./data/modelnet40_ply_hdf5_2048/ply_data_%s*.h5' % partition):\n        f = h5py.File(h5_name)\n        data = f['data'][:].astype('float32')\n        label = f['label'][:].astype('int64')\n        f.close()\n        all_data.append(data)\n        all_label.append(label)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    return all_data, all_label\n\n\ndef pc_normalize(pc):\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))\n    pc = pc / m\n    return pc\n\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n       \n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\n\ndef jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\n# =========== ModelNet40 =================\nclass ModelNet40(Dataset):\n    def __init__(self, num_points, partition='train'):\n        self.data, self.label = load_data(partition)\n        self.num_points = num_points\n        self.partition = partition  # Here the new given partition will cover the 'train'\n\n    def __getitem__(self, item):  # indice of the pts or label\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.partition == 'train':\n            # pointcloud = pc_normalize(pointcloud)  # you can try to add it or not to train our model\n            pointcloud = translate_pointcloud(pointcloud)\n            
np.random.shuffle(pointcloud)  # shuffle the order of pts\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\n# =========== ShapeNet Part =================\nclass PartNormalDataset(Dataset):\n    def __init__(self, npoints=2500, split='train', normalize=False):\n        self.npoints = npoints\n        self.root = './data/shapenetcore_partanno_segmentation_benchmark_v0_normal'\n        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')\n        self.cat = {}\n        self.normalize = normalize\n\n        with open(self.catfile, 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = ls[1]\n        self.cat = {k: v for k, v in self.cat.items()}\n\n        self.meta = {}\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:\n            train_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:\n            val_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:\n            test_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        for item in self.cat:\n            self.meta[item] = []\n            dir_point = os.path.join(self.root, self.cat[item])\n            fns = sorted(os.listdir(dir_point))\n\n            if split == 'trainval':\n                fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]\n            elif split == 'train':\n                fns = [fn for fn in fns if fn[0:-4] in train_ids]\n            elif split == 'val':\n                fns = [fn for fn in fns if fn[0:-4] in val_ids]\n            elif split == 'test':\n                fns = [fn for fn in fns if fn[0:-4] in test_ids]\n            else:\n                
print('Unknown split: %s. Exiting..' % (split))\n                exit(-1)\n\n            for fn in fns:\n                token = (os.path.splitext(os.path.basename(fn))[0])\n                self.meta[item].append(os.path.join(dir_point, token + '.txt'))\n\n        self.datapath = []\n        for item in self.cat:\n            for fn in self.meta[item]:\n                self.datapath.append((item, fn))\n\n        self.classes = dict(zip(self.cat, range(len(self.cat))))\n        # Mapping from category ('Chair') to a list of int [10,11,12,13] as segmentation labels\n        self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43],\n                            'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46],\n                            'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27],\n                            'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40],\n                            'Chair': [12, 13, 14, 15], 'Knife': [22, 23]}\n\n        self.cache = {}  # from index to (point_set, cls, seg) tuple\n        self.cache_size = 20000\n\n    def __getitem__(self, index):\n        if index in self.cache:\n            point_set, normal, seg, cls = self.cache[index]\n        else:\n            fn = self.datapath[index]\n            cat = self.datapath[index][0]\n            cls = self.classes[cat]\n            cls = np.array([cls]).astype(np.int32)\n            data = np.loadtxt(fn[1]).astype(np.float32)\n            point_set = data[:, 0:3]\n            normal = data[:, 3:6]\n            seg = data[:, -1].astype(np.int32)\n            if len(self.cache) < self.cache_size:\n                self.cache[index] = (point_set, normal, seg, cls)\n\n        if self.normalize:\n            point_set = pc_normalize(point_set)\n\n        choice = np.random.choice(len(seg), self.npoints, replace=True)\n\n        # resample\n        # 
note that the number of points in some point clouds is less than 2048, hence random.choice with replacement\n        # remember to use the same seed during train and test to get a stable result\n        point_set = point_set[choice, :]\n        seg = seg[choice]\n        normal = normal[choice, :]\n\n        return point_set, cls, seg, normal\n\n    def __len__(self):\n        return len(self.datapath)\n\n\nif __name__ == '__main__':\n    train = ModelNet40(1024)\n    test = ModelNet40(1024, 'test')\n    for data, label in train:\n        print(data.shape)\n        print(label.shape)\n"
  },
  {
    "path": "GDANet/model/util/util.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn.functional as F\n\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1) # gold is the groudtruth label in the dataloader\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)  # the number of feature_dim of the ouput, which is output channels\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n\n\n# create a file and write the text into it:\nclass IOStream():\n    def __init__(self, path):\n        self.f = open(path, 'a')\n\n    def cprint(self, text):\n        print(text)\n        self.f.write(text+'\\n')\n        self.f.flush()\n\n    def close(self):\n        self.f.close()\n\n\ndef to_categorical(y, num_classes):\n    \"\"\" 1-hot encodes a tensor \"\"\"\n    new_y = torch.eye(num_classes)[y.cpu().data.numpy(),]\n    if (y.is_cuda):\n        return new_y.cuda(non_blocking=True)\n    return new_y\n\n\ndef compute_overall_iou(pred, target, num_classes):\n    shape_ious = []\n    pred = pred.max(dim=2)[1]    # (batch_size, num_points)  the pred_class_idx of each point in each sample\n    pred_np = pred.cpu().data.numpy()\n\n    target_np = target.cpu().data.numpy()\n    for shape_idx in range(pred.size(0)):   # sample_idx\n        part_ious = []\n        for part in range(num_classes):   # class_idx! no matter which category, only consider all part_classes of all categories, check all 50 classes\n            # for target, each point has a class no matter which category owns this point! 
also 50 classes.\n            # returns 1 only when both pred and target belong to this class (correct prediction):\n            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))\n            # returns 1 when either pred or target belongs to this class:\n            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))\n\n            n_target = np.sum(target_np[shape_idx] == part)  # renamed from F to avoid shadowing torch.nn.functional\n\n            if n_target != 0:\n                iou = I / float(U)    # iou across all points for this class\n                part_ious.append(iou)   # append the iou of this class\n        shape_ious.append(np.mean(part_ious))   # append the average iou across all classes of this sample (sample level)\n    return shape_ious   # [batch_size]\n"
  },
  {
    "path": "LICENSE",
    "content": "BSD 3-Clause License\n\nCopyright (c) 2021, University of Michigan\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "PCT_Pytorch/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2021 Strawberry-Eat-Mango\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "PCT_Pytorch/README.md",
    "content": "## PCT: Point Cloud Transformer\nThis is a Pytorch implementation of PCT: Point Cloud Transformer.\n\nPaper link: https://arxiv.org/pdf/2012.09688.pdf\n\n### Requirements\npython >= 3.7\n\npytorch >= 1.6\n\nh5py\n\nscikit-learn\n\nand\n\n```shell script\npip install pointnet2_ops_lib/.\n```\nThe code is from https://github.com/erikwijmans/Pointnet2_PyTorch https://github.com/WangYueFt/dgcnn and https://github.com/MenghaoGuo/PCT\n\n### Models\nWe get an accuracy of 93.2% on the ModelNet40(http://modelnet.cs.princeton.edu/) validation dataset\n\nThe path of the model is in ./checkpoints/best/models/model.t7\n\n### Example training and testing\n```shell script\n# train\npython main.py --exp_name=train --num_points=1024 --use_sgd=True --batch_size 32 --epochs 250 --lr 0.0001\n\n# test\npython main.py --exp_name=test --num_points=1024 --use_sgd=True --eval=True --model_path=checkpoints/best/models/model.t7 --test_batch_size 8\n\n```\n\n### Citation\nIf it is helpful for your work, please cite this paper:\n```latex\n@misc{guo2020pct,\n      title={PCT: Point Cloud Transformer}, \n      author={Meng-Hao Guo and Jun-Xiong Cai and Zheng-Ning Liu and Tai-Jiang Mu and Ralph R. Martin and Shi-Min Hu},\n      year={2020},\n      eprint={2012.09688},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n```\n"
  },
  {
    "path": "PCT_Pytorch/data.py",
    "content": "import os\nimport glob\nimport h5py\nimport numpy as np\nfrom torch.utils.data import Dataset\n\ndef download():\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, 'data')\n    if not os.path.exists(DATA_DIR):\n        os.mkdir(DATA_DIR)\n    if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n        www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n        zipfile = os.path.basename(www)\n        os.system('wget %s; unzip %s' % (www, zipfile))\n        os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n        os.system('rm %s' % (zipfile))\n\ndef load_data(partition):\n    download()\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, 'data')\n    all_data = []\n    all_label = []\n    for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048', 'ply_data_%s*.h5'%partition)):\n        f = h5py.File(h5_name)\n        data = f['data'][:].astype('float32')\n        label = f['label'][:].astype('int64')\n        f.close()\n        all_data.append(data)\n        all_label.append(label)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    return all_data, all_label\n\ndef random_point_dropout(pc, max_dropout_ratio=0.875):\n    ''' batch_pc: BxNx3 '''\n    # for b in range(batch_pc.shape[0]):\n    dropout_ratio = np.random.random()*max_dropout_ratio # 0~0.875    \n    drop_idx = np.where(np.random.random((pc.shape[0]))<=dropout_ratio)[0]\n    # print ('use random drop', len(drop_idx))\n\n    if len(drop_idx)>0:\n        pc[drop_idx,:] = pc[0,:] # set to the first point\n    return pc\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n       \n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    
return translated_pointcloud\n\ndef jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\nclass ModelNet40(Dataset):\n    def __init__(self, num_points, partition='train'):\n        self.data, self.label = load_data(partition)\n        self.num_points = num_points\n        self.partition = partition\n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.partition == 'train':\n            pointcloud = random_point_dropout(pointcloud) # enabled for DGCNN-style training; not part of our method\n            pointcloud = translate_pointcloud(pointcloud)\n            np.random.shuffle(pointcloud)\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\nif __name__ == '__main__':\n    train = ModelNet40(1024)\n    test = ModelNet40(1024, 'test')\n    for data, label in train:\n        print(data.shape)\n        print(label.shape)\n"
  },
  {
    "path": "PCT_Pytorch/main.py",
    "content": "from __future__ import print_function\nimport os\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\nfrom data import ModelNet40\nfrom .model import Pct\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom .util import cal_loss, IOStream\nimport sklearn.metrics as metrics\n\nimport time \n\ndef _init_():\n    if not os.path.exists('checkpoints'):\n        os.makedirs('checkpoints')\n    if not os.path.exists('checkpoints/'+args.exp_name):\n        os.makedirs('checkpoints/'+args.exp_name)\n    if not os.path.exists('checkpoints/'+args.exp_name+'/'+'models'):\n        os.makedirs('checkpoints/'+args.exp_name+'/'+'models')\n    os.system('cp main.py checkpoints'+'/'+args.exp_name+'/'+'main.py.backup')\n    os.system('cp model.py checkpoints' + '/' + args.exp_name + '/' + 'model.py.backup')\n    os.system('cp util.py checkpoints' + '/' + args.exp_name + '/' + 'util.py.backup')\n    os.system('cp data.py checkpoints' + '/' + args.exp_name + '/' + 'data.py.backup')\n\ndef train(args, io):\n    train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=8,\n                            batch_size=args.batch_size, shuffle=True, drop_last=True)\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,\n                            batch_size=args.test_batch_size, shuffle=True, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    model = Pct(args).to(device)\n    print(str(model))\n    model = nn.DataParallel(model)\n\n    if args.use_sgd:\n        print(\"Use SGD\")\n        opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=5e-4)\n    else:\n        print(\"Use Adam\")\n        opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)\n\n    scheduler = CosineAnnealingLR(opt, args.epochs, 
eta_min=args.lr)\n\n    criterion = cal_loss\n    best_test_acc = 0\n\n    for epoch in range(args.epochs):\n        train_loss = 0.0\n        count = 0.0\n        model.train()\n        train_pred = []\n        train_true = []\n        idx = 0\n        total_time = 0.0\n        for data, label in train_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            opt.zero_grad()\n\n            start_time = time.time()\n            logits = model(data)\n            loss = criterion(logits, label)\n            loss.backward()\n            opt.step()\n            end_time = time.time()\n            total_time += (end_time - start_time)\n\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            train_loss += loss.item() * batch_size\n            train_true.append(label.cpu().numpy())\n            train_pred.append(preds.detach().cpu().numpy())\n            idx += 1\n\n        scheduler.step()  # step once per epoch, after the optimizer updates\n        print('train total time is', total_time)\n        train_true = np.concatenate(train_true)\n        train_pred = np.concatenate(train_pred)\n        outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f' % (epoch,\n                                                                                train_loss*1.0/count,\n                                                                                metrics.accuracy_score(\n                                                                                train_true, train_pred),\n                                                                                metrics.balanced_accuracy_score(\n                                                                                train_true, train_pred))\n        io.cprint(outstr)\n\n        ####################\n        # Test\n        ####################\n        test_loss = 0.0\n        count = 
0.0\n        model.eval()\n        test_pred = []\n        test_true = []\n        total_time = 0.0\n        for data, label in test_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            start_time = time.time()\n            logits = model(data)\n            end_time = time.time()\n            total_time += (end_time - start_time)\n            loss = criterion(logits, label)\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            test_loss += loss.item() * batch_size\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n        print ('test total time is', total_time)\n        test_true = np.concatenate(test_true)\n        test_pred = np.concatenate(test_pred)\n        test_acc = metrics.accuracy_score(test_true, test_pred)\n        avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)\n        outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f' % (epoch,\n                                                                            test_loss*1.0/count,\n                                                                            test_acc,\n                                                                            avg_per_class_acc)\n        io.cprint(outstr)\n        if test_acc >= best_test_acc:\n            best_test_acc = test_acc\n            torch.save(model.state_dict(), 'checkpoints/%s/models/model.t7' % args.exp_name)\n\n\ndef test(args, io):\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),\n                            batch_size=args.test_batch_size, shuffle=True, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    model = Pct(args).to(device)\n    model = nn.DataParallel(model) \n    \n    
model.load_state_dict(torch.load(args.model_path))\n    model = model.eval()\n    test_true = []\n    test_pred = []\n\n    for data, label in test_loader:\n        data, label = data.to(device), label.to(device).squeeze()\n        data = data.permute(0, 2, 1)\n        logits = model(data)\n        preds = logits.max(dim=1)[1]\n        if args.test_batch_size == 1:\n            test_true.append([label.cpu().numpy()])\n            test_pred.append([preds.detach().cpu().numpy()])\n        else:\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n\n    test_true = np.concatenate(test_true)\n    test_pred = np.concatenate(test_pred)\n    test_acc = metrics.accuracy_score(test_true, test_pred)\n    avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)\n    outstr = 'Test :: test acc: %.6f, test avg acc: %.6f' % (test_acc, avg_per_class_acc)\n    io.cprint(outstr)\n\nif __name__ == \"__main__\":\n    # Training settings\n    parser = argparse.ArgumentParser(description='Point Cloud Recognition')\n    parser.add_argument('--exp_name', type=str, default='exp', metavar='N',\n                        help='Name of the experiment')\n    parser.add_argument('--dataset', type=str, default='modelnet40', metavar='N',\n                        choices=['modelnet40'])\n    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',\n                        help='Size of training batch')\n    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',\n                        help='Size of test batch')\n    parser.add_argument('--epochs', type=int, default=250, metavar='N',\n                        help='number of epochs to train')\n    parser.add_argument('--use_sgd', type=lambda s: s.lower() in ('true', '1'), default=True,\n                        help='Use SGD (pass False to use Adam)')\n    parser.add_argument('--lr', type=float, default=0.0001, metavar='LR',\n                        help='learning rate (default: 0.0001; scaled by 100 when using SGD)')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--no_cuda', action='store_true', default=False,\n                        help='disables CUDA training')\n    parser.add_argument('--seed', type=int, default=1, metavar='S',\n                        help='random seed (default: 1)')\n    parser.add_argument('--eval', action='store_true', default=False,\n                        help='evaluate the model')\n    parser.add_argument('--num_points', type=int, default=1024,\n                        help='num of points to use')\n    parser.add_argument('--dropout', type=float, default=0.5,\n                        help='dropout rate')\n    parser.add_argument('--model_path', type=str, default='', metavar='N',\n                        help='Pretrained model path')\n    args = parser.parse_args()\n\n    _init_()\n\n    io = IOStream('checkpoints/' + args.exp_name + '/run.log')\n    io.cprint(str(args))\n\n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n    torch.manual_seed(args.seed)\n    if args.cuda:\n        io.cprint(\n            'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + str(torch.cuda.device_count()) + ' devices')\n        torch.cuda.manual_seed(args.seed)\n    else:\n        io.cprint('Using CPU')\n\n    if not args.eval:\n        train(args, io)\n    else:\n        test(args, io)\n"
  },
  {
    "path": "PCT_Pytorch/model.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom .util import sample_and_group \n\nclass Local_op(nn.Module):\n    def __init__(self, in_channels, out_channels):\n        super(Local_op, self).__init__()\n        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm1d(out_channels)\n        self.bn2 = nn.BatchNorm1d(out_channels)\n\n    def forward(self, x):\n        b, n, s, d = x.size()  # torch.Size([32, 512, 32, 6]) \n        x = x.permute(0, 1, 3, 2)   \n        x = x.reshape(-1, d, s) \n        batch_size, _, N = x.size()\n        x = F.relu(self.bn1(self.conv1(x))) # B, D, N\n        x = F.relu(self.bn2(self.conv2(x))) # B, D, N\n        x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x = x.reshape(b, n, -1).permute(0, 2, 1)\n        return x\n\nclass Pct(nn.Module):\n    def __init__(self, args, output_channels=40):\n        super(Pct, self).__init__()\n        self.args = args\n        self.conv1 = nn.Conv1d(3, 64, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(64, 64, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(64)\n        self.gather_local_0 = Local_op(in_channels=128, out_channels=128)\n        self.gather_local_1 = Local_op(in_channels=256, out_channels=256)\n\n        self.pt_last = Point_Transformer_Last(args)\n\n        self.conv_fuse = nn.Sequential(nn.Conv1d(1280, 1024, kernel_size=1, bias=False),\n                                    nn.BatchNorm1d(1024),\n                                    nn.LeakyReLU(negative_slope=0.2))\n\n\n        self.linear1 = nn.Linear(1024, 512, bias=False)\n        self.bn6 = nn.BatchNorm1d(512)\n        self.dp1 = nn.Dropout(p=args.dropout)\n        self.linear2 = nn.Linear(512, 256)\n        self.bn7 = nn.BatchNorm1d(256)\n        self.dp2 = 
nn.Dropout(p=args.dropout)\n        self.linear3 = nn.Linear(256, output_channels)\n\n    def forward(self, x):\n        xyz = x.permute(0, 2, 1)\n        batch_size, _, _ = x.size()\n        # B, D, N\n        x = F.relu(self.bn1(self.conv1(x)))\n        # B, D, N\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = x.permute(0, 2, 1)\n        new_xyz, new_feature = sample_and_group(npoint=512, radius=0.15, nsample=32, xyz=xyz, points=x)         \n        feature_0 = self.gather_local_0(new_feature)\n        feature = feature_0.permute(0, 2, 1)\n        new_xyz, new_feature = sample_and_group(npoint=256, radius=0.2, nsample=32, xyz=new_xyz, points=feature) \n        feature_1 = self.gather_local_1(new_feature)\n\n        x = self.pt_last(feature_1)\n        x = torch.cat([x, feature_1], dim=1)\n        x = self.conv_fuse(x)\n        x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)\n        x = self.dp1(x)\n        x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)\n        x = self.dp2(x)\n        x = self.linear3(x)\n\n        return x\n\nclass Point_Transformer_Last(nn.Module):\n    def __init__(self, args, channels=256):\n        super(Point_Transformer_Last, self).__init__()\n        self.args = args\n        self.conv1 = nn.Conv1d(channels, channels, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(channels, channels, kernel_size=1, bias=False)\n\n        self.bn1 = nn.BatchNorm1d(channels)\n        self.bn2 = nn.BatchNorm1d(channels)\n\n        self.sa1 = SA_Layer(channels)\n        self.sa2 = SA_Layer(channels)\n        self.sa3 = SA_Layer(channels)\n        self.sa4 = SA_Layer(channels)\n\n    def forward(self, x):\n        # \n        # b, 3, npoint, nsample  \n        # conv2d 3 -> 128 channels 1, 1\n        # b * npoint, c, nsample \n        # permute reshape\n        batch_size, _, N = x.size()\n\n        # B, D, N\n        x = 
F.relu(self.bn1(self.conv1(x)))\n        x = F.relu(self.bn2(self.conv2(x)))\n        x1 = self.sa1(x)\n        x2 = self.sa2(x1)\n        x3 = self.sa3(x2)\n        x4 = self.sa4(x3)\n        x = torch.cat((x1, x2, x3, x4), dim=1)\n\n        return x\n\nclass SA_Layer(nn.Module):\n    def __init__(self, channels):\n        super(SA_Layer, self).__init__()\n        self.q_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)\n        self.k_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)\n        self.q_conv.weight = self.k_conv.weight\n        self.q_conv.bias = self.k_conv.bias\n\n        self.v_conv = nn.Conv1d(channels, channels, 1)\n        self.trans_conv = nn.Conv1d(channels, channels, 1)\n        self.after_norm = nn.BatchNorm1d(channels)\n        self.act = nn.ReLU()\n        self.softmax = nn.Softmax(dim=-1)\n\n    def forward(self, x):\n        # b, n, c\n        x_q = self.q_conv(x).permute(0, 2, 1)\n        # b, c, n\n        x_k = self.k_conv(x)\n        x_v = self.v_conv(x)\n        # b, n, n\n        energy = torch.bmm(x_q, x_k)\n\n        attention = self.softmax(energy)\n        attention = attention / (1e-9 + attention.sum(dim=1, keepdim=True))\n        # b, c, n\n        x_r = torch.bmm(x_v, attention)\n        x_r = self.act(self.after_norm(self.trans_conv(x - x_r)))\n        x = x + x_r\n        return x"
  },
  {
    "path": "PCT_Pytorch/model_new.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom util import sample_and_group \n\nclass Local_op(nn.Module):\n    def __init__(self, in_channels, out_channels):\n        super(Local_op, self).__init__()\n        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm1d(out_channels)\n        self.bn2 = nn.BatchNorm1d(out_channels)\n\n    def forward(self, x):\n        b, n, s, d = x.size()  # torch.Size([32, 512, 32, 6]) \n        x = x.permute(0, 1, 3, 2)   \n        x = x.reshape(-1, d, s) \n        batch_size, _, N = x.size()\n        x = F.relu(self.bn1(self.conv1(x))) # B, D, N\n        x = F.relu(self.bn2(self.conv2(x))) # B, D, N\n        x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x = x.reshape(b, n, -1).permute(0, 2, 1)\n        return x\n\nclass Pct(nn.Module):\n    def __init__(self, args, output_channels=40):\n        super(Pct, self).__init__()\n        self.args = args\n        self.conv1 = nn.Conv1d(3, 64, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(64, 64, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(64)\n        self.gather_local_0 = Local_op(in_channels=128, out_channels=128)\n        self.gather_local_1 = Local_op(in_channels=256, out_channels=256)\n\n        self.pt_last = Point_Transformer_Last(args)\n\n        self.conv_fuse = nn.Sequential(nn.Conv1d(1280, 1024, kernel_size=1, bias=False),\n                                    nn.BatchNorm1d(1024),\n                                    nn.LeakyReLU(negative_slope=0.2))\n\n        self.linear1 = nn.Linear(1024, 512, bias=False)\n        self.bn6 = nn.BatchNorm1d(512)\n        self.dp1 = nn.Dropout(p=args.dropout)\n        self.linear2 = nn.Linear(512, 256)\n        self.bn7 = nn.BatchNorm1d(256)\n        self.dp2 = 
nn.Dropout(p=args.dropout)\n        self.linear3 = nn.Linear(256, output_channels)\n\n    def forward(self, x):\n        xyz = x.permute(0, 2, 1)\n        batch_size, _, _ = x.size()\n        # B, D, N\n        x = F.relu(self.bn1(self.conv1(x)))\n        # B, D, N\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = x.permute(0, 2, 1)\n        new_xyz, new_feature = sample_and_group(npoint=512, radius=0.15, nsample=32, xyz=xyz, points=x)         \n        feature_0 = self.gather_local_0(new_feature)\n        feature = feature_0.permute(0, 2, 1)\n        new_xyz, new_feature = sample_and_group(npoint=256, radius=0.2, nsample=32, xyz=new_xyz, points=feature) \n        feature_1 = self.gather_local_1(new_feature)\n\n        x = self.pt_last(feature_1, new_xyz)\n        x = torch.cat([x, feature_1], dim=1)\n        x = self.conv_fuse(x)\n        x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)\n        x = self.dp1(x)\n        x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)\n        x = self.dp2(x)\n        x = self.linear3(x)\n\n        return x\n\nclass Point_Transformer_Last(nn.Module):\n    def __init__(self, args, channels=256):\n        super(Point_Transformer_Last, self).__init__()\n        self.args = args\n        self.conv1 = nn.Conv1d(channels, channels, kernel_size=1, bias=False)\n        self.pos_xyz = nn.Conv1d(3, channels, 1)\n        self.bn1 = nn.BatchNorm1d(channels)\n\n        self.sa1 = SA_Layer(channels)\n        self.sa2 = SA_Layer(channels)\n        self.sa3 = SA_Layer(channels)\n        self.sa4 = SA_Layer(channels)\n\n    def forward(self, x, xyz):\n        # \n        # b, 3, npoint, nsample  \n        # conv2d 3 -> 128 channels 1, 1\n        # b * npoint, c, nsample \n        # permute reshape\n        batch_size, _, N = x.size()\n        xyz = xyz.permute(0, 2, 1)\n        xyz = self.pos_xyz(xyz)\n        # B, D, N\n        x = 
F.relu(self.bn1(self.conv1(x)))\n        x1 = self.sa1(x, xyz)\n        x2 = self.sa2(x1, xyz)\n        x3 = self.sa3(x2, xyz)\n        x4 = self.sa4(x3, xyz)\n        x = torch.cat((x1, x2, x3, x4), dim=1)\n\n        return x\n\nclass SA_Layer(nn.Module):\n    def __init__(self, channels):\n        super(SA_Layer, self).__init__()\n\n        self.q_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)\n        self.k_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)\n        self.q_conv.weight = self.k_conv.weight\n        self.q_conv.bias = self.k_conv.bias\n\n        self.v_conv = nn.Conv1d(channels, channels, 1)\n        self.trans_conv = nn.Conv1d(channels, channels, 1)\n        self.after_norm = nn.BatchNorm1d(channels)\n        self.act = nn.ReLU()\n        self.softmax = nn.Softmax(dim=-1)\n\n    def forward(self, x, xyz):\n        # b, n, c\n        x = x + xyz\n        x_q = self.q_conv(x).permute(0, 2, 1)\n        # b, c, n\n        x_k = self.k_conv(x)\n        x_v = self.v_conv(x)\n        # b, n, n\n        energy = torch.bmm(x_q, x_k)\n\n        attention = self.softmax(energy)\n        attention = attention / (1e-9 + attention.sum(dim=1, keepdim=True))\n        # b, c, n\n        x_r = torch.bmm(x_v, attention)\n        x_r = self.act(self.after_norm(self.trans_conv(x - x_r)))\n        x = x + x_r\n        return x\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/MANIFEST.in",
    "content": "graft pointnet2_ops/_ext-src\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/__init__.py",
    "content": "import pointnet2_ops.pointnet2_modules\nimport pointnet2_ops.pointnet2_utils\nfrom pointnet2_ops._version import __version__\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/ball_query.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,\n                      const int nsample);\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/cuda_utils.h",
    "content": "#ifndef _CUDA_UTILS_H\n#define _CUDA_UTILS_H\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <cmath>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n#include <vector>\n\n#define TOTAL_THREADS 512\n\ninline int opt_n_threads(int work_size) {\n  const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);\n\n  return max(min(1 << pow_2, TOTAL_THREADS), 1);\n}\n\ninline dim3 opt_block_config(int x, int y) {\n  const int x_threads = opt_n_threads(x);\n  const int y_threads =\n      max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);\n  dim3 block_config(x_threads, y_threads, 1);\n\n  return block_config;\n}\n\n#define CUDA_CHECK_ERRORS()                                           \\\n  do {                                                                \\\n    cudaError_t err = cudaGetLastError();                             \\\n    if (cudaSuccess != err) {                                         \\\n      fprintf(stderr, \"CUDA kernel failed : %s\\n%s at L:%d in %s\\n\",  \\\n              cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \\\n              __FILE__);                                              \\\n      exit(-1);                                                       \\\n    }                                                                 \\\n  } while (0)\n\n#endif\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/group_points.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor group_points(at::Tensor points, at::Tensor idx);\nat::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/interpolate.h",
    "content": "#pragma once\n\n#include <torch/extension.h>\n#include <vector>\n\nstd::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows);\nat::Tensor three_interpolate(at::Tensor points, at::Tensor idx,\n                             at::Tensor weight);\nat::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,\n                                  at::Tensor weight, const int m);\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/sampling.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor gather_points(at::Tensor points, at::Tensor idx);\nat::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);\nat::Tensor furthest_point_sampling(at::Tensor points, const int nsamples);\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/include/utils.h",
    "content": "#pragma once\n#include <ATen/cuda/CUDAContext.h>\n#include <torch/extension.h>\n\n#define CHECK_CUDA(x)                                    \\\n  do {                                                   \\\n    AT_ASSERT(x.is_cuda(), #x \" must be a CUDA tensor\"); \\\n  } while (0)\n\n#define CHECK_CONTIGUOUS(x)                                          \\\n  do {                                                               \\\n    AT_ASSERT(x.is_contiguous(), #x \" must be a contiguous tensor\"); \\\n  } while (0)\n\n#define CHECK_IS_INT(x)                               \\\n  do {                                                \\\n    AT_ASSERT(x.scalar_type() == at::ScalarType::Int, \\\n              #x \" must be an int tensor\");           \\\n  } while (0)\n\n#define CHECK_IS_FLOAT(x)                               \\\n  do {                                                  \\\n    AT_ASSERT(x.scalar_type() == at::ScalarType::Float, \\\n              #x \" must be a float tensor\");            \\\n  } while (0)\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query.cpp",
    "content": "#include \"ball_query.h\"\n#include \"utils.h\"\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n                                     int nsample, const float *new_xyz,\n                                     const float *xyz, int *idx);\n\nat::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,\n                      const int nsample) {\n  CHECK_CONTIGUOUS(new_xyz);\n  CHECK_CONTIGUOUS(xyz);\n  CHECK_IS_FLOAT(new_xyz);\n  CHECK_IS_FLOAT(xyz);\n\n  if (new_xyz.is_cuda()) {\n    CHECK_CUDA(xyz);\n  }\n\n  at::Tensor idx =\n      torch::zeros({new_xyz.size(0), new_xyz.size(1), nsample},\n                   at::device(new_xyz.device()).dtype(at::ScalarType::Int));\n\n  if (new_xyz.is_cuda()) {\n    query_ball_point_kernel_wrapper(xyz.size(0), xyz.size(1), new_xyz.size(1),\n                                    radius, nsample, new_xyz.data_ptr<float>(),\n                                    xyz.data_ptr<float>(), idx.data_ptr<int>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return idx;\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/ball_query_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: new_xyz(b, m, 3) xyz(b, n, 3)\n// output: idx(b, m, nsample)\n__global__ void query_ball_point_kernel(int b, int n, int m, float radius,\n                                        int nsample,\n                                        const float *__restrict__ new_xyz,\n                                        const float *__restrict__ xyz,\n                                        int *__restrict__ idx) {\n  int batch_index = blockIdx.x;\n  xyz += batch_index * n * 3;\n  new_xyz += batch_index * m * 3;\n  idx += m * nsample * batch_index;\n\n  int index = threadIdx.x;\n  int stride = blockDim.x;\n\n  float radius2 = radius * radius;\n  for (int j = index; j < m; j += stride) {\n    float new_x = new_xyz[j * 3 + 0];\n    float new_y = new_xyz[j * 3 + 1];\n    float new_z = new_xyz[j * 3 + 2];\n    for (int k = 0, cnt = 0; k < n && cnt < nsample; ++k) {\n      float x = xyz[k * 3 + 0];\n      float y = xyz[k * 3 + 1];\n      float z = xyz[k * 3 + 2];\n      float d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) +\n                 (new_z - z) * (new_z - z);\n      if (d2 < radius2) {\n        if (cnt == 0) {\n          for (int l = 0; l < nsample; ++l) {\n            idx[j * nsample + l] = k;\n          }\n        }\n        idx[j * nsample + cnt] = k;\n        ++cnt;\n      }\n    }\n  }\n}\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n                                     int nsample, const float *new_xyz,\n                                     const float *xyz, int *idx) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  query_ball_point_kernel<<<b, opt_n_threads(m), 0, stream>>>(\n      b, n, m, radius, nsample, new_xyz, xyz, idx);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/bindings.cpp",
    "content": "#include \"ball_query.h\"\n#include \"group_points.h\"\n#include \"interpolate.h\"\n#include \"sampling.h\"\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"gather_points\", &gather_points);\n  m.def(\"gather_points_grad\", &gather_points_grad);\n  m.def(\"furthest_point_sampling\", &furthest_point_sampling);\n\n  m.def(\"three_nn\", &three_nn);\n  m.def(\"three_interpolate\", &three_interpolate);\n  m.def(\"three_interpolate_grad\", &three_interpolate_grad);\n\n  m.def(\"ball_query\", &ball_query);\n\n  m.def(\"group_points\", &group_points);\n  m.def(\"group_points_grad\", &group_points_grad);\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points.cpp",
    "content": "#include \"group_points.h\"\n#include \"utils.h\"\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n                                 const float *points, const int *idx,\n                                 float *out);\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                      int nsample, const float *grad_out,\n                                      const int *idx, float *grad_points);\n\nat::Tensor group_points(at::Tensor points, at::Tensor idx) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n\n  if (points.is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1), idx.size(2)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.is_cuda()) {\n    group_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),\n                                idx.size(1), idx.size(2),\n                                points.data_ptr<float>(), idx.data_ptr<int>(),\n                                output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n\nat::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n\n  if (grad_out.is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), n},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.is_cuda()) {\n    group_points_grad_kernel_wrapper(\n        grad_out.size(0), grad_out.size(1), n, idx.size(1), idx.size(2),\n        grad_out.data_ptr<float>(), idx.data_ptr<int>(),\n        output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not 
supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/group_points_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: points(b, c, n) idx(b, npoints, nsample)\n// output: out(b, c, npoints, nsample)\n__global__ void group_points_kernel(int b, int c, int n, int npoints,\n                                    int nsample,\n                                    const float *__restrict__ points,\n                                    const int *__restrict__ idx,\n                                    float *__restrict__ out) {\n  int batch_index = blockIdx.x;\n  points += batch_index * n * c;\n  idx += batch_index * npoints * nsample;\n  out += batch_index * npoints * nsample * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * npoints; i += stride) {\n    const int l = i / npoints;\n    const int j = i % npoints;\n    for (int k = 0; k < nsample; ++k) {\n      int ii = idx[j * nsample + k];\n      out[(l * npoints + j) * nsample + k] = points[l * n + ii];\n    }\n  }\n}\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n                                 const float *points, const int *idx,\n                                 float *out) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  group_points_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n      b, c, n, npoints, nsample, points, idx, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, npoints, nsample), idx(b, npoints, nsample)\n// output: grad_points(b, c, n)\n__global__ void group_points_grad_kernel(int b, int c, int n, int npoints,\n                                         int nsample,\n                                         const float *__restrict__ grad_out,\n                                         const int *__restrict__ idx,\n                                         float *__restrict__ grad_points) {\n  int batch_index = blockIdx.x;\n  grad_out += batch_index * npoints * 
nsample * c;\n  idx += batch_index * npoints * nsample;\n  grad_points += batch_index * n * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * npoints; i += stride) {\n    const int l = i / npoints;\n    const int j = i % npoints;\n    for (int k = 0; k < nsample; ++k) {\n      int ii = idx[j * nsample + k];\n      atomicAdd(grad_points + l * n + ii,\n                grad_out[(l * npoints + j) * nsample + k]);\n    }\n  }\n}\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                      int nsample, const float *grad_out,\n                                      const int *idx, float *grad_points) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  group_points_grad_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n      b, c, n, npoints, nsample, grad_out, idx, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate.cpp",
    "content": "#include \"interpolate.h\"\n#include \"utils.h\"\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n                             const float *known, float *dist2, int *idx);\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n                                      const float *points, const int *idx,\n                                      const float *weight, float *out);\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n                                           const float *grad_out,\n                                           const int *idx, const float *weight,\n                                           float *grad_points);\n\nstd::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows) {\n  CHECK_CONTIGUOUS(unknowns);\n  CHECK_CONTIGUOUS(knows);\n  CHECK_IS_FLOAT(unknowns);\n  CHECK_IS_FLOAT(knows);\n\n  if (unknowns.is_cuda()) {\n    CHECK_CUDA(knows);\n  }\n\n  at::Tensor idx =\n      torch::zeros({unknowns.size(0), unknowns.size(1), 3},\n                   at::device(unknowns.device()).dtype(at::ScalarType::Int));\n  at::Tensor dist2 =\n      torch::zeros({unknowns.size(0), unknowns.size(1), 3},\n                   at::device(unknowns.device()).dtype(at::ScalarType::Float));\n\n  if (unknowns.is_cuda()) {\n    three_nn_kernel_wrapper(unknowns.size(0), unknowns.size(1), knows.size(1),\n                            unknowns.data_ptr<float>(), knows.data_ptr<float>(),\n                            dist2.data_ptr<float>(), idx.data_ptr<int>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return {dist2, idx};\n}\n\nat::Tensor three_interpolate(at::Tensor points, at::Tensor idx,\n                             at::Tensor weight) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_CONTIGUOUS(weight);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n  CHECK_IS_FLOAT(weight);\n\n  if (points.is_cuda()) {\n    CHECK_CUDA(idx);\n    
CHECK_CUDA(weight);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.is_cuda()) {\n    three_interpolate_kernel_wrapper(\n        points.size(0), points.size(1), points.size(2), idx.size(1),\n        points.data_ptr<float>(), idx.data_ptr<int>(), weight.data_ptr<float>(),\n        output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\nat::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,\n                                  at::Tensor weight, const int m) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_CONTIGUOUS(weight);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n  CHECK_IS_FLOAT(weight);\n\n  if (grad_out.is_cuda()) {\n    CHECK_CUDA(idx);\n    CHECK_CUDA(weight);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), m},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.is_cuda()) {\n    three_interpolate_grad_kernel_wrapper(\n        grad_out.size(0), grad_out.size(1), grad_out.size(2), m,\n        grad_out.data_ptr<float>(), idx.data_ptr<int>(),\n        weight.data_ptr<float>(), output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/interpolate_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: unknown(b, n, 3) known(b, m, 3)\n// output: dist2(b, n, 3), idx(b, n, 3)\n__global__ void three_nn_kernel(int b, int n, int m,\n                                const float *__restrict__ unknown,\n                                const float *__restrict__ known,\n                                float *__restrict__ dist2,\n                                int *__restrict__ idx) {\n  int batch_index = blockIdx.x;\n  unknown += batch_index * n * 3;\n  known += batch_index * m * 3;\n  dist2 += batch_index * n * 3;\n  idx += batch_index * n * 3;\n\n  int index = threadIdx.x;\n  int stride = blockDim.x;\n  for (int j = index; j < n; j += stride) {\n    float ux = unknown[j * 3 + 0];\n    float uy = unknown[j * 3 + 1];\n    float uz = unknown[j * 3 + 2];\n\n    double best1 = 1e40, best2 = 1e40, best3 = 1e40;\n    int besti1 = 0, besti2 = 0, besti3 = 0;\n    for (int k = 0; k < m; ++k) {\n      float x = known[k * 3 + 0];\n      float y = known[k * 3 + 1];\n      float z = known[k * 3 + 2];\n      float d = (ux - x) * (ux - x) + (uy - y) * (uy - y) + (uz - z) * (uz - z);\n      if (d < best1) {\n        best3 = best2;\n        besti3 = besti2;\n        best2 = best1;\n        besti2 = besti1;\n        best1 = d;\n        besti1 = k;\n      } else if (d < best2) {\n        best3 = best2;\n        besti3 = besti2;\n        best2 = d;\n        besti2 = k;\n      } else if (d < best3) {\n        best3 = d;\n        besti3 = k;\n      }\n    }\n    dist2[j * 3 + 0] = best1;\n    dist2[j * 3 + 1] = best2;\n    dist2[j * 3 + 2] = best3;\n\n    idx[j * 3 + 0] = besti1;\n    idx[j * 3 + 1] = besti2;\n    idx[j * 3 + 2] = besti3;\n  }\n}\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n                             const float *known, float *dist2, int *idx) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_nn_kernel<<<b, 
opt_n_threads(n), 0, stream>>>(b, n, m, unknown, known,\n                                                      dist2, idx);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: points(b, c, m), idx(b, n, 3), weight(b, n, 3)\n// output: out(b, c, n)\n__global__ void three_interpolate_kernel(int b, int c, int m, int n,\n                                         const float *__restrict__ points,\n                                         const int *__restrict__ idx,\n                                         const float *__restrict__ weight,\n                                         float *__restrict__ out) {\n  int batch_index = blockIdx.x;\n  points += batch_index * m * c;\n\n  idx += batch_index * n * 3;\n  weight += batch_index * n * 3;\n\n  out += batch_index * n * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * n; i += stride) {\n    const int l = i / n;\n    const int j = i % n;\n    float w1 = weight[j * 3 + 0];\n    float w2 = weight[j * 3 + 1];\n    float w3 = weight[j * 3 + 2];\n\n    int i1 = idx[j * 3 + 0];\n    int i2 = idx[j * 3 + 1];\n    int i3 = idx[j * 3 + 2];\n\n    out[i] = points[l * m + i1] * w1 + points[l * m + i2] * w2 +\n             points[l * m + i3] * w3;\n  }\n}\n\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n                                      const float *points, const int *idx,\n                                      const float *weight, float *out) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_interpolate_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n      b, c, m, n, points, idx, weight, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, n), idx(b, n, 3), weight(b, n, 3)\n// output: grad_points(b, c, m)\n\n__global__ void three_interpolate_grad_kernel(\n    int b, int c, int n, int m, const float *__restrict__ grad_out,\n    const int *__restrict__ idx, const float *__restrict__ weight,\n   
 float *__restrict__ grad_points) {\n  int batch_index = blockIdx.x;\n  grad_out += batch_index * n * c;\n  idx += batch_index * n * 3;\n  weight += batch_index * n * 3;\n  grad_points += batch_index * m * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * n; i += stride) {\n    const int l = i / n;\n    const int j = i % n;\n    float w1 = weight[j * 3 + 0];\n    float w2 = weight[j * 3 + 1];\n    float w3 = weight[j * 3 + 2];\n\n    int i1 = idx[j * 3 + 0];\n    int i2 = idx[j * 3 + 1];\n    int i3 = idx[j * 3 + 2];\n\n    atomicAdd(grad_points + l * m + i1, grad_out[i] * w1);\n    atomicAdd(grad_points + l * m + i2, grad_out[i] * w2);\n    atomicAdd(grad_points + l * m + i3, grad_out[i] * w3);\n  }\n}\n\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n                                           const float *grad_out,\n                                           const int *idx, const float *weight,\n                                           float *grad_points) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_interpolate_grad_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n      b, c, n, m, grad_out, idx, weight, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling.cpp",
    "content": "#include \"sampling.h\"\n#include \"utils.h\"\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n                                  const float *points, const int *idx,\n                                  float *out);\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                       const float *grad_out, const int *idx,\n                                       float *grad_points);\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n                                            const float *dataset, float *temp,\n                                            int *idxs);\n\nat::Tensor gather_points(at::Tensor points, at::Tensor idx) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n\n  if (points.is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.is_cuda()) {\n    gather_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),\n                                 idx.size(1), points.data_ptr<float>(),\n                                 idx.data_ptr<int>(), output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n\nat::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx,\n                              const int n) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n\n  if (grad_out.is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), n},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.is_cuda()) {\n    gather_points_grad_kernel_wrapper(grad_out.size(0), grad_out.size(1), n,\n                     
                 idx.size(1), grad_out.data_ptr<float>(),\n                                      idx.data_ptr<int>(),\n                                      output.data_ptr<float>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\nat::Tensor furthest_point_sampling(at::Tensor points, const int nsamples) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_IS_FLOAT(points);\n\n  at::Tensor output =\n      torch::zeros({points.size(0), nsamples},\n                   at::device(points.device()).dtype(at::ScalarType::Int));\n\n  at::Tensor tmp =\n      torch::full({points.size(0), points.size(1)}, 1e10,\n                  at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.is_cuda()) {\n    furthest_point_sampling_kernel_wrapper(\n        points.size(0), points.size(1), nsamples, points.data_ptr<float>(),\n        tmp.data_ptr<float>(), output.data_ptr<int>());\n  } else {\n    AT_ASSERT(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: points(b, c, n) idx(b, m)\n// output: out(b, c, m)\n__global__ void gather_points_kernel(int b, int c, int n, int m,\n                                     const float *__restrict__ points,\n                                     const int *__restrict__ idx,\n                                     float *__restrict__ out) {\n  for (int i = blockIdx.x; i < b; i += gridDim.x) {\n    for (int l = blockIdx.y; l < c; l += gridDim.y) {\n      for (int j = threadIdx.x; j < m; j += blockDim.x) {\n        int a = idx[i * m + j];\n        out[(i * c + l) * m + j] = points[(i * c + l) * n + a];\n      }\n    }\n  }\n}\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n                                  const float *points, const int *idx,\n                                  float *out) {\n  gather_points_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,\n                         at::cuda::getCurrentCUDAStream()>>>(b, c, n, npoints,\n                                                             points, idx, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, m) idx(b, m)\n// output: grad_points(b, c, n)\n__global__ void gather_points_grad_kernel(int b, int c, int n, int m,\n                                          const float *__restrict__ grad_out,\n                                          const int *__restrict__ idx,\n                                          float *__restrict__ grad_points) {\n  for (int i = blockIdx.x; i < b; i += gridDim.x) {\n    for (int l = blockIdx.y; l < c; l += gridDim.y) {\n      for (int j = threadIdx.x; j < m; j += blockDim.x) {\n        int a = idx[i * m + j];\n        atomicAdd(grad_points + (i * c + l) * n + a,\n                  grad_out[(i * c + l) * m + j]);\n      }\n    }\n  }\n}\n\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                       const float *grad_out, 
const int *idx,\n                                       float *grad_points) {\n  gather_points_grad_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,\n                              at::cuda::getCurrentCUDAStream()>>>(\n      b, c, n, npoints, grad_out, idx, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n\n__device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i,\n                         int idx1, int idx2) {\n  const float v1 = dists[idx1], v2 = dists[idx2];\n  const int i1 = dists_i[idx1], i2 = dists_i[idx2];\n  dists[idx1] = max(v1, v2);\n  dists_i[idx1] = v2 > v1 ? i2 : i1;\n}\n\n// Input dataset: (b, n, 3), tmp: (b, n)\n// Output idxs (b, m)\ntemplate <unsigned int block_size>\n__global__ void furthest_point_sampling_kernel(\n    int b, int n, int m, const float *__restrict__ dataset,\n    float *__restrict__ temp, int *__restrict__ idxs) {\n  if (m <= 0) return;\n  __shared__ float dists[block_size];\n  __shared__ int dists_i[block_size];\n\n  int batch_index = blockIdx.x;\n  dataset += batch_index * n * 3;\n  temp += batch_index * n;\n  idxs += batch_index * m;\n\n  int tid = threadIdx.x;\n  const int stride = block_size;\n\n  int old = 0;\n  if (threadIdx.x == 0) idxs[0] = old;\n\n  __syncthreads();\n  for (int j = 1; j < m; j++) {\n    int besti = 0;\n    float best = -1;\n    float x1 = dataset[old * 3 + 0];\n    float y1 = dataset[old * 3 + 1];\n    float z1 = dataset[old * 3 + 2];\n    for (int k = tid; k < n; k += stride) {\n      float x2, y2, z2;\n      x2 = dataset[k * 3 + 0];\n      y2 = dataset[k * 3 + 1];\n      z2 = dataset[k * 3 + 2];\n      float mag = (x2 * x2) + (y2 * y2) + (z2 * z2);\n      if (mag <= 1e-3) continue;\n\n      float d =\n          (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1);\n\n      float d2 = min(d, temp[k]);\n      temp[k] = d2;\n      besti = d2 > best ? k : besti;\n      best = d2 > best ? 
d2 : best;\n    }\n    dists[tid] = best;\n    dists_i[tid] = besti;\n    __syncthreads();\n\n    if (block_size >= 512) {\n      if (tid < 256) {\n        __update(dists, dists_i, tid, tid + 256);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 256) {\n      if (tid < 128) {\n        __update(dists, dists_i, tid, tid + 128);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 128) {\n      if (tid < 64) {\n        __update(dists, dists_i, tid, tid + 64);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 64) {\n      if (tid < 32) {\n        __update(dists, dists_i, tid, tid + 32);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 32) {\n      if (tid < 16) {\n        __update(dists, dists_i, tid, tid + 16);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 16) {\n      if (tid < 8) {\n        __update(dists, dists_i, tid, tid + 8);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 8) {\n      if (tid < 4) {\n        __update(dists, dists_i, tid, tid + 4);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 4) {\n      if (tid < 2) {\n        __update(dists, dists_i, tid, tid + 2);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 2) {\n      if (tid < 1) {\n        __update(dists, dists_i, tid, tid + 1);\n      }\n      __syncthreads();\n    }\n\n    old = dists_i[0];\n    if (tid == 0) idxs[j] = old;\n  }\n}\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n                                            const float *dataset, float *temp,\n                                            int *idxs) {\n  unsigned int n_threads = opt_n_threads(n);\n\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  switch (n_threads) {\n    case 512:\n      furthest_point_sampling_kernel<512>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 256:\n      furthest_point_sampling_kernel<256>\n          <<<b, 
n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 128:\n      furthest_point_sampling_kernel<128>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 64:\n      furthest_point_sampling_kernel<64>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 32:\n      furthest_point_sampling_kernel<32>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 16:\n      furthest_point_sampling_kernel<16>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 8:\n      furthest_point_sampling_kernel<8>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 4:\n      furthest_point_sampling_kernel<4>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 2:\n      furthest_point_sampling_kernel<2>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 1:\n      furthest_point_sampling_kernel<1>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    default:\n      furthest_point_sampling_kernel<512>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n  }\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/_version.py",
    "content": "__version__ = \"3.0.0\"\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_modules.py",
    "content": "from typing import List, Optional, Tuple\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom pointnet2_ops import pointnet2_utils\n\n\ndef build_shared_mlp(mlp_spec: List[int], bn: bool = True):\n    layers = []\n    for i in range(1, len(mlp_spec)):\n        layers.append(\n            nn.Conv2d(mlp_spec[i - 1], mlp_spec[i], kernel_size=1, bias=not bn)\n        )\n        if bn:\n            layers.append(nn.BatchNorm2d(mlp_spec[i]))\n        layers.append(nn.ReLU(True))\n\n    return nn.Sequential(*layers)\n\n\nclass _PointnetSAModuleBase(nn.Module):\n    def __init__(self):\n        super(_PointnetSAModuleBase, self).__init__()\n        self.npoint = None\n        self.groupers = None\n        self.mlps = None\n\n    def forward(\n        self, xyz: torch.Tensor, features: Optional[torch.Tensor]\n    ) -> Tuple[torch.Tensor, torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor of the xyz coordinates of the features\n        features : torch.Tensor\n            (B, C, N) tensor of the descriptors of the the features\n\n        Returns\n        -------\n        new_xyz : torch.Tensor\n            (B, npoint, 3) tensor of the new features' xyz\n        new_features : torch.Tensor\n            (B,  \\sum_k(mlps[k][-1]), npoint) tensor of the new_features descriptors\n        \"\"\"\n\n        new_features_list = []\n\n        xyz_flipped = xyz.transpose(1, 2).contiguous()\n        new_xyz = (\n            pointnet2_utils.gather_operation(\n                xyz_flipped, pointnet2_utils.furthest_point_sample(xyz, self.npoint)\n            )\n            .transpose(1, 2)\n            .contiguous()\n            if self.npoint is not None\n            else None\n        )\n\n        for i in range(len(self.groupers)):\n            new_features = self.groupers[i](\n                xyz, new_xyz, features\n            )  # (B, C, npoint, nsample)\n\n         
   new_features = self.mlps[i](new_features)  # (B, mlp[-1], npoint, nsample)\n            new_features = F.max_pool2d(\n                new_features, kernel_size=[1, new_features.size(3)]\n            )  # (B, mlp[-1], npoint, 1)\n            new_features = new_features.squeeze(-1)  # (B, mlp[-1], npoint)\n\n            new_features_list.append(new_features)\n\n        return new_xyz, torch.cat(new_features_list, dim=1)\n\n\nclass PointnetSAModuleMSG(_PointnetSAModuleBase):\n    r\"\"\"Pointnet set abstraction layer with multiscale grouping\n\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    radii : list of float32\n        list of radii to group with\n    nsamples : list of int32\n        Number of samples in each ball query\n    mlps : list of list of int32\n        Spec of the pointnet before the global max_pool for each scale\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, npoint, radii, nsamples, mlps, bn=True, use_xyz=True):\n        # type: (PointnetSAModuleMSG, int, List[float], List[int], List[List[int]], bool, bool) -> None\n        super(PointnetSAModuleMSG, self).__init__()\n\n        assert len(radii) == len(nsamples) == len(mlps)\n\n        self.npoint = npoint\n        self.groupers = nn.ModuleList()\n        self.mlps = nn.ModuleList()\n        for i in range(len(radii)):\n            radius = radii[i]\n            nsample = nsamples[i]\n            self.groupers.append(\n                pointnet2_utils.QueryAndGroup(radius, nsample, use_xyz=use_xyz)\n                if npoint is not None\n                else pointnet2_utils.GroupAll(use_xyz)\n            )\n            mlp_spec = mlps[i]\n            if use_xyz:\n                mlp_spec[0] += 3\n\n            self.mlps.append(build_shared_mlp(mlp_spec, bn))\n\n\nclass PointnetSAModule(PointnetSAModuleMSG):\n    r\"\"\"Pointnet set abstraction layer\n\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    
radius : float\n        Radius of ball\n    nsample : int\n        Number of samples in the ball query\n    mlp : list\n        Spec of the pointnet before the global max_pool\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n        self, mlp, npoint=None, radius=None, nsample=None, bn=True, use_xyz=True\n    ):\n        # type: (PointnetSAModule, List[int], int, float, int, bool, bool) -> None\n        super(PointnetSAModule, self).__init__(\n            mlps=[mlp],\n            npoint=npoint,\n            radii=[radius],\n            nsamples=[nsample],\n            bn=bn,\n            use_xyz=use_xyz,\n        )\n\n\nclass PointnetFPModule(nn.Module):\n    r\"\"\"Propagates the features of one set to another\n\n    Parameters\n    ----------\n    mlp : list\n        Pointnet module parameters\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, mlp, bn=True):\n        # type: (PointnetFPModule, List[int], bool) -> None\n        super(PointnetFPModule, self).__init__()\n        self.mlp = build_shared_mlp(mlp, bn=bn)\n\n    def forward(self, unknown, known, unknow_feats, known_feats):\n        # type: (PointnetFPModule, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of the xyz positions of the unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of the xyz positions of the known features\n        unknow_feats : torch.Tensor\n            (B, C1, n) tensor of the features to be propagated to\n        known_feats : torch.Tensor\n            (B, C2, m) tensor of features to be propagated\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, mlp[-1], n) tensor of the features of the unknown features\n        \"\"\"\n\n        if known is not None:\n            dist, idx = pointnet2_utils.three_nn(unknown, known)\n          
  dist_recip = 1.0 / (dist + 1e-8)\n            norm = torch.sum(dist_recip, dim=2, keepdim=True)\n            weight = dist_recip / norm\n\n            interpolated_feats = pointnet2_utils.three_interpolate(\n                known_feats, idx, weight\n            )\n        else:\n            # known_feats.size() is a torch.Size (a tuple), so adding a list to\n            # it raises a TypeError; pass the target shape explicitly instead\n            interpolated_feats = known_feats.expand(\n                known_feats.size(0), known_feats.size(1), unknown.size(1)\n            )\n\n        if unknow_feats is not None:\n            new_features = torch.cat(\n                [interpolated_feats, unknow_feats], dim=1\n            )  # (B, C2 + C1, n)\n        else:\n            new_features = interpolated_feats\n\n        new_features = new_features.unsqueeze(-1)\n        new_features = self.mlp(new_features)\n\n        return new_features.squeeze(-1)\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/pointnet2_ops/pointnet2_utils.py",
    "content": "import torch\nimport torch.nn as nn\nimport warnings\nfrom torch.autograd import Function\nfrom typing import *\n\ntry:\n    import pointnet2_ops._ext as _ext\nexcept ImportError:\n    from torch.utils.cpp_extension import load\n    import glob\n    import os.path as osp\n    import os\n\n    warnings.warn(\"Unable to load pointnet2_ops cpp extension. JIT Compiling.\")\n\n    _ext_src_root = osp.join(osp.dirname(__file__), \"_ext-src\")\n    _ext_sources = glob.glob(osp.join(_ext_src_root, \"src\", \"*.cpp\")) + glob.glob(\n        osp.join(_ext_src_root, \"src\", \"*.cu\")\n    )\n    _ext_headers = glob.glob(osp.join(_ext_src_root, \"include\", \"*\"))\n\n    os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"3.7+PTX;5.0;6.0;6.1;6.2;7.0;7.5\"\n    _ext = load(\n        \"_ext\",\n        sources=_ext_sources,\n        extra_include_paths=[osp.join(_ext_src_root, \"include\")],\n        extra_cflags=[\"-O3\"],\n        extra_cuda_cflags=[\"-O3\", \"-Xfatbin\", \"-compress-all\"],\n        with_cuda=True,\n    )\n\n\nclass FurthestPointSampling(Function):\n    @staticmethod\n    def forward(ctx, xyz, npoint):\n        # type: (Any, torch.Tensor, int) -> torch.Tensor\n        r\"\"\"\n        Uses iterative furthest point sampling to select a set of npoint features that have the largest\n        minimum distance\n\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor where N > npoint\n        npoint : int32\n            number of features in the sampled set\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint) tensor containing the set\n        \"\"\"\n        out = _ext.furthest_point_sampling(xyz, npoint)\n\n        ctx.mark_non_differentiable(out)\n\n        return out\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        return ()\n\n\nfurthest_point_sample = FurthestPointSampling.apply\n\n\nclass GatherOperation(Function):\n    @staticmethod\n    def forward(ctx, features, 
idx):\n        # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, N) tensor\n\n        idx : torch.Tensor\n            (B, npoint) tensor of the features to gather\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint) tensor\n        \"\"\"\n\n        ctx.save_for_backward(idx, features)\n\n        return _ext.gather_points(features, idx)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        idx, features = ctx.saved_tensors\n        N = features.size(2)\n\n        grad_features = _ext.gather_points_grad(grad_out.contiguous(), idx, N)\n        return grad_features, None\n\n\ngather_operation = GatherOperation.apply\n\n\nclass ThreeNN(Function):\n    @staticmethod\n    def forward(ctx, unknown, known):\n        # type: (Any, torch.Tensor, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]\n        r\"\"\"\n            Find the three nearest neighbors of unknown in known\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of known features\n\n        Returns\n        -------\n        dist : torch.Tensor\n            (B, n, 3) l2 distance to the three nearest neighbors\n        idx : torch.Tensor\n            (B, n, 3) index of 3 nearest neighbors\n        \"\"\"\n        dist2, idx = _ext.three_nn(unknown, known)\n        dist = torch.sqrt(dist2)\n\n        ctx.mark_non_differentiable(dist, idx)\n\n        return dist, idx\n\n    @staticmethod\n    def backward(ctx, grad_dist, grad_idx):\n        return ()\n\n\nthree_nn = ThreeNN.apply\n\n\nclass ThreeInterpolate(Function):\n    @staticmethod\n    def forward(ctx, features, idx, weight):\n        # type: (Any, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n            Performs weighted linear interpolation 
on 3 features\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, c, m) Feature descriptors to be interpolated from\n        idx : torch.Tensor\n            (B, n, 3) three nearest neighbors of the target features in features\n        weight : torch.Tensor\n            (B, n, 3) weights\n\n        Returns\n        -------\n        torch.Tensor\n            (B, c, n) tensor of the interpolated features\n        \"\"\"\n        ctx.save_for_backward(idx, weight, features)\n\n        return _ext.three_interpolate(features, idx, weight)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        # type: (Any, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]\n        r\"\"\"\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, c, n) tensor with gradients of outputs\n\n        Returns\n        -------\n        grad_features : torch.Tensor\n            (B, c, m) tensor with gradients of features\n\n        None\n\n        None\n        \"\"\"\n        idx, weight, features = ctx.saved_tensors\n        m = features.size(2)\n\n        grad_features = _ext.three_interpolate_grad(\n            grad_out.contiguous(), idx, weight, m\n        )\n\n        return grad_features, torch.zeros_like(idx), torch.zeros_like(weight)\n\n\nthree_interpolate = ThreeInterpolate.apply\n\n\nclass GroupingOperation(Function):\n    @staticmethod\n    def forward(ctx, features, idx):\n        # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, N) tensor of features to group\n        idx : torch.Tensor\n            (B, npoint, nsample) tensor containing the indices of features to group with\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint, nsample) tensor\n        \"\"\"\n        ctx.save_for_backward(idx, features)\n\n        return 
_ext.group_points(features, idx)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        # type: (Any, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]\n        r\"\"\"\n\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, C, npoint, nsample) tensor of the gradients of the output from forward\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, N) gradient of the features\n        None\n        \"\"\"\n        idx, features = ctx.saved_tensors\n        N = features.size(2)\n\n        grad_features = _ext.group_points_grad(grad_out.contiguous(), idx, N)\n\n        return grad_features, torch.zeros_like(idx)\n\n\ngrouping_operation = GroupingOperation.apply\n\n\nclass BallQuery(Function):\n    @staticmethod\n    def forward(ctx, radius, nsample, xyz, new_xyz):\n        # type: (Any, float, int, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        radius : float\n            radius of the balls\n        nsample : int\n            maximum number of features in the balls\n        xyz : torch.Tensor\n            (B, N, 3) xyz coordinates of the features\n        new_xyz : torch.Tensor\n            (B, npoint, 3) centers of the ball query\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint, nsample) tensor with the indices of the features that form the query balls\n        \"\"\"\n        output = _ext.ball_query(new_xyz, xyz, radius, nsample)\n\n        ctx.mark_non_differentiable(output)\n\n        return output\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        return ()\n\n\nball_query = BallQuery.apply\n\n\nclass QueryAndGroup(nn.Module):\n    r\"\"\"\n    Groups with a ball query of radius\n\n    Parameters\n    ----------\n    radius : float32\n        Radius of ball\n    nsample : int32\n        Maximum number of features to gather in the ball\n    \"\"\"\n\n    def __init__(self, radius, 
nsample, use_xyz=True):\n        # type: (QueryAndGroup, float, int, bool) -> None\n        super(QueryAndGroup, self).__init__()\n        self.radius, self.nsample, self.use_xyz = radius, nsample, use_xyz\n\n    def forward(self, xyz, new_xyz, features=None):\n        # type: (QueryAndGroup, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            centroids (B, npoint, 3)\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, 3 + C, npoint, nsample) tensor\n        \"\"\"\n\n        idx = ball_query(self.radius, self.nsample, xyz, new_xyz)\n        xyz_trans = xyz.transpose(1, 2).contiguous()\n        grouped_xyz = grouping_operation(xyz_trans, idx)  # (B, 3, npoint, nsample)\n        grouped_xyz -= new_xyz.transpose(1, 2).unsqueeze(-1)\n\n        if features is not None:\n            grouped_features = grouping_operation(features, idx)\n            if self.use_xyz:\n                new_features = torch.cat(\n                    [grouped_xyz, grouped_features], dim=1\n                )  # (B, C + 3, npoint, nsample)\n            else:\n                new_features = grouped_features\n        else:\n            assert (\n                self.use_xyz\n            ), \"Cannot have features=None and use_xyz=False!\"\n            new_features = grouped_xyz\n\n        return new_features\n\n\nclass GroupAll(nn.Module):\n    r\"\"\"\n    Groups all features\n    \"\"\"\n\n    def __init__(self, use_xyz=True):\n        # type: (GroupAll, bool) -> None\n        super(GroupAll, self).__init__()\n        self.use_xyz = use_xyz\n\n    def forward(self, xyz, new_xyz, features=None):\n        # type: (GroupAll, torch.Tensor, 
torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            Ignored\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, C + 3, 1, N) tensor\n        \"\"\"\n\n        grouped_xyz = xyz.transpose(1, 2).unsqueeze(2)\n        if features is not None:\n            grouped_features = features.unsqueeze(2)\n            if self.use_xyz:\n                new_features = torch.cat(\n                    [grouped_xyz, grouped_features], dim=1\n                )  # (B, 3 + C, 1, N)\n            else:\n                new_features = grouped_features\n        else:\n            new_features = grouped_xyz\n\n        return new_features\n"
  },
  {
    "path": "PCT_Pytorch/pointnet2_ops_lib/setup.py",
    "content": "import glob\nimport os\nimport os.path as osp\n\nfrom setuptools import find_packages, setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\nthis_dir = osp.dirname(osp.abspath(__file__))\n_ext_src_root = osp.join(\"pointnet2_ops\", \"_ext-src\")\n_ext_sources = glob.glob(osp.join(_ext_src_root, \"src\", \"*.cpp\")) + glob.glob(\n    osp.join(_ext_src_root, \"src\", \"*.cu\")\n)\n_ext_headers = glob.glob(osp.join(_ext_src_root, \"include\", \"*\"))\n\nrequirements = [\"torch>=1.4\"]\n\nexec(open(osp.join(\"pointnet2_ops\", \"_version.py\")).read())\n\nos.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"3.7+PTX;5.0;6.0;6.1;6.2;7.0;7.5\"\nsetup(\n    name=\"pointnet2_ops\",\n    version=__version__,\n    author=\"Erik Wijmans\",\n    packages=find_packages(),\n    install_requires=requirements,\n    ext_modules=[\n        CUDAExtension(\n            name=\"pointnet2_ops._ext\",\n            sources=_ext_sources,\n            extra_compile_args={\n                \"cxx\": [\"-O3\"],\n                \"nvcc\": [\"-O3\", \"-Xfatbin\", \"-compress-all\"],\n            },\n            include_dirs=[osp.join(this_dir, _ext_src_root, \"include\")],\n        )\n    ],\n    cmdclass={\"build_ext\": BuildExtension},\n    include_package_data=True,\n)\n"
  },
  {
    "path": "PCT_Pytorch/test.sh",
    "content": "python main.py --exp_name=test --num_points=1024 --use_sgd=True --eval=True --model_path=checkpoints/best/models/model.t7 --test_batch_size 8\n"
  },
  {
    "path": "PCT_Pytorch/train.sh",
    "content": "CUDA_VISIBLE_DEVICES=0 python3.7 main.py --exp_name=train --num_points=1024 --use_sgd=True --batch_size 32 --epochs 250 --lr 0.0001\n"
  },
  {
    "path": "PCT_Pytorch/util.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom pointnet2_ops import pointnet2_utils\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1)\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n\nclass IOStream():\n    def __init__(self, path):\n        self.f = open(path, 'a')\n\n    def cprint(self, text):\n        print(text)\n        self.f.write(text+'\\n')\n        self.f.flush()\n\n    def close(self):\n        self.f.close()\n\ndef square_distance(src, dst):\n    \"\"\"\n    Calculate Euclid distance between each two points.\n    src^T * dst = xn * xm + yn * ym + zn * zm；\n    sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn;\n    sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm;\n    dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2\n         = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst\n    Input:\n        src: source points, [B, N, C]\n        dst: target points, [B, M, C]\n    Output:\n        dist: per-point square distance, [B, N, M]\n    \"\"\"\n    B, N, _ = src.shape\n    _, M, _ = dst.shape\n    dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))\n    dist += torch.sum(src ** 2, -1).view(B, N, 1)\n    dist += torch.sum(dst ** 2, -1).view(B, 1, M)\n    return dist\n\ndef index_points(points, idx):\n    \"\"\"\n    Input:\n        points: input points data, [B, N, C]\n        idx: sample index data, [B, S]\n    Return:\n        new_points:, indexed points data, [B, S, C]\n    \"\"\"\n    device = points.device\n    B = points.shape[0]\n    view_shape = list(idx.shape)\n    
view_shape[1:] = [1] * (len(view_shape) - 1)\n    repeat_shape = list(idx.shape)\n    repeat_shape[0] = 1\n    batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)\n    new_points = points[batch_indices, idx, :]\n    return new_points\n\ndef query_ball_point(radius, nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        radius: local region radius\n        nsample: max sample number in local region\n        xyz: all points, [B, N, 3]\n        new_xyz: query points, [B, S, 3]\n    Return:\n        group_idx: grouped points index, [B, S, nsample]\n    \"\"\"\n    device = xyz.device\n    B, N, C = xyz.shape\n    _, S, _ = new_xyz.shape\n    group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])\n    sqrdists = square_distance(new_xyz, xyz)\n    group_idx[sqrdists > radius ** 2] = N\n    group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]\n    group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])\n    mask = group_idx == N\n    group_idx[mask] = group_first[mask]\n    return group_idx\n\ndef knn_point(nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        nsample: max sample number in local region\n        xyz: all points, [B, N, C]\n        new_xyz: query points, [B, S, C]\n    Return:\n        group_idx: grouped points index, [B, S, nsample]\n    \"\"\"\n    sqrdists = square_distance(new_xyz, xyz)\n    _, group_idx = torch.topk(sqrdists, nsample, dim=-1, largest=False, sorted=False)\n    return group_idx\n\ndef sample_and_group(npoint, radius, nsample, xyz, points):\n    \"\"\"\n    Input:\n        npoint: number of centroids to sample with farthest point sampling\n        radius: ball query radius (unused; kNN grouping is used instead)\n        nsample: max sample number in local region\n        xyz: input points position data, [B, N, 3]\n        points: input points data, [B, N, D]\n    Return:\n        new_xyz: sampled points position data, [B, npoint, 3]\n        new_points: sampled points data, [B, npoint, nsample, 2*D]\n    \"\"\"\n    B, N, C = xyz.shape\n    S = npoint\n    xyz = 
xyz.contiguous()\n\n    fps_idx = pointnet2_utils.furthest_point_sample(xyz, npoint).long() # [B, npoint]\n    new_xyz = index_points(xyz, fps_idx) \n    new_points = index_points(points, fps_idx)\n    # new_xyz = xyz[:]\n    # new_points = points[:]\n\n    idx = knn_point(nsample, xyz, new_xyz)\n    #idx = query_ball_point(radius, nsample, xyz, new_xyz)\n    grouped_xyz = index_points(xyz, idx) # [B, npoint, nsample, C]\n    grouped_xyz_norm = grouped_xyz - new_xyz.view(B, S, 1, C)\n    grouped_points = index_points(points, idx)\n    grouped_points_norm = grouped_points - new_points.view(B, S, 1, -1)\n    new_points = torch.cat([grouped_points_norm, new_points.view(B, S, 1, -1).repeat(1, 1, nsample, 1)], dim=-1)\n    return new_xyz, new_points"
  },
  {
    "path": "README.md",
    "content": "# Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions \n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/benchmarking-robustness-of-3d-point-cloud/3d-point-cloud-classification-on-modelnet40-c)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40-c?p=benchmarking-robustness-of-3d-point-cloud)\n\nThis repo contains the dataset and code for the paper [Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions](https://arxiv.org/abs/2201.12296) by Jiachen Sun et al. This codebase is based on [SimpleView](https://github.com/princeton-vl/SimpleView), and we thank the authors for their great contributions.\n\n## ModelNet40-C\n![image](https://github.com/jiachens/ModelNet40-C/blob/master/img/1.png)\n\n![image](https://github.com/jiachens/ModelNet40-C/blob/master/img/54.png)\n\n\nMore visualizations can be found [here](https://github.com/jiachens/ModelNet40-C/blob/master/img).\n\n[Download ModelNet40-C from Google Drive.](https://drive.google.com/drive/folders/10YeQRh92r_WdL-Dnog2zQfFr03UW4qXX?usp=sharing)\n\n[Download ModelNet40-C using our provided script.](#download-datasets-including-modelnet40-c-and-pre-trained-models)\n\n[Download ModelNet40-C from Zenodo.](https://zenodo.org/record/6017834#.YgNeKu7MK3J)\n\n## ModelNet40-C Leaderboard \n\n**Architecture+Data Augmentation Leaderboard** </br>\n\n| **Architecture** | **Data Augmentation**    | **Corruption Error Rate (%)** | **Clean Error Rate (%)** | **Checkpoint**                                                                                   |\n|------------------|---------------|:-------------------------:|:--------------------:|--------------------------------------------------------------------------------------------------|\n| PCT              | PointCutMix-R |            16.3           |          7.2         | 
[checkpoint](https://drive.google.com/file/d/1OcH0o7V_RhAOj9pSuS39G43VrBWS-1v3/view?usp=sharing) |\n| PCT              | PointCutMix-K |            16.5           |          6.9         | [checkpoint](https://drive.google.com/file/d/1T4KwMkgAqAItHZc-Q96H1qGMPNoObJkJ/view?usp=sharing) |\n| DGCNN            | PointCutMix-R |            17.3           |          6.8         | [checkpoint](https://drive.google.com/file/d/1Z_6D_MmjecDHhY2q-I-aok9nlD9RkAS1/view?usp=sharing) |\n| PCT              | RSMix         |            17.3           |          6.9         | [checkpoint](https://drive.google.com/file/d/18BqbMCpdbEGdyQVdMwYPDrff5bmgeF9B/view?usp=sharing) |\n| DGCNN            | PointCutMix-K |            17.3           |          7.4         | [checkpoint](https://drive.google.com/file/d/1rUQApmyEJUpv7JzkhJuwEeZOmmZ9vEDU/view?usp=sharing) |\n| RSCNN            | PointCutMix-R |            17.9           |          7.6         | [checkpoint](https://drive.google.com/file/d/1EggUiFcCgpHOwjgQKRgBxST1utfOAryc/view?usp=sharing) |\n| DGCNN            | RSMix         |            18.1           |          7.1         | [checkpoint](https://drive.google.com/file/d/11tNaF-YsJ6hZNm2pY6LX6Ny-ceGkI0Cr/view?usp=sharing) |\n| PCT              | PGD Adv Train |            18.4           |          8.9         | [checkpoint](https://drive.google.com/file/d/1Y7JaW-CLPCcqQQGiuL9BfVKfAkm6MxEA/view?usp=sharing) |\n| PointNet++       | PointCutMix-R |            19.1           |          7.1         | [checkpoint](https://drive.google.com/file/d/1un_H1oq18MrN604mbR3htBNqdOgnXqwQ/view?usp=sharing) |\n| PointNet++       | PointMixup    |            19.3           |          7.1         | [checkpoint](https://drive.google.com/file/d/1fzFOeJcenn7a4glsfs7IEcSTjovZoAkB/view?usp=sharing) |\n| PCT              | PointMixup    |            19.5           |          7.4         | [checkpoint](https://drive.google.com/file/d/1OcBm-PCImcW8h1mb9ZY4CcX2nDN_rB8b/view?usp=sharing) |\n| SimpleView       
| PointCutMix-R |            19.7           |          7.9         | [checkpoint](https://drive.google.com/file/d/178LQKtmCeNIbdPXYZXZHRmAQt-YCY_eL/view?usp=sharing) |\n| RSCNN            | PointMixup    |            19.8           |          7.2         | [checkpoint](https://drive.google.com/file/d/1FRPU_QTR3vda1CqPWKkREprIZshv4cYk/view?usp=sharing) |\n| PointNet++       | PointCutMix-K |            20.2           |          6.7         | [checkpoint](https://drive.google.com/file/d/1JLL7ym-fMUS4VFisf-AENB5trYJb_0-J/view?usp=sharing) |\n\nWe allow users to directly download all pre-trained models with every data augmentation method [here](#download-datasets-including-modelnet40-c-and-pre-trained-models).\n\n**Architecture Leaderboard** </br>\n\n| **Architecture** | **Corruption Error Rate (%)** | **Clean Error Rate (%)** | **Checkpoint**                                                                                   | \n|------------------|:-------------------------:|:--------------------:|--------------------------------------------------------------------------------------------------|\n| CurveNet       |            22.7           |          6.6         | checkpoint |\n| PointNet++       |            23.6           |          7.0         | [checkpoint](https://drive.google.com/file/d/18_297KJ8slsJq1rGDsvuQ29VICs-EJTa/view?usp=sharing) |\n| PCT              |            25.5           |          7.1         | [checkpoint](https://drive.google.com/file/d/1NFAhupQKn-sBLYW1YpUAf4jdqMpFcV7Z/view?usp=sharing) |\n| GDANet       |            25.6           |          7.5         | checkpoint |\n| DGCNN            |            25.9           |          7.4         | [checkpoint](https://drive.google.com/file/d/1JMCmujJM4J_OyxuZuDN4befFmtG1_p49/view?usp=sharing) |\n| RSCNN            |            26.2           |          7.7         | [checkpoint](https://drive.google.com/file/d/1RKhXKjZvKvZM2the2qqFhnytAX2H634U/view?usp=sharing) |\n| SimpleView       |            
27.2           |          6.1         | [checkpoint](https://drive.google.com/file/d/1jscF5p3Q7DHWl-FgGGemQP3CeXITsTyY/view?usp=sharing) |\n| PointNet         |            28.3           |          9.3         | [checkpoint](https://drive.google.com/file/d/1eW26u0nm6HETwDSiCyCEoLLY3WnOVt73/view?usp=sharing) |\n| PointMLP         |            31.9           |          6.3         | checkpoint |\n| PointMLP-Elite         |            32.4           |          7.2         | checkpoint |\n\nMore models' results are coming soon.\n\nWe allow users to directly download all pre-trained models with standard training [here](#download-datasets-including-modelnet40-c-and-pre-trained-models).\n\n## Getting Started\n\nFirst clone the repository. We refer to the directory containing the code as `ModelNet40-C`.\n\n```\ngit clone --recurse-submodules git@github.com:jiachens/ModelNet40-C.git\n```\n\n#### Requirements\nThe code is tested on Linux OS with Python version **3.7.5**, CUDA version **10.0**, CuDNN version **7.6** and GCC version **5.4**. We recommend using these versions, especially for installing [pointnet++ custom CUDA modules](https://github.com/erikwijmans/Pointnet2_PyTorch/tree/22e8cf527b696b63b66f3873d80ae5f93744bdef).\n\n[02-23-2022] The updated code is tested on Python version **3.7.5**, CUDA version **11.4**, CuDNN version **8.2** and GCC version **7.5** with the latest ```torch``` and ```torchvision``` libs, but we still suggest the original setup in case of instability.\n\n#### Install Libraries\nWe recommend you first install [Anaconda](https://anaconda.org/) and create a virtual environment.\n```\nconda create --name modelnetc python=3.7.5\n```\n\nActivate the virtual environment and install the libraries. Make sure you are in `ModelNet40-C`.\n```\nconda activate modelnetc\npip install -r requirements.txt\nconda install sed  # for downloading data and pretrained models\n```\n\nFor PointNet++, we need to install custom CUDA modules. 
Make sure you have access to a GPU during this step. You might need to set the appropriate `TORCH_CUDA_ARCH_LIST` environment variable depending on your GPU model. The following command should work for most cases: `export TORCH_CUDA_ARCH_LIST=\"6.0;6.1;6.2;7.0;7.5\"`. However, if the install fails, check whether `TORCH_CUDA_ARCH_LIST` is correctly set. More details can be found [here](https://en.wikipedia.org/wiki/CUDA#GPUs_supported).\n\nThe third-party modules `pointnet2_pyt`, `PCT_Pytorch`, `emd`, and `PyGeM` can be installed with the following script.\n\n```\n./setup.sh\n```\n\n#### Download Datasets Including ModelNet40-C and Pre-trained Models\nMake sure you are in `ModelNet40-C`. The `download.sh` script can be used for downloading all the data and the pretrained models. It also places them at the correct locations.\n\nTo download ModelNet40, execute the following command. This will download the ModelNet40 point cloud dataset released with pointnet++ as well as the validation splits used in our work.\n```\n./download.sh modelnet40\n```\nTo generate the ModelNet40-C dataset, please run:\n```\npython data/process.py\npython data/generate_c.py\n```\nNOTE that the generation needs a monitor connected since the Open3D library does not support background rendering.\n\nWe also allow users to download ModelNet40-C directly. 
Please fill out this [Google form](https://docs.google.com/forms/d/e/1FAIpQLSdrzt8EtQdjGMlwIwWAzb39KzzVzijpK6-sPEaps07MjQwGGQ/viewform?usp=sf_link) when downloading our dataset.\n```\n./download.sh modelnet40_c\n```\nTo download the pretrained models with the standard training recipe, execute the following command.\n```\n./download.sh cor_exp\n```\nTo download the pretrained models using different data augmentation strategies, execute the following command.\n```\n./download.sh runs\n```\n\n#### New Features\n\\[02-23-2022\\]\n- We include PointMLP-Elite and GDANet in our benchmark\n\n\\[02-18-2022\\]\n- We include CurveNet and PointMLP in our benchmark\n\n\\[01-28-2022\\]\n- We include Point Cloud Transformer (PCT) in our benchmark\n- `ModelNet40-C/configs` contains config files to enable different data augmentations and test-time adaptation methods\n- `ModelNet40-C/aug_utils.py` contains the data augmentation code in our paper\n- `ModelNet40-C/third_party` contains the test-time adaptation methods used in our paper\n\n#### Code Organization In Original SimpleView\n- `ModelNet40-C/models`: Code for various models in PyTorch.\n- `ModelNet40-C/configs`: Configuration files for various models.\n- `ModelNet40-C/main.py`: Training and testing any model.\n- `ModelNet40-C/configs.py`: Hyperparameters for different models and dataloader.\n- `ModelNet40-C/dataloader.py`: Code for different variants of the dataloader.\n- `ModelNet40-C/*_utils.py`: Code for various utility functions.\n\n## Running Experiments\n\n#### Training and Config files\nTo train or test any model, we use the `main.py` script. The format for running this script is as follows.\n```\npython main.py --exp-config <path to the config>\n```\n\nThe config files are named as `<protocol>_<model_name><_extra>_run_<seed>.yaml` (`<protocol> ∈ [dgcnn, pointnet2, rscnn]`; `<model_name> ∈ [dgcnn, pointnet2, rscnn, pointnet, simpleview]` ). 
For example, the config file to run an experiment for PointNet++ in the DGCNN protocol with seed 1 is `dgcnn_pointnet2_run_1.yaml`. To run a new experiment with a different seed, you need to change the `SEED` parameter in the config file. All of our experiments are done based on seed 1.\n\nWe additionally provide training-time config files for PointCutMix: `configs/cutmix`, PointMixup: `configs/mixup`, RSMix: `configs/rsmix`, and PGD-based adversarial training: `configs/pgd`.\n\nFor example, to train PCT with PointCutMix-R, please use the following command:\n```\npython main.py --exp-config configs/cutmix/pct_r.yaml\n```\n\n#### Evaluate a pretrained model\nWe provide pretrained models. They can be downloaded using the `./download.sh cor_exp` and `./download.sh runs` commands and are stored in the `ModelNet40-C/runs` (for data augmentation recipes) and `ModelNet40-C/cor_exp` (for standard trained models) folders. To test a pretrained model, the command is of the following format.\n\nAdditionally, we provide test-time config files in `configs/bn` and `configs/tent` for BN and TENT in our paper with the following commands:\n```\npython main.py --entry test --model-path <cor_exp/runs>/<cfg_name>/<model_name>.pth --exp-config configs/<cfg_name>.yaml\n```\n\nWe list all the evaluation commands in the `eval_cor.sh`, `eval_og.sh`, and `eval_tent_cutmix.sh` scripts. Note that in `eval_cor.sh` it is expected that PGD with PointNet++, RSCNN, and SimpleView does not have outputs, since these models do not fit the adversarial training framework. We have mentioned this in our paper.\n\n## Citation\nPlease cite our paper and SimpleView if you use our benchmark and analysis results. Thank you!\n```\n@article{sun2022benchmarking,\n      title={Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions}, \n      author={Jiachen Sun and Qingzhao Zhang and Bhavya Kailkhura and Zhiding Yu and Chaowei Xiao and Z. 
Morley Mao},\n      journal={arXiv preprint arXiv:2201.12296},\n      year={2022}\n}\n```\n```\n@article{goyal2021revisiting,\n  title={Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline},\n  author={Goyal, Ankit and Law, Hei and Liu, Bowei and Newell, Alejandro and Deng, Jia},\n  journal={International Conference on Machine Learning},\n  year={2021}\n}\n```\n\n## References\n\n[1] [Zhang, Jinlai, et al. \"PointCutMix: Regularization Strategy for Point Cloud Classification.\" arXiv preprint arXiv:2101.01461 (2021).](https://arxiv.org/pdf/2101.01461.pdf)\n\n[2] [Chen, Yunlu, et al. \"Pointmixup: Augmentation for point clouds.\" Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16. Springer International Publishing, 2020.](https://link.springer.com/content/pdf/10.1007/978-3-030-58580-8_20.pdf)\n\n[3] [Lee, Dogyoon, et al. \"Regularization Strategy for Point Cloud via Rigidly Mixed Sample.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.](https://openaccess.thecvf.com/content/CVPR2021/papers/Lee_Regularization_Strategy_for_Point_Cloud_via_Rigidly_Mixed_Sample_CVPR_2021_paper.pdf)\n\n[4] [Sun, Jiachen, et al. \"Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions.\" Advances in Neural Information Processing Systems 34 (2021).](https://proceedings.neurips.cc/paper/2021/file/82cadb0649a3af4968404c9f6031b233-Paper.pdf)\n\n[5] [Schneider, Steffen, et al. \"Improving robustness against common corruptions by covariate shift adaptation.\" arXiv preprint arXiv:2006.16971 (2020).](https://arxiv.org/pdf/2006.16971.pdf)\n\n[6] [Wang, Dequan, et al. \"Tent: Fully test-time adaptation by entropy minimization.\" arXiv preprint arXiv:2006.10726 (2020).](https://arxiv.org/pdf/2006.10726.pdf)\n\n[7] [Qi, Charles R., et al. 
\"Pointnet: Deep learning on point sets for 3d classification and segmentation.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.](https://openaccess.thecvf.com/content_cvpr_2017/papers/Qi_PointNet_Deep_Learning_CVPR_2017_paper.pdf)\n\n[8] [Qi, Charles R., et al. \"Pointnet++: Deep hierarchical feature learning on point sets in a metric space.\" arXiv preprint arXiv:1706.02413 (2017).](https://arxiv.org/pdf/1706.02413.pdf)\n\n[9] [Liu, Yongcheng, et al. \"Relation-shape convolutional neural network for point cloud analysis.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.](https://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Relation-Shape_Convolutional_Neural_Network_for_Point_Cloud_Analysis_CVPR_2019_paper.pdf)\n\n[10] [Wang, Yue, et al. \"Dynamic graph cnn for learning on point clouds.\" Acm Transactions On Graphics (tog) 38.5 (2019): 1-12.](https://dl.acm.org/doi/pdf/10.1145/3326362)\n\n[11] [Goyal, Ankit, et al. \"Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline.\" arXiv preprint arXiv:2106.05304 (2021).](https://arxiv.org/pdf/2106.05304.pdf)\n\n[12] [Guo, Meng-Hao, et al. \"Pct: Point cloud transformer.\" Computational Visual Media 7.2 (2021): 187-199.](https://link.springer.com/content/pdf/10.1007/s41095-021-0229-5.pdf)\n\n\n[13] [Xiang, Tiange, et al. \"Walk in the cloud: Learning curves for point clouds shape analysis.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.](https://arxiv.org/pdf/2105.01288.pdf)\n\n[14] [Ma, Xu, et al. \"Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework.\" arXiv preprint arXiv:2202.07123 (2022).](https://arxiv.org/pdf/2202.07123.pdf)\n\n[15] [Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud.](https://arxiv.org/abs/2012.10921)\n"
  },
  {
    "path": "all_utils.py",
    "content": "import tensorboardX\nimport pdb\nimport sys\nfrom collections import MutableMapping, Hashable\nimport csv\nimport os\nimport torch\nimport torch.nn.functional as F\nimport numpy as np\nfrom progressbar import ProgressBar\nimport sys\n\n\n# Additional information that might be necessary to get the model\nDATASET_NUM_CLASS = {\n    'modelnet40': 40,\n    'modelnet40_c': 40,\n    'modelnet40_rscnn': 40,\n    'modelnet40_pn2': 40,\n    'modelnet40_dgcnn': 40,\n}\n\nclass TensorboardManager:\n    def __init__(self, path):\n        self.writer = tensorboardX.SummaryWriter(path)\n\n    def update(self, split, step, vals):\n        for k, v in vals.items():\n            self.writer.add_scalar('%s_%s' % (split, k), v, step)\n\n    def close(self):\n        self.writer.flush()\n        self.writer.close()\n\n\nclass TrackTrain:\n    def __init__(self, early_stop_patience):\n        self.early_stop_patience = early_stop_patience\n        self.counter = -1\n        self.best_epoch_val = -1\n        self.best_epoch_train = -1\n        self.best_epoch_test = -1\n        self.best_val = float(\"-inf\")\n        self.best_test = float(\"-inf\")\n        self.best_train = float(\"-inf\")\n        self.test_best_val = float(\"-inf\")\n\n    def record_epoch(self, epoch_id, train_metric, val_metric, test_metric):\n        assert epoch_id == (self.counter + 1)\n        self.counter += 1\n\n        if val_metric >= self.best_val:\n            self.best_val = val_metric\n            self.best_epoch_val = epoch_id\n            self.test_best_val = test_metric\n\n        if test_metric >= self.best_test:\n            self.best_test = test_metric\n            self.best_epoch_test = epoch_id\n\n        if train_metric >= self.best_train:\n            self.best_train = train_metric\n            self.best_epoch_train = epoch_id\n\n\n    def save_model(self, epoch_id, split):\n        \"\"\"\n        Whether to save the current model or not\n        :param epoch_id:\n        
:param split:\n        :return:\n        \"\"\"\n        assert epoch_id == self.counter\n        if split == 'val':\n            if self.best_epoch_val == epoch_id:\n                _save_model = True\n            else:\n                _save_model = False\n        elif split == 'test':\n            if self.best_epoch_test == epoch_id:\n                _save_model = True\n            else:\n                _save_model = False\n        elif split == 'train':\n            if self.best_epoch_train == epoch_id:\n                _save_model = True\n            else:\n                _save_model = False\n        else:\n            assert False\n\n        return _save_model\n\n    def early_stop(self, epoch_id):\n        assert epoch_id == self.counter\n        if (epoch_id - self.best_epoch_val) > self.early_stop_patience:\n            return True\n        else:\n            return False\n\n\nclass PerfTrackVal:\n    \"\"\"\n    Records epoch wise performance for validation\n    \"\"\"\n    def __init__(self, task, extra_param=None):\n        self.task = task\n        if task in ['cls', 'cls_trans']:\n            assert extra_param is None\n            self.all = []\n            self.class_seen = None\n            self.class_corr = None\n        else:\n            assert False\n\n    def update(self, data_batch, out):\n        if self.task in ['cls', 'cls_trans']:\n            correct = self.get_correct_list(out['logit'], data_batch['label'])\n            self.all.extend(correct)\n            self.update_class_see_corr(out['logit'], data_batch['label'])\n        else:\n            assert False\n\n    def agg(self):\n        if self.task in ['cls', 'cls_trans']:\n            perf = {\n                'acc': self.get_avg_list(self.all),\n                'class_acc': np.mean(np.array(self.class_corr) / np.array(self.class_seen, dtype=float))\n            }\n        else:\n            assert False\n        return perf\n\n    def update_class_see_corr(self, logit, label):\n    
    if self.class_seen is None:\n            num_class = logit.shape[1]\n            self.class_seen = [0] * num_class\n            self.class_corr = [0] * num_class\n\n        pred_label = logit.argmax(axis=1).to('cpu').tolist()\n        for _pred_label, _label in zip(pred_label, label):\n            self.class_seen[_label] += 1\n            if _pred_label == _label:\n                self.class_corr[_pred_label] += 1\n\n    @staticmethod\n    def get_correct_list(logit, label):\n        label = label.to(logit.device)\n        pred_class = logit.argmax(axis=1)\n        return (label == pred_class).to('cpu').tolist()\n    @staticmethod\n    def get_avg_list(all_list):\n        for x in all_list:\n            assert isinstance(x, bool)\n        return sum(all_list) / len(all_list)\n\n\nclass PerfTrackTrain(PerfTrackVal):\n    \"\"\"\n    Records epoch wise performance during training\n    \"\"\"\n    def __init__(self, task, extra_param=None):\n        super().__init__(task, extra_param)\n        # add a list to track loss\n        self.all_loss = []\n\n    def update_loss(self, loss):\n        self.all_loss.append(loss.item())\n\n    def agg_loss(self):\n        # print(self.all_loss)\n        return sum(self.all_loss) / len(self.all_loss)\n\n    def update_all(self, data_batch, out, loss):\n        self.update(data_batch, out)\n        self.update_loss(loss)\n\n\n# source: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/util.py\ndef smooth_loss(pred, gold):\n    eps = 0.2\n\n    n_class = pred.size(1)\n\n    one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n    one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n    log_prb = F.log_softmax(pred, dim=1)\n\n    loss = -(one_hot * log_prb).sum(dim=1).mean()\n\n    return loss\n\n\ndef rscnn_voting_evaluate_cls(loader, model, data_batch_to_points_target,\n                              points_to_inp, out_to_prob, log_file):\n    \"\"\"\n    :param loader:\n    :param model:\n    
:param data_batch_to_points_target:\n    :param points_to_inp: transform the points to input for the particular model\n    that is evaluated\n    :param out_to_prob:\n    :return:\n    \"\"\"\n    import rs_cnn.data.data_utils as d_utils\n    import pointnet2.utils.pointnet2_utils as pointnet2_utils\n    import numpy as np\n\n    terminal = sys.stdout\n    log = open(log_file, \"w\")\n\n    NUM_REPEAT = 300\n    NUM_VOTE = 10\n    PointcloudScale = d_utils.PointcloudScale()   # initialize random scaling\n\n    def data_aug(vote_id, pc):\n        # furthest point sampling: (B, npoint)\n        fps_idx = pointnet2_utils.furthest_point_sample(pc, 1200)\n        new_fps_idx = fps_idx[:, np.random.choice(1200, num_points, False)]\n        new_points = pointnet2_utils.gather_operation(pc.transpose(1, 2).contiguous(), new_fps_idx).transpose(1, 2).contiguous()\n        if vote_id > 0:\n            pc_out = PointcloudScale(new_points)\n        else:\n            pc_out = pc\n        return pc_out\n\n    print(f\"RSCNN EVALUATE, NUM_REPEAT {NUM_REPEAT}, NUM_VOTE {NUM_VOTE}\")\n\n    num_points = loader.dataset.num_points\n    print(f\"Number of points {num_points}\")\n\n    # evaluate\n    sys.stdout.flush()\n    model.eval()\n    global_acc = 0\n    with torch.no_grad():\n        for i in range(NUM_REPEAT):\n            preds = []\n            labels = []\n            for j, data in enumerate(loader, 0):\n                points, target = data_batch_to_points_target(data)\n                points, target = points.cuda(), target.cuda()\n                pred = 0\n                for v in range(NUM_VOTE):\n                    new_points = data_aug(v, points)\n                    inp = points_to_inp(new_points)\n                    out = model(**inp)\n                    prob = out_to_prob(out)\n                    pred += prob\n                    # pred += F.softmax(model(**inp), dim 
= 1)\n\n                pred /= NUM_VOTE\n                target = target.view(-1)\n                _, pred_choice = torch.max(pred.data, -1)\n\n                preds.append(pred_choice)\n                labels.append(target.data)\n\n            preds = torch.cat(preds, 0)\n            labels = torch.cat(labels, 0)\n            acc = (preds == labels).sum().float() / labels.numel()\n            if acc > global_acc:\n                global_acc = acc\n            message1 = 'Repeat %3d \\t Acc: %0.6f' % (i + 1, acc)\n            message2 = '\\nBest voting till now, acc: %0.6f' % (global_acc)\n            message = f'{message1} \\n {message2}'\n            terminal.write(message)\n            log.write(message)\n\n    message = '\\nBest voting acc: %0.6f' % (global_acc)\n    terminal.write(message)\n    log.write(message)\n    log.close()\n\n    return global_acc\n\n\n# https://github.com/charlesq34/pointnet2/blob/master/evaluate.py\n# https://github.com/charlesq34/pointnet2/issues/8\n# we try to keep the variable names similar to the original implementation\ndef pn2_vote_evaluate_cls(dataloader, model, log_file, num_votes=[12]):\n    from pointnet2_tf.utils import provider\n    model.eval()\n\n    terminal = sys.stdout\n    log = open(log_file, \"w\")\n\n    if not isinstance(num_votes, list):\n        num_votes = [num_votes]\n\n    for _num_votes in num_votes:\n        print(f\"num_votes: {_num_votes}\")\n\n        NUM_CLASSES = DATASET_NUM_CLASS[dataloader.dataset.dataset_name]\n        SHAPE_NAMES = [line.rstrip() for line in\n                       open('./data/modelnet40_ply_hdf5_2048/shape_names.txt')]\n\n        total_correct = 0\n        total_seen = 0\n        total_seen_class = [0 for _ in range(NUM_CLASSES)]\n        total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n        with torch.no_grad():\n            for _batch_data in dataloader:\n                # based on https://github.com/charlesq34/pointnet2/blob/master/evaluate.py#L125-L150\n                batch_data, batch_label = np.array(_batch_data['pc'].cpu()), np.array(_batch_data['label'].cpu())\n                bsize = batch_data.shape[0]\n                BATCH_SIZE = batch_data.shape[0]\n                NUM_POINT = batch_data.shape[1]\n\n                batch_pred_sum = np.zeros((BATCH_SIZE, NUM_CLASSES))  # score for classes\n                for vote_idx in range(_num_votes):\n                    # Shuffle point order to achieve different farthest samplings\n                    shuffled_indices = np.arange(NUM_POINT)\n                    np.random.shuffle(shuffled_indices)\n                    rotated_data = provider.rotate_point_cloud_by_angle(\n                        batch_data[:, shuffled_indices, :], vote_idx/float(_num_votes) * np.pi * 2)\n\n                    inp = {'pc': torch.tensor(rotated_data)}\n                    out = model(**inp)\n                    pred_val = np.array(out['logit'].cpu())\n                    batch_pred_sum += pred_val\n\n                pred_val = np.argmax(batch_pred_sum, 1)\n                correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n                total_correct += correct\n                total_seen += bsize\n\n                for i in range(bsize):\n                    l = batch_label[i]\n                    total_seen_class[l] += 1\n                    total_correct_class[l] += (pred_val[i] == l)\n\n            # np.float was removed in NumPy 1.24+; use the builtin float\n            class_accuracies = np.array(total_correct_class) / np.array(total_seen_class, dtype=float)\n            message = \"\"\n            for i, name in enumerate(SHAPE_NAMES):\n                message += f\"\\n {'%10s: %0.3f' % (name, class_accuracies[i])}\"\n            message += f\"\\n {'eval accuracy: %f' % (total_correct / float(total_seen))}\"\n            message += f\"\\n {'eval avg class acc: %f' % (np.mean(class_accuracies))}\"\n            terminal.write(message)\n            log.write(message)\n    log.close()\n"
  },
  {
    "path": "aug_utils.py",
    "content": "import numpy as np\nimport torch\nimport sys\nfrom main import get_loss\nsys.path.append(\"./emd/\")\nimport emd_module as emd\n\ndef cutmix_r(data_batch,cfg):\n    r = np.random.rand(1)\n    if cfg.AUG.BETA > 0 and r < cfg.AUG.PROB:\n        lam = np.random.beta(cfg.AUG.BETA, cfg.AUG.BETA)\n        B = data_batch['pc'].size()[0]\n\n        rand_index = torch.randperm(B).cuda()\n        target_a = data_batch['label']\n        target_b = data_batch['label'][rand_index]\n\n        point_a = torch.zeros(B, 1024, 3)\n        point_b = torch.zeros(B, 1024, 3)\n        point_c = torch.zeros(B, 1024, 3)\n        point_a = data_batch['pc']\n        point_b = data_batch['pc'][rand_index]\n        point_c = data_batch['pc'][rand_index]\n        # point_a, point_b, point_c = point_a.to(device), point_b.to(device), point_c.to(device)\n\n        remd = emd.emdModule()\n        remd = remd.cuda()\n        dis, ind = remd(point_a, point_b, 0.005, 300)\n        for ass in range(B):\n            point_c[ass, :, :] = point_c[ass, ind[ass].long(), :]\n\n        int_lam = int(cfg.DATALOADER.MODELNET40_DGCNN.num_points * lam)\n        int_lam = max(1, int_lam)\n        gamma = np.random.choice(cfg.DATALOADER.MODELNET40_DGCNN.num_points, int_lam, replace=False, p=None)\n        for i2 in range(B):\n            data_batch['pc'][i2, gamma, :] = point_c[i2, gamma, :]\n\n        # adjust lambda to exactly match point ratio\n        lam = int_lam * 1.0 / cfg.DATALOADER.MODELNET40_DGCNN.num_points\n        # points = data_batch['pc'].transpose(2, 1)\n        data_batch['label_2'] = target_b\n        data_batch['lam'] = lam\n\n    return data_batch\n        # pred, trans_feat = model(points)\n        # loss = criterion(pred, target_a.long()) * (1. 
- lam) + criterion(pred, target_b.long()) * lam\n\n\ndef cutmix_k(data_batch, cfg):\n    r = np.random.rand(1)\n    if cfg.AUG.BETA > 0 and r < cfg.AUG.PROB:\n        lam = np.random.beta(cfg.AUG.BETA, cfg.AUG.BETA)\n        B = data_batch['pc'].size()[0]\n\n        rand_index = torch.randperm(B).cuda()\n        target_a = data_batch['label']\n        target_b = data_batch['label'][rand_index]\n\n        point_a = data_batch['pc']\n        point_b = data_batch['pc'][rand_index]\n        point_c = data_batch['pc'][rand_index]\n\n        remd = emd.emdModule()\n        remd = remd.cuda()\n        dis, ind = remd(point_a, point_b, 0.005, 300)\n        for ass in range(B):\n            point_c[ass, :, :] = point_c[ass, ind[ass].long(), :]\n\n        int_lam = int(cfg.DATALOADER.MODELNET40_DGCNN.num_points * lam)\n        int_lam = max(1, int_lam)\n\n        random_point = torch.from_numpy(np.random.choice(1024, B, replace=False, p=None))\n        # kNN\n        ind1 = torch.tensor(range(B))\n        query = point_a[ind1, random_point].view(B, 1, 3)\n        dist = torch.sqrt(torch.sum((point_a - query.repeat(1, cfg.DATALOADER.MODELNET40_DGCNN.num_points, 1)) ** 2, 2))\n        idxs = dist.topk(int_lam, dim=1, largest=False, sorted=True).indices\n        for i2 in range(B):\n            data_batch['pc'][i2, idxs[i2], :] = point_c[i2, idxs[i2], :]\n        # adjust lambda to exactly match point ratio\n        lam = int_lam * 1.0 / cfg.DATALOADER.MODELNET40_DGCNN.num_points\n        # points = points.transpose(2, 1)\n        # pred, trans_feat = model(points)\n        # loss = criterion(pred, target_a.long()) * (1. 
- lam) + criterion(pred, target_b.long()) * lam\n        data_batch['label_2'] = target_b\n        data_batch['lam'] = lam\n\n    return data_batch\n\n\ndef mixup(data_batch, cfg):\n    batch_size = data_batch['pc'].size()[0]\n    idx_minor = torch.randperm(batch_size)\n    mixrates = (0.5 - np.abs(np.random.beta(cfg.AUG.MIXUPRATE, cfg.AUG.MIXUPRATE, batch_size) - 0.5))\n    label_main = data_batch['label']\n    label_minor = data_batch['label'][idx_minor]\n    label_new = torch.zeros(batch_size, 40)\n    for i in range(batch_size):\n        if label_main[i] == label_minor[i]: # same label\n            label_new[i][label_main[i]] = 1.0\n        else:\n            label_new[i][label_main[i]] = 1 - mixrates[i]\n            label_new[i][label_minor[i]] = mixrates[i]\n    label = label_new\n\n    data_minor = data_batch['pc'][idx_minor]\n    mix_rate = torch.tensor(mixrates).float()\n    mix_rate = mix_rate.unsqueeze_(1).unsqueeze_(2)\n\n    mix_rate_expand_xyz = mix_rate.expand(data_batch['pc'].shape)\n\n    remd = emd.emdModule()\n    remd = remd.cuda()\n    _, ass = remd(data_batch['pc'], data_minor, 0.005, 300)\n    ass = ass.long()\n    for i in range(batch_size):\n        data_minor[i] = data_minor[i][ass[i]]\n    data_batch['pc'] = data_batch['pc'] * (1 - mix_rate_expand_xyz) + data_minor * mix_rate_expand_xyz\n    data_batch['label_2'] = label_minor\n    data_batch['lam'] = mix_rate.squeeze()\n\n    return data_batch\n\n\ndef knn_points(k, xyz, query, nsample=512):\n    B, N, C = xyz.shape\n    _, S, _ = query.shape # S=1\n\n    tmp_idx = np.arange(N)\n    group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)\n    sqrdists = square_distance(query, xyz) # Bx1,N squared distances\n    tmp = np.sort(sqrdists, axis=2)\n    knn_dist = np.zeros((B,1))\n    for i in range(B):\n        knn_dist[i][0] = tmp[i][0][k]\n        group_idx[i][sqrdists[i]>knn_dist[i][0]]=N\n    # group_idx[sqrdists > radius ** 2] = N\n    # print(\"group idx : 
\\n\",group_idx)\n    # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor\n    group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]\n    # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])\n    tmp_idx = group_idx[:,:,0]\n    group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)\n    # repeat the first value of the idx in each batch \n    mask = group_idx == N\n    group_idx[mask] = group_first[mask]\n    return group_idx\n    \ndef cut_points_knn(data_batch, idx, radius, nsample=512, k=512):\n    \"\"\"\n        input\n        points : BxNx3(=6 with normal)\n        idx : Bx1 one scalar(int) between 0~len(points)\n        \n        output\n        idx : Bxn_sample\n    \"\"\"\n    B, N, C = data_batch.shape\n    B, S = idx.shape\n    query_points = np.zeros((B,1,C))\n    # print(\"idx : \\n\",idx)\n    for i in range(B):\n        query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)\n    # B x n_sample\n    group_idx = knn_points(k=k, xyz=data_batch[:,:,:3], query=query_points[:,:,:3], nsample=nsample)\n    return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6\n\ndef cut_points(data_batch, idx, radius, nsample=512):\n    \"\"\"\n        input\n        points : BxNx3(=6 with normal)\n        idx : Bx1 one scalar(int) between 0~len(points)\n        \n        output\n        idx : Bxn_sample\n    \"\"\"\n    B, N, C = data_batch.shape\n    B, S = idx.shape\n    query_points = np.zeros((B,1,C))\n    # print(\"idx : \\n\",idx)\n    for i in range(B):\n        query_points[i][0]=data_batch[i][idx[i][0]] # Bx1x3(=6 with normal)\n    # B x n_sample\n    group_idx = query_ball_point_for_rsmix(radius, nsample, data_batch[:,:,:3], query_points[:,:,:3])\n    return group_idx, query_points # group_idx: 16x?x6, query_points: 16x1x6\n\n\ndef query_ball_point_for_rsmix(radius, nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        radius: local region radius\n        nsample: max sample 
number in local region\n        xyz: all points, [B, N, 3]\n        new_xyz: query points, [B, S, 3]\n    Return:\n        group_idx: grouped points index, [B, S, nsample], S=1\n    \"\"\"\n    # device = xyz.device\n    B, N, C = xyz.shape\n    _, S, _ = new_xyz.shape\n    # group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])\n    tmp_idx = np.arange(N)\n    group_idx = np.repeat(tmp_idx[np.newaxis,np.newaxis,:], B, axis=0)\n\n    sqrdists = square_distance(new_xyz, xyz)\n    group_idx[sqrdists > radius ** 2] = N\n\n    # group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] # for torch.tensor\n    group_idx = np.sort(group_idx, axis=2)[:, :, :nsample]\n    # group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])\n    tmp_idx = group_idx[:,:,0]\n    group_first = np.repeat(tmp_idx[:,np.newaxis,:], nsample, axis=2)\n    # repeat the first value of the idx in each batch\n    mask = group_idx == N\n    group_idx[mask] = group_first[mask]\n    return group_idx\n\ndef square_distance(src, dst):\n    \"\"\"\n    Calculate the squared Euclidean distance between each pair of points.\n    src^T * dst = xn * xm + yn * ym + zn * zm;\n    sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn;\n    sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm;\n    dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2\n         = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst\n    Input:\n        src: source points, [B, N, C]\n        dst: target points, [B, M, C]\n    Output:\n        dist: per-point square distance, [B, N, M]\n    \"\"\"\n    B, N, _ = src.shape\n    _, M, _ = dst.shape\n    # dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))\n    # dist += torch.sum(src ** 2, -1).view(B, N, 1)\n    # dist += torch.sum(dst ** 2, -1).view(B, 1, M)\n\n    dist = -2 * np.matmul(src, dst.transpose(0, 2, 1))\n    dist += np.sum(src ** 2, -1).reshape(B, N, 1)\n    dist += np.sum(dst ** 2, -1).reshape(B, 1, M)\n\n    return dist\n\n\ndef pts_num_ctrl(pts_erase_idx, 
pts_add_idx):\n    '''\n        input : pts - to erase\n                pts - to add\n        output : pts - to add (number controlled)\n    '''\n    if len(pts_erase_idx)>=len(pts_add_idx):\n        num_diff = len(pts_erase_idx)-len(pts_add_idx)\n        if num_diff == 0:\n            pts_add_idx_ctrled = pts_add_idx\n        else:\n            pts_add_idx_ctrled = np.append(pts_add_idx, pts_add_idx[np.random.randint(0, len(pts_add_idx), size=num_diff)])\n    else:\n        pts_add_idx_ctrled = np.sort(np.random.choice(pts_add_idx, size=len(pts_erase_idx), replace=False))\n    return pts_add_idx_ctrled\n\ndef rsmix(data, cfg, n_sample=512, KNN=False):\n    cut_rad = np.random.beta(cfg.AUG.BETA, cfg.AUG.BETA)\n    data_batch = data['pc'].numpy()\n    label_batch = data['label'].numpy()\n\n    rand_index = np.random.choice(data_batch.shape[0], data_batch.shape[0], replace=False) # label dim : (16,) for model\n\n    if len(label_batch.shape) == 1:\n        label_batch = np.expand_dims(label_batch, axis=1)\n\n    label_a = label_batch[:,0]\n    label_b = label_batch[rand_index][:,0]\n\n    data_batch_rand = data_batch[rand_index] # BxNx3(with normal=6)\n    rand_idx_1 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))\n    rand_idx_2 = np.random.randint(0,data_batch.shape[1], (data_batch.shape[0],1))\n    if KNN:\n        knn_para = min(int(np.ceil(cut_rad*n_sample)),n_sample)\n        pts_erase_idx, query_point_1 = cut_points_knn(data_batch, rand_idx_1, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_1 x 3(or 6)\n        pts_add_idx, query_point_2 = cut_points_knn(data_batch_rand, rand_idx_2, cut_rad, nsample=n_sample, k=knn_para) # B x num_points_in_radius_2 x 3(or 6)\n    else:\n        pts_erase_idx, query_point_1 = cut_points(data_batch, rand_idx_1, cut_rad, nsample=n_sample) # B x num_points_in_radius_1 x 3(or 6)\n        pts_add_idx, query_point_2 = cut_points(data_batch_rand, rand_idx_2, cut_rad, 
nsample=n_sample) # B x num_points_in_radius_2 x 3(or 6)\n    \n    query_dist = query_point_1[:,:,:3] - query_point_2[:,:,:3]\n    \n    pts_replaced = np.zeros((1,data_batch.shape[1],data_batch.shape[2]))\n    lam = np.zeros(data_batch.shape[0],dtype=float)\n\n    for i in range(data_batch.shape[0]):\n        if pts_erase_idx[i][0][0]==data_batch.shape[1]:\n            tmp_pts_replaced = np.expand_dims(data_batch[i], axis=0)\n            lam_tmp = 0\n        elif pts_add_idx[i][0][0]==data_batch.shape[1]:\n            pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)\n            tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)\n            dup_points_idx = np.random.randint(0,len(tmp_pts_erased), size=len(pts_erase_idx_tmp))\n            tmp_pts_replaced = np.expand_dims(np.concatenate((tmp_pts_erased, data_batch[i][dup_points_idx]), axis=0), axis=0)\n            lam_tmp = 0\n        else:\n            pts_erase_idx_tmp = np.unique(pts_erase_idx[i].reshape(n_sample,),axis=0)\n            pts_add_idx_tmp = np.unique(pts_add_idx[i].reshape(n_sample,),axis=0)\n            pts_add_idx_ctrled_tmp = pts_num_ctrl(pts_erase_idx_tmp,pts_add_idx_tmp)\n            tmp_pts_erased = np.delete(data_batch[i], pts_erase_idx_tmp, axis=0) # B x N-num_rad_1 x 3(or 6)\n            # input(\"INPUT : \")\n            tmp_pts_to_add = np.take(data_batch_rand[i], pts_add_idx_ctrled_tmp, axis=0)\n            tmp_pts_to_add[:,:3] = query_dist[i]+tmp_pts_to_add[:,:3]\n            \n            tmp_pts_replaced = np.expand_dims(np.vstack((tmp_pts_erased,tmp_pts_to_add)), axis=0)\n            \n            lam_tmp = len(pts_add_idx_ctrled_tmp)/(len(pts_add_idx_ctrled_tmp)+len(tmp_pts_erased))\n        \n        pts_replaced = np.concatenate((pts_replaced, tmp_pts_replaced),axis=0)\n        lam[i] = lam_tmp\n    \n    data_batch_mixed = np.delete(pts_replaced, [0], axis=0)    \n        \n    data['pc'] = 
torch.FloatTensor(data_batch_mixed)\n    data['label'] = torch.tensor(label_a)\n    data['label_2'] = torch.tensor(label_b)\n    data['lam'] = torch.tensor(lam)\n\n    return data\n\n\ndef pgd(data_batch, model, task, loss_name, dataset_name, step=7, eps=0.05, alpha=0.01):\n    model.eval()\n    data = data_batch['pc']\n    # random start inside the eps-ball\n    adv_data = data.clone() + (torch.rand_like(data) * eps * 2 - eps)\n    adv_data = adv_data.detach()\n    adv_data_batch = {}\n\n    for _ in range(step):\n        adv_data.requires_grad = True\n        out = model(**{'pc': adv_data})\n        adv_data_batch['pc'] = adv_data\n        adv_data_batch['label'] = data_batch['label']\n        model.zero_grad()\n        loss = get_loss(task, loss_name, adv_data_batch, out, dataset_name)\n        loss.backward()\n        with torch.no_grad():\n            # gradient ascent step, then project back into the eps-ball\n            adv_data = adv_data + alpha * adv_data.grad.sign()\n            delta = torch.clamp(adv_data - data, -eps, eps)\n            adv_data = (data + delta).detach_()\n\n    # return the final adversarial points, not the second-to-last iterate\n    adv_data_batch['pc'] = adv_data\n    return adv_data_batch\n"
  },
  {
    "path": "configs/bn/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nADAPT:\n  METHOD: bn\n"
  },
  {
    "path": "configs/bn/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\nADAPT:\n  METHOD: bn"
  },
  {
    "path": "configs/bn/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: bn"
  },
  {
    "path": "configs/bn/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: bn"
  },
  {
    "path": "configs/bn/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: bn"
  },
  {
    "path": "configs/bn/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nADAPT:\n  METHOD: bn"
  },
  {
    "path": "configs/corruption/curvenet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_curvenet_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: curvenet\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/corruption/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/corruption/gdanet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_gdanet_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: gdanet\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/corruption/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n"
  },
  {
    "path": "configs/corruption/pointMLP.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointMLP_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointMLP\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/corruption/pointMLP2.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointMLP2_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointMLP2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/corruption/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/corruption/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/corruption/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/corruption/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/cutmix/dgcnn_k.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5\n\n"
  },
  {
    "path": "configs/cutmix/dgcnn_r.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5\n\n"
  },
  {
    "path": "configs/cutmix/pct_k.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/pct_r.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/pointnet2_k.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/pointnet2_r.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/pointnet_k.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/cutmix/pointnet_r.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/cutmix/rscnn_k.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/rscnn_r.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/cutmix/simpleview_k.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_k_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nAUG:\n  NAME: cutmix_k\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/cutmix/simpleview_r.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: cutmix_r_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nAUG:\n  NAME: cutmix_r\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/dgcnn_curvenet_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_curvenet_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: curvenet\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_0.25_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_0.25_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_0.25_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_0.25_valid_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_0.5_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_0.5_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_0.5_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_0.5_valid_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_ce_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_ce_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_ce_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_ce_valid_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/dgcnn_dgcnn_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_dgcnn_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0001\n"
  },
  {
    "path": "configs/dgcnn_gdanet_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_gdanet_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: gdanet\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pct_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n"
  },
  {
    "path": "configs/dgcnn_pointMLP2_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointMLP2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointMLP2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointMLP_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointMLP_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointMLP\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_0.25_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_0.25_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_0.25_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_0.25_valid_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_0.5_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_0.5_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_0.5_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_0.5_valid_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_ce_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_ce_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_ce_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_ce_valid_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet2_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet2_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_0.25_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_0.25_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_0.25_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_0.25_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_0.5_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_0.5_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_0.5_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_0.5_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_ce_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_ce_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_ce_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_ce_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_pointnet_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_pointnet_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_0.25_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_0.25_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_0.25_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_0.25_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_0.5_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_0.5_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_0.5_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_0.5_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_ce_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_ce_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_ce_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_ce_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_rscnn_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_rscnn_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/dgcnn_simpleview_0.25_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_0.25_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/dgcnn_simpleview_0.25_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.3125_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_0.25_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\n"
  },
  {
    "path": "configs/dgcnn_simpleview_0.5_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_0.5_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/dgcnn_simpleview_0.5_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_split_0.625_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_0.5_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\n"
  },
  {
    "path": "configs/dgcnn_simpleview_ce_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_ce_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/dgcnn_simpleview_ce_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_ce_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\n"
  },
  {
    "path": "configs/dgcnn_simpleview_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/dgcnn_simpleview_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_DGCNN:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: dgcnn_simpleview_valid_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\n"
  },
  {
    "path": "configs/mixup/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5\n\n"
  },
  {
    "path": "configs/mixup/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/mixup/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/mixup/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/mixup/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/mixup/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: mixup_simpleview_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  METRIC: acc\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: mixup\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/pgd/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: pgd_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nAUG:\n  NAME: pgd\n\n"
  },
  {
    "path": "configs/pgd/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: pgd_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n\nAUG:\n  NAME: pgd\n"
  },
  {
    "path": "configs/pgd/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: pgd_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: pgd"
  },
  {
    "path": "configs/pointnet2_dgcnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_dgcnn_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0001\n"
  },
  {
    "path": "configs/pointnet2_dgcnn_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_PN2:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_dgcnn_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0001\n"
  },
  {
    "path": "configs/pointnet2_pointnet2_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_pointnet2_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_pointnet2_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_PN2:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_pointnet2_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_pointnet_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_pointnet_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_pointnet_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_PN2:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_pointnet_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_rscnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_rscnn_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_rscnn_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_PN2:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_rscnn_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/pointnet2_simpleview_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_simpleview_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/pointnet2_simpleview_valid_run_1.yaml",
    "content": "DATALOADER:\n  MODELNET40_PN2:\n    train_data_path: ./data/modelnet40_ply_hdf5_2048/train_minus_valid_files.txt\n    valid_data_path: ./data/modelnet40_ply_hdf5_2048/valid_files.txt\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_pn2\n  EXP_ID: pointnet2_simpleview_valid_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nEXP_EXTRA:\n  no_test: true\n  no_val: false\n  val_eval_freq: 25\n"
  },
  {
    "path": "configs/rscnn_dgcnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_rscnn\n  EXP_ID: rscnn_dgcnn_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\n"
  },
  {
    "path": "configs/rscnn_pointnet2_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_rscnn\n  EXP_ID: rscnn_pointnet2_run_1\n  LOSS_NAME: cross_entropy\n  METRIC: acc\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/rscnn_pointnet_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_rscnn\n  EXP_ID: rscnn_pointnet_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/rscnn_rscnn_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_rscnn\n  EXP_ID: rscnn_rscnn_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\n"
  },
  {
    "path": "configs/rscnn_simpleview_run_1.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_rscnn\n  EXP_ID: rscnn_simpleview_run_1\n  LOSS_NAME: cross_entropy\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\n"
  },
  {
    "path": "configs/rsmix/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_dgcnn_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: dgcnn\n  SEED: 1\n  METRIC: acc\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/rsmix/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\n\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/rsmix/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/rsmix/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/rsmix/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5\n"
  },
  {
    "path": "configs/rsmix/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_dgcnn\n  EXP_ID: rsmix_simpleview_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  METRIC: acc\n  TASK: cls\nTRAIN:\n  l2: 0.0\nAUG:\n  NAME: rsmix\n  BETA: 1.\n  PROB: 0.5"
  },
  {
    "path": "configs/tent/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 16\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nADAPT:\n  METHOD: tent\n  ITER: 10\n"
  },
  {
    "path": "configs/tent/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 16\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/dgcnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_dgcnn_run_1\n  LOSS_NAME: smooth\n  METRIC: acc\n  MODEL_NAME: dgcnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 1e-4\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/pct.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pct_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pct\n  SEED: 1\n  TASK: cls\n  OPTIMIZER: pct\nTRAIN:\n  l2: 1e-4\n  learning_rate: 0.0001\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/pointnet.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet_run_1 \n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet\n  SEED: 1\n  TASK: cls_trans\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/pointnet2.yaml",
    "content": "DATALOADER:\n  batch_size: 16\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_pointnet2_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: pointnet2\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/rscnn.yaml",
    "content": "DATALOADER:\n  batch_size: 32\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_rscnn_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: rscnn\n  SEED: 1\n  TASK: cls\nTRAIN:\n  l2: 0.0\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs/tent_cutmix/simpleview.yaml",
    "content": "DATALOADER:\n  batch_size: 18\n  num_workers: 0\nEXP:\n  DATASET: modelnet40_c\n  EXP_ID: c_simpleview_run_1\n  LOSS_NAME: smooth\n  MODEL_NAME: simpleview\n  SEED: 1\n  TASK: cls\nADAPT:\n  METHOD: tent\n  ITER: 10"
  },
  {
    "path": "configs.py",
    "content": "from yacs.config import CfgNode as CN\n\n_C = CN()\n# -----------------------------------------------------------------------------\n# EXPERIMENT\n# -----------------------------------------------------------------------------\n_C.EXP = CN()\n_C.EXP.EXP_ID = \"\"\n_C.EXP.SEED = 0\n_C.EXP.TASK = 'cls'\n_C.EXP.DATASET = 'modelnet40'\n_C.EXP.MODEL_NAME = 'mv'\n_C.EXP.LOSS_NAME = 'cross_entropy'\n_C.EXP.OPTIMIZER = 'vanilla'\n_C.EXP.METRIC = 'acc'\n#------------------------------------------------------------------------------\n# Extra Experiment Parameters\n#------------------------------------------------------------------------------\n_C.EXP_EXTRA = CN()\n_C.EXP_EXTRA.no_val = True\n_C.EXP_EXTRA.no_test = False\n_C.EXP_EXTRA.val_eval_freq = 1\n_C.EXP_EXTRA.test_eval_freq = 1\n_C.EXP_EXTRA.save_ckp = 25\n# -----------------------------------------------------------------------------\n# DATALOADER (contains things common across the datasets)\n# -----------------------------------------------------------------------------\n_C.DATALOADER = CN()\n_C.DATALOADER.batch_size = 60\n_C.DATALOADER.num_workers = 0\n# -----------------------------------------------------------------------------\n# TRAINING DETAILS (contains things common across the training)\n# -----------------------------------------------------------------------------\n_C.TRAIN = CN()\n_C.TRAIN.num_epochs = 300\n_C.TRAIN.learning_rate = 1e-3\n_C.TRAIN.lr_decay_factor = 0.5\n_C.TRAIN.lr_reduce_patience = 10\n_C.TRAIN.l2 = 0.0\n_C.TRAIN.early_stop = 300\n_C.TRAIN.lr_clip = 0.00001\n#-----------------------------------------------------------------------------\n# MODELNET40_RSCNN\n#-----------------------------------------------------------------------------\n_C.DATALOADER.MODELNET40_RSCNN = CN()\n_C.DATALOADER.MODELNET40_RSCNN.data_path       = './data/'\n_C.DATALOADER.MODELNET40_RSCNN.train_data_path = 'train_files.txt'\n_C.DATALOADER.MODELNET40_RSCNN.valid_data_path = 
'train_files.txt'\n_C.DATALOADER.MODELNET40_RSCNN.test_data_path  = 'test_files.txt'\n_C.DATALOADER.MODELNET40_RSCNN.num_points      = 1024\n#-----------------------------------------------------------------------------\n# MODELNET40_PN2\n#-----------------------------------------------------------------------------\n_C.DATALOADER.MODELNET40_PN2 = CN()\n_C.DATALOADER.MODELNET40_PN2.train_data_path = './data/modelnet40_ply_hdf5_2048/train_files.txt'\n_C.DATALOADER.MODELNET40_PN2.valid_data_path = './data/modelnet40_ply_hdf5_2048/train_files.txt'\n_C.DATALOADER.MODELNET40_PN2.test_data_path  = './data/modelnet40_ply_hdf5_2048/test_files.txt'\n_C.DATALOADER.MODELNET40_PN2.num_points      = 1024\n#-----------------------------------------------------------------------------\n# MODELNET40_DGCNN\n#-----------------------------------------------------------------------------\n_C.DATALOADER.MODELNET40_DGCNN = CN()\n_C.DATALOADER.MODELNET40_DGCNN.train_data_path = './data/modelnet40_ply_hdf5_2048/train_files.txt'\n_C.DATALOADER.MODELNET40_DGCNN.valid_data_path = './data/modelnet40_ply_hdf5_2048/train_files.txt'\n_C.DATALOADER.MODELNET40_DGCNN.test_data_path  = './data/modelnet40_ply_hdf5_2048/test_files.txt'\n_C.DATALOADER.MODELNET40_DGCNN.num_points      = 1024\n#-----------------------------------------------------------------------------\n# MODELNET40_C\n#-----------------------------------------------------------------------------\n_C.DATALOADER.MODELNET40_C = CN()\n_C.DATALOADER.MODELNET40_C.test_data_path  = './data/modelnet40_c/'\n_C.DATALOADER.MODELNET40_C.corruption      = 'uniform'\n_C.DATALOADER.MODELNET40_C.severity        = 1\n# ----------------------------------------------------------------------------\n# MODEL\n# -----------------------------------------------------------------------------\n_C.MODEL = CN()\n# -----------------------------------------------------------------------------\n# MV MODEL\n# 
-----------------------------------------------------------------------------\n_C.MODEL.MV = CN()\n_C.MODEL.MV.backbone = 'resnet18'\n_C.MODEL.MV.feat_size = 16\n# -----------------------------------------------------------------------------\n# RSCNN MODEL\n# -----------------------------------------------------------------------------\n_C.MODEL.RSCNN = CN()\n_C.MODEL.RSCNN.ssn_or_msn = True\n# -----------------------------------------------------------------------------\n# PN2 MODEL\n# -----------------------------------------------------------------------------\n_C.MODEL.PN2 = CN()\n_C.MODEL.PN2.version_cls = 1.0\n\n_C.AUG = CN()\n_C.AUG.NAME = 'none'\n_C.AUG.BETA = 1.\n_C.AUG.PROB = 0.5\n_C.AUG.MIXUPRATE = 0.4\n\n_C.ADAPT = CN()\n_C.ADAPT.METHOD = 'none'\n_C.ADAPT.ITER = 1\n\n\ndef get_cfg_defaults():\n  \"\"\"Get a yacs CfgNode object with default values for my_project.\"\"\"\n  # Return a clone so that the defaults will not be altered\n  # This is for the \"local variable\" use pattern\n  return _C.clone()\n"
  },
  {
    "path": "data/convert.py",
    "content": "import open3d as o3d\n\n\ndef load_mesh(filepath):\n    return o3d.io.read_triangle_mesh(filepath)\n\n\ndef export_mesh(mesh, filepath):\n    o3d.io.write_triangle_mesh(filepath, mesh)\n\n\ndef load_pcd(filepath):\n    return o3d.io.read_point_cloud(filepath)\n\n\ndef export_pcd(pcd, filepath):\n    o3d.io.write_point_cloud(filepath, pcd)\n\n\ndef mesh_to_pcd(mesh, number_of_points=2048):\n    return mesh.sample_points_uniformly(number_of_points=number_of_points)\n"
  },
  {
    "path": "data/create_modelnet40_small.py",
    "content": "#!/usr/bin/env python\nimport os\nimport h5py\nimport numpy as np\n\nnp.random.seed(123)\n\n\ndef main(split_size):\n    modelnet40_dir = \"./data/modelnet40_ply_hdf5_2048/\"\n\n    modelnet40_train_file = os.path.join(\n        modelnet40_dir, \"train_minus_valid_files.txt\")\n\n    modelnet40_train_split_file = os.path.join(\n        modelnet40_dir, f\"train_minus_valid_split_{split_size}_files.txt\")\n\n    modelnet40_train_split_path = f\"ply_data_trainminusval_split_{split_size}.h5\"\n\n    with open(modelnet40_train_file, \"r\") as f:\n        modelnet40_train_paths = [l.strip() for l in f.readlines()]\n\n    data = []\n    labels = []\n    for modelnet40_train_path in modelnet40_train_paths:\n        train_h5 = h5py.File(modelnet40_train_path, \"r\")\n\n        data.append(train_h5[\"data\"][:])\n        labels.append(train_h5[\"label\"][:])\n\n    data = np.concatenate(data)\n    labels = np.concatenate(labels)\n\n    train_data = []\n    train_label = []\n    for i in range(40):\n        cls_inds = np.where(labels == i)[0]\n        num_objs = len(cls_inds)\n        num_train = int(num_objs * split_size)\n        cls_data = data[cls_inds]\n\n        np.random.shuffle(cls_data)\n\n        train_data.append(cls_data[:num_train])\n        train_label += [i] * num_train\n\n    train_data = np.concatenate(train_data)\n    train_label = np.array(train_label).reshape(-1, 1)\n\n    with open(modelnet40_train_split_file, \"w\") as f:\n        f.write(os.path.join(modelnet40_dir,\n                             modelnet40_train_split_path) + \"\\n\")\n\n    with h5py.File(\n            os.path.join(modelnet40_dir, modelnet40_train_split_path),\n            \"w\") as f:\n        f.create_dataset(\"data\", data=train_data)\n        f.create_dataset(\"label\", data=train_label)\n\n    print('data: {}'.format(data.shape))\n    print('train data: {}'.format(train_data.shape))\n    print('min_label: {}'.format(labels.min()))\n    print('max_label: 
{}'.format(labels.max()))\n\n\nif __name__ == \"__main__\":\n    main(0.5 / 0.8)\n    main(0.25 / 0.8)\n"
  },
  {
    "path": "data/create_modelnet40_valid.py",
    "content": "#!/usr/bin/env python\nimport os\nimport h5py\nimport numpy as np\n\nnp.random.seed(123)\ndef main():\n    modelnet40_dir = \"./data/modelnet40_ply_hdf5_2048/\"\n\n    modelnet40_train_minus_valid_path = \"ply_data_trainminusval.h5\"\n    modelnet40_valid_path             = \"ply_data_valid.h5\"\n\n    modelnet40_train_minus_valid_file = os.path.join(modelnet40_dir, \"train_minus_valid_files.txt\")\n    modelnet40_valid_file             = os.path.join(modelnet40_dir, \"valid_files.txt\")\n\n    modelnet40_train_file = os.path.join(modelnet40_dir, \"train_files.txt\")\n    with open(modelnet40_train_file, \"r\") as f:\n        modelnet40_train_paths = [l.strip() for l in f.readlines()]\n\n    data   = []\n    labels = []\n    for modelnet40_train_path in modelnet40_train_paths:\n        train_h5 = h5py.File(modelnet40_train_path, \"r\")\n\n        data.append(train_h5[\"data\"][:])\n        labels.append(train_h5[\"label\"][:])\n\n    data   = np.concatenate(data)\n    labels = np.concatenate(labels)\n\n    train_data  = []\n    train_label = []\n    valid_data  = []\n    valid_label = []\n    for i in range(40):\n        cls_inds  = np.where(labels == i)[0]\n        num_objs  = len(cls_inds)\n        num_train = int(num_objs * 0.8)\n        num_valid = num_objs - num_train\n        cls_data  = data[cls_inds]\n\n        np.random.shuffle(cls_data)\n\n        train_data.append(cls_data[:num_train])\n        valid_data.append(cls_data[num_train:])\n\n        train_label += [i] * num_train\n        valid_label += [i] * num_valid\n\n    train_data  = np.concatenate(train_data)\n    valid_data  = np.concatenate(valid_data)\n    train_label = np.array(train_label).reshape(-1, 1)\n    valid_label = np.array(valid_label).reshape(-1, 1)\n\n    with open(modelnet40_train_minus_valid_file, \"w\") as f:\n        f.write(os.path.join(modelnet40_dir, modelnet40_train_minus_valid_path) + \"\\n\")\n\n    with open(modelnet40_valid_file, \"w\") as f:\n        
f.write(os.path.join(modelnet40_dir, modelnet40_valid_path) + \"\\n\")\n\n    with h5py.File(os.path.join(modelnet40_dir, modelnet40_train_minus_valid_path), \"w\") as f:\n        f.create_dataset(\"data\",  data=train_data)\n        f.create_dataset(\"label\", data=train_label)\n\n    with h5py.File(os.path.join(modelnet40_dir, modelnet40_valid_path), \"w\") as f:\n        f.create_dataset(\"data\",  data=valid_data)\n        f.create_dataset(\"label\", data=valid_label)\n\n    print('data: {}'.format(data.shape))\n    print('min_label: {}'.format(labels.min()))\n    print('max_label: {}'.format(labels.max()))\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "data/distortion.py",
    "content": "import pygem\nfrom pygem import FFD, RBF, IDW\nimport open3d as o3d\nimport copy\nimport numpy as np\nnp.random.seed(2021)\n\n\ndef core_distortion(points, n_control_points=[2,2,2], displacement=None):\n    \"\"\"\n        Ref: http://mathlab.github.io/PyGeM/tutorial-1-ffd.html\n    \"\"\"\n    # the size of displacement matrix: 3 * control_points.shape\n    if displacement is None:\n        displacement = np.zeros((3,*n_control_points))\n\n    ffd = FFD(n_control_points=n_control_points)\n    ffd.box_length = [2.,2.,2.]\n    ffd.box_origin = [-1., -1., -1.]\n    ffd.array_mu_x = displacement[0,:,:,:]\n    ffd.array_mu_y = displacement[1,:,:,:]\n    ffd.array_mu_z = displacement[2,:,:,:]\n    new_points = ffd(points)\n\n    return new_points\n\n\ndef distortion(points, direction_mask=np.array([1,1,1]), point_mask=np.ones((5,5,5)), severity=0.5):\n\n    \n    n_control_points=[5,5,5]\n    # random\n    displacement = np.random.rand(3,*n_control_points) * 2 * severity - np.ones((3,*n_control_points)) * severity\n    displacement *= np.transpose(np.tile(direction_mask, (5, 5, 5, 1)), (3, 0, 1, 2))\n    displacement *= np.tile(point_mask, (3, 1, 1, 1))\n    \n    points = core_distortion(points, n_control_points=n_control_points, displacement=displacement)\n    \n    # points = denomalize(points, scale, offset)\n    # set_points(data, points)\n    return points\n\n\ndef distortion_2(points, severity=(0.4,3), func = 'gaussian_spline'):\n\n    rbf = RBF(func=func)\n    xv = np.linspace(-1, 1, severity[1])\n    yv = np.linspace(-1, 1, severity[1])\n    zv = np.linspace(-1, 1, severity[1])\n    z, y, x = np.meshgrid(zv, yv, xv)\n    mesh = np.array([x.ravel(), y.ravel(), z.ravel()]).T\n    rbf.original_control_points = mesh\n    alpha = np.random.uniform(-np.pi,np.pi,mesh.shape[0])\n    gamma = np.random.uniform(-np.pi,np.pi,mesh.shape[0])\n    distance = np.ones(mesh.shape[0]) * severity[0]\n    displacement_x = distance * np.cos(alpha) * np.sin(gamma)\n   
 displacement_y = distance * np.sin(alpha) * np.sin(gamma)\n    displacement_z = distance * np.cos(gamma)\n    displacement = np.array([displacement_x,displacement_y,displacement_z]).T\n    rbf.deformed_control_points = mesh + displacement\n    new_points = rbf(points)\n    return new_points\n\n\ndef distortion_3(points, severity=(0.4,3)):\n\n    idw = IDW()\n    xv = np.linspace(-1, 1, severity[1])\n    yv = np.linspace(-1, 1, severity[1])\n    zv = np.linspace(-1, 1, severity[1])\n    z, y, x = np.meshgrid(zv, yv, xv)\n    mesh = np.array([x.ravel(), y.ravel(), z.ravel()]).T\n    idw.original_control_points = mesh\n    alpha = np.random.uniform(-np.pi,np.pi,mesh.shape[0])\n    gamma = np.random.uniform(-np.pi,np.pi,mesh.shape[0])\n    distance = np.ones(mesh.shape[0]) * severity[0]\n    displacement_x = distance * np.cos(alpha) * np.sin(gamma)\n    displacement_y = distance * np.sin(alpha) * np.sin(gamma)\n    displacement_z = distance * np.cos(gamma)\n    displacement = np.array([displacement_x,displacement_y,displacement_z]).T\n    idw.deformed_control_points = mesh + displacement\n    new_points = idw(points)\n    return new_points\n\n"
  },
  {
    "path": "data/generate_c.py",
    "content": "###  Generate Various Common Corruptions ###\nfrom operator import index\nimport os\nimport h5py\nimport json\nimport numpy as np\nfrom numpy import random\nfrom convert import *\nimport distortion\nfrom occlusion import *\nfrom util import *\nnp.random.seed(2021)\n\n\n### Transformation ###\n'''\nRotate the point cloud\n'''\ndef rotation(pointcloud,severity):\n    N, C = pointcloud.shape\n    c = [2.5, 5, 7.5, 10, 15][severity-1]\n    theta = np.random.uniform(c-2.5,c+2.5) * np.random.choice([-1,1]) * np.pi / 180.\n    gamma = np.random.uniform(c-2.5,c+2.5) * np.random.choice([-1,1]) * np.pi / 180.\n    beta = np.random.uniform(c-2.5,c+2.5) * np.random.choice([-1,1]) * np.pi / 180.\n\n    matrix_1 = np.array([[1,0,0],[0,np.cos(theta),-np.sin(theta)],[0,np.sin(theta),np.cos(theta)]])\n    matrix_2 = np.array([[np.cos(gamma),0,np.sin(gamma)],[0,1,0],[-np.sin(gamma),0,np.cos(gamma)]])\n    matrix_3 = np.array([[np.cos(beta),-np.sin(beta),0],[np.sin(beta),np.cos(beta),0],[0,0,1]])\n    \n    new_pc = np.matmul(pointcloud,matrix_1)\n    new_pc = np.matmul(new_pc,matrix_2)\n    new_pc = np.matmul(new_pc,matrix_3).astype('float32')\n\n    return normalize(new_pc)\n\n'''\nShear the point cloud\n'''\ndef shear(pointcloud,severity):\n    N, C = pointcloud.shape\n    c = [0.05, 0.1, 0.15, 0.2, 0.25][severity-1]\n    a = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n    b = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n    d = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n    e = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n    f = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n    g = np.random.uniform(c-0.05,c+0.05) * np.random.choice([-1,1])\n\n    matrix = np.array([[1,0,b],[d,1,e],[f,0,1]])\n    new_pc = np.matmul(pointcloud,matrix).astype('float32')\n    return normalize(new_pc)\n\n'''\nScale the point cloud\n'''\ndef scale(pointcloud,severity):\n    #TODO\n    N, C = 
pointcloud.shape\n    c = [0.1, 0.2, 0.3, 0.4, 0.5][severity-1]\n    a=b=d=1\n    r = np.random.randint(0,3)\n    t = np.random.choice([-1,1])\n    if r == 0:\n        a += c * t\n        b += c * (-t)\n    elif r == 1:\n        b += c * t\n        d += c * (-t)\n    elif r == 2:\n        a += c * t\n        d += c * (-t)\n\n    matrix = np.array([[a,0,0],[0,b,0],[0,0,d]])\n    new_pc = np.matmul(pointcloud,matrix).astype('float32')\n    return normalize(new_pc)\n\n\n### Noise ###\n'''\nAdd Uniform noise to point cloud\n'''\ndef uniform_noise(pointcloud, severity):\n    #TODO\n    N, C = pointcloud.shape\n    c = [0.01, 0.02, 0.03, 0.04, 0.05][severity-1]\n    jitter = np.random.uniform(-c,c,(N, C))\n    new_pc = (pointcloud + jitter).astype('float32')\n    return normalize(new_pc)\n\n'''\nAdd Gaussian noise to point cloud\n'''\ndef gaussian_noise(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [0.01, 0.015, 0.02, 0.025, 0.03][severity-1]\n    jitter = np.random.normal(size=(N, C)) * c\n    new_pc = (pointcloud + jitter).astype('float32')\n    new_pc = np.clip(new_pc,-1,1)\n    return new_pc\n\n'''\nAdd noise to the edge-length-2 cube\n'''\ndef background_noise(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [N//45, N//40, N//35, N//30, N//20][severity-1]\n    jitter = np.random.uniform(-1,1,(c, C))\n    new_pc = np.concatenate((pointcloud,jitter),axis=0).astype('float32')\n    return normalize(new_pc)\n\n'''\nUpsampling\n'''\ndef upsampling(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [N//5, N//4, N//3, N//2, N][severity-1]\n    index = np.random.choice(ORIG_NUM, c, replace=False)\n    add = pointcloud[index] + np.random.uniform(-0.05,0.05,(c, C))\n    new_pc = np.concatenate((pointcloud,add),axis=0).astype('float32')\n    return normalize(new_pc)\n\n'''\nAdd impulse noise\n'''\ndef impulse_noise(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [N//30, N//25, N//20, N//15, N//10][severity-1]\n    index = 
np.random.choice(ORIG_NUM, c, replace=False)\n    pointcloud[index] += np.random.choice([-1,1], size=(c,C)) * 0.1\n    return normalize(pointcloud)\n    \n\n### Point Number Modification ###\n'''\nCutout several part in the point cloud\n'''\ndef cutout(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [(2,30), (3,30), (5,30), (7,30), (10,30)][severity-1]\n    for _ in range(c[0]):\n        i = np.random.choice(pointcloud.shape[0],1)\n        picked = pointcloud[i]\n        dist = np.sum((pointcloud - picked)**2, axis=1, keepdims=True)\n        idx = np.argpartition(dist, c[1], axis=0)[:c[1]]\n        # pointcloud[idx.squeeze()] = 0\n        pointcloud = np.delete(pointcloud, idx.squeeze(), axis=0)\n    # print(pointcloud.shape)\n    return pointcloud\n\n'''\nUniformly sampling the point cloud\n'''\ndef uniform_sampling(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [N//15, N//10, N//8, N//6, N//2, 3 * N//4][severity-1]\n    index = np.random.choice(ORIG_NUM, ORIG_NUM - c, replace=False)\n    return pointcloud[index]\n\n'''\nDensity-based up-sampling the point cloud\n'''\ndef density_inc(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [(1,100), (2,100), (3,100), (4,100), (5,100)][severity-1]\n    # idx = np.random.choice(N,c[0])\n    temp = []\n    for _ in range(c[0]):\n        i = np.random.choice(pointcloud.shape[0],1)\n        picked = pointcloud[i]\n        dist = np.sum((pointcloud - picked)**2, axis=1, keepdims=True)\n        idx = np.argpartition(dist, c[1], axis=0)[:c[1]]\n        # idx_2 = np.random.choice(c[1],int((3/4) * c[1]),replace=False)\n        # idx = idx[idx_2]\n        temp.append(pointcloud[idx.squeeze()])\n        pointcloud = np.delete(pointcloud, idx.squeeze(), axis=0)\n    \n    idx = np.random.choice(pointcloud.shape[0],1024 - c[0] * c[1])\n    temp.append(pointcloud[idx.squeeze()])\n\n    pointcloud = np.concatenate(temp)\n    # print(pointcloud.shape)\n    return pointcloud\n\n'''\nDensity-based 
sampling the point cloud\n'''\ndef density(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [(1,100), (2,100), (3,100), (4,100), (5,100)][severity-1]\n    for _ in range(c[0]):\n        i = np.random.choice(pointcloud.shape[0],1)\n        picked = pointcloud[i]\n        dist = np.sum((pointcloud - picked)**2, axis=1, keepdims=True)\n        idx = np.argpartition(dist, c[1], axis=0)[:c[1]]\n        idx_2 = np.random.choice(c[1],int((3/4) * c[1]),replace=False)\n        idx = idx[idx_2]\n        pointcloud = np.delete(pointcloud, idx.squeeze(), axis=0)\n        # pointcloud[idx.squeeze()] = 0\n    # print(pointcloud.shape)\n    return pointcloud\n\ndef occlusion(severity):\n    ## severity here does not stand for real severity ##\n    pointcloud = []\n    f_0 = open(\"./data/modelnet40_ply_hdf5_2048/ply_data_test_0_id2file.json\")\n    f_1 = open(\"./data/modelnet40_ply_hdf5_2048/ply_data_test_1_id2file.json\")\n    list_0 = json.load(f_0)\n    list_1 = json.load(f_1)\n    f_0.close()\n    f_1.close()\n\n    for item in list_0 + list_1:\n        folder = item.split('/')[0]\n        mesh = item.split('/')[1][:-3] + 'off'\n        # print(mesh)\n        original_data = load_mesh(\"./data/ModelNet40/\" + folder + \"/test/\" + mesh)\n        new_pc = occlusion_1(original_data,'occlusion',severity,n_points=1024)\n\n        theta =  -np.pi / 2.\n        gamma =  0\n        beta = np.pi\n\n        matrix_1 = np.array([[1,0,0],[0,np.cos(theta),-np.sin(theta)],[0,np.sin(theta),np.cos(theta)]])\n        matrix_2 = np.array([[np.cos(gamma),0,np.sin(gamma)],[0,1,0],[-np.sin(gamma),0,np.cos(gamma)]])\n        matrix_3 = np.array([[np.cos(beta),-np.sin(beta),0],[np.sin(beta),np.cos(beta),0],[0,0,1]])\n\n        new_pc = np.matmul(new_pc,matrix_1)\n        new_pc = np.matmul(new_pc,matrix_2)\n        new_pc = normalize(np.matmul(new_pc,matrix_3).astype('float32'))\n\n        pointcloud.append(new_pc)\n\n    pointcloud = np.stack(pointcloud,axis=0)\n\n    
np.save(\"./data/modelnet40_c/data_occlusion_\" + str(severity) + \".npy\", pointcloud)\n    return\n\ndef simulate_lidar(pointcloud,pose,severity):\n    pose = pose.transpose()\n    #####################################\n    # simplify the rotation to I matrix #\n    pose[:3,:3] = 0\n    pose[0,0] = pose[1,1] = pose[2,2] = 1\n    # Translate the point cloud #\n    pose[3,[0,1,2]] = -pose[3,[0,1,2]]\n    #####################################\n\n    pointcloud_new = np.concatenate([pointcloud,np.ones((pointcloud.shape[0],1))],axis=1)\n    pointcloud_new = np.dot(pointcloud_new,pose)\n\n    pointcloud_new = appendSpherical_np(pointcloud_new[:,:3])\n    delta = 1. * np.pi / 180.\n    cur = np.min(pointcloud_new[:,4])\n\n    new_pc = []\n\n    while cur + delta < np.max(pointcloud_new[:,4]):\n        pointcloud_new[(pointcloud_new[:,4] >= cur+delta/4) & (pointcloud_new[:,4] < cur + delta*3/4),4] = cur + delta / 2.\n        new_pc.append(pointcloud_new[(pointcloud_new[:,4] >= cur+delta/4) & (pointcloud_new[:,4] < cur + delta*3/4)])\n        cur += delta\n    new_pc = np.concatenate(new_pc,axis=0)\n    # pointcloud = np.dot(pointcloud,np.linalg.inv(pose))\n    new_pc = appendCart_np(new_pc[:,3:])\n    new_pc = np.concatenate([new_pc[:,3:],np.ones((new_pc.shape[0],1))],axis=1)\n    new_pc = np.dot(new_pc,np.linalg.inv(pose))\n    index = np.random.choice(new_pc.shape[0],768)\n    new_pc = new_pc[index]\n    return new_pc[:,:3]\n\ndef lidar(severity):\n    ## severity here does not stand for real severity ##\n    pointcloud = []\n    f_0 = open(\"./data/modelnet40_ply_hdf5_2048/ply_data_test_0_id2file.json\")\n    f_1 = open(\"./data/modelnet40_ply_hdf5_2048/ply_data_test_1_id2file.json\")\n    list_0 = json.load(f_0)\n    list_1 = json.load(f_1)\n    f_0.close()\n    f_1.close()\n\n    for item in list_0 + list_1:\n        folder = item.split('/')[0]\n        mesh = item.split('/')[1][:-3] + 'off'\n        original_data = load_mesh(\"./data/ModelNet40/\" + folder + 
\"/test/\" + mesh)\n        new_pc,pose = occlusion_1(original_data,'lidar',severity,n_points=1024)\n\n        new_pc = simulate_lidar(new_pc,pose,severity)\n\n        theta =  -np.pi / 2.\n        gamma =  0\n        beta = np.pi\n\n        matrix_1 = np.array([[1,0,0],[0,np.cos(theta),-np.sin(theta)],[0,np.sin(theta),np.cos(theta)]])\n        matrix_2 = np.array([[np.cos(gamma),0,np.sin(gamma)],[0,1,0],[-np.sin(gamma),0,np.cos(gamma)]])\n        matrix_3 = np.array([[np.cos(beta),-np.sin(beta),0],[np.sin(beta),np.cos(beta),0],[0,0,1]])\n        \n        new_pc = np.matmul(new_pc,matrix_1)\n        new_pc = np.matmul(new_pc,matrix_2)\n        new_pc = np.matmul(new_pc,matrix_3).astype('float32')\n\n\n        pointcloud.append(new_pc)\n\n    pointcloud = np.stack(pointcloud,axis=0)\n\n    np.save(\"./data/modelnet40_c/data_lidar_\" + str(severity) + \".npy\", pointcloud)\n    return\n\n\ndef ffd_distortion(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [0.1,0.2,0.3,0.4,0.5][severity-1]\n    new_pc = distortion.distortion(pointcloud,severity=c)\n    return normalize(new_pc)\n\ndef rbf_distortion(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [(0.025,5),(0.05,5),(0.075,5),(0.1,5),(0.125,5)][severity-1]\n    new_pc = distortion.distortion_2(pointcloud,severity=c,func='multi_quadratic_biharmonic_spline')\n    return normalize(new_pc).astype('float32')\n\ndef rbf_distortion_inv(pointcloud, severity):\n    N, C = pointcloud.shape\n    c = [(0.025,5),(0.05,5),(0.075,5),(0.1,5),(0.125,5)][severity-1]\n    new_pc = distortion.distortion_2(pointcloud,severity=c,func='inv_multi_quadratic_biharmonic_spline')\n    return normalize(new_pc).astype('float32')\n\n\n\ndef load_data():\n    os.makedirs(\"./data/modelnet40_c\",exist_ok = True)\n    modelnet40_dir = \"./data/modelnet40_ply_hdf5_2048/\"\n    modelnet40_test_file = os.path.join(modelnet40_dir, \"test_files.txt\")\n    with open(modelnet40_test_file, \"r\") as f:\n        modelnet40_test_paths 
= [l.strip() for l in f.readlines()]\n\n    data   = []\n    labels = []\n    for modelnet40_test_path in modelnet40_test_paths:\n        test_h5 = h5py.File(modelnet40_test_path, \"r\")\n\n        data.append(test_h5[\"data\"][:])\n        labels.append(test_h5[\"label\"][:])\n\n    data   = np.concatenate(data)\n    labels = np.concatenate(labels)\n\n    np.save(\"./data/modelnet40_c/label.npy\", labels)\n\n    return data, labels\n\n\ndef save_data(data,corruption,severity):\n\n    if not MAP[corruption]:\n        np.save(\"./data/modelnet40_c/data_\" + corruption + \".npy\", data)\n        return\n\n    if corruption in ['occlusion', 'lidar']:\n        # occlusion and lidar regenerate the whole test set from the\n        # original meshes and save their own .npy files internally,\n        # so call them once per severity instead of once per sample\n        MAP[corruption](severity)\n        return\n\n    new_data = []\n    for i in range(data.shape[0]):\n        new_data.append(MAP[corruption](data[i],severity))\n    new_data = np.stack(new_data,axis=0)\n    np.save(\"./data/modelnet40_c/data_\" + corruption + \"_\" + str(severity) + \".npy\", new_data)\n\n\nMAP = {'uniform': uniform_noise,\n       'gaussian': gaussian_noise,\n       'background': background_noise,\n       'impulse': impulse_noise,\n       'scale': scale,\n       'upsampling': upsampling,\n       'shear': shear,\n       'rotation': rotation,\n       'cutout': cutout,\n       'density': density,\n       'density_inc': density_inc,\n       'distortion': ffd_distortion,\n       'distortion_rbf': rbf_distortion,\n       'distortion_rbf_inv': rbf_distortion_inv,\n       'occlusion': occlusion,\n       'lidar': lidar,\n       'original': None,\n}\n\nORIG_NUM = 1024\n\nif __name__ == \"__main__\":\n    data, labels = load_data()\n    for cor in MAP.keys():\n        # if cor in ['occlusion', 'lidar']:\n        #     continue\n        for sev in [1,2,3,4,5]:\n            if cor == 'density_inc':\n                ORIG_NUM = 2048\n            else:\n                ORIG_NUM = 1024\n            index = np.random.choice(data.shape[1],ORIG_NUM,replace=False)\n            
save_data(data[:,index,:], cor, sev)\n            print(\"Done with Corruption: {} with Severity: {}\".format(cor,sev))\n\n"
  },
  {
    "path": "data/occlusion.py",
    "content": "import open3d as o3d\nimport numpy as np\nfrom util import get_points, set_points, normalize, shuffle_data\n\n\n\ndef random_pose(severity):\n    \"\"\"generate a random camera pose\"\"\"\n\n    theta = 2 * np.pi * severity / 5\n    delta = np.pi / 5\n    angle_x = np.random.uniform(2./3. * np.pi, 5./6. * np.pi)\n    angle_y = 0\n    angle_z = np.random.uniform(theta-delta,theta+delta)\n    Rx = np.array([[1, 0, 0],\n                   [0, np.cos(angle_x), -np.sin(angle_x)],\n                   [0, np.sin(angle_x), np.cos(angle_x)]])\n    Ry = np.array([[np.cos(angle_y), 0, np.sin(angle_y)],\n                   [0, 1, 0],\n                   [-np.sin(angle_y), 0, np.cos(angle_y)]])\n    Rz = np.array([[np.cos(angle_z), -np.sin(angle_z), 0],\n                   [np.sin(angle_z), np.cos(angle_z), 0],\n                   [0, 0, 1]])\n    R = np.dot(Rz, np.dot(Ry, Rx))\n    # a rotation matrix with arbitrarily chosen yaw, pitch, roll\n    # Set camera pointing to the origin and 3 units away from the origin\n    t = np.expand_dims(-R[:, 2] * 3., 1)  # select the third column, reshape into (3, 1)-vector\n\n    matrix = np.concatenate([np.concatenate([R.T, -np.dot(R.T,t)], 1), [[0, 0, 0, 1]]], 0)\n    return matrix\n\ndef lidar_pose(severity):\n    \"\"\"generate a random LiDAR pose\"\"\"\n    theta = 2 * np.pi * severity / 5\n    delta = np.pi / 5\n    angle_x = 5./8. 
* np.pi\n    angle_y = 0\n    angle_z = np.random.uniform(theta-delta,theta+delta)\n    Rx = np.array([[1, 0, 0],\n                   [0, np.cos(angle_x), -np.sin(angle_x)],\n                   [0, np.sin(angle_x), np.cos(angle_x)]])\n    Ry = np.array([[np.cos(angle_y), 0, np.sin(angle_y)],\n                   [0, 1, 0],\n                   [-np.sin(angle_y), 0, np.cos(angle_y)]])\n    Rz = np.array([[np.cos(angle_z), -np.sin(angle_z), 0],\n                   [np.sin(angle_z), np.cos(angle_z), 0],\n                   [0, 0, 1]])\n    R = np.dot(Rz, np.dot(Ry, Rx))\n    # a rotation matrix with arbitrarily chosen yaw, pitch, roll\n    # Set camera pointing to the origin and 5 units away from the origin\n    t = np.expand_dims(-R[:, 2] * 5, 1)  # select the third column, reshape into (3, 1)-vector\n    pose = np.concatenate([np.concatenate([R, t], 1), [[0, 0, 0, 1]]], 0)\n    matrix = np.concatenate([np.concatenate([R.T, -np.dot(R.T,t)], 1), [[0, 0, 0, 1]]], 0)\n    return matrix, pose\n\n\n\ndef get_default_camera_extrinsic():\n    return np.array([[1,0,0,1],\n                    [0,1,0,0],\n                    [0,0,1,2],\n                    [0,0,0,1]])\n\n\ndef get_default_camera_intrinsic(width=1920, height=1080):\n    return {\n        \"width\": width,\n        \"height\": height,\n        \"fx\": 365,\n        \"fy\": 365,\n        \"cx\": width / 2 - 0.5,\n        \"cy\": height / 2 - 0.5\n    }\n\n\ndef core_occlusion(mesh, type, camera_extrinsic=None, camera_intrinsic=None, window_width=1080, window_height=720, n_points=None, downsample_ratio=None):\n    if camera_extrinsic is None:\n        camera_extrinsic = get_default_camera_extrinsic()\n\n    if camera_intrinsic is None:\n        camera_intrinsic = get_default_camera_intrinsic()\n\n    camera_parameters = o3d.camera.PinholeCameraParameters()\n    camera_parameters.extrinsic = camera_extrinsic\n    camera_parameters.intrinsic.set_intrinsics(**camera_intrinsic)\n\n    viewer = 
o3d.visualization.Visualizer()\n    viewer.create_window(width=window_width, height=window_height)\n    viewer.add_geometry(mesh)\n\n    control = viewer.get_view_control()\n    control.convert_from_pinhole_camera_parameters(camera_parameters)\n    # viewer.run()\n\n    depth = viewer.capture_depth_float_buffer(do_render=True)\n\n    viewer.destroy_window()\n    pcd = o3d.geometry.PointCloud.create_from_depth_image(depth, camera_parameters.intrinsic, extrinsic=camera_parameters.extrinsic)\n\n    if downsample_ratio is not None:\n        ratio =  int((1 - downsample_ratio) / downsample_ratio)\n        pcd = pcd.uniform_down_sample(ratio)\n    elif n_points is not None:\n        # print(np.asarray(pcd.points).shape[0])\n        ratio =  int(np.asarray(pcd.points).shape[0] / n_points)\n        if ratio > 0:\n            # if type == 'occlusion':\n            set_points(pcd, shuffle_data(np.asarray(pcd.points)))\n            pcd = pcd.uniform_down_sample(ratio)\n    \n    return pcd\n\n\ndef occlusion_1(mesh, type, severity, window_width=1080, window_height=720, n_points=None, downsample_ratio=None):\n    points = get_points(mesh)\n    points = normalize(points)\n    set_points(mesh, points)\n    if type == 'occlusion':\n        camera_extrinsic = random_pose(severity)\n    elif type == 'lidar':\n        camera_extrinsic,pose = lidar_pose(severity)\n    camera_intrinsic = get_default_camera_intrinsic(window_width, window_height)\n    pcd = core_occlusion(mesh, type, camera_extrinsic=camera_extrinsic, camera_intrinsic=camera_intrinsic, window_width=window_width, window_height=window_height, n_points=n_points, downsample_ratio=downsample_ratio)\n\n    points = get_points(pcd)\n    if points.shape[0] < n_points:\n        index = np.random.choice(points.shape[0], n_points)\n        points = points[index]\n    # points = normalize(points)\n    # points = denomalize(points, scale, offset)\n    if type == 'lidar':\n        return points[:n_points,:], pose\n    else:\n        
return points[:n_points,:]\n\n"
  },
  {
    "path": "data/process.py",
    "content": "import os\n\nSHAPE = [\"airplane\",\n\"bathtub\",\n\"bed\",\n\"bench\",\n\"bookshelf\",\n\"bottle\",\n\"bowl\",\n\"car\",\n\"chair\",\n\"cone\",\n\"cup\",\n\"curtain\",\n\"desk\",\n\"door\",\n\"dresser\",\n\"flower_pot\",\n\"glass_box\",\n\"guitar\",\n\"keyboard\",\n\"lamp\",\n\"laptop\",\n\"mantel\",\n\"monitor\",\n\"night_stand\",\n\"person\",\n\"piano\",\n\"plant\",\n\"radio\",\n\"range_hood\",\n\"sink\",\n\"sofa\",\n\"stairs\",\n\"stool\",\n\"table\",\n\"tent\",\n\"toilet\",\n\"tv_stand\",\n\"vase\",\n\"wardrobe\",\n\"xbox\"\n]\n\nif __name__ == '__main__':\n    for object in SHAPE:\n        g = os.walk(\"data/ModelNet40/\"+object+\"/test\")\n        for path,dir_list,file_list in g:  \n            for file in file_list:\n                # print(file)\n                with open(os.path.join(path,file), \"r\") as f:\n                    lines = f.readlines()\n                if len(lines[0]) == 4:\n                    continue\n                else:\n                    lines.insert(0,'OFF\\n')\n                    lines[1] = lines[1][3:]\n                    # print(lines)\n                    with open(os.path.join(path,file), \"w\") as f:\n                        for line in lines:\n                            f.write(line)\n"
  },
  {
    "path": "data/util.py",
    "content": "import open3d as o3d\nimport numpy as np\nimport copy\n\n\ndef get_points(data):\n    if isinstance(data, o3d.cpu.pybind.geometry.TriangleMesh):\n        return np.asarray(data.vertices)\n    elif isinstance(data, o3d.cpu.pybind.geometry.PointCloud):\n        return np.asarray(data.points)\n    else:\n        raise Exception(\"Wrong input data format: should be pointcloud or mesh\")\n\n\ndef set_points(data, points):\n    if isinstance(data, o3d.cpu.pybind.geometry.TriangleMesh):\n        data.vertices = o3d.utility.Vector3dVector(points)\n        return data\n    elif isinstance(data, o3d.cpu.pybind.geometry.PointCloud):\n        data.points = o3d.utility.Vector3dVector(points)\n        return data\n    else:\n        raise Exception(\"Wrong input data format: should be pointcloud or mesh\")\n\n\ndef normalize(new_pc):\n    new_pc[:,0] -= (np.max(new_pc[:,0]) + np.min(new_pc[:,0])) / 2\n    new_pc[:,1] -= (np.max(new_pc[:,1]) + np.min(new_pc[:,1])) / 2\n    new_pc[:,2] -= (np.max(new_pc[:,2]) + np.min(new_pc[:,2])) / 2\n    leng_x, leng_y, leng_z = np.max(new_pc[:,0]) - np.min(new_pc[:,0]), np.max(new_pc[:,1]) - np.min(new_pc[:,1]), np.max(new_pc[:,2]) - np.min(new_pc[:,2])\n    if leng_x >= leng_y and leng_x >= leng_z:\n        ratio = 2.0 / leng_x\n    elif leng_y >= leng_x and leng_y >= leng_z:\n        ratio = 2.0 / leng_y\n    else:\n        ratio = 2.0 / leng_z\n    new_pc *= ratio\n    return new_pc\n\n\ndef denomalize(points, scale, offset, hard_copy=False):\n    if hard_copy:\n        new_points = copy.deepcopy(points)\n    else:\n        new_points = points\n\n    n_points = new_points.shape[0]\n    new_points = new_points * np.tile(scale, (n_points,1)) + np.tile(offset, (n_points,1))\n    return new_points\n\ndef shuffle_data(data):\n\n    idx = np.arange(data.shape[0])\n    np.random.shuffle(idx)\n    return data[idx, ...]\n\n\ndef appendSpherical_np(xyz):\n    ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))\n    xy = xyz[:,0]**2 + 
xyz[:,1]**2\n    ptsnew[:,3] = np.sqrt(xy + xyz[:,2]**2)\n    ptsnew[:,4] = np.arctan2(np.sqrt(xy), xyz[:,2]) # for elevation angle defined from Z-axis down\n    #ptsnew[:,4] = np.arctan2(xyz[:,2], np.sqrt(xy)) # for elevation angle defined from XY-plane up\n    ptsnew[:,5] = np.arctan2(xyz[:,1], xyz[:,0])\n    return ptsnew\n\ndef appendCart_np(xyz):\n    ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))\n    ptsnew[:,3] = ptsnew[:,0] * np.sin(ptsnew[:,1]) * np.cos(ptsnew[:,2])\n    ptsnew[:,4] = ptsnew[:,0] * np.sin(ptsnew[:,1]) * np.sin(ptsnew[:,2])\n    ptsnew[:,5] = ptsnew[:,0] * np.cos(ptsnew[:,1]) \n    return ptsnew\n\n    "
  },
  {
    "path": "dataloader.py",
    "content": "import numpy as np\nimport torch\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms\nimport os\n\nfrom pc_utils import (rotate_point_cloud, PointcloudScaleAndTranslate)\nimport rs_cnn.data.data_utils as rscnn_d_utils\nfrom rs_cnn.data.ModelNet40Loader import ModelNet40Cls as rscnn_ModelNet40Cls\nimport PCT_Pytorch.pointnet2_ops_lib.pointnet2_ops.pointnet2_utils as pointnet2_utils\nfrom pointnet2_tf.modelnet_h5_dataset import ModelNetH5Dataset as pointnet2_ModelNetH5Dataset\nfrom dgcnn.pytorch.data import ModelNet40 as dgcnn_ModelNet40\n\n\n# distilled from the following sources:\n# https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/data/ModelNet40Loader.py\n# https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/train_cls.py\nclass ModelNet40Rscnn(Dataset):\n    def __init__(self, split, data_path, train_data_path,\n                 valid_data_path, test_data_path, num_points):\n\n        self.split = split\n        self.num_points = num_points\n        _transforms = transforms.Compose([rscnn_d_utils.PointcloudToTensor()])\n        rscnn_params = {\n            'num_points': 1024,  # although it does not matter\n            'root': data_path,\n            'transforms': _transforms,\n            'train': (split in [\"train\", \"valid\"]),\n            'data_file': {\n                'train': train_data_path,\n                'valid': valid_data_path,\n                'test':  test_data_path\n            }[self.split]\n        }\n        self.rscnn_dataset = rscnn_ModelNet40Cls(**rscnn_params)\n        self.PointcloudScaleAndTranslate = PointcloudScaleAndTranslate()\n\n    def __len__(self):\n        return self.rscnn_dataset.__len__()\n\n    def __getitem__(self, idx):\n        point, label = self.rscnn_dataset.__getitem__(idx)\n        # for compatibility with the overall code\n        point = np.array(point)\n        label = label[0].item()\n\n        return {'pc': point, 'label': label}\n\n    
def batch_proc(self, data_batch, device):\n        point = data_batch['pc'].to(device)\n        if self.split == \"train\":\n            # (B, npoint)\n            fps_idx = pointnet2_utils.furthest_point_sample(point, 1200)\n            fps_idx = fps_idx[:, np.random.choice(1200, self.num_points,\n                                                  False)]\n            point = pointnet2_utils.gather_operation(\n                point.transpose(1, 2).contiguous(),\n                fps_idx).transpose(1, 2).contiguous()  # (B, N, 3)\n            point.data = self.PointcloudScaleAndTranslate(point.data)\n        else:\n            fps_idx = pointnet2_utils.furthest_point_sample(\n                point, self.num_points)  # (B, npoint)\n            point = pointnet2_utils.gather_operation(\n                point.transpose(1, 2).contiguous(),\n                fps_idx).transpose(1, 2).contiguous()\n        # to maintain compatibility\n        point = point.cpu()\n        return {'pc': point, 'label': data_batch['label']}\n\n\n# distilled from the following sources:\n# https://github.com/charlesq34/pointnet2/blob/7961e26e31d0ba5a72020635cee03aac5d0e754a/modelnet_h5_dataset.py\n# https://github.com/charlesq34/pointnet2/blob/7961e26e31d0ba5a72020635cee03aac5d0e754a/train.py\nclass ModelNet40PN2(Dataset):\n    def __init__(self, split, train_data_path,\n                 valid_data_path, test_data_path, num_points):\n        self.split = split\n        self.dataset_name = 'modelnet40_pn2'\n        data_path = {\n            \"train\": train_data_path,\n            \"valid\": valid_data_path,\n            \"test\":  test_data_path\n        }[self.split]\n        pointnet2_params = {\n            'list_filename': data_path,\n            # this has nothing to do with actual dataloader batch size\n            'batch_size': 32,\n            'npoints': num_points,\n            'shuffle': False\n        }\n\n        # loading all the pointnet2data\n        self._dataset = 
pointnet2_ModelNetH5Dataset(**pointnet2_params)\n        all_pc = []\n        all_label = []\n        while self._dataset.has_next_batch():\n            # augmentation here has nothing to do with actual data_augmentation\n            pc, label = self._dataset.next_batch(augment=False)\n            all_pc.append(pc)\n            all_label.append(label)\n        self.all_pc = np.concatenate(all_pc)\n        self.all_label = np.concatenate(all_label)\n\n    def __len__(self):\n        return self.all_pc.shape[0]\n\n    def __getitem__(self, idx):\n        return {'pc': self.all_pc[idx], 'label': np.int64(self.all_label[idx])}\n\n    def batch_proc(self, data_batch, device):\n        if self.split == \"train\":\n            point = np.array(data_batch['pc'])\n            point = self._dataset._augment_batch_data(point)\n            # converted to tensor to maintain compatibility with the other code\n            data_batch['pc'] = torch.tensor(point)\n        else:\n            pass\n\n        return data_batch\n\n\nclass ModelNet40Dgcnn(Dataset):\n    def __init__(self, split, train_data_path,\n                 valid_data_path, test_data_path, num_points):\n        self.split = split\n        self.data_path = {\n            \"train\": train_data_path,\n            \"valid\": valid_data_path,\n            \"test\":  test_data_path\n        }[self.split]\n\n        dgcnn_params = {\n            'partition': 'train' if split in ['train', 'valid'] else 'test',\n            'num_points': num_points,\n            \"data_path\":  self.data_path\n        }\n        self.dataset = dgcnn_ModelNet40(**dgcnn_params)\n\n    def __len__(self):\n        return self.dataset.__len__()\n\n    def __getitem__(self, idx):\n        pc, label = self.dataset.__getitem__(idx)\n        return {'pc': pc, 'label': label.item()}\n\ndef load_data(data_path,corruption,severity):\n\n    DATA_DIR = os.path.join(data_path, 'data_' + corruption + '_' +str(severity) + '.npy')\n    # if corruption in 
['occlusion']:\n    #     LABEL_DIR = os.path.join(data_path, 'label_occlusion.npy')\n    LABEL_DIR = os.path.join(data_path, 'label.npy')\n    all_data = np.load(DATA_DIR)\n    all_label = np.load(LABEL_DIR)\n    return all_data, all_label\n\nclass ModelNet40C(Dataset):\n    def __init__(self, split, test_data_path,corruption,severity):\n        assert split == 'test'\n        self.split = split\n        self.data_path = {\n            \"test\":  test_data_path\n        }[self.split]\n        self.corruption = corruption\n        self.severity = severity\n\n        self.data, self.label = load_data(self.data_path, self.corruption, self.severity)\n        # self.num_points = num_points\n        self.partition =  'test'\n\n    def __getitem__(self, item):\n        pointcloud = self.data[item]#[:self.num_points]\n        label = self.label[item]\n        return {'pc': pointcloud, 'label': label.item()}\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\ndef create_dataloader(split, cfg):\n    num_workers = cfg.DATALOADER.num_workers\n    batch_size = cfg.DATALOADER.batch_size\n    dataset_args = {\n        \"split\": split\n    }\n\n    if cfg.EXP.DATASET == \"modelnet40_rscnn\":\n        dataset_args.update(dict(**cfg.DATALOADER.MODELNET40_RSCNN))\n        # augmentation directly done in the code so that\n        # it is as similar to the vanilla code as possible\n        dataset = ModelNet40Rscnn(**dataset_args)\n    elif cfg.EXP.DATASET == \"modelnet40_pn2\":\n        dataset_args.update(dict(**cfg.DATALOADER.MODELNET40_PN2))\n        dataset = ModelNet40PN2(**dataset_args)\n    elif cfg.EXP.DATASET == \"modelnet40_dgcnn\":\n        dataset_args.update(dict(**cfg.DATALOADER.MODELNET40_DGCNN))\n        dataset = ModelNet40Dgcnn(**dataset_args)\n    elif cfg.EXP.DATASET == \"modelnet40_c\":\n        dataset_args.update(dict(**cfg.DATALOADER.MODELNET40_C))\n        dataset = ModelNet40C(**dataset_args)\n    else:\n        assert False\n\n    if 
\"batch_proc\" not in dir(dataset):\n        dataset.batch_proc = None\n\n    return DataLoader(\n        dataset,\n        batch_size,\n        num_workers=num_workers,\n        shuffle=(split == \"train\"),\n        drop_last=(split == \"train\"),\n        pin_memory=(torch.cuda.is_available()) and (not num_workers)\n    )\n\n"
  },
  {
    "path": "dgcnn/.gitignore",
    "content": "data/\nlog/\n*.pyc\n.DS_Store\npytorch/pretrained/\npytorch/checkpoints/\ntensorflow/part_seg/train_results/\n\n"
  },
  {
    "path": "dgcnn/README.md",
    "content": "# Dynamic Graph CNN for Learning on Point Clouds\nWe propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures.\n\n[[Project]](https://liuziwei7.github.io/projects/DGCNN) [[Paper]](https://arxiv.org/abs/1801.07829)     \n\n## Overview\n`DGCNN` is the author's re-implementation of Dynamic Graph CNN, which achieves state-of-the-art performance on point-cloud-related high-level tasks including category classification, semantic segmentation and part segmentation.\n\n<img src='./tensorflow/misc/demo_teaser.png' width=800>\n\nFurther information please contact [Yue Wang](https://www.csail.mit.edu/person/yue-wang) and [Yongbin Sun](https://autoid.mit.edu/people-2).\n\n## Author's Implementations\n\nThe classification experiments in our paper are done with the pytorch implementation.\n\n* [tensorflow-dgcnn](./tensorflow)\n* [pytorch-dgcnn](./pytorch)\n\n## Other Implementations\n* [pytorch-geometric](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.EdgeConv)\n* [pytorch-dgcnn](https://github.com/AnTao97/dgcnn.pytorch) (This implementation on S3DIS achieves significant better results than our tensorflow implementation)\n\n\n## Citation\nPlease cite this paper if you want to use it in your work,\n\n\t@article{dgcnn,\n\t  title={Dynamic Graph CNN for Learning on Point Clouds},\n\t  author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.},\n\t  journal={ACM Transactions on Graphics (TOG)},\n\t  year={2019}\n\t}\n\n## License\nMIT License\n\n## Acknowledgement\nThe structure of this codebase is borrowed from [PointNet](https://github.com/charlesq34/pointnet).\n"
  },
  {
    "path": "dgcnn/pytorch/README.md",
    "content": "# Dynamic Graph CNN for Learning on Point Clouds (PyTorch)\n\n## Point Cloud Classification\n* Run the training script:\n\n\n``` 1024 points\npython main.py --exp_name=dgcnn_1024 --model=dgcnn --num_points=1024 --k=20 --use_sgd=True\n```\n\n``` 2048 points\npython main.py --exp_name=dgcnn_2048 --model=dgcnn --num_points=2048 --k=40 --use_sgd=True\n```\n\n* Run the evaluation script after training finished:\n\n``` 1024 points\npython main.py --exp_name=dgcnn_1024_eval --model=dgcnn --num_points=1024 --k=20 --use_sgd=True --eval=True --model_path=checkpoints/dgcnn_1024/models/model.t7\n```\n\n``` 2048 points\npython main.py --exp_name=dgcnn_2048_eval --model=dgcnn --num_points=2048 --k=40 --use_sgd=True --eval=True --model_path=checkpoints/dgcnn_2048/models/model.t7\n```\n\n* Run the evaluation script with pretrained models:\n\n``` 1024 points\npython main.py --exp_name=dgcnn_1024_eval --model=dgcnn --num_points=1024 --k=20 --use_sgd=True --eval=True --model_path=pretrained/model.1024.t7\n```\n\n``` 2048 points\npython main.py --exp_name=dgcnn_2048_eval --model=dgcnn --num_points=2048 --k=40 --use_sgd=True --eval=True --model_path=pretrained/model.2048.t7\n```\n"
  },
  {
    "path": "dgcnn/pytorch/data.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: data.py\n@Time: 2018/10/13 6:21 PM\n\"\"\"\n\n\nimport os\nimport sys\nimport glob\nimport h5py\nimport numpy as np\nfrom torch.utils.data import Dataset\n\n\ndef download():\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, '../../data')\n    if not os.path.exists(DATA_DIR):\n        os.mkdir(DATA_DIR)\n    if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n        www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n        zipfile = os.path.basename(www)\n        os.system('wget %s; unzip %s' % (www, zipfile))\n        os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n        os.system('rm %s' % (zipfile))\n\n\ndef load_data(data_path):\n    download()\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, '../../data')\n    all_data = []\n    all_label = []\n    with open(data_path, \"r\") as f:\n        # for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048', 'ply_data_%s*.h5'%partition)):\n        for h5_name in f.readlines():\n            # h5_name = os.path.join(BASE_DIR, \"../../\", h5_name.strip())\n            f = h5py.File(h5_name.strip(), 'r')\n            data = f['data'][:].astype('float32')\n            label = f['label'][:].astype('int64')\n            f.close()\n            all_data.append(data)\n            all_label.append(label)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    return all_data, all_label\n\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n\n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\n\ndef 
jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\nclass ModelNet40(Dataset):\n    def __init__(self, num_points, data_path, partition='train'):\n        self.data, self.label = load_data(data_path)\n        self.num_points = num_points\n        self.partition = partition\n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.partition == 'train':\n            pointcloud = translate_pointcloud(pointcloud)\n            np.random.shuffle(pointcloud)\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\nif __name__ == '__main__':\n    train = ModelNet40(1024)\n    test = ModelNet40(1024, 'test')\n    for data, label in train:\n        print(data.shape)\n        print(label.shape)\n"
  },
  {
    "path": "dgcnn/pytorch/main.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: main.py\n@Time: 2018/10/13 10:39 PM\n\"\"\"\n\n\nfrom __future__ import print_function\nimport os\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\nfrom data import ModelNet40\nfrom model import PointNet, DGCNN\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom util import cal_loss, IOStream\nimport sklearn.metrics as metrics\n\n\ndef _init_():\n    if not os.path.exists('checkpoints'):\n        os.makedirs('checkpoints')\n    if not os.path.exists('checkpoints/'+args.exp_name):\n        os.makedirs('checkpoints/'+args.exp_name)\n    if not os.path.exists('checkpoints/'+args.exp_name+'/'+'models'):\n        os.makedirs('checkpoints/'+args.exp_name+'/'+'models')\n    os.system('cp main.py checkpoints'+'/'+args.exp_name+'/'+'main.py.backup')\n    os.system('cp model.py checkpoints' + '/' + args.exp_name + '/' + 'model.py.backup')\n    os.system('cp util.py checkpoints' + '/' + args.exp_name + '/' + 'util.py.backup')\n    os.system('cp data.py checkpoints' + '/' + args.exp_name + '/' + 'data.py.backup')\n\ndef train(args, io):\n    train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=8,\n                              batch_size=args.batch_size, shuffle=True, drop_last=True)\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=8,\n                             batch_size=args.test_batch_size, shuffle=True, drop_last=False)\n\n    device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    #Try to load models\n    if args.model == 'pointnet':\n        model = PointNet(args).to(device)\n    elif args.model == 'dgcnn':\n        model = DGCNN(args).to(device)\n    else:\n        raise Exception(\"Not implemented\")\n    
print(str(model))\n\n    model = nn.DataParallel(model)\n    print(\"Let's use\", torch.cuda.device_count(), \"GPUs!\")\n\n    if args.use_sgd:\n        print(\"Use SGD\")\n        opt = optim.SGD(model.parameters(), lr=args.lr*100, momentum=args.momentum, weight_decay=1e-4)\n    else:\n        print(\"Use Adam\")\n        opt = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-4)\n\n    scheduler = CosineAnnealingLR(opt, args.epochs, eta_min=args.lr)\n\n    print(f\"Using the smoothing loss {bool(args.smoothing)}\")\n    criterion = lambda x,y: cal_loss(x, y, bool(args.smoothing))\n\n    best_test_acc = 0\n    for epoch in range(args.epochs):\n        scheduler.step()\n        ####################\n        # Train\n        ####################\n        train_loss = 0.0\n        count = 0.0\n        model.train()\n        train_pred = []\n        train_true = []\n        for data, label in train_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            opt.zero_grad()\n            logits = model(data)\n            loss = criterion(logits, label)\n            loss.backward()\n            opt.step()\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            train_loss += loss.item() * batch_size\n            train_true.append(label.cpu().numpy())\n            train_pred.append(preds.detach().cpu().numpy())\n        train_true = np.concatenate(train_true)\n        train_pred = np.concatenate(train_pred)\n        outstr = 'Train %d, loss: %.6f, train acc: %.6f, train avg acc: %.6f' % (epoch,\n                                                                                 train_loss*1.0/count,\n                                                                                 metrics.accuracy_score(\n                                                                                     train_true, train_pred),\n          
                                                                       metrics.balanced_accuracy_score(\n                                                                                     train_true, train_pred))\n        io.cprint(outstr)\n\n        ####################\n        # Test\n        ####################\n        test_loss = 0.0\n        count = 0.0\n        model.eval()\n        test_pred = []\n        test_true = []\n        for data, label in test_loader:\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            batch_size = data.size()[0]\n            logits = model(data)\n            loss = criterion(logits, label)\n            preds = logits.max(dim=1)[1]\n            count += batch_size\n            test_loss += loss.item() * batch_size\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n        test_true = np.concatenate(test_true)\n        test_pred = np.concatenate(test_pred)\n        test_acc = metrics.accuracy_score(test_true, test_pred)\n        avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)\n        outstr = 'Test %d, loss: %.6f, test acc: %.6f, test avg acc: %.6f' % (epoch,\n                                                                              test_loss*1.0/count,\n                                                                              test_acc,\n                                                                              avg_per_class_acc)\n        io.cprint(outstr)\n        if test_acc >= best_test_acc:\n            best_test_acc = test_acc\n            torch.save(model.state_dict(), 'checkpoints/%s/models/model.t7' % args.exp_name)\n\n\ndef test(args, io):\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points),\n                             batch_size=args.test_batch_size, shuffle=True, drop_last=False)\n\n    device = 
torch.device(\"cuda\" if args.cuda else \"cpu\")\n\n    #Try to load models\n    model = DGCNN(args).to(device)\n    model = nn.DataParallel(model)\n    model.load_state_dict(torch.load(args.model_path))\n    model = model.eval()\n    test_acc = 0.0\n    count = 0.0\n    test_true = []\n    test_pred = []\n    for data, label in test_loader:\n\n        data, label = data.to(device), label.to(device).squeeze()\n        data = data.permute(0, 2, 1)\n        batch_size = data.size()[0]\n        logits = model(data)\n        preds = logits.max(dim=1)[1]\n        test_true.append(label.cpu().numpy())\n        test_pred.append(preds.detach().cpu().numpy())\n    test_true = np.concatenate(test_true)\n    test_pred = np.concatenate(test_pred)\n    test_acc = metrics.accuracy_score(test_true, test_pred)\n    avg_per_class_acc = metrics.balanced_accuracy_score(test_true, test_pred)\n    outstr = 'Test :: test acc: %.6f, test avg acc: %.6f'%(test_acc, avg_per_class_acc)\n    io.cprint(outstr)\n\n\nif __name__ == \"__main__\":\n    # Training settings\n    parser = argparse.ArgumentParser(description='Point Cloud Recognition')\n    parser.add_argument('--exp_name', type=str, default='exp', metavar='N',\n                        help='Name of the experiment')\n    parser.add_argument('--model', type=str, default='dgcnn', metavar='N',\n                        choices=['pointnet', 'dgcnn'],\n                        help='Model to use, [pointnet, dgcnn]')\n    parser.add_argument('--dataset', type=str, default='modelnet40', metavar='N',\n                        choices=['modelnet40'])\n    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',\n                        help='Size of batch)')\n    parser.add_argument('--test_batch_size', type=int, default=16, metavar='batch_size',\n                        help='Size of batch)')\n    parser.add_argument('--epochs', type=int, default=250, metavar='N',\n                        help='number of episode to train 
')\n    parser.add_argument('--use_sgd', type=bool, default=True,\n                        help='Use SGD')\n    parser.add_argument('--lr', type=float, default=0.001, metavar='LR',\n                        help='learning rate (default: 0.001, 0.1 if using sgd)')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--no_cuda', type=bool, default=False,\n                        help='enables CUDA training')\n    parser.add_argument('--seed', type=int, default=1, metavar='S',\n                        help='random seed (default: 1)')\n    parser.add_argument('--eval', type=bool,  default=False,\n                        help='evaluate the model')\n    parser.add_argument('--num_points', type=int, default=1024,\n                        help='num of points to use')\n    parser.add_argument('--dropout', type=float, default=0.5,\n                        help='dropout rate')\n    parser.add_argument('--emb_dims', type=int, default=1024, metavar='N',\n                        help='Dimension of embeddings')\n    parser.add_argument('--k', type=int, default=20, metavar='N',\n                        help='Num of nearest neighbors to use')\n    parser.add_argument('--model_path', type=str, default='', metavar='N',\n                        help='Pretrained model path')\n    parser.add_argument('--smoothing', type=int, default=1,\n                        help='Whether to use smoothing in the loss')\n    parser.add_argument('--leaky_relu', type=int, default=1,\n                        help='Whether to use leaky_relu')\n    args = parser.parse_args()\n\n    _init_()\n\n    io = IOStream('checkpoints/' + args.exp_name + '/run.log')\n    io.cprint(str(args))\n\n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n    torch.manual_seed(args.seed)\n    if args.cuda:\n        io.cprint(\n            'Using GPU : ' + str(torch.cuda.current_device()) + ' from ' + 
str(torch.cuda.device_count()) + ' devices')\n        torch.cuda.manual_seed(args.seed)\n    else:\n        io.cprint('Using CPU')\n\n    if not args.eval:\n        train(args, io)\n    else:\n        test(args, io)\n"
  },
  {
    "path": "dgcnn/pytorch/model.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: model.py\n@Time: 2018/10/13 6:35 PM\n\"\"\"\n\n\nimport os\nimport sys\nimport copy\nimport math\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef knn(x, k):\n    inner = -2*torch.matmul(x.transpose(2, 1), x)\n    xx = torch.sum(x**2, dim=1, keepdim=True)\n    pairwise_distance = -xx - inner - xx.transpose(2, 1)\n\n    idx = pairwise_distance.topk(k=k, dim=-1)[1]   # (batch_size, num_points, k)\n    return idx\n\n\ndef get_graph_feature(x, k=20, idx=None):\n    batch_size = x.size(0)\n    num_points = x.size(2)\n    x = x.view(batch_size, -1, num_points)\n    if idx is None:\n        idx = knn(x, k=k)   # (batch_size, num_points, k)\n    device = torch.device('cuda')\n\n    idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1)*num_points\n\n    idx = idx + idx_base\n\n    idx = idx.view(-1)\n\n    _, num_dims, _ = x.size()\n\n    x = x.transpose(2, 1).contiguous()   # (batch_size, num_points, num_dims)  -> (batch_size*num_points, num_dims) #   batch_size * num_points * k + range(0, batch_size*num_points)\n    feature = x.view(batch_size*num_points, -1)[idx, :]\n    feature = feature.view(batch_size, num_points, k, num_dims) \n    x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)\n    \n    feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2).contiguous()\n  \n    return feature\n\n\nclass PointNet(nn.Module):\n    def __init__(self, args, output_channels=40):\n        super(PointNet, self).__init__()\n        self.args = args\n        self.conv1 = nn.Conv1d(3, 64, kernel_size=1, bias=False)\n        self.conv2 = nn.Conv1d(64, 64, kernel_size=1, bias=False)\n        self.conv3 = nn.Conv1d(64, 64, kernel_size=1, bias=False)\n        self.conv4 = nn.Conv1d(64, 128, kernel_size=1, bias=False)\n        self.conv5 = nn.Conv1d(128, args.emb_dims, 
kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(64)\n        self.bn3 = nn.BatchNorm1d(64)\n        self.bn4 = nn.BatchNorm1d(128)\n        self.bn5 = nn.BatchNorm1d(args.emb_dims)\n        self.linear1 = nn.Linear(args.emb_dims, 512, bias=False)\n        self.bn6 = nn.BatchNorm1d(512)\n        self.dp1 = nn.Dropout()\n        self.linear2 = nn.Linear(512, output_channels)\n\n    def forward(self, x):\n        x = F.relu(self.bn1(self.conv1(x)))\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = F.relu(self.bn3(self.conv3(x)))\n        x = F.relu(self.bn4(self.conv4(x)))\n        x = F.relu(self.bn5(self.conv5(x)))\n        x = F.adaptive_max_pool1d(x, 1).squeeze()\n        x = F.relu(self.bn6(self.linear1(x)))\n        x = self.dp1(x)\n        x = self.linear2(x)\n        return x\n\n\nclass DGCNN(nn.Module):\n    def __init__(self, args, output_channels=40):\n        super(DGCNN, self).__init__()\n        self.args = args\n        self.k = args.k\n        self.leaky_relu = bool(args.leaky_relu)\n        \n        self.bn1 = nn.BatchNorm2d(64)\n        self.bn2 = nn.BatchNorm2d(64)\n        self.bn3 = nn.BatchNorm2d(128)\n        self.bn4 = nn.BatchNorm2d(256)\n        self.bn5 = nn.BatchNorm1d(args.emb_dims)\n\n        if self.leaky_relu:\n            act_mod = nn.LeakyReLU\n            act_mod_args = {'negative_slope': 0.2}\n        else:\n            act_mod = nn.ReLU\n            act_mod_args = {}\n\n        self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=False),\n                                   self.bn1,\n                                   act_mod(**act_mod_args))\n        self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),\n                                   self.bn2,\n                                   act_mod(**act_mod_args))\n        self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),\n                                   self.bn3,\n 
                                  act_mod(**act_mod_args))\n        self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),\n                                   self.bn4,\n                                   act_mod(**act_mod_args))\n        self.conv5 = nn.Sequential(nn.Conv1d(512, args.emb_dims, kernel_size=1, bias=False),\n                                   self.bn5,\n                                   act_mod(**act_mod_args))\n        self.linear1 = nn.Linear(args.emb_dims*2, 512, bias=False)\n        self.bn6 = nn.BatchNorm1d(512)\n        self.dp1 = nn.Dropout(p=args.dropout)\n        self.linear2 = nn.Linear(512, 256)\n        self.bn7 = nn.BatchNorm1d(256)\n        self.dp2 = nn.Dropout(p=args.dropout)\n        self.linear3 = nn.Linear(256, output_channels)\n\n    def forward(self, x):\n        batch_size = x.size(0)\n        x = get_graph_feature(x, k=self.k)\n        x = self.conv1(x)\n        x1 = x.max(dim=-1, keepdim=False)[0]\n\n        x = get_graph_feature(x1, k=self.k)\n        x = self.conv2(x)\n        x2 = x.max(dim=-1, keepdim=False)[0]\n\n        x = get_graph_feature(x2, k=self.k)\n        x = self.conv3(x)\n        x3 = x.max(dim=-1, keepdim=False)[0]\n\n        x = get_graph_feature(x3, k=self.k)\n        x = self.conv4(x)\n        x4 = x.max(dim=-1, keepdim=False)[0]\n\n        x = torch.cat((x1, x2, x3, x4), dim=1)\n\n        x = self.conv5(x)\n        x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)\n        x = torch.cat((x1, x2), 1)\n\n        if self.leaky_relu:\n            act = lambda y: F.leaky_relu(y, negative_slope=0.2)\n        else:\n            act = F.relu\n\n        x = act(self.bn6(self.linear1(x)))\n        x = self.dp1(x)\n        x = act(self.bn7(self.linear2(x)))\n        x = self.dp2(x)\n        x = self.linear3(x)\n        return x\n"
  },
  {
    "path": "dgcnn/pytorch/util.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Author: Yue Wang\n@Contact: yuewangx@mit.edu\n@File: util\n@Time: 4/5/19 3:47 PM\n\"\"\"\n\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1)\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n\n\nclass IOStream():\n    def __init__(self, path):\n        self.f = open(path, 'a')\n\n    def cprint(self, text):\n        print(text)\n        self.f.write(text+'\\n')\n        self.f.flush()\n\n    def close(self):\n        self.f.close()\n"
  },
  {
    "path": "dgcnn/tensorflow/README.md",
    "content": "# Dynamic Graph CNN for Learning on Point Clouds (TensorFlow)\n\n## Point Cloud Classification\n* Run the training script:\n\n``` bash\npython train.py\n```\n\n* Run the evaluation script after training finished:\n\n``` bash\npython evalutate.py\n\n```\n"
  },
  {
    "path": "dgcnn/tensorflow/evaluate.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport argparse\nimport socket\nimport importlib\nimport time\nimport os\nimport scipy.misc\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, 'models'))\nsys.path.append(os.path.join(BASE_DIR, 'utils'))\nimport provider\nimport pc_util\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='dgcnn', help='Model name: dgcnn [default: dgcnn]')\nparser.add_argument('--batch_size', type=int, default=4, help='Batch Size during training [default: 1]')\nparser.add_argument('--num_point', type=int, default=1024, help='Point Number [256/512/1024/2048] [default: 1024]')\nparser.add_argument('--model_path', default='log/model.ckpt', help='model checkpoint file path [default: log/model.ckpt]')\nparser.add_argument('--dump_dir', default='dump', help='dump folder path [dump]')\nparser.add_argument('--visu', action='store_true', help='Whether to dump image for error case [default: False]')\nFLAGS = parser.parse_args()\n\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMODEL_PATH = FLAGS.model_path\nGPU_INDEX = FLAGS.gpu\nMODEL = importlib.import_module(FLAGS.model) # import network module\nDUMP_DIR = FLAGS.dump_dir\nif not os.path.exists(DUMP_DIR): os.mkdir(DUMP_DIR)\nLOG_FOUT = open(os.path.join(DUMP_DIR, 'log_evaluate.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nNUM_CLASSES = 40\nSHAPE_NAMES = [line.rstrip() for line in \\\n    open(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/shape_names.txt'))] \n\nHOSTNAME = socket.gethostname()\n\n# ModelNet40 official train/test split\nTRAIN_FILES = provider.getDataFiles( \\\n    os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/train_files.txt'))\nTEST_FILES = provider.getDataFiles(\\\n    os.path.join(BASE_DIR, 
'data/modelnet40_ply_hdf5_2048/test_files.txt'))\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef evaluate(num_votes):\n    is_training = False\n     \n    with tf.device('/gpu:'+str(GPU_INDEX)):\n        pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n        is_training_pl = tf.placeholder(tf.bool, shape=())\n\n        # simple model\n        pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl)\n        loss = MODEL.get_loss(pred, labels_pl, end_points)\n        \n        # Add ops to save and restore all the variables.\n        saver = tf.train.Saver()\n        \n    # Create a session\n    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    config.allow_soft_placement = True\n    config.log_device_placement = True\n    sess = tf.Session(config=config)\n\n    # Restore variables from disk.\n    saver.restore(sess, MODEL_PATH)\n    log_string(\"Model restored.\")\n\n    ops = {'pointclouds_pl': pointclouds_pl,\n           'labels_pl': labels_pl,\n           'is_training_pl': is_training_pl,\n           'pred': pred,\n           'loss': loss}\n\n    eval_one_epoch(sess, ops, num_votes)\n\n   \ndef eval_one_epoch(sess, ops, num_votes=1, topk=1):\n    error_cnt = 0\n    is_training = False\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n    fout = open(os.path.join(DUMP_DIR, 'pred_label.txt'), 'w')\n    for fn in range(len(TEST_FILES)):\n        log_string('----'+str(fn)+'----')\n        current_data, current_label = provider.loadDataFile(TEST_FILES[fn])\n        current_data = current_data[:,0:NUM_POINT,:]\n        current_label = np.squeeze(current_label)\n        print(current_data.shape)\n        \n        file_size = current_data.shape[0]\n        num_batches = file_size // BATCH_SIZE\n        
print(file_size)\n        \n        for batch_idx in range(num_batches):\n            start_idx = batch_idx * BATCH_SIZE\n            end_idx = (batch_idx+1) * BATCH_SIZE\n            cur_batch_size = end_idx - start_idx\n            \n            # Aggregating BEG\n            batch_loss_sum = 0 # sum of losses for the batch\n            batch_pred_sum = np.zeros((cur_batch_size, NUM_CLASSES)) # score for classes\n            batch_pred_classes = np.zeros((cur_batch_size, NUM_CLASSES)) # 0/1 for classes\n            for vote_idx in range(num_votes):\n                rotated_data = provider.rotate_point_cloud_by_angle(current_data[start_idx:end_idx, :, :],\n                                                  vote_idx/float(num_votes) * np.pi * 2)\n                feed_dict = {ops['pointclouds_pl']: rotated_data,\n                             ops['labels_pl']: current_label[start_idx:end_idx],\n                             ops['is_training_pl']: is_training}\n                loss_val, pred_val = sess.run([ops['loss'], ops['pred']],\n                                          feed_dict=feed_dict)\n                batch_pred_sum += pred_val\n                batch_pred_val = np.argmax(pred_val, 1)\n                for el_idx in range(cur_batch_size):\n                    batch_pred_classes[el_idx, batch_pred_val[el_idx]] += 1\n                batch_loss_sum += (loss_val * cur_batch_size / float(num_votes))\n            # pred_val_topk = np.argsort(batch_pred_sum, axis=-1)[:,-1*np.array(range(topk))-1]\n            # pred_val = np.argmax(batch_pred_classes, 1)\n            pred_val = np.argmax(batch_pred_sum, 1)\n            # Aggregating END\n            \n            correct = np.sum(pred_val == current_label[start_idx:end_idx])\n            # correct = np.sum(pred_val_topk[:,0:topk] == label_val)\n            total_correct += correct\n            total_seen += cur_batch_size\n            loss_sum += batch_loss_sum\n\n            for i in range(start_idx, end_idx):\n     
           l = current_label[i]\n                total_seen_class[l] += 1\n                total_correct_class[l] += (pred_val[i-start_idx] == l)\n                fout.write('%d, %d\\n' % (pred_val[i-start_idx], l))\n                \n                if pred_val[i-start_idx] != l and FLAGS.visu: # ERROR CASE, DUMP!\n                    img_filename = '%d_label_%s_pred_%s.jpg' % (error_cnt, SHAPE_NAMES[l],\n                                                           SHAPE_NAMES[pred_val[i-start_idx]])\n                    img_filename = os.path.join(DUMP_DIR, img_filename)\n                    output_img = pc_util.point_cloud_three_views(np.squeeze(current_data[i, :, :]))\n                    scipy.misc.imsave(img_filename, output_img)\n                    error_cnt += 1\n                \n    log_string('eval mean loss: %f' % (loss_sum / float(total_seen)))\n    log_string('eval accuracy: %f' % (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n    \n    class_accuracies = np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float)\n    for i, name in enumerate(SHAPE_NAMES):\n        log_string('%10s:\\t%0.3f' % (name, class_accuracies[i]))\n    \n\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        evaluate(num_votes=12)\n    LOG_FOUT.close()\n"
  },
  {
    "path": "dgcnn/tensorflow/models/dgcnn.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport math\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nsys.path.append(os.path.join(BASE_DIR, '../../utils'))\nimport tf_util\nfrom transform_nets import input_transform_net\n\n\ndef placeholder_inputs(batch_size, num_point):\n  pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 3))\n  labels_pl = tf.placeholder(tf.int32, shape=(batch_size))\n  return pointclouds_pl, labels_pl\n\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n  \"\"\" Classification PointNet, input is BxNx3, output Bx40 \"\"\"\n  batch_size = point_cloud.get_shape()[0].value\n  num_point = point_cloud.get_shape()[1].value\n  end_points = {}\n  k = 20\n\n  adj_matrix = tf_util.pairwise_distance(point_cloud)\n  nn_idx = tf_util.knn(adj_matrix, k=k)\n  edge_feature = tf_util.get_edge_feature(point_cloud, nn_idx=nn_idx, k=k)\n\n  with tf.variable_scope('transform_net1') as sc:\n    transform = input_transform_net(edge_feature, is_training, bn_decay, K=3)\n\n  point_cloud_transformed = tf.matmul(point_cloud, transform)\n  adj_matrix = tf_util.pairwise_distance(point_cloud_transformed)\n  nn_idx = tf_util.knn(adj_matrix, k=k)\n  edge_feature = tf_util.get_edge_feature(point_cloud_transformed, nn_idx=nn_idx, k=k)\n\n  net = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='dgcnn1', bn_decay=bn_decay)\n  net = tf.reduce_max(net, axis=-2, keep_dims=True)\n  net1 = net\n\n  adj_matrix = tf_util.pairwise_distance(net)\n  nn_idx = tf_util.knn(adj_matrix, k=k)\n  edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=k)\n\n  net = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, 
is_training=is_training,\n                       scope='dgcnn2', bn_decay=bn_decay)\n  net = tf.reduce_max(net, axis=-2, keep_dims=True)\n  net2 = net\n \n  adj_matrix = tf_util.pairwise_distance(net)\n  nn_idx = tf_util.knn(adj_matrix, k=k)\n  edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=k)  \n\n  net = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='dgcnn3', bn_decay=bn_decay)\n  net = tf.reduce_max(net, axis=-2, keep_dims=True)\n  net3 = net\n\n  adj_matrix = tf_util.pairwise_distance(net)\n  nn_idx = tf_util.knn(adj_matrix, k=k)\n  edge_feature = tf_util.get_edge_feature(net, nn_idx=nn_idx, k=k)  \n  \n  net = tf_util.conv2d(edge_feature, 128, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='dgcnn4', bn_decay=bn_decay)\n  net = tf.reduce_max(net, axis=-2, keep_dims=True)\n  net4 = net\n\n  net = tf_util.conv2d(tf.concat([net1, net2, net3, net4], axis=-1), 1024, [1, 1], \n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='agg', bn_decay=bn_decay)\n \n  net = tf.reduce_max(net, axis=1, keep_dims=True) \n\n  # MLP on global point cloud vector\n  net = tf.reshape(net, [batch_size, -1]) \n  net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,\n                                scope='fc1', bn_decay=bn_decay)\n  net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training,\n                         scope='dp1')\n  net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,\n                                scope='fc2', bn_decay=bn_decay)\n  net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training,\n                        scope='dp2')\n  net = tf_util.fully_connected(net, 40, 
activation_fn=None, scope='fc3')\n\n  return net, end_points\n\n\ndef get_loss(pred, label, end_points):\n  \"\"\" pred: B*NUM_CLASSES,\n      label: B, \"\"\"\n  labels = tf.one_hot(indices=label, depth=40)\n  loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=pred, label_smoothing=0.2)\n  classify_loss = tf.reduce_mean(loss)\n  return classify_loss\n\n\nif __name__=='__main__':\n  batch_size = 2\n  num_pt = 124\n  pos_dim = 3\n\n  input_feed = np.random.rand(batch_size, num_pt, pos_dim)\n  label_feed = np.random.rand(batch_size)\n  label_feed[label_feed>=0.5] = 1\n  label_feed[label_feed<0.5] = 0\n  label_feed = label_feed.astype(np.int32)\n\n  # # np.save('./debug/input_feed.npy', input_feed)\n  # input_feed = np.load('./debug/input_feed.npy')\n  # print(input_feed)\n\n  with tf.Graph().as_default():\n    input_pl, label_pl = placeholder_inputs(batch_size, num_pt)\n    pos, ftr = get_model(input_pl, tf.constant(True))\n    # loss = get_loss(logits, label_pl, None)\n\n    with tf.Session() as sess:\n      sess.run(tf.global_variables_initializer())\n      feed_dict = {input_pl: input_feed, label_pl: label_feed}\n      res1, res2 = sess.run([pos, ftr], feed_dict=feed_dict)\n      print(res1.shape)\n      print(res1)\n\n      print(res2.shape)\n      print(res2)\n"
  },
  {
    "path": "dgcnn/tensorflow/models/transform_nets.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tf_util\n\ndef input_transform_net(edge_feature, is_training, bn_decay=None, K=3, is_dist=False):\n  \"\"\" Input (XYZ) Transform Net, input is BxNx3 gray image\n    Return:\n      Transformation matrix of size 3xK \"\"\"\n  batch_size = edge_feature.get_shape()[0].value\n  num_point = edge_feature.get_shape()[1].value\n\n  # input_image = tf.expand_dims(point_cloud, -1)\n  net = tf_util.conv2d(edge_feature, 64, [1,1],\n             padding='VALID', stride=[1,1],\n             bn=True, is_training=is_training,\n             scope='tconv1', bn_decay=bn_decay, is_dist=is_dist)\n  net = tf_util.conv2d(net, 128, [1,1],\n             padding='VALID', stride=[1,1],\n             bn=True, is_training=is_training,\n             scope='tconv2', bn_decay=bn_decay, is_dist=is_dist)\n  \n  net = tf.reduce_max(net, axis=-2, keep_dims=True)\n  \n  net = tf_util.conv2d(net, 1024, [1,1],\n             padding='VALID', stride=[1,1],\n             bn=True, is_training=is_training,\n             scope='tconv3', bn_decay=bn_decay, is_dist=is_dist)\n  net = tf_util.max_pool2d(net, [num_point,1],\n               padding='VALID', scope='tmaxpool')\n\n  net = tf.reshape(net, [batch_size, -1])\n  net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,\n                  scope='tfc1', bn_decay=bn_decay,is_dist=is_dist)\n  net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,\n                  scope='tfc2', bn_decay=bn_decay,is_dist=is_dist)\n\n  with tf.variable_scope('transform_XYZ') as sc:\n    # assert(K==3)\n    with tf.device('/cpu:0'):\n      weights = tf.get_variable('weights', [256, K*K],\n                    initializer=tf.constant_initializer(0.0),\n                    dtype=tf.float32)\n      biases = 
tf.get_variable('biases', [K*K],\n                   initializer=tf.constant_initializer(0.0),\n                   dtype=tf.float32)\n    biases += tf.constant(np.eye(K).flatten(), dtype=tf.float32)\n    transform = tf.matmul(net, weights)\n    transform = tf.nn.bias_add(transform, biases)\n\n  transform = tf.reshape(transform, [batch_size, K, K])\n  return transform"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/README.md",
    "content": "## Part segmentation\n\n### Dataset \n\nLoad the data for part segmentation.\n\n```\nsh +x download_data.sh\n```\n\n### Train\n\nTrain the model on 2 GPUs, each with 12 GB memeory. \n\n```\npython train_multi_gpu.py\n```\n\nModel parameters are saved every 5 epochs in \"train_results/trained_models/\".\n\n### Evaluation\n\nTo evaluate the model saved after epoch n, \n\n```\npython test.py --model_path train_results/trained_models/epoch_n.ckpt\n```\n\nFor example, if we want to test the model saved after 175 epochs (provided), \n\n```\npython test.py --model_path train_results/trained_models/epoch_175.ckpt\n```\n"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/download_data.sh",
    "content": "#!/bin/bash\n\n# Download original ShapeNetPart dataset (around 1GB) ['PartAnnotation']\nwget https://shapenet.cs.stanford.edu/ericyi/shapenetcore_partanno_v0.zip\nunzip shapenetcore_partanno_v0.zip\nrm shapenetcore_partanno_v0.zip\n\n# Download HDF5 for ShapeNet Part segmentation (around 346MB) ['hdf5_data']\nwget https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip\nunzip shapenet_part_seg_hdf5_data.zip\nrm shapenet_part_seg_hdf5_data.zip\n"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/part_seg_model.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport math\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(os.path.dirname(BASE_DIR))\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nsys.path.append(os.path.join(BASE_DIR, '../models'))\nsys.path.append(os.path.join(BASE_DIR, '../'))\nimport tf_util\nfrom transform_nets import input_transform_net\n\ndef get_model(point_cloud, input_label, is_training, cat_num, part_num, \\\n    batch_size, num_point, weight_decay, bn_decay=None):\n\n  batch_size = point_cloud.get_shape()[0].value\n  num_point = point_cloud.get_shape()[1].value\n  input_image = tf.expand_dims(point_cloud, -1)\n\n  k = 20\n\n  adj = tf_util.pairwise_distance(point_cloud)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature = tf_util.get_edge_feature(input_image, nn_idx=nn_idx, k=k)\n\n  with tf.variable_scope('transform_net1') as sc:\n    transform = input_transform_net(edge_feature, is_training, bn_decay, K=3, is_dist=True)\n  point_cloud_transformed = tf.matmul(point_cloud, transform)\n  \n  input_image = tf.expand_dims(point_cloud_transformed, -1)\n  adj = tf_util.pairwise_distance(point_cloud_transformed)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature = tf_util.get_edge_feature(input_image, nn_idx=nn_idx, k=k)\n\n  out1 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv1', bn_decay=bn_decay, is_dist=True)\n  \n  out2 = tf_util.conv2d(out1, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv2', bn_decay=bn_decay, is_dist=True)\n\n  net_1 = tf.reduce_max(out2, axis=-2, keep_dims=True)\n\n\n\n  adj = tf_util.pairwise_distance(net_1)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature 
= tf_util.get_edge_feature(net_1, nn_idx=nn_idx, k=k)\n\n  out3 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv3', bn_decay=bn_decay, is_dist=True)\n\n  out4 = tf_util.conv2d(out3, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv4', bn_decay=bn_decay, is_dist=True)\n  \n  net_2 = tf.reduce_max(out4, axis=-2, keep_dims=True)\n  \n  \n\n  adj = tf_util.pairwise_distance(net_2)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature = tf_util.get_edge_feature(net_2, nn_idx=nn_idx, k=k)\n\n  out5 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv5', bn_decay=bn_decay, is_dist=True)\n\n  # out6 = tf_util.conv2d(out5, 64, [1,1],\n  #                      padding='VALID', stride=[1,1],\n  #                      bn=True, is_training=is_training, weight_decay=weight_decay,\n  #                      scope='adj_conv6', bn_decay=bn_decay, is_dist=True)\n\n  net_3 = tf.reduce_max(out5, axis=-2, keep_dims=True)\n\n\n\n  out7 = tf_util.conv2d(tf.concat([net_1, net_2, net_3], axis=-1), 1024, [1, 1], \n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='adj_conv7', bn_decay=bn_decay, is_dist=True)\n\n  out_max = tf_util.max_pool2d(out7, [num_point, 1], padding='VALID', scope='maxpool')\n\n\n  one_hot_label_expand = tf.reshape(input_label, [batch_size, 1, 1, cat_num])\n  one_hot_label_expand = tf_util.conv2d(one_hot_label_expand, 64, [1, 1], \n                       padding='VALID', stride=[1,1],\n                       
bn=True, is_training=is_training,\n                       scope='one_hot_label_expand', bn_decay=bn_decay, is_dist=True)\n  out_max = tf.concat(axis=3, values=[out_max, one_hot_label_expand])\n  expand = tf.tile(out_max, [1, num_point, 1, 1])\n\n  concat = tf.concat(axis=3, values=[expand, \n                                     net_1,\n                                     net_2,\n                                     net_3])\n\n  net2 = tf_util.conv2d(concat, 256, [1,1], padding='VALID', stride=[1,1], bn_decay=bn_decay,\n            bn=True, is_training=is_training, scope='seg/conv1', weight_decay=weight_decay, is_dist=True)\n  net2 = tf_util.dropout(net2, keep_prob=0.6, is_training=is_training, scope='seg/dp1')\n  net2 = tf_util.conv2d(net2, 256, [1,1], padding='VALID', stride=[1,1], bn_decay=bn_decay,\n            bn=True, is_training=is_training, scope='seg/conv2', weight_decay=weight_decay, is_dist=True)\n  net2 = tf_util.dropout(net2, keep_prob=0.6, is_training=is_training, scope='seg/dp2')\n  net2 = tf_util.conv2d(net2, 128, [1,1], padding='VALID', stride=[1,1], bn_decay=bn_decay,\n            bn=True, is_training=is_training, scope='seg/conv3', weight_decay=weight_decay, is_dist=True)\n  net2 = tf_util.conv2d(net2, part_num, [1,1], padding='VALID', stride=[1,1], activation_fn=None, \n            bn=False, scope='seg/conv4', weight_decay=weight_decay, is_dist=True)\n\n  net2 = tf.reshape(net2, [batch_size, num_point, part_num])\n\n  return net2\n\n\ndef get_loss(seg_pred, seg):\n  per_instance_seg_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=seg_pred, labels=seg), axis=1)\n  seg_loss = tf.reduce_mean(per_instance_seg_loss)\n  per_instance_seg_pred_res = tf.argmax(seg_pred, 2)\n  \n  return seg_loss, per_instance_seg_loss, per_instance_seg_pred_res\n\n"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/test.py",
    "content": "import argparse\nimport tensorflow as tf\nimport json\nimport numpy as np\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.dirname(BASE_DIR))\nimport provider\nimport part_seg_model as model\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model_path', default='train_results/trained_models/epoch_160.ckpt', help='Model checkpoint path')\nFLAGS = parser.parse_args()\n\n# DEFAULT SETTINGS\npretrained_model_path = FLAGS.model_path \nhdf5_data_dir = os.path.join(BASE_DIR, './hdf5_data')\nply_data_dir = os.path.join(BASE_DIR, './PartAnnotation')\ngpu_to_use = 0\noutput_dir = os.path.join(BASE_DIR, './test_results')\noutput_verbose = False  \n\n# MAIN SCRIPT\npoint_num = 3000            \nbatch_size = 1\n\ntest_file_list = os.path.join(BASE_DIR, 'testing_ply_file_list.txt')\n\noid2cpid = json.load(open(os.path.join(hdf5_data_dir, 'overallid_to_catid_partid.json'), 'r')) \n\nobject2setofoid = {}\nfor idx in range(len(oid2cpid)):\n  objid, pid = oid2cpid[idx]\n  if not objid in object2setofoid.keys():\n    object2setofoid[objid] = []\n  object2setofoid[objid].append(idx)\n\nall_obj_cat_file = os.path.join(hdf5_data_dir, 'all_object_categories.txt')\nfin = open(all_obj_cat_file, 'r')\nlines = [line.rstrip() for line in fin.readlines()]\nobjcats = [line.split()[1] for line in lines] \nobjnames = [line.split()[0] for line in lines] \non2oid = {objcats[i]:i for i in range(len(objcats))} \nfin.close()\n\ncolor_map_file = os.path.join(hdf5_data_dir, 'part_color_mapping.json')\ncolor_map = json.load(open(color_map_file, 'r'))\n\nNUM_OBJ_CATS = 16\nNUM_PART_CATS = 50\n\ncpid2oid = json.load(open(os.path.join(hdf5_data_dir, 'catid_partid_to_overallid.json'), 'r'))\n\ndef printout(flog, data):\n  print(data)\n  flog.write(data + '\\n')\n\ndef output_color_point_cloud(data, seg, out_file):\n  with open(out_file, 'w') as f:\n    l = len(seg)\n    for i in range(l):\n      color 
= color_map[seg[i]]\n      f.write('v %f %f %f %f %f %f\\n' % (data[i][0], data[i][1], data[i][2], color[0], color[1], color[2]))\n\ndef output_color_point_cloud_red_blue(data, seg, out_file):\n  with open(out_file, 'w') as f:\n    l = len(seg)\n    for i in range(l):\n      if seg[i] == 1:\n        color = [0, 0, 1]\n      elif seg[i] == 0:\n        color = [1, 0, 0]\n      else:\n        color = [0, 0, 0]\n\n      f.write('v %f %f %f %f %f %f\\n' % (data[i][0], data[i][1], data[i][2], color[0], color[1], color[2]))\n\n\ndef pc_normalize(pc):\n  l = pc.shape[0]\n  centroid = np.mean(pc, axis=0)\n  pc = pc - centroid\n  m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n  pc = pc / m\n  return pc\n\ndef placeholder_inputs():\n  pointclouds_ph = tf.placeholder(tf.float32, shape=(batch_size, point_num, 3))\n  input_label_ph = tf.placeholder(tf.float32, shape=(batch_size, NUM_OBJ_CATS))\n  return pointclouds_ph, input_label_ph\n\ndef load_pts_seg_files(pts_file, seg_file, catid):\n  with open(pts_file, 'r') as f:\n    pts_str = [item.rstrip() for item in f.readlines()]\n    pts = np.array([np.float32(s.split()) for s in pts_str], dtype=np.float32)\n  with open(seg_file, 'r') as f:\n    part_ids = np.array([int(item.rstrip()) for item in f.readlines()], dtype=np.uint8)\n    seg = np.array([cpid2oid[catid+'_'+str(x)] for x in part_ids])\n  return pts, seg\n\ndef pc_augment_to_point_num(pts, pn):\n  assert(pts.shape[0] <= pn)\n  cur_len = pts.shape[0]\n  res = np.array(pts)\n  while cur_len < pn:\n    res = np.concatenate((res, pts))\n    cur_len += pts.shape[0]\n  return res[:pn, :]\n\ndef convert_label_to_one_hot(labels):\n  label_one_hot = np.zeros((labels.shape[0], NUM_OBJ_CATS))\n  for idx in 
range(labels.shape[0]):\n    label_one_hot[idx, labels[idx]] = 1\n  return label_one_hot\n\ndef predict():\n  is_training = False\n  \n  with tf.device('/gpu:'+str(gpu_to_use)):\n    pointclouds_ph, input_label_ph = placeholder_inputs()\n    is_training_ph = tf.placeholder(tf.bool, shape=())\n\n    seg_pred = model.get_model(pointclouds_ph, input_label_ph, \\\n        cat_num=NUM_OBJ_CATS, part_num=NUM_PART_CATS, is_training=is_training_ph, \\\n        batch_size=batch_size, num_point=point_num, weight_decay=0.0, bn_decay=None)\n    \n  saver = tf.train.Saver()\n\n  config = tf.ConfigProto()\n  config.gpu_options.allow_growth = True\n  config.allow_soft_placement = True\n\n  with tf.Session(config=config) as sess:\n    if not os.path.exists(output_dir):\n      os.mkdir(output_dir)\n\n    flog = open(os.path.join(output_dir, 'log.txt'), 'a')\n\n    printout(flog, 'Loading model %s' % pretrained_model_path)\n    saver.restore(sess, pretrained_model_path)\n    printout(flog, 'Model restored.')\n    \n    batch_data = np.zeros([batch_size, point_num, 3]).astype(np.float32)\n\n    total_acc = 0.0\n    total_seen = 0\n    total_acc_iou = 0.0\n\n    total_per_cat_acc = np.zeros((NUM_OBJ_CATS)).astype(np.float32)\n    total_per_cat_iou = np.zeros((NUM_OBJ_CATS)).astype(np.float32)\n    total_per_cat_seen = np.zeros((NUM_OBJ_CATS)).astype(np.int32)\n\n    ffiles = open(test_file_list, 'r')\n    lines = [line.rstrip() for line in ffiles.readlines()]\n    pts_files = [line.split()[0] for line in lines]\n    seg_files = [line.split()[1] for line in lines]\n    labels = [line.split()[2] for line in lines]\n    ffiles.close()\n\n    len_pts_files = len(pts_files)\n    for shape_idx in range(len_pts_files):\n      if shape_idx % 100 == 0:\n        printout(flog, '%d/%d ...' 
% (shape_idx, len_pts_files))\n\n      cur_gt_label = on2oid[labels[shape_idx]] # 0/1/.../15\n\n      cur_label_one_hot = np.zeros((1, NUM_OBJ_CATS), dtype=np.float32)\n      cur_label_one_hot[0, cur_gt_label] = 1\n\n      pts_file_to_load = os.path.join(ply_data_dir, pts_files[shape_idx])\n      seg_file_to_load = os.path.join(ply_data_dir, seg_files[shape_idx])\n\n      pts, seg = load_pts_seg_files(pts_file_to_load, seg_file_to_load, objcats[cur_gt_label])\n      ori_point_num = len(seg)\n\n      batch_data[0, ...] = pc_augment_to_point_num(pc_normalize(pts), point_num)\n\n      seg_pred_res = sess.run(seg_pred, feed_dict={\n            pointclouds_ph: batch_data,\n            input_label_ph: cur_label_one_hot, \n            is_training_ph: is_training})\n\n      seg_pred_res = seg_pred_res[0, ...]\n\n      iou_oids = object2setofoid[objcats[cur_gt_label]]\n      non_cat_labels = list(set(np.arange(NUM_PART_CATS)).difference(set(iou_oids)))\n\n      mini = np.min(seg_pred_res)\n      seg_pred_res[:, non_cat_labels] = mini - 1000\n\n      seg_pred_val = np.argmax(seg_pred_res, axis=1)[:ori_point_num]\n\n      seg_acc = np.mean(seg_pred_val == seg)\n\n      total_acc += seg_acc\n      total_seen += 1\n\n      total_per_cat_seen[cur_gt_label] += 1\n      total_per_cat_acc[cur_gt_label] += seg_acc\n\n      mask = np.int32(seg_pred_val == seg)\n\n      total_iou = 0.0\n      iou_log = ''\n      for oid in iou_oids:\n        n_pred = np.sum(seg_pred_val == oid)\n        n_gt = np.sum(seg == oid)\n        n_intersect = np.sum(np.int32(seg == oid) * mask)\n        n_union = n_pred + n_gt - n_intersect\n        iou_log += '_' + str(n_pred)+'_'+str(n_gt)+'_'+str(n_intersect)+'_'+str(n_union)+'_'\n        if n_union == 0:\n          total_iou += 1\n          iou_log += '_1\\n'\n        else:\n          total_iou += n_intersect * 1.0 / n_union\n          iou_log += '_'+str(n_intersect * 1.0 / n_union)+'\\n'\n\n      avg_iou = total_iou / len(iou_oids)\n      total_acc_iou 
+= avg_iou\n      total_per_cat_iou[cur_gt_label] += avg_iou\n      \n      if output_verbose:\n        output_color_point_cloud(pts, seg, os.path.join(output_dir, str(shape_idx)+'_gt.obj'))\n        output_color_point_cloud(pts, seg_pred_val, os.path.join(output_dir, str(shape_idx)+'_pred.obj'))\n        output_color_point_cloud_red_blue(pts, np.int32(seg == seg_pred_val), \n            os.path.join(output_dir, str(shape_idx)+'_diff.obj'))\n\n        with open(os.path.join(output_dir, str(shape_idx)+'.log'), 'w') as fout:\n          fout.write('Total Point: %d\\n\\n' % ori_point_num)\n          fout.write('Ground Truth: %s\\n' % objnames[cur_gt_label])\n          fout.write('Accuracy: %f\\n' % seg_acc)\n          fout.write('IoU: %f\\n\\n' % avg_iou)\n          fout.write('IoU details: %s\\n' % iou_log)\n\n    printout(flog, 'Accuracy: %f' % (total_acc / total_seen))\n    printout(flog, 'IoU: %f' % (total_acc_iou / total_seen))\n\n    for cat_idx in range(NUM_OBJ_CATS):\n      printout(flog, '\\t ' + objcats[cat_idx] + ' Total Number: ' + str(total_per_cat_seen[cat_idx]))\n      if total_per_cat_seen[cat_idx] > 0:\n        printout(flog, '\\t ' + objcats[cat_idx] + ' Accuracy: ' + \\\n            str(total_per_cat_acc[cat_idx] / total_per_cat_seen[cat_idx]))\n        printout(flog, '\\t ' + objcats[cat_idx] + ' IoU: '+ \\\n            str(total_per_cat_iou[cat_idx] / total_per_cat_seen[cat_idx]))\n    \nwith tf.Graph().as_default():\n  predict()\n"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/testing_ply_file_list.txt",
    "content": "03001627/points/355fa0f35b61fdd7aa74a6b5ee13e775.pts 03001627/expert_verified/points_label/355fa0f35b61fdd7aa74a6b5ee13e775.seg 03001627\n04379243/points/408c3db9b4ee6be2e9f3e9c758fef992.pts 04379243/expert_verified/points_label/408c3db9b4ee6be2e9f3e9c758fef992.seg 04379243\n02691156/points/a1708ad923f3b51abbf3143b1cb6076a.pts 02691156/expert_verified/points_label/a1708ad923f3b51abbf3143b1cb6076a.seg 02691156\n03001627/points/2783a969fa42cdecbe31379a5751d820.pts 03001627/expert_verified/points_label/2783a969fa42cdecbe31379a5751d820.seg 03001627\n03001627/points/ed56af61297594bf1c4300651205adf3.pts 03001627/expert_verified/points_label/ed56af61297594bf1c4300651205adf3.seg 03001627\n03001627/points/c0857de5101f704f3c5e1addd9922bf2.pts 03001627/expert_verified/points_label/c0857de5101f704f3c5e1addd9922bf2.seg 03001627\n02691156/points/b72804a8bd3dbbaca8607f540cc62ba.pts 02691156/expert_verified/points_label/b72804a8bd3dbbaca8607f540cc62ba.seg 02691156\n03001627/points/df609533cd186278398c7598b0d2e5d5.pts 03001627/expert_verified/points_label/df609533cd186278398c7598b0d2e5d5.seg 03001627\n04379243/points/c24b7a315dbf2f3178ab7c8b395efbfe.pts 04379243/expert_verified/points_label/c24b7a315dbf2f3178ab7c8b395efbfe.seg 04379243\n03636649/points/b8c87ad9d4930983a8d82fc8a3e54728.pts 03636649/expert_verified/points_label/b8c87ad9d4930983a8d82fc8a3e54728.seg 03636649\n02691156/points/8add45a11c9fcb446eb5821e78d8898a.pts 02691156/expert_verified/points_label/8add45a11c9fcb446eb5821e78d8898a.seg 02691156\n04379243/points/94d6518cf1e00eaac013a7bed5288654.pts 04379243/expert_verified/points_label/94d6518cf1e00eaac013a7bed5288654.seg 04379243\n04379243/points/1dbb8fd083f96ad279b3e1be3524f72f.pts 04379243/expert_verified/points_label/1dbb8fd083f96ad279b3e1be3524f72f.seg 04379243\n03001627/points/452115e132539be4daaaeef365d8f6e5.pts 03001627/expert_verified/points_label/452115e132539be4daaaeef365d8f6e5.seg 03001627\n04379243/points/bd25dfa62c3c2cf772bd03149507655d.pts 
04379243/expert_verified/points_label/bd25dfa62c3c2cf772bd03149507655d.seg 04379243\n03948459/points/b1bbe535a833635d91f9af3df5b0c8fc.pts 03948459/expert_verified/points_label/b1bbe535a833635d91f9af3df5b0c8fc.seg 03948459\n04379243/points/d41c8af82fe98a019fb4103277a6b93.pts 04379243/expert_verified/points_label/d41c8af82fe98a019fb4103277a6b93.seg 04379243\n03001627/points/3109a0b9f9bc5fecb4cd1bd556007aed.pts 03001627/expert_verified/points_label/3109a0b9f9bc5fecb4cd1bd556007aed.seg 03001627\n03001627/points/d38129a3301d31350b1fc43ca5e85e.pts 03001627/expert_verified/points_label/d38129a3301d31350b1fc43ca5e85e.seg 03001627\n03636649/points/495af808806f1727a753b1b88fff4abb.pts 03636649/expert_verified/points_label/495af808806f1727a753b1b88fff4abb.seg 03636649\n04379243/points/4d3cc502d4444c848cbb8bac2032149c.pts 04379243/expert_verified/points_label/4d3cc502d4444c848cbb8bac2032149c.seg 04379243\n02691156/points/ed7e1a38fe33830b87697d3904b168b.pts 02691156/expert_verified/points_label/ed7e1a38fe33830b87697d3904b168b.seg 02691156\n04379243/points/cf076ced8264a480cce90f0d61ed7a70.pts 04379243/expert_verified/points_label/cf076ced8264a480cce90f0d61ed7a70.seg 04379243\n04379243/points/c04b363fd824528bd42b9650f19dd425.pts 04379243/expert_verified/points_label/c04b363fd824528bd42b9650f19dd425.seg 04379243\n04379243/points/9705c2610980d0fdb2d0500bdfc28f70.pts 04379243/expert_verified/points_label/9705c2610980d0fdb2d0500bdfc28f70.seg 04379243\n02691156/points/de29a1335c332a5ef7bc9a344bb7bae5.pts 02691156/expert_verified/points_label/de29a1335c332a5ef7bc9a344bb7bae5.seg 02691156\n03001627/points/75d0664363f418efe461a9a9741d9415.pts 03001627/expert_verified/points_label/75d0664363f418efe461a9a9741d9415.seg 03001627\n03001627/points/3421ad5a45b85f7a4b3c42e318f3affc.pts 03001627/expert_verified/points_label/3421ad5a45b85f7a4b3c42e318f3affc.seg 03001627\n03001627/points/c67a255a26e30abb6b9f3980da0b1dff.pts 03001627/expert_verified/points_label/c67a255a26e30abb6b9f3980da0b1dff.seg 
03001627\n04379243/points/6791c92944c99c029f1deb04fb8ae481.pts 04379243/expert_verified/points_label/6791c92944c99c029f1deb04fb8ae481.seg 04379243\n04379243/points/4b5536d2e9c5b9b7febad4f49b26ec52.pts 04379243/expert_verified/points_label/4b5536d2e9c5b9b7febad4f49b26ec52.seg 04379243\n04379243/points/c5fc6c1e0d446d37acce1c6e70b58979.pts 04379243/expert_verified/points_label/c5fc6c1e0d446d37acce1c6e70b58979.seg 04379243\n03001627/points/9c8d3c5779871705d22218517e73100.pts 03001627/expert_verified/points_label/9c8d3c5779871705d22218517e73100.seg 03001627\n04379243/points/4f70d14dc276a9539a83764a2641fc5c.pts 04379243/expert_verified/points_label/4f70d14dc276a9539a83764a2641fc5c.seg 04379243\n04379243/points/9d8f0444a8c09adff0d4c8f4dd125299.pts 04379243/expert_verified/points_label/9d8f0444a8c09adff0d4c8f4dd125299.seg 04379243\n04379243/points/57fbb082f660c4f7716b680dedf77108.pts 04379243/expert_verified/points_label/57fbb082f660c4f7716b680dedf77108.seg 04379243\n02958343/points/cb19594e73992a3d51008e496c6cfd2e.pts 02958343/expert_verified/points_label/cb19594e73992a3d51008e496c6cfd2e.seg 02958343\n03624134/points/9d424831d05d363d870906b5178d97bd.pts 03624134/expert_verified/points_label/9d424831d05d363d870906b5178d97bd.seg 03624134\n03001627/points/b884ff155c4117a7508dd48e67ad44bc.pts 03001627/expert_verified/points_label/b884ff155c4117a7508dd48e67ad44bc.seg 03001627\n02958343/points/7a5eba46ba4cfac35aa429db266f0c30.pts 02958343/expert_verified/points_label/7a5eba46ba4cfac35aa429db266f0c30.seg 02958343\n02691156/points/4def53f149137451b0009f08a96f38a9.pts 02691156/expert_verified/points_label/4def53f149137451b0009f08a96f38a9.seg 02691156\n03001627/points/fa8f7c225d3b9f1def4a09e7eb872bd9.pts 03001627/expert_verified/points_label/fa8f7c225d3b9f1def4a09e7eb872bd9.seg 03001627\n04225987/points/f5d7698b5a57d61226e0640b67de606.pts 04225987/expert_verified/points_label/f5d7698b5a57d61226e0640b67de606.seg 04225987\n03001627/points/9aece6c6436cde6fd9ac1bf1eddffd24.pts 
03001627/expert_verified/points_label/9aece6c6436cde6fd9ac1bf1eddffd24.seg 03001627\n04099429/points/15474cf9caa757a528eba1f0b7744e9.pts 04099429/expert_verified/points_label/15474cf9caa757a528eba1f0b7744e9.seg 04099429\n02691156/points/571cfb1da3d5b3704b5910188444efc8.pts 02691156/expert_verified/points_label/571cfb1da3d5b3704b5910188444efc8.seg 02691156\n03636649/points/5d97be0e2414bfe0a8930422448288ea.pts 03636649/expert_verified/points_label/5d97be0e2414bfe0a8930422448288ea.seg 03636649\n02958343/points/648ceaad362345518a6cf8c6b92417f2.pts 02958343/expert_verified/points_label/648ceaad362345518a6cf8c6b92417f2.seg 02958343\n03001627/points/8a845bb67ee8486d6199d6fe090be061.pts 03001627/expert_verified/points_label/8a845bb67ee8486d6199d6fe090be061.seg 03001627\n04379243/points/3645a90e02d16f0584aa8fa8b66ba302.pts 04379243/expert_verified/points_label/3645a90e02d16f0584aa8fa8b66ba302.seg 04379243\n04379243/points/ecf3d40b14300d3c0c26b04b6b8e17a.pts 04379243/expert_verified/points_label/ecf3d40b14300d3c0c26b04b6b8e17a.seg 04379243\n04379243/points/a860e5edcaec268e615bcf72f8385966.pts 04379243/expert_verified/points_label/a860e5edcaec268e615bcf72f8385966.seg 04379243\n03001627/points/5edfec789343e0c3319f1c1eee46f332.pts 03001627/expert_verified/points_label/5edfec789343e0c3319f1c1eee46f332.seg 03001627\n02691156/points/92fb0d6a866fe7aca8607f540cc62ba.pts 02691156/expert_verified/points_label/92fb0d6a866fe7aca8607f540cc62ba.seg 02691156\n02958343/points/e4886a4d0c6ea960fe21694bd5f519d1.pts 02958343/expert_verified/points_label/e4886a4d0c6ea960fe21694bd5f519d1.seg 02958343\n03636649/points/e3ee6b31e54e95b7d42b9650f19dd425.pts 03636649/expert_verified/points_label/e3ee6b31e54e95b7d42b9650f19dd425.seg 03636649\n03467517/points/d546e034a6c659a425cd348738a8052a.pts 03467517/expert_verified/points_label/d546e034a6c659a425cd348738a8052a.seg 03467517\n03001627/points/26a6ce644504c5fa22963ea1e168015d.pts 
03001627/expert_verified/points_label/26a6ce644504c5fa22963ea1e168015d.seg 03001627\n02691156/points/b2b1c1d5c757af8a7209009cfb89d4bd.pts 02691156/expert_verified/points_label/b2b1c1d5c757af8a7209009cfb89d4bd.seg 02691156\n03467517/points/4bd2492d56d6b8c537b5646da91e9ed0.pts 03467517/expert_verified/points_label/4bd2492d56d6b8c537b5646da91e9ed0.seg 03467517\n04379243/points/92ed9344484dd026dfd21203bf8b4b46.pts 04379243/expert_verified/points_label/92ed9344484dd026dfd21203bf8b4b46.seg 04379243\n04379243/points/2d1d8a2f976387bd3145205f02ff9fc5.pts 04379243/expert_verified/points_label/2d1d8a2f976387bd3145205f02ff9fc5.seg 04379243\n03467517/points/5b7fcd85ce6fd1931377689fa4e4b2d6.pts 03467517/expert_verified/points_label/5b7fcd85ce6fd1931377689fa4e4b2d6.seg 03467517\n02691156/points/4cee36a2e8dd3b24b87697d3904b168b.pts 02691156/expert_verified/points_label/4cee36a2e8dd3b24b87697d3904b168b.seg 02691156\n03001627/points/f23c1bb951fa8909bc01640b1b5116e7.pts 03001627/expert_verified/points_label/f23c1bb951fa8909bc01640b1b5116e7.seg 03001627\n04379243/points/370b45eeeb9b11416f04d49e4de95b59.pts 04379243/expert_verified/points_label/370b45eeeb9b11416f04d49e4de95b59.seg 04379243\n03001627/points/3885255ca5d75e69da2260dc4a1fc2c6.pts 03001627/expert_verified/points_label/3885255ca5d75e69da2260dc4a1fc2c6.seg 03001627\n02691156/points/452c18f8997c53741adbb4c4e06ad649.pts 02691156/expert_verified/points_label/452c18f8997c53741adbb4c4e06ad649.seg 02691156\n03001627/points/8b39b501c9fa4d349b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/8b39b501c9fa4d349b9f2eb77f5e247e.seg 03001627\n04379243/points/94966aa8a7a6f540f6807434c358ea12.pts 04379243/expert_verified/points_label/94966aa8a7a6f540f6807434c358ea12.seg 04379243\n03001627/points/9b6f17ce2db29c4c9ae35d137ece64f9.pts 03001627/expert_verified/points_label/9b6f17ce2db29c4c9ae35d137ece64f9.seg 03001627\n03467517/points/85bef84a26a91bff9ce363b13bdd195d.pts 
03467517/expert_verified/points_label/85bef84a26a91bff9ce363b13bdd195d.seg 03467517\n03624134/points/e98bc872371c852e15b040d25222e627.pts 03624134/expert_verified/points_label/e98bc872371c852e15b040d25222e627.seg 03624134\n04379243/points/5dff67091a2f7ef1ab988fe471b1bd06.pts 04379243/expert_verified/points_label/5dff67091a2f7ef1ab988fe471b1bd06.seg 04379243\n03001627/points/e6f37dff25ec4ca4f815ebdb2df45512.pts 03001627/expert_verified/points_label/e6f37dff25ec4ca4f815ebdb2df45512.seg 03001627\n02691156/points/85a15c26a6e9921ae008cc4902bfe3cd.pts 02691156/expert_verified/points_label/85a15c26a6e9921ae008cc4902bfe3cd.seg 02691156\n03001627/points/94371ddd6d62f7b762ec387b772e9e1.pts 03001627/expert_verified/points_label/94371ddd6d62f7b762ec387b772e9e1.seg 03001627\n02691156/points/4374a3b4b98e247b398db3ebdf468ed7.pts 02691156/expert_verified/points_label/4374a3b4b98e247b398db3ebdf468ed7.seg 02691156\n03948459/points/8fa02aab7237289667fdfbdf64f19325.pts 03948459/expert_verified/points_label/8fa02aab7237289667fdfbdf64f19325.seg 03948459\n04379243/points/9f1fcee83cacf964f4b6538438a0b930.pts 04379243/expert_verified/points_label/9f1fcee83cacf964f4b6538438a0b930.seg 04379243\n04225987/points/f5643778dbcd653655a834a7aafb0236.pts 04225987/expert_verified/points_label/f5643778dbcd653655a834a7aafb0236.seg 04225987\n03636649/points/cdbe11124dbf418167ac0fa90111fad0.pts 03636649/expert_verified/points_label/cdbe11124dbf418167ac0fa90111fad0.seg 03636649\n03001627/points/e3d23dc47ddd9620c9be65dfbd21428b.pts 03001627/expert_verified/points_label/e3d23dc47ddd9620c9be65dfbd21428b.seg 03001627\n03001627/points/efd0411eaf2396c4de7ed732f5aeea4.pts 03001627/expert_verified/points_label/efd0411eaf2396c4de7ed732f5aeea4.seg 03001627\n03636649/points/7ad15667f654fc08664b3b9b23ddfcbc.pts 03636649/expert_verified/points_label/7ad15667f654fc08664b3b9b23ddfcbc.seg 03636649\n04379243/points/55d5fce641343449d42b9650f19dd425.pts 
04379243/expert_verified/points_label/55d5fce641343449d42b9650f19dd425.seg 04379243\n03467517/points/a31ef3a8c70b789b93f0194265a9746c.pts 03467517/expert_verified/points_label/a31ef3a8c70b789b93f0194265a9746c.seg 03467517\n03001627/points/ccfc857f35c138ede785b88cc9024b2a.pts 03001627/expert_verified/points_label/ccfc857f35c138ede785b88cc9024b2a.seg 03001627\n02691156/points/e3fd510add7b1aa3c19eb6ab3736de88.pts 02691156/expert_verified/points_label/e3fd510add7b1aa3c19eb6ab3736de88.seg 02691156\n03636649/points/213d911cc489c352b5db3f95d706a0c9.pts 03636649/expert_verified/points_label/213d911cc489c352b5db3f95d706a0c9.seg 03636649\n04225987/points/c171d90db4c4ba56cdb1768065dafd0c.pts 04225987/expert_verified/points_label/c171d90db4c4ba56cdb1768065dafd0c.seg 04225987\n03797390/points/10f6e09036350e92b3f21f1137c3c347.pts 03797390/expert_verified/points_label/10f6e09036350e92b3f21f1137c3c347.seg 03797390\n02691156/points/a374b0448461438ef3d4cc10d9776c62.pts 02691156/expert_verified/points_label/a374b0448461438ef3d4cc10d9776c62.seg 02691156\n03001627/points/b6457a76f24de9f67aa6f8353fce2005.pts 03001627/expert_verified/points_label/b6457a76f24de9f67aa6f8353fce2005.seg 03001627\n03001627/points/7fe08cd7a9b76c1dcbde89e0c48a01bf.pts 03001627/expert_verified/points_label/7fe08cd7a9b76c1dcbde89e0c48a01bf.seg 03001627\n03001627/points/58867a00409c47c0813a1237d2827540.pts 03001627/expert_verified/points_label/58867a00409c47c0813a1237d2827540.seg 03001627\n02958343/points/65e3e2893669a09cc7b48e36e31209b9.pts 02958343/expert_verified/points_label/65e3e2893669a09cc7b48e36e31209b9.seg 02958343\n03948459/points/edec08542b9312b712b38b1d99376c0b.pts 03948459/expert_verified/points_label/edec08542b9312b712b38b1d99376c0b.seg 03948459\n03636649/points/cd80cc92cf732e8d8a17805dbfb751e2.pts 03636649/expert_verified/points_label/cd80cc92cf732e8d8a17805dbfb751e2.seg 03636649\n03467517/points/87650e8ff3d85672381b7fbf79296afb.pts 
03467517/expert_verified/points_label/87650e8ff3d85672381b7fbf79296afb.seg 03467517\n03636649/points/1e91664763d371937dd73da65dc0e6a7.pts 03636649/expert_verified/points_label/1e91664763d371937dd73da65dc0e6a7.seg 03636649\n04379243/points/104c8e90ecf0e5351ed672982b7954af.pts 04379243/expert_verified/points_label/104c8e90ecf0e5351ed672982b7954af.seg 04379243\n04379243/points/1834fac2f46a26f91933ffef19678834.pts 04379243/expert_verified/points_label/1834fac2f46a26f91933ffef19678834.seg 04379243\n04379243/points/ed0be8928caab4bdab610b0c94236463.pts 04379243/expert_verified/points_label/ed0be8928caab4bdab610b0c94236463.seg 04379243\n04379243/points/105f53a6471f3ceb4a420e3c1b966720.pts 04379243/expert_verified/points_label/105f53a6471f3ceb4a420e3c1b966720.seg 04379243\n04379243/points/7bf5f689da285153583ff8a5fc7c1869.pts 04379243/expert_verified/points_label/7bf5f689da285153583ff8a5fc7c1869.seg 04379243\n02958343/points/eface8341d001e9ceb01ae4a4788bd4f.pts 02958343/expert_verified/points_label/eface8341d001e9ceb01ae4a4788bd4f.seg 02958343\n03001627/points/517880899d26080471a782a4379556c7.pts 03001627/expert_verified/points_label/517880899d26080471a782a4379556c7.seg 03001627\n03001627/points/5ef3e4abd4386c8871bc6030acc85f1e.pts 03001627/expert_verified/points_label/5ef3e4abd4386c8871bc6030acc85f1e.seg 03001627\n03001627/points/3eb60e6679d1df1dde7eedbb2790491b.pts 03001627/expert_verified/points_label/3eb60e6679d1df1dde7eedbb2790491b.seg 03001627\n03001627/points/4702e6196503ff84f1c0e03f321d0b20.pts 03001627/expert_verified/points_label/4702e6196503ff84f1c0e03f321d0b20.seg 03001627\n02958343/points/b0a7789537663f7ba1ff2929b2f5cf19.pts 02958343/expert_verified/points_label/b0a7789537663f7ba1ff2929b2f5cf19.seg 02958343\n03636649/points/2ce7732982343c1d9792f6094a78f8d5.pts 03636649/expert_verified/points_label/2ce7732982343c1d9792f6094a78f8d5.seg 03636649\n03467517/points/78a75ce8dc8dc197dc2b574e941c815b.pts 
03467517/expert_verified/points_label/78a75ce8dc8dc197dc2b574e941c815b.seg 03467517\n03636649/points/348d6ddf9e02cbddf647dc544bb0ab61.pts 03636649/expert_verified/points_label/348d6ddf9e02cbddf647dc544bb0ab61.seg 03636649\n03001627/points/e56087cd55cce8b4f41a4361d0ca9bc8.pts 03001627/expert_verified/points_label/e56087cd55cce8b4f41a4361d0ca9bc8.seg 03001627\n03642806/points/4d3dde22f529195bc887d5d9a11f3155.pts 03642806/expert_verified/points_label/4d3dde22f529195bc887d5d9a11f3155.seg 03642806\n03001627/points/78e1977bc5f0f4041552c6ecbda964b.pts 03001627/expert_verified/points_label/78e1977bc5f0f4041552c6ecbda964b.seg 03001627\n04379243/points/44360c91a7e91098d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/44360c91a7e91098d93768e7b9b1eabf.seg 04379243\n02691156/points/52ca6970fb09b561f9f7510373841dd9.pts 02691156/expert_verified/points_label/52ca6970fb09b561f9f7510373841dd9.seg 02691156\n02958343/points/383f8d508b6f25f565d21723f535417.pts 02958343/expert_verified/points_label/383f8d508b6f25f565d21723f535417.seg 02958343\n03001627/points/d6da5457b0682e24696b74614952b2d0.pts 03001627/expert_verified/points_label/d6da5457b0682e24696b74614952b2d0.seg 03001627\n02691156/points/9f5dda6f01bbe29bf810506e9ae2dcc2.pts 02691156/expert_verified/points_label/9f5dda6f01bbe29bf810506e9ae2dcc2.seg 02691156\n03467517/points/35e77edd3ae6ad4993f0194265a9746c.pts 03467517/expert_verified/points_label/35e77edd3ae6ad4993f0194265a9746c.seg 03467517\n03001627/points/590d04438aeffbb58f447453fccbd9d3.pts 03001627/expert_verified/points_label/590d04438aeffbb58f447453fccbd9d3.seg 03001627\n03001627/points/cdfa898eadf316122056b4bd5d870b47.pts 03001627/expert_verified/points_label/cdfa898eadf316122056b4bd5d870b47.seg 03001627\n03001627/points/8e678a54f2ee4e5e492d9da2668ec34c.pts 03001627/expert_verified/points_label/8e678a54f2ee4e5e492d9da2668ec34c.seg 03001627\n04379243/points/1804dd6f5c827c1a4bf8d5f43e57b138.pts 
04379243/expert_verified/points_label/1804dd6f5c827c1a4bf8d5f43e57b138.seg 04379243\n02691156/points/23eed87ac79f1b152f9c405cf0817830.pts 02691156/expert_verified/points_label/23eed87ac79f1b152f9c405cf0817830.seg 02691156\n02691156/points/97bc5fffde64178f43afdb9c81ff2967.pts 02691156/expert_verified/points_label/97bc5fffde64178f43afdb9c81ff2967.seg 02691156\n03001627/points/3b1f1913f2bc0dc171dbe96559c7bcae.pts 03001627/expert_verified/points_label/3b1f1913f2bc0dc171dbe96559c7bcae.seg 03001627\n04379243/points/82e1c0b874b0a9e035cd53a06b1d2317.pts 04379243/expert_verified/points_label/82e1c0b874b0a9e035cd53a06b1d2317.seg 04379243\n03001627/points/e0a0d5c2ba6fdca215b55266697a17be.pts 03001627/expert_verified/points_label/e0a0d5c2ba6fdca215b55266697a17be.seg 03001627\n03636649/points/9b558be5e2b60e3eb09f0ca9c143fdfd.pts 03636649/expert_verified/points_label/9b558be5e2b60e3eb09f0ca9c143fdfd.seg 03636649\n03001627/points/813be9a8485050571563f0911e3e5fc0.pts 03001627/expert_verified/points_label/813be9a8485050571563f0911e3e5fc0.seg 03001627\n02958343/points/6ca9967adcf862a461c6c61410fc904b.pts 02958343/expert_verified/points_label/6ca9967adcf862a461c6c61410fc904b.seg 02958343\n03624134/points/5663637633c938d1395331ebe4786cd.pts 03624134/expert_verified/points_label/5663637633c938d1395331ebe4786cd.seg 03624134\n03636649/points/ec8dc2311d381a9e3d39d8012919dd25.pts 03636649/expert_verified/points_label/ec8dc2311d381a9e3d39d8012919dd25.seg 03636649\n04379243/points/b685208ccf38786a6f1e07a56c129dfc.pts 04379243/expert_verified/points_label/b685208ccf38786a6f1e07a56c129dfc.seg 04379243\n03636649/points/ce621e6df1ab9ae35d2cdb96c1afe34.pts 03636649/expert_verified/points_label/ce621e6df1ab9ae35d2cdb96c1afe34.seg 03636649\n02691156/points/b092d523bdd320e4ca8607f540cc62ba.pts 02691156/expert_verified/points_label/b092d523bdd320e4ca8607f540cc62ba.seg 02691156\n04379243/points/401fe961ec7b0cb5dcfcef693e7ec696.pts 
04379243/expert_verified/points_label/401fe961ec7b0cb5dcfcef693e7ec696.seg 04379243\n04225987/points/1e5fd1de723cc66cbb1ed6d4d8526a19.pts 04225987/expert_verified/points_label/1e5fd1de723cc66cbb1ed6d4d8526a19.seg 04225987\n03001627/points/b987a2ca54c6ddecb74697ced5978572.pts 03001627/expert_verified/points_label/b987a2ca54c6ddecb74697ced5978572.seg 03001627\n04379243/points/3e42e3386f4aea9277cf3bb06f394ad.pts 04379243/expert_verified/points_label/3e42e3386f4aea9277cf3bb06f394ad.seg 04379243\n02958343/points/1198255e3d20d2f323f3ca54768fe2ee.pts 02958343/expert_verified/points_label/1198255e3d20d2f323f3ca54768fe2ee.seg 02958343\n04379243/points/2b564ff0989caf58ab610b0c94236463.pts 04379243/expert_verified/points_label/2b564ff0989caf58ab610b0c94236463.seg 04379243\n03636649/points/941271c5d9b192eaccd8f9b9403fd602.pts 03636649/expert_verified/points_label/941271c5d9b192eaccd8f9b9403fd602.seg 03636649\n02691156/points/6aeae52e38f892a7e0091ae06332b2d5.pts 02691156/expert_verified/points_label/6aeae52e38f892a7e0091ae06332b2d5.seg 02691156\n04379243/points/4cdfd605352adcb0da13974b3533fb59.pts 04379243/expert_verified/points_label/4cdfd605352adcb0da13974b3533fb59.seg 04379243\n04379243/points/7c24e4f8778e224799a5e8f6c5baa224.pts 04379243/expert_verified/points_label/7c24e4f8778e224799a5e8f6c5baa224.seg 04379243\n03001627/points/6272c21e439e0205c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/6272c21e439e0205c8687ff9b0b4e4ac.seg 03001627\n02691156/points/acd8f367c36a3d84fc7a6d75b3d807ff.pts 02691156/expert_verified/points_label/acd8f367c36a3d84fc7a6d75b3d807ff.seg 02691156\n04379243/points/d58bdda16e6bba6f796740c80be6053.pts 04379243/expert_verified/points_label/d58bdda16e6bba6f796740c80be6053.seg 04379243\n03636649/points/f97506704760741b460fa882e24b7e4a.pts 03636649/expert_verified/points_label/f97506704760741b460fa882e24b7e4a.seg 03636649\n03636649/points/9f5c3ea9f8254b8bd42b9650f19dd425.pts 
03636649/expert_verified/points_label/9f5c3ea9f8254b8bd42b9650f19dd425.seg 03636649\n03797390/points/79e673336e836d1333becb3a9550cbb1.pts 03797390/expert_verified/points_label/79e673336e836d1333becb3a9550cbb1.seg 03797390\n03948459/points/2d573d37cce5b48b9f433921788191f3.pts 03948459/expert_verified/points_label/2d573d37cce5b48b9f433921788191f3.seg 03948459\n04379243/points/7aaad1c5c2be8c24a9ed7bb5b55809f8.pts 04379243/expert_verified/points_label/7aaad1c5c2be8c24a9ed7bb5b55809f8.seg 04379243\n04379243/points/c6c412c771ab0ae015a34fa27bdf3d03.pts 04379243/expert_verified/points_label/c6c412c771ab0ae015a34fa27bdf3d03.seg 04379243\n03467517/points/819251e11b46438ff6ff9bebca919581.pts 03467517/expert_verified/points_label/819251e11b46438ff6ff9bebca919581.seg 03467517\n03001627/points/51f4ea68be319fe8990e5087098e19c.pts 03001627/expert_verified/points_label/51f4ea68be319fe8990e5087098e19c.seg 03001627\n03467517/points/66b24797480ba515d57700c05b1862d8.pts 03467517/expert_verified/points_label/66b24797480ba515d57700c05b1862d8.seg 03467517\n03790512/points/9d3b07f4475d501e8249f134aca4c817.pts 03790512/expert_verified/points_label/9d3b07f4475d501e8249f134aca4c817.seg 03790512\n04379243/points/72cfb60a075369ab7252c133a7e17d94.pts 04379243/expert_verified/points_label/72cfb60a075369ab7252c133a7e17d94.seg 04379243\n04379243/points/12a2733fc5f6b31ef8574543281e850f.pts 04379243/expert_verified/points_label/12a2733fc5f6b31ef8574543281e850f.seg 04379243\n03636649/points/aed950102f1e9c7a659dda512294c744.pts 03636649/expert_verified/points_label/aed950102f1e9c7a659dda512294c744.seg 03636649\n03001627/points/3126c6e9277b775b245ac1812a4e4d0c.pts 03001627/expert_verified/points_label/3126c6e9277b775b245ac1812a4e4d0c.seg 03001627\n02958343/points/8decf42b145f98d148d2ba4615e03b21.pts 02958343/expert_verified/points_label/8decf42b145f98d148d2ba4615e03b21.seg 02958343\n03467517/points/2f9bd6e61e038d8fd4b4ae2ff4c58b57.pts 
03467517/expert_verified/points_label/2f9bd6e61e038d8fd4b4ae2ff4c58b57.seg 03467517\n03467517/points/6a983b2ff1b8a42e1285d7bfa3e922e4.pts 03467517/expert_verified/points_label/6a983b2ff1b8a42e1285d7bfa3e922e4.seg 03467517\n03261776/points/e33d6e8e39a75268957b6a4f3924d982.pts 03261776/expert_verified/points_label/e33d6e8e39a75268957b6a4f3924d982.seg 03261776\n04379243/points/fe2f2b120d84ed909b896cf832106977.pts 04379243/expert_verified/points_label/fe2f2b120d84ed909b896cf832106977.seg 04379243\n02958343/points/1328a95d69cefe32f200a72c9245aee7.pts 02958343/expert_verified/points_label/1328a95d69cefe32f200a72c9245aee7.seg 02958343\n03001627/points/58409b308683d908ca2bec46a3b47519.pts 03001627/expert_verified/points_label/58409b308683d908ca2bec46a3b47519.seg 03001627\n03001627/points/507a5070cde81fd867936ca58e67cec6.pts 03001627/expert_verified/points_label/507a5070cde81fd867936ca58e67cec6.seg 03001627\n04379243/points/ec68e1edbb7e9bc7e93cebb6ba9ca43e.pts 04379243/expert_verified/points_label/ec68e1edbb7e9bc7e93cebb6ba9ca43e.seg 04379243\n03001627/points/7facccfa81369078a8930422448288ea.pts 03001627/expert_verified/points_label/7facccfa81369078a8930422448288ea.seg 03001627\n03001627/points/be0c5a0e91c99e804e1a714ee619465a.pts 03001627/expert_verified/points_label/be0c5a0e91c99e804e1a714ee619465a.seg 03001627\n03001627/points/d73e46e07bdb3fe75fe4ecea39e8bd40.pts 03001627/expert_verified/points_label/d73e46e07bdb3fe75fe4ecea39e8bd40.seg 03001627\n03636649/points/122fb7bfa09c184ca249f8489bc060dd.pts 03636649/expert_verified/points_label/122fb7bfa09c184ca249f8489bc060dd.seg 03636649\n03001627/points/9ef3323c6ced7dfef313a0fb5fd4d79.pts 03001627/expert_verified/points_label/9ef3323c6ced7dfef313a0fb5fd4d79.seg 03001627\n02691156/points/d8452d4fe51f2bab3554ccf8c30febe7.pts 02691156/expert_verified/points_label/d8452d4fe51f2bab3554ccf8c30febe7.seg 02691156\n02691156/points/d59d75f52ac9b241ae0d772a1c85134a.pts 
02691156/expert_verified/points_label/d59d75f52ac9b241ae0d772a1c85134a.seg 02691156\n02691156/points/f9e80ce23d9536623fddedb0bf24c68a.pts 02691156/expert_verified/points_label/f9e80ce23d9536623fddedb0bf24c68a.seg 02691156\n02691156/points/e69631d34410f99ac4f72bf08dc79a6.pts 02691156/expert_verified/points_label/e69631d34410f99ac4f72bf08dc79a6.seg 02691156\n04379243/points/f7196ec7d732af5166decb1b3cdc5557.pts 04379243/expert_verified/points_label/f7196ec7d732af5166decb1b3cdc5557.seg 04379243\n03261776/points/c5e47b627cb7818f17e22b7299bb7bc6.pts 03261776/expert_verified/points_label/c5e47b627cb7818f17e22b7299bb7bc6.seg 03261776\n03001627/points/5a60c649a221293d72ed554eb3baedcc.pts 03001627/expert_verified/points_label/5a60c649a221293d72ed554eb3baedcc.seg 03001627\n04379243/points/b117aac2e13630bb5d23c9bbb429abf9.pts 04379243/expert_verified/points_label/b117aac2e13630bb5d23c9bbb429abf9.seg 04379243\n03642806/points/e4c34c87ed1bc2191ef7a71d6e01357e.pts 03642806/expert_verified/points_label/e4c34c87ed1bc2191ef7a71d6e01357e.seg 03642806\n02691156/points/3fb7ceab42d7b17219ba010ddb4974fe.pts 02691156/expert_verified/points_label/3fb7ceab42d7b17219ba010ddb4974fe.seg 02691156\n04379243/points/fc472163ea149f8e19fb4103277a6b93.pts 04379243/expert_verified/points_label/fc472163ea149f8e19fb4103277a6b93.seg 04379243\n03001627/points/5ef73c9bee1b4adcd019a8a03d4a2a3.pts 03001627/expert_verified/points_label/5ef73c9bee1b4adcd019a8a03d4a2a3.seg 03001627\n02691156/points/384e72f69e6f24404cb288947cda4a2c.pts 02691156/expert_verified/points_label/384e72f69e6f24404cb288947cda4a2c.seg 02691156\n03636649/points/3fca250636e2b47a8d0fc77aab7a8d33.pts 03636649/expert_verified/points_label/3fca250636e2b47a8d0fc77aab7a8d33.seg 03636649\n04379243/points/46957ba752c3554bd42b9650f19dd425.pts 04379243/expert_verified/points_label/46957ba752c3554bd42b9650f19dd425.seg 04379243\n03001627/points/bce7ff621a5440bb34ee5c94ebdf7f1d.pts 
03001627/expert_verified/points_label/bce7ff621a5440bb34ee5c94ebdf7f1d.seg 03001627\n02691156/points/66ae19841350ac2d4ba2821676102936.pts 02691156/expert_verified/points_label/66ae19841350ac2d4ba2821676102936.seg 02691156\n03001627/points/e53b07b648e8d041107a17cfae0b6df6.pts 03001627/expert_verified/points_label/e53b07b648e8d041107a17cfae0b6df6.seg 03001627\n03624134/points/d1c757548ead4a4d8d03ca4865da5b6.pts 03624134/expert_verified/points_label/d1c757548ead4a4d8d03ca4865da5b6.seg 03624134\n04379243/points/d19b4bde0766723c9b3bb0ef2a08be04.pts 04379243/expert_verified/points_label/d19b4bde0766723c9b3bb0ef2a08be04.seg 04379243\n03001627/points/6ecec258a1b6fe2a6fee8e2140acec9.pts 03001627/expert_verified/points_label/6ecec258a1b6fe2a6fee8e2140acec9.seg 03001627\n02691156/points/ab95a4e7f2d3cf9ca8607f540cc62ba.pts 02691156/expert_verified/points_label/ab95a4e7f2d3cf9ca8607f540cc62ba.seg 02691156\n03624134/points/b61c9b5f29ad581c860a45e027159a9a.pts 03624134/expert_verified/points_label/b61c9b5f29ad581c860a45e027159a9a.seg 03624134\n03001627/points/c7da2d72f9927f1881dff5c2e57ad46e.pts 03001627/expert_verified/points_label/c7da2d72f9927f1881dff5c2e57ad46e.seg 03001627\n04379243/points/b9886dd3c4a651f3664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/b9886dd3c4a651f3664b3b9b23ddfcbc.seg 04379243\n02691156/points/abc465975af79827dfb86dddee1d6ac3.pts 02691156/expert_verified/points_label/abc465975af79827dfb86dddee1d6ac3.seg 02691156\n03636649/points/7be01530bf43f2ed8a83637b92bdc7.pts 03636649/expert_verified/points_label/7be01530bf43f2ed8a83637b92bdc7.seg 03636649\n02691156/points/b81339a2f1dbc0de9598ceb95c7f0752.pts 02691156/expert_verified/points_label/b81339a2f1dbc0de9598ceb95c7f0752.seg 02691156\n03001627/points/69709cb300ae3784ee72e5c46412e9a7.pts 03001627/expert_verified/points_label/69709cb300ae3784ee72e5c46412e9a7.seg 03001627\n03001627/points/ec25a41ca233ed096e5a467428553af2.pts 03001627/expert_verified/points_label/ec25a41ca233ed096e5a467428553af2.seg 
03001627\n04379243/points/4e9394f9f64859aef4ef86403cccc399.pts 04379243/expert_verified/points_label/4e9394f9f64859aef4ef86403cccc399.seg 04379243\n04379243/points/c477235c02413bfc44d2ca62bee212a0.pts 04379243/expert_verified/points_label/c477235c02413bfc44d2ca62bee212a0.seg 04379243\n04379243/points/41b0491fdb14d41bd25ca1a27cf9bdec.pts 04379243/expert_verified/points_label/41b0491fdb14d41bd25ca1a27cf9bdec.seg 04379243\n02691156/points/59eecc0a983a27a8130cc35407fba74a.pts 02691156/expert_verified/points_label/59eecc0a983a27a8130cc35407fba74a.seg 02691156\n03467517/points/22129fab1497437cc3f912172873d52f.pts 03467517/expert_verified/points_label/22129fab1497437cc3f912172873d52f.seg 03467517\n04379243/points/6365205d2324234fc8a1efeb4b91d393.pts 04379243/expert_verified/points_label/6365205d2324234fc8a1efeb4b91d393.seg 04379243\n03001627/points/2a75b2bb82d7f77c3f9d6e0ade5188b0.pts 03001627/expert_verified/points_label/2a75b2bb82d7f77c3f9d6e0ade5188b0.seg 03001627\n03001627/points/8f226d6b3089d3b7bca860dd9b04c52c.pts 03001627/expert_verified/points_label/8f226d6b3089d3b7bca860dd9b04c52c.seg 03001627\n03624134/points/5e515b18ed17a418b056c98b2e5e5e4e.pts 03624134/expert_verified/points_label/5e515b18ed17a418b056c98b2e5e5e4e.seg 03624134\n02691156/points/5bc41589eba11a4e15477d594f1fbd99.pts 02691156/expert_verified/points_label/5bc41589eba11a4e15477d594f1fbd99.seg 02691156\n03001627/points/2bbf00f0c583fd8a4b3c42e318f3affc.pts 03001627/expert_verified/points_label/2bbf00f0c583fd8a4b3c42e318f3affc.seg 03001627\n03790512/points/9e9300a6e1caec217395d58407f193ba.pts 03790512/expert_verified/points_label/9e9300a6e1caec217395d58407f193ba.seg 03790512\n03636649/points/81894e0739e3fea9d49b2e04785f8492.pts 03636649/expert_verified/points_label/81894e0739e3fea9d49b2e04785f8492.seg 03636649\n02958343/points/cdc8453c63ffc13e20f29d4da2b76f7a.pts 02958343/expert_verified/points_label/cdc8453c63ffc13e20f29d4da2b76f7a.seg 02958343\n04379243/points/7a0b6685a30298fb8ae8d7de284e7d2.pts 
04379243/expert_verified/points_label/7a0b6685a30298fb8ae8d7de284e7d2.seg 04379243\n03001627/points/c5ee6b77f9f84adeed52100e321c9f3e.pts 03001627/expert_verified/points_label/c5ee6b77f9f84adeed52100e321c9f3e.seg 03001627\n04379243/points/4e87db85d5dab96822339a4b4aacca6b.pts 04379243/expert_verified/points_label/4e87db85d5dab96822339a4b4aacca6b.seg 04379243\n02958343/points/6dbae14e481e8fb9333e0bf0b765fa12.pts 02958343/expert_verified/points_label/6dbae14e481e8fb9333e0bf0b765fa12.seg 02958343\n03467517/points/bad8978268948ea3d3eb77b119df6d.pts 03467517/expert_verified/points_label/bad8978268948ea3d3eb77b119df6d.seg 03467517\n03001627/points/c552529c54b0612e53041c49040be3d5.pts 03001627/expert_verified/points_label/c552529c54b0612e53041c49040be3d5.seg 03001627\n02958343/points/dca8ed788347b28c171cf359a50c99bc.pts 02958343/expert_verified/points_label/dca8ed788347b28c171cf359a50c99bc.seg 02958343\n04379243/points/99720647e210078beaf288f952624966.pts 04379243/expert_verified/points_label/99720647e210078beaf288f952624966.seg 04379243\n03001627/points/b1f4b2c32f8a2fa77ee217c21e683487.pts 03001627/expert_verified/points_label/b1f4b2c32f8a2fa77ee217c21e683487.seg 03001627\n04379243/points/41cdb5b619790d5a74eb542502c2205f.pts 04379243/expert_verified/points_label/41cdb5b619790d5a74eb542502c2205f.seg 04379243\n04379243/points/a25141a07c77c25467de2aaf749e5256.pts 04379243/expert_verified/points_label/a25141a07c77c25467de2aaf749e5256.seg 04379243\n04379243/points/e9c3a3aa2278608bec15b38012222fa8.pts 04379243/expert_verified/points_label/e9c3a3aa2278608bec15b38012222fa8.seg 04379243\n03636649/points/8e025c4aa0b0201a81a172d69c52a28a.pts 03636649/expert_verified/points_label/8e025c4aa0b0201a81a172d69c52a28a.seg 03636649\n03001627/points/e175bc785390e8f6c05575120a46cd3b.pts 03001627/expert_verified/points_label/e175bc785390e8f6c05575120a46cd3b.seg 03001627\n02691156/points/ecb4ae05d7dd135a619550d2af0b6117.pts 
02691156/expert_verified/points_label/ecb4ae05d7dd135a619550d2af0b6117.seg 02691156\n02691156/points/87069f21b11c180799a771d197c7b487.pts 02691156/expert_verified/points_label/87069f21b11c180799a771d197c7b487.seg 02691156\n02691156/points/ca11efc8928c10908b96ae1a0a8b84ec.pts 02691156/expert_verified/points_label/ca11efc8928c10908b96ae1a0a8b84ec.seg 02691156\n03790512/points/365c1f92a54c8cb52a45a87054fa7272.pts 03790512/expert_verified/points_label/365c1f92a54c8cb52a45a87054fa7272.seg 03790512\n03636649/points/23040992da19679aaa7cb30470f3273c.pts 03636649/expert_verified/points_label/23040992da19679aaa7cb30470f3273c.seg 03636649\n02691156/points/9441549e323552f2f001dddaf44c449b.pts 02691156/expert_verified/points_label/9441549e323552f2f001dddaf44c449b.seg 02691156\n02958343/points/17bfc66c6bc0a99d68c415156b102065.pts 02958343/expert_verified/points_label/17bfc66c6bc0a99d68c415156b102065.seg 02958343\n03001627/points/671d34c27cc0f1bf2deeb5ec76cf103b.pts 03001627/expert_verified/points_label/671d34c27cc0f1bf2deeb5ec76cf103b.seg 03001627\n03642806/points/464edfe14e9fa45c3394926146371698.pts 03642806/expert_verified/points_label/464edfe14e9fa45c3394926146371698.seg 03642806\n04379243/points/279c8601278e827dab610b0c94236463.pts 04379243/expert_verified/points_label/279c8601278e827dab610b0c94236463.seg 04379243\n04379243/points/29d9c6d84c6a126917b431cae0dd70ed.pts 04379243/expert_verified/points_label/29d9c6d84c6a126917b431cae0dd70ed.seg 04379243\n04379243/points/5d3d902051858e56ed1397afd2317e5b.pts 04379243/expert_verified/points_label/5d3d902051858e56ed1397afd2317e5b.seg 04379243\n02958343/points/aa78d4465ae18312711f9e3a79a13dcf.pts 02958343/expert_verified/points_label/aa78d4465ae18312711f9e3a79a13dcf.seg 02958343\n03001627/points/d561ff6788ab46517b016084e2ae95e.pts 03001627/expert_verified/points_label/d561ff6788ab46517b016084e2ae95e.seg 03001627\n03001627/points/b24ed89d85b74771216fff6094e6695c.pts 
03001627/expert_verified/points_label/b24ed89d85b74771216fff6094e6695c.seg 03001627\n03636649/points/f6eeb5d67c32616648fda83c10428379.pts 03636649/expert_verified/points_label/f6eeb5d67c32616648fda83c10428379.seg 03636649\n03001627/points/3b3a9f4e3aa9f2f4d39a194653571dfc.pts 03001627/expert_verified/points_label/3b3a9f4e3aa9f2f4d39a194653571dfc.seg 03001627\n03001627/points/bd0b06e158bcee8ac0d89fc15154c9a2.pts 03001627/expert_verified/points_label/bd0b06e158bcee8ac0d89fc15154c9a2.seg 03001627\n04379243/points/89251f322490e7047e38640a31d0bc3.pts 04379243/expert_verified/points_label/89251f322490e7047e38640a31d0bc3.seg 04379243\n03001627/points/935f5e58e9e15231febad4f49b26ec52.pts 03001627/expert_verified/points_label/935f5e58e9e15231febad4f49b26ec52.seg 03001627\n03467517/points/8f59fee745f1e37ea5c8e9fc8b2242fd.pts 03467517/expert_verified/points_label/8f59fee745f1e37ea5c8e9fc8b2242fd.seg 03467517\n02691156/points/fddcb2b3d45ce98e641c309f1fd7e183.pts 02691156/expert_verified/points_label/fddcb2b3d45ce98e641c309f1fd7e183.seg 02691156\n03001627/points/d915d2f1664bf76e71a70be9f12ce8b0.pts 03001627/expert_verified/points_label/d915d2f1664bf76e71a70be9f12ce8b0.seg 03001627\n02958343/points/1ae9732840a315afab2c2809513f396e.pts 02958343/expert_verified/points_label/1ae9732840a315afab2c2809513f396e.seg 02958343\n04379243/points/b658e507c84d6202610c2a68437007d6.pts 04379243/expert_verified/points_label/b658e507c84d6202610c2a68437007d6.seg 04379243\n02958343/points/707d1e19b465d075adbfb30d8d1b297e.pts 02958343/expert_verified/points_label/707d1e19b465d075adbfb30d8d1b297e.seg 02958343\n04379243/points/5b74412eba257e5182b796aa5845e185.pts 04379243/expert_verified/points_label/5b74412eba257e5182b796aa5845e185.seg 04379243\n03636649/points/a801be11157a7f243d39d8012919dd25.pts 03636649/expert_verified/points_label/a801be11157a7f243d39d8012919dd25.seg 03636649\n02691156/points/26e10058cf9835aaca8607f540cc62ba.pts 
02691156/expert_verified/points_label/26e10058cf9835aaca8607f540cc62ba.seg 02691156\n03636649/points/bc704db7b62582e5d1cbf3e52b9b6237.pts 03636649/expert_verified/points_label/bc704db7b62582e5d1cbf3e52b9b6237.seg 03636649\n02691156/points/d2e2e23f5be557e2d1ab3b031c100cb1.pts 02691156/expert_verified/points_label/d2e2e23f5be557e2d1ab3b031c100cb1.seg 02691156\n03001627/points/920af478601258e24762da3a3017ade.pts 03001627/expert_verified/points_label/920af478601258e24762da3a3017ade.seg 03001627\n03001627/points/3ffd794e5100258483bc207d8a5912e3.pts 03001627/expert_verified/points_label/3ffd794e5100258483bc207d8a5912e3.seg 03001627\n04379243/points/69c536d9e450cb79436e6787c76ef3f0.pts 04379243/expert_verified/points_label/69c536d9e450cb79436e6787c76ef3f0.seg 04379243\n04379243/points/6cf6a546e2ecbffe815a7efb12912.pts 04379243/expert_verified/points_label/6cf6a546e2ecbffe815a7efb12912.seg 04379243\n03001627/points/815f436a40c28da51f56aa11cd5e0c3e.pts 03001627/expert_verified/points_label/815f436a40c28da51f56aa11cd5e0c3e.seg 03001627\n03642806/points/4504a4d244d05ddbf5f79806bd65844f.pts 03642806/expert_verified/points_label/4504a4d244d05ddbf5f79806bd65844f.seg 03642806\n04379243/points/8ad9868947e7391113625562b56161f0.pts 04379243/expert_verified/points_label/8ad9868947e7391113625562b56161f0.seg 04379243\n03001627/points/6b9c3d42724275cf7a5c8cd74a7bc29a.pts 03001627/expert_verified/points_label/6b9c3d42724275cf7a5c8cd74a7bc29a.seg 03001627\n04379243/points/67e32538a35a5011a0ab1d82ef09f78f.pts 04379243/expert_verified/points_label/67e32538a35a5011a0ab1d82ef09f78f.seg 04379243\n03624134/points/2743e37a65e198d51592d7a04a86fa53.pts 03624134/expert_verified/points_label/2743e37a65e198d51592d7a04a86fa53.seg 03624134\n04379243/points/12df5c215f4364b7fe388cf6c4c3705d.pts 04379243/expert_verified/points_label/12df5c215f4364b7fe388cf6c4c3705d.seg 04379243\n02958343/points/55e0897c0ac089a6da5cb3be8feeaadc.pts 02958343/expert_verified/points_label/55e0897c0ac089a6da5cb3be8feeaadc.seg 
02958343\n02773838/points/4e4fcfffec161ecaed13f430b2941481.pts 02773838/expert_verified/points_label/4e4fcfffec161ecaed13f430b2941481.seg 02773838\n04379243/points/8ce70dead5119191cc3492a06e9bd850.pts 04379243/expert_verified/points_label/8ce70dead5119191cc3492a06e9bd850.seg 04379243\n02691156/points/e033b6ad34586a86cc1c9e8218bfe7fc.pts 02691156/expert_verified/points_label/e033b6ad34586a86cc1c9e8218bfe7fc.seg 02691156\n03636649/points/600b2f00113ad714e2367b9e27f16a71.pts 03636649/expert_verified/points_label/600b2f00113ad714e2367b9e27f16a71.seg 03636649\n04379243/points/a74cad1781afed87dcfcef693e7ec696.pts 04379243/expert_verified/points_label/a74cad1781afed87dcfcef693e7ec696.seg 04379243\n03001627/points/5402eecc67e489502fa77440dcb93214.pts 03001627/expert_verified/points_label/5402eecc67e489502fa77440dcb93214.seg 03001627\n03001627/points/d5bd6ea417eba6ce456cbf78e1e89022.pts 03001627/expert_verified/points_label/d5bd6ea417eba6ce456cbf78e1e89022.seg 03001627\n03001627/points/d4edd167061dac5f52a3901fa1436b1a.pts 03001627/expert_verified/points_label/d4edd167061dac5f52a3901fa1436b1a.seg 03001627\n03636649/points/9fc3ddc511f4ef62dced62abd38a02b0.pts 03636649/expert_verified/points_label/9fc3ddc511f4ef62dced62abd38a02b0.seg 03636649\n02691156/points/92a83ecaa10e8d3f78e919a72d9a39e7.pts 02691156/expert_verified/points_label/92a83ecaa10e8d3f78e919a72d9a39e7.seg 02691156\n03001627/points/fee36ec8c8ae503fc68456e8da5b9a30.pts 03001627/expert_verified/points_label/fee36ec8c8ae503fc68456e8da5b9a30.seg 03001627\n04379243/points/1df409cfefbb51658b9b51ae4415d5aa.pts 04379243/expert_verified/points_label/1df409cfefbb51658b9b51ae4415d5aa.seg 04379243\n03001627/points/76283716a2c6586e266d673a6188bf4c.pts 03001627/expert_verified/points_label/76283716a2c6586e266d673a6188bf4c.seg 03001627\n04379243/points/29b2aaca87d19a3c5759f4335ff2e408.pts 04379243/expert_verified/points_label/29b2aaca87d19a3c5759f4335ff2e408.seg 04379243\n04379243/points/21ca4d36a0f6fa69b937d98d58545fa.pts 
04379243/expert_verified/points_label/21ca4d36a0f6fa69b937d98d58545fa.seg 04379243\n02691156/points/da1acb401541235be4d2773f0358b43b.pts 02691156/expert_verified/points_label/da1acb401541235be4d2773f0358b43b.seg 02691156\n04379243/points/553c416f33c5e5e18b9b51ae4415d5aa.pts 04379243/expert_verified/points_label/553c416f33c5e5e18b9b51ae4415d5aa.seg 04379243\n04379243/points/174832b73cd6d91c9856fa70a578baeb.pts 04379243/expert_verified/points_label/174832b73cd6d91c9856fa70a578baeb.seg 04379243\n02691156/points/1c2e9dedbcf511e616a077c4c0fc1181.pts 02691156/expert_verified/points_label/1c2e9dedbcf511e616a077c4c0fc1181.seg 02691156\n03001627/points/893c689b192bbe33ebadcdfba7971b71.pts 03001627/expert_verified/points_label/893c689b192bbe33ebadcdfba7971b71.seg 03001627\n04379243/points/52037005fbff92d08fa35606145b47dc.pts 04379243/expert_verified/points_label/52037005fbff92d08fa35606145b47dc.seg 04379243\n04225987/points/e38a4e6fb32b51a1bebb1fbb949ea955.pts 04225987/expert_verified/points_label/e38a4e6fb32b51a1bebb1fbb949ea955.seg 04225987\n03636649/points/42bc0dce81734d892610e2a20d7c4b61.pts 03636649/expert_verified/points_label/42bc0dce81734d892610e2a20d7c4b61.seg 03636649\n04379243/points/cb7ebc943b1b424988386fe1512ed26f.pts 04379243/expert_verified/points_label/cb7ebc943b1b424988386fe1512ed26f.seg 04379243\n03624134/points/2d6e9b23e171760c3e332fb3cb6ebe50.pts 03624134/expert_verified/points_label/2d6e9b23e171760c3e332fb3cb6ebe50.seg 03624134\n04379243/points/d05ff7b47febe58a656db3f863b4b796.pts 04379243/expert_verified/points_label/d05ff7b47febe58a656db3f863b4b796.seg 04379243\n03636649/points/e178ab3b967c7fddc901d9dddb735c9f.pts 03636649/expert_verified/points_label/e178ab3b967c7fddc901d9dddb735c9f.seg 03636649\n04379243/points/527b2d1e964f056383be1aa5a5ab0c80.pts 04379243/expert_verified/points_label/527b2d1e964f056383be1aa5a5ab0c80.seg 04379243\n03001627/points/f1a1bb6ad29d703078d928ba1c4a6f75.pts 
03001627/expert_verified/points_label/f1a1bb6ad29d703078d928ba1c4a6f75.seg 03001627\n04379243/points/ed9dc0937009dc031311158f08f2982a.pts 04379243/expert_verified/points_label/ed9dc0937009dc031311158f08f2982a.seg 04379243\n02691156/points/e41c5719ad09055f1b880c747ee1f83.pts 02691156/expert_verified/points_label/e41c5719ad09055f1b880c747ee1f83.seg 02691156\n04379243/points/34bbe284f7499df071a782a4379556c7.pts 04379243/expert_verified/points_label/34bbe284f7499df071a782a4379556c7.seg 04379243\n02691156/points/973df01cea43c7f690b1d6deb98feec6.pts 02691156/expert_verified/points_label/973df01cea43c7f690b1d6deb98feec6.seg 02691156\n03001627/points/ed97d1c954fca49851ceffe90913a32.pts 03001627/expert_verified/points_label/ed97d1c954fca49851ceffe90913a32.seg 03001627\n03001627/points/3a74e3d5172ee94fdef1c01cbd4ae0c.pts 03001627/expert_verified/points_label/3a74e3d5172ee94fdef1c01cbd4ae0c.seg 03001627\n04379243/points/194b279c7e892a2d15fa8082e5524f79.pts 04379243/expert_verified/points_label/194b279c7e892a2d15fa8082e5524f79.seg 04379243\n04379243/points/23ece3bf871619366ff454af1e8947f3.pts 04379243/expert_verified/points_label/23ece3bf871619366ff454af1e8947f3.seg 04379243\n02691156/points/7de379891610f5feaf7dd1bfd65143a9.pts 02691156/expert_verified/points_label/7de379891610f5feaf7dd1bfd65143a9.seg 02691156\n04379243/points/54ba7e77a2bf5fe3158b7df020486ff2.pts 04379243/expert_verified/points_label/54ba7e77a2bf5fe3158b7df020486ff2.seg 04379243\n03001627/points/39825fb4341ebd1ccb002c1e2b5fc68b.pts 03001627/expert_verified/points_label/39825fb4341ebd1ccb002c1e2b5fc68b.seg 03001627\n03001627/points/a32febea4a0ac30171a782a4379556c7.pts 03001627/expert_verified/points_label/a32febea4a0ac30171a782a4379556c7.seg 03001627\n02691156/points/b9ba988dd9a6cf426e8b6dd39a855b69.pts 02691156/expert_verified/points_label/b9ba988dd9a6cf426e8b6dd39a855b69.seg 02691156\n02691156/points/37b1f7f02c4b87dbca8607f540cc62ba.pts 
02691156/expert_verified/points_label/37b1f7f02c4b87dbca8607f540cc62ba.seg 02691156\n04379243/points/8ce538a671c6e684d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/8ce538a671c6e684d93768e7b9b1eabf.seg 04379243\n04225987/points/48bf45bffab55d7cf14c37b285d25cdf.pts 04225987/expert_verified/points_label/48bf45bffab55d7cf14c37b285d25cdf.seg 04225987\n02691156/points/820ba20e5da8325f19ba010ddb4974fe.pts 02691156/expert_verified/points_label/820ba20e5da8325f19ba010ddb4974fe.seg 02691156\n02691156/points/ff52c059efaca3c1ca8607f540cc62ba.pts 02691156/expert_verified/points_label/ff52c059efaca3c1ca8607f540cc62ba.seg 02691156\n04379243/points/99737ff619cae25d6effbd64ad6b71b8.pts 04379243/expert_verified/points_label/99737ff619cae25d6effbd64ad6b71b8.seg 04379243\n04379243/points/e3b7fbed310c2c397c8d78b9aede742.pts 04379243/expert_verified/points_label/e3b7fbed310c2c397c8d78b9aede742.seg 04379243\n03001627/points/e8eedd37cb054e37b59d74a7c956bd18.pts 03001627/expert_verified/points_label/e8eedd37cb054e37b59d74a7c956bd18.seg 03001627\n03790512/points/8134a965cc0b134bb37378f3c85478b4.pts 03790512/expert_verified/points_label/8134a965cc0b134bb37378f3c85478b4.seg 03790512\n03636649/points/da5f13f4048dbd72fcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/da5f13f4048dbd72fcb8d8c6d4df8143.seg 03636649\n03001627/points/f5d8dd0309401ebac47a35332c17cce2.pts 03001627/expert_verified/points_label/f5d8dd0309401ebac47a35332c17cce2.seg 03001627\n02691156/points/521eab9363fdc2a07209009cfb89d4bd.pts 02691156/expert_verified/points_label/521eab9363fdc2a07209009cfb89d4bd.seg 02691156\n03636649/points/b1e552b454366a9d7787152e5befb05b.pts 03636649/expert_verified/points_label/b1e552b454366a9d7787152e5befb05b.seg 03636649\n02958343/points/8590a6c8270375e34b5a812ecf553410.pts 02958343/expert_verified/points_label/8590a6c8270375e34b5a812ecf553410.seg 02958343\n04379243/points/d46537f513283d6cdcfcef693e7ec696.pts 
04379243/expert_verified/points_label/d46537f513283d6cdcfcef693e7ec696.seg 04379243\n03001627/points/60a5795c905f3bb157f5033576317e1.pts 03001627/expert_verified/points_label/60a5795c905f3bb157f5033576317e1.seg 03001627\n02691156/points/8996445c6d2407c0fb5c1b0f759e2bc1.pts 02691156/expert_verified/points_label/8996445c6d2407c0fb5c1b0f759e2bc1.seg 02691156\n03624134/points/5e15d63317014f30ceea8802f71596b5.pts 03624134/expert_verified/points_label/5e15d63317014f30ceea8802f71596b5.seg 03624134\n03642806/points/9d48ab8c41174e60888cad7f6c0e6001.pts 03642806/expert_verified/points_label/9d48ab8c41174e60888cad7f6c0e6001.seg 03642806\n04379243/points/4cd35d6ec155d39633207e4c3ac155a4.pts 04379243/expert_verified/points_label/4cd35d6ec155d39633207e4c3ac155a4.seg 04379243\n04379243/points/884d2cc0d3aa8a72640e544a5d67c33a.pts 04379243/expert_verified/points_label/884d2cc0d3aa8a72640e544a5d67c33a.seg 04379243\n03001627/points/8191bad981637a71b356ab8b24c147.pts 03001627/expert_verified/points_label/8191bad981637a71b356ab8b24c147.seg 03001627\n03261776/points/de3b9b253e8f1aaf8b15c58b209760b5.pts 03261776/expert_verified/points_label/de3b9b253e8f1aaf8b15c58b209760b5.seg 03261776\n03636649/points/5b744ac897fe8bc557f40ff86fe708ff.pts 03636649/expert_verified/points_label/5b744ac897fe8bc557f40ff86fe708ff.seg 03636649\n04379243/points/6cd84ff61583805c85e2af9bf984f0b5.pts 04379243/expert_verified/points_label/6cd84ff61583805c85e2af9bf984f0b5.seg 04379243\n04379243/points/e65066d6b0b83719c3bd24f986301745.pts 04379243/expert_verified/points_label/e65066d6b0b83719c3bd24f986301745.seg 04379243\n04379243/points/f3efcbd9745da90619fb4103277a6b93.pts 04379243/expert_verified/points_label/f3efcbd9745da90619fb4103277a6b93.seg 04379243\n04379243/points/8ac4d93e65b9d58d9b937d98d58545fa.pts 04379243/expert_verified/points_label/8ac4d93e65b9d58d9b937d98d58545fa.seg 04379243\n03636649/points/b69c3a0a46b932e3d3c1fbbc2200e255.pts 
03636649/expert_verified/points_label/b69c3a0a46b932e3d3c1fbbc2200e255.seg 03636649\n03636649/points/5c7965b0835a1a241de9bf5a9c22fde.pts 03636649/expert_verified/points_label/5c7965b0835a1a241de9bf5a9c22fde.seg 03636649\n03001627/points/27ea798c55699b6d2c528d33bca1ac2.pts 03001627/expert_verified/points_label/27ea798c55699b6d2c528d33bca1ac2.seg 03001627\n03467517/points/dc623742d6d1518e19959b248340fafd.pts 03467517/expert_verified/points_label/dc623742d6d1518e19959b248340fafd.seg 03467517\n03001627/points/c6cb59e7645dd14d661ff085a0f14b7.pts 03001627/expert_verified/points_label/c6cb59e7645dd14d661ff085a0f14b7.seg 03001627\n03948459/points/a3679104af613021912d826efe946a9f.pts 03948459/expert_verified/points_label/a3679104af613021912d826efe946a9f.seg 03948459\n03467517/points/b6d2d35747549a5b93f0194265a9746c.pts 03467517/expert_verified/points_label/b6d2d35747549a5b93f0194265a9746c.seg 03467517\n02691156/points/2c1fff0653854166e7a636089598229.pts 02691156/expert_verified/points_label/2c1fff0653854166e7a636089598229.seg 02691156\n04379243/points/1040cd764facf6981190e285a2cbc9c.pts 04379243/expert_verified/points_label/1040cd764facf6981190e285a2cbc9c.seg 04379243\n03001627/points/485831d92925bf03f3d7c13662c10792.pts 03001627/expert_verified/points_label/485831d92925bf03f3d7c13662c10792.seg 03001627\n03636649/points/284986b4c72d624abd73284bc3c3cbac.pts 03636649/expert_verified/points_label/284986b4c72d624abd73284bc3c3cbac.seg 03636649\n02691156/points/4c008f39378be18bc0909d98a1ff2b4.pts 02691156/expert_verified/points_label/4c008f39378be18bc0909d98a1ff2b4.seg 02691156\n04379243/points/9611888ee0db1ecaf7d4d3ced798ad90.pts 04379243/expert_verified/points_label/9611888ee0db1ecaf7d4d3ced798ad90.seg 04379243\n03467517/points/12e30808350dd945f4b498e11fb60a4b.pts 03467517/expert_verified/points_label/12e30808350dd945f4b498e11fb60a4b.seg 03467517\n03467517/points/3243edb05f5e8803ac61a2f8346a8f.pts 03467517/expert_verified/points_label/3243edb05f5e8803ac61a2f8346a8f.seg 
03467517\n04379243/points/ec4675f62f6946118cbb8bac2032149c.pts 04379243/expert_verified/points_label/ec4675f62f6946118cbb8bac2032149c.seg 04379243\n04379243/points/eb00a4e8b33d257cad16260d4d73b56.pts 04379243/expert_verified/points_label/eb00a4e8b33d257cad16260d4d73b56.seg 04379243\n03001627/points/5607b02869c1f8a019fb4103277a6b93.pts 03001627/expert_verified/points_label/5607b02869c1f8a019fb4103277a6b93.seg 03001627\n03636649/points/d456beea1501f278f70220cd6be776f7.pts 03636649/expert_verified/points_label/d456beea1501f278f70220cd6be776f7.seg 03636649\n02691156/points/3feeb5f8ecbfcb4ba8f0518e94fcfb22.pts 02691156/expert_verified/points_label/3feeb5f8ecbfcb4ba8f0518e94fcfb22.seg 02691156\n04379243/points/fe130356df1977499c2a886f3b75f1ff.pts 04379243/expert_verified/points_label/fe130356df1977499c2a886f3b75f1ff.seg 04379243\n02958343/points/aa7f127bb8cd9db73755eb267a6f3b6b.pts 02958343/expert_verified/points_label/aa7f127bb8cd9db73755eb267a6f3b6b.seg 02958343\n04379243/points/84a3c87bba5a472af51f77a6d7299806.pts 04379243/expert_verified/points_label/84a3c87bba5a472af51f77a6d7299806.seg 04379243\n04099429/points/2de8ee55ff69502863098049d14fe32f.pts 04099429/expert_verified/points_label/2de8ee55ff69502863098049d14fe32f.seg 04099429\n03624134/points/539ff9b2a7a0329e759e4c424bcdaafe.pts 03624134/expert_verified/points_label/539ff9b2a7a0329e759e4c424bcdaafe.seg 03624134\n03948459/points/f3f6678898938575575e33965575974.pts 03948459/expert_verified/points_label/f3f6678898938575575e33965575974.seg 03948459\n04379243/points/c26dfd3453d81bf7788eb1f5e7ba6e7b.pts 04379243/expert_verified/points_label/c26dfd3453d81bf7788eb1f5e7ba6e7b.seg 04379243\n03001627/points/8117c55b8bbdbbc54c5c5c89015f1980.pts 03001627/expert_verified/points_label/8117c55b8bbdbbc54c5c5c89015f1980.seg 03001627\n03624134/points/40ccb8ac250e0ea5880595487ba7a30b.pts 03624134/expert_verified/points_label/40ccb8ac250e0ea5880595487ba7a30b.seg 03624134\n04379243/points/a0d2754011acdcc9d8a0e410093d6619.pts 
04379243/expert_verified/points_label/a0d2754011acdcc9d8a0e410093d6619.seg 04379243\n03790512/points/5bd41c7d3e158ac93ff4d2f5a7608a24.pts 03790512/expert_verified/points_label/5bd41c7d3e158ac93ff4d2f5a7608a24.seg 03790512\n04379243/points/8f440a7c0e2af79f3ed0ffd59feeec00.pts 04379243/expert_verified/points_label/8f440a7c0e2af79f3ed0ffd59feeec00.seg 04379243\n03001627/points/734ac9809aada180d18df440db206fb1.pts 03001627/expert_verified/points_label/734ac9809aada180d18df440db206fb1.seg 03001627\n03001627/points/54f33a7cb3621d5ced98cca8f0ccd5f7.pts 03001627/expert_verified/points_label/54f33a7cb3621d5ced98cca8f0ccd5f7.seg 03001627\n03001627/points/d274fc14092387c1e17e1cb731e2fa4f.pts 03001627/expert_verified/points_label/d274fc14092387c1e17e1cb731e2fa4f.seg 03001627\n03636649/points/6ccb43088eda061dbfc838749f053cf9.pts 03636649/expert_verified/points_label/6ccb43088eda061dbfc838749f053cf9.seg 03636649\n02773838/points/1b9ef45fefefa35ed13f430b2941481.pts 02773838/expert_verified/points_label/1b9ef45fefefa35ed13f430b2941481.seg 02773838\n03001627/points/35053caa62eea36c116cc4e115d5fd2.pts 03001627/expert_verified/points_label/35053caa62eea36c116cc4e115d5fd2.seg 03001627\n04379243/points/b893c20bfb5d718371a782a4379556c7.pts 04379243/expert_verified/points_label/b893c20bfb5d718371a782a4379556c7.seg 04379243\n04379243/points/1a5062241d7903076f88aa1b7f7cc6c6.pts 04379243/expert_verified/points_label/1a5062241d7903076f88aa1b7f7cc6c6.seg 04379243\n02958343/points/add26d8f4f91ba04c84b95bddf75b22d.pts 02958343/expert_verified/points_label/add26d8f4f91ba04c84b95bddf75b22d.seg 02958343\n03636649/points/f85f26c5a807b22312bea13341a54c3f.pts 03636649/expert_verified/points_label/f85f26c5a807b22312bea13341a54c3f.seg 03636649\n03001627/points/8a232028c2b2cfad43649af30eba8304.pts 03001627/expert_verified/points_label/8a232028c2b2cfad43649af30eba8304.seg 03001627\n03636649/points/3a5a0f4c78e17b284f0c4075db76b7c.pts 
03636649/expert_verified/points_label/3a5a0f4c78e17b284f0c4075db76b7c.seg 03636649\n04379243/points/df811f7a858750875634c21965ee6bab.pts 04379243/expert_verified/points_label/df811f7a858750875634c21965ee6bab.seg 04379243\n02691156/points/48706d323b9041d5438a95791ca4064d.pts 02691156/expert_verified/points_label/48706d323b9041d5438a95791ca4064d.seg 02691156\n03790512/points/170cfc531a4fd09fe6905ba5363784c3.pts 03790512/expert_verified/points_label/170cfc531a4fd09fe6905ba5363784c3.seg 03790512\n03467517/points/d4b2ddb52e8dcd3593f0194265a9746c.pts 03467517/expert_verified/points_label/d4b2ddb52e8dcd3593f0194265a9746c.seg 03467517\n03636649/points/2af78c0b040634e5881cd5e2fd8f0f3b.pts 03636649/expert_verified/points_label/2af78c0b040634e5881cd5e2fd8f0f3b.seg 03636649\n04379243/points/90cd6a48cf2789a9b430d97a45d5824.pts 04379243/expert_verified/points_label/90cd6a48cf2789a9b430d97a45d5824.seg 04379243\n03001627/points/43290694390ad1adfc735c9ceab0161a.pts 03001627/expert_verified/points_label/43290694390ad1adfc735c9ceab0161a.seg 03001627\n03636649/points/ed57181b9e7644a3f51f77a6d7299806.pts 03636649/expert_verified/points_label/ed57181b9e7644a3f51f77a6d7299806.seg 03636649\n03261776/points/a9661a8bb610d902957b6a4f3924d982.pts 03261776/expert_verified/points_label/a9661a8bb610d902957b6a4f3924d982.seg 03261776\n02691156/points/b31bbc50a0d3a4366cf1b4a8fc3914e.pts 02691156/expert_verified/points_label/b31bbc50a0d3a4366cf1b4a8fc3914e.seg 02691156\n03001627/points/cd5ad4afabaed0d3e762624dc3c8fa2a.pts 03001627/expert_verified/points_label/cd5ad4afabaed0d3e762624dc3c8fa2a.seg 03001627\n02958343/points/d2e1dc21db9b45df6436916a86a90ed7.pts 02958343/expert_verified/points_label/d2e1dc21db9b45df6436916a86a90ed7.seg 02958343\n02691156/points/de9e093bb17848c3b2bd4a92202f8700.pts 02691156/expert_verified/points_label/de9e093bb17848c3b2bd4a92202f8700.seg 02691156\n03467517/points/40cd2cafde62ff7ca24eeca91f583600.pts 
03467517/expert_verified/points_label/40cd2cafde62ff7ca24eeca91f583600.seg 03467517\n02958343/points/56e0fef0632aed0f1d27be7764701cfe.pts 02958343/expert_verified/points_label/56e0fef0632aed0f1d27be7764701cfe.seg 02958343\n04379243/points/a4d149a48607de3d92f4c88fd91c6b1b.pts 04379243/expert_verified/points_label/a4d149a48607de3d92f4c88fd91c6b1b.seg 04379243\n03636649/points/45f11cb4099c9c87bbc7a6acbd8f058b.pts 03636649/expert_verified/points_label/45f11cb4099c9c87bbc7a6acbd8f058b.seg 03636649\n04379243/points/3558aeeb9698722acf19858fd1963d10.pts 04379243/expert_verified/points_label/3558aeeb9698722acf19858fd1963d10.seg 04379243\n03636649/points/2a52bd01472ec7e1589ec67c01f5c1a7.pts 03636649/expert_verified/points_label/2a52bd01472ec7e1589ec67c01f5c1a7.seg 03636649\n03467517/points/58bb21c325f021088f01c8e793a6e062.pts 03467517/expert_verified/points_label/58bb21c325f021088f01c8e793a6e062.seg 03467517\n04379243/points/3997cdee934a9b238eb3bc6c6d15f9bf.pts 04379243/expert_verified/points_label/3997cdee934a9b238eb3bc6c6d15f9bf.seg 04379243\n03001627/points/c4cab2a416a4537e2871cc0b3cc1a485.pts 03001627/expert_verified/points_label/c4cab2a416a4537e2871cc0b3cc1a485.seg 03001627\n04379243/points/6aaa78b81528f4846674ff79eed6185a.pts 04379243/expert_verified/points_label/6aaa78b81528f4846674ff79eed6185a.seg 04379243\n03636649/points/fd5f6ab819910a66dc7f95a5a82e36f7.pts 03636649/expert_verified/points_label/fd5f6ab819910a66dc7f95a5a82e36f7.seg 03636649\n04379243/points/8e3303cae6cc104bad4f8ccb153c24e.pts 04379243/expert_verified/points_label/8e3303cae6cc104bad4f8ccb153c24e.seg 04379243\n03001627/points/2f0318b23d899a84493f17f4fe9b9eb2.pts 03001627/expert_verified/points_label/2f0318b23d899a84493f17f4fe9b9eb2.seg 03001627\n04379243/points/2406cdcd4c60c84132884c4c87a2e061.pts 04379243/expert_verified/points_label/2406cdcd4c60c84132884c4c87a2e061.seg 04379243\n03790512/points/55caf44a43f2c04d468bac13e007a6e9.pts 
03790512/expert_verified/points_label/55caf44a43f2c04d468bac13e007a6e9.seg 03790512\n03001627/points/ee665ce6679ac8cfb502ac2eb9128f9a.pts 03001627/expert_verified/points_label/ee665ce6679ac8cfb502ac2eb9128f9a.seg 03001627\n02691156/points/32edb6ba5788dc12d8ff6111270336a9.pts 02691156/expert_verified/points_label/32edb6ba5788dc12d8ff6111270336a9.seg 02691156\n03636649/points/d0fde1daedab10365240248232b90795.pts 03636649/expert_verified/points_label/d0fde1daedab10365240248232b90795.seg 03636649\n04379243/points/61b88b501933ebae8f7068c66465c4d6.pts 04379243/expert_verified/points_label/61b88b501933ebae8f7068c66465c4d6.seg 04379243\n03001627/points/93556cf01e19f638bf80985a99195eb8.pts 03001627/expert_verified/points_label/93556cf01e19f638bf80985a99195eb8.seg 03001627\n04379243/points/f3b8c91c5dd1cb6b8722573b29f0d6d8.pts 04379243/expert_verified/points_label/f3b8c91c5dd1cb6b8722573b29f0d6d8.seg 04379243\n04379243/points/eae36b396f6b5f97664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/eae36b396f6b5f97664b3b9b23ddfcbc.seg 04379243\n03624134/points/8bd5c4f395695ebdf40d02cc9d84a93a.pts 03624134/expert_verified/points_label/8bd5c4f395695ebdf40d02cc9d84a93a.seg 03624134\n03001627/points/8c81ff18e04584547f409062bafc8e2.pts 03001627/expert_verified/points_label/8c81ff18e04584547f409062bafc8e2.seg 03001627\n03001627/points/77e7660d71c6f3befebad4f49b26ec52.pts 03001627/expert_verified/points_label/77e7660d71c6f3befebad4f49b26ec52.seg 03001627\n03261776/points/bc404e52bfcd2038538cf6df9faa9b65.pts 03261776/expert_verified/points_label/bc404e52bfcd2038538cf6df9faa9b65.seg 03261776\n03001627/points/f09af71bebd4bea8a2651abaf391628e.pts 03001627/expert_verified/points_label/f09af71bebd4bea8a2651abaf391628e.seg 03001627\n03001627/points/8c8efbe62a1547942b90a0fb76278f6f.pts 03001627/expert_verified/points_label/8c8efbe62a1547942b90a0fb76278f6f.seg 03001627\n04379243/points/aed5697ff59e3d3035478a6869a3602d.pts 
04379243/expert_verified/points_label/aed5697ff59e3d3035478a6869a3602d.seg 04379243\n02691156/points/5ac00867c7d78b1690b1d6deb98feec6.pts 02691156/expert_verified/points_label/5ac00867c7d78b1690b1d6deb98feec6.seg 02691156\n03001627/points/c709aa613431c0538a653a9f65a410f6.pts 03001627/expert_verified/points_label/c709aa613431c0538a653a9f65a410f6.seg 03001627\n03624134/points/8facbe9d9f4da233d15a5887ec2183c9.pts 03624134/expert_verified/points_label/8facbe9d9f4da233d15a5887ec2183c9.seg 03624134\n03642806/points/dbcd5a88a9d4f1d7579cfe4420588034.pts 03642806/expert_verified/points_label/dbcd5a88a9d4f1d7579cfe4420588034.seg 03642806\n03636649/points/f29a94f969dd55ffc35131da26f8061a.pts 03636649/expert_verified/points_label/f29a94f969dd55ffc35131da26f8061a.seg 03636649\n02958343/points/5e014eb2bd03daab9fbe97de4a41d527.pts 02958343/expert_verified/points_label/5e014eb2bd03daab9fbe97de4a41d527.seg 02958343\n04379243/points/7105bd044f464358beedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/7105bd044f464358beedb4c8fd29e2d1.seg 04379243\n04379243/points/c827c0d4ef212f2b30cb1fe6fdc7d605.pts 04379243/expert_verified/points_label/c827c0d4ef212f2b30cb1fe6fdc7d605.seg 04379243\n04379243/points/19bc9c781df1da46824080f516909671.pts 04379243/expert_verified/points_label/19bc9c781df1da46824080f516909671.seg 04379243\n03001627/points/71b53a5f441d45b742b7e4c0136bdb7e.pts 03001627/expert_verified/points_label/71b53a5f441d45b742b7e4c0136bdb7e.seg 03001627\n02958343/points/e7e94f8dbbe8c1e9784da3853aae78cd.pts 02958343/expert_verified/points_label/e7e94f8dbbe8c1e9784da3853aae78cd.seg 02958343\n03790512/points/832c4a316c419228b37378f3c85478b4.pts 03790512/expert_verified/points_label/832c4a316c419228b37378f3c85478b4.seg 03790512\n02954340/points/c7122c44495a5ac6aceb0fa31f18f016.pts 02954340/expert_verified/points_label/c7122c44495a5ac6aceb0fa31f18f016.seg 02954340\n03001627/points/6b32d3a9198f8b03d1dcc55e36186e4e.pts 
03001627/expert_verified/points_label/6b32d3a9198f8b03d1dcc55e36186e4e.seg 03001627\n03636649/points/7893d0b50a7b6a768ec45924afa4ac91.pts 03636649/expert_verified/points_label/7893d0b50a7b6a768ec45924afa4ac91.seg 03636649\n02691156/points/befcb95d80e0e49119ba010ddb4974fe.pts 02691156/expert_verified/points_label/befcb95d80e0e49119ba010ddb4974fe.seg 02691156\n03001627/points/b70600293bab55c0593ebeeedbff73b.pts 03001627/expert_verified/points_label/b70600293bab55c0593ebeeedbff73b.seg 03001627\n02691156/points/7fedb48b457ee9f31629b98cc1b1b992.pts 02691156/expert_verified/points_label/7fedb48b457ee9f31629b98cc1b1b992.seg 02691156\n04099429/points/e04bda8655d9e606ebcdf982796b4fa.pts 04099429/expert_verified/points_label/e04bda8655d9e606ebcdf982796b4fa.seg 04099429\n04379243/points/25bcea593e4314c3436e6787c76ef3f0.pts 04379243/expert_verified/points_label/25bcea593e4314c3436e6787c76ef3f0.seg 04379243\n03636649/points/f3a9cc3060fd6b0e6e4f8fc909e0d34e.pts 03636649/expert_verified/points_label/f3a9cc3060fd6b0e6e4f8fc909e0d34e.seg 03636649\n04379243/points/516928532093f765bababe11fcea8796.pts 04379243/expert_verified/points_label/516928532093f765bababe11fcea8796.seg 04379243\n03001627/points/31569815c88e79de4458bae25a4e518a.pts 03001627/expert_verified/points_label/31569815c88e79de4458bae25a4e518a.seg 03001627\n03001627/points/a08ad49c281128ea53615647c93fc704.pts 03001627/expert_verified/points_label/a08ad49c281128ea53615647c93fc704.seg 03001627\n03642806/points/f5fc954736b06be15fd06491ae919ea3.pts 03642806/expert_verified/points_label/f5fc954736b06be15fd06491ae919ea3.seg 03642806\n04379243/points/15b495c101881d96e2367b9e27f16a71.pts 04379243/expert_verified/points_label/15b495c101881d96e2367b9e27f16a71.seg 04379243\n02691156/points/ebd991666f177f8f575bf8a4b14be4f4.pts 02691156/expert_verified/points_label/ebd991666f177f8f575bf8a4b14be4f4.seg 02691156\n02691156/points/f7739764eb1c78a053f370d353cea84.pts 
02691156/expert_verified/points_label/f7739764eb1c78a053f370d353cea84.seg 02691156\n03636649/points/8a6d770e6b4942c5ef3a2c64cef919d0.pts 03636649/expert_verified/points_label/8a6d770e6b4942c5ef3a2c64cef919d0.seg 03636649\n04379243/points/2fcc875b28c5557dcfcef693e7ec696.pts 04379243/expert_verified/points_label/2fcc875b28c5557dcfcef693e7ec696.seg 04379243\n03636649/points/896abd405c79547086485c798787f66b.pts 03636649/expert_verified/points_label/896abd405c79547086485c798787f66b.seg 03636649\n02691156/points/356a633ea047c549ca8607f540cc62ba.pts 02691156/expert_verified/points_label/356a633ea047c549ca8607f540cc62ba.seg 02691156\n03001627/points/c983108db7fcfa3619fb4103277a6b93.pts 03001627/expert_verified/points_label/c983108db7fcfa3619fb4103277a6b93.seg 03001627\n04225987/points/97f85bc59f09a9f455c660e6cd8e92b.pts 04225987/expert_verified/points_label/97f85bc59f09a9f455c660e6cd8e92b.seg 04225987\n03636649/points/69a708be7245f4c9786e8e92cc08146.pts 03636649/expert_verified/points_label/69a708be7245f4c9786e8e92cc08146.seg 03636649\n04379243/points/f71296c0a7e93ec282db9fca4b68095.pts 04379243/expert_verified/points_label/f71296c0a7e93ec282db9fca4b68095.seg 04379243\n02691156/points/33faf711ed54a4d3db22b838c125a50b.pts 02691156/expert_verified/points_label/33faf711ed54a4d3db22b838c125a50b.seg 02691156\n03642806/points/5d544ee4b094c6606436916a86a90ed7.pts 03642806/expert_verified/points_label/5d544ee4b094c6606436916a86a90ed7.seg 03642806\n02691156/points/a0d63ee7fd87f93619ba010ddb4974fe.pts 02691156/expert_verified/points_label/a0d63ee7fd87f93619ba010ddb4974fe.seg 02691156\n03001627/points/e30b412be565a1026efe57da6d3d385e.pts 03001627/expert_verified/points_label/e30b412be565a1026efe57da6d3d385e.seg 03001627\n04379243/points/fe5e1df0653804d6ce4670b160b81e9.pts 04379243/expert_verified/points_label/fe5e1df0653804d6ce4670b160b81e9.seg 04379243\n02691156/points/fd41d04f1aabbaea3fddedb0bf24c68a.pts 02691156/expert_verified/points_label/fd41d04f1aabbaea3fddedb0bf24c68a.seg 
02691156\n03624134/points/e79481b2fde3a3ab340fbf70397ab69a.pts 03624134/expert_verified/points_label/e79481b2fde3a3ab340fbf70397ab69a.seg 03624134\n04379243/points/d06d27bc9ad1faabd7bf6fb68df7f786.pts 04379243/expert_verified/points_label/d06d27bc9ad1faabd7bf6fb68df7f786.seg 04379243\n03001627/points/e4931ffa06d7b05cb04cb542e2c50eb4.pts 03001627/expert_verified/points_label/e4931ffa06d7b05cb04cb542e2c50eb4.seg 03001627\n03001627/points/d4b5f8edc72b4676f4175ee3a177350a.pts 03001627/expert_verified/points_label/d4b5f8edc72b4676f4175ee3a177350a.seg 03001627\n03636649/points/4f16fffbe480b835276206fae5d3c473.pts 03636649/expert_verified/points_label/4f16fffbe480b835276206fae5d3c473.seg 03636649\n03001627/points/8ade914cd21b6e49656f29b05c68d39f.pts 03001627/expert_verified/points_label/8ade914cd21b6e49656f29b05c68d39f.seg 03001627\n03001627/points/1e304b967d5253d5dd079f8cece51712.pts 03001627/expert_verified/points_label/1e304b967d5253d5dd079f8cece51712.seg 03001627\n04379243/points/6d0ef6312f8af87a53e946fb2184f0c4.pts 04379243/expert_verified/points_label/6d0ef6312f8af87a53e946fb2184f0c4.seg 04379243\n03948459/points/79c0cac016998c7cf7ba4a82f8032357.pts 03948459/expert_verified/points_label/79c0cac016998c7cf7ba4a82f8032357.seg 03948459\n03642806/points/b51683c6285fa0f69067ac5c9d4ee692.pts 03642806/expert_verified/points_label/b51683c6285fa0f69067ac5c9d4ee692.seg 03642806\n04379243/points/93cdfd14889492dd91a4fd87fee47737.pts 04379243/expert_verified/points_label/93cdfd14889492dd91a4fd87fee47737.seg 04379243\n03636649/points/da8141b45da808199a06a7de97b096dc.pts 03636649/expert_verified/points_label/da8141b45da808199a06a7de97b096dc.seg 03636649\n04379243/points/7d22cd72bf2762b19a4b266ed4d507c9.pts 04379243/expert_verified/points_label/7d22cd72bf2762b19a4b266ed4d507c9.seg 04379243\n04225987/points/aa886bed91a13113d5498a74ca9ca78b.pts 04225987/expert_verified/points_label/aa886bed91a13113d5498a74ca9ca78b.seg 04225987\n04379243/points/55547d2fae0e3dc21705bfd3afcd10e.pts 
04379243/expert_verified/points_label/55547d2fae0e3dc21705bfd3afcd10e.seg 04379243\n04379243/points/222c56ff9cddbaf4139eb23f7c8036f.pts 04379243/expert_verified/points_label/222c56ff9cddbaf4139eb23f7c8036f.seg 04379243\n03636649/points/292f1f97a543d735dedf3c967c85981a.pts 03636649/expert_verified/points_label/292f1f97a543d735dedf3c967c85981a.seg 03636649\n04379243/points/9e2318099f77d3df3527ecfeb345775f.pts 04379243/expert_verified/points_label/9e2318099f77d3df3527ecfeb345775f.seg 04379243\n04379243/points/6ace903899706a5819fb4103277a6b93.pts 04379243/expert_verified/points_label/6ace903899706a5819fb4103277a6b93.seg 04379243\n03636649/points/c080aefc6cbff8c81185ac82ed4da80d.pts 03636649/expert_verified/points_label/c080aefc6cbff8c81185ac82ed4da80d.seg 03636649\n03790512/points/9dd4ae1c34af4766b4f2746c8140d6d6.pts 03790512/expert_verified/points_label/9dd4ae1c34af4766b4f2746c8140d6d6.seg 03790512\n03001627/points/e199b1f6a70c9f56df44d20a516c07b3.pts 03001627/expert_verified/points_label/e199b1f6a70c9f56df44d20a516c07b3.seg 03001627\n04379243/points/8129d4c51abc3356bababe11fcea8796.pts 04379243/expert_verified/points_label/8129d4c51abc3356bababe11fcea8796.seg 04379243\n03001627/points/c9d8573a048c0e959c0ca344f487323e.pts 03001627/expert_verified/points_label/c9d8573a048c0e959c0ca344f487323e.seg 03001627\n04379243/points/25eefc5a3c7b30e1f103d473de33521a.pts 04379243/expert_verified/points_label/25eefc5a3c7b30e1f103d473de33521a.seg 04379243\n03624134/points/c20cca071ea58e3ef2c542131520d62e.pts 03624134/expert_verified/points_label/c20cca071ea58e3ef2c542131520d62e.seg 03624134\n03001627/points/c86cfe147872280463626070a93463cf.pts 03001627/expert_verified/points_label/c86cfe147872280463626070a93463cf.seg 03001627\n03001627/points/3853339519aca1bdfcd4910413c446d9.pts 03001627/expert_verified/points_label/3853339519aca1bdfcd4910413c446d9.seg 03001627\n03001627/points/8cb44a50906b827615e7ec87bf4cc5ab.pts 
03001627/expert_verified/points_label/8cb44a50906b827615e7ec87bf4cc5ab.seg 03001627\n02691156/points/fd9f1cdaa381599bca8607f540cc62ba.pts 02691156/expert_verified/points_label/fd9f1cdaa381599bca8607f540cc62ba.seg 02691156\n03001627/points/80dabf9ddbdc92f681806e3880250dff.pts 03001627/expert_verified/points_label/80dabf9ddbdc92f681806e3880250dff.seg 03001627\n04379243/points/5919dea71f3bcb071d54ab02e78bef2.pts 04379243/expert_verified/points_label/5919dea71f3bcb071d54ab02e78bef2.seg 04379243\n03636649/points/292ba732e002629e68c2f5eb1dd4dfaa.pts 03636649/expert_verified/points_label/292ba732e002629e68c2f5eb1dd4dfaa.seg 03636649\n04379243/points/5d77e8f6ad3741a0c30ab36bf7b0552.pts 04379243/expert_verified/points_label/5d77e8f6ad3741a0c30ab36bf7b0552.seg 04379243\n03467517/points/21a517abc4729e6e352e5d4d2615db5b.pts 03467517/expert_verified/points_label/21a517abc4729e6e352e5d4d2615db5b.seg 03467517\n03467517/points/6554f6429eb7b67585e3c97721f726e4.pts 03467517/expert_verified/points_label/6554f6429eb7b67585e3c97721f726e4.seg 03467517\n02958343/points/f84ba2039d0a4ec5afe717997470b28d.pts 02958343/expert_verified/points_label/f84ba2039d0a4ec5afe717997470b28d.seg 02958343\n02691156/points/29fd29045703ff18b4a8b7176ed97248.pts 02691156/expert_verified/points_label/29fd29045703ff18b4a8b7176ed97248.seg 02691156\n03467517/points/a7f449a1f2cd1f1693f0194265a9746c.pts 03467517/expert_verified/points_label/a7f449a1f2cd1f1693f0194265a9746c.seg 03467517\n03790512/points/7fcee59a33976221a88e8cb97b773125.pts 03790512/expert_verified/points_label/7fcee59a33976221a88e8cb97b773125.seg 03790512\n04099429/points/2407c2684ee757e89c4176ab56cb612.pts 04099429/expert_verified/points_label/2407c2684ee757e89c4176ab56cb612.seg 04099429\n04379243/points/f621e2ad900ad48535836c728d324152.pts 04379243/expert_verified/points_label/f621e2ad900ad48535836c728d324152.seg 04379243\n03001627/points/9a54daea9071a536bf80985a99195eb8.pts 
03001627/expert_verified/points_label/9a54daea9071a536bf80985a99195eb8.seg 03001627\n03001627/points/fd9e909b082d8175d319c38340319ae4.pts 03001627/expert_verified/points_label/fd9e909b082d8175d319c38340319ae4.seg 03001627\n03001627/points/a8dd9990ecd74c45435897641a7ee684.pts 03001627/expert_verified/points_label/a8dd9990ecd74c45435897641a7ee684.seg 03001627\n03636649/points/c6424950ca9447627d8864caa856253b.pts 03636649/expert_verified/points_label/c6424950ca9447627d8864caa856253b.seg 03636649\n03948459/points/7f3ec97cfaea31137504cc74f24f0eee.pts 03948459/expert_verified/points_label/7f3ec97cfaea31137504cc74f24f0eee.seg 03948459\n02691156/points/43abe330362164e99be82ec29531a70f.pts 02691156/expert_verified/points_label/43abe330362164e99be82ec29531a70f.seg 02691156\n03001627/points/499c4b519c708ae84cd08aa7c510fb85.pts 03001627/expert_verified/points_label/499c4b519c708ae84cd08aa7c510fb85.seg 03001627\n04379243/points/4c7931492b41f960d50eef20e0914a48.pts 04379243/expert_verified/points_label/4c7931492b41f960d50eef20e0914a48.seg 04379243\n03001627/points/3f36e261cc87648ac3bd24f986301745.pts 03001627/expert_verified/points_label/3f36e261cc87648ac3bd24f986301745.seg 03001627\n03001627/points/a09a88c11d0b27368821ad3452f1c8c9.pts 03001627/expert_verified/points_label/a09a88c11d0b27368821ad3452f1c8c9.seg 03001627\n04379243/points/89cc879f005dcf50f1f50f6a678fb494.pts 04379243/expert_verified/points_label/89cc879f005dcf50f1f50f6a678fb494.seg 04379243\n02958343/points/d34b0494fc4d756ab927782fc69a1fbb.pts 02958343/expert_verified/points_label/d34b0494fc4d756ab927782fc69a1fbb.seg 02958343\n02958343/points/705840df46a582e2ac826a3c82da491.pts 02958343/expert_verified/points_label/705840df46a582e2ac826a3c82da491.seg 02958343\n02691156/points/74a5f937c22aa08a3e70653c1b3170b5.pts 02691156/expert_verified/points_label/74a5f937c22aa08a3e70653c1b3170b5.seg 02691156\n03948459/points/a0a1633186261a031274aa253a241db2.pts 
03948459/expert_verified/points_label/a0a1633186261a031274aa253a241db2.seg 03948459\n03001627/points/2de04227fae28e70b6eb6f056d511fe1.pts 03001627/expert_verified/points_label/2de04227fae28e70b6eb6f056d511fe1.seg 03001627\n02691156/points/1e9ef313876bfba7d02c6d35cc802839.pts 02691156/expert_verified/points_label/1e9ef313876bfba7d02c6d35cc802839.seg 02691156\n03636649/points/e99793b871d27333d42b9650f19dd425.pts 03636649/expert_verified/points_label/e99793b871d27333d42b9650f19dd425.seg 03636649\n03001627/points/7228d43e00af4c1e2746490e2236e9a8.pts 03001627/expert_verified/points_label/7228d43e00af4c1e2746490e2236e9a8.seg 03001627\n03636649/points/66111d2c7a23b0feb404555b84577afb.pts 03636649/expert_verified/points_label/66111d2c7a23b0feb404555b84577afb.seg 03636649\n03001627/points/2499541ace317cbb8cb5d9909aeb1309.pts 03001627/expert_verified/points_label/2499541ace317cbb8cb5d9909aeb1309.seg 03001627\n04379243/points/d151d9f45d8b14536cd661fb5fd95741.pts 04379243/expert_verified/points_label/d151d9f45d8b14536cd661fb5fd95741.seg 04379243\n03001627/points/ea7be2b97e78d5b35a4480134e0cdd21.pts 03001627/expert_verified/points_label/ea7be2b97e78d5b35a4480134e0cdd21.seg 03001627\n02958343/points/9c35f00f81110738783854950b26f0d3.pts 02958343/expert_verified/points_label/9c35f00f81110738783854950b26f0d3.seg 02958343\n03001627/points/e30bd575bbd6c68c9710e093c764abec.pts 03001627/expert_verified/points_label/e30bd575bbd6c68c9710e093c764abec.seg 03001627\n03790512/points/61b17f12bec91d057395d58407f193ba.pts 03790512/expert_verified/points_label/61b17f12bec91d057395d58407f193ba.seg 03790512\n04379243/points/cd895c35fff495cdd0b93fa304cfa755.pts 04379243/expert_verified/points_label/cd895c35fff495cdd0b93fa304cfa755.seg 04379243\n02958343/points/b70d970f8020c25dd141480e2c154d3.pts 02958343/expert_verified/points_label/b70d970f8020c25dd141480e2c154d3.seg 02958343\n04379243/points/2642d805c53e243d629f73b53bd7a234.pts 
04379243/expert_verified/points_label/2642d805c53e243d629f73b53bd7a234.seg 04379243\n04379243/points/1bce2f4937d36446a32c566d71fa585c.pts 04379243/expert_verified/points_label/1bce2f4937d36446a32c566d71fa585c.seg 04379243\n04379243/points/7c1bcea89b0037a2d67bd369ec608dad.pts 04379243/expert_verified/points_label/7c1bcea89b0037a2d67bd369ec608dad.seg 04379243\n04379243/points/3154c61c595bd600e56ddd87eb888f65.pts 04379243/expert_verified/points_label/3154c61c595bd600e56ddd87eb888f65.seg 04379243\n03001627/points/7a1de77ca204eaf28a514cac7cb18507.pts 03001627/expert_verified/points_label/7a1de77ca204eaf28a514cac7cb18507.seg 03001627\n04379243/points/77ecc55547840f06d42b9650f19dd425.pts 04379243/expert_verified/points_label/77ecc55547840f06d42b9650f19dd425.seg 04379243\n02691156/points/9a8aecab136ce50db7ef47444625afb2.pts 02691156/expert_verified/points_label/9a8aecab136ce50db7ef47444625afb2.seg 02691156\n02958343/points/24866846d728484e1d1a964dea8a7aab.pts 02958343/expert_verified/points_label/24866846d728484e1d1a964dea8a7aab.seg 02958343\n04099429/points/9b75297c580ff937b61ce5beb9f92726.pts 04099429/expert_verified/points_label/9b75297c580ff937b61ce5beb9f92726.seg 04099429\n04225987/points/90dbe261a4d56dcf1082f2ea630bf69e.pts 04225987/expert_verified/points_label/90dbe261a4d56dcf1082f2ea630bf69e.seg 04225987\n03001627/points/81b27636162e148bb3fb065fa3089331.pts 03001627/expert_verified/points_label/81b27636162e148bb3fb065fa3089331.seg 03001627\n03642806/points/66d47a84a3d522dc9311bf79d4774e73.pts 03642806/expert_verified/points_label/66d47a84a3d522dc9311bf79d4774e73.seg 03642806\n03001627/points/2a05ae00b701fda36567137a59cb1a56.pts 03001627/expert_verified/points_label/2a05ae00b701fda36567137a59cb1a56.seg 03001627\n04379243/points/79df23303a3192c1cdf1dfd78f33901b.pts 04379243/expert_verified/points_label/79df23303a3192c1cdf1dfd78f33901b.seg 04379243\n04379243/points/bf17779bec6abccf161bc5243aab8ea4.pts 
04379243/expert_verified/points_label/bf17779bec6abccf161bc5243aab8ea4.seg 04379243\n03001627/points/ece1a921c1bfd44947f5e245ee376525.pts 03001627/expert_verified/points_label/ece1a921c1bfd44947f5e245ee376525.seg 03001627\n03636649/points/15c51ecb58bf304fef3a2c64cef919d0.pts 03636649/expert_verified/points_label/15c51ecb58bf304fef3a2c64cef919d0.seg 03636649\n04379243/points/5d93e285b2006520ab610b0c94236463.pts 04379243/expert_verified/points_label/5d93e285b2006520ab610b0c94236463.seg 04379243\n03636649/points/b2d5929e66044aeac7db9c21ccfbc4a1.pts 03636649/expert_verified/points_label/b2d5929e66044aeac7db9c21ccfbc4a1.seg 03636649\n04379243/points/f3164e1781a296597f6f00dc967c386.pts 04379243/expert_verified/points_label/f3164e1781a296597f6f00dc967c386.seg 04379243\n04379243/points/798a07e42d76013582695d8aaeacccc5.pts 04379243/expert_verified/points_label/798a07e42d76013582695d8aaeacccc5.seg 04379243\n03948459/points/cc014e78b5cd9e7ed957eaf7f4edb205.pts 03948459/expert_verified/points_label/cc014e78b5cd9e7ed957eaf7f4edb205.seg 03948459\n03636649/points/b3a98808fb1ccd892a5041fadf25a502.pts 03636649/expert_verified/points_label/b3a98808fb1ccd892a5041fadf25a502.seg 03636649\n04379243/points/9472c006a5d35b9ab606ece4189242ff.pts 04379243/expert_verified/points_label/9472c006a5d35b9ab606ece4189242ff.seg 04379243\n03001627/points/3f04adffb69b5ebee95cd0dc8c2f0e83.pts 03001627/expert_verified/points_label/3f04adffb69b5ebee95cd0dc8c2f0e83.seg 03001627\n03001627/points/26aa22bd1da8b8c5b1a5c6ecbc81953c.pts 03001627/expert_verified/points_label/26aa22bd1da8b8c5b1a5c6ecbc81953c.seg 03001627\n03001627/points/f68ecc9ec512915f36d8dd30a594b2af.pts 03001627/expert_verified/points_label/f68ecc9ec512915f36d8dd30a594b2af.seg 03001627\n03642806/points/6489453e322cdb53f9f3c6290096f50f.pts 03642806/expert_verified/points_label/6489453e322cdb53f9f3c6290096f50f.seg 03642806\n03001627/points/c53fa6829ec9a947d13b7d13ee32497.pts 
03001627/expert_verified/points_label/c53fa6829ec9a947d13b7d13ee32497.seg 03001627\n04379243/points/7f1bd688960e2c1b97f2016c3d6097c9.pts 04379243/expert_verified/points_label/7f1bd688960e2c1b97f2016c3d6097c9.seg 04379243\n02958343/points/edb2ab8a1d7e20f36436916a86a90ed7.pts 02958343/expert_verified/points_label/edb2ab8a1d7e20f36436916a86a90ed7.seg 02958343\n04379243/points/159a2a760327ca5bababe11fcea8796.pts 04379243/expert_verified/points_label/159a2a760327ca5bababe11fcea8796.seg 04379243\n02958343/points/988108a7536d686824065b218dc1b5b9.pts 02958343/expert_verified/points_label/988108a7536d686824065b218dc1b5b9.seg 02958343\n03636649/points/c695408a86062c4d242ea50288b3f64.pts 03636649/expert_verified/points_label/c695408a86062c4d242ea50288b3f64.seg 03636649\n04379243/points/2e7cb2cbfbbb4d002ee19ebe356c2dcb.pts 04379243/expert_verified/points_label/2e7cb2cbfbbb4d002ee19ebe356c2dcb.seg 04379243\n02691156/points/3d23703a618ce7df1e569ed4e4cfe84.pts 02691156/expert_verified/points_label/3d23703a618ce7df1e569ed4e4cfe84.seg 02691156\n03636649/points/97b7d9aabe38f91df11c97be803c47d.pts 03636649/expert_verified/points_label/97b7d9aabe38f91df11c97be803c47d.seg 03636649\n04379243/points/5be1589df948b227c955e5ed03ef3a2f.pts 04379243/expert_verified/points_label/5be1589df948b227c955e5ed03ef3a2f.seg 04379243\n04379243/points/8ea7ca2c8b48eb68ab610b0c94236463.pts 04379243/expert_verified/points_label/8ea7ca2c8b48eb68ab610b0c94236463.seg 04379243\n02958343/points/eb56379e243b0e2090da6b3e2ed8b49d.pts 02958343/expert_verified/points_label/eb56379e243b0e2090da6b3e2ed8b49d.seg 02958343\n03001627/points/cc30a723aeba69a139e0f39f5249b0ba.pts 03001627/expert_verified/points_label/cc30a723aeba69a139e0f39f5249b0ba.seg 03001627\n03001627/points/ff8efd10f5e6c5c7c6c0380e62f2644.pts 03001627/expert_verified/points_label/ff8efd10f5e6c5c7c6c0380e62f2644.seg 03001627\n02691156/points/d0ee4253d406b3f05e9e2656aff7dd5b.pts 02691156/expert_verified/points_label/d0ee4253d406b3f05e9e2656aff7dd5b.seg 
02691156\n02691156/points/9afe827a622d8ca28699933784576e73.pts 02691156/expert_verified/points_label/9afe827a622d8ca28699933784576e73.seg 02691156\n03467517/points/d82fc6db200cdf6ea24eeca91f583600.pts 03467517/expert_verified/points_label/d82fc6db200cdf6ea24eeca91f583600.seg 03467517\n03642806/points/6123321e3af0b6328204b359ccd3949e.pts 03642806/expert_verified/points_label/6123321e3af0b6328204b359ccd3949e.seg 03642806\n03636649/points/e15defcb3dd448094fffb007974c9976.pts 03636649/expert_verified/points_label/e15defcb3dd448094fffb007974c9976.seg 03636649\n03001627/points/c7fe45610d10cb108ad3a7d07aac2767.pts 03001627/expert_verified/points_label/c7fe45610d10cb108ad3a7d07aac2767.seg 03001627\n04379243/points/bfaa1c23d2622422ad16260d4d73b56.pts 04379243/expert_verified/points_label/bfaa1c23d2622422ad16260d4d73b56.seg 04379243\n04379243/points/8e3fc5f1f8e9658ce8b2b8dc0c816caf.pts 04379243/expert_verified/points_label/8e3fc5f1f8e9658ce8b2b8dc0c816caf.seg 04379243\n03467517/points/1a96f73d0929bd4793f0194265a9746c.pts 03467517/expert_verified/points_label/1a96f73d0929bd4793f0194265a9746c.seg 03467517\n02691156/points/86b11ae736659136ca8607f540cc62ba.pts 02691156/expert_verified/points_label/86b11ae736659136ca8607f540cc62ba.seg 02691156\n04379243/points/4c4c719ac4b61d8f812c9aaa38f9a422.pts 04379243/expert_verified/points_label/4c4c719ac4b61d8f812c9aaa38f9a422.seg 04379243\n04379243/points/443eca86041e57ab1e99b149cff6a230.pts 04379243/expert_verified/points_label/443eca86041e57ab1e99b149cff6a230.seg 04379243\n03948459/points/6b2d89a7f2b173f0d9deb3f829cc2475.pts 03948459/expert_verified/points_label/6b2d89a7f2b173f0d9deb3f829cc2475.seg 03948459\n04379243/points/8d84471c4af977d917271868b642acd3.pts 04379243/expert_verified/points_label/8d84471c4af977d917271868b642acd3.seg 04379243\n03636649/points/b78bef16d4f44844931e98da3a93e73e.pts 03636649/expert_verified/points_label/b78bef16d4f44844931e98da3a93e73e.seg 03636649\n03636649/points/29985e44b73051d923500a5b036df62e.pts 
03636649/expert_verified/points_label/29985e44b73051d923500a5b036df62e.seg 03636649\n03642806/points/4f3575df3821e08c466909b3e9553909.pts 03642806/expert_verified/points_label/4f3575df3821e08c466909b3e9553909.seg 03642806\n03001627/points/3774a2b8c71e70b9f18a36d57b7cced0.pts 03001627/expert_verified/points_label/3774a2b8c71e70b9f18a36d57b7cced0.seg 03001627\n03001627/points/3ea40a75f22515557dcf230d8b7d162e.pts 03001627/expert_verified/points_label/3ea40a75f22515557dcf230d8b7d162e.seg 03001627\n03001627/points/33c4f94e97c3fefd19fb4103277a6b93.pts 03001627/expert_verified/points_label/33c4f94e97c3fefd19fb4103277a6b93.seg 03001627\n03636649/points/d7760d5f9e1e6a622cd2160e449d45ae.pts 03636649/expert_verified/points_label/d7760d5f9e1e6a622cd2160e449d45ae.seg 03636649\n02954340/points/7f9ddfff396634f17790cd6f6e8952aa.pts 02954340/expert_verified/points_label/7f9ddfff396634f17790cd6f6e8952aa.seg 02954340\n03001627/points/5e706e87ca60bd19ecb01bc908e8cea6.pts 03001627/expert_verified/points_label/5e706e87ca60bd19ecb01bc908e8cea6.seg 03001627\n04379243/points/90c19c729cabdb864b8710a3469971b1.pts 04379243/expert_verified/points_label/90c19c729cabdb864b8710a3469971b1.seg 04379243\n02691156/points/d08471df3e76602427743256ca3834f.pts 02691156/expert_verified/points_label/d08471df3e76602427743256ca3834f.seg 02691156\n02958343/points/67c229c70e64a25e69c2e0a91b39f742.pts 02958343/expert_verified/points_label/67c229c70e64a25e69c2e0a91b39f742.seg 02958343\n04379243/points/1011e1c9812b84d2a9ed7bb5b55809f8.pts 04379243/expert_verified/points_label/1011e1c9812b84d2a9ed7bb5b55809f8.seg 04379243\n03636649/points/3e2d51c40b37c9c086052e834fbd2c4a.pts 03636649/expert_verified/points_label/3e2d51c40b37c9c086052e834fbd2c4a.seg 03636649\n03001627/points/6b385a32489bab4abbc7a6acbd8f058b.pts 03001627/expert_verified/points_label/6b385a32489bab4abbc7a6acbd8f058b.seg 03001627\n03001627/points/61d29e8133da0b58d1fd43e2bf80195.pts 
03001627/expert_verified/points_label/61d29e8133da0b58d1fd43e2bf80195.seg 03001627\n04379243/points/d5f2968e4b7254ccf4104961857ca9c.pts 04379243/expert_verified/points_label/d5f2968e4b7254ccf4104961857ca9c.seg 04379243\n04379243/points/30c9865cfc4294a7ad16260d4d73b56.pts 04379243/expert_verified/points_label/30c9865cfc4294a7ad16260d4d73b56.seg 04379243\n03001627/points/76919a456a23b9779368d1198f406e7.pts 03001627/expert_verified/points_label/76919a456a23b9779368d1198f406e7.seg 03001627\n03001627/points/c12da8acb2c7973597e755dddca14449.pts 03001627/expert_verified/points_label/c12da8acb2c7973597e755dddca14449.seg 03001627\n02958343/points/a5dcd1196a1ffa9739f20966eb25504f.pts 02958343/expert_verified/points_label/a5dcd1196a1ffa9739f20966eb25504f.seg 02958343\n02691156/points/1deb997079e0b3cd6c1cd53dbc9f7b8e.pts 02691156/expert_verified/points_label/1deb997079e0b3cd6c1cd53dbc9f7b8e.seg 02691156\n03636649/points/afb7cc3bbc3595a4e9b3dff83c7ff715.pts 03636649/expert_verified/points_label/afb7cc3bbc3595a4e9b3dff83c7ff715.seg 03636649\n03636649/points/b4aee889d5e2a826f6747912091f1965.pts 03636649/expert_verified/points_label/b4aee889d5e2a826f6747912091f1965.seg 03636649\n03636649/points/ea71ba1d8d8c8e5888a1de3dc61bfeef.pts 03636649/expert_verified/points_label/ea71ba1d8d8c8e5888a1de3dc61bfeef.seg 03636649\n02958343/points/b0c2225ab347e28f1a48cf85d161a723.pts 02958343/expert_verified/points_label/b0c2225ab347e28f1a48cf85d161a723.seg 02958343\n03001627/points/1ab8a3b55c14a7b27eaeab1f0c9120b7.pts 03001627/expert_verified/points_label/1ab8a3b55c14a7b27eaeab1f0c9120b7.seg 03001627\n03261776/points/c6d19db35f69bae7b6d9c2cee7f2f72b.pts 03261776/expert_verified/points_label/c6d19db35f69bae7b6d9c2cee7f2f72b.seg 03261776\n03001627/points/6d6e634ff34bd350c511e6b9b3b344f3.pts 03001627/expert_verified/points_label/6d6e634ff34bd350c511e6b9b3b344f3.seg 03001627\n02691156/points/ce682d7a2bbf77b6fc4b92d3d335214a.pts 02691156/expert_verified/points_label/ce682d7a2bbf77b6fc4b92d3d335214a.seg 
02691156\n03261776/points/943048e64cc2bc980a070963925e308.pts 03261776/expert_verified/points_label/943048e64cc2bc980a070963925e308.seg 03261776\n03642806/points/5a63c5f29f0bc0eb12d8efb2f101da03.pts 03642806/expert_verified/points_label/5a63c5f29f0bc0eb12d8efb2f101da03.seg 03642806\n04379243/points/19678fdb9bc926505e4b35ff1ea95f37.pts 04379243/expert_verified/points_label/19678fdb9bc926505e4b35ff1ea95f37.seg 04379243\n02958343/points/52f2a2472411fe2e6b418c7d9fedcaa9.pts 02958343/expert_verified/points_label/52f2a2472411fe2e6b418c7d9fedcaa9.seg 02958343\n03001627/points/1ee92a9d78cccbda98d2e7dbe701ca48.pts 03001627/expert_verified/points_label/1ee92a9d78cccbda98d2e7dbe701ca48.seg 03001627\n03001627/points/795f38ce5d8519938077cafed2bb8242.pts 03001627/expert_verified/points_label/795f38ce5d8519938077cafed2bb8242.seg 03001627\n03001627/points/5e5121cc58c4fea78ce66f12ba927a2b.pts 03001627/expert_verified/points_label/5e5121cc58c4fea78ce66f12ba927a2b.seg 03001627\n03001627/points/b998016472e9dd7a9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/b998016472e9dd7a9b9f2eb77f5e247e.seg 03001627\n04379243/points/30b506e5e1fc282afdfcfddf24fb29ec.pts 04379243/expert_verified/points_label/30b506e5e1fc282afdfcfddf24fb29ec.seg 04379243\n03624134/points/bcd7ed830358dbd6d58ea69ee1ced10e.pts 03624134/expert_verified/points_label/bcd7ed830358dbd6d58ea69ee1ced10e.seg 03624134\n03001627/points/40d202afdcc49c6d35836c728d324152.pts 03001627/expert_verified/points_label/40d202afdcc49c6d35836c728d324152.seg 03001627\n03467517/points/fdb74c27462dfd837c481698bd5233b4.pts 03467517/expert_verified/points_label/fdb74c27462dfd837c481698bd5233b4.seg 03467517\n02691156/points/dc7c5d12854b9467b96212c8f6cd06e.pts 02691156/expert_verified/points_label/dc7c5d12854b9467b96212c8f6cd06e.seg 02691156\n02691156/points/48e9c61de4db838d84b83051fa0ae5d2.pts 02691156/expert_verified/points_label/48e9c61de4db838d84b83051fa0ae5d2.seg 02691156\n04379243/points/d187561a6b0cbd0acaed5ce7390f30b7.pts 
04379243/expert_verified/points_label/d187561a6b0cbd0acaed5ce7390f30b7.seg 04379243\n04379243/points/ae9e04d050f5cba1492d9da2668ec34c.pts 04379243/expert_verified/points_label/ae9e04d050f5cba1492d9da2668ec34c.seg 04379243\n04379243/points/72c884f3b9b9119966f379f51753f72b.pts 04379243/expert_verified/points_label/72c884f3b9b9119966f379f51753f72b.seg 04379243\n02691156/points/917694a71164f2148e8405d6c51a908.pts 02691156/expert_verified/points_label/917694a71164f2148e8405d6c51a908.seg 02691156\n03001627/points/a2441f03fed7c13def31f91fe6afc8fa.pts 03001627/expert_verified/points_label/a2441f03fed7c13def31f91fe6afc8fa.seg 03001627\n03001627/points/49c955a80749d2e1a5ffdf44ff86b795.pts 03001627/expert_verified/points_label/49c955a80749d2e1a5ffdf44ff86b795.seg 03001627\n03636649/points/c43c89d862e10552b24ecc319936dfe2.pts 03636649/expert_verified/points_label/c43c89d862e10552b24ecc319936dfe2.seg 03636649\n03636649/points/e5ff9311bee487f5ca4aaad7dc0e3a16.pts 03636649/expert_verified/points_label/e5ff9311bee487f5ca4aaad7dc0e3a16.seg 03636649\n02958343/points/ba0ac1d1e25d3fad63f2c3a55558a78f.pts 02958343/expert_verified/points_label/ba0ac1d1e25d3fad63f2c3a55558a78f.seg 02958343\n04379243/points/2f58b1ca8634a6b48b9b51ae4415d5aa.pts 04379243/expert_verified/points_label/2f58b1ca8634a6b48b9b51ae4415d5aa.seg 04379243\n03001627/points/c585ee093bfd52af6512b7b24f3d84.pts 03001627/expert_verified/points_label/c585ee093bfd52af6512b7b24f3d84.seg 03001627\n03001627/points/46f6a6e0f239282fc8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/46f6a6e0f239282fc8687ff9b0b4e4ac.seg 03001627\n03642806/points/f72dc1ffeae0168aadcfd37206a0d18b.pts 03642806/expert_verified/points_label/f72dc1ffeae0168aadcfd37206a0d18b.seg 03642806\n03948459/points/1e83ef6ed5d0b78b7efb854782e23566.pts 03948459/expert_verified/points_label/1e83ef6ed5d0b78b7efb854782e23566.seg 03948459\n03001627/points/95e5f6e550761aefe65b629e4a22f51e.pts 
03001627/expert_verified/points_label/95e5f6e550761aefe65b629e4a22f51e.seg 03001627\n03001627/points/b38d05caee69c7ac8fc6229eb64e56a.pts 03001627/expert_verified/points_label/b38d05caee69c7ac8fc6229eb64e56a.seg 03001627\n02691156/points/4ff50b9f815c58acca8607f540cc62ba.pts 02691156/expert_verified/points_label/4ff50b9f815c58acca8607f540cc62ba.seg 02691156\n03636649/points/78a11c0b8e964c9b41657e31b569b105.pts 03636649/expert_verified/points_label/78a11c0b8e964c9b41657e31b569b105.seg 03636649\n02958343/points/b1f75a8e8b9e921a8a6cf8c6b92417f2.pts 02958343/expert_verified/points_label/b1f75a8e8b9e921a8a6cf8c6b92417f2.seg 02958343\n02958343/points/a836fc66c01eccca58c27e607f6e2d4c.pts 02958343/expert_verified/points_label/a836fc66c01eccca58c27e607f6e2d4c.seg 02958343\n02691156/points/fac4af109beb0108b4f192eea1889928.pts 02691156/expert_verified/points_label/fac4af109beb0108b4f192eea1889928.seg 02691156\n03467517/points/b9c10bf6fc2095f93f0194265a9746c.pts 03467517/expert_verified/points_label/b9c10bf6fc2095f93f0194265a9746c.seg 03467517\n02691156/points/b976a48c015d6ced5e9e2656aff7dd5b.pts 02691156/expert_verified/points_label/b976a48c015d6ced5e9e2656aff7dd5b.seg 02691156\n04379243/points/889f48aa85accd2ee73947fdf756a329.pts 04379243/expert_verified/points_label/889f48aa85accd2ee73947fdf756a329.seg 04379243\n02691156/points/b6d61068ef2bf2d46059aeb39e538eb2.pts 02691156/expert_verified/points_label/b6d61068ef2bf2d46059aeb39e538eb2.seg 02691156\n04379243/points/d94de64641651a2079b3e1be3524f72f.pts 04379243/expert_verified/points_label/d94de64641651a2079b3e1be3524f72f.seg 04379243\n03001627/points/117bd6da01905949a81116f5456ee312.pts 03001627/expert_verified/points_label/117bd6da01905949a81116f5456ee312.seg 03001627\n03636649/points/845542d0f578a9db1ec48bc3c478566d.pts 03636649/expert_verified/points_label/845542d0f578a9db1ec48bc3c478566d.seg 03636649\n04379243/points/9391dcc782fa7f6bfdad344760a9dafd.pts 
04379243/expert_verified/points_label/9391dcc782fa7f6bfdad344760a9dafd.seg 04379243
04379243/points/fe99a1127734f7852b70eac6546e93fd.pts 04379243/expert_verified/points_label/fe99a1127734f7852b70eac6546e93fd.seg 04379243
03001627/points/4e358c2dc0513971f98c0761af40e04.pts 03001627/expert_verified/points_label/4e358c2dc0513971f98c0761af40e04.seg 03001627
03636649/points/53afad2e573b26b141657e31b569b105.pts 03636649/expert_verified/points_label/53afad2e573b26b141657e31b569b105.seg 03636649
04379243/points/3e51742cb382aa1f79b3e1be3524f72f.pts 04379243/expert_verified/points_label/3e51742cb382aa1f79b3e1be3524f72f.seg 04379243
02958343/points/4f17af1ca7ae689d409b2c4484d833cc.pts 02958343/expert_verified/points_label/4f17af1ca7ae689d409b2c4484d833cc.seg 02958343
03467517/points/c739664436ac5237aa0c867d5b070a5d.pts 03467517/expert_verified/points_label/c739664436ac5237aa0c867d5b070a5d.seg 03467517
03797390/points/61c10dccfa8e508e2d66cbf6a91063.pts 03797390/expert_verified/points_label/61c10dccfa8e508e2d66cbf6a91063.seg 03797390
03467517/points/aa86d20d03b2303593f0194265a9746c.pts 03467517/expert_verified/points_label/aa86d20d03b2303593f0194265a9746c.seg 03467517
04379243/points/2f98d5e721e84debaa8081a7009091db.pts 04379243/expert_verified/points_label/2f98d5e721e84debaa8081a7009091db.seg 04379243
04379243/points/2a0f853dadd841f96f1e07a56c129dfc.pts 04379243/expert_verified/points_label/2a0f853dadd841f96f1e07a56c129dfc.seg 04379243
03001627/points/8031478c3fe31ddcc337647acafe65f0.pts 03001627/expert_verified/points_label/8031478c3fe31ddcc337647acafe65f0.seg 03001627
03636649/points/a53112591be182b9d93768e7b9b1eabf.pts 03636649/expert_verified/points_label/a53112591be182b9d93768e7b9b1eabf.seg 03636649
03001627/points/5bc916f8b9d0a7c6b40f0ac0fb9a650d.pts 03001627/expert_verified/points_label/5bc916f8b9d0a7c6b40f0ac0fb9a650d.seg 03001627
02691156/points/f2d4b8440d4bde5330afbcb38d77d0c3.pts 02691156/expert_verified/points_label/f2d4b8440d4bde5330afbcb38d77d0c3.seg 02691156
03001627/points/e4274fc2b9e4a5511882515d09f3979e.pts 03001627/expert_verified/points_label/e4274fc2b9e4a5511882515d09f3979e.seg 03001627
03001627/points/9ab18a33335373b2659dda512294c744.pts 03001627/expert_verified/points_label/9ab18a33335373b2659dda512294c744.seg 03001627
04379243/points/32ea6609eb659a2cec3367bccf60e518.pts 04379243/expert_verified/points_label/32ea6609eb659a2cec3367bccf60e518.seg 04379243
04379243/points/759cb93134fd5efde76bc197b3a3ffc0.pts 04379243/expert_verified/points_label/759cb93134fd5efde76bc197b3a3ffc0.seg 04379243
03001627/points/b8b5e172ee58899df2d9e72ba502035.pts 03001627/expert_verified/points_label/b8b5e172ee58899df2d9e72ba502035.seg 03001627
03001627/points/1886b3e3f3d4af3ace522e6dda26fb51.pts 03001627/expert_verified/points_label/1886b3e3f3d4af3ace522e6dda26fb51.seg 03001627
03948459/points/3f5f657bec9a21814ce6ac98dc4781fe.pts 03948459/expert_verified/points_label/3f5f657bec9a21814ce6ac98dc4781fe.seg 03948459
04379243/points/5adf5a7173e588ad76e9713f57a5fcb6.pts 04379243/expert_verified/points_label/5adf5a7173e588ad76e9713f57a5fcb6.seg 04379243
03001627/points/f33b6f791e9d64387d01b77e04a0bc7b.pts 03001627/expert_verified/points_label/f33b6f791e9d64387d01b77e04a0bc7b.seg 03001627
04379243/points/4e928377ae98ed8d99e8bf807e902261.pts 04379243/expert_verified/points_label/4e928377ae98ed8d99e8bf807e902261.seg 04379243
03001627/points/d7867d215f52107ba5e8cf3aa1686d66.pts 03001627/expert_verified/points_label/d7867d215f52107ba5e8cf3aa1686d66.seg 03001627
02691156/points/bddc2c1a4fae008947a1dbf5fd48a4dd.pts 02691156/expert_verified/points_label/bddc2c1a4fae008947a1dbf5fd48a4dd.seg 02691156
02958343/points/bafacc7f28509d4157abc6fa0d632bc7.pts 02958343/expert_verified/points_label/bafacc7f28509d4157abc6fa0d632bc7.seg 02958343
02691156/points/a14b262838529c2c81e1d9f6b27f1a92.pts 02691156/expert_verified/points_label/a14b262838529c2c81e1d9f6b27f1a92.seg 02691156
03001627/points/38afa26a419ea3abed040525648fc6d7.pts 03001627/expert_verified/points_label/38afa26a419ea3abed040525648fc6d7.seg 03001627
04379243/points/79f63a1564928af071a782a4379556c7.pts 04379243/expert_verified/points_label/79f63a1564928af071a782a4379556c7.seg 04379243
04379243/points/cbd1cd9b5423f890beedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/cbd1cd9b5423f890beedb4c8fd29e2d1.seg 04379243
02691156/points/d74767519393a937f73e5bc170b7e2be.pts 02691156/expert_verified/points_label/d74767519393a937f73e5bc170b7e2be.seg 02691156
03001627/points/9a82269e56737217e16571f1d370cad9.pts 03001627/expert_verified/points_label/9a82269e56737217e16571f1d370cad9.seg 03001627
03001627/points/6e1e73e14637a28da1c367d7a459a9b7.pts 03001627/expert_verified/points_label/6e1e73e14637a28da1c367d7a459a9b7.seg 03001627
03797390/points/eecb13f61a93b4048f58d8b19de93f99.pts 03797390/expert_verified/points_label/eecb13f61a93b4048f58d8b19de93f99.seg 03797390
03001627/points/4f7523a3d276bfae4b3c42e318f3affc.pts 03001627/expert_verified/points_label/4f7523a3d276bfae4b3c42e318f3affc.seg 03001627
03624134/points/f19fe19693937db1cb03b57fca000b1f.pts 03624134/expert_verified/points_label/f19fe19693937db1cb03b57fca000b1f.seg 03624134
02958343/points/c3858a8b73dcb137e3bdba9430565083.pts 02958343/expert_verified/points_label/c3858a8b73dcb137e3bdba9430565083.seg 02958343
04379243/points/3ce930bb150aef8a69fb38085fbc320c.pts 04379243/expert_verified/points_label/3ce930bb150aef8a69fb38085fbc320c.seg 04379243
04379243/points/75e3cbf4b1ef0df971a782a4379556c7.pts 04379243/expert_verified/points_label/75e3cbf4b1ef0df971a782a4379556c7.seg 04379243
04379243/points/5040f8f3e2293db448e116352760c52d.pts 04379243/expert_verified/points_label/5040f8f3e2293db448e116352760c52d.seg 04379243
04379243/points/edaf24be15738ea2c5d1c45cadcaa3eb.pts 04379243/expert_verified/points_label/edaf24be15738ea2c5d1c45cadcaa3eb.seg 04379243
04379243/points/6fb52c296531dc17beedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/6fb52c296531dc17beedb4c8fd29e2d1.seg 04379243
04379243/points/e777df6ffb40e3a1853d412328e7e7a6.pts 04379243/expert_verified/points_label/e777df6ffb40e3a1853d412328e7e7a6.seg 04379243
03001627/points/9c103621101bcf9919fb4103277a6b93.pts 03001627/expert_verified/points_label/9c103621101bcf9919fb4103277a6b93.seg 03001627
03001627/points/5d20adaf6d8f89fa2f1c10544d7d6f.pts 03001627/expert_verified/points_label/5d20adaf6d8f89fa2f1c10544d7d6f.seg 03001627
02691156/points/b80bd34ab330babbc8727b27ee96a4b7.pts 02691156/expert_verified/points_label/b80bd34ab330babbc8727b27ee96a4b7.seg 02691156
04379243/points/50d898f6d1c05cee2d99129afd32edf4.pts 04379243/expert_verified/points_label/50d898f6d1c05cee2d99129afd32edf4.seg 04379243
04379243/points/c0c836c630cdb4bb664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/c0c836c630cdb4bb664b3b9b23ddfcbc.seg 04379243
03790512/points/a1553e0bb7897a7ace0bf41e5f45753d.pts 03790512/expert_verified/points_label/a1553e0bb7897a7ace0bf41e5f45753d.seg 03790512
03467517/points/7701180906a0aa156a7ae841f1f88f87.pts 03467517/expert_verified/points_label/7701180906a0aa156a7ae841f1f88f87.seg 03467517
03467517/points/3ef569c13f4ab5f83ac61a2f8346a8f.pts 03467517/expert_verified/points_label/3ef569c13f4ab5f83ac61a2f8346a8f.seg 03467517
03636649/points/3834d7f376879c03eca29403b7226aa1.pts 03636649/expert_verified/points_label/3834d7f376879c03eca29403b7226aa1.seg 03636649
02958343/points/34ab29cea66952f16f48edd113a40fce.pts 02958343/expert_verified/points_label/34ab29cea66952f16f48edd113a40fce.seg 02958343
02958343/points/e24f388736f4e6fd2cdd250493632937.pts 02958343/expert_verified/points_label/e24f388736f4e6fd2cdd250493632937.seg 02958343
03001627/points/3ae022522800685c610195e4fb10d1de.pts 03001627/expert_verified/points_label/3ae022522800685c610195e4fb10d1de.seg 03001627
02691156/points/49660fd24e5c2fbab87697d3904b168b.pts 02691156/expert_verified/points_label/49660fd24e5c2fbab87697d3904b168b.seg 02691156
03642806/points/2d5d4d79cd464298566636e42679cc7f.pts 03642806/expert_verified/points_label/2d5d4d79cd464298566636e42679cc7f.seg 03642806
04379243/points/7988dedacce42552ab610b0c94236463.pts 04379243/expert_verified/points_label/7988dedacce42552ab610b0c94236463.seg 04379243
04379243/points/91ed62f2b3fd5919f12d7184a2ad3430.pts 04379243/expert_verified/points_label/91ed62f2b3fd5919f12d7184a2ad3430.seg 04379243
03001627/points/a5898fefb1733333a82b0d8d157287f5.pts 03001627/expert_verified/points_label/a5898fefb1733333a82b0d8d157287f5.seg 03001627
04379243/points/b4ef1de99422b08768661782af60b711.pts 04379243/expert_verified/points_label/b4ef1de99422b08768661782af60b711.seg 04379243
03001627/points/df2b7e697ab6ca0f155d75bbf62b80.pts 03001627/expert_verified/points_label/df2b7e697ab6ca0f155d75bbf62b80.seg 03001627
03467517/points/408a8e1b51266b9ccc34b900bb2492e.pts 03467517/expert_verified/points_label/408a8e1b51266b9ccc34b900bb2492e.seg 03467517
03001627/points/597f2b2153af0c544aabcf2a7cb640f9.pts 03001627/expert_verified/points_label/597f2b2153af0c544aabcf2a7cb640f9.seg 03001627
03001627/points/6870fbd4a7b733b0674f1c30a8cad95a.pts 03001627/expert_verified/points_label/6870fbd4a7b733b0674f1c30a8cad95a.seg 03001627
03001627/points/e35d7d19dcdc9e5c30e06a011e63236a.pts 03001627/expert_verified/points_label/e35d7d19dcdc9e5c30e06a011e63236a.seg 03001627
04225987/points/58ade10f7f87edc6e860048d7ced02e3.pts 04225987/expert_verified/points_label/58ade10f7f87edc6e860048d7ced02e3.seg 04225987
04379243/points/39cf5ae2b497715a84253b2030fab070.pts 04379243/expert_verified/points_label/39cf5ae2b497715a84253b2030fab070.seg 04379243
04379243/points/ab7b0db92f96381f8cbb8bac2032149c.pts 04379243/expert_verified/points_label/ab7b0db92f96381f8cbb8bac2032149c.seg 04379243
03001627/points/b117b01ab380362db8134b0fbf68257d.pts 03001627/expert_verified/points_label/b117b01ab380362db8134b0fbf68257d.seg 03001627
03467517/points/913f3c90f5b78256e98e318d424a4bb9.pts 03467517/expert_verified/points_label/913f3c90f5b78256e98e318d424a4bb9.seg 03467517
04379243/points/831985fb385a5b2a9ae2d75b4fc35b7.pts 04379243/expert_verified/points_label/831985fb385a5b2a9ae2d75b4fc35b7.seg 04379243
03467517/points/482b8b9a225b6ca1d57700c05b1862d8.pts 03467517/expert_verified/points_label/482b8b9a225b6ca1d57700c05b1862d8.seg 03467517
03001627/points/93a6876247c7a015d84b8ba651dfb8ac.pts 03001627/expert_verified/points_label/93a6876247c7a015d84b8ba651dfb8ac.seg 03001627
04379243/points/a78273aa10b2dfb0bc8d334f99e7f52.pts 04379243/expert_verified/points_label/a78273aa10b2dfb0bc8d334f99e7f52.seg 04379243
04379243/points/3c686ac317c496f9a71c812e027f94d9.pts 04379243/expert_verified/points_label/3c686ac317c496f9a71c812e027f94d9.seg 04379243
02691156/points/50755e616df58fe566cf1b4a8fc3914e.pts 02691156/expert_verified/points_label/50755e616df58fe566cf1b4a8fc3914e.seg 02691156
03001627/points/8cedc8e684d60ff42a06d8c81262ef96.pts 03001627/expert_verified/points_label/8cedc8e684d60ff42a06d8c81262ef96.seg 03001627
04379243/points/f74c321042dbc8e684d78f017ff73fd6.pts 04379243/expert_verified/points_label/f74c321042dbc8e684d78f017ff73fd6.seg 04379243
02958343/points/5130947e5f18e73a8321b7d65a99d2a.pts 02958343/expert_verified/points_label/5130947e5f18e73a8321b7d65a99d2a.seg 02958343
03261776/points/f5d210ff14ca9d29b6d9c2cee7f2f72b.pts 03261776/expert_verified/points_label/f5d210ff14ca9d29b6d9c2cee7f2f72b.seg 03261776
03001627/points/d36de0f850783d8fd6b3090036b71698.pts 03001627/expert_verified/points_label/d36de0f850783d8fd6b3090036b71698.seg 03001627
03001627/points/6897c2665267cca39eea64ae4d2b4158.pts 03001627/expert_verified/points_label/6897c2665267cca39eea64ae4d2b4158.seg 03001627
03001627/points/6e98c5d61e008b4c2871cc0b3cc1a485.pts 03001627/expert_verified/points_label/6e98c5d61e008b4c2871cc0b3cc1a485.seg 03001627
02958343/points/92f697d036addb55ed576c2966428f.pts 02958343/expert_verified/points_label/92f697d036addb55ed576c2966428f.seg 02958343
04379243/points/f3fd419f725aa894ba5342d638d0c267.pts 04379243/expert_verified/points_label/f3fd419f725aa894ba5342d638d0c267.seg 04379243
04379243/points/62eff79cf2e75bc2765ee729adbdf968.pts 04379243/expert_verified/points_label/62eff79cf2e75bc2765ee729adbdf968.seg 04379243
03001627/points/98a1f8651c962402492d9da2668ec34c.pts 03001627/expert_verified/points_label/98a1f8651c962402492d9da2668ec34c.seg 03001627
03636649/points/d90639e69c82f864eb2d9895648d1206.pts 03636649/expert_verified/points_label/d90639e69c82f864eb2d9895648d1206.seg 03636649
02954340/points/a1494210f6774b87b3e0e60b857dde8f.pts 02954340/expert_verified/points_label/a1494210f6774b87b3e0e60b857dde8f.seg 02954340
03467517/points/d528407fe43b5df193f0194265a9746c.pts 03467517/expert_verified/points_label/d528407fe43b5df193f0194265a9746c.seg 03467517
03636649/points/776e4b38023091002cd2160e449d45ae.pts 03636649/expert_verified/points_label/776e4b38023091002cd2160e449d45ae.seg 03636649
04379243/points/91df49ec00f2c5ce73f1ca2ca101a20d.pts 04379243/expert_verified/points_label/91df49ec00f2c5ce73f1ca2ca101a20d.seg 04379243
04379243/points/47f25d5b367326ceaaf15b62af6b513f.pts 04379243/expert_verified/points_label/47f25d5b367326ceaaf15b62af6b513f.seg 04379243
04379243/points/f5d6579b3a1f5a879d2be74cfb51ade1.pts 04379243/expert_verified/points_label/f5d6579b3a1f5a879d2be74cfb51ade1.seg 04379243
02691156/points/f6ea6663b48bf78261f1ef59130c405d.pts 02691156/expert_verified/points_label/f6ea6663b48bf78261f1ef59130c405d.seg 02691156
03001627/points/63da17eda9d415b5319c5e90e9cc9126.pts 03001627/expert_verified/points_label/63da17eda9d415b5319c5e90e9cc9126.seg 03001627
02691156/points/9fb60716f0f5a2b84408eb298433d643.pts 02691156/expert_verified/points_label/9fb60716f0f5a2b84408eb298433d643.seg 02691156
02773838/points/5161d9adede671d6edc32c5c9ec9f827.pts 02773838/expert_verified/points_label/5161d9adede671d6edc32c5c9ec9f827.seg 02773838
04379243/points/696beb1883be838cc955e5ed03ef3a2f.pts 04379243/expert_verified/points_label/696beb1883be838cc955e5ed03ef3a2f.seg 04379243
03001627/points/bc184c3cbe3349b19fb4103277a6b93.pts 03001627/expert_verified/points_label/bc184c3cbe3349b19fb4103277a6b93.seg 03001627
03642806/points/28fbfd8b8c9c6f16e1e44e2fc05361d9.pts 03642806/expert_verified/points_label/28fbfd8b8c9c6f16e1e44e2fc05361d9.seg 03642806
04379243/points/506e4e67efe1794c1dacbc3d67b5a11a.pts 04379243/expert_verified/points_label/506e4e67efe1794c1dacbc3d67b5a11a.seg 04379243
02691156/points/a48676cfe44fd9bee40acb87a6be88b3.pts 02691156/expert_verified/points_label/a48676cfe44fd9bee40acb87a6be88b3.seg 02691156
04379243/points/9e5926bfdc7f01749e65a3d2929a9516.pts 04379243/expert_verified/points_label/9e5926bfdc7f01749e65a3d2929a9516.seg 04379243
04379243/points/dc47d49db6ac670635d498476a30ff0e.pts 04379243/expert_verified/points_label/dc47d49db6ac670635d498476a30ff0e.seg 04379243
04379243/points/33c6e3b21a67b750e78d7b497732dce1.pts 04379243/expert_verified/points_label/33c6e3b21a67b750e78d7b497732dce1.seg 04379243
04379243/points/27295a6f585b7817febad4f49b26ec52.pts 04379243/expert_verified/points_label/27295a6f585b7817febad4f49b26ec52.seg 04379243
03624134/points/6f8b660661269406504c6b6d62466c67.pts 03624134/expert_verified/points_label/6f8b660661269406504c6b6d62466c67.seg 03624134
03642806/points/dbc61cbed5f7f2b33c1abb78f1519c49.pts 03642806/expert_verified/points_label/dbc61cbed5f7f2b33c1abb78f1519c49.seg 03642806
03001627/points/374bec02e71fe06528b4c5ec471dc963.pts 03001627/expert_verified/points_label/374bec02e71fe06528b4c5ec471dc963.seg 03001627
03001627/points/b41aaea5754adae0444b41d6d7f557fa.pts 03001627/expert_verified/points_label/b41aaea5754adae0444b41d6d7f557fa.seg 03001627
03001627/points/7f4f73ad1b3f882ba14472becb07b261.pts 03001627/expert_verified/points_label/7f4f73ad1b3f882ba14472becb07b261.seg 03001627
03001627/points/b80122c3a0543a7b7eaeab1f0c9120b7.pts 03001627/expert_verified/points_label/b80122c3a0543a7b7eaeab1f0c9120b7.seg 03001627
04379243/points/2e4fbab46e264616d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/2e4fbab46e264616d93768e7b9b1eabf.seg 04379243
03001627/points/4a12589099b05c51e13b3410f3683610.pts 03001627/expert_verified/points_label/4a12589099b05c51e13b3410f3683610.seg 03001627
03001627/points/bc523df998d94c7223ac0bd64c9cb255.pts 03001627/expert_verified/points_label/bc523df998d94c7223ac0bd64c9cb255.seg 03001627
02691156/points/218caa58819e10d1fe40308d822f996c.pts 02691156/expert_verified/points_label/218caa58819e10d1fe40308d822f996c.seg 02691156
04379243/points/a5e951c9d7a9a93f8cbb8bac2032149c.pts 04379243/expert_verified/points_label/a5e951c9d7a9a93f8cbb8bac2032149c.seg 04379243
03636649/points/f228f6cd86162beb659dda512294c744.pts 03636649/expert_verified/points_label/f228f6cd86162beb659dda512294c744.seg 03636649
03467517/points/648a820e550bdfd093f0194265a9746c.pts 03467517/expert_verified/points_label/648a820e550bdfd093f0194265a9746c.seg 03467517
03624134/points/8f61777bf6b57fedc13545c5b1a2e607.pts 03624134/expert_verified/points_label/8f61777bf6b57fedc13545c5b1a2e607.seg 03624134
03001627/points/bb9efb4912a018b3c329e2758ab09ecb.pts 03001627/expert_verified/points_label/bb9efb4912a018b3c329e2758ab09ecb.seg 03001627
03001627/points/fdac1f9c0b030841c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/fdac1f9c0b030841c8687ff9b0b4e4ac.seg 03001627
02691156/points/8ac8c21b63ff535fca8607f540cc62ba.pts 02691156/expert_verified/points_label/8ac8c21b63ff535fca8607f540cc62ba.seg 02691156
03467517/points/4e4d180e78d8b52a93f0194265a9746c.pts 03467517/expert_verified/points_label/4e4d180e78d8b52a93f0194265a9746c.seg 03467517
03636649/points/7bc1b202ebf000625949e084b65603cf.pts 03636649/expert_verified/points_label/7bc1b202ebf000625949e084b65603cf.seg 03636649
03001627/points/3c8362c1e57c30d7e6c5cd45aa112726.pts 03001627/expert_verified/points_label/3c8362c1e57c30d7e6c5cd45aa112726.seg 03001627
03001627/points/5510d5af1ab5714b3c42e318f3affc.pts 03001627/expert_verified/points_label/5510d5af1ab5714b3c42e318f3affc.seg 03001627
04379243/points/4d393b562df7cfad9a16b095d67f7209.pts 04379243/expert_verified/points_label/4d393b562df7cfad9a16b095d67f7209.seg 04379243
03797390/points/e984fd7e97c2be347eaeab1f0c9120b7.pts 03797390/expert_verified/points_label/e984fd7e97c2be347eaeab1f0c9120b7.seg 03797390
03001627/points/483d22dbbee32ee54e5c7d89bdfc49a3.pts 03001627/expert_verified/points_label/483d22dbbee32ee54e5c7d89bdfc49a3.seg 03001627
02691156/points/a5cd14be786fc8175e9e2656aff7dd5b.pts 02691156/expert_verified/points_label/a5cd14be786fc8175e9e2656aff7dd5b.seg 02691156
03636649/points/d4bbd93c0d85e77d7934a0d24a61231.pts 03636649/expert_verified/points_label/d4bbd93c0d85e77d7934a0d24a61231.seg 03636649
03467517/points/7027bc171baae1d663e148e250c0340d.pts 03467517/expert_verified/points_label/7027bc171baae1d663e148e250c0340d.seg 03467517
03636649/points/1a44dd6ee873d443da13974b3533fb59.pts 03636649/expert_verified/points_label/1a44dd6ee873d443da13974b3533fb59.seg 03636649
04379243/points/2e3037a285fd8b5c1be2a853ec4f9e8.pts 04379243/expert_verified/points_label/2e3037a285fd8b5c1be2a853ec4f9e8.seg 04379243
04379243/points/e3b585b15506fa7113f96345312df593.pts 04379243/expert_verified/points_label/e3b585b15506fa7113f96345312df593.seg 04379243
02958343/points/ee1d28a50a2b71e129348d14ca881f7d.pts 02958343/expert_verified/points_label/ee1d28a50a2b71e129348d14ca881f7d.seg 02958343
03001627/points/22af872ac796ed26ff8d7c1096fae070.pts 03001627/expert_verified/points_label/22af872ac796ed26ff8d7c1096fae070.seg 03001627
03642806/points/9b4ab67eb448c49c11ced4a54f2e6229.pts 03642806/expert_verified/points_label/9b4ab67eb448c49c11ced4a54f2e6229.seg 03642806
03624134/points/1640911b9dc0ef0da95c6095f89cd899.pts 03624134/expert_verified/points_label/1640911b9dc0ef0da95c6095f89cd899.seg 03624134
03001627/points/f6810de4042cc5ce57bd4bc6eae9b341.pts 03001627/expert_verified/points_label/f6810de4042cc5ce57bd4bc6eae9b341.seg 03001627
03001627/points/c46eb7460be602b6bf80985a99195eb8.pts 03001627/expert_verified/points_label/c46eb7460be602b6bf80985a99195eb8.seg 03001627
03624134/points/debbbf239d59d8724662dc124dd336ed.pts 03624134/expert_verified/points_label/debbbf239d59d8724662dc124dd336ed.seg 03624134
04379243/points/5b51e63726f21bb6a75d03186a0409e2.pts 04379243/expert_verified/points_label/5b51e63726f21bb6a75d03186a0409e2.seg 04379243
02691156/points/b59a7cab8e95f6eaf3a7414a84b5637.pts 02691156/expert_verified/points_label/b59a7cab8e95f6eaf3a7414a84b5637.seg 02691156
03001627/points/52c32b187590e8f3bba5aaac798c64af.pts 03001627/expert_verified/points_label/52c32b187590e8f3bba5aaac798c64af.seg 03001627
03001627/points/1c173d970e21e9a8be95ff480950e9ef.pts 03001627/expert_verified/points_label/1c173d970e21e9a8be95ff480950e9ef.seg 03001627
03624134/points/7238d0009faeacb5fd770de1635caa0.pts 03624134/expert_verified/points_label/7238d0009faeacb5fd770de1635caa0.seg 03624134
04379243/points/cc554812025dc498e7ed5b5b11f935c9.pts 04379243/expert_verified/points_label/cc554812025dc498e7ed5b5b11f935c9.seg 04379243
04379243/points/fff492e352c8cb336240c88cd4684446.pts 04379243/expert_verified/points_label/fff492e352c8cb336240c88cd4684446.seg 04379243
03636649/points/e0a2948797cc33b2e19a0cc107ada7cd.pts 03636649/expert_verified/points_label/e0a2948797cc33b2e19a0cc107ada7cd.seg 03636649
03636649/points/fe02f6594ed8b96ae85a3dc26b76b2ae.pts 03636649/expert_verified/points_label/fe02f6594ed8b96ae85a3dc26b76b2ae.seg 03636649
04379243/points/d4a7a1dc0f1a51986f15d61c214769af.pts 04379243/expert_verified/points_label/d4a7a1dc0f1a51986f15d61c214769af.seg 04379243
03624134/points/3dbda789bc59a5f99246ea0301684d80.pts 03624134/expert_verified/points_label/3dbda789bc59a5f99246ea0301684d80.seg 03624134
04379243/points/b82e068c2c18cd67b09f0ca9c143fdfd.pts 04379243/expert_verified/points_label/b82e068c2c18cd67b09f0ca9c143fdfd.seg 04379243
03001627/points/b360f2264526521f1dee989d1177ef4e.pts 03001627/expert_verified/points_label/b360f2264526521f1dee989d1177ef4e.seg 03001627
02691156/points/8ff8f3c845e7ae8443afdb9c81ff2967.pts 02691156/expert_verified/points_label/8ff8f3c845e7ae8443afdb9c81ff2967.seg 02691156
03001627/points/ea87765cf9dbe2fe55f46d55537192b6.pts 03001627/expert_verified/points_label/ea87765cf9dbe2fe55f46d55537192b6.seg 03001627
03001627/points/df23ca11080bb439676c272956dad3c2.pts 03001627/expert_verified/points_label/df23ca11080bb439676c272956dad3c2.seg 03001627
03790512/points/a3dfeae5bced3533b37378f3c85478b4.pts 03790512/expert_verified/points_label/a3dfeae5bced3533b37378f3c85478b4.seg 03790512
04379243/points/9af7a071bbd432baa5526f91aecc0c37.pts 04379243/expert_verified/points_label/9af7a071bbd432baa5526f91aecc0c37.seg 04379243
03001627/points/a8b5f5b6bf0cb2d6876b399a99a15c0f.pts 03001627/expert_verified/points_label/a8b5f5b6bf0cb2d6876b399a99a15c0f.seg 03001627
03001627/points/c7e590c0390e8d5debe67d9b32c3ddf8.pts 03001627/expert_verified/points_label/c7e590c0390e8d5debe67d9b32c3ddf8.seg 03001627
03790512/points/4f30742005b7c20e883158c0007ed9ba.pts 03790512/expert_verified/points_label/4f30742005b7c20e883158c0007ed9ba.seg 03790512
04379243/points/40b632472f8e69a7664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/40b632472f8e69a7664b3b9b23ddfcbc.seg 04379243
03467517/points/d71c17b4d1ffa131f10a27cbb87f3a5.pts 03467517/expert_verified/points_label/d71c17b4d1ffa131f10a27cbb87f3a5.seg 03467517
04379243/points/f563e9cd92a0dbe5a07b1c1d0ca9cf45.pts 04379243/expert_verified/points_label/f563e9cd92a0dbe5a07b1c1d0ca9cf45.seg 04379243
03797390/points/1a97f3c83016abca21d0de04f408950f.pts 03797390/expert_verified/points_label/1a97f3c83016abca21d0de04f408950f.seg 03797390
04379243/points/c3135e3b21b42e132449009b96f8a6ed.pts 04379243/expert_verified/points_label/c3135e3b21b42e132449009b96f8a6ed.seg 04379243
03636649/points/89b168160388c29da996f5a90dae9cac.pts 03636649/expert_verified/points_label/89b168160388c29da996f5a90dae9cac.seg 03636649
02958343/points/8bbbfdbec9251733ace5721ccacba16.pts 02958343/expert_verified/points_label/8bbbfdbec9251733ace5721ccacba16.seg 02958343
04379243/points/db5a895ae7358c97b66213207f46bee7.pts 04379243/expert_verified/points_label/db5a895ae7358c97b66213207f46bee7.seg 04379243
03001627/points/6a28919186eb55ecf69d0cf4fdc89b12.pts 03001627/expert_verified/points_label/6a28919186eb55ecf69d0cf4fdc89b12.seg 03001627
04379243/points/e7169243daef074dc82dc2efb3363de1.pts 04379243/expert_verified/points_label/e7169243daef074dc82dc2efb3363de1.seg 04379243
03467517/points/4ae5a491c3ffb473462c6cdd250c26bb.pts 03467517/expert_verified/points_label/4ae5a491c3ffb473462c6cdd250c26bb.seg 03467517
04379243/points/e1a8e9e2059f4792fbb8cbddab1c2002.pts 04379243/expert_verified/points_label/e1a8e9e2059f4792fbb8cbddab1c2002.seg 04379243
03467517/points/364f85832427992343820c03f9f59458.pts 03467517/expert_verified/points_label/364f85832427992343820c03f9f59458.seg 03467517
02958343/points/4822076e48b366371f0d59cde6139796.pts 02958343/expert_verified/points_label/4822076e48b366371f0d59cde6139796.seg 02958343
03636649/points/d34a10201a5448a253cf897b7fc1d12.pts 03636649/expert_verified/points_label/d34a10201a5448a253cf897b7fc1d12.seg 03636649
03467517/points/77095861248c816693f0194265a9746c.pts 03467517/expert_verified/points_label/77095861248c816693f0194265a9746c.seg 03467517
04379243/points/dacde6546ca2e07f66dc6ea1ac82d91f.pts 04379243/expert_verified/points_label/dacde6546ca2e07f66dc6ea1ac82d91f.seg 04379243
03636649/points/670ad2964ad5a98c9f1a71e46bbde97c.pts 03636649/expert_verified/points_label/670ad2964ad5a98c9f1a71e46bbde97c.seg 03636649
02691156/points/77c9fd0f0c6b0e9fca8607f540cc62ba.pts 02691156/expert_verified/points_label/77c9fd0f0c6b0e9fca8607f540cc62ba.seg 02691156
03001627/points/5fc6b04623ae6a9963ed57e35c972b4b.pts 03001627/expert_verified/points_label/5fc6b04623ae6a9963ed57e35c972b4b.seg 03001627
02958343/points/f18093ac0242d439f500cc506a763c18.pts 02958343/expert_verified/points_label/f18093ac0242d439f500cc506a763c18.seg 02958343
03001627/points/2fed64c67552aa689c1db271ad9472a7.pts 03001627/expert_verified/points_label/2fed64c67552aa689c1db271ad9472a7.seg 03001627
03001627/points/bf7e8e0dc4f4038cc2567be77cb7ab45.pts 03001627/expert_verified/points_label/bf7e8e0dc4f4038cc2567be77cb7ab45.seg 03001627
04379243/points/690e073a4000c7ae540e292bd26f307a.pts 04379243/expert_verified/points_label/690e073a4000c7ae540e292bd26f307a.seg 04379243
03467517/points/5fc56e6d220d775e381b7fbf79296afb.pts 03467517/expert_verified/points_label/5fc56e6d220d775e381b7fbf79296afb.seg 03467517
04379243/points/8af3fd230ea7ac6518101790733ed6b2.pts 04379243/expert_verified/points_label/8af3fd230ea7ac6518101790733ed6b2.seg 04379243
03636649/points/80436dff2a30721849655ac7c771b113.pts 03636649/expert_verified/points_label/80436dff2a30721849655ac7c771b113.seg 03636649
03790512/points/b767982d38b5171e429f1c522640e6f0.pts 03790512/expert_verified/points_label/b767982d38b5171e429f1c522640e6f0.seg 03790512
03001627/points/40e6fb27aeb9c9ab44f999802029a79a.pts 03001627/expert_verified/points_label/40e6fb27aeb9c9ab44f999802029a79a.seg 03001627
04379243/points/59e1afdec89de9442b70eac6546e93fd.pts 04379243/expert_verified/points_label/59e1afdec89de9442b70eac6546e93fd.seg 04379243
02691156/points/43d8125d940bb2ae850f318836ee7512.pts 02691156/expert_verified/points_label/43d8125d940bb2ae850f318836ee7512.seg 02691156
02691156/points/cbc9d6ae9d22fcc57f3efc94c2d31dc5.pts 02691156/expert_verified/points_label/cbc9d6ae9d22fcc57f3efc94c2d31dc5.seg 02691156
04379243/points/f585560965413925d706ecb3379aa341.pts 04379243/expert_verified/points_label/f585560965413925d706ecb3379aa341.seg 04379243
04379243/points/adee49b8f5251efeaade78cbbf8fad3b.pts 04379243/expert_verified/points_label/adee49b8f5251efeaade78cbbf8fad3b.seg 04379243
03261776/points/ccf84f2cbd3ebeb247ba1bc05b9a0f37.pts 03261776/expert_verified/points_label/ccf84f2cbd3ebeb247ba1bc05b9a0f37.seg 03261776
03001627/points/2343e2c4fa69f33a2ff834514c92e8fd.pts 03001627/expert_verified/points_label/2343e2c4fa69f33a2ff834514c92e8fd.seg 03001627
03636649/points/1d89da4ac1538ada9c949ae6274aa016.pts 03636649/expert_verified/points_label/1d89da4ac1538ada9c949ae6274aa016.seg 03636649
03001627/points/51e14c516e45ec3b18ed59365c9648a7.pts 03001627/expert_verified/points_label/51e14c516e45ec3b18ed59365c9648a7.seg 03001627
03001627/points/1e276a016b664e424d678187b8261d95.pts 03001627/expert_verified/points_label/1e276a016b664e424d678187b8261d95.seg 03001627
03636649/points/4deef34d95367b58c0d95250e682f6ee.pts 03636649/expert_verified/points_label/4deef34d95367b58c0d95250e682f6ee.seg 03636649
03001627/points/5d3eff6a1b9a119da011ccf7cbabf68e.pts 03001627/expert_verified/points_label/5d3eff6a1b9a119da011ccf7cbabf68e.seg 03001627
04379243/points/9afaf5ab87a889f67acae9ce58893de5.pts 04379243/expert_verified/points_label/9afaf5ab87a889f67acae9ce58893de5.seg 04379243
04379243/points/5431993203dfcf797ec12e029bc725db.pts 04379243/expert_verified/points_label/5431993203dfcf797ec12e029bc725db.seg 04379243
03001627/points/6a01eed3a575987211e48e4bcdc4a2a3.pts 03001627/expert_verified/points_label/6a01eed3a575987211e48e4bcdc4a2a3.seg 03001627
02958343/points/a8f2c3adc0671c15c64e95fc6a597455.pts 02958343/expert_verified/points_label/a8f2c3adc0671c15c64e95fc6a597455.seg 02958343
04379243/points/f60960ae4dc8e293c8ce22a41ea48e48.pts 04379243/expert_verified/points_label/f60960ae4dc8e293c8ce22a41ea48e48.seg 04379243
03624134/points/3a4f0118a57093cbf7c4ed45ce654123.pts 03624134/expert_verified/points_label/3a4f0118a57093cbf7c4ed45ce654123.seg 03624134
03636649/points/52783aa89adf06f3250c527721570ba0.pts 03636649/expert_verified/points_label/52783aa89adf06f3250c527721570ba0.seg 03636649
03001627/points/b13a4df698183bf9afb6676a5cd782b6.pts 03001627/expert_verified/points_label/b13a4df698183bf9afb6676a5cd782b6.seg 03001627
03636649/points/26f725bb6578936cd247b9308cd5c441.pts 03636649/expert_verified/points_label/26f725bb6578936cd247b9308cd5c441.seg 03636649
03001627/points/6df1ecffaa0abdbf327289c00b6dc9ca.pts 03001627/expert_verified/points_label/6df1ecffaa0abdbf327289c00b6dc9ca.seg 03001627
04379243/points/3c475d9f0433a7eaad2650d014e970a5.pts 04379243/expert_verified/points_label/3c475d9f0433a7eaad2650d014e970a5.seg 04379243
02958343/points/fee1c13922c07e8711b978ff9450f61b.pts 02958343/expert_verified/points_label/fee1c13922c07e8711b978ff9450f61b.seg 02958343
04379243/points/6bc941dbd290c7f21acdac000802e11c.pts 04379243/expert_verified/points_label/6bc941dbd290c7f21acdac000802e11c.seg 04379243
02958343/points/6333b9c777384ad14362be10a3fc8255.pts 02958343/expert_verified/points_label/6333b9c777384ad14362be10a3fc8255.seg 02958343
03001627/points/9a35f15e924e19db637adadafee6f182.pts 03001627/expert_verified/points_label/9a35f15e924e19db637adadafee6f182.seg 03001627
03001627/points/b0531a0d44fc22144224ee0743294f79.pts 03001627/expert_verified/points_label/b0531a0d44fc22144224ee0743294f79.seg 03001627
03636649/points/913ff6452d0ea43c9d62807daf4a2134.pts 03636649/expert_verified/points_label/913ff6452d0ea43c9d62807daf4a2134.seg 03636649
03467517/points/e45f323ce7ecab8393f0194265a9746c.pts 03467517/expert_verified/points_label/e45f323ce7ecab8393f0194265a9746c.seg 03467517
02691156/points/aa2af754642256c08699933784576e73.pts 02691156/expert_verified/points_label/aa2af754642256c08699933784576e73.seg 02691156
04379243/points/75b308ba45762ad499e8bf807e902261.pts 04379243/expert_verified/points_label/75b308ba45762ad499e8bf807e902261.seg 04379243
03001627/points/3622d983fd6d7b98e3a73d090627e9ba.pts 03001627/expert_verified/points_label/3622d983fd6d7b98e3a73d090627e9ba.seg 03001627
04225987/points/db4c8bf323465e4c537d393009a79347.pts 04225987/expert_verified/points_label/db4c8bf323465e4c537d393009a79347.seg 04225987
04379243/points/132bfde1fabe9ab771a782a4379556c7.pts 04379243/expert_verified/points_label/132bfde1fabe9ab771a782a4379556c7.seg 04379243
03001627/points/3dc8243b17bc790620768660cf080d12.pts 03001627/expert_verified/points_label/3dc8243b17bc790620768660cf080d12.seg 03001627
04379243/points/ccb96ea5f047c97f278d386bfa54545.pts 04379243/expert_verified/points_label/ccb96ea5f047c97f278d386bfa54545.seg 04379243
04379243/points/14ae5631e7dfa10430bbd4cddd04c77b.pts 04379243/expert_verified/points_label/14ae5631e7dfa10430bbd4cddd04c77b.seg 04379243
04379243/points/78a81cbd2a5720d93a938fdd57fac3b4.pts 04379243/expert_verified/points_label/78a81cbd2a5720d93a938fdd57fac3b4.seg 04379243
04379243/points/307bdd2a06137694a10ff7fd5e43a633.pts 04379243/expert_verified/points_label/307bdd2a06137694a10ff7fd5e43a633.seg 04379243
03001627/points/f3573756e64259f2b29d280b4e59c527.pts 03001627/expert_verified/points_label/f3573756e64259f2b29d280b4e59c527.seg 03001627
04379243/points/1815c6431b06dfb4f008d8a3590fb522.pts 04379243/expert_verified/points_label/1815c6431b06dfb4f008d8a3590fb522.seg 04379243
04379243/points/7fda06ada2d897baadab4c26397edfab.pts 04379243/expert_verified/points_label/7fda06ada2d897baadab4c26397edfab.seg 04379243
04379243/points/86b48365b2bd587e61830bc1b4d6c5ea.pts 04379243/expert_verified/points_label/86b48365b2bd587e61830bc1b4d6c5ea.seg 04379243
03948459/points/6aae44dd39fb9476f059c10da31213ea.pts 03948459/expert_verified/points_label/6aae44dd39fb9476f059c10da31213ea.seg 03948459
04379243/points/424c77a1f39ac41620dd2dd4d7d7656c.pts 04379243/expert_verified/points_label/424c77a1f39ac41620dd2dd4d7d7656c.seg 04379243
03001627/points/8778c23fd21bdebf8a80d99ff4e76c20.pts 03001627/expert_verified/points_label/8778c23fd21bdebf8a80d99ff4e76c20.seg 03001627
03001627/points/257deb231ce652169f2349486c570dd4.pts 03001627/expert_verified/points_label/257deb231ce652169f2349486c570dd4.seg 03001627
03642806/points/e5559cd005d5c4942a7b0c74c5f22fc4.pts 03642806/expert_verified/points_label/e5559cd005d5c4942a7b0c74c5f22fc4.seg 03642806
03001627/points/986e49bd8314d7424addf6a5f8726274.pts 03001627/expert_verified/points_label/986e49bd8314d7424addf6a5f8726274.seg 03001627
04379243/points/b3fc5247186936f1dcfcef693e7ec696.pts 04379243/expert_verified/points_label/b3fc5247186936f1dcfcef693e7ec696.seg 04379243
02691156/points/da9d111e1175d318bbf3143b1cb6076a.pts 02691156/expert_verified/points_label/da9d111e1175d318bbf3143b1cb6076a.seg 02691156
04379243/points/54b26954e478b1a34ea8d5f5f27d7ce3.pts 04379243/expert_verified/points_label/54b26954e478b1a34ea8d5f5f27d7ce3.seg 04379243
03001627/points/2d44744a7ea0bf724b3c42e318f3affc.pts 03001627/expert_verified/points_label/2d44744a7ea0bf724b3c42e318f3affc.seg 03001627
04379243/points/9dd63148e5b0a4f79eaa55bb236fb6e1.pts 04379243/expert_verified/points_label/9dd63148e5b0a4f79eaa55bb236fb6e1.seg 04379243
04379243/points/6ab7ebf9b94176456f1e07a56c129dfc.pts 04379243/expert_verified/points_label/6ab7ebf9b94176456f1e07a56c129dfc.seg 04379243
03001627/points/6aaa9bd6e835eb0f9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/6aaa9bd6e835eb0f9b9f2eb77f5e247e.seg 03001627
03636649/points/34020466b4342812218c9f1216abefd.pts 03636649/expert_verified/points_label/34020466b4342812218c9f1216abefd.seg 03636649
03001627/points/df7735e2bce09a511f98c0761af40e04.pts 03001627/expert_verified/points_label/df7735e2bce09a511f98c0761af40e04.seg 03001627
03636649/points/1d963d5c54613202b0aa15078ea6f391.pts 03636649/expert_verified/points_label/1d963d5c54613202b0aa15078ea6f391.seg 03636649
03636649/points/5a9e0dd068e2436bd7ebac63aa51083.pts 03636649/expert_verified/points_label/5a9e0dd068e2436bd7ebac63aa51083.seg 03636649
03001627/points/b1f50d8d41a8c53b6197fd390b16d14d.pts 03001627/expert_verified/points_label/b1f50d8d41a8c53b6197fd390b16d14d.seg 03001627
03001627/points/285931af369b12c2ccd42a2d6eea63ed.pts 03001627/expert_verified/points_label/285931af369b12c2ccd42a2d6eea63ed.seg 03001627
03636649/points/69429d8ffb5009a82060e7309fc3fc6.pts 03636649/expert_verified/points_label/69429d8ffb5009a82060e7309fc3fc6.seg 03636649
04379243/points/63b53646b3562677d395837145ded71.pts 04379243/expert_verified/points_label/63b53646b3562677d395837145ded71.seg 04379243
03001627/points/ee5ee3f6759aabacf2f43e6f841bd32b.pts 03001627/expert_verified/points_label/ee5ee3f6759aabacf2f43e6f841bd32b.seg 03001627
02691156/points/bdfbf1c555dacd9d325212819caa597d.pts 02691156/expert_verified/points_label/bdfbf1c555dacd9d325212819caa597d.seg 02691156
04379243/points/9f321f05a7808719ab610b0c94236463.pts 04379243/expert_verified/points_label/9f321f05a7808719ab610b0c94236463.seg 04379243
03624134/points/fb1f385d487d13d7aa0079d6fb0f853c.pts 03624134/expert_verified/points_label/fb1f385d487d13d7aa0079d6fb0f853c.seg 03624134
04379243/points/109738784a0a6129a02c88fe01f2b9c1.pts 04379243/expert_verified/points_label/109738784a0a6129a02c88fe01f2b9c1.seg 04379243
03467517/points/65e3bdc247b3ce3d4de904d1abbce016.pts 03467517/expert_verified/points_label/65e3bdc247b3ce3d4de904d1abbce016.seg 03467517
02691156/points/94ce3a5ad2576e73a5cac89017eae8d1.pts 02691156/expert_verified/points_label/94ce3a5ad2576e73a5cac89017eae8d1.seg 02691156
03001627/points/80fab0c55a60abb7dafb0be26f6b45d5.pts 03001627/expert_verified/points_label/80fab0c55a60abb7dafb0be26f6b45d5.seg 03001627
04379243/points/e6ee101d3cb13bdd16a2b5862518c93.pts 04379243/expert_verified/points_label/e6ee101d3cb13bdd16a2b5862518c93.seg 04379243
04379243/points/6f2ffe8c014a6a458af30108ea9ccb6c.pts 04379243/expert_verified/points_label/6f2ffe8c014a6a458af30108ea9ccb6c.seg 04379243
02958343/points/504793ed2da6cf7eba3e2415e22cd45c.pts 02958343/expert_verified/points_label/504793ed2da6cf7eba3e2415e22cd45c.seg 02958343
03467517/points/9e26dcbac33f056c343b0b12983b9982.pts 03467517/expert_verified/points_label/9e26dcbac33f056c343b0b12983b9982.seg 03467517
03467517/points/a92cd0b5d559075daa9518d76daaca23.pts 03467517/expert_verified/points_label/a92cd0b5d559075daa9518d76daaca23.seg 03467517
03636649/points/b6989c99bba1226539b3360f500ac52a.pts 03636649/expert_verified/points_label/b6989c99bba1226539b3360f500ac52a.seg 03636649
03624134/points/cc38f97557029b2a2b5fd8277662be97.pts 03624134/expert_verified/points_label/cc38f97557029b2a2b5fd8277662be97.seg 03624134
03790512/points/41cc9674e700c3fdb37378f3c85478b4.pts 03790512/expert_verified/points_label/41cc9674e700c3fdb37378f3c85478b4.seg 03790512
03001627/points/56b171b1f1521d27291d12adef12641b.pts 03001627/expert_verified/points_label/56b171b1f1521d27291d12adef12641b.seg 03001627
03636649/points/ddc2d39dac6e84506c5b8009db95f66f.pts 03636649/expert_verified/points_label/ddc2d39dac6e84506c5b8009db95f66f.seg 03636649
02691156/points/edc185566c1df89c35fc197bbabcd5bd.pts 02691156/expert_verified/points_label/edc185566c1df89c35fc197bbabcd5bd.seg 02691156
04379243/points/fb5e8a6361262c26acf7920879052e93.pts 04379243/expert_verified/points_label/fb5e8a6361262c26acf7920879052e93.seg 04379243
04379243/points/8862cddf90fddb3119fb4103277a6b93.pts 04379243/expert_verified/points_label/8862cddf90fddb3119fb4103277a6b93.seg 04379243
02691156/points/d5a94c9f09d238c4c3a35cee92bb95b.pts 02691156/expert_verified/points_label/d5a94c9f09d238c4c3a35cee92bb95b.seg 02691156
03636649/points/1682d4404196cf127588e2ca59b15f8.pts 03636649/expert_verified/points_label/1682d4404196cf127588e2ca59b15f8.seg 03636649
04379243/points/2f33abdfe147813e44949d7685cb63ea.pts 04379243/expert_verified/points_label/2f33abdfe147813e44949d7685cb63ea.seg 04379243
03001627/points/e158f7ba6828db5c654ea6737b0d3597.pts 03001627/expert_verified/points_label/e158f7ba6828db5c654ea6737b0d3597.seg 03001627
04379243/points/564474f25a4400c5dc20930e6fc85682.pts 04379243/expert_verified/points_label/564474f25a4400c5dc20930e6fc85682.seg 04379243
04379243/points/eb379b2b95e76502e258d1c3e7302e7b.pts 04379243/expert_verified/points_label/eb379b2b95e76502e258d1c3e7302e7b.seg 04379243
03001627/points/3a1b54325b3565e72ca4b544d68c52.pts 03001627/expert_verified/points_label/3a1b54325b3565e72ca4b544d68c52.seg 03001627
04225987/points/393ca71bd734f3071082f2ea630bf69e.pts 04225987/expert_verified/points_label/393ca71bd734f3071082f2ea630bf69e.seg 04225987
03636649/points/bd1cbcb990375022b45fed2806c331ab.pts 03636649/expert_verified/points_label/bd1cbcb990375022b45fed2806c331ab.seg 03636649
03001627/points/6a9dce6566cd61652b339ec555ba3bfc.pts 03001627/expert_verified/points_label/6a9dce6566cd61652b339ec555ba3bfc.seg 03001627
02691156/points/94379090010cd6bb874c9ce092a813ef.pts 02691156/expert_verified/points_label/94379090010cd6bb874c9ce092a813ef.seg 02691156
02773838/points/d3bd250ca3cb8e29976855a35549333.pts 02773838/expert_verified/points_label/d3bd250ca3cb8e29976855a35549333.seg 02773838
03001627/points/36cb782fbc164ac312591a3ac05fadf1.pts 03001627/expert_verified/points_label/36cb782fbc164ac312591a3ac05fadf1.seg 03001627
03642806/points/2211a40cc77a085362c091e763f81d3.pts 03642806/expert_verified/points_label/2211a40cc77a085362c091e763f81d3.seg 
03642806\n04379243/points/5cbd726c3ffd8fc49b458816be7a3962.pts 04379243/expert_verified/points_label/5cbd726c3ffd8fc49b458816be7a3962.seg 04379243\n02691156/points/72aee7d0e998a68aca8607f540cc62ba.pts 02691156/expert_verified/points_label/72aee7d0e998a68aca8607f540cc62ba.seg 02691156\n04379243/points/1c3310f4c05ce1f6a192483aa282f8e5.pts 04379243/expert_verified/points_label/1c3310f4c05ce1f6a192483aa282f8e5.seg 04379243\n04379243/points/4ced745f960f7439b91767277279ac70.pts 04379243/expert_verified/points_label/4ced745f960f7439b91767277279ac70.seg 04379243\n03642806/points/8d70fb6adc63e21eb7e0383b9609fa5.pts 03642806/expert_verified/points_label/8d70fb6adc63e21eb7e0383b9609fa5.seg 03642806\n03001627/points/2bd6800d64c01d677721fafb59ea099.pts 03001627/expert_verified/points_label/2bd6800d64c01d677721fafb59ea099.seg 03001627\n03467517/points/1abe78447898821e93f0194265a9746c.pts 03467517/expert_verified/points_label/1abe78447898821e93f0194265a9746c.seg 03467517\n02691156/points/9bf3c126d5918c41f5c7319b71bdce6e.pts 02691156/expert_verified/points_label/9bf3c126d5918c41f5c7319b71bdce6e.seg 02691156\n03642806/points/1312ea502b4e9b51701c1f58e22b85e8.pts 03642806/expert_verified/points_label/1312ea502b4e9b51701c1f58e22b85e8.seg 03642806\n04379243/points/a9cc8112fb8c4ed5dfd21203bf8b4b46.pts 04379243/expert_verified/points_label/a9cc8112fb8c4ed5dfd21203bf8b4b46.seg 04379243\n03642806/points/62b25a5e3119b8409023147b38c03c9f.pts 03642806/expert_verified/points_label/62b25a5e3119b8409023147b38c03c9f.seg 03642806\n04379243/points/a4fcd8afe8b6de585beaf00da5b709c2.pts 04379243/expert_verified/points_label/a4fcd8afe8b6de585beaf00da5b709c2.seg 04379243\n03636649/points/907fd296708ae71dd5fab5deb286066.pts 03636649/expert_verified/points_label/907fd296708ae71dd5fab5deb286066.seg 03636649\n04379243/points/c5ae96124c15c734e6c5cd45aa112726.pts 04379243/expert_verified/points_label/c5ae96124c15c734e6c5cd45aa112726.seg 04379243\n03642806/points/ef6d43add46d0cae4e07b09c086cc5c4.pts 
03642806/expert_verified/points_label/ef6d43add46d0cae4e07b09c086cc5c4.seg 03642806\n04379243/points/8d07df2bf706cda58c5591114064d173.pts 04379243/expert_verified/points_label/8d07df2bf706cda58c5591114064d173.seg 04379243\n02958343/points/5316fab78a6732f0428df271ebc70bc0.pts 02958343/expert_verified/points_label/5316fab78a6732f0428df271ebc70bc0.seg 02958343\n03467517/points/7946e354e342f560c5a468097fc791e4.pts 03467517/expert_verified/points_label/7946e354e342f560c5a468097fc791e4.seg 03467517\n03467517/points/d3684d071dcb6bffd3193ed047bef161.pts 03467517/expert_verified/points_label/d3684d071dcb6bffd3193ed047bef161.seg 03467517\n04379243/points/33b081062b2195e71771ee930e861b13.pts 04379243/expert_verified/points_label/33b081062b2195e71771ee930e861b13.seg 04379243\n02958343/points/511962626501e4abf500cc506a763c18.pts 02958343/expert_verified/points_label/511962626501e4abf500cc506a763c18.seg 02958343\n03797390/points/c82b9f1b98f044fc15cf6e5ad80f2da.pts 03797390/expert_verified/points_label/c82b9f1b98f044fc15cf6e5ad80f2da.seg 03797390\n04379243/points/49f625856c796254d249abd69334079c.pts 04379243/expert_verified/points_label/49f625856c796254d249abd69334079c.seg 04379243\n03001627/points/ca4900c42b8016ef8397cd720acaa508.pts 03001627/expert_verified/points_label/ca4900c42b8016ef8397cd720acaa508.seg 03001627\n03636649/points/31a15957bd4f32f87eedf2c7d21f7cfa.pts 03636649/expert_verified/points_label/31a15957bd4f32f87eedf2c7d21f7cfa.seg 03636649\n03797390/points/928a383f79698c3fb6d9bc28c8d8a2c4.pts 03797390/expert_verified/points_label/928a383f79698c3fb6d9bc28c8d8a2c4.seg 03797390\n04379243/points/17e5a64889ca085fa5526f91aecc0c37.pts 04379243/expert_verified/points_label/17e5a64889ca085fa5526f91aecc0c37.seg 04379243\n02958343/points/cbe2dc469c47bb80425b2c354eccabaf.pts 02958343/expert_verified/points_label/cbe2dc469c47bb80425b2c354eccabaf.seg 02958343\n03001627/points/19c8189116dd7cd3e95c611687989498.pts 
03001627/expert_verified/points_label/19c8189116dd7cd3e95c611687989498.seg 03001627\n03636649/points/7f518fe982aae1b5940c8a2639c8747.pts 03636649/expert_verified/points_label/7f518fe982aae1b5940c8a2639c8747.seg 03636649\n03636649/points/7b1fef0071908d4bd93768e7b9b1eabf.pts 03636649/expert_verified/points_label/7b1fef0071908d4bd93768e7b9b1eabf.seg 03636649\n03001627/points/475e2c8f7a2c1bbd9acf9a86c283d1a2.pts 03001627/expert_verified/points_label/475e2c8f7a2c1bbd9acf9a86c283d1a2.seg 03001627\n03467517/points/5c805aca7aa8bdd3ac61a2f8346a8f.pts 03467517/expert_verified/points_label/5c805aca7aa8bdd3ac61a2f8346a8f.seg 03467517\n03790512/points/8032295bd3851d75468bac13e007a6e9.pts 03790512/expert_verified/points_label/8032295bd3851d75468bac13e007a6e9.seg 03790512\n02691156/points/3e0561d70c7fd4f51c6e4e20f2b76086.pts 02691156/expert_verified/points_label/3e0561d70c7fd4f51c6e4e20f2b76086.seg 02691156\n02691156/points/e5610bbacaf098508b96ae1a0a8b84ec.pts 02691156/expert_verified/points_label/e5610bbacaf098508b96ae1a0a8b84ec.seg 02691156\n03467517/points/97e8ee1b6df404bd57700c05b1862d8.pts 03467517/expert_verified/points_label/97e8ee1b6df404bd57700c05b1862d8.seg 03467517\n03636649/points/981b55897cee64403c8d0fdfb1cc2535.pts 03636649/expert_verified/points_label/981b55897cee64403c8d0fdfb1cc2535.seg 03636649\n04379243/points/204d9ecc196990ebe8479ad2eabcbab4.pts 04379243/expert_verified/points_label/204d9ecc196990ebe8479ad2eabcbab4.seg 04379243\n04379243/points/9d039675f4d51869f3edd695842c6d58.pts 04379243/expert_verified/points_label/9d039675f4d51869f3edd695842c6d58.seg 04379243\n03467517/points/cb5b2e3f499e4fdecc571cd3cf8f17a1.pts 03467517/expert_verified/points_label/cb5b2e3f499e4fdecc571cd3cf8f17a1.seg 03467517\n04379243/points/5243b5491a4f8a16a2b5862518c93.pts 04379243/expert_verified/points_label/5243b5491a4f8a16a2b5862518c93.seg 04379243\n04379243/points/efbf0d75648b7c7d5792b99b8245d225.pts 04379243/expert_verified/points_label/efbf0d75648b7c7d5792b99b8245d225.seg 
04379243\n03001627/points/c8265e04c94bcb5a1346e336f65f96f6.pts 03001627/expert_verified/points_label/c8265e04c94bcb5a1346e336f65f96f6.seg 03001627\n02958343/points/94cfcfb74e246f938acb0ff76f4aec7d.pts 02958343/expert_verified/points_label/94cfcfb74e246f938acb0ff76f4aec7d.seg 02958343\n03467517/points/a0b6f040538d26e3ac61a2f8346a8f.pts 03467517/expert_verified/points_label/a0b6f040538d26e3ac61a2f8346a8f.seg 03467517\n03001627/points/70f1f85d47c970bb78dd615a59de5f05.pts 03001627/expert_verified/points_label/70f1f85d47c970bb78dd615a59de5f05.seg 03001627\n04379243/points/f4976e80b8533bcf85518f8659f21d56.pts 04379243/expert_verified/points_label/f4976e80b8533bcf85518f8659f21d56.seg 04379243\n03636649/points/9fdaafde365beafc37f7ce56c66316ea.pts 03636649/expert_verified/points_label/9fdaafde365beafc37f7ce56c66316ea.seg 03636649\n03467517/points/22033c6d7e5a90f193f0194265a9746c.pts 03467517/expert_verified/points_label/22033c6d7e5a90f193f0194265a9746c.seg 03467517\n02691156/points/c1b5dc92221bcdad5fc84bf2b9ef981.pts 02691156/expert_verified/points_label/c1b5dc92221bcdad5fc84bf2b9ef981.seg 02691156\n04379243/points/79d0985603f7ff3be6c5cd45aa112726.pts 04379243/expert_verified/points_label/79d0985603f7ff3be6c5cd45aa112726.seg 04379243\n03467517/points/5d6c1516b83dec8663e148e250c0340d.pts 03467517/expert_verified/points_label/5d6c1516b83dec8663e148e250c0340d.seg 03467517\n04379243/points/79c5df613523a462d42b9650f19dd425.pts 04379243/expert_verified/points_label/79c5df613523a462d42b9650f19dd425.seg 04379243\n03001627/points/f19e8da9d8f369c531e63f1270e2b445.pts 03001627/expert_verified/points_label/f19e8da9d8f369c531e63f1270e2b445.seg 03001627\n03001627/points/9a711bb7070ae88de948e3d64826c640.pts 03001627/expert_verified/points_label/9a711bb7070ae88de948e3d64826c640.seg 03001627\n03467517/points/2adbf6c3f8f2d9ca7fe36b1f0a632ed8.pts 03467517/expert_verified/points_label/2adbf6c3f8f2d9ca7fe36b1f0a632ed8.seg 03467517\n03001627/points/837ba605a4ab4a4f19fb4103277a6b93.pts 
03001627/expert_verified/points_label/837ba605a4ab4a4f19fb4103277a6b93.seg 03001627\n03001627/points/807f08096308af5e28c0cecb7de2397a.pts 03001627/expert_verified/points_label/807f08096308af5e28c0cecb7de2397a.seg 03001627\n03467517/points/275c4f98ef07f2b393f0194265a9746c.pts 03467517/expert_verified/points_label/275c4f98ef07f2b393f0194265a9746c.seg 03467517\n04379243/points/57afaabf994feb305512673aa47c7e3d.pts 04379243/expert_verified/points_label/57afaabf994feb305512673aa47c7e3d.seg 04379243\n03001627/points/d9156f5552178de2713decb1a0563b12.pts 03001627/expert_verified/points_label/d9156f5552178de2713decb1a0563b12.seg 03001627\n03948459/points/fe62130ce6fcd9b77754fed890b42399.pts 03948459/expert_verified/points_label/fe62130ce6fcd9b77754fed890b42399.seg 03948459\n03261776/points/1757fe64e76a9630fc176230c2f2d294.pts 03261776/expert_verified/points_label/1757fe64e76a9630fc176230c2f2d294.seg 03261776\n03790512/points/3fd1bff496b369f71765540024eb9fef.pts 03790512/expert_verified/points_label/3fd1bff496b369f71765540024eb9fef.seg 03790512\n02958343/points/a6d494af391a97686436916a86a90ed7.pts 02958343/expert_verified/points_label/a6d494af391a97686436916a86a90ed7.seg 02958343\n04099429/points/59389aac7b1ea9b09b28f5f9cf8893b5.pts 04099429/expert_verified/points_label/59389aac7b1ea9b09b28f5f9cf8893b5.seg 04099429\n04379243/points/c399ed276ed35cb9a6ce08f0d82ba063.pts 04379243/expert_verified/points_label/c399ed276ed35cb9a6ce08f0d82ba063.seg 04379243\n03624134/points/e4f610f36ba3c6f69246ea0301684d80.pts 03624134/expert_verified/points_label/e4f610f36ba3c6f69246ea0301684d80.seg 03624134\n03636649/points/90b0f9a1ac2e54ecbc7f58784fda27b5.pts 03636649/expert_verified/points_label/90b0f9a1ac2e54ecbc7f58784fda27b5.seg 03636649\n03636649/points/e5e9ff118631c2a3ee088de33038f12a.pts 03636649/expert_verified/points_label/e5e9ff118631c2a3ee088de33038f12a.seg 03636649\n04099429/points/4936716925b1cd6428eba1f0b7744e9.pts 
04099429/expert_verified/points_label/4936716925b1cd6428eba1f0b7744e9.seg 04099429\n04379243/points/6e446bb5adf14b0b6121178eafd002fd.pts 04379243/expert_verified/points_label/6e446bb5adf14b0b6121178eafd002fd.seg 04379243\n03001627/points/7ea38c936513f5df3772b104757a4809.pts 03001627/expert_verified/points_label/7ea38c936513f5df3772b104757a4809.seg 03001627\n04379243/points/23d68e01b77089ae76ad4f5e7c7020eb.pts 04379243/expert_verified/points_label/23d68e01b77089ae76ad4f5e7c7020eb.seg 04379243\n03636649/points/4d6bced89943df73b4edf02c99e16daa.pts 03636649/expert_verified/points_label/4d6bced89943df73b4edf02c99e16daa.seg 03636649\n04379243/points/3459eec8eb56fa312bac236fe109e385.pts 04379243/expert_verified/points_label/3459eec8eb56fa312bac236fe109e385.seg 04379243\n03261776/points/1a5e2a7cddc8e46aa681aea7976a4565.pts 03261776/expert_verified/points_label/1a5e2a7cddc8e46aa681aea7976a4565.seg 03261776\n03001627/points/ed0d65c68a1fa5c485e2f8b1d3a373fe.pts 03001627/expert_verified/points_label/ed0d65c68a1fa5c485e2f8b1d3a373fe.seg 03001627\n03636649/points/7b005e23eae2768eb08c032bedc99529.pts 03636649/expert_verified/points_label/7b005e23eae2768eb08c032bedc99529.seg 03636649\n04379243/points/3f2e9c14ab1d26a0ebead06af665220.pts 04379243/expert_verified/points_label/3f2e9c14ab1d26a0ebead06af665220.seg 04379243\n03001627/points/383ab6330284af461fc4ae93e00c18e5.pts 03001627/expert_verified/points_label/383ab6330284af461fc4ae93e00c18e5.seg 03001627\n02691156/points/fc7387d630c84bb9c863ab010b80d9ed.pts 02691156/expert_verified/points_label/fc7387d630c84bb9c863ab010b80d9ed.seg 02691156\n04225987/points/344e9402d06bd94031145076011658c5.pts 04225987/expert_verified/points_label/344e9402d06bd94031145076011658c5.seg 04225987\n04379243/points/745a2b060d0f692bf4b6538438a0b930.pts 04379243/expert_verified/points_label/745a2b060d0f692bf4b6538438a0b930.seg 04379243\n04379243/points/928ea87878a7bbe26cf876b69450cd4e.pts 
04379243/expert_verified/points_label/928ea87878a7bbe26cf876b69450cd4e.seg 04379243\n03001627/points/5fe56a4a9d5508c3b2373df00b89e5d.pts 03001627/expert_verified/points_label/5fe56a4a9d5508c3b2373df00b89e5d.seg 03001627\n02691156/points/6a75658fb8242b9c590874dcd9dc8481.pts 02691156/expert_verified/points_label/6a75658fb8242b9c590874dcd9dc8481.seg 02691156\n03948459/points/f377665c5b17d0ce61b636d79e46a7e9.pts 03948459/expert_verified/points_label/f377665c5b17d0ce61b636d79e46a7e9.seg 03948459\n03642806/points/ab21f75b97d6b1054f22ce0a3592d5.pts 03642806/expert_verified/points_label/ab21f75b97d6b1054f22ce0a3592d5.seg 03642806\n04379243/points/a2baf45f001e118e2c79f7f31759bfa7.pts 04379243/expert_verified/points_label/a2baf45f001e118e2c79f7f31759bfa7.seg 04379243\n02691156/points/19ff8fce1658f864ca8607f540cc62ba.pts 02691156/expert_verified/points_label/19ff8fce1658f864ca8607f540cc62ba.seg 02691156\n04379243/points/8bb3a7e1cb24fe6febad4f49b26ec52.pts 04379243/expert_verified/points_label/8bb3a7e1cb24fe6febad4f49b26ec52.seg 04379243\n04379243/points/dbc5a4d1dc3a6e8271a782a4379556c7.pts 04379243/expert_verified/points_label/dbc5a4d1dc3a6e8271a782a4379556c7.seg 04379243\n03001627/points/e6c11fed9469141ace8fba09dd640742.pts 03001627/expert_verified/points_label/e6c11fed9469141ace8fba09dd640742.seg 03001627\n03797390/points/f99e19b8c4a729353deb88581ea8417a.pts 03797390/expert_verified/points_label/f99e19b8c4a729353deb88581ea8417a.seg 03797390\n03001627/points/d454f99b99248bf337c99625b0c170be.pts 03001627/expert_verified/points_label/d454f99b99248bf337c99625b0c170be.seg 03001627\n03636649/points/7c23362b39f318cbb18d6f615cb18bdd.pts 03636649/expert_verified/points_label/7c23362b39f318cbb18d6f615cb18bdd.seg 03636649\n03001627/points/d8e2e2a923b372731cf97e154cc62f43.pts 03001627/expert_verified/points_label/d8e2e2a923b372731cf97e154cc62f43.seg 03001627\n03642806/points/621882a4afd2a126369873c1090720a1.pts 03642806/expert_verified/points_label/621882a4afd2a126369873c1090720a1.seg 
03642806\n04379243/points/d5d1e750bb492dd5391e4d6c585a697a.pts 04379243/expert_verified/points_label/d5d1e750bb492dd5391e4d6c585a697a.seg 04379243\n03467517/points/42f3172b8770d2fd2200c35bfa7099ee.pts 03467517/expert_verified/points_label/42f3172b8770d2fd2200c35bfa7099ee.seg 03467517\n03624134/points/a2288d5f3a44233bc40c6b891c4913bd.pts 03624134/expert_verified/points_label/a2288d5f3a44233bc40c6b891c4913bd.seg 03624134\n02691156/points/90612205109d7458e84aab2e1d454e3c.pts 02691156/expert_verified/points_label/90612205109d7458e84aab2e1d454e3c.seg 02691156\n03001627/points/2c03bcb2a133ce28bb6caad47eee6580.pts 03001627/expert_verified/points_label/2c03bcb2a133ce28bb6caad47eee6580.seg 03001627\n03001627/points/f23d3a85baabd7ae32d9baba75737e72.pts 03001627/expert_verified/points_label/f23d3a85baabd7ae32d9baba75737e72.seg 03001627\n04379243/points/90be5de0faef91ef3f7e27638e63d848.pts 04379243/expert_verified/points_label/90be5de0faef91ef3f7e27638e63d848.seg 04379243\n02691156/points/d5f01e2aa54bbf28ca8607f540cc62ba.pts 02691156/expert_verified/points_label/d5f01e2aa54bbf28ca8607f540cc62ba.seg 02691156\n02691156/points/4f0bf26c62bb7c8b7e1c97634acf0214.pts 02691156/expert_verified/points_label/4f0bf26c62bb7c8b7e1c97634acf0214.seg 02691156\n03001627/points/4246c8c293c56ea34b3c42e318f3affc.pts 03001627/expert_verified/points_label/4246c8c293c56ea34b3c42e318f3affc.seg 03001627\n04379243/points/9b42cb91ccead6d42f6d10c5d1d56320.pts 04379243/expert_verified/points_label/9b42cb91ccead6d42f6d10c5d1d56320.seg 04379243\n03001627/points/c67b7b62e529295dfc30525e763ef5eb.pts 03001627/expert_verified/points_label/c67b7b62e529295dfc30525e763ef5eb.seg 03001627\n04379243/points/394c63a5658ef759b515d1675be6b5d3.pts 04379243/expert_verified/points_label/394c63a5658ef759b515d1675be6b5d3.seg 04379243\n03636649/points/13ba3fbe8fbc53f3ef3a2c64cef919d0.pts 03636649/expert_verified/points_label/13ba3fbe8fbc53f3ef3a2c64cef919d0.seg 03636649\n04379243/points/cb860d60db8f3d18febad4f49b26ec52.pts 
04379243/expert_verified/points_label/cb860d60db8f3d18febad4f49b26ec52.seg 04379243\n04379243/points/657aad273d665f5dd9823f45c4411583.pts 04379243/expert_verified/points_label/657aad273d665f5dd9823f45c4411583.seg 04379243\n03001627/points/64fcd1ba0df5d54d79b3e1be3524f72f.pts 03001627/expert_verified/points_label/64fcd1ba0df5d54d79b3e1be3524f72f.seg 03001627\n03642806/points/8489cb783d249651b674654e7bbe623d.pts 03642806/expert_verified/points_label/8489cb783d249651b674654e7bbe623d.seg 03642806\n03467517/points/3824a2336972d144a24eeca91f583600.pts 03467517/expert_verified/points_label/3824a2336972d144a24eeca91f583600.seg 03467517\n03797390/points/99eaa69cf6fe8811dec712af445786fe.pts 03797390/expert_verified/points_label/99eaa69cf6fe8811dec712af445786fe.seg 03797390\n03001627/points/e31d71ed32273fede42ac999db581f5e.pts 03001627/expert_verified/points_label/e31d71ed32273fede42ac999db581f5e.seg 03001627\n03001627/points/9a42cff883cbd358106f706dac6c58f0.pts 03001627/expert_verified/points_label/9a42cff883cbd358106f706dac6c58f0.seg 03001627\n04379243/points/b515a107aa3a3fd0e3dff0d5ebb43915.pts 04379243/expert_verified/points_label/b515a107aa3a3fd0e3dff0d5ebb43915.seg 04379243\n03001627/points/bd6a8b133fa4d269491d6cee03fef2a9.pts 03001627/expert_verified/points_label/bd6a8b133fa4d269491d6cee03fef2a9.seg 03001627\n03001627/points/51c8f249e778e84a5bae8923b29985ad.pts 03001627/expert_verified/points_label/51c8f249e778e84a5bae8923b29985ad.seg 03001627\n02691156/points/f12eefbbefabe566ca8607f540cc62ba.pts 02691156/expert_verified/points_label/f12eefbbefabe566ca8607f540cc62ba.seg 02691156\n02691156/points/ad6e93a1db3e1da5977e4bb19a62128e.pts 02691156/expert_verified/points_label/ad6e93a1db3e1da5977e4bb19a62128e.seg 02691156\n03001627/points/efa83c67ce47bfca304edcf7c4314468.pts 03001627/expert_verified/points_label/efa83c67ce47bfca304edcf7c4314468.seg 03001627\n03624134/points/d6e9e4e07bafca0fa37f3fc191551700.pts 
03624134/expert_verified/points_label/d6e9e4e07bafca0fa37f3fc191551700.seg 03624134\n03642806/points/e083105e9c2a28bb0c3a03d0a1f182f.pts 03642806/expert_verified/points_label/e083105e9c2a28bb0c3a03d0a1f182f.seg 03642806\n03001627/points/d2992fd5e6715bad3bbf93f83cbaf271.pts 03001627/expert_verified/points_label/d2992fd5e6715bad3bbf93f83cbaf271.seg 03001627\n04379243/points/4a27cb9384782ce33e95c55cb020b7e6.pts 04379243/expert_verified/points_label/4a27cb9384782ce33e95c55cb020b7e6.seg 04379243\n04379243/points/cf046edeff204b81cdf7280ff8af6720.pts 04379243/expert_verified/points_label/cf046edeff204b81cdf7280ff8af6720.seg 04379243\n03001627/points/6534f04a1c349a3c8c6540fe6bc16d6f.pts 03001627/expert_verified/points_label/6534f04a1c349a3c8c6540fe6bc16d6f.seg 03001627\n03636649/points/1917888a2b6901091735ea0e092a805a.pts 03636649/expert_verified/points_label/1917888a2b6901091735ea0e092a805a.seg 03636649\n03636649/points/b37e07ac31fa4f311735ea0e092a805a.pts 03636649/expert_verified/points_label/b37e07ac31fa4f311735ea0e092a805a.seg 03636649\n03636649/points/2f6f1fe66631572c6c5b8009db95f66f.pts 03636649/expert_verified/points_label/2f6f1fe66631572c6c5b8009db95f66f.seg 03636649\n03467517/points/feab270427cee00a24eeca91f583600.pts 03467517/expert_verified/points_label/feab270427cee00a24eeca91f583600.seg 03467517\n02691156/points/e30e25fe047ce1ea10b08ceced9a0113.pts 02691156/expert_verified/points_label/e30e25fe047ce1ea10b08ceced9a0113.seg 02691156\n03636649/points/b2347fe81bd2db6a4b3c42e318f3affc.pts 03636649/expert_verified/points_label/b2347fe81bd2db6a4b3c42e318f3affc.seg 03636649\n03001627/points/bb7755090f984ba85dd1bba5b1310523.pts 03001627/expert_verified/points_label/bb7755090f984ba85dd1bba5b1310523.seg 03001627\n02691156/points/bc7ead8b45952ab8822054a0a020bf4a.pts 02691156/expert_verified/points_label/bc7ead8b45952ab8822054a0a020bf4a.seg 02691156\n02691156/points/5a1d4af1f417d28566cf1b4a8fc3914e.pts 
02691156/expert_verified/points_label/5a1d4af1f417d28566cf1b4a8fc3914e.seg 02691156\n02691156/points/a6cbada42d1a30d0f5c7319b71bdce6e.pts 02691156/expert_verified/points_label/a6cbada42d1a30d0f5c7319b71bdce6e.seg 02691156\n02691156/points/b785b39d10c33b5de9f07d25f575b2d4.pts 02691156/expert_verified/points_label/b785b39d10c33b5de9f07d25f575b2d4.seg 02691156\n03001627/points/2df8d2af1bc4b9972056b4bd5d870b47.pts 03001627/expert_verified/points_label/2df8d2af1bc4b9972056b4bd5d870b47.seg 03001627\n03797390/points/d46b98f63a017578ea456f4bbbc96af9.pts 03797390/expert_verified/points_label/d46b98f63a017578ea456f4bbbc96af9.seg 03797390\n04379243/points/1adf96850963550f19fb4103277a6b93.pts 04379243/expert_verified/points_label/1adf96850963550f19fb4103277a6b93.seg 04379243\n03001627/points/cb7a4324fdfa690e96dd43aa0ec847c9.pts 03001627/expert_verified/points_label/cb7a4324fdfa690e96dd43aa0ec847c9.seg 03001627\n03624134/points/c19088b4c32c0f1d22b38218e60be05.pts 03624134/expert_verified/points_label/c19088b4c32c0f1d22b38218e60be05.seg 03624134\n04379243/points/1acf7b0939f3eea2eafdf94e5032b200.pts 04379243/expert_verified/points_label/1acf7b0939f3eea2eafdf94e5032b200.seg 04379243\n03467517/points/d50d06b159363b1693f0194265a9746c.pts 03467517/expert_verified/points_label/d50d06b159363b1693f0194265a9746c.seg 03467517\n02691156/points/dacb447d7820e7f7ca8607f540cc62ba.pts 02691156/expert_verified/points_label/dacb447d7820e7f7ca8607f540cc62ba.seg 02691156\n04379243/points/c3a9dc47c5bf10aac3bd24f986301745.pts 04379243/expert_verified/points_label/c3a9dc47c5bf10aac3bd24f986301745.seg 04379243\n04379243/points/4791914b3bcaf57efebad4f49b26ec52.pts 04379243/expert_verified/points_label/4791914b3bcaf57efebad4f49b26ec52.seg 04379243\n03001627/points/bf3f14225e8f899db62f9fb4b7f0626.pts 03001627/expert_verified/points_label/bf3f14225e8f899db62f9fb4b7f0626.seg 03001627\n04379243/points/4f5c111a89b3fd27aa29e9f0529e8ef7.pts 
04379243/expert_verified/points_label/4f5c111a89b3fd27aa29e9f0529e8ef7.seg 04379243\n03001627/points/6af8d7bfa508b8d23759750e8db40476.pts 03001627/expert_verified/points_label/6af8d7bfa508b8d23759750e8db40476.seg 03001627\n02691156/points/427030abcc0f11a8947bbeb9022263b8.pts 02691156/expert_verified/points_label/427030abcc0f11a8947bbeb9022263b8.seg 02691156\n03642806/points/367fbaea8743ec1cc98452c8fce6b43.pts 03642806/expert_verified/points_label/367fbaea8743ec1cc98452c8fce6b43.seg 03642806\n04379243/points/419412b927d11c7d8312881285c04cb3.pts 04379243/expert_verified/points_label/419412b927d11c7d8312881285c04cb3.seg 04379243\n03001627/points/56cc047440e7c999a23949c21eddef76.pts 03001627/expert_verified/points_label/56cc047440e7c999a23949c21eddef76.seg 03001627\n03790512/points/fdb6223c286cb653cc9e7530f9d8e186.pts 03790512/expert_verified/points_label/fdb6223c286cb653cc9e7530f9d8e186.seg 03790512\n03636649/points/6b2a590446ad5794b10e111f2d30684d.pts 03636649/expert_verified/points_label/6b2a590446ad5794b10e111f2d30684d.seg 03636649\n03001627/points/a3ce9ba74ab50352e6fe3612af521500.pts 03001627/expert_verified/points_label/a3ce9ba74ab50352e6fe3612af521500.seg 03001627\n02958343/points/9986dd19b2c459152470de2774d6099.pts 02958343/expert_verified/points_label/9986dd19b2c459152470de2774d6099.seg 02958343\n03642806/points/b806daf849a5dba289c212008d2a390e.pts 03642806/expert_verified/points_label/b806daf849a5dba289c212008d2a390e.seg 03642806\n04379243/points/2eb503dde3cc027d86c701087a194026.pts 04379243/expert_verified/points_label/2eb503dde3cc027d86c701087a194026.seg 04379243\n03001627/points/c4a4710012ee39bd19f4b416b31c46e0.pts 03001627/expert_verified/points_label/c4a4710012ee39bd19f4b416b31c46e0.seg 03001627\n02958343/points/bd8654fbca233e41ddb8f37b1865d989.pts 02958343/expert_verified/points_label/bd8654fbca233e41ddb8f37b1865d989.seg 02958343\n03001627/points/6fd485a2345c3dd69233bf560301e53.pts 
03001627/expert_verified/points_label/6fd485a2345c3dd69233bf560301e53.seg 03001627\n02691156/points/aebc4c46b3cb7c3bca8607f540cc62ba.pts 02691156/expert_verified/points_label/aebc4c46b3cb7c3bca8607f540cc62ba.seg 02691156\n03001627/points/9343df9a7ed6cbba1923501fcdd899bb.pts 03001627/expert_verified/points_label/9343df9a7ed6cbba1923501fcdd899bb.seg 03001627\n04379243/points/7fadae39394c5622c3bd24f986301745.pts 04379243/expert_verified/points_label/7fadae39394c5622c3bd24f986301745.seg 04379243\n03001627/points/d619fd50c4d0fb46dea83bbf303af433.pts 03001627/expert_verified/points_label/d619fd50c4d0fb46dea83bbf303af433.seg 03001627\n04379243/points/ef02c88a34b3888a1b1a00a31bfed97b.pts 04379243/expert_verified/points_label/ef02c88a34b3888a1b1a00a31bfed97b.seg 04379243\n03467517/points/71d0016078dea05a94ca7929d4ba6d2d.pts 03467517/expert_verified/points_label/71d0016078dea05a94ca7929d4ba6d2d.seg 03467517\n03001627/points/5623d0ec9efedbc9d4da89766e80607a.pts 03001627/expert_verified/points_label/5623d0ec9efedbc9d4da89766e80607a.seg 03001627\n04379243/points/21486e6d0bd896ad5cca18918d24f6cd.pts 04379243/expert_verified/points_label/21486e6d0bd896ad5cca18918d24f6cd.seg 04379243\n03636649/points/978df83c1cee012729a60d6ab40898d.pts 03636649/expert_verified/points_label/978df83c1cee012729a60d6ab40898d.seg 03636649\n02691156/points/350d12f5290908c7f446f92b52bbd82a.pts 02691156/expert_verified/points_label/350d12f5290908c7f446f92b52bbd82a.seg 02691156\n03636649/points/86d7a728dc35d634f800b597bc1c1eb5.pts 03636649/expert_verified/points_label/86d7a728dc35d634f800b597bc1c1eb5.seg 03636649\n03001627/points/3b4292989394ba62f51f77a6d7299806.pts 03001627/expert_verified/points_label/3b4292989394ba62f51f77a6d7299806.seg 03001627\n03001627/points/f5f18fccf9e16800dbd185de408ea209.pts 03001627/expert_verified/points_label/f5f18fccf9e16800dbd185de408ea209.seg 03001627\n04379243/points/4d873bf1a658dcd523eb3ad3d378722a.pts 
04379243/expert_verified/points_label/4d873bf1a658dcd523eb3ad3d378722a.seg 04379243
03001627/points/a3e4639ff201f69b22a3043dcd383f68.pts 03001627/expert_verified/points_label/a3e4639ff201f69b22a3043dcd383f68.seg 03001627
04379243/points/8d247c6f6aaf805a2530bfb25087f2b0.pts 04379243/expert_verified/points_label/8d247c6f6aaf805a2530bfb25087f2b0.seg 04379243
03467517/points/511fc5ccf4f1c857a24eeca91f583600.pts 03467517/expert_verified/points_label/511fc5ccf4f1c857a24eeca91f583600.seg 03467517
02691156/points/4635326bc4fdc3e9297cd7e2ef7dfa80.pts 02691156/expert_verified/points_label/4635326bc4fdc3e9297cd7e2ef7dfa80.seg 02691156
03001627/points/525776b59266140381dff5c2e57ad46e.pts 03001627/expert_verified/points_label/525776b59266140381dff5c2e57ad46e.seg 03001627
03001627/points/f1d6552ca66b2e37713decb1a0563b12.pts 03001627/expert_verified/points_label/f1d6552ca66b2e37713decb1a0563b12.seg 03001627
04379243/points/40ff8ae39ad13d014a873bbe35452b88.pts 04379243/expert_verified/points_label/40ff8ae39ad13d014a873bbe35452b88.seg 04379243
02691156/points/59f258b7aa7c1f7aa7d0c1e4eb8db7dc.pts 02691156/expert_verified/points_label/59f258b7aa7c1f7aa7d0c1e4eb8db7dc.seg 02691156
04379243/points/63aa14915f59ed8671a782a4379556c7.pts 04379243/expert_verified/points_label/63aa14915f59ed8671a782a4379556c7.seg 04379243
02691156/points/e16f9cc7dedcacdb9b0435532743fd43.pts 02691156/expert_verified/points_label/e16f9cc7dedcacdb9b0435532743fd43.seg 02691156
04379243/points/c5b83c681c085f2195493ccf8f26ab2c.pts 04379243/expert_verified/points_label/c5b83c681c085f2195493ccf8f26ab2c.seg 04379243
03001627/points/b2ba1569509cdb439451566a8c6563ed.pts 03001627/expert_verified/points_label/b2ba1569509cdb439451566a8c6563ed.seg 03001627
02691156/points/265f5348ab2320b2148672750a1a335.pts 02691156/expert_verified/points_label/265f5348ab2320b2148672750a1a335.seg 02691156
03001627/points/47da08d9c7cd7e104b3c42e318f3affc.pts 03001627/expert_verified/points_label/47da08d9c7cd7e104b3c42e318f3affc.seg 03001627
03001627/points/458356b9c5a8d7bd7cc86734cb2f5062.pts 03001627/expert_verified/points_label/458356b9c5a8d7bd7cc86734cb2f5062.seg 03001627
02691156/points/d20e3ed9b3430672bbf3143b1cb6076a.pts 02691156/expert_verified/points_label/d20e3ed9b3430672bbf3143b1cb6076a.seg 02691156
04379243/points/c45e6ceae72c7a97be8908669c476d49.pts 04379243/expert_verified/points_label/c45e6ceae72c7a97be8908669c476d49.seg 04379243
03001627/points/d9bbd1a1eaf6d2259d3ea1c6b57a0095.pts 03001627/expert_verified/points_label/d9bbd1a1eaf6d2259d3ea1c6b57a0095.seg 03001627
02958343/points/8242b114695b68286f522b2bb8ded829.pts 02958343/expert_verified/points_label/8242b114695b68286f522b2bb8ded829.seg 02958343
03001627/points/e4b40369894a16ce6821a1e68ba5ebab.pts 03001627/expert_verified/points_label/e4b40369894a16ce6821a1e68ba5ebab.seg 03001627
03636649/points/dfe800d8d8642e9647bc3701b998a7d5.pts 03636649/expert_verified/points_label/dfe800d8d8642e9647bc3701b998a7d5.seg 03636649
04379243/points/bdf7606e8d493149664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/bdf7606e8d493149664b3b9b23ddfcbc.seg 04379243
03001627/points/6015aaa9ef170d9bfdef1c01cbd4ae0c.pts 03001627/expert_verified/points_label/6015aaa9ef170d9bfdef1c01cbd4ae0c.seg 03001627
03624134/points/df7a65224f295122ed9c5b25fef60d04.pts 03624134/expert_verified/points_label/df7a65224f295122ed9c5b25fef60d04.seg 03624134
03467517/points/df959f68bb22e402a24eeca91f583600.pts 03467517/expert_verified/points_label/df959f68bb22e402a24eeca91f583600.seg 03467517
04379243/points/69604fc24b7976d69ccce4c6d5bb195f.pts 04379243/expert_verified/points_label/69604fc24b7976d69ccce4c6d5bb195f.seg 04379243
04379243/points/23aca164c7b2e2d4ad8af6714b643432.pts 04379243/expert_verified/points_label/23aca164c7b2e2d4ad8af6714b643432.seg 04379243
03636649/points/e37796d40348fa5fd8013bb984303089.pts 03636649/expert_verified/points_label/e37796d40348fa5fd8013bb984303089.seg 03636649
04379243/points/8cb6a2e9ba365c94593ebeeedbff73b.pts 04379243/expert_verified/points_label/8cb6a2e9ba365c94593ebeeedbff73b.seg 04379243
03001627/points/d6f2d44c693d2e857062f2d72cde5c95.pts 03001627/expert_verified/points_label/d6f2d44c693d2e857062f2d72cde5c95.seg 03001627
03948459/points/ed29dd43ad28f042d1987c07c912c6e1.pts 03948459/expert_verified/points_label/ed29dd43ad28f042d1987c07c912c6e1.seg 03948459
03001627/points/ca01fd0de2534323c594a0e804f37c1a.pts 03001627/expert_verified/points_label/ca01fd0de2534323c594a0e804f37c1a.seg 03001627
03636649/points/e7b719516449701362525a4d857f099d.pts 03636649/expert_verified/points_label/e7b719516449701362525a4d857f099d.seg 03636649
02691156/points/bd48d0beb5d1acf1d2106c9042f1bde9.pts 02691156/expert_verified/points_label/bd48d0beb5d1acf1d2106c9042f1bde9.seg 02691156
03636649/points/7cb828eb3b8e424b1e88064118b89a3e.pts 03636649/expert_verified/points_label/7cb828eb3b8e424b1e88064118b89a3e.seg 03636649
03001627/points/fdd21f7f2ca9f0bcbdcbca499b446e89.pts 03001627/expert_verified/points_label/fdd21f7f2ca9f0bcbdcbca499b446e89.seg 03001627
03636649/points/d779977c2417752b815c6de5374a8dd2.pts 03636649/expert_verified/points_label/d779977c2417752b815c6de5374a8dd2.seg 03636649
02691156/points/f3e2df468c15795872517bb0a6b4d3ef.pts 02691156/expert_verified/points_label/f3e2df468c15795872517bb0a6b4d3ef.seg 02691156
04379243/points/e3cc0b06be2c972cab610b0c94236463.pts 04379243/expert_verified/points_label/e3cc0b06be2c972cab610b0c94236463.seg 04379243
03261776/points/ca1c1c9aba8f4491a656de49935d2359.pts 03261776/expert_verified/points_label/ca1c1c9aba8f4491a656de49935d2359.seg 03261776
03001627/points/c535629f9661293dc16ef5c633c71b56.pts 03001627/expert_verified/points_label/c535629f9661293dc16ef5c633c71b56.seg 03001627
03636649/points/699fcda4f4e9166ec5eb7aae719027b2.pts 03636649/expert_verified/points_label/699fcda4f4e9166ec5eb7aae719027b2.seg 03636649
03001627/points/8a5d60067de905336c183a120a388982.pts 03001627/expert_verified/points_label/8a5d60067de905336c183a120a388982.seg 03001627
02691156/points/4ad92be763c2ded8fca1f1143bb6bc17.pts 02691156/expert_verified/points_label/4ad92be763c2ded8fca1f1143bb6bc17.seg 02691156
04379243/points/14d6b4b09dfc54e9d679a95896f75103.pts 04379243/expert_verified/points_label/14d6b4b09dfc54e9d679a95896f75103.seg 04379243
02691156/points/5e9129782c45b26992e39b8eae3e6b15.pts 02691156/expert_verified/points_label/5e9129782c45b26992e39b8eae3e6b15.seg 02691156
02691156/points/2aec6e6096e640add00d52e62bf14ee9.pts 02691156/expert_verified/points_label/2aec6e6096e640add00d52e62bf14ee9.seg 02691156
03642806/points/7b4260884a1dfd76b080af510dd640b.pts 03642806/expert_verified/points_label/7b4260884a1dfd76b080af510dd640b.seg 03642806
03636649/points/3a0edfd418e020b97f32712aef0efc5a.pts 03636649/expert_verified/points_label/3a0edfd418e020b97f32712aef0efc5a.seg 03636649
03467517/points/1c374a198daaddc493f0194265a9746c.pts 03467517/expert_verified/points_label/1c374a198daaddc493f0194265a9746c.seg 03467517
04379243/points/9d90a58677e619f94b8710a3469971b1.pts 04379243/expert_verified/points_label/9d90a58677e619f94b8710a3469971b1.seg 04379243
02691156/points/26f8a11864fd6bf7b68211fcc7956ac6.pts 02691156/expert_verified/points_label/26f8a11864fd6bf7b68211fcc7956ac6.seg 02691156
02773838/points/f5108ede5ca11f041f6736765dee4fa9.pts 02773838/expert_verified/points_label/f5108ede5ca11f041f6736765dee4fa9.seg 02773838
03001627/points/41ce60d5443c203eb31c248b8665b2e7.pts 03001627/expert_verified/points_label/41ce60d5443c203eb31c248b8665b2e7.seg 03001627
03797390/points/a637500654ca8d16c97cfc3e8a6b1d16.pts 03797390/expert_verified/points_label/a637500654ca8d16c97cfc3e8a6b1d16.seg 03797390
03001627/points/9ee4b9c97bcf4b3715dec43ae6a12831.pts 03001627/expert_verified/points_label/9ee4b9c97bcf4b3715dec43ae6a12831.seg 03001627
03001627/points/e2dbad7996e7e13430c589758b4b5646.pts 03001627/expert_verified/points_label/e2dbad7996e7e13430c589758b4b5646.seg 03001627
03001627/points/ec9f1fc13f2e4ae2c3bd24f986301745.pts 03001627/expert_verified/points_label/ec9f1fc13f2e4ae2c3bd24f986301745.seg 03001627
03624134/points/172b9a77462dcdeaed90ead9558ee6cb.pts 03624134/expert_verified/points_label/172b9a77462dcdeaed90ead9558ee6cb.seg 03624134
04379243/points/713a4be770bb19b9586b2526565371c0.pts 04379243/expert_verified/points_label/713a4be770bb19b9586b2526565371c0.seg 04379243
04379243/points/f2e6820ca69d9b7719fb4103277a6b93.pts 04379243/expert_verified/points_label/f2e6820ca69d9b7719fb4103277a6b93.seg 04379243
03001627/points/11a06e6f68b1d99c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/11a06e6f68b1d99c8687ff9b0b4e4ac.seg 03001627
04379243/points/cfd7e354a5ae982aa0ab1d82ef09f78f.pts 04379243/expert_verified/points_label/cfd7e354a5ae982aa0ab1d82ef09f78f.seg 04379243
03797390/points/8012f52dd0a4d2f718a93a45bf780820.pts 03797390/expert_verified/points_label/8012f52dd0a4d2f718a93a45bf780820.seg 03797390
03636649/points/57c1bc69df779d87bbc7a6acbd8f058b.pts 03636649/expert_verified/points_label/57c1bc69df779d87bbc7a6acbd8f058b.seg 03636649
03948459/points/664579680dc09267e1f2a1daf140ac9f.pts 03948459/expert_verified/points_label/664579680dc09267e1f2a1daf140ac9f.seg 03948459
03001627/points/ca032d3b6dcbe1cea3056fa1e8da3997.pts 03001627/expert_verified/points_label/ca032d3b6dcbe1cea3056fa1e8da3997.seg 03001627
02691156/points/4a837740b388aa45d8ff6111270336a9.pts 02691156/expert_verified/points_label/4a837740b388aa45d8ff6111270336a9.seg 02691156
04099429/points/64803bab9799d0e698d2d2b2ae2563b0.pts 04099429/expert_verified/points_label/64803bab9799d0e698d2d2b2ae2563b0.seg 04099429
04379243/points/c2c36909e461e10adaaaeef365d8f6e5.pts 04379243/expert_verified/points_label/c2c36909e461e10adaaaeef365d8f6e5.seg 04379243
04379243/points/bc842e548e68a3cbb48513409ae7c51d.pts 04379243/expert_verified/points_label/bc842e548e68a3cbb48513409ae7c51d.seg 04379243
03467517/points/4709e55a82a63f64d57700c05b1862d8.pts 03467517/expert_verified/points_label/4709e55a82a63f64d57700c05b1862d8.seg 03467517
04379243/points/dc6f030d9ee566a5dcfcef693e7ec696.pts 04379243/expert_verified/points_label/dc6f030d9ee566a5dcfcef693e7ec696.seg 04379243
03001627/points/8be8093e99b94bd9cf320c31965db5a1.pts 03001627/expert_verified/points_label/8be8093e99b94bd9cf320c31965db5a1.seg 03001627
02958343/points/a0a1b0377d72e86bab3dd76bf33b0f5e.pts 02958343/expert_verified/points_label/a0a1b0377d72e86bab3dd76bf33b0f5e.seg 02958343
03001627/points/efc684ff4dc6ff49ccd42a2d6eea63ed.pts 03001627/expert_verified/points_label/efc684ff4dc6ff49ccd42a2d6eea63ed.seg 03001627
03001627/points/ff2223a085d32243696b74614952b2d0.pts 03001627/expert_verified/points_label/ff2223a085d32243696b74614952b2d0.seg 03001627
02954340/points/8b2951e32e0906bb5f6cb4951755315c.pts 02954340/expert_verified/points_label/8b2951e32e0906bb5f6cb4951755315c.seg 02954340
04379243/points/82b69c9b72a5159ce76bc197b3a3ffc0.pts 04379243/expert_verified/points_label/82b69c9b72a5159ce76bc197b3a3ffc0.seg 04379243
03642806/points/5b5247b13d5b21bdad2954b86711abbd.pts 03642806/expert_verified/points_label/5b5247b13d5b21bdad2954b86711abbd.seg 03642806
03636649/points/44e442591f82cd4cab0ac374f450cdc.pts 03636649/expert_verified/points_label/44e442591f82cd4cab0ac374f450cdc.seg 03636649
03001627/points/2a1184b04dd8f30e3e92f39ce48d644.pts 03001627/expert_verified/points_label/2a1184b04dd8f30e3e92f39ce48d644.seg 03001627
03636649/points/bc49fe3559e18fcb7d910d51d878f708.pts 03636649/expert_verified/points_label/bc49fe3559e18fcb7d910d51d878f708.seg 03636649
03624134/points/c50af8af50613e822bf26da672b84220.pts 03624134/expert_verified/points_label/c50af8af50613e822bf26da672b84220.seg 03624134
04225987/points/c0280aaad5473e8398c63cb68f11df34.pts 04225987/expert_verified/points_label/c0280aaad5473e8398c63cb68f11df34.seg 04225987
03636649/points/5849d1a237cb493c659dda512294c744.pts 03636649/expert_verified/points_label/5849d1a237cb493c659dda512294c744.seg 03636649
02958343/points/fcd90d547fdeb629f200a72c9245aee7.pts 02958343/expert_verified/points_label/fcd90d547fdeb629f200a72c9245aee7.seg 02958343
03001627/points/34898c36e711fbde713decb1a0563b12.pts 03001627/expert_verified/points_label/34898c36e711fbde713decb1a0563b12.seg 03001627
02691156/points/af696fc30a96a0c8bc0909d98a1ff2b4.pts 02691156/expert_verified/points_label/af696fc30a96a0c8bc0909d98a1ff2b4.seg 02691156
04379243/points/f28e030e715b9d3e318462aca9e62b6b.pts 04379243/expert_verified/points_label/f28e030e715b9d3e318462aca9e62b6b.seg 04379243
02691156/points/3c7e4628a9ea201bbf3143b1cb6076a.pts 02691156/expert_verified/points_label/3c7e4628a9ea201bbf3143b1cb6076a.seg 02691156
03636649/points/f092117adb1e9254d1cbf3e52b9b6237.pts 03636649/expert_verified/points_label/f092117adb1e9254d1cbf3e52b9b6237.seg 03636649
04379243/points/7dd881a26eea656d193afeeca14e3baa.pts 04379243/expert_verified/points_label/7dd881a26eea656d193afeeca14e3baa.seg 04379243
03001627/points/79a3115a6f96eef7c151419181ef256.pts 03001627/expert_verified/points_label/79a3115a6f96eef7c151419181ef256.seg 03001627
04379243/points/fc51355d4d03ff4ae6c5cd45aa112726.pts 04379243/expert_verified/points_label/fc51355d4d03ff4ae6c5cd45aa112726.seg 04379243
04379243/points/34121f5cc12135148c1cf3f7d7f0373.pts 04379243/expert_verified/points_label/34121f5cc12135148c1cf3f7d7f0373.seg 04379243
03624134/points/d5167211e757e79f012465c621a63e3.pts 03624134/expert_verified/points_label/d5167211e757e79f012465c621a63e3.seg 03624134
04379243/points/5b375eacdbe49cfaaa539cd22945e538.pts 04379243/expert_verified/points_label/5b375eacdbe49cfaaa539cd22945e538.seg 04379243
02691156/points/d3d788c1fb35227619ba010ddb4974fe.pts 02691156/expert_verified/points_label/d3d788c1fb35227619ba010ddb4974fe.seg 02691156
02691156/points/f26ea1a00455f44fb88e2a19106395c2.pts 02691156/expert_verified/points_label/f26ea1a00455f44fb88e2a19106395c2.seg 02691156
03001627/points/798a46965d9e0edfcea003eff0268278.pts 03001627/expert_verified/points_label/798a46965d9e0edfcea003eff0268278.seg 03001627
02691156/points/3069d990d52051eb3a34c2907e8f3f1f.pts 02691156/expert_verified/points_label/3069d990d52051eb3a34c2907e8f3f1f.seg 02691156
02691156/points/8c42e3042a4beaa7d5c40787c7bb7824.pts 02691156/expert_verified/points_label/8c42e3042a4beaa7d5c40787c7bb7824.seg 02691156
04379243/points/45c5ee611c73b90a509330ce00eb0b20.pts 04379243/expert_verified/points_label/45c5ee611c73b90a509330ce00eb0b20.seg 04379243
03001627/points/22ada577361ed0374b3c42e318f3affc.pts 03001627/expert_verified/points_label/22ada577361ed0374b3c42e318f3affc.seg 03001627
04379243/points/b6ad7be371729438dcfcef693e7ec696.pts 04379243/expert_verified/points_label/b6ad7be371729438dcfcef693e7ec696.seg 04379243
03636649/points/4c266f2b866c59e761fef32872c6fa53.pts 03636649/expert_verified/points_label/4c266f2b866c59e761fef32872c6fa53.seg 03636649
04379243/points/812dd06fc99f174e9f2349486c570dd4.pts 04379243/expert_verified/points_label/812dd06fc99f174e9f2349486c570dd4.seg 04379243
02691156/points/36a5bd4ca6a0b191532d23702363f9a5.pts 02691156/expert_verified/points_label/36a5bd4ca6a0b191532d23702363f9a5.seg 02691156
03001627/points/be0890a6a0f3fcf841f91bc9e1dece3b.pts 03001627/expert_verified/points_label/be0890a6a0f3fcf841f91bc9e1dece3b.seg 03001627
03642806/points/6008f256f3beafd9988abef1fd117e7.pts 03642806/expert_verified/points_label/6008f256f3beafd9988abef1fd117e7.seg 03642806
03001627/points/490941bf4a532b62492d9da2668ec34c.pts 03001627/expert_verified/points_label/490941bf4a532b62492d9da2668ec34c.seg 03001627
03636649/points/94940283714fdff6244ba644cf33cb2e.pts 03636649/expert_verified/points_label/94940283714fdff6244ba644cf33cb2e.seg 03636649
03642806/points/6227e7dd1a391e8d54f22ce0a3592d5.pts 03642806/expert_verified/points_label/6227e7dd1a391e8d54f22ce0a3592d5.seg 03642806
02691156/points/b2ceeee3c5b75962ac4f72bf08dc79a6.pts 02691156/expert_verified/points_label/b2ceeee3c5b75962ac4f72bf08dc79a6.seg 02691156
03642806/points/55a05b33f34e7211f71cb38553f14917.pts 03642806/expert_verified/points_label/55a05b33f34e7211f71cb38553f14917.seg 03642806
02773838/points/74c548ef3ca7b1987515e7bb7dba4019.pts 02773838/expert_verified/points_label/74c548ef3ca7b1987515e7bb7dba4019.seg 02773838
03467517/points/defcf80fcef4b51b3f431ca2c1260d62.pts 03467517/expert_verified/points_label/defcf80fcef4b51b3f431ca2c1260d62.seg 03467517
04379243/points/eaea1cf98b61abd043383304411cc9ec.pts 04379243/expert_verified/points_label/eaea1cf98b61abd043383304411cc9ec.seg 04379243
03001627/points/7f6858bd9d4af9df97316612e1a4343a.pts 03001627/expert_verified/points_label/7f6858bd9d4af9df97316612e1a4343a.seg 03001627
03001627/points/3c27660aacbcf99886327adaa986dff.pts 03001627/expert_verified/points_label/3c27660aacbcf99886327adaa986dff.seg 03001627
04379243/points/229d510bace435811572ee5ddf1b55b.pts 04379243/expert_verified/points_label/229d510bace435811572ee5ddf1b55b.seg 04379243
03636649/points/83c0ad378b5802b73d39d8012919dd25.pts 03636649/expert_verified/points_label/83c0ad378b5802b73d39d8012919dd25.seg 03636649
02691156/points/f009f3112625ee00b8cf782e8c539948.pts 02691156/expert_verified/points_label/f009f3112625ee00b8cf782e8c539948.seg 02691156
02691156/points/f13827d156628467b4cdad9a5bf52dd5.pts 02691156/expert_verified/points_label/f13827d156628467b4cdad9a5bf52dd5.seg 02691156
03636649/points/526251a7530426a4b3c42e318f3affc.pts 03636649/expert_verified/points_label/526251a7530426a4b3c42e318f3affc.seg 03636649
03001627/points/a1133464132d65fcfce0ccdae30f97db.pts 03001627/expert_verified/points_label/a1133464132d65fcfce0ccdae30f97db.seg 03001627
02691156/points/d844094b073a0452b04b2d1c5ce9783b.pts 02691156/expert_verified/points_label/d844094b073a0452b04b2d1c5ce9783b.seg 02691156
03948459/points/2f5b4bcb8d4dd901609e2d916fa0da27.pts 03948459/expert_verified/points_label/2f5b4bcb8d4dd901609e2d916fa0da27.seg 03948459
03636649/points/a4c06cd5032733af543df75232f6ff2b.pts 03636649/expert_verified/points_label/a4c06cd5032733af543df75232f6ff2b.seg 03636649
03636649/points/64eaa45bd2e01db8991ff09eca5b27a8.pts 03636649/expert_verified/points_label/64eaa45bd2e01db8991ff09eca5b27a8.seg 03636649
03636649/points/5bc478e9c4e0bb8180936c51aa7ffcf5.pts 03636649/expert_verified/points_label/5bc478e9c4e0bb8180936c51aa7ffcf5.seg 03636649
03636649/points/b02bd8e5ef9cfe354b3c42e318f3affc.pts 03636649/expert_verified/points_label/b02bd8e5ef9cfe354b3c42e318f3affc.seg 03636649
03636649/points/cf6c082b9534049494db33559ec0df30.pts 03636649/expert_verified/points_label/cf6c082b9534049494db33559ec0df30.seg 03636649
04225987/points/af4343c5b78b70b11082f2ea630bf69e.pts 04225987/expert_verified/points_label/af4343c5b78b70b11082f2ea630bf69e.seg 04225987
03467517/points/c084022f2ddbf95493f0194265a9746c.pts 03467517/expert_verified/points_label/c084022f2ddbf95493f0194265a9746c.seg 03467517
03001627/points/550dd11407c28f9f3bd04286517a8395.pts 03001627/expert_verified/points_label/550dd11407c28f9f3bd04286517a8395.seg 03001627
04379243/points/702cebffa33a19f019f079d1b712f46f.pts 04379243/expert_verified/points_label/702cebffa33a19f019f079d1b712f46f.seg 04379243
04379243/points/388d9e7b2b8a8f909492fbce0bd54e2e.pts 04379243/expert_verified/points_label/388d9e7b2b8a8f909492fbce0bd54e2e.seg 04379243
03636649/points/7634fbdcaa6b304d62c83ac1e3a4ebaa.pts 03636649/expert_verified/points_label/7634fbdcaa6b304d62c83ac1e3a4ebaa.seg 03636649
03636649/points/14d3d2418165ec86bba785994a529f86.pts 03636649/expert_verified/points_label/14d3d2418165ec86bba785994a529f86.seg 03636649
04379243/points/13e19274b358ec867aa3000697a75d55.pts 04379243/expert_verified/points_label/13e19274b358ec867aa3000697a75d55.seg 04379243
03467517/points/727fcc85add981325e683993f34d42f2.pts 03467517/expert_verified/points_label/727fcc85add981325e683993f34d42f2.seg 03467517
02691156/points/947d6b9cd1966e2e719b5362fe06bbb.pts 02691156/expert_verified/points_label/947d6b9cd1966e2e719b5362fe06bbb.seg 02691156
04379243/points/ee5f85db427865e63e5399147a5b4763.pts 04379243/expert_verified/points_label/ee5f85db427865e63e5399147a5b4763.seg 04379243
02691156/points/1678946724380812de689e373096b0e3.pts 02691156/expert_verified/points_label/1678946724380812de689e373096b0e3.seg 02691156
03001627/points/3fdef0a7606c397331ad067823a3f0ce.pts 03001627/expert_verified/points_label/3fdef0a7606c397331ad067823a3f0ce.seg 03001627
03636649/points/1bb465b8f22315d1116f219d90a571c2.pts 03636649/expert_verified/points_label/1bb465b8f22315d1116f219d90a571c2.seg 03636649
04379243/points/9dd5b7e6f90ee322b56d92c5d7b06038.pts 04379243/expert_verified/points_label/9dd5b7e6f90ee322b56d92c5d7b06038.seg 04379243
03467517/points/7eee3b79e053759143891ae68a82472e.pts 03467517/expert_verified/points_label/7eee3b79e053759143891ae68a82472e.seg 03467517
03001627/points/f4b6bf9253918b52944d8f8e13d63fde.pts 03001627/expert_verified/points_label/f4b6bf9253918b52944d8f8e13d63fde.seg 03001627
03636649/points/92e0f64c08f0c8ac3c8d0fdfb1cc2535.pts 03636649/expert_verified/points_label/92e0f64c08f0c8ac3c8d0fdfb1cc2535.seg 03636649
03624134/points/d63521a0dfac9c1f342494fa6f09f376.pts 03624134/expert_verified/points_label/d63521a0dfac9c1f342494fa6f09f376.seg 03624134
04379243/points/c7ff0afab4b7885a52160ba64fb535b2.pts 04379243/expert_verified/points_label/c7ff0afab4b7885a52160ba64fb535b2.seg 04379243
02958343/points/89765af115d9a4955591fcdffe729c55.pts 02958343/expert_verified/points_label/89765af115d9a4955591fcdffe729c55.seg 02958343
03636649/points/70bf2aaedbf9499ec889c00efdaf9928.pts 03636649/expert_verified/points_label/70bf2aaedbf9499ec889c00efdaf9928.seg 03636649
02958343/points/ef15b938dcfa9893c4d922e8a1141322.pts 02958343/expert_verified/points_label/ef15b938dcfa9893c4d922e8a1141322.seg 02958343
03636649/points/4bb676c497969016de98d10ab5975b59.pts 03636649/expert_verified/points_label/4bb676c497969016de98d10ab5975b59.seg 03636649
04379243/points/1c8121e1ad6cd6fc7a480f3f1d55ed3f.pts 04379243/expert_verified/points_label/1c8121e1ad6cd6fc7a480f3f1d55ed3f.seg 04379243
04379243/points/83b8e64089968ae8fd3feb4581507302.pts 04379243/expert_verified/points_label/83b8e64089968ae8fd3feb4581507302.seg 04379243
03636649/points/a4c0f3aed58f0e092fdae21c212bf119.pts 03636649/expert_verified/points_label/a4c0f3aed58f0e092fdae21c212bf119.seg 03636649
04379243/points/e02925509615eb5a4eaf5bbf36d243d4.pts 04379243/expert_verified/points_label/e02925509615eb5a4eaf5bbf36d243d4.seg 04379243
04379243/points/c5087fce38b009ae30bbd4cddd04c77b.pts 04379243/expert_verified/points_label/c5087fce38b009ae30bbd4cddd04c77b.seg 04379243
03001627/points/5107542cfbf142f36209799e55a657c.pts 03001627/expert_verified/points_label/5107542cfbf142f36209799e55a657c.seg 03001627
04379243/points/94a62cfdb84e88ca9a3528690d225ee1.pts 04379243/expert_verified/points_label/94a62cfdb84e88ca9a3528690d225ee1.seg 04379243
04379243/points/80ad1f839582d183fbf6f493308acc40.pts 04379243/expert_verified/points_label/80ad1f839582d183fbf6f493308acc40.seg 04379243
03001627/points/91819d15c2c044ebd47ffa500636d198.pts 03001627/expert_verified/points_label/91819d15c2c044ebd47ffa500636d198.seg 03001627
03636649/points/77a5a12147a6624d786810c22b062a88.pts 03636649/expert_verified/points_label/77a5a12147a6624d786810c22b062a88.seg 03636649
03001627/points/beb4c42cfa1c3b282811d30bba54859.pts 03001627/expert_verified/points_label/beb4c42cfa1c3b282811d30bba54859.seg 03001627
03636649/points/e529fc190753cc9df647dc544bb0ab61.pts 03636649/expert_verified/points_label/e529fc190753cc9df647dc544bb0ab61.seg 03636649
04379243/points/680d4a8b5a30601a4b3c42e318f3affc.pts 04379243/expert_verified/points_label/680d4a8b5a30601a4b3c42e318f3affc.seg 04379243
03001627/points/1d6f4020cab4ec1962d6a66a1a314d66.pts 03001627/expert_verified/points_label/1d6f4020cab4ec1962d6a66a1a314d66.seg 03001627
03001627/points/5b3fd3199d1bc950c1ae25a29e9d46d3.pts 03001627/expert_verified/points_label/5b3fd3199d1bc950c1ae25a29e9d46d3.seg 03001627
03001627/points/17e916fc863540ee3def89b32cef8e45.pts 03001627/expert_verified/points_label/17e916fc863540ee3def89b32cef8e45.seg 03001627
04379243/points/a5d5fc6b0bb7881419fb4103277a6b93.pts 04379243/expert_verified/points_label/a5d5fc6b0bb7881419fb4103277a6b93.seg 04379243
03001627/points/eafec1b145972dcd815b2b467e8e2eac.pts 03001627/expert_verified/points_label/eafec1b145972dcd815b2b467e8e2eac.seg 03001627
04379243/points/1fb2be490f45ec6e19fb4103277a6b93.pts 04379243/expert_verified/points_label/1fb2be490f45ec6e19fb4103277a6b93.seg 04379243
02691156/points/8b61ba80d9e487deca8607f540cc62ba.pts 02691156/expert_verified/points_label/8b61ba80d9e487deca8607f540cc62ba.seg 02691156
03467517/points/2d767b3fbb8a3053b8836869016d1afd.pts 03467517/expert_verified/points_label/2d767b3fbb8a3053b8836869016d1afd.seg 03467517
04379243/points/e0940f2229e42007d98e761e6d91dfc8.pts 04379243/expert_verified/points_label/e0940f2229e42007d98e761e6d91dfc8.seg 04379243
03001627/points/bb90094030f369e4305a3b2fd9173d6f.pts 03001627/expert_verified/points_label/bb90094030f369e4305a3b2fd9173d6f.seg 03001627
02958343/points/c6e3d9cf26016b5752aa494042b7c9db.pts 02958343/expert_verified/points_label/c6e3d9cf26016b5752aa494042b7c9db.seg 02958343
03001627/points/bd0fab2e72b445bd1e722bceee6e83aa.pts 03001627/expert_verified/points_label/bd0fab2e72b445bd1e722bceee6e83aa.seg 03001627
02691156/points/e86fd13a49f0ee0a62b600da24e0965.pts 02691156/expert_verified/points_label/e86fd13a49f0ee0a62b600da24e0965.seg 02691156
03001627/points/eeebe3fe14ee4d3aebefe6b1d594ad2e.pts 03001627/expert_verified/points_label/eeebe3fe14ee4d3aebefe6b1d594ad2e.seg 03001627
04379243/points/398dbb0a34ca527871a782a4379556c7.pts 04379243/expert_verified/points_label/398dbb0a34ca527871a782a4379556c7.seg 04379243
04379243/points/737cc2beda4a023619fb4103277a6b93.pts 04379243/expert_verified/points_label/737cc2beda4a023619fb4103277a6b93.seg 04379243
03001627/points/3895b96949fd81c5f07fee5fc5c45ee2.pts 03001627/expert_verified/points_label/3895b96949fd81c5f07fee5fc5c45ee2.seg 03001627
04379243/points/bba5ce8555c8fa89ba18ade30e563d37.pts 04379243/expert_verified/points_label/bba5ce8555c8fa89ba18ade30e563d37.seg 04379243
04379243/points/cab027dd0162c5b7f1426260885dd0ef.pts 04379243/expert_verified/points_label/cab027dd0162c5b7f1426260885dd0ef.seg 04379243
04379243/points/75f2bc98aecf198974984b9cd0997a52.pts 04379243/expert_verified/points_label/75f2bc98aecf198974984b9cd0997a52.seg 04379243
04379243/points/8d4fe49d942ec85ff4b6538438a0b930.pts 04379243/expert_verified/points_label/8d4fe49d942ec85ff4b6538438a0b930.seg 04379243
03001627/points/89dd53d0377c28207f7114254c4286d2.pts 03001627/expert_verified/points_label/89dd53d0377c28207f7114254c4286d2.seg 03001627
03636649/points/a37695d83a39adb52866fbd701f50f71.pts 03636649/expert_verified/points_label/a37695d83a39adb52866fbd701f50f71.seg 03636649
04379243/points/f99ebf0f053140525a0e5699b3040a35.pts 04379243/expert_verified/points_label/f99ebf0f053140525a0e5699b3040a35.seg 04379243
03624134/points/bbfd2df3edce576e1e652fa812161367.pts 03624134/expert_verified/points_label/bbfd2df3edce576e1e652fa812161367.seg 03624134
04379243/points/f0d8620b49ea76db83130614d8020b3.pts 04379243/expert_verified/points_label/f0d8620b49ea76db83130614d8020b3.seg 04379243
04379243/points/d01a6b35a54c8f77dd986a55e273fa14.pts 04379243/expert_verified/points_label/d01a6b35a54c8f77dd986a55e273fa14.seg 04379243
03001627/points/2f6b0ddf12d1311795bea7c29e873d16.pts 03001627/expert_verified/points_label/2f6b0ddf12d1311795bea7c29e873d16.seg 03001627
03001627/points/5695fd37d1e673cebf964fc57f6a7d6d.pts 03001627/expert_verified/points_label/5695fd37d1e673cebf964fc57f6a7d6d.seg 03001627
03636649/points/746b82746c6a02cca5f600ed2cf472ac.pts 03636649/expert_verified/points_label/746b82746c6a02cca5f600ed2cf472ac.seg 03636649
03001627/points/bcc4ea0133864bfe4d4c0769270d8651.pts 03001627/expert_verified/points_label/bcc4ea0133864bfe4d4c0769270d8651.seg 03001627
03624134/points/81ba3f06ec38eaa46016d22b1dfacd4b.pts 03624134/expert_verified/points_label/81ba3f06ec38eaa46016d22b1dfacd4b.seg 03624134
04379243/points/2a2d6560f14a01c6afac72146bbc9d59.pts 04379243/expert_verified/points_label/2a2d6560f14a01c6afac72146bbc9d59.seg 04379243
04379243/points/856e86709df98497dcfcef693e7ec696.pts 04379243/expert_verified/points_label/856e86709df98497dcfcef693e7ec696.seg 04379243
03948459/points/7418810de4b13e8430b6ca3ac82edfa3.pts 03948459/expert_verified/points_label/7418810de4b13e8430b6ca3ac82edfa3.seg 03948459
03001627/points/11e0f0dfd3d0b22130ddb6ead95f49cc.pts 03001627/expert_verified/points_label/11e0f0dfd3d0b22130ddb6ead95f49cc.seg 03001627
04379243/points/5c6748b094725d9af008d8a3590fb522.pts 04379243/expert_verified/points_label/5c6748b094725d9af008d8a3590fb522.seg 04379243
04379243/points/17f3a2945d6550cbf7628281ecb18112.pts 04379243/expert_verified/points_label/17f3a2945d6550cbf7628281ecb18112.seg 04379243
04379243/points/889c9aedc4ba47592fb02b79d375eea5.pts 04379243/expert_verified/points_label/889c9aedc4ba47592fb02b79d375eea5.seg 04379243
04379243/points/c0b74c61865b563067dc358060e3c47b.pts 04379243/expert_verified/points_label/c0b74c61865b563067dc358060e3c47b.seg 04379243
03636649/points/783b81aa54a69a26d42b9650f19dd425.pts 03636649/expert_verified/points_label/783b81aa54a69a26d42b9650f19dd425.seg 03636649
03467517/points/8b8b084109eef6d81082f2ea630bf69e.pts 03467517/expert_verified/points_label/8b8b084109eef6d81082f2ea630bf69e.seg 03467517
03001627/points/8a9af7d8a83d90fcd53e36731300f5b4.pts 03001627/expert_verified/points_label/8a9af7d8a83d90fcd53e36731300f5b4.seg 03001627
03001627/points/47aca56ff3a7b8a71a782a4379556c7.pts 03001627/expert_verified/points_label/47aca56ff3a7b8a71a782a4379556c7.seg 03001627
03001627/points/9fae8d94a028e9ec2818b21315fe1bde.pts 03001627/expert_verified/points_label/9fae8d94a028e9ec2818b21315fe1bde.seg 03001627
03001627/points/9a41550ba7dd31e3bf80985a99195eb8.pts 03001627/expert_verified/points_label/9a41550ba7dd31e3bf80985a99195eb8.seg 03001627
03001627/points/184b4797cea77beb5ca1c42bb8ac17a.pts 03001627/expert_verified/points_label/184b4797cea77beb5ca1c42bb8ac17a.seg 03001627
04379243/points/bc1ff7fc750617d690f7bef12e52ac08.pts 04379243/expert_verified/points_label/bc1ff7fc750617d690f7bef12e52ac08.seg 04379243
02691156/points/5fb64e3fc0abe449ca8607f540cc62ba.pts 02691156/expert_verified/points_label/5fb64e3fc0abe449ca8607f540cc62ba.seg 02691156
03001627/points/2e0beb3b6927a2b7e45ef4135c266a12.pts 03001627/expert_verified/points_label/2e0beb3b6927a2b7e45ef4135c266a12.seg 03001627
03467517/points/a38684b166ce2c77c155f88004a92bc8.pts 03467517/expert_verified/points_label/a38684b166ce2c77c155f88004a92bc8.seg 03467517
02691156/points/b590adb6d3486f6e90b1d6deb98feec6.pts 02691156/expert_verified/points_label/b590adb6d3486f6e90b1d6deb98feec6.seg 02691156
03636649/points/9d41e23f00d11d153033d35b49a20c8.pts 03636649/expert_verified/points_label/9d41e23f00d11d153033d35b49a20c8.seg 03636649
03001627/points/f4b141ab64a6c4e771a782a4379556c7.pts 03001627/expert_verified/points_label/f4b141ab64a6c4e771a782a4379556c7.seg 03001627
03948459/points/19e45672a3109f18be4927dbd39f74e9.pts 03948459/expert_verified/points_label/19e45672a3109f18be4927dbd39f74e9.seg 03948459
04379243/points/58475b1b20ece0c5eeb8d422649e5f2b.pts 04379243/expert_verified/points_label/58475b1b20ece0c5eeb8d422649e5f2b.seg 04379243
04379243/points/400393a56fc243c442c39a4fb8d01418.pts 04379243/expert_verified/points_label/400393a56fc243c442c39a4fb8d01418.seg 04379243
03001627/points/a128eda00983dd01fb7d9615be5ab4b0.pts 03001627/expert_verified/points_label/a128eda00983dd01fb7d9615be5ab4b0.seg 03001627
04379243/points/6af9a593129b028eb67e68783d58425a.pts 04379243/expert_verified/points_label/6af9a593129b028eb67e68783d58425a.seg 04379243
03001627/points/40f188600cf8362b654ea6737b0d3597.pts 03001627/expert_verified/points_label/40f188600cf8362b654ea6737b0d3597.seg 03001627
04379243/points/a4af8f822fa8d95456c08464b83f209e.pts 04379243/expert_verified/points_label/a4af8f822fa8d95456c08464b83f209e.seg 04379243
03001627/points/d9558dccfe8e3381e45ef4135c266a12.pts 03001627/expert_verified/points_label/d9558dccfe8e3381e45ef4135c266a12.seg 03001627
04379243/points/631028ddb76eed4dbb0085d0daabdaea.pts 04379243/expert_verified/points_label/631028ddb76eed4dbb0085d0daabdaea.seg 04379243
03001627/points/8967e65c1541d1874aa7f42ef07f614e.pts 03001627/expert_verified/points_label/8967e65c1541d1874aa7f42ef07f614e.seg 03001627
04379243/points/38feb6b209579f6faadbf8208284c675.pts 04379243/expert_verified/points_label/38feb6b209579f6faadbf8208284c675.seg 04379243
03624134/points/60277f4060b8703e4e18d7136dc2dc80.pts 03624134/expert_verified/points_label/60277f4060b8703e4e18d7136dc2dc80.seg 03624134
03467517/points/a78c3356a5dca4e7670b811945485012.pts 03467517/expert_verified/points_label/a78c3356a5dca4e7670b811945485012.seg 03467517
03797390/points/645b0e2ef3b95979204df312eabf367f.pts 03797390/expert_verified/points_label/645b0e2ef3b95979204df312eabf367f.seg 03797390
03467517/points/bd6057c7ac1ef31193f0194265a9746c.pts 03467517/expert_verified/points_label/bd6057c7ac1ef31193f0194265a9746c.seg 03467517
03790512/points/bcbcfdad5e0e1d9ba88e8cb97b773125.pts 03790512/expert_verified/points_label/bcbcfdad5e0e1d9ba88e8cb97b773125.seg 03790512
03636649/points/761fb0822bb05bc8ee0cd746086d989.pts 03636649/expert_verified/points_label/761fb0822bb05bc8ee0cd746086d989.seg 03636649
03636649/points/be13324c84d2a9d72b151d8b52c53b90.pts 03636649/expert_verified/points_label/be13324c84d2a9d72b151d8b52c53b90.seg 03636649
04379243/points/7b3dfbd70333485d219a1300d9489f4e.pts 04379243/expert_verified/points_label/7b3dfbd70333485d219a1300d9489f4e.seg 04379243
04379243/points/22c5cbe6271736bffebad4f49b26ec52.pts 04379243/expert_verified/points_label/22c5cbe6271736bffebad4f49b26ec52.seg 04379243
02958343/points/4b7b3b54dc04df53c19f1e8ed99ac2fa.pts 02958343/expert_verified/points_label/4b7b3b54dc04df53c19f1e8ed99ac2fa.seg 02958343
03636649/points/947c6753d77d8082290e2f84c414e6be.pts 03636649/expert_verified/points_label/947c6753d77d8082290e2f84c414e6be.seg 03636649
02958343/points/36c2770d00fdd0bdf1ee968c9039cc3.pts 02958343/expert_verified/points_label/36c2770d00fdd0bdf1ee968c9039cc3.seg 02958343
03001627/points/4ac17ecd78880859e302b6082b0ffc09.pts 03001627/expert_verified/points_label/4ac17ecd78880859e302b6082b0ffc09.seg 03001627
03636649/points/70b78b9439a9de7530f6e0ede20c4525.pts 03636649/expert_verified/points_label/70b78b9439a9de7530f6e0ede20c4525.seg 03636649
04379243/points/d8be4b45afb21cf1616fb9ab42452112.pts 04379243/expert_verified/points_label/d8be4b45afb21cf1616fb9ab42452112.seg 04379243
02691156/points/fe266c740580c102ff9ce0c50c2cd25a.pts 02691156/expert_verified/points_label/fe266c740580c102ff9ce0c50c2cd25a.seg 02691156
02958343/points/30f4617775480bcce27281f3b76d1f5.pts 02958343/expert_verified/points_label/30f4617775480bcce27281f3b76d1f5.seg 02958343
03467517/points/34874708b51c7ed493f0194265a9746c.pts 03467517/expert_verified/points_label/34874708b51c7ed493f0194265a9746c.seg 03467517
04225987/points/abdc4a823b1f78c397f47f3057557cbe.pts 04225987/expert_verified/points_label/abdc4a823b1f78c397f47f3057557cbe.seg 04225987
03948459/points/14fe99eb0c105a90fc9c56fb43681c11.pts 03948459/expert_verified/points_label/14fe99eb0c105a90fc9c56fb43681c11.seg 03948459
04379243/points/f5aecb6607876495e03eb69820d1aaf2.pts 04379243/expert_verified/points_label/f5aecb6607876495e03eb69820d1aaf2.seg 04379243
03001627/points/3c81fab5678a3872327289c00b6dc9ca.pts 03001627/expert_verified/points_label/3c81fab5678a3872327289c00b6dc9ca.seg 03001627
04379243/points/fe3351c94fbab8ce3002761e7a3ba3bd.pts 04379243/expert_verified/points_label/fe3351c94fbab8ce3002761e7a3ba3bd.seg 04379243
04379243/points/5f0c33039269b7a9f0e84b9d9ad447e2.pts 04379243/expert_verified/points_label/5f0c33039269b7a9f0e84b9d9ad447e2.seg 04379243
03001627/points/fa7347547e290732bf65e1af50b5b7d4.pts 03001627/expert_verified/points_label/fa7347547e290732bf65e1af50b5b7d4.seg 03001627
04379243/points/9c33336af33fd905776d8bc79b9caa2c.pts 04379243/expert_verified/points_label/9c33336af33fd905776d8bc79b9caa2c.seg 04379243
03001627/points/1d828c69106609f8cd783766d090e665.pts 03001627/expert_verified/points_label/1d828c69106609f8cd783766d090e665.seg 03001627
04379243/points/5fbb7a5f01f646ca5830980abc1c717a.pts 04379243/expert_verified/points_label/5fbb7a5f01f646ca5830980abc1c717a.seg 04379243
03636649/points/777a686890d74b350359b4e03cfdfa.pts 03636649/expert_verified/points_label/777a686890d74b350359b4e03cfdfa.seg 03636649
02773838/points/3077a9b76724b6d35de21284bb515a83.pts 02773838/expert_verified/points_label/3077a9b76724b6d35de21284bb515a83.seg 02773838
03642806/points/b233163860361eda8cfacef5204026d6.pts 03642806/expert_verified/points_label/b233163860361eda8cfacef5204026d6.seg 03642806
02958343/points/f10f279643fbb3276a78cd0552215cff.pts 02958343/expert_verified/points_label/f10f279643fbb3276a78cd0552215cff.seg 02958343
02691156/points/2c64c521c114df40e51f766854841067.pts 02691156/expert_verified/points_label/2c64c521c114df40e51f766854841067.seg 02691156
03001627/points/3b8f2b955ee9a904b3c42e318f3affc.pts 03001627/expert_verified/points_label/3b8f2b955ee9a904b3c42e318f3affc.seg 03001627
04379243/points/2a64bd38a4e42f33dc43fde5155b3946.pts 04379243/expert_verified/points_label/2a64bd38a4e42f33dc43fde5155b3946.seg 04379243
03001627/points/52310bca00e6a3671201d487ecde379e.pts 03001627/expert_verified/points_label/52310bca00e6a3671201d487ecde379e.seg 03001627
03001627/points/5346017af72c1843169d299c5f567c18.pts 03001627/expert_verified/points_label/5346017af72c1843169d299c5f567c18.seg 03001627
02954340/points/c1436c38beba0005284432ce2f42f498.pts 02954340/expert_verified/points_label/c1436c38beba0005284432ce2f42f498.seg 02954340
03636649/points/34ce1de178694f87e76bc197b3a3ffc0.pts 03636649/expert_verified/points_label/34ce1de178694f87e76bc197b3a3ffc0.seg 03636649
03001627/points/8e7714615a4b1e6f82390c5f604e0d9b.pts 03001627/expert_verified/points_label/8e7714615a4b1e6f82390c5f604e0d9b.seg 03001627
03948459/points/a3e6dcfc074489fd8ec2966c0323533e.pts 03948459/expert_verified/points_label/a3e6dcfc074489fd8ec2966c0323533e.seg 03948459
02691156/points/3ad337dcef167024fe6302fece358e4a.pts 02691156/expert_verified/points_label/3ad337dcef167024fe6302fece358e4a.seg 02691156
04379243/points/124cc3b92266c2767156f312cf4e035e.pts 04379243/expert_verified/points_label/124cc3b92266c2767156f312cf4e035e.seg 04379243
04379243/points/ee5f0411fcff59951105a3fc18779f13.pts 04379243/expert_verified/points_label/ee5f0411fcff59951105a3fc18779f13.seg 04379243
04379243/points/b1117a83ebf5a4c9c337a931444a5063.pts 04379243/expert_verified/points_label/b1117a83ebf5a4c9c337a931444a5063.seg 04379243
03001627/points/fb847cd696ec711197f2016c3d6097c9.pts 03001627/expert_verified/points_label/fb847cd696ec711197f2016c3d6097c9.seg 03001627
02691156/points/50da48c8e7644508fca1f1143bb6bc17.pts 02691156/expert_verified/points_label/50da48c8e7644508fca1f1143bb6bc17.seg 02691156
02958343/points/78c0bec338fa1c01d6b98bf27ff43caf.pts 02958343/expert_verified/points_label/78c0bec338fa1c01d6b98bf27ff43caf.seg 02958343
02691156/points/37fbd275a734ec1b66cf1b4a8fc3914e.pts 02691156/expert_verified/points_label/37fbd275a734ec1b66cf1b4a8fc3914e.seg 02691156
03636649/points/e053e531fc4341b5fcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/e053e531fc4341b5fcb8d8c6d4df8143.seg 03636649
02691156/points/3db61220251b3c9de719b5362fe06bbb.pts 02691156/expert_verified/points_label/3db61220251b3c9de719b5362fe06bbb.seg 02691156
03642806/points/a7f983f1d0642745135a402b573354e4.pts 03642806/expert_verified/points_label/a7f983f1d0642745135a402b573354e4.seg 03642806
03001627/points/4e26eab28703c12bdd5f3f2440a93d21.pts 03001627/expert_verified/points_label/4e26eab28703c12bdd5f3f2440a93d21.seg 03001627
04225987/points/24e46e195f4907887a70e5e6aa241c88.pts 04225987/expert_verified/points_label/24e46e195f4907887a70e5e6aa241c88.seg 04225987
02691156/points/3ab1e94b6c3a1730c56cc5a87f567365.pts 02691156/expert_verified/points_label/3ab1e94b6c3a1730c56cc5a87f567365.seg 02691156
03001627/points/61b984febe54b752d61420a53a0cb96d.pts 03001627/expert_verified/points_label/61b984febe54b752d61420a53a0cb96d.seg 03001627
04379243/points/adf574f947f00bdd548b2639ebc3e759.pts 04379243/expert_verified/points_label/adf574f947f00bdd548b2639ebc3e759.seg 04379243
03001627/points/ef76b9cbf76bad40586ef70b3cee4240.pts 03001627/expert_verified/points_label/ef76b9cbf76bad40586ef70b3cee4240.seg 03001627
04379243/points/abef0c609ad3e9c2edea4b985280bcc1.pts 04379243/expert_verified/points_label/abef0c609ad3e9c2edea4b985280bcc1.seg 04379243
02773838/points/1b84dededd445058e44a5473032f38f.pts 
02773838/expert_verified/points_label/1b84dededd445058e44a5473032f38f.seg 02773838\n04379243/points/cd09a9641ea97d873823cce3247aa03b.pts 04379243/expert_verified/points_label/cd09a9641ea97d873823cce3247aa03b.seg 04379243\n03636649/points/6aa1ce4e245001589f1a71e46bbde97c.pts 03636649/expert_verified/points_label/6aa1ce4e245001589f1a71e46bbde97c.seg 03636649\n04379243/points/bb1aa2cdf216d348e76bc197b3a3ffc0.pts 04379243/expert_verified/points_label/bb1aa2cdf216d348e76bc197b3a3ffc0.seg 04379243\n04379243/points/da1e75a8647bfd919778416969ddad32.pts 04379243/expert_verified/points_label/da1e75a8647bfd919778416969ddad32.seg 04379243\n02958343/points/3d0308da43d52e3ef56f8ea3d9016e55.pts 02958343/expert_verified/points_label/3d0308da43d52e3ef56f8ea3d9016e55.seg 02958343\n04379243/points/1ca75076bcebfac76c3484ac7eef025f.pts 04379243/expert_verified/points_label/1ca75076bcebfac76c3484ac7eef025f.seg 04379243\n02691156/points/97ec5b82d9757b639cb1b92881e8e76.pts 02691156/expert_verified/points_label/97ec5b82d9757b639cb1b92881e8e76.seg 02691156\n02691156/points/75db11c354c6342aad01ec966c80ac91.pts 02691156/expert_verified/points_label/75db11c354c6342aad01ec966c80ac91.seg 02691156\n02691156/points/caf80ecbad22a7384e1799d9d4d697c3.pts 02691156/expert_verified/points_label/caf80ecbad22a7384e1799d9d4d697c3.seg 02691156\n03001627/points/d6e0a95f00c7af6fbae0ffb97058b7cc.pts 03001627/expert_verified/points_label/d6e0a95f00c7af6fbae0ffb97058b7cc.seg 03001627\n04379243/points/fa72e9cf7308066b1c072ac0b83fe07a.pts 04379243/expert_verified/points_label/fa72e9cf7308066b1c072ac0b83fe07a.seg 04379243\n03790512/points/455485399ab75f93429f1c522640e6f0.pts 03790512/expert_verified/points_label/455485399ab75f93429f1c522640e6f0.seg 03790512\n03642806/points/241ec8a746dd1cfc78f71a335ebabfa5.pts 03642806/expert_verified/points_label/241ec8a746dd1cfc78f71a335ebabfa5.seg 03642806\n04379243/points/c6575b4c39a341c698d5fc0473d00a1c.pts 
04379243/expert_verified/points_label/c6575b4c39a341c698d5fc0473d00a1c.seg 04379243\n02958343/points/219a0021526791d18bb5c0bf5eec83fc.pts 02958343/expert_verified/points_label/219a0021526791d18bb5c0bf5eec83fc.seg 02958343\n02691156/points/49917fb82beca4beca8607f540cc62ba.pts 02691156/expert_verified/points_label/49917fb82beca4beca8607f540cc62ba.seg 02691156\n03636649/points/dac278ab197b5efefaa6996ece0d86f4.pts 03636649/expert_verified/points_label/dac278ab197b5efefaa6996ece0d86f4.seg 03636649\n03467517/points/f146c58eaa06f5e4d57700c05b1862d8.pts 03467517/expert_verified/points_label/f146c58eaa06f5e4d57700c05b1862d8.seg 03467517\n04379243/points/aaf6be1d92a8c61fdcfcef693e7ec696.pts 04379243/expert_verified/points_label/aaf6be1d92a8c61fdcfcef693e7ec696.seg 04379243\n03001627/points/46789c1fb150dfaf51f77a6d7299806.pts 03001627/expert_verified/points_label/46789c1fb150dfaf51f77a6d7299806.seg 03001627\n03790512/points/4a2f0b20ef680347395d58407f193ba.pts 03790512/expert_verified/points_label/4a2f0b20ef680347395d58407f193ba.seg 03790512\n04379243/points/28ce06aa6f25b39f2d19175e7d19b7cb.pts 04379243/expert_verified/points_label/28ce06aa6f25b39f2d19175e7d19b7cb.seg 04379243\n02958343/points/1710ff46ca275e171df27141dea8c9a.pts 02958343/expert_verified/points_label/1710ff46ca275e171df27141dea8c9a.seg 02958343\n03636649/points/b57bcdb88c669663ec2a7a1f5fe7365d.pts 03636649/expert_verified/points_label/b57bcdb88c669663ec2a7a1f5fe7365d.seg 03636649\n04379243/points/c348d279fd22730a9741b7ee128375de.pts 04379243/expert_verified/points_label/c348d279fd22730a9741b7ee128375de.seg 04379243\n03001627/points/76fe7cf10c5dbf1edcb466b6f48b5810.pts 03001627/expert_verified/points_label/76fe7cf10c5dbf1edcb466b6f48b5810.seg 03001627\n04379243/points/7727cc0cb47705632dfc2f8d5d30193c.pts 04379243/expert_verified/points_label/7727cc0cb47705632dfc2f8d5d30193c.seg 04379243\n03797390/points/586e67c53f181dc22adf8abaa25e0215.pts 
03797390/expert_verified/points_label/586e67c53f181dc22adf8abaa25e0215.seg 03797390\n04379243/points/d9b418e6ec14dbf50efffb055ed6bd1.pts 04379243/expert_verified/points_label/d9b418e6ec14dbf50efffb055ed6bd1.seg 04379243\n04379243/points/f52e52094d8240b2dcfcef693e7ec696.pts 04379243/expert_verified/points_label/f52e52094d8240b2dcfcef693e7ec696.seg 04379243\n02691156/points/821309c2037b49135fab3f99161dc2c2.pts 02691156/expert_verified/points_label/821309c2037b49135fab3f99161dc2c2.seg 02691156\n02954340/points/254e230d31a62470a52821bf1aa3b19a.pts 02954340/expert_verified/points_label/254e230d31a62470a52821bf1aa3b19a.seg 02954340\n02691156/points/e8de6c58f4a772d771d03b466c72ce41.pts 02691156/expert_verified/points_label/e8de6c58f4a772d771d03b466c72ce41.seg 02691156\n03642806/points/f1c6801e84c85a07bfb149497503af.pts 03642806/expert_verified/points_label/f1c6801e84c85a07bfb149497503af.seg 03642806\n02691156/points/a04d10b24ede5e9a3de778e85611513b.pts 02691156/expert_verified/points_label/a04d10b24ede5e9a3de778e85611513b.seg 02691156\n03467517/points/c8acdfaec5008118343b0b12983b9982.pts 03467517/expert_verified/points_label/c8acdfaec5008118343b0b12983b9982.seg 03467517\n03001627/points/9c3e53d9d1e653c0bf80985a99195eb8.pts 03001627/expert_verified/points_label/9c3e53d9d1e653c0bf80985a99195eb8.seg 03001627\n02691156/points/123bd9e948881939c38a1d3458dafa1b.pts 02691156/expert_verified/points_label/123bd9e948881939c38a1d3458dafa1b.seg 02691156\n03948459/points/abc7a1373f4b30291adcc40d88daf7c8.pts 03948459/expert_verified/points_label/abc7a1373f4b30291adcc40d88daf7c8.seg 03948459\n03636649/points/c906a9c7ae536a0c7fb7f79251dd7727.pts 03636649/expert_verified/points_label/c906a9c7ae536a0c7fb7f79251dd7727.seg 03636649\n03797390/points/e71102b6da1d63f3a363b55cbd344baa.pts 03797390/expert_verified/points_label/e71102b6da1d63f3a363b55cbd344baa.seg 03797390\n03642806/points/22389f9c3c049ce757c29983a611b1c6.pts 
03642806/expert_verified/points_label/22389f9c3c049ce757c29983a611b1c6.seg 03642806\n04379243/points/5c2c29fd07c365afe5c65540d3456093.pts 04379243/expert_verified/points_label/5c2c29fd07c365afe5c65540d3456093.seg 04379243\n03001627/points/9a8dfc7a6831749f504721639e19f609.pts 03001627/expert_verified/points_label/9a8dfc7a6831749f504721639e19f609.seg 03001627\n03001627/points/d49ce87d43cf4c8f1679065e1c457f94.pts 03001627/expert_verified/points_label/d49ce87d43cf4c8f1679065e1c457f94.seg 03001627\n02691156/points/dfa36bffe436a98ee0534173b9189765.pts 02691156/expert_verified/points_label/dfa36bffe436a98ee0534173b9189765.seg 02691156\n04379243/points/987b7b49a1435a4b1b17743c18fb63dc.pts 04379243/expert_verified/points_label/987b7b49a1435a4b1b17743c18fb63dc.seg 04379243\n04379243/points/8d0d7787f4babee7e66285d36ebb986.pts 04379243/expert_verified/points_label/8d0d7787f4babee7e66285d36ebb986.seg 04379243\n04379243/points/4f06092100d0164013d2510999d0f1d2.pts 04379243/expert_verified/points_label/4f06092100d0164013d2510999d0f1d2.seg 04379243\n02958343/points/fce2b933f93d132f4f45033b2f001552.pts 02958343/expert_verified/points_label/fce2b933f93d132f4f45033b2f001552.seg 02958343\n04379243/points/3817a222e96acc4ca78510b72d2281ea.pts 04379243/expert_verified/points_label/3817a222e96acc4ca78510b72d2281ea.seg 04379243\n03001627/points/7ee09fdece7d9142afdb9a672b7d3b8a.pts 03001627/expert_verified/points_label/7ee09fdece7d9142afdb9a672b7d3b8a.seg 03001627\n04379243/points/676d05aaaeecb8a04b3c42e318f3affc.pts 04379243/expert_verified/points_label/676d05aaaeecb8a04b3c42e318f3affc.seg 04379243\n03624134/points/6813197ad5e7011fcc34b900bb2492e.pts 03624134/expert_verified/points_label/6813197ad5e7011fcc34b900bb2492e.seg 03624134\n04379243/points/ea367e390741fc38dcfcef693e7ec696.pts 04379243/expert_verified/points_label/ea367e390741fc38dcfcef693e7ec696.seg 04379243\n04379243/points/2e5ac0552fa296c43bbab77a66bc3671.pts 
04379243/expert_verified/points_label/2e5ac0552fa296c43bbab77a66bc3671.seg 04379243\n03467517/points/32a337387527f39193f0194265a9746c.pts 03467517/expert_verified/points_label/32a337387527f39193f0194265a9746c.seg 03467517\n03001627/points/97cd4ed02e022ce7174150bd56e389a8.pts 03001627/expert_verified/points_label/97cd4ed02e022ce7174150bd56e389a8.seg 03001627\n04379243/points/88e06a85e2a0f99fa7e7cb173e141227.pts 04379243/expert_verified/points_label/88e06a85e2a0f99fa7e7cb173e141227.seg 04379243\n04379243/points/c5a02d586ea431a1e76bc197b3a3ffc0.pts 04379243/expert_verified/points_label/c5a02d586ea431a1e76bc197b3a3ffc0.seg 04379243\n03001627/points/bcdcb4928e07e4174a623eb2e3317415.pts 03001627/expert_verified/points_label/bcdcb4928e07e4174a623eb2e3317415.seg 03001627\n02691156/points/934dd5529c22cd05bc0909d98a1ff2b4.pts 02691156/expert_verified/points_label/934dd5529c22cd05bc0909d98a1ff2b4.seg 02691156\n03001627/points/e696f4c7cd88b8b52ff834514c92e8fd.pts 03001627/expert_verified/points_label/e696f4c7cd88b8b52ff834514c92e8fd.seg 03001627\n02691156/points/93ba822e84586999e3375a6b96a1d765.pts 02691156/expert_verified/points_label/93ba822e84586999e3375a6b96a1d765.seg 02691156\n02958343/points/3ac664a7486a0bdff200a72c9245aee7.pts 02958343/expert_verified/points_label/3ac664a7486a0bdff200a72c9245aee7.seg 02958343\n02691156/points/545cadae487b55bbc46ba5100bcdc520.pts 02691156/expert_verified/points_label/545cadae487b55bbc46ba5100bcdc520.seg 02691156\n03001627/points/c47f71319ead4eb8a4fb72f4f3b0e317.pts 03001627/expert_verified/points_label/c47f71319ead4eb8a4fb72f4f3b0e317.seg 03001627\n04379243/points/39bb09201e0cd201c17e7f250c5222bd.pts 04379243/expert_verified/points_label/39bb09201e0cd201c17e7f250c5222bd.seg 04379243\n04379243/points/13782b95eeefcedacf004563556ddb36.pts 04379243/expert_verified/points_label/13782b95eeefcedacf004563556ddb36.seg 04379243\n03001627/points/3cc90d903e0ec7aa61e11d707ecb7fa0.pts 
03001627/expert_verified/points_label/3cc90d903e0ec7aa61e11d707ecb7fa0.seg 03001627\n04379243/points/4079aaabaa6451a2765ca89770f206ec.pts 04379243/expert_verified/points_label/4079aaabaa6451a2765ca89770f206ec.seg 04379243\n04379243/points/4bbf789edb243cafc955e5ed03ef3a2f.pts 04379243/expert_verified/points_label/4bbf789edb243cafc955e5ed03ef3a2f.seg 04379243\n02773838/points/6187bd900c3bc002ed13f430b2941481.pts 02773838/expert_verified/points_label/6187bd900c3bc002ed13f430b2941481.seg 02773838\n04379243/points/6dc6bb97c387b2f3af4e8812cf1b9e1.pts 04379243/expert_verified/points_label/6dc6bb97c387b2f3af4e8812cf1b9e1.seg 04379243\n03467517/points/9c260623916034b6f7d037d5768b173f.pts 03467517/expert_verified/points_label/9c260623916034b6f7d037d5768b173f.seg 03467517\n02691156/points/8d5c3d38de9c3685f2e77d54f4da142.pts 02691156/expert_verified/points_label/8d5c3d38de9c3685f2e77d54f4da142.seg 02691156\n04379243/points/6152e14b042aa17546f41dc2aaef556b.pts 04379243/expert_verified/points_label/6152e14b042aa17546f41dc2aaef556b.seg 04379243\n03467517/points/68a8bf89972cd337a77e8142614cdaae.pts 03467517/expert_verified/points_label/68a8bf89972cd337a77e8142614cdaae.seg 03467517\n02691156/points/3d5354863690ac7eca27bba175814d1.pts 02691156/expert_verified/points_label/3d5354863690ac7eca27bba175814d1.seg 02691156\n04379243/points/3411daa955306811d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/3411daa955306811d93768e7b9b1eabf.seg 04379243\n04379243/points/8594658920d6ea7b23656ce81843.pts 04379243/expert_verified/points_label/8594658920d6ea7b23656ce81843.seg 04379243\n02691156/points/a074750e28ed3818203936772104a82d.pts 02691156/expert_verified/points_label/a074750e28ed3818203936772104a82d.seg 02691156\n04379243/points/fcd4d0e1777f4841dcfcef693e7ec696.pts 04379243/expert_verified/points_label/fcd4d0e1777f4841dcfcef693e7ec696.seg 04379243\n03948459/points/708e38e7b733fd22bfae4699de9cb91a.pts 03948459/expert_verified/points_label/708e38e7b733fd22bfae4699de9cb91a.seg 
03948459\n04379243/points/3c4e1361b066ea3b8ca998f0f87d0c84.pts 04379243/expert_verified/points_label/3c4e1361b066ea3b8ca998f0f87d0c84.seg 04379243\n03624134/points/38798b7013607bbf1e0b76f10c6e38af.pts 03624134/expert_verified/points_label/38798b7013607bbf1e0b76f10c6e38af.seg 03624134\n02691156/points/2176fa9f69e5e1dcca8607f540cc62ba.pts 02691156/expert_verified/points_label/2176fa9f69e5e1dcca8607f540cc62ba.seg 02691156\n03467517/points/8dd7df733a5ba17acae98171fea031ef.pts 03467517/expert_verified/points_label/8dd7df733a5ba17acae98171fea031ef.seg 03467517\n03001627/points/d3f31fd0fc99f45e8b3f6b4a44a70e52.pts 03001627/expert_verified/points_label/d3f31fd0fc99f45e8b3f6b4a44a70e52.seg 03001627\n02691156/points/118e8142a8cb1fe19a4a28ef635593ce.pts 02691156/expert_verified/points_label/118e8142a8cb1fe19a4a28ef635593ce.seg 02691156\n03624134/points/de62211649b4cced49384f9741ad64d8.pts 03624134/expert_verified/points_label/de62211649b4cced49384f9741ad64d8.seg 03624134\n03642806/points/7a4342f61ed7b153341aafe10fd0cbd4.pts 03642806/expert_verified/points_label/7a4342f61ed7b153341aafe10fd0cbd4.seg 03642806\n03001627/points/ba56f02dee485974c242632b2a8c3129.pts 03001627/expert_verified/points_label/ba56f02dee485974c242632b2a8c3129.seg 03001627\n04379243/points/97b7baeb8a172de42f56f09e5bc67bee.pts 04379243/expert_verified/points_label/97b7baeb8a172de42f56f09e5bc67bee.seg 04379243\n04379243/points/7b2af227264af938d42b9650f19dd425.pts 04379243/expert_verified/points_label/7b2af227264af938d42b9650f19dd425.seg 04379243\n04379243/points/e25fdb977fb867fdc3bd24f986301745.pts 04379243/expert_verified/points_label/e25fdb977fb867fdc3bd24f986301745.seg 04379243\n03467517/points/33da9c54f43be3e17693a84bff425e3.pts 03467517/expert_verified/points_label/33da9c54f43be3e17693a84bff425e3.seg 03467517\n02691156/points/e1e5cfcabcbe26a03087f84b199fd297.pts 02691156/expert_verified/points_label/e1e5cfcabcbe26a03087f84b199fd297.seg 02691156\n03636649/points/ba05811f301cdd791735ea0e092a805a.pts 
03636649/expert_verified/points_label/ba05811f301cdd791735ea0e092a805a.seg 03636649\n03001627/points/6678f63c9b584a549d9e5580ae9f8738.pts 03001627/expert_verified/points_label/6678f63c9b584a549d9e5580ae9f8738.seg 03001627\n04379243/points/b6b8ede77085c0a95bea7c29e873d16.pts 04379243/expert_verified/points_label/b6b8ede77085c0a95bea7c29e873d16.seg 04379243\n02691156/points/d81042a53dd1cc5bd90bfc986bc4c94d.pts 02691156/expert_verified/points_label/d81042a53dd1cc5bd90bfc986bc4c94d.seg 02691156\n03001627/points/37b432326fecc8a1327289c00b6dc9ca.pts 03001627/expert_verified/points_label/37b432326fecc8a1327289c00b6dc9ca.seg 03001627\n03636649/points/c898f9b1dddbb8801735ea0e092a805a.pts 03636649/expert_verified/points_label/c898f9b1dddbb8801735ea0e092a805a.seg 03636649\n03001627/points/5d02aed0e9c93e829b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/5d02aed0e9c93e829b9f2eb77f5e247e.seg 03001627\n03001627/points/9a864d5de972a8c7cb686b8b855fed61.pts 03001627/expert_verified/points_label/9a864d5de972a8c7cb686b8b855fed61.seg 03001627\n04379243/points/b14a14cc2f3c38c9e3def9c422df2282.pts 04379243/expert_verified/points_label/b14a14cc2f3c38c9e3def9c422df2282.seg 04379243\n04379243/points/f2893a87ec37f8b3781cb4570305e329.pts 04379243/expert_verified/points_label/f2893a87ec37f8b3781cb4570305e329.seg 04379243\n02691156/points/3fa511e1882e41eeca8607f540cc62ba.pts 02691156/expert_verified/points_label/3fa511e1882e41eeca8607f540cc62ba.seg 02691156\n02691156/points/444d67950ff9a4cc1139bebb00fe5be8.pts 02691156/expert_verified/points_label/444d67950ff9a4cc1139bebb00fe5be8.seg 02691156\n03001627/points/3d3b7f63f5525b1ae37f5a622d383617.pts 03001627/expert_verified/points_label/3d3b7f63f5525b1ae37f5a622d383617.seg 03001627\n03001627/points/30beaf15d2d2beb1febad4f49b26ec52.pts 03001627/expert_verified/points_label/30beaf15d2d2beb1febad4f49b26ec52.seg 03001627\n04379243/points/59f04ddbd896f4f5430644dfe647c381.pts 
04379243/expert_verified/points_label/59f04ddbd896f4f5430644dfe647c381.seg 04379243\n04379243/points/eb9b9b8d186a974a7afee304cce81d6f.pts 04379243/expert_verified/points_label/eb9b9b8d186a974a7afee304cce81d6f.seg 04379243\n03790512/points/7c4fc3a05d5fc8b1d0f568c31c1cd62a.pts 03790512/expert_verified/points_label/7c4fc3a05d5fc8b1d0f568c31c1cd62a.seg 03790512\n04379243/points/68142013a4f5e7c2febad4f49b26ec52.pts 04379243/expert_verified/points_label/68142013a4f5e7c2febad4f49b26ec52.seg 04379243\n02958343/points/8053e014516531ddc3f500d7b182f6.pts 02958343/expert_verified/points_label/8053e014516531ddc3f500d7b182f6.seg 02958343\n02958343/points/1a3782ae4bd711b66b418c7d9fedcaa9.pts 02958343/expert_verified/points_label/1a3782ae4bd711b66b418c7d9fedcaa9.seg 02958343\n04379243/points/cc58de930acd321fac242c3aebc81b2f.pts 04379243/expert_verified/points_label/cc58de930acd321fac242c3aebc81b2f.seg 04379243\n02691156/points/d4dac019726e980e203936772104a82d.pts 02691156/expert_verified/points_label/d4dac019726e980e203936772104a82d.seg 02691156\n02954340/points/6e983d20e0bf80296829cd4082fbdbdf.pts 02954340/expert_verified/points_label/6e983d20e0bf80296829cd4082fbdbdf.seg 02954340\n03636649/points/fad026744a6abb1937cf479d4bb58d.pts 03636649/expert_verified/points_label/fad026744a6abb1937cf479d4bb58d.seg 03636649\n02958343/points/4d2d4e26349be1f3be2cbcda9b6dc9b2.pts 02958343/expert_verified/points_label/4d2d4e26349be1f3be2cbcda9b6dc9b2.seg 02958343\n03636649/points/280fa01686e780ba3501c961e91ff6d7.pts 03636649/expert_verified/points_label/280fa01686e780ba3501c961e91ff6d7.seg 03636649\n04379243/points/f02907c5c42e1e766f1e07a56c129dfc.pts 04379243/expert_verified/points_label/f02907c5c42e1e766f1e07a56c129dfc.seg 04379243\n04379243/points/5f100571ffd90f8252b4875f731f71cd.pts 04379243/expert_verified/points_label/5f100571ffd90f8252b4875f731f71cd.seg 04379243\n04379243/points/f718cb5d6202341dc183308b9aafe2ca.pts 04379243/expert_verified/points_label/f718cb5d6202341dc183308b9aafe2ca.seg 
04379243\n03642806/points/b436271050d647052f8d6d501b18a4b5.pts 03642806/expert_verified/points_label/b436271050d647052f8d6d501b18a4b5.seg 03642806\n03001627/points/6dddf2b95ca09bf5febad4f49b26ec52.pts 03001627/expert_verified/points_label/6dddf2b95ca09bf5febad4f49b26ec52.seg 03001627\n02691156/points/b812c2df636aa0218b96ae1a0a8b84ec.pts 02691156/expert_verified/points_label/b812c2df636aa0218b96ae1a0a8b84ec.seg 02691156\n02958343/points/89edb3d434f4c983afe1d4530f4c6e24.pts 02958343/expert_verified/points_label/89edb3d434f4c983afe1d4530f4c6e24.seg 02958343\n02958343/points/80ac9cc0d4c9dde3b7a7bc444c2d756b.pts 02958343/expert_verified/points_label/80ac9cc0d4c9dde3b7a7bc444c2d756b.seg 02958343\n04379243/points/b62d45745434ac46c4cfe384be4426c3.pts 04379243/expert_verified/points_label/b62d45745434ac46c4cfe384be4426c3.seg 04379243\n04379243/points/9c4afb731e910d3723500a5b036df62e.pts 04379243/expert_verified/points_label/9c4afb731e910d3723500a5b036df62e.seg 04379243\n04379243/points/43fcddd5232a6021a56e8b79ca4e2911.pts 04379243/expert_verified/points_label/43fcddd5232a6021a56e8b79ca4e2911.seg 04379243\n04379243/points/6724ae69c0bde4c09b7dad6c9c46bcf1.pts 04379243/expert_verified/points_label/6724ae69c0bde4c09b7dad6c9c46bcf1.seg 04379243\n03001627/points/323fc7b1d2b44cb7ff2b8acf844d34d2.pts 03001627/expert_verified/points_label/323fc7b1d2b44cb7ff2b8acf844d34d2.seg 03001627\n03001627/points/434cee44934612a81f98c0761af40e04.pts 03001627/expert_verified/points_label/434cee44934612a81f98c0761af40e04.seg 03001627\n03636649/points/31dee666120727b0be78c8b300d2a963.pts 03636649/expert_verified/points_label/31dee666120727b0be78c8b300d2a963.seg 03636649\n02958343/points/48f5446e6ac9c1b51f1446551412bde4.pts 02958343/expert_verified/points_label/48f5446e6ac9c1b51f1446551412bde4.seg 02958343\n04379243/points/aa3eb180a4f6d8d42de421c2ab5cfb52.pts 04379243/expert_verified/points_label/aa3eb180a4f6d8d42de421c2ab5cfb52.seg 04379243\n04379243/points/14e5e4db3246dacff12d7184a2ad3430.pts 
04379243/expert_verified/points_label/14e5e4db3246dacff12d7184a2ad3430.seg 04379243\n03001627/points/96c0ecd1ef80e818c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/96c0ecd1ef80e818c8687ff9b0b4e4ac.seg 03001627\n04225987/points/d4c042d11f29dffa1082f2ea630bf69e.pts 04225987/expert_verified/points_label/d4c042d11f29dffa1082f2ea630bf69e.seg 04225987\n03642806/points/7ebff305b2e93504239603972bcd2e7b.pts 03642806/expert_verified/points_label/7ebff305b2e93504239603972bcd2e7b.seg 03642806\n03467517/points/369fc7f8d880e1b793f0194265a9746c.pts 03467517/expert_verified/points_label/369fc7f8d880e1b793f0194265a9746c.seg 03467517\n04379243/points/25f69a74efbff4d071a782a4379556c7.pts 04379243/expert_verified/points_label/25f69a74efbff4d071a782a4379556c7.seg 04379243\n04379243/points/7cd4844def36a9f5bc7589eefbdbc3c5.pts 04379243/expert_verified/points_label/7cd4844def36a9f5bc7589eefbdbc3c5.seg 04379243\n03467517/points/5852a24dde24a8ef93f0194265a9746c.pts 03467517/expert_verified/points_label/5852a24dde24a8ef93f0194265a9746c.seg 03467517\n03001627/points/df8440d8678f3a91c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/df8440d8678f3a91c8687ff9b0b4e4ac.seg 03001627\n04379243/points/49bf25ff4401946524c10ba1eb690638.pts 04379243/expert_verified/points_label/49bf25ff4401946524c10ba1eb690638.seg 04379243\n03001627/points/7eedcb6d76b8c23a9cdb421f6af95e5f.pts 03001627/expert_verified/points_label/7eedcb6d76b8c23a9cdb421f6af95e5f.seg 03001627\n03797390/points/ff1a44e1c1785d618bca309f2c51966a.pts 03797390/expert_verified/points_label/ff1a44e1c1785d618bca309f2c51966a.seg 03797390\n02958343/points/85f3dc3318f5200c8672c9b355cd2075.pts 02958343/expert_verified/points_label/85f3dc3318f5200c8672c9b355cd2075.seg 02958343\n02691156/points/c9be9f07f5ae7c375d7629390efe0a2.pts 02691156/expert_verified/points_label/c9be9f07f5ae7c375d7629390efe0a2.seg 02691156\n02691156/points/14cd2f1de7f68bf3ab550998f901c8e1.pts 
02691156/expert_verified/points_label/14cd2f1de7f68bf3ab550998f901c8e1.seg 02691156\n02958343/points/81fad64b8fd8f010b17445a1c29f6d34.pts 02958343/expert_verified/points_label/81fad64b8fd8f010b17445a1c29f6d34.seg 02958343\n02958343/points/fe2ce22107693354f1cc1cb691702a23.pts 02958343/expert_verified/points_label/fe2ce22107693354f1cc1cb691702a23.seg 02958343\n02691156/points/74cbf170c5f2fb587d9c9c8a8ba32919.pts 02691156/expert_verified/points_label/74cbf170c5f2fb587d9c9c8a8ba32919.seg 02691156\n02691156/points/67dbb0de722cf5cd7a734abc5ba1db0f.pts 02691156/expert_verified/points_label/67dbb0de722cf5cd7a734abc5ba1db0f.seg 02691156\n04379243/points/fa345f8f107d93b9ba70f71694a4b74c.pts 04379243/expert_verified/points_label/fa345f8f107d93b9ba70f71694a4b74c.seg 04379243\n04379243/points/a45a7ba9a2842a55634c21965ee6bab.pts 04379243/expert_verified/points_label/a45a7ba9a2842a55634c21965ee6bab.seg 04379243\n04379243/points/8d7ac6078989980fad16260d4d73b56.pts 04379243/expert_verified/points_label/8d7ac6078989980fad16260d4d73b56.seg 04379243\n03001627/points/e803b31e2185d0405784b22e1081a3e1.pts 03001627/expert_verified/points_label/e803b31e2185d0405784b22e1081a3e1.seg 03001627\n04379243/points/aaf3aeda0f848344b87028a4b477349f.pts 04379243/expert_verified/points_label/aaf3aeda0f848344b87028a4b477349f.seg 04379243\n03636649/points/e94aab17400945413225afab722d9fd2.pts 03636649/expert_verified/points_label/e94aab17400945413225afab722d9fd2.seg 03636649\n03001627/points/d2c465e85d2e8f1fcea003eff0268278.pts 03001627/expert_verified/points_label/d2c465e85d2e8f1fcea003eff0268278.seg 03001627\n03001627/points/88376e3d3a23d263de29d28278a34a18.pts 03001627/expert_verified/points_label/88376e3d3a23d263de29d28278a34a18.seg 03001627\n04379243/points/4775e71d37374444febad4f49b26ec52.pts 04379243/expert_verified/points_label/4775e71d37374444febad4f49b26ec52.seg 04379243\n03636649/points/f12822778713f5e35b36bbc16e99b441.pts 
03636649/expert_verified/points_label/f12822778713f5e35b36bbc16e99b441.seg 03636649\n03636649/points/963e6743370d5c5c9b5d51fa8cce1753.pts 03636649/expert_verified/points_label/963e6743370d5c5c9b5d51fa8cce1753.seg 03636649\n04379243/points/13c51c08c3695a09eda47978b73f5994.pts 04379243/expert_verified/points_label/13c51c08c3695a09eda47978b73f5994.seg 04379243\n04379243/points/89827ac677337629ab610b0c94236463.pts 04379243/expert_verified/points_label/89827ac677337629ab610b0c94236463.seg 04379243\n04379243/points/89b478643e53d3d6285c99063fc6fcf8.pts 04379243/expert_verified/points_label/89b478643e53d3d6285c99063fc6fcf8.seg 04379243\n04379243/points/401cd99ace3b92fadf6cfab91d65bb91.pts 04379243/expert_verified/points_label/401cd99ace3b92fadf6cfab91d65bb91.seg 04379243\n04379243/points/74c3d551e32a1cca664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/74c3d551e32a1cca664b3b9b23ddfcbc.seg 04379243\n04379243/points/db64db160fd13a514e1a714ee619465a.pts 04379243/expert_verified/points_label/db64db160fd13a514e1a714ee619465a.seg 04379243\n03001627/points/8e664a0bcaf9d2a45ca1aaa0789db621.pts 03001627/expert_verified/points_label/8e664a0bcaf9d2a45ca1aaa0789db621.seg 03001627\n03001627/points/43897195d7f893d759c257be4c612509.pts 03001627/expert_verified/points_label/43897195d7f893d759c257be4c612509.seg 03001627\n04379243/points/e6d8569c0957e7453002761e7a3ba3bd.pts 04379243/expert_verified/points_label/e6d8569c0957e7453002761e7a3ba3bd.seg 04379243\n03636649/points/ead77648c9c7dbf8d42b9650f19dd425.pts 03636649/expert_verified/points_label/ead77648c9c7dbf8d42b9650f19dd425.seg 03636649\n03636649/points/c54d3a5a9c8a655e46407779dbd69b2d.pts 03636649/expert_verified/points_label/c54d3a5a9c8a655e46407779dbd69b2d.seg 03636649\n03001627/points/379f0efc898d7a7e9fe74a48bbc553d7.pts 03001627/expert_verified/points_label/379f0efc898d7a7e9fe74a48bbc553d7.seg 03001627\n04379243/points/c1d44782ac45d6fe3671949e4f99cc76.pts 
04379243/expert_verified/points_label/c1d44782ac45d6fe3671949e4f99cc76.seg 04379243\n04379243/points/7b3b160dafe7e122d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/7b3b160dafe7e122d93768e7b9b1eabf.seg 04379243\n03001627/points/7f271ecbdeb7610d637adadafee6f182.pts 03001627/expert_verified/points_label/7f271ecbdeb7610d637adadafee6f182.seg 03001627\n02958343/points/df34c25a1e1abe9428044fe9244db50a.pts 02958343/expert_verified/points_label/df34c25a1e1abe9428044fe9244db50a.seg 02958343\n03948459/points/98c0bd351e275b3c96893524e607761d.pts 03948459/expert_verified/points_label/98c0bd351e275b3c96893524e607761d.seg 03948459\n03636649/points/b96c8cc6529167bfcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/b96c8cc6529167bfcb8d8c6d4df8143.seg 03636649\n03624134/points/a33847e9c32c1afc93ac017b81605788.pts 03624134/expert_verified/points_label/a33847e9c32c1afc93ac017b81605788.seg 03624134\n03001627/points/594d5b7f3e705a1ab3234e0da44b11e4.pts 03001627/expert_verified/points_label/594d5b7f3e705a1ab3234e0da44b11e4.seg 03001627\n03001627/points/f0f04644e071d9348ca588a3264b9f86.pts 03001627/expert_verified/points_label/f0f04644e071d9348ca588a3264b9f86.seg 03001627\n02691156/points/4bdb2c4fc6701174ca8607f540cc62ba.pts 02691156/expert_verified/points_label/4bdb2c4fc6701174ca8607f540cc62ba.seg 02691156\n03001627/points/fc2a1c4c332f7731e45ef4135c266a12.pts 03001627/expert_verified/points_label/fc2a1c4c332f7731e45ef4135c266a12.seg 03001627\n02691156/points/df68b8fb9f4531b42e690fa6dfd5d610.pts 02691156/expert_verified/points_label/df68b8fb9f4531b42e690fa6dfd5d610.seg 02691156\n03642806/points/517de75577ac6e8a42b9615216f9a30d.pts 03642806/expert_verified/points_label/517de75577ac6e8a42b9615216f9a30d.seg 03642806\n03001627/points/74cc57ea0e2e06dbe4106b1d06dc89b3.pts 03001627/expert_verified/points_label/74cc57ea0e2e06dbe4106b1d06dc89b3.seg 03001627\n02691156/points/d72a483cf8a0cf2bbbf3143b1cb6076a.pts 
02691156/expert_verified/points_label/d72a483cf8a0cf2bbbf3143b1cb6076a.seg 02691156\n03001627/points/9c7b2ed3770d1a6ea6fee8e2140acec9.pts 03001627/expert_verified/points_label/9c7b2ed3770d1a6ea6fee8e2140acec9.seg 03001627\n04379243/points/28fb9a81898f88c4ae8375def5e736d8.pts 04379243/expert_verified/points_label/28fb9a81898f88c4ae8375def5e736d8.seg 04379243\n03636649/points/c0b0d7e15d3dfab1733c22d8b8e1c33d.pts 03636649/expert_verified/points_label/c0b0d7e15d3dfab1733c22d8b8e1c33d.seg 03636649\n03001627/points/bb04dc0b336abf4b263915c09bc4854f.pts 03001627/expert_verified/points_label/bb04dc0b336abf4b263915c09bc4854f.seg 03001627\n03001627/points/6caccdad9f8d4f0a7f1cdfc0a8f38f2e.pts 03001627/expert_verified/points_label/6caccdad9f8d4f0a7f1cdfc0a8f38f2e.seg 03001627\n04379243/points/86ad91ef08c53dd77189b31b3e8c8ef3.pts 04379243/expert_verified/points_label/86ad91ef08c53dd77189b31b3e8c8ef3.seg 04379243\n03790512/points/80e717f07645a4a0b37378f3c85478b4.pts 03790512/expert_verified/points_label/80e717f07645a4a0b37378f3c85478b4.seg 03790512\n02691156/points/7d226c520a29c7705e28caa3b26a73fd.pts 02691156/expert_verified/points_label/7d226c520a29c7705e28caa3b26a73fd.seg 02691156\n04379243/points/89c095a52766ecb05d2ac47f638a4ea4.pts 04379243/expert_verified/points_label/89c095a52766ecb05d2ac47f638a4ea4.seg 04379243\n04379243/points/7b92f6facc2a27bc84cc0348a73b80c3.pts 04379243/expert_verified/points_label/7b92f6facc2a27bc84cc0348a73b80c3.seg 04379243\n04379243/points/d578287c4a9452efa9af104529ef47c3.pts 04379243/expert_verified/points_label/d578287c4a9452efa9af104529ef47c3.seg 04379243\n03636649/points/1475fe59961fc726f096eadaad23f93d.pts 03636649/expert_verified/points_label/1475fe59961fc726f096eadaad23f93d.seg 03636649\n03790512/points/7d75e8200565ffa7b37378f3c85478b4.pts 03790512/expert_verified/points_label/7d75e8200565ffa7b37378f3c85478b4.seg 03790512\n04379243/points/852826a94cce36ea9f1deb04fb8ae481.pts 
04379243/expert_verified/points_label/852826a94cce36ea9f1deb04fb8ae481.seg 04379243\n03001627/points/9c50878c91aeb8126bb6bc0db07c71e8.pts 03001627/expert_verified/points_label/9c50878c91aeb8126bb6bc0db07c71e8.seg 03001627\n02691156/points/ce827e4c857d553f71d03b466c72ce41.pts 02691156/expert_verified/points_label/ce827e4c857d553f71d03b466c72ce41.seg 02691156\n03001627/points/3aab16309520fb21dc0a8cba62d9a78a.pts 03001627/expert_verified/points_label/3aab16309520fb21dc0a8cba62d9a78a.seg 03001627\n03001627/points/697cfbe6e043136b737a00f007529fbf.pts 03001627/expert_verified/points_label/697cfbe6e043136b737a00f007529fbf.seg 03001627\n04379243/points/fd7769d0eba554c53def89b32cef8e45.pts 04379243/expert_verified/points_label/fd7769d0eba554c53def89b32cef8e45.seg 04379243\n03948459/points/d7e86e0e5b1982d4bf0ab4d7096d87f2.pts 03948459/expert_verified/points_label/d7e86e0e5b1982d4bf0ab4d7096d87f2.seg 03948459\n03001627/points/70cb8d70d961ca48b04cb542e2c50eb4.pts 03001627/expert_verified/points_label/70cb8d70d961ca48b04cb542e2c50eb4.seg 03001627\n03636649/points/c26b7862f2afb7ee4b3c42e318f3affc.pts 03636649/expert_verified/points_label/c26b7862f2afb7ee4b3c42e318f3affc.seg 03636649\n03624134/points/906b20dc0a5a5022714112b147c95c8b.pts 03624134/expert_verified/points_label/906b20dc0a5a5022714112b147c95c8b.seg 03624134\n03001627/points/f5caa9b5ada31a8b3cf15c77de45986.pts 03001627/expert_verified/points_label/f5caa9b5ada31a8b3cf15c77de45986.seg 03001627\n04379243/points/6110d87def4fa88c154c6bbaeb7d331f.pts 04379243/expert_verified/points_label/6110d87def4fa88c154c6bbaeb7d331f.seg 04379243\n03642806/points/b5f6fd84a3f44ddb1aa47689117a61e1.pts 03642806/expert_verified/points_label/b5f6fd84a3f44ddb1aa47689117a61e1.seg 03642806\n03001627/points/95317d46812e4ed4df5aea2392d894b4.pts 03001627/expert_verified/points_label/95317d46812e4ed4df5aea2392d894b4.seg 03001627\n02691156/points/471ca950dbdf0c6c5f80f808704d6409.pts 
02691156/expert_verified/points_label/471ca950dbdf0c6c5f80f808704d6409.seg 02691156\n04379243/points/c9f85a671d551086d61f9b2773e1d72a.pts 04379243/expert_verified/points_label/c9f85a671d551086d61f9b2773e1d72a.seg 04379243\n04379243/points/70f1b5f74faa9bda664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/70f1b5f74faa9bda664b3b9b23ddfcbc.seg 04379243\n02691156/points/9a266b3a734e374687bf26680c510802.pts 02691156/expert_verified/points_label/9a266b3a734e374687bf26680c510802.seg 02691156\n03001627/points/4c0983329afcd06f730e89ca0d2d13c3.pts 03001627/expert_verified/points_label/4c0983329afcd06f730e89ca0d2d13c3.seg 03001627\n04379243/points/a7172fa4177661f4858699aaad4acee4.pts 04379243/expert_verified/points_label/a7172fa4177661f4858699aaad4acee4.seg 04379243\n04379243/points/504d908a55f3e0c764810cc21086da42.pts 04379243/expert_verified/points_label/504d908a55f3e0c764810cc21086da42.seg 04379243\n03948459/points/7ba9f65e926d5e3e6fe695987d47043.pts 03948459/expert_verified/points_label/7ba9f65e926d5e3e6fe695987d47043.seg 03948459\n04379243/points/5b546ef5de5d10f3ecc9201d3d846bc1.pts 04379243/expert_verified/points_label/5b546ef5de5d10f3ecc9201d3d846bc1.seg 04379243\n04379243/points/80f986ae572fce791429f9a19502375a.pts 04379243/expert_verified/points_label/80f986ae572fce791429f9a19502375a.seg 04379243\n04379243/points/fd7a579772b195532de421c2ab5cfb52.pts 04379243/expert_verified/points_label/fd7a579772b195532de421c2ab5cfb52.seg 04379243\n03001627/points/e09466e9c122dbfdf51f77a6d7299806.pts 03001627/expert_verified/points_label/e09466e9c122dbfdf51f77a6d7299806.seg 03001627\n04379243/points/2a80c95b4bbcb73d87ed2480ebb0f3d2.pts 04379243/expert_verified/points_label/2a80c95b4bbcb73d87ed2480ebb0f3d2.seg 04379243\n03467517/points/e0d74618e316b0f16d9376f644442e99.pts 03467517/expert_verified/points_label/e0d74618e316b0f16d9376f644442e99.seg 03467517\n03001627/points/587ebb2aa71acfe644dd3aaee16d3f4c.pts 
03001627/expert_verified/points_label/587ebb2aa71acfe644dd3aaee16d3f4c.seg 03001627\n03467517/points/10d2c216c70b788485b61f146daff2fb.pts 03467517/expert_verified/points_label/10d2c216c70b788485b61f146daff2fb.seg 03467517\n04379243/points/3c72ddd0dca19bbedcfcef693e7ec696.pts 04379243/expert_verified/points_label/3c72ddd0dca19bbedcfcef693e7ec696.seg 04379243\n03001627/points/2742c0a5e984d92fa0dcc52ca811e565.pts 03001627/expert_verified/points_label/2742c0a5e984d92fa0dcc52ca811e565.seg 03001627\n03624134/points/792f252dcb06f042dd56c1edf3f6e336.pts 03624134/expert_verified/points_label/792f252dcb06f042dd56c1edf3f6e336.seg 03624134\n02691156/points/8fa9e2e8dbed43911f32208e53f871eb.pts 02691156/expert_verified/points_label/8fa9e2e8dbed43911f32208e53f871eb.seg 02691156\n03001627/points/d4f5c3e3eab52d0a3334fb6668ccd834.pts 03001627/expert_verified/points_label/d4f5c3e3eab52d0a3334fb6668ccd834.seg 03001627\n03642806/points/520d98e360cf44ec8139dd63d55edc44.pts 03642806/expert_verified/points_label/520d98e360cf44ec8139dd63d55edc44.seg 03642806\n03467517/points/2eba922263fc1580cc010a80df5d3c87.pts 03467517/expert_verified/points_label/2eba922263fc1580cc010a80df5d3c87.seg 03467517\n04379243/points/53c11596c3fc36a8a5094cb6d104b35.pts 04379243/expert_verified/points_label/53c11596c3fc36a8a5094cb6d104b35.seg 04379243\n03467517/points/265009e163bf5c6f69da8e7f9a803d12.pts 03467517/expert_verified/points_label/265009e163bf5c6f69da8e7f9a803d12.seg 03467517\n04379243/points/fbdf9bffeb353474c3a767747b75e56.pts 04379243/expert_verified/points_label/fbdf9bffeb353474c3a767747b75e56.seg 04379243\n03636649/points/b4af7e9a7338a9a3225afab722d9fd2.pts 03636649/expert_verified/points_label/b4af7e9a7338a9a3225afab722d9fd2.seg 03636649\n03001627/points/55eeb952519ceb87c3bd24f986301745.pts 03001627/expert_verified/points_label/55eeb952519ceb87c3bd24f986301745.seg 03001627\n04379243/points/2259e09ebd0ed2befebad4f49b26ec52.pts 
04379243/expert_verified/points_label/2259e09ebd0ed2befebad4f49b26ec52.seg 04379243\n04379243/points/63fedc0334f5552dbec3a71604e140e3.pts 04379243/expert_verified/points_label/63fedc0334f5552dbec3a71604e140e3.seg 04379243\n03001627/points/70ac5cb405df84575e62305d14755686.pts 03001627/expert_verified/points_label/70ac5cb405df84575e62305d14755686.seg 03001627\n03001627/points/3f41b4339ebd59c1c397356311cbeea4.pts 03001627/expert_verified/points_label/3f41b4339ebd59c1c397356311cbeea4.seg 03001627\n04379243/points/10bb44a54a12a74e4719088c8e42c6ab.pts 04379243/expert_verified/points_label/10bb44a54a12a74e4719088c8e42c6ab.seg 04379243\n04379243/points/a83cda80e5c5a0fc3719086e0b4ab8be.pts 04379243/expert_verified/points_label/a83cda80e5c5a0fc3719086e0b4ab8be.seg 04379243\n04379243/points/74983e99e7606eb114708467db3d00e2.pts 04379243/expert_verified/points_label/74983e99e7606eb114708467db3d00e2.seg 04379243\n03001627/points/e052eaa1d5bbe795ded10515704c9720.pts 03001627/expert_verified/points_label/e052eaa1d5bbe795ded10515704c9720.seg 03001627\n02691156/points/35892510dcd7cebb87bf26680c510802.pts 02691156/expert_verified/points_label/35892510dcd7cebb87bf26680c510802.seg 02691156\n03001627/points/7f73cc6c1c9121a9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/7f73cc6c1c9121a9b9f2eb77f5e247e.seg 03001627\n03001627/points/2a8554af80cfa5e719fb4103277a6b93.pts 03001627/expert_verified/points_label/2a8554af80cfa5e719fb4103277a6b93.seg 03001627\n04379243/points/f82a5f3c2a57655d825da2b9ec9c8c29.pts 04379243/expert_verified/points_label/f82a5f3c2a57655d825da2b9ec9c8c29.seg 04379243\n02691156/points/319cf93077118d19f64801ad2940cdd5.pts 02691156/expert_verified/points_label/319cf93077118d19f64801ad2940cdd5.seg 02691156\n03790512/points/5bb3597d49c58017b37378f3c85478b4.pts 03790512/expert_verified/points_label/5bb3597d49c58017b37378f3c85478b4.seg 03790512\n02958343/points/17926c1ef484b73e6758a098566bc94e.pts 
02958343/expert_verified/points_label/17926c1ef484b73e6758a098566bc94e.seg 02958343\n04379243/points/345c1bb95b12ff8c013a7bed5288654.pts 04379243/expert_verified/points_label/345c1bb95b12ff8c013a7bed5288654.seg 04379243\n03001627/points/3b788994cd578990c35131da26f8061a.pts 03001627/expert_verified/points_label/3b788994cd578990c35131da26f8061a.seg 03001627\n03636649/points/c25cc72cd06852e75bbea6ee257e41cc.pts 03636649/expert_verified/points_label/c25cc72cd06852e75bbea6ee257e41cc.seg 03636649\n03001627/points/4e4570768f981ca7b95617254e8005c0.pts 03001627/expert_verified/points_label/4e4570768f981ca7b95617254e8005c0.seg 03001627\n03642806/points/ef6d92c90aeabf5becae27d182a3e41c.pts 03642806/expert_verified/points_label/ef6d92c90aeabf5becae27d182a3e41c.seg 03642806\n04379243/points/97718e2651d22b3a74740f837351e7eb.pts 04379243/expert_verified/points_label/97718e2651d22b3a74740f837351e7eb.seg 04379243\n03948459/points/1f646ff59cabdddcd810dcd63f342aca.pts 03948459/expert_verified/points_label/1f646ff59cabdddcd810dcd63f342aca.seg 03948459\n02958343/points/74f7b559d6af926012f2e446484bbaf7.pts 02958343/expert_verified/points_label/74f7b559d6af926012f2e446484bbaf7.seg 02958343\n03001627/points/8b3619396de4df10db8860d0872e9c55.pts 03001627/expert_verified/points_label/8b3619396de4df10db8860d0872e9c55.seg 03001627\n03001627/points/44ddb3d46266bb0ffebad4f49b26ec52.pts 03001627/expert_verified/points_label/44ddb3d46266bb0ffebad4f49b26ec52.seg 03001627\n03001627/points/a5f300f3975497fa9dcf2183c858e6e5.pts 03001627/expert_verified/points_label/a5f300f3975497fa9dcf2183c858e6e5.seg 03001627\n03467517/points/113b65f0e68314737c481698bd5233b4.pts 03467517/expert_verified/points_label/113b65f0e68314737c481698bd5233b4.seg 03467517\n03001627/points/49795a9ebd9a9c6d2c697f0a1454869.pts 03001627/expert_verified/points_label/49795a9ebd9a9c6d2c697f0a1454869.seg 03001627\n03001627/points/5822ae77b06bea3091da37ff8bdd2524.pts 
03001627/expert_verified/points_label/5822ae77b06bea3091da37ff8bdd2524.seg 03001627\n03467517/points/15222c5926c7058cc6df7dab8e567ef6.pts 03467517/expert_verified/points_label/15222c5926c7058cc6df7dab8e567ef6.seg 03467517\n02691156/points/14d9c576d06622198f52dc705c3109b9.pts 02691156/expert_verified/points_label/14d9c576d06622198f52dc705c3109b9.seg 02691156\n04379243/points/62ae9ded861138be9d2be74cfb51ade1.pts 04379243/expert_verified/points_label/62ae9ded861138be9d2be74cfb51ade1.seg 04379243\n02958343/points/7b067be3aa39b1a124853ec273f6c1d2.pts 02958343/expert_verified/points_label/7b067be3aa39b1a124853ec273f6c1d2.seg 02958343\n03636649/points/66cf69a98ff895e2b55fde51a411949f.pts 03636649/expert_verified/points_label/66cf69a98ff895e2b55fde51a411949f.seg 03636649\n04379243/points/3253f2c59e6bd2a119fb4103277a6b93.pts 04379243/expert_verified/points_label/3253f2c59e6bd2a119fb4103277a6b93.seg 04379243\n02691156/points/fe0c4db38fb6399990b1d6deb98feec6.pts 02691156/expert_verified/points_label/fe0c4db38fb6399990b1d6deb98feec6.seg 02691156\n02691156/points/6d93492543d1087eb87697d3904b168b.pts 02691156/expert_verified/points_label/6d93492543d1087eb87697d3904b168b.seg 02691156\n03636649/points/402f7ce2b87e7d1ac066b9622c005c53.pts 03636649/expert_verified/points_label/402f7ce2b87e7d1ac066b9622c005c53.seg 03636649\n04379243/points/272a4cf3cfff3eb1e173cee47fbaa88.pts 04379243/expert_verified/points_label/272a4cf3cfff3eb1e173cee47fbaa88.seg 04379243\n02691156/points/6420a3ff5e526d59e16519c843f95ce0.pts 02691156/expert_verified/points_label/6420a3ff5e526d59e16519c843f95ce0.seg 02691156\n03001627/points/487040c5fdc68fdfe6cfc789522bfbab.pts 03001627/expert_verified/points_label/487040c5fdc68fdfe6cfc789522bfbab.seg 03001627\n04379243/points/8f48ccd17a15baf5ce01c07526cf2aa4.pts 04379243/expert_verified/points_label/8f48ccd17a15baf5ce01c07526cf2aa4.seg 04379243\n03001627/points/40e5d8e71ee3902a31358207d42bcb21.pts 
03001627/expert_verified/points_label/40e5d8e71ee3902a31358207d42bcb21.seg 03001627\n03636649/points/68491d576b5d35aade8e7376ce4e111f.pts 03636649/expert_verified/points_label/68491d576b5d35aade8e7376ce4e111f.seg 03636649\n03467517/points/80aa2f0d66100844925eded29d6897b9.pts 03467517/expert_verified/points_label/80aa2f0d66100844925eded29d6897b9.seg 03467517\n03001627/points/7929676e756dcd41577b5d737869717e.pts 03001627/expert_verified/points_label/7929676e756dcd41577b5d737869717e.seg 03001627\n03001627/points/2cf7ccf97b09187fcb7547c95fbdff26.pts 03001627/expert_verified/points_label/2cf7ccf97b09187fcb7547c95fbdff26.seg 03001627\n02691156/points/e8409b544c626028a9b2becd26dc2fc1.pts 02691156/expert_verified/points_label/e8409b544c626028a9b2becd26dc2fc1.seg 02691156\n02691156/points/1e2de00cf19a0a33554ccf8c30febe7.pts 02691156/expert_verified/points_label/1e2de00cf19a0a33554ccf8c30febe7.seg 02691156\n02691156/points/8f40518bd30467151e5ae32cb9e3711f.pts 02691156/expert_verified/points_label/8f40518bd30467151e5ae32cb9e3711f.seg 02691156\n02958343/points/4f0147c8a158087a4c19dab9f2c7c52d.pts 02958343/expert_verified/points_label/4f0147c8a158087a4c19dab9f2c7c52d.seg 02958343\n03624134/points/954fb0819736737a1b9c8e2fdbfc1118.pts 03624134/expert_verified/points_label/954fb0819736737a1b9c8e2fdbfc1118.seg 03624134\n04379243/points/415a08a66b8527519f803a8da27dd9a9.pts 04379243/expert_verified/points_label/415a08a66b8527519f803a8da27dd9a9.seg 04379243\n03001627/points/4bdbecfbc925219157915a20ae9ec6b6.pts 03001627/expert_verified/points_label/4bdbecfbc925219157915a20ae9ec6b6.seg 03001627\n03624134/points/2f74196bd5cb462727c767f081f1365a.pts 03624134/expert_verified/points_label/2f74196bd5cb462727c767f081f1365a.seg 03624134\n02958343/points/b5b6b09711cbee6daa44bfa127abe4bb.pts 02958343/expert_verified/points_label/b5b6b09711cbee6daa44bfa127abe4bb.seg 02958343\n03001627/points/43e74f15a986eb626a90f735365ac29e.pts 
03001627/expert_verified/points_label/43e74f15a986eb626a90f735365ac29e.seg 03001627\n03624134/points/385bb539629cd6991dd89e5fcd05911a.pts 03624134/expert_verified/points_label/385bb539629cd6991dd89e5fcd05911a.seg 03624134\n03642806/points/fdec2b8af5dd988cef56c22fd326c67.pts 03642806/expert_verified/points_label/fdec2b8af5dd988cef56c22fd326c67.seg 03642806\n02958343/points/244a8476648bd073834daea73aa18748.pts 02958343/expert_verified/points_label/244a8476648bd073834daea73aa18748.seg 02958343\n03467517/points/d91b0745e57f6508dc6782957fd2f5d2.pts 03467517/expert_verified/points_label/d91b0745e57f6508dc6782957fd2f5d2.seg 03467517\n04379243/points/83f1ff21744e71ad2690c0a5b39562ad.pts 04379243/expert_verified/points_label/83f1ff21744e71ad2690c0a5b39562ad.seg 04379243\n03001627/points/49aa713bec70ee1f1104b8f54582c707.pts 03001627/expert_verified/points_label/49aa713bec70ee1f1104b8f54582c707.seg 03001627\n03001627/points/9231ef07326eae09b04cb542e2c50eb4.pts 03001627/expert_verified/points_label/9231ef07326eae09b04cb542e2c50eb4.seg 03001627\n03642806/points/b211cfb105e9f97e6436916a86a90ed7.pts 03642806/expert_verified/points_label/b211cfb105e9f97e6436916a86a90ed7.seg 03642806\n03001627/points/fdfedb5bb8cd35374233148ffd345970.pts 03001627/expert_verified/points_label/fdfedb5bb8cd35374233148ffd345970.seg 03001627\n04379243/points/3037fac5bc67207e23fa92d98173c06f.pts 04379243/expert_verified/points_label/3037fac5bc67207e23fa92d98173c06f.seg 04379243\n04379243/points/40d0dd3fe786e120d75c27ddd792e41a.pts 04379243/expert_verified/points_label/40d0dd3fe786e120d75c27ddd792e41a.seg 04379243\n03001627/points/e6ea5e70c2f29d881e8fd793667dc14f.pts 03001627/expert_verified/points_label/e6ea5e70c2f29d881e8fd793667dc14f.seg 03001627\n04379243/points/9502eecc3a057115b129901f80d24b7b.pts 04379243/expert_verified/points_label/9502eecc3a057115b129901f80d24b7b.seg 04379243\n03001627/points/e68bb6f55e2454fac7f1f7c0570e288d.pts 
03001627/expert_verified/points_label/e68bb6f55e2454fac7f1f7c0570e288d.seg 03001627\n02691156/points/9bd8d0fa75bc21c5e3375a6b96a1d765.pts 02691156/expert_verified/points_label/9bd8d0fa75bc21c5e3375a6b96a1d765.seg 02691156\n02958343/points/1714b6e57c8c4983fb1aad5dae793ff4.pts 02958343/expert_verified/points_label/1714b6e57c8c4983fb1aad5dae793ff4.seg 02958343\n02691156/points/8a84a26158da1db7668586dcfb752ad.pts 02691156/expert_verified/points_label/8a84a26158da1db7668586dcfb752ad.seg 02691156\n02691156/points/36d8c865f766e3e097872638b21438e3.pts 02691156/expert_verified/points_label/36d8c865f766e3e097872638b21438e3.seg 02691156\n03001627/points/96e8a51b1680b756e99481ddc3bbddfb.pts 03001627/expert_verified/points_label/96e8a51b1680b756e99481ddc3bbddfb.seg 03001627\n02958343/points/37ad66d0433beb633df8f4ac45647158.pts 02958343/expert_verified/points_label/37ad66d0433beb633df8f4ac45647158.seg 02958343\n04379243/points/56a57ef7c3385c9f2f38c0d2792fb5e.pts 04379243/expert_verified/points_label/56a57ef7c3385c9f2f38c0d2792fb5e.seg 04379243\n03467517/points/dbdf45cab0adbded1f260c1b356c52ce.pts 03467517/expert_verified/points_label/dbdf45cab0adbded1f260c1b356c52ce.seg 03467517\n04379243/points/868bab5194e93577858699aaad4acee4.pts 04379243/expert_verified/points_label/868bab5194e93577858699aaad4acee4.seg 04379243\n04379243/points/2bbd62449b56abee659dda512294c744.pts 04379243/expert_verified/points_label/2bbd62449b56abee659dda512294c744.seg 04379243\n04379243/points/a18aa2d20d516333daf1f22b6daf05ed.pts 04379243/expert_verified/points_label/a18aa2d20d516333daf1f22b6daf05ed.seg 04379243\n03636649/points/7a2362fbddbee9a4d197f67767b32741.pts 03636649/expert_verified/points_label/7a2362fbddbee9a4d197f67767b32741.seg 03636649\n03636649/points/f9259d31df38bd5decd204cd7180226d.pts 03636649/expert_verified/points_label/f9259d31df38bd5decd204cd7180226d.seg 03636649\n04379243/points/54e85b248576c4eb57cd80d4b17e7e11.pts 
04379243/expert_verified/points_label/54e85b248576c4eb57cd80d4b17e7e11.seg 04379243\n04379243/points/1299579419252fa954b02959579aa6bb.pts 04379243/expert_verified/points_label/1299579419252fa954b02959579aa6bb.seg 04379243\n04379243/points/49ad167497a2af8c9672e39f89e4622e.pts 04379243/expert_verified/points_label/49ad167497a2af8c9672e39f89e4622e.seg 04379243\n04379243/points/55221b101eec29dc656a19d1d18fdbac.pts 04379243/expert_verified/points_label/55221b101eec29dc656a19d1d18fdbac.seg 04379243\n04379243/points/e8870f3190f6b8d4bd1025bd755a15aa.pts 04379243/expert_verified/points_label/e8870f3190f6b8d4bd1025bd755a15aa.seg 04379243\n02691156/points/9818f0b88fed05b24b0a1bcf2fb497ec.pts 02691156/expert_verified/points_label/9818f0b88fed05b24b0a1bcf2fb497ec.seg 02691156\n02691156/points/9ba460913d86466f62347b4731688b0f.pts 02691156/expert_verified/points_label/9ba460913d86466f62347b4731688b0f.seg 02691156\n04379243/points/574447022c4473d455f46d55537192b6.pts 04379243/expert_verified/points_label/574447022c4473d455f46d55537192b6.seg 04379243\n04379243/points/7b5b7bfa8580e913e2580b23e60e4674.pts 04379243/expert_verified/points_label/7b5b7bfa8580e913e2580b23e60e4674.seg 04379243\n04225987/points/48f26ddc704fec2f379c6a1d59ef7283.pts 04225987/expert_verified/points_label/48f26ddc704fec2f379c6a1d59ef7283.seg 04225987\n04379243/points/b7821e69687d767aab610b0c94236463.pts 04379243/expert_verified/points_label/b7821e69687d767aab610b0c94236463.seg 04379243\n02691156/points/e42443669339a6c1a5a118bd15e6e34f.pts 02691156/expert_verified/points_label/e42443669339a6c1a5a118bd15e6e34f.seg 02691156\n04379243/points/2444551d00693a0fab610b0c94236463.pts 04379243/expert_verified/points_label/2444551d00693a0fab610b0c94236463.seg 04379243\n03467517/points/5e452914684ea7fc398707f20de9db08.pts 03467517/expert_verified/points_label/5e452914684ea7fc398707f20de9db08.seg 03467517\n03001627/points/cc6840207c0cf55db30e42459dcb06f.pts 
03001627/expert_verified/points_label/cc6840207c0cf55db30e42459dcb06f.seg 03001627\n04379243/points/9046b2e610065fe5a5d95e73eecd308a.pts 04379243/expert_verified/points_label/9046b2e610065fe5a5d95e73eecd308a.seg 04379243\n03467517/points/c651a91562b86ed8edb9371445f615ae.pts 03467517/expert_verified/points_label/c651a91562b86ed8edb9371445f615ae.seg 03467517\n03001627/points/9bb6d3d76d4f5ba94b3c42e318f3affc.pts 03001627/expert_verified/points_label/9bb6d3d76d4f5ba94b3c42e318f3affc.seg 03001627\n03001627/points/7fb336186da77367962800be79c6e52.pts 03001627/expert_verified/points_label/7fb336186da77367962800be79c6e52.seg 03001627\n04379243/points/b69b2ff85d0ec661d8f9dd7647048a0c.pts 04379243/expert_verified/points_label/b69b2ff85d0ec661d8f9dd7647048a0c.seg 04379243\n03001627/points/d2815e678f173616e6cfc789522bfbab.pts 03001627/expert_verified/points_label/d2815e678f173616e6cfc789522bfbab.seg 03001627\n03636649/points/b8350fcf08ff0b2ca950bf8f33cff658.pts 03636649/expert_verified/points_label/b8350fcf08ff0b2ca950bf8f33cff658.seg 03636649\n04379243/points/202e7b5c3ec079e299e8bf807e902261.pts 04379243/expert_verified/points_label/202e7b5c3ec079e299e8bf807e902261.seg 04379243\n03001627/points/c8938f54fecab41e77cd061c90fcdb44.pts 03001627/expert_verified/points_label/c8938f54fecab41e77cd061c90fcdb44.seg 03001627\n04379243/points/894e095c7036c8411933ffef19678834.pts 04379243/expert_verified/points_label/894e095c7036c8411933ffef19678834.seg 04379243\n03001627/points/4362e715455f42ba9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/4362e715455f42ba9b9f2eb77f5e247e.seg 03001627\n04379243/points/8963760f8bec0fee7f807d3c406ee.pts 04379243/expert_verified/points_label/8963760f8bec0fee7f807d3c406ee.seg 04379243\n03948459/points/4acb6494e3aaeb39998978df244b5bd.pts 03948459/expert_verified/points_label/4acb6494e3aaeb39998978df244b5bd.seg 03948459\n03636649/points/c1b939cc403a0662664b3b9b23ddfcbc.pts 03636649/expert_verified/points_label/c1b939cc403a0662664b3b9b23ddfcbc.seg 
03636649\n04379243/points/e64876f5590e6fb7c3bd24f986301745.pts 04379243/expert_verified/points_label/e64876f5590e6fb7c3bd24f986301745.seg 04379243\n02691156/points/b8ce3803485b620b2c674305897e1782.pts 02691156/expert_verified/points_label/b8ce3803485b620b2c674305897e1782.seg 02691156\n03636649/points/a60c6cf7d4893f2ba26bf7a8fd4719ad.pts 03636649/expert_verified/points_label/a60c6cf7d4893f2ba26bf7a8fd4719ad.seg 03636649\n04379243/points/6ca66a443e651c1423500a5b036df62e.pts 04379243/expert_verified/points_label/6ca66a443e651c1423500a5b036df62e.seg 04379243\n04379243/points/51930b149cf6125373fa072a624ce947.pts 04379243/expert_verified/points_label/51930b149cf6125373fa072a624ce947.seg 04379243\n02691156/points/eb658ff31f0becea1d0f8853f6d023e3.pts 02691156/expert_verified/points_label/eb658ff31f0becea1d0f8853f6d023e3.seg 02691156\n03642806/points/3f45cde6f7a13138e256fb3794905772.pts 03642806/expert_verified/points_label/3f45cde6f7a13138e256fb3794905772.seg 03642806\n03001627/points/ea572cc193b804399c66df0f068d2a36.pts 03001627/expert_verified/points_label/ea572cc193b804399c66df0f068d2a36.seg 03001627\n03001627/points/9e0a0ad80be6df7789d2595edb5088ee.pts 03001627/expert_verified/points_label/9e0a0ad80be6df7789d2595edb5088ee.seg 03001627\n04379243/points/8eed35fd5b777acf58316b27df6c8e87.pts 04379243/expert_verified/points_label/8eed35fd5b777acf58316b27df6c8e87.seg 04379243\n03642806/points/5baaa726f51cd09b507f3bf1d3472684.pts 03642806/expert_verified/points_label/5baaa726f51cd09b507f3bf1d3472684.seg 03642806\n02691156/points/789f032dccc6092977b7d0d4764c121d.pts 02691156/expert_verified/points_label/789f032dccc6092977b7d0d4764c121d.seg 02691156\n03001627/points/9682d28e03acd2e3735013f3db728e20.pts 03001627/expert_verified/points_label/9682d28e03acd2e3735013f3db728e20.seg 03001627\n02958343/points/b50f9931670e25ef44ccce632b473b8c.pts 02958343/expert_verified/points_label/b50f9931670e25ef44ccce632b473b8c.seg 02958343\n03467517/points/d3972d599036251369da8e7f9a803d12.pts 
03467517/expert_verified/points_label/d3972d599036251369da8e7f9a803d12.seg 03467517\n02691156/points/329987191cce68bfe64acd170567d820.pts 02691156/expert_verified/points_label/329987191cce68bfe64acd170567d820.seg 02691156\n03636649/points/ab3e153cd23e992b576a354bb9319732.pts 03636649/expert_verified/points_label/ab3e153cd23e992b576a354bb9319732.seg 03636649\n04379243/points/f850a69b0d308fbc19fb4103277a6b93.pts 04379243/expert_verified/points_label/f850a69b0d308fbc19fb4103277a6b93.seg 04379243\n04379243/points/1645b28322131b6258c407efcf93be6b.pts 04379243/expert_verified/points_label/1645b28322131b6258c407efcf93be6b.seg 04379243\n03001627/points/195464ae11f6bfe1cba091e036bf65ed.pts 03001627/expert_verified/points_label/195464ae11f6bfe1cba091e036bf65ed.seg 03001627\n02691156/points/edd9583988b62c90328f15e6c60d0e90.pts 02691156/expert_verified/points_label/edd9583988b62c90328f15e6c60d0e90.seg 02691156\n04225987/points/36aaae334d636ec28043db94fbc8c982.pts 04225987/expert_verified/points_label/36aaae334d636ec28043db94fbc8c982.seg 04225987\n04379243/points/c3c467718eb9b2a313f96345312df593.pts 04379243/expert_verified/points_label/c3c467718eb9b2a313f96345312df593.seg 04379243\n02691156/points/a1848a4a69b14704ca8607f540cc62ba.pts 02691156/expert_verified/points_label/a1848a4a69b14704ca8607f540cc62ba.seg 02691156\n02958343/points/c8bd4d0ac34266ffaaa232d0915adae9.pts 02958343/expert_verified/points_label/c8bd4d0ac34266ffaaa232d0915adae9.seg 02958343\n04379243/points/ad61a5bc7cba29b88cc413950b617e8f.pts 04379243/expert_verified/points_label/ad61a5bc7cba29b88cc413950b617e8f.seg 04379243\n03642806/points/466ea85bb4653ba3a715ae636b111d77.pts 03642806/expert_verified/points_label/466ea85bb4653ba3a715ae636b111d77.seg 03642806\n03001627/points/e93714e5553f63619215045784774049.pts 03001627/expert_verified/points_label/e93714e5553f63619215045784774049.seg 03001627\n03636649/points/b88c9a7aaab268fb42b08fbc749346d6.pts 
03636649/expert_verified/points_label/b88c9a7aaab268fb42b08fbc749346d6.seg 03636649\n03636649/points/6ba931adfa36c7965208aab875b932bc.pts 03636649/expert_verified/points_label/6ba931adfa36c7965208aab875b932bc.seg 03636649\n03001627/points/e3479f55f5894bb3c7f1f7c0570e288d.pts 03001627/expert_verified/points_label/e3479f55f5894bb3c7f1f7c0570e288d.seg 03001627\n03467517/points/4c5288cc18896f8f352e5d4d2615db5b.pts 03467517/expert_verified/points_label/4c5288cc18896f8f352e5d4d2615db5b.seg 03467517\n03001627/points/631e102e9a689339b0ec386df15ab64f.pts 03001627/expert_verified/points_label/631e102e9a689339b0ec386df15ab64f.seg 03001627\n04379243/points/6daed91ae491c9cbe22ea6d770699e4b.pts 04379243/expert_verified/points_label/6daed91ae491c9cbe22ea6d770699e4b.seg 04379243\n03001627/points/40e73a326cf95d0361c93c4994c91bd1.pts 03001627/expert_verified/points_label/40e73a326cf95d0361c93c4994c91bd1.seg 03001627\n03467517/points/dc7708c870000008a24eeca91f583600.pts 03467517/expert_verified/points_label/dc7708c870000008a24eeca91f583600.seg 03467517\n03001627/points/1ac6531a337de85f2f7628d6bf38bcc4.pts 03001627/expert_verified/points_label/1ac6531a337de85f2f7628d6bf38bcc4.seg 03001627\n04379243/points/5191d64e9a1b9664bfdcc70dcc16baa1.pts 04379243/expert_verified/points_label/5191d64e9a1b9664bfdcc70dcc16baa1.seg 04379243\n03636649/points/c4dc0ac169c91ff29f8c3d2002c77ddb.pts 03636649/expert_verified/points_label/c4dc0ac169c91ff29f8c3d2002c77ddb.seg 03636649\n03624134/points/b8648ae17fb9937949f73a97204d432b.pts 03624134/expert_verified/points_label/b8648ae17fb9937949f73a97204d432b.seg 03624134\n04379243/points/a465210c23b0136d7afee304cce81d6f.pts 04379243/expert_verified/points_label/a465210c23b0136d7afee304cce81d6f.seg 04379243\n03001627/points/513686d6d63a1d8e577b5d737869717e.pts 03001627/expert_verified/points_label/513686d6d63a1d8e577b5d737869717e.seg 03001627\n03624134/points/bee1a473472639e25ca3862a7efa6401.pts 
03624134/expert_verified/points_label/bee1a473472639e25ca3862a7efa6401.seg 03624134\n02691156/points/adb3ea03d7b954255e9e2656aff7dd5b.pts 02691156/expert_verified/points_label/adb3ea03d7b954255e9e2656aff7dd5b.seg 02691156\n02691156/points/959f28c6724979ef9a6e43b878d5b335.pts 02691156/expert_verified/points_label/959f28c6724979ef9a6e43b878d5b335.seg 02691156\n04379243/points/dec1d2cf8a4563d36cb02543e4df83bf.pts 04379243/expert_verified/points_label/dec1d2cf8a4563d36cb02543e4df83bf.seg 04379243\n03790512/points/a9c432d1dc4034762a45a87054fa7272.pts 03790512/expert_verified/points_label/a9c432d1dc4034762a45a87054fa7272.seg 03790512\n03001627/points/1b5e876f3559c231532a8e162f399205.pts 03001627/expert_verified/points_label/1b5e876f3559c231532a8e162f399205.seg 03001627\n04379243/points/82e5309809e455d5f15fed2243deb166.pts 04379243/expert_verified/points_label/82e5309809e455d5f15fed2243deb166.seg 04379243\n03467517/points/8f1f54d337bf6ccac782e6226a4f593e.pts 03467517/expert_verified/points_label/8f1f54d337bf6ccac782e6226a4f593e.seg 03467517\n04379243/points/67d97102f9c54cc95512673aa47c7e3d.pts 04379243/expert_verified/points_label/67d97102f9c54cc95512673aa47c7e3d.seg 04379243\n02691156/points/e0cc4f538a8da2d65d3bbd70fc7759b7.pts 02691156/expert_verified/points_label/e0cc4f538a8da2d65d3bbd70fc7759b7.seg 02691156\n04379243/points/d0008b042256fb5f7ab911835312d4f1.pts 04379243/expert_verified/points_label/d0008b042256fb5f7ab911835312d4f1.seg 04379243\n03467517/points/44c05e219618a6395b3335548350bdee.pts 03467517/expert_verified/points_label/44c05e219618a6395b3335548350bdee.seg 03467517\n03001627/points/3f7808c221b01668b4d174e5c61f344.pts 03001627/expert_verified/points_label/3f7808c221b01668b4d174e5c61f344.seg 03001627\n03467517/points/51abcb617b2faf3a24eeca91f583600.pts 03467517/expert_verified/points_label/51abcb617b2faf3a24eeca91f583600.seg 03467517\n03636649/points/f38370fc4c112017a6e7138fdd58748.pts 
03636649/expert_verified/points_label/f38370fc4c112017a6e7138fdd58748.seg 03636649\n03001627/points/37607ea19e352af4fffc97a61124b1a9.pts 03001627/expert_verified/points_label/37607ea19e352af4fffc97a61124b1a9.seg 03001627\n02958343/points/2cb6de89f5b6e702b626f6a649199824.pts 02958343/expert_verified/points_label/2cb6de89f5b6e702b626f6a649199824.seg 02958343\n04099429/points/d781243cc1d1d2e91a0ec553feb1c2c3.pts 04099429/expert_verified/points_label/d781243cc1d1d2e91a0ec553feb1c2c3.seg 04099429\n04379243/points/900afcc9f0f5fbfd858699aaad4acee4.pts 04379243/expert_verified/points_label/900afcc9f0f5fbfd858699aaad4acee4.seg 04379243\n03001627/points/d13eb19745344ae5fb0eb7e753c06942.pts 03001627/expert_verified/points_label/d13eb19745344ae5fb0eb7e753c06942.seg 03001627\n02958343/points/5785192c95cdd67b704715417c0f83c1.pts 02958343/expert_verified/points_label/5785192c95cdd67b704715417c0f83c1.seg 02958343\n03001627/points/5bb5b15807158f71504721639e19f609.pts 03001627/expert_verified/points_label/5bb5b15807158f71504721639e19f609.seg 03001627\n03636649/points/ba05f660341b7b7b70be09f44cb2fef5.pts 03636649/expert_verified/points_label/ba05f660341b7b7b70be09f44cb2fef5.seg 03636649\n02691156/points/97066012fbca5983c74417871493eae8.pts 02691156/expert_verified/points_label/97066012fbca5983c74417871493eae8.seg 02691156\n03001627/points/4499729e53c858ae71a782a4379556c7.pts 03001627/expert_verified/points_label/4499729e53c858ae71a782a4379556c7.seg 03001627\n04379243/points/41d280b7db61ebddfebad4f49b26ec52.pts 04379243/expert_verified/points_label/41d280b7db61ebddfebad4f49b26ec52.seg 04379243\n02773838/points/30bf69aa24dbb3fc9de193e488fc4dce.pts 02773838/expert_verified/points_label/30bf69aa24dbb3fc9de193e488fc4dce.seg 02773838\n03467517/points/6c9a9c0e2af9d5b35f713e773d664ec2.pts 03467517/expert_verified/points_label/6c9a9c0e2af9d5b35f713e773d664ec2.seg 03467517\n04379243/points/f979c7a650d29ea819fb4103277a6b93.pts 
04379243/expert_verified/points_label/f979c7a650d29ea819fb4103277a6b93.seg 04379243\n03001627/points/b631b78c2dcc748cba5342d638d0c267.pts 03001627/expert_verified/points_label/b631b78c2dcc748cba5342d638d0c267.seg 03001627\n03467517/points/d2ad57f36e00c602baba3b7560fe62f4.pts 03467517/expert_verified/points_label/d2ad57f36e00c602baba3b7560fe62f4.seg 03467517\n04379243/points/5771d5a3084b3ca3a2d7b309863cb1b.pts 04379243/expert_verified/points_label/5771d5a3084b3ca3a2d7b309863cb1b.seg 04379243\n03636649/points/2d638c6b6b2feb9248da169d95204ce2.pts 03636649/expert_verified/points_label/2d638c6b6b2feb9248da169d95204ce2.seg 03636649\n02958343/points/63a4e46bbbd855fc2b63d3b2a8c4e8b.pts 02958343/expert_verified/points_label/63a4e46bbbd855fc2b63d3b2a8c4e8b.seg 02958343\n04379243/points/8c67fd5a15e8d9defebad4f49b26ec52.pts 04379243/expert_verified/points_label/8c67fd5a15e8d9defebad4f49b26ec52.seg 04379243\n03467517/points/28c3903b29f6b38363e148e250c0340d.pts 03467517/expert_verified/points_label/28c3903b29f6b38363e148e250c0340d.seg 03467517\n04379243/points/ab2967188299bea54cb0654f4cfa9684.pts 04379243/expert_verified/points_label/ab2967188299bea54cb0654f4cfa9684.seg 04379243\n02691156/points/a9a7f21271b3efbaf446f92b52bbd82a.pts 02691156/expert_verified/points_label/a9a7f21271b3efbaf446f92b52bbd82a.seg 02691156\n04379243/points/c3e43144fd61c56f19fb4103277a6b93.pts 04379243/expert_verified/points_label/c3e43144fd61c56f19fb4103277a6b93.seg 04379243\n03001627/points/7fcde5fc8e023dd2a6fee8e2140acec9.pts 03001627/expert_verified/points_label/7fcde5fc8e023dd2a6fee8e2140acec9.seg 03001627\n03790512/points/70d9cc5115bfedeeab548456bc75847f.pts 03790512/expert_verified/points_label/70d9cc5115bfedeeab548456bc75847f.seg 03790512\n03001627/points/3c0dd3719baecf3319fb4103277a6b93.pts 03001627/expert_verified/points_label/3c0dd3719baecf3319fb4103277a6b93.seg 03001627\n03636649/points/55077c2175d97b8889ab11a408196888.pts 
03636649/expert_verified/points_label/55077c2175d97b8889ab11a408196888.seg 03636649\n04379243/points/71fc8c7cdb48978282fa4d4f2c19b2ce.pts 04379243/expert_verified/points_label/71fc8c7cdb48978282fa4d4f2c19b2ce.seg 04379243\n04379243/points/f0d5eefef970fa4b9f2349486c570dd4.pts 04379243/expert_verified/points_label/f0d5eefef970fa4b9f2349486c570dd4.seg 04379243\n03642806/points/90c01fd78513bb99c9b20aa1b8066c46.pts 03642806/expert_verified/points_label/90c01fd78513bb99c9b20aa1b8066c46.seg 03642806\n04379243/points/ca6c07357ba5125b8e2adb29857f8a1.pts 04379243/expert_verified/points_label/ca6c07357ba5125b8e2adb29857f8a1.seg 04379243\n04379243/points/634bcd3197e337aafe4e4de1adda2150.pts 04379243/expert_verified/points_label/634bcd3197e337aafe4e4de1adda2150.seg 04379243\n04379243/points/7b411de42d4960eb6e25f3efedf6785f.pts 04379243/expert_verified/points_label/7b411de42d4960eb6e25f3efedf6785f.seg 04379243\n04379243/points/878414eb6e86494d9a8ef44e1d2c5b75.pts 04379243/expert_verified/points_label/878414eb6e86494d9a8ef44e1d2c5b75.seg 04379243\n03001627/points/f3fa7bd00b76f6a87a8a6b9421844d96.pts 03001627/expert_verified/points_label/f3fa7bd00b76f6a87a8a6b9421844d96.seg 03001627\n03467517/points/a2c1ee6a7ddb50a493f0194265a9746c.pts 03467517/expert_verified/points_label/a2c1ee6a7ddb50a493f0194265a9746c.seg 03467517\n04379243/points/25bc205f6de491f4ccde40b1205ec7ff.pts 04379243/expert_verified/points_label/25bc205f6de491f4ccde40b1205ec7ff.seg 04379243\n03636649/points/771d4def2e44bc169eb34048e600e1ea.pts 03636649/expert_verified/points_label/771d4def2e44bc169eb34048e600e1ea.seg 03636649\n03624134/points/6ebe2a22b8d9d70862a95b942081dfee.pts 03624134/expert_verified/points_label/6ebe2a22b8d9d70862a95b942081dfee.seg 03624134\n02691156/points/9b1fc3881a5335cb44012f72ba1e15a8.pts 02691156/expert_verified/points_label/9b1fc3881a5335cb44012f72ba1e15a8.seg 02691156\n03001627/points/3dc252fd90d82b18c9be65dfbd21428b.pts 
03001627/expert_verified/points_label/3dc252fd90d82b18c9be65dfbd21428b.seg 03001627\n04379243/points/f6f180c3e72caacb5077539b37310c29.pts 04379243/expert_verified/points_label/f6f180c3e72caacb5077539b37310c29.seg 04379243\n03642806/points/25bc168b214b54799e28e9cf32e5157.pts 03642806/expert_verified/points_label/25bc168b214b54799e28e9cf32e5157.seg 03642806\n04379243/points/ac9fae8af57729945eee45c00c4de9d3.pts 04379243/expert_verified/points_label/ac9fae8af57729945eee45c00c4de9d3.seg 04379243\n03001627/points/e8126f9e2d106620d2f33aaf794b5932.pts 03001627/expert_verified/points_label/e8126f9e2d106620d2f33aaf794b5932.seg 03001627\n03624134/points/3dc5a6d79ed591bda709dec9a148b2fe.pts 03624134/expert_verified/points_label/3dc5a6d79ed591bda709dec9a148b2fe.seg 03624134\n04379243/points/8f73278956fecb80327289c00b6dc9ca.pts 04379243/expert_verified/points_label/8f73278956fecb80327289c00b6dc9ca.seg 04379243\n03948459/points/5f46578efd2c65e5d4ac2f5fcaa742ac.pts 03948459/expert_verified/points_label/5f46578efd2c65e5d4ac2f5fcaa742ac.seg 03948459\n03624134/points/a05ea45d396c86784e52b614e584a543.pts 03624134/expert_verified/points_label/a05ea45d396c86784e52b614e584a543.seg 03624134\n03001627/points/cd939609247df917d9d3572bbd9cf789.pts 03001627/expert_verified/points_label/cd939609247df917d9d3572bbd9cf789.seg 03001627\n03261776/points/17c9866b42ae1831df4cfe396cee719e.pts 03261776/expert_verified/points_label/17c9866b42ae1831df4cfe396cee719e.seg 03261776\n03797390/points/3d3e993f7baa4d7ef1ff24a8b1564a36.pts 03797390/expert_verified/points_label/3d3e993f7baa4d7ef1ff24a8b1564a36.seg 03797390\n03467517/points/36b49aff54f6d7e893f0194265a9746c.pts 03467517/expert_verified/points_label/36b49aff54f6d7e893f0194265a9746c.seg 03467517\n02691156/points/48df2496242053da4ee0fb6a51564c3.pts 02691156/expert_verified/points_label/48df2496242053da4ee0fb6a51564c3.seg 02691156\n04379243/points/7ad23def902ea4f37b7a2c2624e46d0a.pts 
04379243/expert_verified/points_label/7ad23def902ea4f37b7a2c2624e46d0a.seg 04379243\n04379243/points/1a8fe5baa2d4b5f7ee84261b3d20656.pts 04379243/expert_verified/points_label/1a8fe5baa2d4b5f7ee84261b3d20656.seg 04379243\n03467517/points/d685415d4fcd3205a24eeca91f583600.pts 03467517/expert_verified/points_label/d685415d4fcd3205a24eeca91f583600.seg 03467517\n02958343/points/8e308d28d463427f43f0e92e826556b8.pts 02958343/expert_verified/points_label/8e308d28d463427f43f0e92e826556b8.seg 02958343\n04379243/points/dc68436ab1a576f6573d2c9ac4b23fdf.pts 04379243/expert_verified/points_label/dc68436ab1a576f6573d2c9ac4b23fdf.seg 04379243\n04379243/points/1a153612bcdab3e23cc149415a408229.pts 04379243/expert_verified/points_label/1a153612bcdab3e23cc149415a408229.seg 04379243\n03001627/points/19ce953da9aa8065d747a43c11e738e9.pts 03001627/expert_verified/points_label/19ce953da9aa8065d747a43c11e738e9.seg 03001627\n04379243/points/db2d4f781756e687d8864caa856253b.pts 04379243/expert_verified/points_label/db2d4f781756e687d8864caa856253b.seg 04379243\n04379243/points/d8f851bbc98dccc23fa92d98173c06f.pts 04379243/expert_verified/points_label/d8f851bbc98dccc23fa92d98173c06f.seg 04379243\n03467517/points/e585e31db7568c4cf0e1c0df18936d05.pts 03467517/expert_verified/points_label/e585e31db7568c4cf0e1c0df18936d05.seg 03467517\n03001627/points/98ac0106ad244505e04fc3fcc1c852e0.pts 03001627/expert_verified/points_label/98ac0106ad244505e04fc3fcc1c852e0.seg 03001627\n03001627/points/1b81441b7e597235d61420a53a0cb96d.pts 03001627/expert_verified/points_label/1b81441b7e597235d61420a53a0cb96d.seg 03001627\n03001627/points/918145be863f7aeaf050758b903e6054.pts 03001627/expert_verified/points_label/918145be863f7aeaf050758b903e6054.seg 03001627\n02691156/points/1af4b32eafffb0f7ee60c37cbf99c1c.pts 02691156/expert_verified/points_label/1af4b32eafffb0f7ee60c37cbf99c1c.seg 02691156\n03636649/points/f4e1a4032b1686cec35131da26f8061a.pts 03636649/expert_verified/points_label/f4e1a4032b1686cec35131da26f8061a.seg 
03636649\n04379243/points/9c4dfafdbd7f9b76c955e5ed03ef3a2f.pts 04379243/expert_verified/points_label/9c4dfafdbd7f9b76c955e5ed03ef3a2f.seg 04379243\n02691156/points/80b8f4da6b77eb66d208f79049825a82.pts 02691156/expert_verified/points_label/80b8f4da6b77eb66d208f79049825a82.seg 02691156\n03642806/points/de2e95eac460c361e862e3cac45aa769.pts 03642806/expert_verified/points_label/de2e95eac460c361e862e3cac45aa769.seg 03642806\n04379243/points/e2571e4eba2d9f5eab610b0c94236463.pts 04379243/expert_verified/points_label/e2571e4eba2d9f5eab610b0c94236463.seg 04379243\n04379243/points/a0445e4888d56666b9d7c2fc41e80228.pts 04379243/expert_verified/points_label/a0445e4888d56666b9d7c2fc41e80228.seg 04379243\n03001627/points/873c017f35957717b56a13a4b2372aa4.pts 03001627/expert_verified/points_label/873c017f35957717b56a13a4b2372aa4.seg 03001627\n03001627/points/3af90da238ac4ddbf91663a74ccd2338.pts 03001627/expert_verified/points_label/3af90da238ac4ddbf91663a74ccd2338.seg 03001627\n02958343/points/9698be0fd3516f01fbeda5389ab05f5f.pts 02958343/expert_verified/points_label/9698be0fd3516f01fbeda5389ab05f5f.seg 02958343\n03790512/points/655b9dd9425cc3a12a45a87054fa7272.pts 03790512/expert_verified/points_label/655b9dd9425cc3a12a45a87054fa7272.seg 03790512\n04379243/points/ec1c92efffb9ee78beedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/ec1c92efffb9ee78beedb4c8fd29e2d1.seg 04379243\n04379243/points/3b7fc97192e483ebb0bf045ee98272fc.pts 04379243/expert_verified/points_label/3b7fc97192e483ebb0bf045ee98272fc.seg 04379243\n03467517/points/8c3d3e69d03d3443e84e459fb01822f.pts 03467517/expert_verified/points_label/8c3d3e69d03d3443e84e459fb01822f.seg 03467517\n02691156/points/e0058b4948f87d3b87697d3904b168b.pts 02691156/expert_verified/points_label/e0058b4948f87d3b87697d3904b168b.seg 02691156\n03001627/points/4428b7dc4b6696812905b6e26038a78.pts 03001627/expert_verified/points_label/4428b7dc4b6696812905b6e26038a78.seg 03001627\n03636649/points/f7093dd024fd09fc7219d6d5c4afbaff.pts 
03636649/expert_verified/points_label/f7093dd024fd09fc7219d6d5c4afbaff.seg 03636649\n04379243/points/7d0c5e28089c2b7bd99e852ee772dfa4.pts 04379243/expert_verified/points_label/7d0c5e28089c2b7bd99e852ee772dfa4.seg 04379243\n03636649/points/4916f793d87dd184d42b9650f19dd425.pts 03636649/expert_verified/points_label/4916f793d87dd184d42b9650f19dd425.seg 03636649\n04379243/points/1ffcbc064f473b7de7c13848b2d8f5ec.pts 04379243/expert_verified/points_label/1ffcbc064f473b7de7c13848b2d8f5ec.seg 04379243\n03636649/points/e180510d07b65fff571108a6d1e94edd.pts 03636649/expert_verified/points_label/e180510d07b65fff571108a6d1e94edd.seg 03636649\n03636649/points/d9f6bd064c9fd456fcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/d9f6bd064c9fd456fcb8d8c6d4df8143.seg 03636649\n04379243/points/ec81c49ee12e8a70fd06de9ba37d44bd.pts 04379243/expert_verified/points_label/ec81c49ee12e8a70fd06de9ba37d44bd.seg 04379243\n03636649/points/4a868756ae6404a5c0bc57897eddf6f.pts 03636649/expert_verified/points_label/4a868756ae6404a5c0bc57897eddf6f.seg 03636649\n02958343/points/9c827e532de4967285089a13cc567dbd.pts 02958343/expert_verified/points_label/9c827e532de4967285089a13cc567dbd.seg 02958343\n03797390/points/1c9f9e25c654cbca3c71bf3f4dd78475.pts 03797390/expert_verified/points_label/1c9f9e25c654cbca3c71bf3f4dd78475.seg 03797390\n03001627/points/ca3670f77268f899febad4f49b26ec52.pts 03001627/expert_verified/points_label/ca3670f77268f899febad4f49b26ec52.seg 03001627\n04379243/points/9b8e6eb835f0c8bcf37af16b2893f1d4.pts 04379243/expert_verified/points_label/9b8e6eb835f0c8bcf37af16b2893f1d4.seg 04379243\n03001627/points/5c9d582488732ee0d7f7a4c4609b0913.pts 03001627/expert_verified/points_label/5c9d582488732ee0d7f7a4c4609b0913.seg 03001627\n04379243/points/684ccc0f629ee45cab610b0c94236463.pts 04379243/expert_verified/points_label/684ccc0f629ee45cab610b0c94236463.seg 04379243\n03001627/points/4913388a4c94547a81806e3880250dff.pts 
03001627/expert_verified/points_label/4913388a4c94547a81806e3880250dff.seg 03001627\n03636649/points/73378b714c5bfed2b922d818b19db1e.pts 03636649/expert_verified/points_label/73378b714c5bfed2b922d818b19db1e.seg 03636649\n03001627/points/4a89a789f817ab5414038d588fd1342f.pts 03001627/expert_verified/points_label/4a89a789f817ab5414038d588fd1342f.seg 03001627\n04379243/points/df7761a3b4ac638c9eaceb124b71b7be.pts 04379243/expert_verified/points_label/df7761a3b4ac638c9eaceb124b71b7be.seg 04379243\n03001627/points/46557f689f4cf5dd2acd2bb6205825cb.pts 03001627/expert_verified/points_label/46557f689f4cf5dd2acd2bb6205825cb.seg 03001627\n04379243/points/2db1f557e247ded7e907b6d9dc1d71b7.pts 04379243/expert_verified/points_label/2db1f557e247ded7e907b6d9dc1d71b7.seg 04379243\n04379243/points/b69d9e876e7a80a29f2349486c570dd4.pts 04379243/expert_verified/points_label/b69d9e876e7a80a29f2349486c570dd4.seg 04379243\n04379243/points/a94ea7183f27073248c0c0980e363341.pts 04379243/expert_verified/points_label/a94ea7183f27073248c0c0980e363341.seg 04379243\n03636649/points/8f85c2195890ccf671f0940f5ed452dc.pts 03636649/expert_verified/points_label/8f85c2195890ccf671f0940f5ed452dc.seg 03636649\n02691156/points/cc80380c511ec8e2c91a9d486db717.pts 02691156/expert_verified/points_label/cc80380c511ec8e2c91a9d486db717.seg 02691156\n03642806/points/6b61ef17b4f45050b598e8984f11eb0c.pts 03642806/expert_verified/points_label/6b61ef17b4f45050b598e8984f11eb0c.seg 03642806\n04379243/points/d9ce0b512e0420f8be95ff480950e9ef.pts 04379243/expert_verified/points_label/d9ce0b512e0420f8be95ff480950e9ef.seg 04379243\n04379243/points/c27a1c6a26642c907ecc778b34d42f32.pts 04379243/expert_verified/points_label/c27a1c6a26642c907ecc778b34d42f32.seg 04379243\n04379243/points/debd06d3176a5b728cbb8bac2032149c.pts 04379243/expert_verified/points_label/debd06d3176a5b728cbb8bac2032149c.seg 04379243\n04099429/points/fa07813a89527d195d1df55cbe0874aa.pts 
04099429/expert_verified/points_label/fa07813a89527d195d1df55cbe0874aa.seg 04099429\n03001627/points/2a98a638f675f46e7d44dc16af152638.pts 03001627/expert_verified/points_label/2a98a638f675f46e7d44dc16af152638.seg 03001627\n03624134/points/ec1eb959cc203f1de5a365227cfe63ec.pts 03624134/expert_verified/points_label/ec1eb959cc203f1de5a365227cfe63ec.seg 03624134\n04379243/points/db0c430a51ac45c19d2be74cfb51ade1.pts 04379243/expert_verified/points_label/db0c430a51ac45c19d2be74cfb51ade1.seg 04379243\n04379243/points/26b2a15646f6a3a06f1e07a56c129dfc.pts 04379243/expert_verified/points_label/26b2a15646f6a3a06f1e07a56c129dfc.seg 04379243\n04379243/points/90343e416528b576f41d9ea5f63b1b05.pts 04379243/expert_verified/points_label/90343e416528b576f41d9ea5f63b1b05.seg 04379243\n03001627/points/43d38ad2f5d103adf9b9977a2406713a.pts 03001627/expert_verified/points_label/43d38ad2f5d103adf9b9977a2406713a.seg 03001627\n03001627/points/e279758e8a5b6a8d492d9da2668ec34c.pts 03001627/expert_verified/points_label/e279758e8a5b6a8d492d9da2668ec34c.seg 03001627\n03642806/points/71907a4a567dce3bb0de1e7a6809fd90.pts 03642806/expert_verified/points_label/71907a4a567dce3bb0de1e7a6809fd90.seg 03642806\n03636649/points/2958cd9fd799bf02cfbcbf340cec6da1.pts 03636649/expert_verified/points_label/2958cd9fd799bf02cfbcbf340cec6da1.seg 03636649\n04379243/points/bd7c71ca15b0d4e56c252f74b6220e29.pts 04379243/expert_verified/points_label/bd7c71ca15b0d4e56c252f74b6220e29.seg 04379243\n04379243/points/51c6a7298408c3f19730cb37c9a5f63b.pts 04379243/expert_verified/points_label/51c6a7298408c3f19730cb37c9a5f63b.seg 04379243\n02691156/points/e3de366a0cfb59ed38294c37c250d7cd.pts 02691156/expert_verified/points_label/e3de366a0cfb59ed38294c37c250d7cd.seg 02691156\n03467517/points/f288cd2146b8f4c1f0e1c0df18936d05.pts 03467517/expert_verified/points_label/f288cd2146b8f4c1f0e1c0df18936d05.seg 03467517\n04379243/points/270430ab9efb9d85c0f947750540fb22.pts 
04379243/expert_verified/points_label/270430ab9efb9d85c0f947750540fb22.seg 04379243\n04379243/points/f5ad10e6a938aa80e85c7a030ebdf69a.pts 04379243/expert_verified/points_label/f5ad10e6a938aa80e85c7a030ebdf69a.seg 04379243\n04379243/points/8343d98e3710f5bee1b32bbe69d5bc15.pts 04379243/expert_verified/points_label/8343d98e3710f5bee1b32bbe69d5bc15.seg 04379243\n03790512/points/40b7a63fd9ede0cf48272812609617e2.pts 03790512/expert_verified/points_label/40b7a63fd9ede0cf48272812609617e2.seg 03790512\n03467517/points/16bc13ee237ebeb38460585fe283a1c9.pts 03467517/expert_verified/points_label/16bc13ee237ebeb38460585fe283a1c9.seg 03467517\n02691156/points/a56143efe74ee89ebbf3143b1cb6076a.pts 02691156/expert_verified/points_label/a56143efe74ee89ebbf3143b1cb6076a.seg 02691156\n04379243/points/9a6ab25d91c92a5a35acfdef2ece21c0.pts 04379243/expert_verified/points_label/9a6ab25d91c92a5a35acfdef2ece21c0.seg 04379243\n03467517/points/c9b60abdc17708fb78ad94b294a9faa6.pts 03467517/expert_verified/points_label/c9b60abdc17708fb78ad94b294a9faa6.seg 03467517\n04379243/points/cde67434193a2a6f19fb4103277a6b93.pts 04379243/expert_verified/points_label/cde67434193a2a6f19fb4103277a6b93.seg 04379243\n04379243/points/6b62c85b16e300557005dacb6907e37d.pts 04379243/expert_verified/points_label/6b62c85b16e300557005dacb6907e37d.seg 04379243\n04379243/points/7956ac7aba6295d1c2fd07f66cbad0f7.pts 04379243/expert_verified/points_label/7956ac7aba6295d1c2fd07f66cbad0f7.seg 04379243\n04379243/points/dcda90e411cb4e35506d1e1cc84da713.pts 04379243/expert_verified/points_label/dcda90e411cb4e35506d1e1cc84da713.seg 04379243\n02691156/points/c494f446954523a8a32748a9f843a0bf.pts 02691156/expert_verified/points_label/c494f446954523a8a32748a9f843a0bf.seg 02691156\n02691156/points/18e6f319062ccb49ca8607f540cc62ba.pts 02691156/expert_verified/points_label/18e6f319062ccb49ca8607f540cc62ba.seg 02691156\n04379243/points/b7cead95e18b570d2c97486f63c12d76.pts 
04379243/expert_verified/points_label/b7cead95e18b570d2c97486f63c12d76.seg 04379243\n03948459/points/f6d52684720d52a01ab78426351eea4a.pts 03948459/expert_verified/points_label/f6d52684720d52a01ab78426351eea4a.seg 03948459\n04379243/points/7eeceefed2b3aa2794f3bda96cf548cc.pts 04379243/expert_verified/points_label/7eeceefed2b3aa2794f3bda96cf548cc.seg 04379243\n03001627/points/5eaa2730f10054d0f6cabe1df6f4c9d9.pts 03001627/expert_verified/points_label/5eaa2730f10054d0f6cabe1df6f4c9d9.seg 03001627\n03001627/points/92f79b8e45269847f0efa341b439d741.pts 03001627/expert_verified/points_label/92f79b8e45269847f0efa341b439d741.seg 03001627\n03001627/points/cbaca6a6edfa2d512b520984c067934c.pts 03001627/expert_verified/points_label/cbaca6a6edfa2d512b520984c067934c.seg 03001627\n04379243/points/390e0db80fe12ef65fa6da97b9eb4a2f.pts 04379243/expert_verified/points_label/390e0db80fe12ef65fa6da97b9eb4a2f.seg 04379243\n04379243/points/2ec33e8b457ac0fa278d386bfa54545.pts 04379243/expert_verified/points_label/2ec33e8b457ac0fa278d386bfa54545.seg 04379243\n04225987/points/ac2b6924a60a7a87aa4f69d519551495.pts 04225987/expert_verified/points_label/ac2b6924a60a7a87aa4f69d519551495.seg 04225987\n02958343/points/468780ef4ace9a422e877e82c90c24d.pts 02958343/expert_verified/points_label/468780ef4ace9a422e877e82c90c24d.seg 02958343\n03001627/points/78c9204b2eac432b65b77a565916c7f.pts 03001627/expert_verified/points_label/78c9204b2eac432b65b77a565916c7f.seg 03001627\n04379243/points/b278b58e294a7d2bac242c3aebc81b2f.pts 04379243/expert_verified/points_label/b278b58e294a7d2bac242c3aebc81b2f.seg 04379243\n04379243/points/fc95d34ab1afb92b9118eee0b123125f.pts 04379243/expert_verified/points_label/fc95d34ab1afb92b9118eee0b123125f.seg 04379243\n03790512/points/54f016b47a5864cd5dde04c96fd8146.pts 03790512/expert_verified/points_label/54f016b47a5864cd5dde04c96fd8146.seg 03790512\n04379243/points/9afa121e3aec8bd7c387f328a37d8ece.pts 04379243/expert_verified/points_label/9afa121e3aec8bd7c387f328a37d8ece.seg 
04379243\n04379243/points/382889dbc86b5dd919fb4103277a6b93.pts 04379243/expert_verified/points_label/382889dbc86b5dd919fb4103277a6b93.seg 04379243\n03467517/points/b83a81b2476ec59e59610f6f40382499.pts 03467517/expert_verified/points_label/b83a81b2476ec59e59610f6f40382499.seg 03467517\n03001627/points/5d959b0f79a22e8c67c9124d122355ab.pts 03001627/expert_verified/points_label/5d959b0f79a22e8c67c9124d122355ab.seg 03001627\n02691156/points/c4111dbb21e1f17043afdb9c81ff2967.pts 02691156/expert_verified/points_label/c4111dbb21e1f17043afdb9c81ff2967.seg 02691156\n02691156/points/46829981c5c25285bfc0a2c490b4c222.pts 02691156/expert_verified/points_label/46829981c5c25285bfc0a2c490b4c222.seg 02691156\n04379243/points/497659c4723fbc4fe90ff84c89de437.pts 04379243/expert_verified/points_label/497659c4723fbc4fe90ff84c89de437.seg 04379243\n02691156/points/a805c30d4b09f11f62347b4731688b0f.pts 02691156/expert_verified/points_label/a805c30d4b09f11f62347b4731688b0f.seg 02691156\n03636649/points/e485053f3e0d18252cd2160e449d45ae.pts 03636649/expert_verified/points_label/e485053f3e0d18252cd2160e449d45ae.seg 03636649\n02958343/points/2fb5fe84c28b8b35cc02882a83047172.pts 02958343/expert_verified/points_label/2fb5fe84c28b8b35cc02882a83047172.seg 02958343\n03636649/points/f7a4590c54e2ac7ce62fad6b4f42c880.pts 03636649/expert_verified/points_label/f7a4590c54e2ac7ce62fad6b4f42c880.seg 03636649\n03642806/points/9fc5b76d363ca64ed03066fc8168e9c6.pts 03642806/expert_verified/points_label/9fc5b76d363ca64ed03066fc8168e9c6.seg 03642806\n02691156/points/be080a797406422843afdb9c81ff2967.pts 02691156/expert_verified/points_label/be080a797406422843afdb9c81ff2967.seg 02691156\n04379243/points/81a84fcb2b247a3348eaa510713cb074.pts 04379243/expert_verified/points_label/81a84fcb2b247a3348eaa510713cb074.seg 04379243\n03001627/points/47c540c2e9c3483ce79a6b87656a120a.pts 03001627/expert_verified/points_label/47c540c2e9c3483ce79a6b87656a120a.seg 03001627\n03001627/points/5073d7a546b9a4d0e810eba61b778ebb.pts 
03001627/expert_verified/points_label/5073d7a546b9a4d0e810eba61b778ebb.seg 03001627\n03001627/points/e4a890f2330ebd7e4a11872aa986426d.pts 03001627/expert_verified/points_label/e4a890f2330ebd7e4a11872aa986426d.seg 03001627\n03001627/points/a7200578bd7bea065dc3653f8341633a.pts 03001627/expert_verified/points_label/a7200578bd7bea065dc3653f8341633a.seg 03001627\n03467517/points/b004331ee5cc39caa24eeca91f583600.pts 03467517/expert_verified/points_label/b004331ee5cc39caa24eeca91f583600.seg 03467517\n04379243/points/f01768b8b8ba025ee45ef4135c266a12.pts 04379243/expert_verified/points_label/f01768b8b8ba025ee45ef4135c266a12.seg 04379243\n03642806/points/5173aa7f75ff3cf1b55fde51a411949f.pts 03642806/expert_verified/points_label/5173aa7f75ff3cf1b55fde51a411949f.seg 03642806\n03636649/points/e7e45a8f0b0ab311c754474f0ac106.pts 03636649/expert_verified/points_label/e7e45a8f0b0ab311c754474f0ac106.seg 03636649\n03642806/points/1b67b4bfed6688ba5b22feddf58c05e1.pts 03642806/expert_verified/points_label/1b67b4bfed6688ba5b22feddf58c05e1.seg 03642806\n03797390/points/f1e439307b834015770a0ff1161fa15a.pts 03797390/expert_verified/points_label/f1e439307b834015770a0ff1161fa15a.seg 03797390\n03001627/points/b6c9495629c00419940806ade53ef2f.pts 03001627/expert_verified/points_label/b6c9495629c00419940806ade53ef2f.seg 03001627\n03001627/points/8e19d2ec95c45186a6fd617b2ff5d2d.pts 03001627/expert_verified/points_label/8e19d2ec95c45186a6fd617b2ff5d2d.seg 03001627\n03001627/points/d7b8189fe69cebedc41b07b1627c4b43.pts 03001627/expert_verified/points_label/d7b8189fe69cebedc41b07b1627c4b43.seg 03001627\n02691156/points/a7a0e7eddf4ffb8c19378fd691582500.pts 02691156/expert_verified/points_label/a7a0e7eddf4ffb8c19378fd691582500.seg 02691156\n03001627/points/2b6cbad4ba1e9a0645881d7eab1353ba.pts 03001627/expert_verified/points_label/2b6cbad4ba1e9a0645881d7eab1353ba.seg 03001627\n04379243/points/dade0594e68e2250be6c545952e7fa4a.pts 04379243/expert_verified/points_label/dade0594e68e2250be6c545952e7fa4a.seg 
04379243\n03001627/points/9850d225049f987e9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/9850d225049f987e9b9f2eb77f5e247e.seg 03001627\n03948459/points/e9e6426605eb6d5952d52701459b1f0.pts 03948459/expert_verified/points_label/e9e6426605eb6d5952d52701459b1f0.seg 03948459\n03636649/points/e507bc77c03a1b3afcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/e507bc77c03a1b3afcb8d8c6d4df8143.seg 03636649\n03797390/points/a6d9f9ae39728831808951ff5fb582ac.pts 03797390/expert_verified/points_label/a6d9f9ae39728831808951ff5fb582ac.seg 03797390\n04379243/points/3144ba0c286cc61f490ad276cd2af3a4.pts 04379243/expert_verified/points_label/3144ba0c286cc61f490ad276cd2af3a4.seg 04379243\n04379243/points/9be565678aab11cba0ab1d82ef09f78f.pts 04379243/expert_verified/points_label/9be565678aab11cba0ab1d82ef09f78f.seg 04379243\n04379243/points/a4b2870ce7a54b8eec11c6b035aac769.pts 04379243/expert_verified/points_label/a4b2870ce7a54b8eec11c6b035aac769.seg 04379243\n03636649/points/78b95abd1d1158ffef3a2c64cef919d0.pts 03636649/expert_verified/points_label/78b95abd1d1158ffef3a2c64cef919d0.seg 03636649\n04379243/points/2182028f013e7eb530bbd4cddd04c77b.pts 04379243/expert_verified/points_label/2182028f013e7eb530bbd4cddd04c77b.seg 04379243\n02691156/points/e00b89bc338348caa42c49797afd1f5c.pts 02691156/expert_verified/points_label/e00b89bc338348caa42c49797afd1f5c.seg 02691156\n03001627/points/9d28a066df22319cca2e16d6cd76503c.pts 03001627/expert_verified/points_label/9d28a066df22319cca2e16d6cd76503c.seg 03001627\n03636649/points/3c4d8c4ebe9dedbc2cd2160e449d45ae.pts 03636649/expert_verified/points_label/3c4d8c4ebe9dedbc2cd2160e449d45ae.seg 03636649\n02691156/points/97d662e5e6345b46bd46d022fd7d80aa.pts 02691156/expert_verified/points_label/97d662e5e6345b46bd46d022fd7d80aa.seg 02691156\n03001627/points/9dac39c51680daa2f71e06115e9c3b3e.pts 03001627/expert_verified/points_label/9dac39c51680daa2f71e06115e9c3b3e.seg 03001627\n03624134/points/1ecb37ea8f0c4abc20fc54d2500eb7f1.pts 
03624134/expert_verified/points_label/1ecb37ea8f0c4abc20fc54d2500eb7f1.seg 03624134\n03624134/points/3a0f48139bfd3a4ea152d2e823b9fe06.pts 03624134/expert_verified/points_label/3a0f48139bfd3a4ea152d2e823b9fe06.seg 03624134\n04379243/points/1264d88ae599df3fbeedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/1264d88ae599df3fbeedb4c8fd29e2d1.seg 04379243\n03001627/points/97bbc8970b05c4a3fcde6bcb709edd9a.pts 03001627/expert_verified/points_label/97bbc8970b05c4a3fcde6bcb709edd9a.seg 03001627\n03636649/points/1f58b59a1b6b06df766fc93a239bada0.pts 03636649/expert_verified/points_label/1f58b59a1b6b06df766fc93a239bada0.seg 03636649\n03001627/points/eb51e814c3f44a07914ced7dab3536b9.pts 03001627/expert_verified/points_label/eb51e814c3f44a07914ced7dab3536b9.seg 03001627\n03636649/points/a138582b1d0b9cbb137af984a9f45d65.pts 03636649/expert_verified/points_label/a138582b1d0b9cbb137af984a9f45d65.seg 03636649\n03790512/points/9f9de88a95b56660b37378f3c85478b4.pts 03790512/expert_verified/points_label/9f9de88a95b56660b37378f3c85478b4.seg 03790512\n03001627/points/a521fba02ca7f9aa822215026d1e8d82.pts 03001627/expert_verified/points_label/a521fba02ca7f9aa822215026d1e8d82.seg 03001627\n04225987/points/d303055e96cd59949da15808191f1405.pts 04225987/expert_verified/points_label/d303055e96cd59949da15808191f1405.seg 04225987\n04379243/points/7e3022a7bd00eb4195b8ea6a366e14d.pts 04379243/expert_verified/points_label/7e3022a7bd00eb4195b8ea6a366e14d.seg 04379243\n02691156/points/d83300deab42c100eb9db4e832a6dd82.pts 02691156/expert_verified/points_label/d83300deab42c100eb9db4e832a6dd82.seg 02691156\n03642806/points/a4b410734514306ac401e233323032d6.pts 03642806/expert_verified/points_label/a4b410734514306ac401e233323032d6.seg 03642806\n03790512/points/532e6f88a9975a27b37378f3c85478b4.pts 03790512/expert_verified/points_label/532e6f88a9975a27b37378f3c85478b4.seg 03790512\n03642806/points/cc691d9e8e189ce47a381a112bfd785.pts 
03642806/expert_verified/points_label/cc691d9e8e189ce47a381a112bfd785.seg 03642806\n02691156/points/aa07239e9397cf189601fb40d0d298b9.pts 02691156/expert_verified/points_label/aa07239e9397cf189601fb40d0d298b9.seg 02691156\n03642806/points/cc0535a34cdc7d676bf98d15712168f.pts 03642806/expert_verified/points_label/cc0535a34cdc7d676bf98d15712168f.seg 03642806\n02691156/points/ddec69970cbc4d29112a90660b187a10.pts 02691156/expert_verified/points_label/ddec69970cbc4d29112a90660b187a10.seg 02691156\n04379243/points/268e68f1819a225c1b4b790955c17432.pts 04379243/expert_verified/points_label/268e68f1819a225c1b4b790955c17432.seg 04379243\n03624134/points/1943c87f92ac76e112cad8be168fe72d.pts 03624134/expert_verified/points_label/1943c87f92ac76e112cad8be168fe72d.seg 03624134\n04379243/points/b9fc2f624533bb8119fb4103277a6b93.pts 04379243/expert_verified/points_label/b9fc2f624533bb8119fb4103277a6b93.seg 04379243\n03001627/points/1c45b266d3c879dab36dcc661f3905d.pts 03001627/expert_verified/points_label/1c45b266d3c879dab36dcc661f3905d.seg 03001627\n03948459/points/1660ef4b3f20b1e2a94b922b533051b7.pts 03948459/expert_verified/points_label/1660ef4b3f20b1e2a94b922b533051b7.seg 03948459\n02691156/points/167250e2014c72dbb87697d3904b168b.pts 02691156/expert_verified/points_label/167250e2014c72dbb87697d3904b168b.seg 02691156\n02691156/points/dfe65f8a20df11c5d1df55cbe0874aa.pts 02691156/expert_verified/points_label/dfe65f8a20df11c5d1df55cbe0874aa.seg 02691156\n03001627/points/44a2a3952ea2315ff51f77a6d7299806.pts 03001627/expert_verified/points_label/44a2a3952ea2315ff51f77a6d7299806.seg 03001627\n04379243/points/a1896691fe875eccb9968f25875bdef4.pts 04379243/expert_verified/points_label/a1896691fe875eccb9968f25875bdef4.seg 04379243\n04379243/points/6f3506c9c5202101c4e8952b27b5f370.pts 04379243/expert_verified/points_label/6f3506c9c5202101c4e8952b27b5f370.seg 04379243\n04379243/points/fead7e0c30a347b1710801cae5dc529.pts 04379243/expert_verified/points_label/fead7e0c30a347b1710801cae5dc529.seg 
04379243\n04379243/points/384bf53e12744e2019fb4103277a6b93.pts 04379243/expert_verified/points_label/384bf53e12744e2019fb4103277a6b93.seg 04379243\n03001627/points/30378faa6bf5b245fdef1c01cbd4ae0c.pts 03001627/expert_verified/points_label/30378faa6bf5b245fdef1c01cbd4ae0c.seg 03001627\n04379243/points/5690d17b330f73adfeb8ceb93793cb5.pts 04379243/expert_verified/points_label/5690d17b330f73adfeb8ceb93793cb5.seg 04379243\n03467517/points/2e4ec0874ea34a50812ca0ac90db1c07.pts 03467517/expert_verified/points_label/2e4ec0874ea34a50812ca0ac90db1c07.seg 03467517\n03001627/points/a007a3cd5b8ca7fb19fb4103277a6b93.pts 03001627/expert_verified/points_label/a007a3cd5b8ca7fb19fb4103277a6b93.seg 03001627\n03001627/points/bc21c95f766502a78b03575bb54dfd4.pts 03001627/expert_verified/points_label/bc21c95f766502a78b03575bb54dfd4.seg 03001627\n04379243/points/6a3ee73d42228f8581654cb17c02fd.pts 04379243/expert_verified/points_label/6a3ee73d42228f8581654cb17c02fd.seg 04379243\n04379243/points/4b399cdce8337c29285e0e27752e54a8.pts 04379243/expert_verified/points_label/4b399cdce8337c29285e0e27752e54a8.seg 04379243\n04379243/points/7f9d2da43d6aba67afb6676a5cd782b6.pts 04379243/expert_verified/points_label/7f9d2da43d6aba67afb6676a5cd782b6.seg 04379243\n03001627/points/72669be1815b2bb81e4fe86c4ad3ec90.pts 03001627/expert_verified/points_label/72669be1815b2bb81e4fe86c4ad3ec90.seg 03001627\n04379243/points/223fbcc813831d8c6e526771d2f7444e.pts 04379243/expert_verified/points_label/223fbcc813831d8c6e526771d2f7444e.seg 04379243\n02691156/points/adeb5d68e8d65cc419ba010ddb4974fe.pts 02691156/expert_verified/points_label/adeb5d68e8d65cc419ba010ddb4974fe.seg 02691156\n03001627/points/8a9d8dad6800d55ff37af16b2893f1d4.pts 03001627/expert_verified/points_label/8a9d8dad6800d55ff37af16b2893f1d4.seg 03001627\n04379243/points/db406d9b2a94bce5622d7484764b58f.pts 04379243/expert_verified/points_label/db406d9b2a94bce5622d7484764b58f.seg 04379243\n03001627/points/68b88c0be088c21d5e0096fb2d3266a.pts 
03001627/expert_verified/points_label/68b88c0be088c21d5e0096fb2d3266a.seg 03001627\n03790512/points/973d75ed9c12836f3d033e6cf82ec72c.pts 03790512/expert_verified/points_label/973d75ed9c12836f3d033e6cf82ec72c.seg 03790512\n04379243/points/20292fba71362950c59c53f7df509858.pts 04379243/expert_verified/points_label/20292fba71362950c59c53f7df509858.seg 04379243\n03001627/points/21fb308ca737174e22f2f93459bd863e.pts 03001627/expert_verified/points_label/21fb308ca737174e22f2f93459bd863e.seg 03001627\n03001627/points/be9d5105e48ae27e713decb1a0563b12.pts 03001627/expert_verified/points_label/be9d5105e48ae27e713decb1a0563b12.seg 03001627\n02958343/points/c6441f127d51e478f0fb72d24c42a39.pts 02958343/expert_verified/points_label/c6441f127d51e478f0fb72d24c42a39.seg 02958343\n03001627/points/f29cbdb2c7bb10f9953d950bcd7de7a.pts 03001627/expert_verified/points_label/f29cbdb2c7bb10f9953d950bcd7de7a.seg 03001627\n02691156/points/65654b5c4e488e0c961fa14fc879444e.pts 02691156/expert_verified/points_label/65654b5c4e488e0c961fa14fc879444e.seg 02691156\n04379243/points/8654b644c766dd23d1dcc55e36186e4e.pts 04379243/expert_verified/points_label/8654b644c766dd23d1dcc55e36186e4e.seg 04379243\n04379243/points/56bb7376dfa9cb5c8cf069d506f8b5ac.pts 04379243/expert_verified/points_label/56bb7376dfa9cb5c8cf069d506f8b5ac.seg 04379243\n04379243/points/d291243cfb51ea7dcb25d116843b43a4.pts 04379243/expert_verified/points_label/d291243cfb51ea7dcb25d116843b43a4.seg 04379243\n03790512/points/49edb54e97458de8d373c34785838ee4.pts 03790512/expert_verified/points_label/49edb54e97458de8d373c34785838ee4.seg 03790512\n04379243/points/216da8313bc7b192ab610b0c94236463.pts 04379243/expert_verified/points_label/216da8313bc7b192ab610b0c94236463.seg 04379243\n03001627/points/5ac8b44ff77e5490c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/5ac8b44ff77e5490c8687ff9b0b4e4ac.seg 03001627\n03001627/points/956063d67b939431f56aa11cd5e0c3e.pts 03001627/expert_verified/points_label/956063d67b939431f56aa11cd5e0c3e.seg 
03001627\n04379243/points/8dd8370dcaa8d770ea5682a3b818969a.pts 04379243/expert_verified/points_label/8dd8370dcaa8d770ea5682a3b818969a.seg 04379243\n03636649/points/3b64d5033c580d2ef76898f881b76a.pts 03636649/expert_verified/points_label/3b64d5033c580d2ef76898f881b76a.seg 03636649\n03001627/points/3d9dce1953180fe6f9c9f9697d1ec60.pts 03001627/expert_verified/points_label/3d9dce1953180fe6f9c9f9697d1ec60.seg 03001627\n03001627/points/d1b03eeb33fd441d8189e5e3786f2290.pts 03001627/expert_verified/points_label/d1b03eeb33fd441d8189e5e3786f2290.seg 03001627\n02691156/points/5294c39d2a57bd7e5cad6226edb8e82.pts 02691156/expert_verified/points_label/5294c39d2a57bd7e5cad6226edb8e82.seg 02691156\n04379243/points/7bc93a4cc26fab5c8c12b667670a35f2.pts 04379243/expert_verified/points_label/7bc93a4cc26fab5c8c12b667670a35f2.seg 04379243\n04379243/points/813d34995b5c4406b65b71636c46ae49.pts 04379243/expert_verified/points_label/813d34995b5c4406b65b71636c46ae49.seg 04379243\n03001627/points/6782b941de7b2199a344c33f76676fbd.pts 03001627/expert_verified/points_label/6782b941de7b2199a344c33f76676fbd.seg 03001627\n03636649/points/ea5ae3cfd142c3b923f93f957094a824.pts 03636649/expert_verified/points_label/ea5ae3cfd142c3b923f93f957094a824.seg 03636649\n03001627/points/47caca00f993bc4e4b3c42e318f3affc.pts 03001627/expert_verified/points_label/47caca00f993bc4e4b3c42e318f3affc.seg 03001627\n02691156/points/b702e35f4a59e81f64801ad2940cdd5.pts 02691156/expert_verified/points_label/b702e35f4a59e81f64801ad2940cdd5.seg 02691156\n03636649/points/3b5f0c01c2b914fc6f16f167d27a7dab.pts 03636649/expert_verified/points_label/3b5f0c01c2b914fc6f16f167d27a7dab.seg 03636649\n04379243/points/ad63116007d98a6d19758238d4c7aff2.pts 04379243/expert_verified/points_label/ad63116007d98a6d19758238d4c7aff2.seg 04379243\n03797390/points/8f6c86feaa74698d5c91ee20ade72edc.pts 03797390/expert_verified/points_label/8f6c86feaa74698d5c91ee20ade72edc.seg 03797390\n04379243/points/48baef3ab18d2d43d2afe8d5254a0d04.pts 
04379243/expert_verified/points_label/48baef3ab18d2d43d2afe8d5254a0d04.seg 04379243\n03001627/points/fe5310a3457bf0e5c4e8952b27b5f370.pts 03001627/expert_verified/points_label/fe5310a3457bf0e5c4e8952b27b5f370.seg 03001627\n04379243/points/d4c330d27bbef3808f6610bf672cd686.pts 04379243/expert_verified/points_label/d4c330d27bbef3808f6610bf672cd686.seg 04379243\n04379243/points/adcb67b58024afb99910b7ec4c4e599b.pts 04379243/expert_verified/points_label/adcb67b58024afb99910b7ec4c4e599b.seg 04379243\n02958343/points/65d6433043c40046b82c0841410a924f.pts 02958343/expert_verified/points_label/65d6433043c40046b82c0841410a924f.seg 02958343\n04379243/points/1a00aa6b75362cc5b324368d54a7416f.pts 04379243/expert_verified/points_label/1a00aa6b75362cc5b324368d54a7416f.seg 04379243\n04379243/points/7982e2f2984978c6f4b6538438a0b930.pts 04379243/expert_verified/points_label/7982e2f2984978c6f4b6538438a0b930.seg 04379243\n03467517/points/26e1801ea747f72f14fe0da28e4f8384.pts 03467517/expert_verified/points_label/26e1801ea747f72f14fe0da28e4f8384.seg 03467517\n04379243/points/c8ee4a8b703180992985858e6f5832da.pts 04379243/expert_verified/points_label/c8ee4a8b703180992985858e6f5832da.seg 04379243\n02691156/points/f24daae76836e249f0878b58b4e887bf.pts 02691156/expert_verified/points_label/f24daae76836e249f0878b58b4e887bf.seg 02691156\n04379243/points/f29863d2fe8863d4195b8ea6a366e14d.pts 04379243/expert_verified/points_label/f29863d2fe8863d4195b8ea6a366e14d.seg 04379243\n04379243/points/babb0963a0e17bb59cd0aef0207ac8c6.pts 04379243/expert_verified/points_label/babb0963a0e17bb59cd0aef0207ac8c6.seg 04379243\n03001627/points/39911f927331db1c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/39911f927331db1c8687ff9b0b4e4ac.seg 03001627\n03001627/points/4a9d3ce54c09a2da696b74614952b2d0.pts 03001627/expert_verified/points_label/4a9d3ce54c09a2da696b74614952b2d0.seg 03001627\n03642806/points/caa4afd404f24d21275c1147a304ed86.pts 
03642806/expert_verified/points_label/caa4afd404f24d21275c1147a304ed86.seg 03642806\n02691156/points/ff6e377e8e5b3757cc34b900bb2492e.pts 02691156/expert_verified/points_label/ff6e377e8e5b3757cc34b900bb2492e.seg 02691156\n03001627/points/483cfed0659965ed73c478529c40c4e6.pts 03001627/expert_verified/points_label/483cfed0659965ed73c478529c40c4e6.seg 03001627\n03797390/points/4b7888feea81219ab5f4a9188bfa0ef6.pts 03797390/expert_verified/points_label/4b7888feea81219ab5f4a9188bfa0ef6.seg 03797390\n03790512/points/40d84e407c46e8d8b31e74d456742c7.pts 03790512/expert_verified/points_label/40d84e407c46e8d8b31e74d456742c7.seg 03790512\n04379243/points/176e3b32d749ac94d79f2fc0b8d8ffad.pts 04379243/expert_verified/points_label/176e3b32d749ac94d79f2fc0b8d8ffad.seg 04379243\n03001627/points/657790bc7fd16326c132086242d50af2.pts 03001627/expert_verified/points_label/657790bc7fd16326c132086242d50af2.seg 03001627\n04379243/points/94c0ab5650ea392ddcfcef693e7ec696.pts 04379243/expert_verified/points_label/94c0ab5650ea392ddcfcef693e7ec696.seg 04379243\n03624134/points/bf5cae3922d3cb2bca7250d90eb506cf.pts 03624134/expert_verified/points_label/bf5cae3922d3cb2bca7250d90eb506cf.seg 03624134\n03001627/points/49a3b0242c13f92da6fee8e2140acec9.pts 03001627/expert_verified/points_label/49a3b0242c13f92da6fee8e2140acec9.seg 03001627\n03636649/points/e4c9bb21fe5bfeb3e21f078602e2eda8.pts 03636649/expert_verified/points_label/e4c9bb21fe5bfeb3e21f078602e2eda8.seg 03636649\n03636649/points/6595ee36783d261ed3281970e2c44dbe.pts 03636649/expert_verified/points_label/6595ee36783d261ed3281970e2c44dbe.seg 03636649\n02958343/points/9a152b11907b11074549b3c52ae0632e.pts 02958343/expert_verified/points_label/9a152b11907b11074549b3c52ae0632e.seg 02958343\n04379243/points/68a7bad2b06bc1a9d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/68a7bad2b06bc1a9d93768e7b9b1eabf.seg 04379243\n04379243/points/b9c756b2ff5d66ddfebad4f49b26ec52.pts 
04379243/expert_verified/points_label/b9c756b2ff5d66ddfebad4f49b26ec52.seg 04379243\n03797390/points/2d10421716b16580e45ef4135c266a12.pts 03797390/expert_verified/points_label/2d10421716b16580e45ef4135c266a12.seg 03797390\n03001627/points/2c76aaa00e55c26836c07750784b6bc6.pts 03001627/expert_verified/points_label/2c76aaa00e55c26836c07750784b6bc6.seg 03001627\n03636649/points/5cca570916f420e64b3c42e318f3affc.pts 03636649/expert_verified/points_label/5cca570916f420e64b3c42e318f3affc.seg 03636649\n03001627/points/9225e57e34334ee019cb07ecb5b4102.pts 03001627/expert_verified/points_label/9225e57e34334ee019cb07ecb5b4102.seg 03001627\n03001627/points/17aeeadccf0e560e274b862d3a151946.pts 03001627/expert_verified/points_label/17aeeadccf0e560e274b862d3a151946.seg 03001627\n03636649/points/427806f30c61059c22e05b5d2ce39e3b.pts 03636649/expert_verified/points_label/427806f30c61059c22e05b5d2ce39e3b.seg 03636649\n03636649/points/17349d6d35aac0685ed28d6c8a1bdfe5.pts 03636649/expert_verified/points_label/17349d6d35aac0685ed28d6c8a1bdfe5.seg 03636649\n04379243/points/5ee4cbe45bdc4cd571a782a4379556c7.pts 04379243/expert_verified/points_label/5ee4cbe45bdc4cd571a782a4379556c7.seg 04379243\n03636649/points/5eda619e5f36499fc1537287b5c50d9d.pts 03636649/expert_verified/points_label/5eda619e5f36499fc1537287b5c50d9d.seg 03636649\n02691156/points/f57c74e194cd2b2bc8727b27ee96a4b7.pts 02691156/expert_verified/points_label/f57c74e194cd2b2bc8727b27ee96a4b7.seg 02691156\n02958343/points/27d42437168ccd7ddd75f724c0ccbe00.pts 02958343/expert_verified/points_label/27d42437168ccd7ddd75f724c0ccbe00.seg 02958343\n04379243/points/c8cf1c77bbb79d214719088c8e42c6ab.pts 04379243/expert_verified/points_label/c8cf1c77bbb79d214719088c8e42c6ab.seg 04379243\n04379243/points/40b48121d1879be2ee0605a41c3320d6.pts 04379243/expert_verified/points_label/40b48121d1879be2ee0605a41c3320d6.seg 04379243\n02691156/points/4f9b12d07dce21ac9d93a50cb0355558.pts 
02691156/expert_verified/points_label/4f9b12d07dce21ac9d93a50cb0355558.seg 02691156\n02691156/points/25bd1569261bc545e8323edc0fe816a8.pts 02691156/expert_verified/points_label/25bd1569261bc545e8323edc0fe816a8.seg 02691156\n02691156/points/fbc429365ab7136be1a9c234926c21e2.pts 02691156/expert_verified/points_label/fbc429365ab7136be1a9c234926c21e2.seg 02691156\n04379243/points/798c315f86d8f02f931e98da3a93e73e.pts 04379243/expert_verified/points_label/798c315f86d8f02f931e98da3a93e73e.seg 04379243\n03790512/points/a0a40a9d5aabd6a7d5dde04c96fd8146.pts 03790512/expert_verified/points_label/a0a40a9d5aabd6a7d5dde04c96fd8146.seg 03790512\n04379243/points/884f15cfc6a3eea3dcfcef693e7ec696.pts 04379243/expert_verified/points_label/884f15cfc6a3eea3dcfcef693e7ec696.seg 04379243\n04379243/points/f16f939baeb7722e664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/f16f939baeb7722e664b3b9b23ddfcbc.seg 04379243\n03001627/points/1e0580f443a9e6d2593ebeeedbff73b.pts 03001627/expert_verified/points_label/1e0580f443a9e6d2593ebeeedbff73b.seg 03001627\n03636649/points/927e0654427c4d0b82241d99b4e87f38.pts 03636649/expert_verified/points_label/927e0654427c4d0b82241d99b4e87f38.seg 03636649\n03001627/points/bdd29e651e5f6fb2b079317292bdc5d4.pts 03001627/expert_verified/points_label/bdd29e651e5f6fb2b079317292bdc5d4.seg 03001627\n03642806/points/cb1e3a990782678b4b6682da890df381.pts 03642806/expert_verified/points_label/cb1e3a990782678b4b6682da890df381.seg 03642806\n03001627/points/fd5ac9b342fe518b9d3ea1c6b57a0095.pts 03001627/expert_verified/points_label/fd5ac9b342fe518b9d3ea1c6b57a0095.seg 03001627\n02958343/points/6bbcd5608ddf871a4cdd04162f008888.pts 02958343/expert_verified/points_label/6bbcd5608ddf871a4cdd04162f008888.seg 02958343\n04379243/points/76338ed3326689b249524cfd5973a145.pts 04379243/expert_verified/points_label/76338ed3326689b249524cfd5973a145.seg 04379243\n03001627/points/9a0571ae6169a6ebfebad4f49b26ec52.pts 
03001627/expert_verified/points_label/9a0571ae6169a6ebfebad4f49b26ec52.seg 03001627\n03948459/points/49429e1d1e90c1ca202be79d8b285c1e.pts 03948459/expert_verified/points_label/49429e1d1e90c1ca202be79d8b285c1e.seg 03948459\n02691156/points/45a4ec99ed13ed773c2498c4c2f13ca.pts 02691156/expert_verified/points_label/45a4ec99ed13ed773c2498c4c2f13ca.seg 02691156\n04379243/points/70995336d06fc07ae9f3e9c758fef992.pts 04379243/expert_verified/points_label/70995336d06fc07ae9f3e9c758fef992.seg 04379243\n03001627/points/6fd76577d0df60669b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/6fd76577d0df60669b9f2eb77f5e247e.seg 03001627\n03001627/points/66f18d05d960ffe0bcd12732b5a4b789.pts 03001627/expert_verified/points_label/66f18d05d960ffe0bcd12732b5a4b789.seg 03001627\n03001627/points/e401be99c5a51d8bef8e9284f76f3024.pts 03001627/expert_verified/points_label/e401be99c5a51d8bef8e9284f76f3024.seg 03001627\n03001627/points/4a0b61d33846824ab1f04c301b6ccc90.pts 03001627/expert_verified/points_label/4a0b61d33846824ab1f04c301b6ccc90.seg 03001627\n04379243/points/9a5cb4122d518111b339f790b1757e92.pts 04379243/expert_verified/points_label/9a5cb4122d518111b339f790b1757e92.seg 04379243\n04379243/points/6281381ce38aa988de98d10ab5975b59.pts 04379243/expert_verified/points_label/6281381ce38aa988de98d10ab5975b59.seg 04379243\n04379243/points/d382d9e34f365544278d386bfa54545.pts 04379243/expert_verified/points_label/d382d9e34f365544278d386bfa54545.seg 04379243\n03948459/points/6de6e56c6f7d43692866658c90231a1a.pts 03948459/expert_verified/points_label/6de6e56c6f7d43692866658c90231a1a.seg 03948459\n02691156/points/494a1698eb82572c3df325aac2f73830.pts 02691156/expert_verified/points_label/494a1698eb82572c3df325aac2f73830.seg 02691156\n02691156/points/c581942f40cbb60819ba010ddb4974fe.pts 02691156/expert_verified/points_label/c581942f40cbb60819ba010ddb4974fe.seg 02691156\n04379243/points/e9038664b7d35e6b436e6787c76ef3f0.pts 
04379243/expert_verified/points_label/e9038664b7d35e6b436e6787c76ef3f0.seg 04379243\n04099429/points/56c13d294f8afb1ffb88d148e845f82e.pts 04099429/expert_verified/points_label/56c13d294f8afb1ffb88d148e845f82e.seg 04099429\n02958343/points/86fa16c6da908e6b44221994b043fd86.pts 02958343/expert_verified/points_label/86fa16c6da908e6b44221994b043fd86.seg 02958343\n04379243/points/3249c3ad90085a9e98d5fc0473d00a1c.pts 04379243/expert_verified/points_label/3249c3ad90085a9e98d5fc0473d00a1c.seg 04379243\n03636649/points/8581a3ae1f77319ac066b9622c005c53.pts 03636649/expert_verified/points_label/8581a3ae1f77319ac066b9622c005c53.seg 03636649\n03790512/points/6e1397773a4d15db429f1c522640e6f0.pts 03790512/expert_verified/points_label/6e1397773a4d15db429f1c522640e6f0.seg 03790512\n03624134/points/c1ab7029de67351cf97a65c35ea619f0.pts 03624134/expert_verified/points_label/c1ab7029de67351cf97a65c35ea619f0.seg 03624134\n04379243/points/16e874e6165e836b30bbd4cddd04c77b.pts 04379243/expert_verified/points_label/16e874e6165e836b30bbd4cddd04c77b.seg 04379243\n03636649/points/ff08713d837d87edf2098a9f7fc86999.pts 03636649/expert_verified/points_label/ff08713d837d87edf2098a9f7fc86999.seg 03636649\n03790512/points/b649be9c09e2b332429f1c522640e6f0.pts 03790512/expert_verified/points_label/b649be9c09e2b332429f1c522640e6f0.seg 03790512\n03001627/points/85b16941984902f8facfa12c7d71c89f.pts 03001627/expert_verified/points_label/85b16941984902f8facfa12c7d71c89f.seg 03001627\n04379243/points/cf1a7653c10aaa0eab610b0c94236463.pts 04379243/expert_verified/points_label/cf1a7653c10aaa0eab610b0c94236463.seg 04379243\n03001627/points/a42aa59fa23b4a4d9c0ca344f487323e.pts 03001627/expert_verified/points_label/a42aa59fa23b4a4d9c0ca344f487323e.seg 03001627\n03001627/points/3f4f1d18c61a07f134b707eb14b2a4a5.pts 03001627/expert_verified/points_label/3f4f1d18c61a07f134b707eb14b2a4a5.seg 03001627\n03001627/points/d2b9e98373e96afec8d65ca96e6b18ef.pts 
03001627/expert_verified/points_label/d2b9e98373e96afec8d65ca96e6b18ef.seg 03001627\n03636649/points/71dffdee89efe07cdff00b2637ddcbde.pts 03636649/expert_verified/points_label/71dffdee89efe07cdff00b2637ddcbde.seg 03636649\n02691156/points/5ac0cd21410b2a6a341877ff7a6c751f.pts 02691156/expert_verified/points_label/5ac0cd21410b2a6a341877ff7a6c751f.seg 02691156\n03636649/points/76eb7436c40e083384d184bdc625781a.pts 03636649/expert_verified/points_label/76eb7436c40e083384d184bdc625781a.seg 03636649\n03642806/points/13330d1e7b199dd82530b9c2b65d3f86.pts 03642806/expert_verified/points_label/13330d1e7b199dd82530b9c2b65d3f86.seg 03642806\n02691156/points/e726c8e6897130439a6e43b878d5b335.pts 02691156/expert_verified/points_label/e726c8e6897130439a6e43b878d5b335.seg 02691156\n04379243/points/40a402e1d949364a104ceb84075e40d6.pts 04379243/expert_verified/points_label/40a402e1d949364a104ceb84075e40d6.seg 04379243\n03001627/points/42140baad25c8598baa1a4ff2c45ffc9.pts 03001627/expert_verified/points_label/42140baad25c8598baa1a4ff2c45ffc9.seg 03001627\n03001627/points/5283a98b5c693e64ebefe6b1d594ad2e.pts 03001627/expert_verified/points_label/5283a98b5c693e64ebefe6b1d594ad2e.seg 03001627\n02691156/points/15898fef6fec88c53ada73811bb576de.pts 02691156/expert_verified/points_label/15898fef6fec88c53ada73811bb576de.seg 02691156\n03001627/points/3f8d0d53e2bd74124b3c42e318f3affc.pts 03001627/expert_verified/points_label/3f8d0d53e2bd74124b3c42e318f3affc.seg 03001627\n04379243/points/cd106955d3bdf8e751c4deb11af7079e.pts 04379243/expert_verified/points_label/cd106955d3bdf8e751c4deb11af7079e.seg 04379243\n03001627/points/11506b96d41f7d3dd7c4a943f33e0384.pts 03001627/expert_verified/points_label/11506b96d41f7d3dd7c4a943f33e0384.seg 03001627\n03001627/points/f51ab8433184dfd2c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/f51ab8433184dfd2c8687ff9b0b4e4ac.seg 03001627\n02691156/points/c9a6dcf87d1f15bca8607f540cc62ba.pts 
02691156/expert_verified/points_label/c9a6dcf87d1f15bca8607f540cc62ba.seg 02691156\n04379243/points/d9c75799ff9ff74664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/d9c75799ff9ff74664b3b9b23ddfcbc.seg 04379243\n04379243/points/93e81005c19a74b8664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/93e81005c19a74b8664b3b9b23ddfcbc.seg 04379243\n02958343/points/5057c9dbf72e0352728fa2df514c65d4.pts 02958343/expert_verified/points_label/5057c9dbf72e0352728fa2df514c65d4.seg 02958343\n04379243/points/8ad88ee4442fd0fd8a6ba7ebad3985bb.pts 04379243/expert_verified/points_label/8ad88ee4442fd0fd8a6ba7ebad3985bb.seg 04379243\n04379243/points/a2554ec7e2331a8fab610b0c94236463.pts 04379243/expert_verified/points_label/a2554ec7e2331a8fab610b0c94236463.seg 04379243\n04379243/points/482a76d14781e55e25374da32e705c.pts 04379243/expert_verified/points_label/482a76d14781e55e25374da32e705c.seg 04379243\n02691156/points/d06105ee2a2ae27c51008e496c6cfd2e.pts 02691156/expert_verified/points_label/d06105ee2a2ae27c51008e496c6cfd2e.seg 02691156\n04379243/points/45a09b1ce3111e4f22f4fabdf1ee0670.pts 04379243/expert_verified/points_label/45a09b1ce3111e4f22f4fabdf1ee0670.seg 04379243\n03467517/points/9aaad035af7e6ab1ed724609df3eb104.pts 03467517/expert_verified/points_label/9aaad035af7e6ab1ed724609df3eb104.seg 03467517\n02691156/points/cf0cdaa94220ee3f4c3a35cee92bb95b.pts 02691156/expert_verified/points_label/cf0cdaa94220ee3f4c3a35cee92bb95b.seg 02691156\n02691156/points/48cb2de06f46cde25ed29e0a9f14425.pts 02691156/expert_verified/points_label/48cb2de06f46cde25ed29e0a9f14425.seg 02691156\n03001627/points/2f0a94efe6d1da7f8616812464c86290.pts 03001627/expert_verified/points_label/2f0a94efe6d1da7f8616812464c86290.seg 03001627\n02691156/points/e0385af10bddc6a0ca8607f540cc62ba.pts 02691156/expert_verified/points_label/e0385af10bddc6a0ca8607f540cc62ba.seg 02691156\n03467517/points/71139bd2ff6c4257280ec2e5049bb369.pts 03467517/expert_verified/points_label/71139bd2ff6c4257280ec2e5049bb369.seg 
03467517\n03001627/points/6251b398004a02fffebad4f49b26ec52.pts 03001627/expert_verified/points_label/6251b398004a02fffebad4f49b26ec52.seg 03001627\n03467517/points/7eba657565cc69e913f86abea5e4b9e0.pts 03467517/expert_verified/points_label/7eba657565cc69e913f86abea5e4b9e0.seg 03467517\n03001627/points/8d2fd4b9c583e1e6a12cdfe22cdc2f5d.pts 03001627/expert_verified/points_label/8d2fd4b9c583e1e6a12cdfe22cdc2f5d.seg 03001627\n03001627/points/ffa1e25f499e586694e98ee4fdfd7464.pts 03001627/expert_verified/points_label/ffa1e25f499e586694e98ee4fdfd7464.seg 03001627\n03797390/points/9af98540f45411467246665d3d3724c.pts 03797390/expert_verified/points_label/9af98540f45411467246665d3d3724c.seg 03797390\n02691156/points/b9fabfa6d5fedbc3a8e091cb544689d5.pts 02691156/expert_verified/points_label/b9fabfa6d5fedbc3a8e091cb544689d5.seg 02691156\n04379243/points/a2561614d015f2fdfebad4f49b26ec52.pts 04379243/expert_verified/points_label/a2561614d015f2fdfebad4f49b26ec52.seg 04379243\n03642806/points/2134ad3fc25a6284193a4c984002ed32.pts 03642806/expert_verified/points_label/2134ad3fc25a6284193a4c984002ed32.seg 03642806\n03001627/points/d3302b7fa6504cab1a461b43b8f257f.pts 03001627/expert_verified/points_label/d3302b7fa6504cab1a461b43b8f257f.seg 03001627\n03467517/points/bf7026f9814230414269db3f92b7aa5e.pts 03467517/expert_verified/points_label/bf7026f9814230414269db3f92b7aa5e.seg 03467517\n03636649/points/9aff9fdad0e3555c7eecb4e0df212ad9.pts 03636649/expert_verified/points_label/9aff9fdad0e3555c7eecb4e0df212ad9.seg 03636649\n03797390/points/a3cd44bbd3ba5b019a4cbf5d3b79df06.pts 03797390/expert_verified/points_label/a3cd44bbd3ba5b019a4cbf5d3b79df06.seg 03797390\n04099429/points/eff3a27a085e02e5146be45f8a3c1ff8.pts 04099429/expert_verified/points_label/eff3a27a085e02e5146be45f8a3c1ff8.seg 04099429\n02958343/points/1e3f494626a24badf35b4953d8add91f.pts 02958343/expert_verified/points_label/1e3f494626a24badf35b4953d8add91f.seg 02958343\n04379243/points/1f3e217cbc871152d7465eca206fda6f.pts 
04379243/expert_verified/points_label/1f3e217cbc871152d7465eca206fda6f.seg 04379243\n03636649/points/cef6757831b4d9738c8f019f17f4687c.pts 03636649/expert_verified/points_label/cef6757831b4d9738c8f019f17f4687c.seg 03636649\n04379243/points/e8689b8b1610bf2841bb8a7ba579a58.pts 04379243/expert_verified/points_label/e8689b8b1610bf2841bb8a7ba579a58.seg 04379243\n03001627/points/40168f46019eb867be7e1d42d63ca9f0.pts 03001627/expert_verified/points_label/40168f46019eb867be7e1d42d63ca9f0.seg 03001627\n03624134/points/7aed22a7074f16431cf05d6e4dbb95af.pts 03624134/expert_verified/points_label/7aed22a7074f16431cf05d6e4dbb95af.seg 03624134\n04379243/points/5d53ed3005f4dc6856786b90799c4fdb.pts 04379243/expert_verified/points_label/5d53ed3005f4dc6856786b90799c4fdb.seg 04379243\n04379243/points/beebc267ea0c16a5c7f6a57f6f73d8a6.pts 04379243/expert_verified/points_label/beebc267ea0c16a5c7f6a57f6f73d8a6.seg 04379243\n04379243/points/943d786e2df9251ec76aead7da70af41.pts 04379243/expert_verified/points_label/943d786e2df9251ec76aead7da70af41.seg 04379243\n04379243/points/90d87b4d9a5a1e78f4b6538438a0b930.pts 04379243/expert_verified/points_label/90d87b4d9a5a1e78f4b6538438a0b930.seg 04379243\n02958343/points/d47353fc60390df85d918097f81825e3.pts 02958343/expert_verified/points_label/d47353fc60390df85d918097f81825e3.seg 02958343\n03624134/points/90021da7c71f6bcbf02ee453ff283e26.pts 03624134/expert_verified/points_label/90021da7c71f6bcbf02ee453ff283e26.seg 03624134\n02958343/points/d1acd4916d3d3b57c48db2ed8f5e994c.pts 02958343/expert_verified/points_label/d1acd4916d3d3b57c48db2ed8f5e994c.seg 02958343\n03001627/points/1d1c829a54f0ae426cdb122727dd360f.pts 03001627/expert_verified/points_label/1d1c829a54f0ae426cdb122727dd360f.seg 03001627\n04379243/points/c35a14f84985f92a9856fa70a578baeb.pts 04379243/expert_verified/points_label/c35a14f84985f92a9856fa70a578baeb.seg 04379243\n03636649/points/5c5119a226e1ce9934804d261199e1bf.pts 
03636649/expert_verified/points_label/5c5119a226e1ce9934804d261199e1bf.seg 03636649\n03636649/points/6bb8020fa82b27dde11a3e838aa2c287.pts 03636649/expert_verified/points_label/6bb8020fa82b27dde11a3e838aa2c287.seg 03636649\n03797390/points/fad118b32085f3f2c2c72e575af174cd.pts 03797390/expert_verified/points_label/fad118b32085f3f2c2c72e575af174cd.seg 03797390\n04379243/points/a82387cf9d9d253aa06f94abffad1304.pts 04379243/expert_verified/points_label/a82387cf9d9d253aa06f94abffad1304.seg 04379243\n03948459/points/a7a340a901d63486260a770f90456bf7.pts 03948459/expert_verified/points_label/a7a340a901d63486260a770f90456bf7.seg 03948459\n03624134/points/60e7b05ddeeb48eb37fa2c3ecb75f337.pts 03624134/expert_verified/points_label/60e7b05ddeeb48eb37fa2c3ecb75f337.seg 03624134\n02958343/points/3e2c3cb4f4c65b9cde9d4070fcdfa604.pts 02958343/expert_verified/points_label/3e2c3cb4f4c65b9cde9d4070fcdfa604.seg 02958343\n03001627/points/d58df0968070bf3b4b3c42e318f3affc.pts 03001627/expert_verified/points_label/d58df0968070bf3b4b3c42e318f3affc.seg 03001627\n04379243/points/4a3641784a9ecca04fa8d6439169bda4.pts 04379243/expert_verified/points_label/4a3641784a9ecca04fa8d6439169bda4.seg 04379243\n04225987/points/d31aaca67fd8ef1827d17dabad15093.pts 04225987/expert_verified/points_label/d31aaca67fd8ef1827d17dabad15093.seg 04225987\n03001627/points/c51937167dd0db45f7628281ecb18112.pts 03001627/expert_verified/points_label/c51937167dd0db45f7628281ecb18112.seg 03001627\n04379243/points/768cb2332a16fd63855931d119219022.pts 04379243/expert_verified/points_label/768cb2332a16fd63855931d119219022.seg 04379243\n03001627/points/8c76176c82e3e42d283b00891f680579.pts 03001627/expert_verified/points_label/8c76176c82e3e42d283b00891f680579.seg 03001627\n03001627/points/d4d9b991ff7d31e8c8687ff9b0b4e4ac.pts 03001627/expert_verified/points_label/d4d9b991ff7d31e8c8687ff9b0b4e4ac.seg 03001627\n03797390/points/162201dfe14b73f0281365259d1cf342.pts 
03797390/expert_verified/points_label/162201dfe14b73f0281365259d1cf342.seg 03797390\n04379243/points/ed1e06e886b5514fe8f49d7c9e73ab9.pts 04379243/expert_verified/points_label/ed1e06e886b5514fe8f49d7c9e73ab9.seg 04379243\n03636649/points/90651b3febfc3afe15226aa76eb7c3e.pts 03636649/expert_verified/points_label/90651b3febfc3afe15226aa76eb7c3e.seg 03636649\n04379243/points/24b208dd138d8af36210db75a4cd581b.pts 04379243/expert_verified/points_label/24b208dd138d8af36210db75a4cd581b.seg 04379243\n03001627/points/439418b35f600f4bb10dc0fca58d0b2c.pts 03001627/expert_verified/points_label/439418b35f600f4bb10dc0fca58d0b2c.seg 03001627\n03636649/points/88257c5a48d94b1e2b151d8b52c53b90.pts 03636649/expert_verified/points_label/88257c5a48d94b1e2b151d8b52c53b90.seg 03636649\n02691156/points/ad546b049b2246bd609e2d916fa0da27.pts 02691156/expert_verified/points_label/ad546b049b2246bd609e2d916fa0da27.seg 02691156\n03001627/points/7efeece3b5cf2853d706779c93538ee1.pts 03001627/expert_verified/points_label/7efeece3b5cf2853d706779c93538ee1.seg 03001627\n04379243/points/30dd74f09af6b1c2fe5c8ffd0f5eba47.pts 04379243/expert_verified/points_label/30dd74f09af6b1c2fe5c8ffd0f5eba47.seg 04379243\n02691156/points/752d9a010346862551cfdb4c9f126c12.pts 02691156/expert_verified/points_label/752d9a010346862551cfdb4c9f126c12.seg 02691156\n03001627/points/d1237422881f4d22ff25b0c2db862d19.pts 03001627/expert_verified/points_label/d1237422881f4d22ff25b0c2db862d19.seg 03001627\n04379243/points/95af60aa8cb9be066a76e23e6f966dea.pts 04379243/expert_verified/points_label/95af60aa8cb9be066a76e23e6f966dea.seg 04379243\n02691156/points/556d2b99469e62e623a346a784afd6ba.pts 02691156/expert_verified/points_label/556d2b99469e62e623a346a784afd6ba.seg 02691156\n04379243/points/6e23179a3559775a65eacc25f128a1c5.pts 04379243/expert_verified/points_label/6e23179a3559775a65eacc25f128a1c5.seg 04379243\n02691156/points/3b82e575165383903c83f6e156ad107a.pts 
02691156/expert_verified/points_label/3b82e575165383903c83f6e156ad107a.seg 02691156\n02773838/points/71ead7f072106c63ed13f430b2941481.pts 02773838/expert_verified/points_label/71ead7f072106c63ed13f430b2941481.seg 02773838\n03001627/points/c9d68e1e5309ac25ac57e7d566628472.pts 03001627/expert_verified/points_label/c9d68e1e5309ac25ac57e7d566628472.seg 03001627\n02691156/points/b3a59a941500e76535592b447835a16e.pts 02691156/expert_verified/points_label/b3a59a941500e76535592b447835a16e.seg 02691156\n03797390/points/4d9764afa3fbeb1b6c69dceb67157a66.pts 03797390/expert_verified/points_label/4d9764afa3fbeb1b6c69dceb67157a66.seg 03797390\n04379243/points/68ea1f319a9d724ec3bd24f986301745.pts 04379243/expert_verified/points_label/68ea1f319a9d724ec3bd24f986301745.seg 04379243\n03001627/points/30363681727c804095937f6e581cbd41.pts 03001627/expert_verified/points_label/30363681727c804095937f6e581cbd41.seg 03001627\n03001627/points/f4f1aba65ebe48eb70930286c914896b.pts 03001627/expert_verified/points_label/f4f1aba65ebe48eb70930286c914896b.seg 03001627\n02691156/points/a3fc9ef9f611a783525e60273896d30a.pts 02691156/expert_verified/points_label/a3fc9ef9f611a783525e60273896d30a.seg 02691156\n03636649/points/b0871c4ac8505d9c3d39d8012919dd25.pts 03636649/expert_verified/points_label/b0871c4ac8505d9c3d39d8012919dd25.seg 03636649\n03001627/points/d7e26a070ee3b35cdf6cfab91d65bb91.pts 03001627/expert_verified/points_label/d7e26a070ee3b35cdf6cfab91d65bb91.seg 03001627\n04379243/points/9012c6ca245c1bf4e6c5cd45aa112726.pts 04379243/expert_verified/points_label/9012c6ca245c1bf4e6c5cd45aa112726.seg 04379243\n03636649/points/3ab9e4300cee0259f72e8839e840c146.pts 03636649/expert_verified/points_label/3ab9e4300cee0259f72e8839e840c146.seg 03636649\n04379243/points/6e0fed54fcae8a62edccc47bf0dcf5d3.pts 04379243/expert_verified/points_label/6e0fed54fcae8a62edccc47bf0dcf5d3.seg 04379243\n04379243/points/aafc579804cc095cbababe11fcea8796.pts 
04379243/expert_verified/points_label/aafc579804cc095cbababe11fcea8796.seg 04379243\n03636649/points/9adee08c737c7c134c6deb9ede0648df.pts 03636649/expert_verified/points_label/9adee08c737c7c134c6deb9ede0648df.seg 03636649\n02691156/points/f39985959d394f8c863ab010b80d9ed.pts 02691156/expert_verified/points_label/f39985959d394f8c863ab010b80d9ed.seg 02691156\n04379243/points/23d4170c7a0a2a014b3c42e318f3affc.pts 04379243/expert_verified/points_label/23d4170c7a0a2a014b3c42e318f3affc.seg 04379243\n04379243/points/a1593fbe3a78c7858795000a72749c36.pts 04379243/expert_verified/points_label/a1593fbe3a78c7858795000a72749c36.seg 04379243\n03001627/points/4b2ede169dcc83ce4591019e9d133858.pts 03001627/expert_verified/points_label/4b2ede169dcc83ce4591019e9d133858.seg 03001627\n03001627/points/3fa1eeed2e8e2534febad4f49b26ec52.pts 03001627/expert_verified/points_label/3fa1eeed2e8e2534febad4f49b26ec52.seg 03001627\n04379243/points/e8ba9621aef9395a3019620286259e2c.pts 04379243/expert_verified/points_label/e8ba9621aef9395a3019620286259e2c.seg 04379243\n03001627/points/875925d42780159ffebad4f49b26ec52.pts 03001627/expert_verified/points_label/875925d42780159ffebad4f49b26ec52.seg 03001627\n03001627/points/548ab6b6e8b2dc505ff61a3a2a0e2484.pts 03001627/expert_verified/points_label/548ab6b6e8b2dc505ff61a3a2a0e2484.seg 03001627\n03467517/points/4f401d78068a9d348ee96618ee16ca27.pts 03467517/expert_verified/points_label/4f401d78068a9d348ee96618ee16ca27.seg 03467517\n04379243/points/f7600660924857c0d31d0d81bfe9c743.pts 04379243/expert_verified/points_label/f7600660924857c0d31d0d81bfe9c743.seg 04379243\n04379243/points/edba7eb533ae3578ece232edf44331c7.pts 04379243/expert_verified/points_label/edba7eb533ae3578ece232edf44331c7.seg 04379243\n03001627/points/8b8fa92f9c677b0713decb1a0563b12.pts 03001627/expert_verified/points_label/8b8fa92f9c677b0713decb1a0563b12.seg 03001627\n02691156/points/81e6b629264dad5daf2c6c19cc41708a.pts 
02691156/expert_verified/points_label/81e6b629264dad5daf2c6c19cc41708a.seg 02691156\n02691156/points/a0a7e673a1e1bca78699933784576e73.pts 02691156/expert_verified/points_label/a0a7e673a1e1bca78699933784576e73.seg 02691156\n03636649/points/f01358d4f45cae23ce670f026edf07e5.pts 03636649/expert_verified/points_label/f01358d4f45cae23ce670f026edf07e5.seg 03636649\n03001627/points/808fa82fe9ad86d9f1cc184b6fa3e1f9.pts 03001627/expert_verified/points_label/808fa82fe9ad86d9f1cc184b6fa3e1f9.seg 03001627\n02691156/points/57937c7ab42260ebf119374ee5d5f944.pts 02691156/expert_verified/points_label/57937c7ab42260ebf119374ee5d5f944.seg 02691156\n03001627/points/fbddac94cfa74a7b5c0228148b88226c.pts 03001627/expert_verified/points_label/fbddac94cfa74a7b5c0228148b88226c.seg 03001627\n04379243/points/ad92bfc65465091c48d90eef8384210.pts 04379243/expert_verified/points_label/ad92bfc65465091c48d90eef8384210.seg 04379243\n03467517/points/6ce23c82af30b629e8f705eb96ba3376.pts 03467517/expert_verified/points_label/6ce23c82af30b629e8f705eb96ba3376.seg 03467517\n03001627/points/bd1787066323c7a64424fc4d3c9cb157.pts 03001627/expert_verified/points_label/bd1787066323c7a64424fc4d3c9cb157.seg 03001627\n03001627/points/uca24feec-f0c0-454c-baaf-561530686f40.pts 03001627/expert_verified/points_label/uca24feec-f0c0-454c-baaf-561530686f40.seg 03001627\n03001627/points/226704c72560008421ceb39dc3069834.pts 03001627/expert_verified/points_label/226704c72560008421ceb39dc3069834.seg 03001627\n02691156/points/2c49289098e4492bca8607f540cc62ba.pts 02691156/expert_verified/points_label/2c49289098e4492bca8607f540cc62ba.seg 02691156\n03001627/points/cff9a523a9e20eaeb40f0ac0fb9a650d.pts 03001627/expert_verified/points_label/cff9a523a9e20eaeb40f0ac0fb9a650d.seg 03001627\n04379243/points/38e90183c838f443b43753a53e4593db.pts 04379243/expert_verified/points_label/38e90183c838f443b43753a53e4593db.seg 04379243\n04379243/points/8b4ec70a3c1283b1fb5f8baea920e189.pts 
04379243/expert_verified/points_label/8b4ec70a3c1283b1fb5f8baea920e189.seg 04379243\n04379243/points/59a1703cb9320c018f49a52c8d710d0f.pts 04379243/expert_verified/points_label/59a1703cb9320c018f49a52c8d710d0f.seg 04379243\n03636649/points/4ba237c2c40313f373b3ec02b97cb0f.pts 03636649/expert_verified/points_label/4ba237c2c40313f373b3ec02b97cb0f.seg 03636649\n04379243/points/bb027ed892722b1f3399de188dc5ee56.pts 04379243/expert_verified/points_label/bb027ed892722b1f3399de188dc5ee56.seg 04379243\n03467517/points/8b1d0f73e54ef59c93f0194265a9746c.pts 03467517/expert_verified/points_label/8b1d0f73e54ef59c93f0194265a9746c.seg 03467517\n03467517/points/1300e8bafb819c8e1887f40a4f62df44.pts 03467517/expert_verified/points_label/1300e8bafb819c8e1887f40a4f62df44.seg 03467517\n03642806/points/9fa387d7f442b96e75e60c00fabe2744.pts 03642806/expert_verified/points_label/9fa387d7f442b96e75e60c00fabe2744.seg 03642806\n04379243/points/e153f757330a4ea3cdd1f51ef2b8f2ed.pts 04379243/expert_verified/points_label/e153f757330a4ea3cdd1f51ef2b8f2ed.seg 04379243\n03636649/points/d00157a022079bdef3655a2ce983ab1f.pts 03636649/expert_verified/points_label/d00157a022079bdef3655a2ce983ab1f.seg 03636649\n04379243/points/9eeea5f7b030ff6ac155f88004a92bc8.pts 04379243/expert_verified/points_label/9eeea5f7b030ff6ac155f88004a92bc8.seg 04379243\n04379243/points/10ed64b4c7eb6d9311ee7ca4f000feba.pts 04379243/expert_verified/points_label/10ed64b4c7eb6d9311ee7ca4f000feba.seg 04379243\n03001627/points/6db2255a51caf84e823e7e244bf84209.pts 03001627/expert_verified/points_label/6db2255a51caf84e823e7e244bf84209.seg 03001627\n03001627/points/8ddaa112e6ba36b5b1e23c7675c49239.pts 03001627/expert_verified/points_label/8ddaa112e6ba36b5b1e23c7675c49239.seg 03001627\n04379243/points/7813f4e4c0a58118cbb8bac2032149c.pts 04379243/expert_verified/points_label/7813f4e4c0a58118cbb8bac2032149c.seg 04379243\n03797390/points/336122c3105440d193e42e2720468bf0.pts 
03797390/expert_verified/points_label/336122c3105440d193e42e2720468bf0.seg 03797390\n03001627/points/f2e2993abf4c952b2e69a7e134f91051.pts 03001627/expert_verified/points_label/f2e2993abf4c952b2e69a7e134f91051.seg 03001627\n04379243/points/627248fa64c1db5fab610b0c94236463.pts 04379243/expert_verified/points_label/627248fa64c1db5fab610b0c94236463.seg 04379243\n04379243/points/3b465822b34ed20ca05d3424fd8d541a.pts 04379243/expert_verified/points_label/3b465822b34ed20ca05d3424fd8d541a.seg 04379243\n03467517/points/a7ddf2e5b9dc278293f0194265a9746c.pts 03467517/expert_verified/points_label/a7ddf2e5b9dc278293f0194265a9746c.seg 03467517\n03636649/points/b36bfbbc98cb45431735ea0e092a805a.pts 03636649/expert_verified/points_label/b36bfbbc98cb45431735ea0e092a805a.seg 03636649\n04379243/points/7d14ae7d0b7338bda0ab1d82ef09f78f.pts 04379243/expert_verified/points_label/7d14ae7d0b7338bda0ab1d82ef09f78f.seg 04379243\n03467517/points/f7645b3c690d954682c2412261cb8600.pts 03467517/expert_verified/points_label/f7645b3c690d954682c2412261cb8600.seg 03467517\n02958343/points/41a6deadd39b4c754d0f9a1ef5f184fe.pts 02958343/expert_verified/points_label/41a6deadd39b4c754d0f9a1ef5f184fe.seg 02958343\n02691156/points/f74cbd91e6fb40dfce5965228d7e8c9f.pts 02691156/expert_verified/points_label/f74cbd91e6fb40dfce5965228d7e8c9f.seg 02691156\n04379243/points/6c4c3bfe275e66b1b75e606711562bfc.pts 04379243/expert_verified/points_label/6c4c3bfe275e66b1b75e606711562bfc.seg 04379243\n04379243/points/7d358a01c9467815a9505c473725122e.pts 04379243/expert_verified/points_label/7d358a01c9467815a9505c473725122e.seg 04379243\n04379243/points/5fe3476df92392e1397aad305ec14786.pts 04379243/expert_verified/points_label/5fe3476df92392e1397aad305ec14786.seg 04379243\n03001627/points/34d3960d35d8d5219b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/34d3960d35d8d5219b9f2eb77f5e247e.seg 03001627\n03001627/points/1b67a3a1101a9acb905477d2a8504646.pts 
03001627/expert_verified/points_label/1b67a3a1101a9acb905477d2a8504646.seg 03001627\n03001627/points/ee4858f78dc33591100e9bd5c4b0af54.pts 03001627/expert_verified/points_label/ee4858f78dc33591100e9bd5c4b0af54.seg 03001627\n03001627/points/a578b0027e7d9ec7b2ca3ea77e53abe.pts 03001627/expert_verified/points_label/a578b0027e7d9ec7b2ca3ea77e53abe.seg 03001627\n02691156/points/916950e40ca7aabc8b96ae1a0a8b84ec.pts 02691156/expert_verified/points_label/916950e40ca7aabc8b96ae1a0a8b84ec.seg 02691156\n04379243/points/1abfb0c03c81fc2219fb4103277a6b93.pts 04379243/expert_verified/points_label/1abfb0c03c81fc2219fb4103277a6b93.seg 04379243\n02691156/points/a702da03d770f5096e2738fc9da60e6f.pts 02691156/expert_verified/points_label/a702da03d770f5096e2738fc9da60e6f.seg 02691156\n04379243/points/2e2894138df855b26f88aa1b7f7cc6c6.pts 04379243/expert_verified/points_label/2e2894138df855b26f88aa1b7f7cc6c6.seg 04379243\n03001627/points/589cd6a1f4367fd834b707eb14b2a4a5.pts 03001627/expert_verified/points_label/589cd6a1f4367fd834b707eb14b2a4a5.seg 03001627\n03636649/points/f8534299ecce5c16eaf14273fa406ffc.pts 03636649/expert_verified/points_label/f8534299ecce5c16eaf14273fa406ffc.seg 03636649\n04379243/points/ea96b8a866121d1abed1bd9593e318c.pts 04379243/expert_verified/points_label/ea96b8a866121d1abed1bd9593e318c.seg 04379243\n03624134/points/9746101f20473d346bbd83c2bc4c3b2e.pts 03624134/expert_verified/points_label/9746101f20473d346bbd83c2bc4c3b2e.seg 03624134\n02958343/points/9c4a3879c71df693af0f25977186b501.pts 02958343/expert_verified/points_label/9c4a3879c71df693af0f25977186b501.seg 02958343\n03001627/points/6621723f7af35f2dcd344c2b2cefcda6.pts 03001627/expert_verified/points_label/6621723f7af35f2dcd344c2b2cefcda6.seg 03001627\n03948459/points/8c9e592c95f95e7c9a6e43b878d5b335.pts 03948459/expert_verified/points_label/8c9e592c95f95e7c9a6e43b878d5b335.seg 03948459\n04379243/points/36a6d851dbe02410ad16260d4d73b56.pts 
04379243/expert_verified/points_label/36a6d851dbe02410ad16260d4d73b56.seg 04379243\n04379243/points/b1ca280d9567270ade98d10ab5975b59.pts 04379243/expert_verified/points_label/b1ca280d9567270ade98d10ab5975b59.seg 04379243\n03467517/points/5ed99a0b793e1f5ee52744498b9b3051.pts 03467517/expert_verified/points_label/5ed99a0b793e1f5ee52744498b9b3051.seg 03467517\n03001627/points/18fd8342fa5d1d4f5268b70948af88b2.pts 03001627/expert_verified/points_label/18fd8342fa5d1d4f5268b70948af88b2.seg 03001627\n02691156/points/cc60baa1a796f5c14c3a35cee92bb95b.pts 02691156/expert_verified/points_label/cc60baa1a796f5c14c3a35cee92bb95b.seg 02691156\n03642806/points/3237f5cd4bca555955357c338ec9641.pts 03642806/expert_verified/points_label/3237f5cd4bca555955357c338ec9641.seg 03642806\n03001627/points/fee248777c9c4807f8bc1f8036e08e44.pts 03001627/expert_verified/points_label/fee248777c9c4807f8bc1f8036e08e44.seg 03001627\n04379243/points/2d90a1998eca8778dcfcef693e7ec696.pts 04379243/expert_verified/points_label/2d90a1998eca8778dcfcef693e7ec696.seg 04379243\n02958343/points/3ef7cfbc172840b2393bf61b30c528bb.pts 02958343/expert_verified/points_label/3ef7cfbc172840b2393bf61b30c528bb.seg 02958343\n02691156/points/240fd3c1fd804ec1b8cf782e8c539948.pts 02691156/expert_verified/points_label/240fd3c1fd804ec1b8cf782e8c539948.seg 02691156\n04379243/points/60c931dcc6d0982944bda2555d37e46.pts 04379243/expert_verified/points_label/60c931dcc6d0982944bda2555d37e46.seg 04379243\n04379243/points/93040a14fad5588ed889130b88839a0c.pts 04379243/expert_verified/points_label/93040a14fad5588ed889130b88839a0c.seg 04379243\n02958343/points/a75ff576da012340468bac13e007a6e9.pts 02958343/expert_verified/points_label/a75ff576da012340468bac13e007a6e9.seg 02958343\n03467517/points/57286d92604c9ebea3d3eb77b119df6d.pts 03467517/expert_verified/points_label/57286d92604c9ebea3d3eb77b119df6d.seg 03467517\n03636649/points/913ba6b6ac6aea3356c82fefb25b338b.pts 
03636649/expert_verified/points_label/913ba6b6ac6aea3356c82fefb25b338b.seg 03636649\n03001627/points/cce9ffdcc7ca8ddea300840c9d7bfa74.pts 03001627/expert_verified/points_label/cce9ffdcc7ca8ddea300840c9d7bfa74.seg 03001627\n04379243/points/913c0ff011ad0658dcfcef693e7ec696.pts 04379243/expert_verified/points_label/913c0ff011ad0658dcfcef693e7ec696.seg 04379243\n03001627/points/9d0b25421c13008e35836c728d324152.pts 03001627/expert_verified/points_label/9d0b25421c13008e35836c728d324152.seg 03001627\n03797390/points/a8f7a0edd3edc3299e54b4084dc33544.pts 03797390/expert_verified/points_label/a8f7a0edd3edc3299e54b4084dc33544.seg 03797390\n04379243/points/5b9a7b7952996844d802aa676be38da2.pts 04379243/expert_verified/points_label/5b9a7b7952996844d802aa676be38da2.seg 04379243\n02954340/points/4bd0b6df02772d8f59c9250a427b57f.pts 02954340/expert_verified/points_label/4bd0b6df02772d8f59c9250a427b57f.seg 02954340\n02958343/points/a72134cd499fd1c4f79e091fa09130a.pts 02958343/expert_verified/points_label/a72134cd499fd1c4f79e091fa09130a.seg 02958343\n04379243/points/cc6fbdc6f2aa5ea3d889130b88839a0c.pts 04379243/expert_verified/points_label/cc6fbdc6f2aa5ea3d889130b88839a0c.seg 04379243\n03624134/points/85ced924eedc6ff566b5b592ed1ddee0.pts 03624134/expert_verified/points_label/85ced924eedc6ff566b5b592ed1ddee0.seg 03624134\n03001627/points/60622d74c0712934a5817f81a1efa3cc.pts 03001627/expert_verified/points_label/60622d74c0712934a5817f81a1efa3cc.seg 03001627\n04379243/points/2633f011b236a8979070b65ce7b4b532.pts 04379243/expert_verified/points_label/2633f011b236a8979070b65ce7b4b532.seg 04379243\n03001627/points/9d9d69e5f2bc80a867903707764646db.pts 03001627/expert_verified/points_label/9d9d69e5f2bc80a867903707764646db.seg 03001627\n03001627/points/ce463d63d8771c5ccf19858fd1963d10.pts 03001627/expert_verified/points_label/ce463d63d8771c5ccf19858fd1963d10.seg 03001627\n04379243/points/ad17445446e4fd3adcfcef693e7ec696.pts 
04379243/expert_verified/points_label/ad17445446e4fd3adcfcef693e7ec696.seg 04379243\n03001627/points/71372c1f20b6a04c43c40c5aa3d5c5b7.pts 03001627/expert_verified/points_label/71372c1f20b6a04c43c40c5aa3d5c5b7.seg 03001627\n02691156/points/9436273fc1a5e3ca7af159eaf7625abf.pts 02691156/expert_verified/points_label/9436273fc1a5e3ca7af159eaf7625abf.seg 02691156\n03797390/points/b98fa11a567f644344b25d683fe71de.pts 03797390/expert_verified/points_label/b98fa11a567f644344b25d683fe71de.seg 03797390\n02691156/points/53eee66291c47a91bc0909d98a1ff2b4.pts 02691156/expert_verified/points_label/53eee66291c47a91bc0909d98a1ff2b4.seg 02691156\n03642806/points/e55ececde88255b93e73f3893a7337bb.pts 03642806/expert_verified/points_label/e55ececde88255b93e73f3893a7337bb.seg 03642806\n02958343/points/1079efee042629d4ce28f0f1b509eda.pts 02958343/expert_verified/points_label/1079efee042629d4ce28f0f1b509eda.seg 02958343\n03001627/points/c826c65111c867ab45a1df43bcd9e471.pts 03001627/expert_verified/points_label/c826c65111c867ab45a1df43bcd9e471.seg 03001627\n02958343/points/39201299cf83ec2577763486d77d1cb.pts 02958343/expert_verified/points_label/39201299cf83ec2577763486d77d1cb.seg 02958343\n04379243/points/e8c01f71fd941af11190e285a2cbc9c.pts 04379243/expert_verified/points_label/e8c01f71fd941af11190e285a2cbc9c.seg 04379243\n03001627/points/948f1555282e27da190c615a2115d2f7.pts 03001627/expert_verified/points_label/948f1555282e27da190c615a2115d2f7.seg 03001627\n02691156/points/ca4ec545363b3b8e8c2814a4ead9cb90.pts 02691156/expert_verified/points_label/ca4ec545363b3b8e8c2814a4ead9cb90.seg 02691156\n03001627/points/b8f4ce34b44620cc9b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/b8f4ce34b44620cc9b9f2eb77f5e247e.seg 03001627\n02958343/points/188621bbfc7d9477ce27281f3b76d1f5.pts 02958343/expert_verified/points_label/188621bbfc7d9477ce27281f3b76d1f5.seg 02958343\n04379243/points/9a71b92445cd3f023a9bc242c86fb7a0.pts 04379243/expert_verified/points_label/9a71b92445cd3f023a9bc242c86fb7a0.seg 
04379243\n03001627/points/4372b33dfc84c2f56a9ab6fc87e1604e.pts 03001627/expert_verified/points_label/4372b33dfc84c2f56a9ab6fc87e1604e.seg 03001627\n03001627/points/b16f1858c1a7c0a65001cb19c4a0eee4.pts 03001627/expert_verified/points_label/b16f1858c1a7c0a65001cb19c4a0eee4.seg 03001627\n03467517/points/5238adec0790595930c206f77b5cb4d0.pts 03467517/expert_verified/points_label/5238adec0790595930c206f77b5cb4d0.seg 03467517\n02958343/points/3ec7f0347638f7a891eea2fc80d4a25f.pts 02958343/expert_verified/points_label/3ec7f0347638f7a891eea2fc80d4a25f.seg 02958343\n02691156/points/32e7224d196e5866bd564bd76cf3cbec.pts 02691156/expert_verified/points_label/32e7224d196e5866bd564bd76cf3cbec.seg 02691156\n04379243/points/f9beeefdebf70350f4b6538438a0b930.pts 04379243/expert_verified/points_label/f9beeefdebf70350f4b6538438a0b930.seg 04379243\n04379243/points/acbc99e153b9d4d419fb4103277a6b93.pts 04379243/expert_verified/points_label/acbc99e153b9d4d419fb4103277a6b93.seg 04379243\n03467517/points/8ebc3d48afeceec752561cc0fb924c36.pts 03467517/expert_verified/points_label/8ebc3d48afeceec752561cc0fb924c36.seg 03467517\n04379243/points/966cef675324e416cd415550f639925.pts 04379243/expert_verified/points_label/966cef675324e416cd415550f639925.seg 04379243\n03636649/points/85f71a4724fa37c33d39d8012919dd25.pts 03636649/expert_verified/points_label/85f71a4724fa37c33d39d8012919dd25.seg 03636649\n03636649/points/370623095c9773e42ce7d46577f8a9bd.pts 03636649/expert_verified/points_label/370623095c9773e42ce7d46577f8a9bd.seg 03636649\n03624134/points/bbe934c9cdca9c1839ec49305bb07d3d.pts 03624134/expert_verified/points_label/bbe934c9cdca9c1839ec49305bb07d3d.seg 03624134\n02958343/points/d22a2d20acbdca70c972ff3f74d38438.pts 02958343/expert_verified/points_label/d22a2d20acbdca70c972ff3f74d38438.seg 02958343\n02958343/points/ff3c8e21a48ed17cc1bcae9def1986da.pts 02958343/expert_verified/points_label/ff3c8e21a48ed17cc1bcae9def1986da.seg 02958343\n03001627/points/fd5ca05b59b30241d838ae16242881dc.pts 
03001627/expert_verified/points_label/fd5ca05b59b30241d838ae16242881dc.seg 03001627\n02691156/points/e3aff5ae3e8f2a7c4c2c88971423d0be.pts 02691156/expert_verified/points_label/e3aff5ae3e8f2a7c4c2c88971423d0be.seg 02691156\n02691156/points/b4575e5e6161fd497b164268a44f7712.pts 02691156/expert_verified/points_label/b4575e5e6161fd497b164268a44f7712.seg 02691156\n03467517/points/153e7883f6cf0e66d57700c05b1862d8.pts 03467517/expert_verified/points_label/153e7883f6cf0e66d57700c05b1862d8.seg 03467517\n03642806/points/4fc3d56243d2d8801ef1ccfaf50f2048.pts 03642806/expert_verified/points_label/4fc3d56243d2d8801ef1ccfaf50f2048.seg 03642806\n04379243/points/ec9861c234daf6bc915f51b5f5e95ffa.pts 04379243/expert_verified/points_label/ec9861c234daf6bc915f51b5f5e95ffa.seg 04379243\n03001627/points/7114ef00fe68d053cccbd142483bf2e7.pts 03001627/expert_verified/points_label/7114ef00fe68d053cccbd142483bf2e7.seg 03001627\n02691156/points/e812f54386acd072d44f37c9e0fb10d0.pts 02691156/expert_verified/points_label/e812f54386acd072d44f37c9e0fb10d0.seg 02691156\n03001627/points/5490efbdadce792f524f4eb395a8604.pts 03001627/expert_verified/points_label/5490efbdadce792f524f4eb395a8604.seg 03001627\n03948459/points/42740af029297f1d9874fa4c7b1a4298.pts 03948459/expert_verified/points_label/42740af029297f1d9874fa4c7b1a4298.seg 03948459\n03001627/points/d1ec6e9b8063b7efd7f7a4c4609b0913.pts 03001627/expert_verified/points_label/d1ec6e9b8063b7efd7f7a4c4609b0913.seg 03001627\n04379243/points/4b11be42b0c0482dd94faaee2b20e2bf.pts 04379243/expert_verified/points_label/4b11be42b0c0482dd94faaee2b20e2bf.seg 04379243\n03001627/points/d29971cef754cc91cd8c5d1ba690a2c3.pts 03001627/expert_verified/points_label/d29971cef754cc91cd8c5d1ba690a2c3.seg 03001627\n04379243/points/8cc8485f249a37f595b25bd3accf45b5.pts 04379243/expert_verified/points_label/8cc8485f249a37f595b25bd3accf45b5.seg 04379243\n04379243/points/bb5dbf708d5eb7f82099f9e22ca45b04.pts 
04379243/expert_verified/points_label/bb5dbf708d5eb7f82099f9e22ca45b04.seg 04379243\n03001627/points/c1b64fef5f3efa0a129905ebfd12d5cd.pts 03001627/expert_verified/points_label/c1b64fef5f3efa0a129905ebfd12d5cd.seg 03001627\n04379243/points/e58e958428584b2b79972b30518c97e2.pts 04379243/expert_verified/points_label/e58e958428584b2b79972b30518c97e2.seg 04379243\n03790512/points/90a521e0def2631fd5dde04c96fd8146.pts 03790512/expert_verified/points_label/90a521e0def2631fd5dde04c96fd8146.seg 03790512\n03467517/points/fcab134da044e5fc77f469126771fc30.pts 03467517/expert_verified/points_label/fcab134da044e5fc77f469126771fc30.seg 03467517\n03001627/points/1d6faeb6d77d1f2cf95cd8df6bebbc3a.pts 03001627/expert_verified/points_label/1d6faeb6d77d1f2cf95cd8df6bebbc3a.seg 03001627\n04379243/points/e993ddaf6d03003071a782a4379556c7.pts 04379243/expert_verified/points_label/e993ddaf6d03003071a782a4379556c7.seg 04379243\n03001627/points/702cebffa33a19f019f079d1b712f46f.pts 03001627/expert_verified/points_label/702cebffa33a19f019f079d1b712f46f.seg 03001627\n03790512/points/7b4eb8cbc470d0d6d5dde04c96fd8146.pts 03790512/expert_verified/points_label/7b4eb8cbc470d0d6d5dde04c96fd8146.seg 03790512\n03001627/points/9515e377c1ec86529b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/9515e377c1ec86529b9f2eb77f5e247e.seg 03001627\n03001627/points/9c3d7b65c739a618285330f26226f8fb.pts 03001627/expert_verified/points_label/9c3d7b65c739a618285330f26226f8fb.seg 03001627\n03790512/points/8ed4bdaf0c8b88ea8b31e74d456742c7.pts 03790512/expert_verified/points_label/8ed4bdaf0c8b88ea8b31e74d456742c7.seg 03790512\n02958343/points/6ed2957beeb7940a9fbaa69916aaebda.pts 02958343/expert_verified/points_label/6ed2957beeb7940a9fbaa69916aaebda.seg 02958343\n03001627/points/37e2b82d5e9dde21cbde89e0c48a01bf.pts 03001627/expert_verified/points_label/37e2b82d5e9dde21cbde89e0c48a01bf.seg 03001627\n04379243/points/1b6bd64fda74bdc4d6983f351200ac6a.pts 
04379243/expert_verified/points_label/1b6bd64fda74bdc4d6983f351200ac6a.seg 04379243\n04379243/points/531381f5bbc69e485769b3af36a2ff9f.pts 04379243/expert_verified/points_label/531381f5bbc69e485769b3af36a2ff9f.seg 04379243\n03790512/points/992fbae5178edcbc4e31d0cb4d7568.pts 03790512/expert_verified/points_label/992fbae5178edcbc4e31d0cb4d7568.seg 03790512\n04379243/points/65e7fd8d158658106a76e23e6f966dea.pts 04379243/expert_verified/points_label/65e7fd8d158658106a76e23e6f966dea.seg 04379243\n02691156/points/2229bc4e646f506679f56e78e8640bfb.pts 02691156/expert_verified/points_label/2229bc4e646f506679f56e78e8640bfb.seg 02691156\n02954340/points/f40b47fcbf83b962f0d11ae402ef940e.pts 02954340/expert_verified/points_label/f40b47fcbf83b962f0d11ae402ef940e.seg 02954340\n02773838/points/cbc2328cadf8dc573394926146371698.pts 02773838/expert_verified/points_label/cbc2328cadf8dc573394926146371698.seg 02773838\n02958343/points/3c6d7c6ce950917b3a93df79ef2b80ef.pts 02958343/expert_verified/points_label/3c6d7c6ce950917b3a93df79ef2b80ef.seg 02958343\n02958343/points/2ccaaa66525d7f095473e57e894e0ef5.pts 02958343/expert_verified/points_label/2ccaaa66525d7f095473e57e894e0ef5.seg 02958343\n02691156/points/70d9304de59792a9515d73fcb34092fc.pts 02691156/expert_verified/points_label/70d9304de59792a9515d73fcb34092fc.seg 02691156\n03001627/points/2ed8d45343a442097869557127addfc0.pts 03001627/expert_verified/points_label/2ed8d45343a442097869557127addfc0.seg 03001627\n04379243/points/84f5e52756fc84f86df14337f24e49f4.pts 04379243/expert_verified/points_label/84f5e52756fc84f86df14337f24e49f4.seg 04379243\n03001627/points/b33a3b1627ad61eb8ca4809dcf42fe1.pts 03001627/expert_verified/points_label/b33a3b1627ad61eb8ca4809dcf42fe1.seg 03001627\n04379243/points/369c19c0971221f3664b3b9b23ddfcbc.pts 04379243/expert_verified/points_label/369c19c0971221f3664b3b9b23ddfcbc.seg 04379243\n03642806/points/5a13f7551c20eb29f3ebfe51dc60263e.pts 
03642806/expert_verified/points_label/5a13f7551c20eb29f3ebfe51dc60263e.seg 03642806\n04379243/points/1b01ef65920c342323bdffac38e6b250.pts 04379243/expert_verified/points_label/1b01ef65920c342323bdffac38e6b250.seg 04379243\n02691156/points/9b687f9cff46d43d89c2da356f872ebc.pts 02691156/expert_verified/points_label/9b687f9cff46d43d89c2da356f872ebc.seg 02691156\n04379243/points/746ceaf694d85eb5d5192f88466da1dc.pts 04379243/expert_verified/points_label/746ceaf694d85eb5d5192f88466da1dc.seg 04379243\n04379243/points/9f4eb0d734a2b7a4ab610b0c94236463.pts 04379243/expert_verified/points_label/9f4eb0d734a2b7a4ab610b0c94236463.seg 04379243\n03001627/points/a1213da0e7efffcafebad4f49b26ec52.pts 03001627/expert_verified/points_label/a1213da0e7efffcafebad4f49b26ec52.seg 03001627\n02958343/points/71b00ea32b1810ac373af83f3f2fe606.pts 02958343/expert_verified/points_label/71b00ea32b1810ac373af83f3f2fe606.seg 02958343\n02691156/points/52a84fea7c314f4c3dfc741b4df74043.pts 02691156/expert_verified/points_label/52a84fea7c314f4c3dfc741b4df74043.seg 02691156\n02958343/points/9f3c463272d13d39eb7780cdb3ece367.pts 02958343/expert_verified/points_label/9f3c463272d13d39eb7780cdb3ece367.seg 02958343\n03001627/points/def03f645b3fbd665bb93149cc0adf0.pts 03001627/expert_verified/points_label/def03f645b3fbd665bb93149cc0adf0.seg 03001627\n03001627/points/f9e386d968653602d68fb8f5d99affa0.pts 03001627/expert_verified/points_label/f9e386d968653602d68fb8f5d99affa0.seg 03001627\n03467517/points/9c399ebc617349dcd016bd20f13ab302.pts 03467517/expert_verified/points_label/9c399ebc617349dcd016bd20f13ab302.seg 03467517\n04379243/points/aaaba1bbe037d3b1e406974af41e8842.pts 04379243/expert_verified/points_label/aaaba1bbe037d3b1e406974af41e8842.seg 04379243\n03001627/points/4030ea84b560b857febad4f49b26ec52.pts 03001627/expert_verified/points_label/4030ea84b560b857febad4f49b26ec52.seg 03001627\n04379243/points/a38405108fb416d8356ca1f9220b9968.pts 
04379243/expert_verified/points_label/a38405108fb416d8356ca1f9220b9968.seg 04379243\n04379243/points/f864677894410315ab610b0c94236463.pts 04379243/expert_verified/points_label/f864677894410315ab610b0c94236463.seg 04379243\n02954340/points/da5e5ec4c486d6c03baa6271927f050e.pts 02954340/expert_verified/points_label/da5e5ec4c486d6c03baa6271927f050e.seg 02954340\n02691156/points/eed299b690be51ffbd931fcaa69140.pts 02691156/expert_verified/points_label/eed299b690be51ffbd931fcaa69140.seg 02691156\n03797390/points/b4ae56d6638d5338de671f28c83d2dcb.pts 03797390/expert_verified/points_label/b4ae56d6638d5338de671f28c83d2dcb.seg 03797390\n04379243/points/10cc8c941fc8aeaa71a782a4379556c7.pts 04379243/expert_verified/points_label/10cc8c941fc8aeaa71a782a4379556c7.seg 04379243\n03636649/points/61b57e8b5da8fb13d527a9a6f5a872b9.pts 03636649/expert_verified/points_label/61b57e8b5da8fb13d527a9a6f5a872b9.seg 03636649\n02691156/points/ae4a9574248395b671d03b466c72ce41.pts 02691156/expert_verified/points_label/ae4a9574248395b671d03b466c72ce41.seg 02691156\n04379243/points/8cfe3ff92244310534506cc3910614fe.pts 04379243/expert_verified/points_label/8cfe3ff92244310534506cc3910614fe.seg 04379243\n03001627/points/597cb92a5bfb580eed98cca8f0ccd5f7.pts 03001627/expert_verified/points_label/597cb92a5bfb580eed98cca8f0ccd5f7.seg 03001627\n03001627/points/4231883e92a3c1a21c62d11641ffbd35.pts 03001627/expert_verified/points_label/4231883e92a3c1a21c62d11641ffbd35.seg 03001627\n03636649/points/28793511c46b4fa030f6e0ede20c4525.pts 03636649/expert_verified/points_label/28793511c46b4fa030f6e0ede20c4525.seg 03636649\n02958343/points/4c60f32b6efdc7217dfb1ee6a4b12bf8.pts 02958343/expert_verified/points_label/4c60f32b6efdc7217dfb1ee6a4b12bf8.seg 02958343\n04379243/points/397c56f15e547fad1bb088904f7cb154.pts 04379243/expert_verified/points_label/397c56f15e547fad1bb088904f7cb154.seg 04379243\n04379243/points/9bb816d6a3517a5ca74c2333655a11dd.pts 
04379243/expert_verified/points_label/9bb816d6a3517a5ca74c2333655a11dd.seg 04379243\n03790512/points/bae59e64a50d3aa2f68f798d07e007b6.pts 03790512/expert_verified/points_label/bae59e64a50d3aa2f68f798d07e007b6.seg 03790512\n04379243/points/8b094873d775f6e21130871dbfe24c18.pts 04379243/expert_verified/points_label/8b094873d775f6e21130871dbfe24c18.seg 04379243\n04379243/points/4d2f7c689e77df6b6dc1766995c17a41.pts 04379243/expert_verified/points_label/4d2f7c689e77df6b6dc1766995c17a41.seg 04379243\n03467517/points/16916a50a064304bf6ed0b697979412e.pts 03467517/expert_verified/points_label/16916a50a064304bf6ed0b697979412e.seg 03467517\n03636649/points/c802fa4c82498450af6016f34c89d087.pts 03636649/expert_verified/points_label/c802fa4c82498450af6016f34c89d087.seg 03636649\n03001627/points/1ec5a88141aefca9cf6e4dd7ee69d71f.pts 03001627/expert_verified/points_label/1ec5a88141aefca9cf6e4dd7ee69d71f.seg 03001627\n04379243/points/bdefbb1f281434e39961e1085a81acc5.pts 04379243/expert_verified/points_label/bdefbb1f281434e39961e1085a81acc5.seg 04379243\n04379243/points/acf57dbafe8966f577fb15a8d7923976.pts 04379243/expert_verified/points_label/acf57dbafe8966f577fb15a8d7923976.seg 04379243\n03642806/points/cc67f6608c41743ec1830f8ca7a3cbed.pts 03642806/expert_verified/points_label/cc67f6608c41743ec1830f8ca7a3cbed.seg 03642806\n03001627/points/95e1571acdd75922afdb9a672b7d3b8a.pts 03001627/expert_verified/points_label/95e1571acdd75922afdb9a672b7d3b8a.seg 03001627\n04379243/points/2ebe5dfb7bd9a50c6effbd64ad6b71b8.pts 04379243/expert_verified/points_label/2ebe5dfb7bd9a50c6effbd64ad6b71b8.seg 04379243\n03001627/points/a6420c4ed13cf628945a77b945b7b70f.pts 03001627/expert_verified/points_label/a6420c4ed13cf628945a77b945b7b70f.seg 03001627\n04379243/points/1de679dd26d8c69cae44c65a6d0f0732.pts 04379243/expert_verified/points_label/1de679dd26d8c69cae44c65a6d0f0732.seg 04379243\n03001627/points/271012d5de261d08101accd22c701b9.pts 
03001627/expert_verified/points_label/271012d5de261d08101accd22c701b9.seg 03001627\n04379243/points/5e409a2627f7cd7d63ecd64ef0e6814c.pts 04379243/expert_verified/points_label/5e409a2627f7cd7d63ecd64ef0e6814c.seg 04379243\n02691156/points/c9aeb20d7cd1b3b45e9e2656aff7dd5b.pts 02691156/expert_verified/points_label/c9aeb20d7cd1b3b45e9e2656aff7dd5b.seg 02691156\n04379243/points/45b23ac79688170893ba1eeaf62819a2.pts 04379243/expert_verified/points_label/45b23ac79688170893ba1eeaf62819a2.seg 04379243\n02691156/points/9ac292686a2fcebbe719b5362fe06bbb.pts 02691156/expert_verified/points_label/9ac292686a2fcebbe719b5362fe06bbb.seg 02691156\n04379243/points/3b0c62bde7b24de85ce578b5b4bfae3c.pts 04379243/expert_verified/points_label/3b0c62bde7b24de85ce578b5b4bfae3c.seg 04379243\n02958343/points/c487e9850891e1ec2d15396b7bcc6366.pts 02958343/expert_verified/points_label/c487e9850891e1ec2d15396b7bcc6366.seg 02958343\n03636649/points/b8e25e0825cb5db7765609a3f435fe9d.pts 03636649/expert_verified/points_label/b8e25e0825cb5db7765609a3f435fe9d.seg 03636649\n03001627/points/9fd6bb18dc21c70766ef9dd2f3ef27d3.pts 03001627/expert_verified/points_label/9fd6bb18dc21c70766ef9dd2f3ef27d3.seg 03001627\n02958343/points/bf37249fc8e16fd8f9a88cc63b910f3.pts 02958343/expert_verified/points_label/bf37249fc8e16fd8f9a88cc63b910f3.seg 02958343\n04225987/points/58ae991bd0350810b9ac379f661f5c75.pts 04225987/expert_verified/points_label/58ae991bd0350810b9ac379f661f5c75.seg 04225987\n03001627/points/508306f8ddf1b54c41cc9e8c39b4e399.pts 03001627/expert_verified/points_label/508306f8ddf1b54c41cc9e8c39b4e399.seg 03001627\n03642806/points/ef5b312fc20f1b20aab089a6db538ba7.pts 03642806/expert_verified/points_label/ef5b312fc20f1b20aab089a6db538ba7.seg 03642806\n03001627/points/d97c5945e9449a58737e4e0df09d751.pts 03001627/expert_verified/points_label/d97c5945e9449a58737e4e0df09d751.seg 03001627\n03001627/points/e1897a4391784bc2e8b2b8dc0c816caf.pts 
03001627/expert_verified/points_label/e1897a4391784bc2e8b2b8dc0c816caf.seg 03001627\n04379243/points/a624ebf0bf0451a8d93768e7b9b1eabf.pts 04379243/expert_verified/points_label/a624ebf0bf0451a8d93768e7b9b1eabf.seg 04379243\n03636649/points/1e5e1ff56c27c0d2adc5f5aafedb1c38.pts 03636649/expert_verified/points_label/1e5e1ff56c27c0d2adc5f5aafedb1c38.seg 03636649\n03642806/points/2ce3a50ca6087f30d8e007cc6755cce9.pts 03642806/expert_verified/points_label/2ce3a50ca6087f30d8e007cc6755cce9.seg 03642806\n02691156/points/d615a8217b70af06bc0909d98a1ff2b4.pts 02691156/expert_verified/points_label/d615a8217b70af06bc0909d98a1ff2b4.seg 02691156\n02691156/points/6f72a0d86494b551a834b9c8bfc8647a.pts 02691156/expert_verified/points_label/6f72a0d86494b551a834b9c8bfc8647a.seg 02691156\n03001627/points/20fbab2b8770a1cbf51f77a6d7299806.pts 03001627/expert_verified/points_label/20fbab2b8770a1cbf51f77a6d7299806.seg 03001627\n03001627/points/d239d38424429a9a4626612b5d655dc.pts 03001627/expert_verified/points_label/d239d38424429a9a4626612b5d655dc.seg 03001627\n03001627/points/4c97f421c4ea4396d8ac5d7ad0953104.pts 03001627/expert_verified/points_label/4c97f421c4ea4396d8ac5d7ad0953104.seg 03001627\n03001627/points/5b68a6c2baf0ad61d0de9c949c366777.pts 03001627/expert_verified/points_label/5b68a6c2baf0ad61d0de9c949c366777.seg 03001627\n04379243/points/9bd1c242bd66d2fbb63c01786992bd2f.pts 04379243/expert_verified/points_label/9bd1c242bd66d2fbb63c01786992bd2f.seg 04379243\n03001627/points/e2dbe84030167f1ca5aad165050e534c.pts 03001627/expert_verified/points_label/e2dbe84030167f1ca5aad165050e534c.seg 03001627\n03001627/points/1c17cc67b8c747c3febad4f49b26ec52.pts 03001627/expert_verified/points_label/1c17cc67b8c747c3febad4f49b26ec52.seg 03001627\n04379243/points/2766a883126503cac3bd24f986301745.pts 04379243/expert_verified/points_label/2766a883126503cac3bd24f986301745.seg 04379243\n04225987/points/755dc44dae7791761082f2ea630bf69e.pts 
04225987/expert_verified/points_label/755dc44dae7791761082f2ea630bf69e.seg 04225987\n04379243/points/c38ba6c06d2b813230c589758b4b5646.pts 04379243/expert_verified/points_label/c38ba6c06d2b813230c589758b4b5646.seg 04379243\n02691156/points/44c0cb6571f6f000ca8607f540cc62ba.pts 02691156/expert_verified/points_label/44c0cb6571f6f000ca8607f540cc62ba.seg 02691156\n03636649/points/522bc10920249e67141c66e2b49d221.pts 03636649/expert_verified/points_label/522bc10920249e67141c66e2b49d221.seg 03636649\n03790512/points/4548d86cf7f1c11ad373c34785838ee4.pts 03790512/expert_verified/points_label/4548d86cf7f1c11ad373c34785838ee4.seg 03790512\n02958343/points/37c5ac3d5b34761add75f724c0ccbe00.pts 02958343/expert_verified/points_label/37c5ac3d5b34761add75f724c0ccbe00.seg 02958343\n04379243/points/a15f31e2302f6ae5d67a73ffd62ba73f.pts 04379243/expert_verified/points_label/a15f31e2302f6ae5d67a73ffd62ba73f.seg 04379243\n02958343/points/6d714f7b7170a581da8e502a3c6cb4fb.pts 02958343/expert_verified/points_label/6d714f7b7170a581da8e502a3c6cb4fb.seg 02958343\n03624134/points/17c4163247e9237d4b7644126b1d71e0.pts 03624134/expert_verified/points_label/17c4163247e9237d4b7644126b1d71e0.seg 03624134\n03636649/points/7972fd0fe5755b4ad42b9650f19dd425.pts 03636649/expert_verified/points_label/7972fd0fe5755b4ad42b9650f19dd425.seg 03636649\n03001627/points/8ff4ba87d700054546992ce9fde1b2c2.pts 03001627/expert_verified/points_label/8ff4ba87d700054546992ce9fde1b2c2.seg 03001627\n03636649/points/a654df55875a2104d663817442d5278.pts 03636649/expert_verified/points_label/a654df55875a2104d663817442d5278.seg 03636649\n04379243/points/9c12fada31224bdf58c4e7e56d799d97.pts 04379243/expert_verified/points_label/9c12fada31224bdf58c4e7e56d799d97.seg 04379243\n03636649/points/9dad7ce60aa168d72cd2160e449d45ae.pts 03636649/expert_verified/points_label/9dad7ce60aa168d72cd2160e449d45ae.seg 03636649\n02691156/points/cfb555a4d82a600aca8607f540cc62ba.pts 
02691156/expert_verified/points_label/cfb555a4d82a600aca8607f540cc62ba.seg 02691156\n04379243/points/415c174ecdc612fb6f5c30e29039b12d.pts 04379243/expert_verified/points_label/415c174ecdc612fb6f5c30e29039b12d.seg 04379243\n03467517/points/a5e2f05386e4ba55a894e1aba5d3799a.pts 03467517/expert_verified/points_label/a5e2f05386e4ba55a894e1aba5d3799a.seg 03467517\n03001627/points/a91b2c89e543a4b3aa3d970c5602cd4a.pts 03001627/expert_verified/points_label/a91b2c89e543a4b3aa3d970c5602cd4a.seg 03001627\n03624134/points/97ed13011e2d85e16029317225a75a9f.pts 03624134/expert_verified/points_label/97ed13011e2d85e16029317225a75a9f.seg 03624134\n04379243/points/388ea3f8ba27da8b777b6246417c94ff.pts 04379243/expert_verified/points_label/388ea3f8ba27da8b777b6246417c94ff.seg 04379243\n04379243/points/983cd9caf65adf1ddf6cfab91d65bb91.pts 04379243/expert_verified/points_label/983cd9caf65adf1ddf6cfab91d65bb91.seg 04379243\n03001627/points/e65d2f0ed75a786a37b2bb75885cfc44.pts 03001627/expert_verified/points_label/e65d2f0ed75a786a37b2bb75885cfc44.seg 03001627\n03624134/points/dce941899bcb752dfe474f09e3f3ac9a.pts 03624134/expert_verified/points_label/dce941899bcb752dfe474f09e3f3ac9a.seg 03624134\n04379243/points/ea3bcd9e6c4205031964126395b17c2a.pts 04379243/expert_verified/points_label/ea3bcd9e6c4205031964126395b17c2a.seg 04379243\n02691156/points/d13d131a649c5df38b96ae1a0a8b84ec.pts 02691156/expert_verified/points_label/d13d131a649c5df38b96ae1a0a8b84ec.seg 02691156\n04379243/points/f917474a20558aa33bbab77a66bc3671.pts 04379243/expert_verified/points_label/f917474a20558aa33bbab77a66bc3671.seg 04379243\n03001627/points/4a24652fbf2bed7e93583c67df8faf1.pts 03001627/expert_verified/points_label/4a24652fbf2bed7e93583c67df8faf1.seg 03001627\n02691156/points/5dd2324cd6ebf52e293fdbda4e7beec9.pts 02691156/expert_verified/points_label/5dd2324cd6ebf52e293fdbda4e7beec9.seg 02691156\n03642806/points/a59d3d87068d313c2656684d670220c2.pts 
03642806/expert_verified/points_label/a59d3d87068d313c2656684d670220c2.seg 03642806\n04379243/points/5354ecb0e3aa1da074a16879fb3ac81f.pts 04379243/expert_verified/points_label/5354ecb0e3aa1da074a16879fb3ac81f.seg 04379243\n03642806/points/6c6a96e4486cc02cda66ecbb2c411f37.pts 03642806/expert_verified/points_label/6c6a96e4486cc02cda66ecbb2c411f37.seg 03642806\n04225987/points/fd3627deb2476b0f1f942c57ac0e8959.pts 04225987/expert_verified/points_label/fd3627deb2476b0f1f942c57ac0e8959.seg 04225987\n04379243/points/91bf48934d3b52ea36658c6705d0c08.pts 04379243/expert_verified/points_label/91bf48934d3b52ea36658c6705d0c08.seg 04379243\n04379243/points/18be1556eb4da5af7ccf848ce05c84be.pts 04379243/expert_verified/points_label/18be1556eb4da5af7ccf848ce05c84be.seg 04379243\n02958343/points/33211aabfefa14603b05c2ad25b4380f.pts 02958343/expert_verified/points_label/33211aabfefa14603b05c2ad25b4380f.seg 02958343\n04379243/points/3243ddb2aa4d1659beb83c64f2162734.pts 04379243/expert_verified/points_label/3243ddb2aa4d1659beb83c64f2162734.seg 04379243\n04379243/points/4ce90fe70faf4c3e255bc16374754e69.pts 04379243/expert_verified/points_label/4ce90fe70faf4c3e255bc16374754e69.seg 04379243\n04379243/points/15be511a2433482aa192483aa282f8e5.pts 04379243/expert_verified/points_label/15be511a2433482aa192483aa282f8e5.seg 04379243\n03624134/points/70b6b3ba6a27fd6f782db73f915dfbb8.pts 03624134/expert_verified/points_label/70b6b3ba6a27fd6f782db73f915dfbb8.seg 03624134\n03001627/points/519d19f3adebd20aba49014d9a3afe99.pts 03001627/expert_verified/points_label/519d19f3adebd20aba49014d9a3afe99.seg 03001627\n03467517/points/ca9720d793355dd693f0194265a9746c.pts 03467517/expert_verified/points_label/ca9720d793355dd693f0194265a9746c.seg 03467517\n03001627/points/e19214cabca496a3f7b54e04c7238d7.pts 03001627/expert_verified/points_label/e19214cabca496a3f7b54e04c7238d7.seg 03001627\n03001627/points/ea1bfe81b88395fcaa29e9f0529e8ef7.pts 
03001627/expert_verified/points_label/ea1bfe81b88395fcaa29e9f0529e8ef7.seg 03001627\n03001627/points/2b110b833111b38c420adf24e49f74c8.pts 03001627/expert_verified/points_label/2b110b833111b38c420adf24e49f74c8.seg 03001627\n03001627/points/7b405c1d6d2dbea9f91663a74ccd2338.pts 03001627/expert_verified/points_label/7b405c1d6d2dbea9f91663a74ccd2338.seg 03001627\n02691156/points/489d3e4cc3d790a0ca8607f540cc62ba.pts 02691156/expert_verified/points_label/489d3e4cc3d790a0ca8607f540cc62ba.seg 02691156\n04379243/points/79eeee790ed5a5aac242632b2a8c3129.pts 04379243/expert_verified/points_label/79eeee790ed5a5aac242632b2a8c3129.seg 04379243\n03624134/points/665bf5d30d342d64adee73efb2c043f8.pts 03624134/expert_verified/points_label/665bf5d30d342d64adee73efb2c043f8.seg 03624134\n03467517/points/7f3f5c9953fb7e0a6cbec6f3d994a573.pts 03467517/expert_verified/points_label/7f3f5c9953fb7e0a6cbec6f3d994a573.seg 03467517\n03001627/points/d2597d18fdc3594e1dc59d2adbe5297d.pts 03001627/expert_verified/points_label/d2597d18fdc3594e1dc59d2adbe5297d.seg 03001627\n03001627/points/a9a1147eae9936f76f1e07a56c129dfc.pts 03001627/expert_verified/points_label/a9a1147eae9936f76f1e07a56c129dfc.seg 03001627\n02691156/points/64cb683afd5e9e559db1d21b460eacef.pts 02691156/expert_verified/points_label/64cb683afd5e9e559db1d21b460eacef.seg 02691156\n03624134/points/e0a78d771cfde145a5cea7e40e4d21ff.pts 03624134/expert_verified/points_label/e0a78d771cfde145a5cea7e40e4d21ff.seg 03624134\n02691156/points/e59c4f290d8585a862b600da24e0965.pts 02691156/expert_verified/points_label/e59c4f290d8585a862b600da24e0965.seg 02691156\n04379243/points/523ac3575244c7f3a130bbab7337a0cf.pts 04379243/expert_verified/points_label/523ac3575244c7f3a130bbab7337a0cf.seg 04379243\n03001627/points/96e83c79e8d76d4519fb4103277a6b93.pts 03001627/expert_verified/points_label/96e83c79e8d76d4519fb4103277a6b93.seg 03001627\n04379243/points/a2781622b5941ff2a886fe6408aa7382.pts 
04379243/expert_verified/points_label/a2781622b5941ff2a886fe6408aa7382.seg 04379243\n04379243/points/5d24567426a614ecfd726e98b98fb36f.pts 04379243/expert_verified/points_label/5d24567426a614ecfd726e98b98fb36f.seg 04379243\n03001627/points/a5a2d09e5384237869513d0907f19c8f.pts 03001627/expert_verified/points_label/a5a2d09e5384237869513d0907f19c8f.seg 03001627\n02691156/points/e02485f093835f45c1b64d86df61366a.pts 02691156/expert_verified/points_label/e02485f093835f45c1b64d86df61366a.seg 02691156\n04379243/points/58f8fd169c9578e62f81cb887dc35578.pts 04379243/expert_verified/points_label/58f8fd169c9578e62f81cb887dc35578.seg 04379243\n04379243/points/c755eeaa4a588fcba9126dd5adc92c1e.pts 04379243/expert_verified/points_label/c755eeaa4a588fcba9126dd5adc92c1e.seg 04379243\n03001627/points/704179dd47a2282e676de9b6e111da8b.pts 03001627/expert_verified/points_label/704179dd47a2282e676de9b6e111da8b.seg 03001627\n03001627/points/9253f198c06794cdc7689830acac6e59.pts 03001627/expert_verified/points_label/9253f198c06794cdc7689830acac6e59.seg 03001627\n04379243/points/2ba8eb5ec0a05694593ebeeedbff73b.pts 04379243/expert_verified/points_label/2ba8eb5ec0a05694593ebeeedbff73b.seg 04379243\n03467517/points/133ebdf2ca7bf4b81d4e8021f58beea0.pts 03467517/expert_verified/points_label/133ebdf2ca7bf4b81d4e8021f58beea0.seg 03467517\n03467517/points/ba6d3dcff42ea7bba32c4b8efb0131e.pts 03467517/expert_verified/points_label/ba6d3dcff42ea7bba32c4b8efb0131e.seg 03467517\n03467517/points/222b705a80d75a4343b0b12983b9982.pts 03467517/expert_verified/points_label/222b705a80d75a4343b0b12983b9982.seg 03467517\n04379243/points/47317755c82114d5c3bd24f986301745.pts 04379243/expert_verified/points_label/47317755c82114d5c3bd24f986301745.seg 04379243\n04379243/points/175c0be26d0f2e916cb0bd372b0960ba.pts 04379243/expert_verified/points_label/175c0be26d0f2e916cb0bd372b0960ba.seg 04379243\n03636649/points/19388898dd69dd9fddc8e6d1ec6242c3.pts 
03636649/expert_verified/points_label/19388898dd69dd9fddc8e6d1ec6242c3.seg 03636649\n04379243/points/3cec584145ee513d635418e95eea8a17.pts 04379243/expert_verified/points_label/3cec584145ee513d635418e95eea8a17.seg 04379243\n03001627/points/3a5c8d46fdc6793b956abdbfba57903a.pts 03001627/expert_verified/points_label/3a5c8d46fdc6793b956abdbfba57903a.seg 03001627\n03001627/points/3d32d89db2286377e63c6421b71f17c8.pts 03001627/expert_verified/points_label/3d32d89db2286377e63c6421b71f17c8.seg 03001627\n03001627/points/47a45ce9fb219083411e8b42940aba04.pts 03001627/expert_verified/points_label/47a45ce9fb219083411e8b42940aba04.seg 03001627\n03467517/points/214f6a08b78670de2cb522418d5742a0.pts 03467517/expert_verified/points_label/214f6a08b78670de2cb522418d5742a0.seg 03467517\n04379243/points/1b4bc147baf68d4ff008d8a3590fb522.pts 04379243/expert_verified/points_label/1b4bc147baf68d4ff008d8a3590fb522.seg 04379243\n03467517/points/83b2ecf5caced214e313875ff213ee10.pts 03467517/expert_verified/points_label/83b2ecf5caced214e313875ff213ee10.seg 03467517\n02691156/points/57fe8ad460bcb4929a4a28ef635593ce.pts 02691156/expert_verified/points_label/57fe8ad460bcb4929a4a28ef635593ce.seg 02691156\n03624134/points/e8a6915bd0bcf1bebaa284808a1567a8.pts 03624134/expert_verified/points_label/e8a6915bd0bcf1bebaa284808a1567a8.seg 03624134\n03001627/points/1da29597f89c2b004b3c42e318f3affc.pts 03001627/expert_verified/points_label/1da29597f89c2b004b3c42e318f3affc.seg 03001627\n04379243/points/2ef899e67eecef65190a91fd9a6f7d55.pts 04379243/expert_verified/points_label/2ef899e67eecef65190a91fd9a6f7d55.seg 04379243\n04379243/points/811a7be3be14bd2b62103e4bff47b4cd.pts 04379243/expert_verified/points_label/811a7be3be14bd2b62103e4bff47b4cd.seg 04379243\n03948459/points/592017db407391c68e7e947594effe19.pts 03948459/expert_verified/points_label/592017db407391c68e7e947594effe19.seg 03948459\n03636649/points/eb311e6232cb7011bb5bd941c6665c21.pts 
03636649/expert_verified/points_label/eb311e6232cb7011bb5bd941c6665c21.seg 03636649\n02691156/points/caa7e70beee4543f42c20743f866e1a6.pts 02691156/expert_verified/points_label/caa7e70beee4543f42c20743f866e1a6.seg 02691156\n03001627/points/3aaa59b19eebcb5f41552c6ecbda964b.pts 03001627/expert_verified/points_label/3aaa59b19eebcb5f41552c6ecbda964b.seg 03001627\n03001627/points/a93aac9ad86008e69fc01fb65ca37d30.pts 03001627/expert_verified/points_label/a93aac9ad86008e69fc01fb65ca37d30.seg 03001627\n03624134/points/ceeb38ab7929361e76ec14627bf6bbcb.pts 03624134/expert_verified/points_label/ceeb38ab7929361e76ec14627bf6bbcb.seg 03624134\n03001627/points/93dc91115a9002e1663fcfd6703c85f3.pts 03001627/expert_verified/points_label/93dc91115a9002e1663fcfd6703c85f3.seg 03001627\n04379243/points/b08310a1d75702eda09ce9c1262c7237.pts 04379243/expert_verified/points_label/b08310a1d75702eda09ce9c1262c7237.seg 04379243\n03797390/points/e9bd4ee553eb35c1d5ccc40b510e4bd.pts 03797390/expert_verified/points_label/e9bd4ee553eb35c1d5ccc40b510e4bd.seg 03797390\n03001627/points/bdd57499bf64fab6bf80985a99195eb8.pts 03001627/expert_verified/points_label/bdd57499bf64fab6bf80985a99195eb8.seg 03001627\n04379243/points/48af84a5600ad5bc19fb4103277a6b93.pts 04379243/expert_verified/points_label/48af84a5600ad5bc19fb4103277a6b93.seg 04379243\n03001627/points/738395f54b301d80b1f5d603f931c1aa.pts 03001627/expert_verified/points_label/738395f54b301d80b1f5d603f931c1aa.seg 03001627\n03790512/points/6819949f5625ca12d0f568c31c1cd62a.pts 03790512/expert_verified/points_label/6819949f5625ca12d0f568c31c1cd62a.seg 03790512\n03467517/points/70d9a5d0330abd9df4b498e11fb60a4b.pts 03467517/expert_verified/points_label/70d9a5d0330abd9df4b498e11fb60a4b.seg 03467517\n02958343/points/174f1a421f652029d577c0ac53e96823.pts 02958343/expert_verified/points_label/174f1a421f652029d577c0ac53e96823.seg 02958343\n03001627/points/d764960666572084b1ea4e06e88051f3.pts 
03001627/expert_verified/points_label/d764960666572084b1ea4e06e88051f3.seg 03001627\n02691156/points/ba662ec78231c493252b4f9439ef95a6.pts 02691156/expert_verified/points_label/ba662ec78231c493252b4f9439ef95a6.seg 02691156\n03636649/points/8a9f2e5b726ea37f60ad823977adaa23.pts 03636649/expert_verified/points_label/8a9f2e5b726ea37f60ad823977adaa23.seg 03636649\n04379243/points/80af0f92ecf69f69f5ff054d67d5fe35.pts 04379243/expert_verified/points_label/80af0f92ecf69f69f5ff054d67d5fe35.seg 04379243\n04379243/points/ce4e075487aa05ecdcfcef693e7ec696.pts 04379243/expert_verified/points_label/ce4e075487aa05ecdcfcef693e7ec696.seg 04379243\n03001627/points/564f5f96bc718194166420d06689fcf.pts 03001627/expert_verified/points_label/564f5f96bc718194166420d06689fcf.seg 03001627\n03636649/points/88d29e1350eda810c066b9622c005c53.pts 03636649/expert_verified/points_label/88d29e1350eda810c066b9622c005c53.seg 03636649\n04379243/points/346db24c1279e8d273fdbe4b39ff4036.pts 04379243/expert_verified/points_label/346db24c1279e8d273fdbe4b39ff4036.seg 04379243\n04379243/points/7062f5b229674ab7b0b54dd2cf2a35d4.pts 04379243/expert_verified/points_label/7062f5b229674ab7b0b54dd2cf2a35d4.seg 04379243\n03636649/points/923097cec128ae77469cbaa3d6420fb4.pts 03636649/expert_verified/points_label/923097cec128ae77469cbaa3d6420fb4.seg 03636649\n04379243/points/3fb5033b5ddaaf365f7afad12924b3b5.pts 04379243/expert_verified/points_label/3fb5033b5ddaaf365f7afad12924b3b5.seg 04379243\n03636649/points/32e9d8a4b5a141a2615efc34c3b36ef0.pts 03636649/expert_verified/points_label/32e9d8a4b5a141a2615efc34c3b36ef0.seg 03636649\n02691156/points/997cb29f544d6f2726360e1e29a956c7.pts 02691156/expert_verified/points_label/997cb29f544d6f2726360e1e29a956c7.seg 02691156\n04379243/points/7df9115b511668bdde98d10ab5975b59.pts 04379243/expert_verified/points_label/7df9115b511668bdde98d10ab5975b59.seg 04379243\n03636649/points/5580b95ab8e7806c6c5b8009db95f66f.pts 
03636649/expert_verified/points_label/5580b95ab8e7806c6c5b8009db95f66f.seg 03636649\n04379243/points/6862bebc1f59a5caac7bed72580dc30f.pts 04379243/expert_verified/points_label/6862bebc1f59a5caac7bed72580dc30f.seg 04379243\n02691156/points/56ba815f883279b462b600da24e0965.pts 02691156/expert_verified/points_label/56ba815f883279b462b600da24e0965.seg 02691156\n03797390/points/5c48d471200d2bf16e8a121e6886e18d.pts 03797390/expert_verified/points_label/5c48d471200d2bf16e8a121e6886e18d.seg 03797390\n04379243/points/b48d04600e7cf2bebeedb4c8fd29e2d1.pts 04379243/expert_verified/points_label/b48d04600e7cf2bebeedb4c8fd29e2d1.seg 04379243\n02958343/points/323c9dc2a8911e146f2f07de403e98d8.pts 02958343/expert_verified/points_label/323c9dc2a8911e146f2f07de403e98d8.seg 02958343\n04225987/points/d3ff56062272f3e6346e65609be6d72f.pts 04225987/expert_verified/points_label/d3ff56062272f3e6346e65609be6d72f.seg 04225987\n03001627/points/af28dbdce6ed8cea19fb4103277a6b93.pts 03001627/expert_verified/points_label/af28dbdce6ed8cea19fb4103277a6b93.seg 03001627\n02958343/points/dfa6c32dec07727ee9d8921ebe6d5b8e.pts 02958343/expert_verified/points_label/dfa6c32dec07727ee9d8921ebe6d5b8e.seg 02958343\n03001627/points/c2b898dd5601454d626d7e3d07da8352.pts 03001627/expert_verified/points_label/c2b898dd5601454d626d7e3d07da8352.seg 03001627\n04379243/points/a7ef45d86ae5b496a97f238e46bc2221.pts 04379243/expert_verified/points_label/a7ef45d86ae5b496a97f238e46bc2221.seg 04379243\n04379243/points/1bd138c3e54a75d32f38c0d2792fb5e.pts 04379243/expert_verified/points_label/1bd138c3e54a75d32f38c0d2792fb5e.seg 04379243\n02958343/points/cd67376cac9f989151008e496c6cfd2e.pts 02958343/expert_verified/points_label/cd67376cac9f989151008e496c6cfd2e.seg 02958343\n03948459/points/af9eaed1d9574387ab2c2809513f396e.pts 03948459/expert_verified/points_label/af9eaed1d9574387ab2c2809513f396e.seg 03948459\n04379243/points/c418195771c7625945821c000807c3b1.pts 
04379243/expert_verified/points_label/c418195771c7625945821c000807c3b1.seg 04379243\n04379243/points/88b227c5fb3906ce47c638c0eee4a2b3.pts 04379243/expert_verified/points_label/88b227c5fb3906ce47c638c0eee4a2b3.seg 04379243\n03467517/points/81bd0c7a35a147988cc3ae4061da3bb0.pts 03467517/expert_verified/points_label/81bd0c7a35a147988cc3ae4061da3bb0.seg 03467517\n04379243/points/5292f2930f188e0a7ff6ace05b36a5.pts 04379243/expert_verified/points_label/5292f2930f188e0a7ff6ace05b36a5.seg 04379243\n03636649/points/5f0a23ce527d0be52f38c0d2792fb5e.pts 03636649/expert_verified/points_label/5f0a23ce527d0be52f38c0d2792fb5e.seg 03636649\n03636649/points/98cdb45ca9925feb194eb328dc97c7e2.pts 03636649/expert_verified/points_label/98cdb45ca9925feb194eb328dc97c7e2.seg 03636649\n03790512/points/47054c1839830834a88e8cb97b773125.pts 03790512/expert_verified/points_label/47054c1839830834a88e8cb97b773125.seg 03790512\n03001627/points/b058cc77e628ac01c433ba3e0e025e8c.pts 03001627/expert_verified/points_label/b058cc77e628ac01c433ba3e0e025e8c.seg 03001627\n04225987/points/f74a5dfc0094e2d5561dce3fe08634b7.pts 04225987/expert_verified/points_label/f74a5dfc0094e2d5561dce3fe08634b7.seg 04225987\n02958343/points/e20b8a9c388eeb012c8b6ee41d7d5d62.pts 02958343/expert_verified/points_label/e20b8a9c388eeb012c8b6ee41d7d5d62.seg 02958343\n02958343/points/7203130a35ab20a4b1bb46d2556ba67d.pts 02958343/expert_verified/points_label/7203130a35ab20a4b1bb46d2556ba67d.seg 02958343\n03261776/points/2c6f04001afcce7ded85c3dc02bada79.pts 03261776/expert_verified/points_label/2c6f04001afcce7ded85c3dc02bada79.seg 03261776\n03001627/points/951fb0d7ad8ab2bec5b5bea66ef4576d.pts 03001627/expert_verified/points_label/951fb0d7ad8ab2bec5b5bea66ef4576d.seg 03001627\n02691156/points/54e926e12382808b66cf1b4a8fc3914e.pts 02691156/expert_verified/points_label/54e926e12382808b66cf1b4a8fc3914e.seg 02691156\n03001627/points/4c513ea0804fc008c8687ff9b0b4e4ac.pts 
03001627/expert_verified/points_label/4c513ea0804fc008c8687ff9b0b4e4ac.seg 03001627\n03001627/points/748957972cae6b03c56be62b05937331.pts 03001627/expert_verified/points_label/748957972cae6b03c56be62b05937331.seg 03001627\n03001627/points/cc2639f8c584001a922dfe32810651d0.pts 03001627/expert_verified/points_label/cc2639f8c584001a922dfe32810651d0.seg 03001627\n04379243/points/d2f811bc37858425a63ceecddc308b25.pts 04379243/expert_verified/points_label/d2f811bc37858425a63ceecddc308b25.seg 04379243\n03001627/points/d48dac046436a29ec3bd24f986301745.pts 03001627/expert_verified/points_label/d48dac046436a29ec3bd24f986301745.seg 03001627\n03001627/points/30fafef5c734f926781ba0fdb47276df.pts 03001627/expert_verified/points_label/30fafef5c734f926781ba0fdb47276df.seg 03001627\n03001627/points/7293291b3fe8233fdef1c01cbd4ae0c.pts 03001627/expert_verified/points_label/7293291b3fe8233fdef1c01cbd4ae0c.seg 03001627\n03636649/points/3deedc86a83bbf23f647dc544bb0ab61.pts 03636649/expert_verified/points_label/3deedc86a83bbf23f647dc544bb0ab61.seg 03636649\n03467517/points/bb4a5712da8f63330d758421dd01f45.pts 03467517/expert_verified/points_label/bb4a5712da8f63330d758421dd01f45.seg 03467517\n03636649/points/39af776c1435a3374b59758e9336ca87.pts 03636649/expert_verified/points_label/39af776c1435a3374b59758e9336ca87.seg 03636649\n04379243/points/ef9f3af9b8453613febad4f49b26ec52.pts 04379243/expert_verified/points_label/ef9f3af9b8453613febad4f49b26ec52.seg 04379243\n02691156/points/29192f8c96264e3435fc197bbabcd5bd.pts 02691156/expert_verified/points_label/29192f8c96264e3435fc197bbabcd5bd.seg 02691156\n02691156/points/75d162523d703917b87697d3904b168b.pts 02691156/expert_verified/points_label/75d162523d703917b87697d3904b168b.seg 02691156\n04379243/points/3c04f4e0d183976a7e7cb173e141227.pts 04379243/expert_verified/points_label/3c04f4e0d183976a7e7cb173e141227.seg 04379243\n03790512/points/80011e85cd42668ad373c34785838ee4.pts 
03790512/expert_verified/points_label/80011e85cd42668ad373c34785838ee4.seg 03790512\n04379243/points/994e524d70043c3496e349c87c588bf2.pts 04379243/expert_verified/points_label/994e524d70043c3496e349c87c588bf2.seg 04379243\n02691156/points/b1f08c51a098c43696d224195a988f09.pts 02691156/expert_verified/points_label/b1f08c51a098c43696d224195a988f09.seg 02691156\n04379243/points/cb31b6293506eb639a3528690d225ee1.pts 04379243/expert_verified/points_label/cb31b6293506eb639a3528690d225ee1.seg 04379243\n02691156/points/d70d648947c65b1eca8607f540cc62ba.pts 02691156/expert_verified/points_label/d70d648947c65b1eca8607f540cc62ba.seg 02691156\n03636649/points/7bebdd742342ba93febad4f49b26ec52.pts 03636649/expert_verified/points_label/7bebdd742342ba93febad4f49b26ec52.seg 03636649\n02691156/points/2a2caad9e540dcc687bf26680c510802.pts 02691156/expert_verified/points_label/2a2caad9e540dcc687bf26680c510802.seg 02691156\n03790512/points/73fd19410ce60b83d5dde04c96fd8146.pts 03790512/expert_verified/points_label/73fd19410ce60b83d5dde04c96fd8146.seg 03790512\n04379243/points/ccb8c52ff9e7a01819fb4103277a6b93.pts 04379243/expert_verified/points_label/ccb8c52ff9e7a01819fb4103277a6b93.seg 04379243\n03467517/points/cc9e9ef3e1326c5363e148e250c0340d.pts 03467517/expert_verified/points_label/cc9e9ef3e1326c5363e148e250c0340d.seg 03467517\n03001627/points/d5360f2b0b0299c29b9f2eb77f5e247e.pts 03001627/expert_verified/points_label/d5360f2b0b0299c29b9f2eb77f5e247e.seg 03001627\n02691156/points/6b69e4c1cceb6e0681fa1ee3c368532e.pts 02691156/expert_verified/points_label/6b69e4c1cceb6e0681fa1ee3c368532e.seg 02691156\n02691156/points/3ae96a1e1bb488942296d88107d065f6.pts 02691156/expert_verified/points_label/3ae96a1e1bb488942296d88107d065f6.seg 02691156\n04379243/points/5e4351c4525fae6d6fa63795f94c4d8c.pts 04379243/expert_verified/points_label/5e4351c4525fae6d6fa63795f94c4d8c.seg 04379243\n04225987/points/5c55e6b6708f730d758f6def7204bd6b.pts 
04225987/expert_verified/points_label/5c55e6b6708f730d758f6def7204bd6b.seg 04225987\n03001627/points/a48e359faed3da88d3519c62a8100783.pts 03001627/expert_verified/points_label/a48e359faed3da88d3519c62a8100783.seg 03001627\n03467517/points/a4170135b1055cb8982c503992eaf09.pts 03467517/expert_verified/points_label/a4170135b1055cb8982c503992eaf09.seg 03467517\n02958343/points/b3f1ad55fa401c35e8c505ac322336cc.pts 02958343/expert_verified/points_label/b3f1ad55fa401c35e8c505ac322336cc.seg 02958343\n02691156/points/c7c5bb658cafcc7c67711f7c205c5b63.pts 02691156/expert_verified/points_label/c7c5bb658cafcc7c67711f7c205c5b63.seg 02691156\n02691156/points/914c308ac4a9156842c20743f866e1a6.pts 02691156/expert_verified/points_label/914c308ac4a9156842c20743f866e1a6.seg 02691156\n04379243/points/23acbe1f91d445f91ca1c7e576bee6b9.pts 04379243/expert_verified/points_label/23acbe1f91d445f91ca1c7e576bee6b9.seg 04379243\n04379243/points/8eb366f4f602219b490ad276cd2af3a4.pts 04379243/expert_verified/points_label/8eb366f4f602219b490ad276cd2af3a4.seg 04379243\n03624134/points/508ca8fa00e0cbb3e168961dc7b88f65.pts 03624134/expert_verified/points_label/508ca8fa00e0cbb3e168961dc7b88f65.seg 03624134\n04379243/points/be045fca16562f6764c85287e21825c4.pts 04379243/expert_verified/points_label/be045fca16562f6764c85287e21825c4.seg 04379243\n03001627/points/70f57047512c2eb84104b1c5cb7f9280.pts 03001627/expert_verified/points_label/70f57047512c2eb84104b1c5cb7f9280.seg 03001627\n03001627/points/975ea4be01c7488611bc8e8361bc5303.pts 03001627/expert_verified/points_label/975ea4be01c7488611bc8e8361bc5303.seg 03001627\n04379243/points/3c7cf00cd78adaef4b3c42e318f3affc.pts 04379243/expert_verified/points_label/3c7cf00cd78adaef4b3c42e318f3affc.seg 04379243\n02773838/points/220f08ff0c1d2a4542282fc88db7886b.pts 02773838/expert_verified/points_label/220f08ff0c1d2a4542282fc88db7886b.seg 02773838\n03636649/points/e35c4fadbf8d0426c26e81144f3196d5.pts 
03636649/expert_verified/points_label/e35c4fadbf8d0426c26e81144f3196d5.seg 03636649\n03642806/points/93958423b98be8b538ff1b6d120c56aa.pts 03642806/expert_verified/points_label/93958423b98be8b538ff1b6d120c56aa.seg 03642806\n04379243/points/cf24f0128755080569080f7eaa8f3e1d.pts 04379243/expert_verified/points_label/cf24f0128755080569080f7eaa8f3e1d.seg 04379243\n04379243/points/f5cbbe04afdc4697562b835b63cfd09c.pts 04379243/expert_verified/points_label/f5cbbe04afdc4697562b835b63cfd09c.seg 04379243\n04379243/points/7a7590d19cf8274dab610b0c94236463.pts 04379243/expert_verified/points_label/7a7590d19cf8274dab610b0c94236463.seg 04379243\n03001627/points/bdfc3a43eccaac7e908cb3a44391b80.pts 03001627/expert_verified/points_label/bdfc3a43eccaac7e908cb3a44391b80.seg 03001627\n03636649/points/90d70f0a6b1cf72d79f0be73913de469.pts 03636649/expert_verified/points_label/90d70f0a6b1cf72d79f0be73913de469.seg 03636649\n03642806/points/17069b6604fc28bfa2f5beb253216d5b.pts 03642806/expert_verified/points_label/17069b6604fc28bfa2f5beb253216d5b.seg 03642806\n04379243/points/3b0625a3d623a7decfbec6fc6446a041.pts 04379243/expert_verified/points_label/3b0625a3d623a7decfbec6fc6446a041.seg 04379243\n04379243/points/9482c5f0a38a73c0fa16d3c3138134ae.pts 04379243/expert_verified/points_label/9482c5f0a38a73c0fa16d3c3138134ae.seg 04379243\n04379243/points/ed73c41dcfe9170119cc3eaf35cd388f.pts 04379243/expert_verified/points_label/ed73c41dcfe9170119cc3eaf35cd388f.seg 04379243\n04379243/points/1abed35643d34f60afed86cbd9fd5335.pts 04379243/expert_verified/points_label/1abed35643d34f60afed86cbd9fd5335.seg 04379243\n03001627/points/98e1936d3f25389bc3c6a889ee0bd115.pts 03001627/expert_verified/points_label/98e1936d3f25389bc3c6a889ee0bd115.seg 03001627\n03797390/points/ef24c302911bcde6ea6ff2182dd34668.pts 03797390/expert_verified/points_label/ef24c302911bcde6ea6ff2182dd34668.seg 03797390\n02773838/points/22b7d6fa819d62aefc69b7db9c6d5ad9.pts 
02773838/expert_verified/points_label/22b7d6fa819d62aefc69b7db9c6d5ad9.seg 02773838\n03001627/points/19666f52289092a3394a3bbfc81460.pts 03001627/expert_verified/points_label/19666f52289092a3394a3bbfc81460.seg 03001627\n03001627/points/49b38e22f104005ecbde89e0c48a01bf.pts 03001627/expert_verified/points_label/49b38e22f104005ecbde89e0c48a01bf.seg 03001627\n04379243/points/de077e0bd6932baef12d7184a2ad3430.pts 04379243/expert_verified/points_label/de077e0bd6932baef12d7184a2ad3430.seg 04379243\n03001627/points/fe99f16c2532cdd07ba99ad16fdc05cd.pts 03001627/expert_verified/points_label/fe99f16c2532cdd07ba99ad16fdc05cd.seg 03001627\n03642806/points/a17cf326705a6443a09a37cf78d1b866.pts 03642806/expert_verified/points_label/a17cf326705a6443a09a37cf78d1b866.seg 03642806\n04379243/points/890940359fdfa036569c11df1aea8ca4.pts 04379243/expert_verified/points_label/890940359fdfa036569c11df1aea8ca4.seg 04379243\n03642806/points/7f75b94bd59d649958dd315c54df0c15.pts 03642806/expert_verified/points_label/7f75b94bd59d649958dd315c54df0c15.seg 03642806\n04379243/points/d0ef9d431a16e70de6c5cd45aa112726.pts 04379243/expert_verified/points_label/d0ef9d431a16e70de6c5cd45aa112726.seg 04379243\n03001627/points/2dc5055b8d900ec7db4b0ee93cf61ed1.pts 03001627/expert_verified/points_label/2dc5055b8d900ec7db4b0ee93cf61ed1.seg 03001627\n03001627/points/9e6b834449ed2db86199d6fe090be061.pts 03001627/expert_verified/points_label/9e6b834449ed2db86199d6fe090be061.seg 03001627\n04379243/points/9e3f1901ea14aca753315facdf531a34.pts 04379243/expert_verified/points_label/9e3f1901ea14aca753315facdf531a34.seg 04379243\n03001627/points/c4ebef05a72fc4f39d62eb3fdc2d3f8a.pts 03001627/expert_verified/points_label/c4ebef05a72fc4f39d62eb3fdc2d3f8a.seg 03001627\n03001627/points/428b77d0ffe6ab456e06155d245f15d6.pts 03001627/expert_verified/points_label/428b77d0ffe6ab456e06155d245f15d6.seg 03001627\n04225987/points/591971ce679ca4b93ad38b993d9e745f.pts 
04225987/expert_verified/points_label/591971ce679ca4b93ad38b993d9e745f.seg 04225987\n03790512/points/bcabe20e46e5126ed5dde04c96fd8146.pts 03790512/expert_verified/points_label/bcabe20e46e5126ed5dde04c96fd8146.seg 03790512\n04379243/points/3ed500a12dfa511ba6040757a0125a99.pts 04379243/expert_verified/points_label/3ed500a12dfa511ba6040757a0125a99.seg 04379243\n04379243/points/1581d2682187764730bbd4cddd04c77b.pts 04379243/expert_verified/points_label/1581d2682187764730bbd4cddd04c77b.seg 04379243\n02691156/points/bb7d526405e9347b8f6810e1a2b6aa04.pts 02691156/expert_verified/points_label/bb7d526405e9347b8f6810e1a2b6aa04.seg 02691156\n02691156/points/fb9deec3a422b06b609e2d916fa0da27.pts 02691156/expert_verified/points_label/fb9deec3a422b06b609e2d916fa0da27.seg 02691156\n03636649/points/5e6abfc7d93fa5f1dc0efee4b442070.pts 03636649/expert_verified/points_label/5e6abfc7d93fa5f1dc0efee4b442070.seg 03636649\n03467517/points/2dbc73ad4ce7950163e148e250c0340d.pts 03467517/expert_verified/points_label/2dbc73ad4ce7950163e148e250c0340d.seg 03467517\n02958343/points/eea7f5d02088d49dfdb3c05088c091ae.pts 02958343/expert_verified/points_label/eea7f5d02088d49dfdb3c05088c091ae.seg 02958343\n04379243/points/83c24aad3914e61a73376642dd664bfd.pts 04379243/expert_verified/points_label/83c24aad3914e61a73376642dd664bfd.seg 04379243\n04379243/points/51874066ba946c58aaf15b62af6b513f.pts 04379243/expert_verified/points_label/51874066ba946c58aaf15b62af6b513f.seg 04379243\n03636649/points/5be8cdad3b218e373d39d8012919dd25.pts 03636649/expert_verified/points_label/5be8cdad3b218e373d39d8012919dd25.seg 03636649\n03636649/points/49cd0dd4d1c008edbbc7a6acbd8f058b.pts 03636649/expert_verified/points_label/49cd0dd4d1c008edbbc7a6acbd8f058b.seg 03636649\n03642806/points/d7e7e6651a23afc68ba4e518219eb66a.pts 03642806/expert_verified/points_label/d7e7e6651a23afc68ba4e518219eb66a.seg 03642806\n02958343/points/6026684ab31d567328044fe9244db50a.pts 
02958343/expert_verified/points_label/6026684ab31d567328044fe9244db50a.seg 02958343\n04379243/points/c177762c0445d57ab20aa91e9e90c311.pts 04379243/expert_verified/points_label/c177762c0445d57ab20aa91e9e90c311.seg 04379243\n02691156/points/7bad9d15c0f0d3c03554ccf8c30febe7.pts 02691156/expert_verified/points_label/7bad9d15c0f0d3c03554ccf8c30febe7.seg 02691156\n03636649/points/dd818b0269b1aa15fcb8d8c6d4df8143.pts 03636649/expert_verified/points_label/dd818b0269b1aa15fcb8d8c6d4df8143.seg 03636649\n03624134/points/c4851aee1af7d874cc34b900bb2492e.pts 03624134/expert_verified/points_label/c4851aee1af7d874cc34b900bb2492e.seg 03624134\n03001627/points/e2ced471afce616454bfa32aa0766acb.pts 03001627/expert_verified/points_label/e2ced471afce616454bfa32aa0766acb.seg 03001627\n03797390/points/896f1d494bac0ebcdec712af445786fe.pts 03797390/expert_verified/points_label/896f1d494bac0ebcdec712af445786fe.seg 03797390\n04379243/points/481e00e4559705c616a2b5862518c93.pts 04379243/expert_verified/points_label/481e00e4559705c616a2b5862518c93.seg 04379243\n04379243/points/2ca883ba6a9dc6f68985be89a0ee21a.pts 04379243/expert_verified/points_label/2ca883ba6a9dc6f68985be89a0ee21a.seg 04379243\n04379243/points/ebc82e7df36f6e9a33963916b86d221f.pts 04379243/expert_verified/points_label/ebc82e7df36f6e9a33963916b86d221f.seg 04379243\n03001627/points/cdea84a63ad8c44febad4f49b26ec52.pts 03001627/expert_verified/points_label/cdea84a63ad8c44febad4f49b26ec52.seg 03001627\n03624134/points/c71280ea272fbfed4b7644126b1d71e0.pts 03624134/expert_verified/points_label/c71280ea272fbfed4b7644126b1d71e0.seg 03624134\n02958343/points/974c3d82f8726f086b418c7d9fedcaa9.pts 02958343/expert_verified/points_label/974c3d82f8726f086b418c7d9fedcaa9.seg 02958343\n02958343/points/4dbf4e0654d0c234e811106a82796d20.pts 02958343/expert_verified/points_label/4dbf4e0654d0c234e811106a82796d20.seg 02958343\n03467517/points/de9ca0c3e32f907dcb61cf5d9c47c2c7.pts 03467517/expert_verified/points_label/de9ca0c3e32f907dcb61cf5d9c47c2c7.seg 
03467517\n02958343/points/9f4bbcf9f51fe1e42957c02bdefc95c8.pts 02958343/expert_verified/points_label/9f4bbcf9f51fe1e42957c02bdefc95c8.seg 02958343\n03467517/points/173e4f1824f7b9fa93f0194265a9746c.pts 03467517/expert_verified/points_label/173e4f1824f7b9fa93f0194265a9746c.seg 03467517\n03636649/points/b4f166440439171741657e31b569b105.pts 03636649/expert_verified/points_label/b4f166440439171741657e31b569b105.seg 03636649\n03948459/points/d1ba405fef56efa0fa29682ba98e856d.pts 03948459/expert_verified/points_label/d1ba405fef56efa0fa29682ba98e856d.seg 03948459\n03467517/points/a39dcefa599a76dd93f0194265a9746c.pts 03467517/expert_verified/points_label/a39dcefa599a76dd93f0194265a9746c.seg 03467517\n02958343/points/e213d976734431773a3afd30f2e86bd7.pts 02958343/expert_verified/points_label/e213d976734431773a3afd30f2e86bd7.seg 02958343\n04379243/points/b1335d826d7d60726e066e11deddab75.pts 04379243/expert_verified/points_label/b1335d826d7d60726e066e11deddab75.seg 04379243\n04379243/points/e37262abd76852ac00ee852f6d8aa3c.pts 04379243/expert_verified/points_label/e37262abd76852ac00ee852f6d8aa3c.seg 04379243\n03001627/points/5d346bdb7db27accf3588493d5c284.pts 03001627/expert_verified/points_label/5d346bdb7db27accf3588493d5c284.seg 03001627\n04379243/points/198ff59a42a147eb8ac5948d70801389.pts 04379243/expert_verified/points_label/198ff59a42a147eb8ac5948d70801389.seg 04379243\n03001627/points/b3fd987b330d0d2acda56795a6fbde1f.pts 03001627/expert_verified/points_label/b3fd987b330d0d2acda56795a6fbde1f.seg 03001627\n02691156/points/1cb757280b862ae52c7575c9089791ff.pts 02691156/expert_verified/points_label/1cb757280b862ae52c7575c9089791ff.seg 02691156\n03636649/points/4631e756666a8a208ca4aeb5e3b33af7.pts 03636649/expert_verified/points_label/4631e756666a8a208ca4aeb5e3b33af7.seg 03636649\n04379243/points/b82c6769c98e877d24d29f1dedd03a57.pts 04379243/expert_verified/points_label/b82c6769c98e877d24d29f1dedd03a57.seg 04379243\n03636649/points/2b194d6bed8daa82c0b2dda5ff15ea28.pts 
03636649/expert_verified/points_label/2b194d6bed8daa82c0b2dda5ff15ea28.seg 03636649\n03001627/points/7e6b4a7b4dd60c40cc8bd7a04c9659f1.pts 03001627/expert_verified/points_label/7e6b4a7b4dd60c40cc8bd7a04c9659f1.seg 03001627\n03948459/points/d1cc54762432fd058a2c998c0df41abe.pts 03948459/expert_verified/points_label/d1cc54762432fd058a2c998c0df41abe.seg 03948459\n04225987/points/776eaffd7cbe7bc6b9e8bdc9c4a49aa2.pts 04225987/expert_verified/points_label/776eaffd7cbe7bc6b9e8bdc9c4a49aa2.seg 04225987\n04379243/points/6ce30b0327db26f340b4c5428883e585.pts 04379243/expert_verified/points_label/6ce30b0327db26f340b4c5428883e585.seg 04379243\n04379243/points/c5230678204a1bb8dcfcef693e7ec696.pts 04379243/expert_verified/points_label/c5230678204a1bb8dcfcef693e7ec696.seg 04379243\n02691156/points/563cef4df464ddb1e153dd90dac45a6d.pts 02691156/expert_verified/points_label/563cef4df464ddb1e153dd90dac45a6d.seg 02691156\n02958343/points/42e6ce03b361102ab86e0633bb69faea.pts 02958343/expert_verified/points_label/42e6ce03b361102ab86e0633bb69faea.seg 02958343\n03001627/points/26e8033e59a3adf6bb53a6a5f5051240.pts 03001627/expert_verified/points_label/26e8033e59a3adf6bb53a6a5f5051240.seg 03001627\n04379243/points/731b983cb313634fd018082a1777a5f8.pts 04379243/expert_verified/points_label/731b983cb313634fd018082a1777a5f8.seg 04379243\n02691156/points/10aa040f470500c6a66ef8df4909ded9.pts 02691156/expert_verified/points_label/10aa040f470500c6a66ef8df4909ded9.seg 02691156\n03467517/points/bb895a87931f51c893f0194265a9746c.pts 03467517/expert_verified/points_label/bb895a87931f51c893f0194265a9746c.seg 03467517\n03624134/points/a105080ce4564145aeb54153795ede63.pts 03624134/expert_verified/points_label/a105080ce4564145aeb54153795ede63.seg 03624134\n04379243/points/c12147db9b29ef9ee0480c954dcd56d1.pts 04379243/expert_verified/points_label/c12147db9b29ef9ee0480c954dcd56d1.seg 04379243\n04379243/points/21cdc417e398378e40f3ac0af6b7e700.pts 
04379243/expert_verified/points_label/21cdc417e398378e40f3ac0af6b7e700.seg 04379243\n04379243/points/b11e0feb428f61edf008d8a3590fb522.pts 04379243/expert_verified/points_label/b11e0feb428f61edf008d8a3590fb522.seg 04379243\n04379243/points/2700f6693447c32d66c64744a4252d3.pts 04379243/expert_verified/points_label/2700f6693447c32d66c64744a4252d3.seg 04379243\n03467517/points/b6d0cf333c7e013993f0194265a9746c.pts 03467517/expert_verified/points_label/b6d0cf333c7e013993f0194265a9746c.seg 03467517\n03001627/points/ece627bd883d9bbfb0eb7e753c06942.pts 03001627/expert_verified/points_label/ece627bd883d9bbfb0eb7e753c06942.seg 03001627\n03636649/points/26f0f37f0f2623c4a3fa46ae73c48b4.pts 03636649/expert_verified/points_label/26f0f37f0f2623c4a3fa46ae73c48b4.seg 03636649\n04379243/points/8b07d458499d63f36d96c6cb347d6a90.pts 04379243/expert_verified/points_label/8b07d458499d63f36d96c6cb347d6a90.seg 04379243\n04379243/points/eb363770ee36b0309a79b01b89f55c86.pts 04379243/expert_verified/points_label/eb363770ee36b0309a79b01b89f55c86.seg 04379243\n04379243/points/ccf36a20b7ef3bd128071d61462a212d.pts 04379243/expert_verified/points_label/ccf36a20b7ef3bd128071d61462a212d.seg 04379243\n03001627/points/cf24fc2d10f8da31283b00891f680579.pts 03001627/expert_verified/points_label/cf24fc2d10f8da31283b00891f680579.seg 03001627\n02958343/points/8b4879617bd256391738f25e3015f92e.pts 02958343/expert_verified/points_label/8b4879617bd256391738f25e3015f92e.seg 02958343\n03001627/points/55e1cde05a99f6c7d1d34366ca81fb3b.pts 03001627/expert_verified/points_label/55e1cde05a99f6c7d1d34366ca81fb3b.seg 03001627\n03001627/points/6c25ec1178e9bab6e545858398955dd1.pts 03001627/expert_verified/points_label/6c25ec1178e9bab6e545858398955dd1.seg 03001627\n03001627/points/862f70e73fa70c9b1a719e2a845bdada.pts 03001627/expert_verified/points_label/862f70e73fa70c9b1a719e2a845bdada.seg 03001627\n04379243/points/fa5dce1043f44c06ab88e3acae6e8bc5.pts 
04379243/expert_verified/points_label/fa5dce1043f44c06ab88e3acae6e8bc5.seg 04379243\n03467517/points/6f9d1467eb39f8abfae47f572c17b9cb.pts 03467517/expert_verified/points_label/6f9d1467eb39f8abfae47f572c17b9cb.seg 03467517\n04379243/points/60ef2830979fd08ec72d4ae978770752.pts 04379243/expert_verified/points_label/60ef2830979fd08ec72d4ae978770752.seg 04379243\n03624134/points/d69e028056c9291069654277b747a908.pts 03624134/expert_verified/points_label/d69e028056c9291069654277b747a908.seg 03624134\n04379243/points/8e7c894039ae2cfe99e8bf807e902261.pts 04379243/expert_verified/points_label/8e7c894039ae2cfe99e8bf807e902261.seg 04379243\n02958343/points/4e2ca20091449636599389919f6522e6.pts 02958343/expert_verified/points_label/4e2ca20091449636599389919f6522e6.seg 02958343\n04379243/points/b10d84b3a04085b17618b16b281bdf56.pts 04379243/expert_verified/points_label/b10d84b3a04085b17618b16b281bdf56.seg 04379243\n03948459/points/d13986cc2403a2034b4b3d2a28039009.pts 03948459/expert_verified/points_label/d13986cc2403a2034b4b3d2a28039009.seg 03948459\n03636649/points/d97a86cea650ae0baf5b49ad7809302.pts 03636649/expert_verified/points_label/d97a86cea650ae0baf5b49ad7809302.seg 03636649\n03797390/points/ca198dc3f7dc0cacec6338171298c66b.pts 03797390/expert_verified/points_label/ca198dc3f7dc0cacec6338171298c66b.seg 03797390\n03636649/points/3f968096c74ee3a3b04a2e6a78ff6c49.pts 03636649/expert_verified/points_label/3f968096c74ee3a3b04a2e6a78ff6c49.seg 03636649\n02691156/points/4d6ec762d1583ded46555ee25941a22e.pts 02691156/expert_verified/points_label/4d6ec762d1583ded46555ee25941a22e.seg 02691156\n03467517/points/401ff6021157dee293f0194265a9746c.pts 03467517/expert_verified/points_label/401ff6021157dee293f0194265a9746c.seg 03467517\n04379243/points/c1d808c75cc5e7ab4da5bb83ec125010.pts 04379243/expert_verified/points_label/c1d808c75cc5e7ab4da5bb83ec125010.seg 04379243\n03790512/points/3d37db1d974499287395d58407f193ba.pts 
03790512/expert_verified/points_label/3d37db1d974499287395d58407f193ba.seg 03790512\n03624134/points/65892e0f7f93129d14cb807a24b99e1e.pts 03624134/expert_verified/points_label/65892e0f7f93129d14cb807a24b99e1e.seg 03624134\n03624134/points/854e7bb73afaff7591ea3afb2749822f.pts 03624134/expert_verified/points_label/854e7bb73afaff7591ea3afb2749822f.seg 03624134\n03624134/points/7b492f2baa1dc710cc34b900bb2492e.pts 03624134/expert_verified/points_label/7b492f2baa1dc710cc34b900bb2492e.seg 03624134\n03636649/points/b4b15a84b9067f94a75d03186a0409e2.pts 03636649/expert_verified/points_label/b4b15a84b9067f94a75d03186a0409e2.seg 03636649\n03636649/points/9db87bf898efd448cbde89e0c48a01bf.pts 03636649/expert_verified/points_label/9db87bf898efd448cbde89e0c48a01bf.seg 03636649\n02954340/points/9bd54e0123d3cd70a52821bf1aa3b19a.pts 02954340/expert_verified/points_label/9bd54e0123d3cd70a52821bf1aa3b19a.seg 02954340\n"
  },
  {
    "path": "dgcnn/tensorflow/part_seg/train_multi_gpu.py",
    "content": "import argparse\nimport subprocess\nimport tensorflow as tf\nimport numpy as np\nfrom datetime import datetime\nimport json\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.dirname(BASE_DIR))\nimport provider\nimport part_seg_model as model\n\nTOWER_NAME = 'tower'\n\n# DEFAULT SETTINGS\nparser = argparse.ArgumentParser()\nparser.add_argument('--num_gpu', type=int, default=2, help='The number of GPUs to use [default: 2]')\nparser.add_argument('--batch', type=int, default=16, help='Batch Size per GPU during training [default: 32]')\nparser.add_argument('--epoch', type=int, default=201, help='Epoch to run [default: 50]')\nparser.add_argument('--point_num', type=int, default=2048, help='Point Number [256/512/1024/2048]')\nparser.add_argument('--output_dir', type=str, default='train_results', help='Directory that stores all training logs and trained models')\nparser.add_argument('--wd', type=float, default=0, help='Weight Decay [Default: 0.0]')\nFLAGS = parser.parse_args()\n\nhdf5_data_dir = os.path.join(BASE_DIR, './hdf5_data')\n\n# MAIN SCRIPT\npoint_num = FLAGS.point_num\nbatch_size = FLAGS.batch\noutput_dir = FLAGS.output_dir\n\nif not os.path.exists(output_dir):\n  os.mkdir(output_dir)\n\n# color_map_file = os.path.join(hdf5_data_dir, 'part_color_mapping.json')\n# color_map = json.load(open(color_map_file, 'r'))\n\nall_obj_cats_file = os.path.join(hdf5_data_dir, 'all_object_categories.txt')\nfin = open(all_obj_cats_file, 'r')\nlines = [line.rstrip() for line in fin.readlines()]\nall_obj_cats = [(line.split()[0], line.split()[1]) for line in lines]\nfin.close()\n\nall_cats = json.load(open(os.path.join(hdf5_data_dir, 'overallid_to_catid_partid.json'), 'r'))\nNUM_CATEGORIES = 16\nNUM_PART_CATS = len(all_cats)\n\nprint('#### Batch Size Per GPU: {0}'.format(batch_size))\nprint('#### Point Number: {0}'.format(point_num))\nprint('#### Using GPUs: 
{0}'.format(FLAGS.num_gpu))\n\nDECAY_STEP = 16881 * 20\nDECAY_RATE = 0.5\n\nLEARNING_RATE_CLIP = 1e-5\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP * 2)\nBN_DECAY_CLIP = 0.99\n\nBASE_LEARNING_RATE = 0.003\nMOMENTUM = 0.9\nTRAINING_EPOCHES = FLAGS.epoch\nprint('### Training epoch: {0}'.format(TRAINING_EPOCHES))\n\nTRAINING_FILE_LIST = os.path.join(hdf5_data_dir, 'train_hdf5_file_list.txt')\nTESTING_FILE_LIST = os.path.join(hdf5_data_dir, 'val_hdf5_file_list.txt')\n\nMODEL_STORAGE_PATH = os.path.join(output_dir, 'trained_models')\nif not os.path.exists(MODEL_STORAGE_PATH):\n  os.mkdir(MODEL_STORAGE_PATH)\n\nLOG_STORAGE_PATH = os.path.join(output_dir, 'logs')\nif not os.path.exists(LOG_STORAGE_PATH):\n  os.mkdir(LOG_STORAGE_PATH)\n\nSUMMARIES_FOLDER =  os.path.join(output_dir, 'summaries')\nif not os.path.exists(SUMMARIES_FOLDER):\n  os.mkdir(SUMMARIES_FOLDER)\n\ndef printout(flog, data):\n  print(data)\n  flog.write(data + '\\n')\n\ndef convert_label_to_one_hot(labels):\n  label_one_hot = np.zeros((labels.shape[0], NUM_CATEGORIES))\n  for idx in range(labels.shape[0]):\n    label_one_hot[idx, labels[idx]] = 1\n  return label_one_hot\n\ndef average_gradients(tower_grads):\n  \"\"\"Calculate average gradient for each shared variable across all towers.\n\n  Note that this function provides a synchronization point across all towers.\n\n  Args:\n    tower_grads: List of lists of (gradient, variable) tuples. The outer list\n    is over individual gradients. The inner list is over the gradient\n    calculation for each tower.\n  Returns:\n     List of pairs of (gradient, variable) where the gradient has been \n     averaged across all towers.\n  \"\"\"\n  average_grads = []\n  for grad_and_vars in zip(*tower_grads):\n    # Note that each grad_and_vars looks like the following:\n    #   ((grad0_gpu0, var0_gpu0), ... 
, (grad0_gpuN, var0_gpuN))\n    grads = []\n    for g, _ in grad_and_vars:\n      if g is None:\n        continue\n      expanded_g = tf.expand_dims(g, 0)\n      grads.append(expanded_g)\n\n    # Average over the 'tower' dimension.\n    grad = tf.concat(grads, 0)\n    grad = tf.reduce_mean(grad, 0)\n\n    # Keep in mind that the Variables are redundant because they are shared\n    # across towers. So .. we will just return the first tower's pointer to\n    # the Variable.\n    v = grad_and_vars[0][1]\n    grad_and_var = (grad, v)\n    average_grads.append(grad_and_var)\n  return average_grads\n\n\ndef train():\n  with tf.Graph().as_default(), tf.device('/cpu:0'):\n\n    batch = tf.Variable(0, trainable=False)\n    \n    learning_rate = tf.train.exponential_decay(\n            BASE_LEARNING_RATE,     # base learning rate\n            batch * batch_size,     # global_var indicating the number of steps\n            DECAY_STEP,             # step size\n            DECAY_RATE,             # decay rate\n            staircase=True          # Stair-case or continuous decreasing\n            )\n    learning_rate = tf.maximum(learning_rate, LEARNING_RATE_CLIP)\n  \n    bn_momentum = tf.train.exponential_decay(\n          BN_INIT_DECAY,\n          batch*batch_size,\n          BN_DECAY_DECAY_STEP,\n          BN_DECAY_DECAY_RATE,\n          staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n\n    lr_op = tf.summary.scalar('learning_rate', learning_rate)\n    batch_op = tf.summary.scalar('batch_number', batch)\n    bn_decay_op = tf.summary.scalar('bn_decay', bn_decay)\n\n    trainer = tf.train.AdamOptimizer(learning_rate)\n\n    # store tensors for different gpus\n    tower_grads = []\n    pointclouds_phs = []\n    input_label_phs = []\n    seg_phs = []\n    is_training_phs = []\n\n    with tf.variable_scope(tf.get_variable_scope()):\n      for i in range(FLAGS.num_gpu):  # range, not Python-2-only xrange\n        with tf.device('/gpu:%d' % i):\n          with tf.name_scope('%s_%d' % 
(TOWER_NAME, i)) as scope:\n            pointclouds_phs.append(tf.placeholder(tf.float32, shape=(batch_size, point_num, 3))) # for points\n            input_label_phs.append(tf.placeholder(tf.float32, shape=(batch_size, NUM_CATEGORIES))) # for one-hot category label\n            seg_phs.append(tf.placeholder(tf.int32, shape=(batch_size, point_num))) # for part labels\n            is_training_phs.append(tf.placeholder(tf.bool, shape=()))\n\n            seg_pred = model.get_model(pointclouds_phs[-1], input_label_phs[-1], \\\n                is_training=is_training_phs[-1], bn_decay=bn_decay, cat_num=NUM_CATEGORIES, \\\n                part_num=NUM_PART_CATS, batch_size=batch_size, num_point=point_num, weight_decay=FLAGS.wd)\n\n\n            loss, per_instance_seg_loss, per_instance_seg_pred_res  \\\n              = model.get_loss(seg_pred, seg_phs[-1])\n\n            total_training_loss_ph = tf.placeholder(tf.float32, shape=())\n            total_testing_loss_ph = tf.placeholder(tf.float32, shape=())\n\n            seg_training_acc_ph = tf.placeholder(tf.float32, shape=())\n            seg_testing_acc_ph = tf.placeholder(tf.float32, shape=())\n            seg_testing_acc_avg_cat_ph = tf.placeholder(tf.float32, shape=())\n\n            total_train_loss_sum_op = tf.summary.scalar('total_training_loss', total_training_loss_ph)\n            total_test_loss_sum_op = tf.summary.scalar('total_testing_loss', total_testing_loss_ph)\n\n        \n            seg_train_acc_sum_op = tf.summary.scalar('seg_training_acc', seg_training_acc_ph)\n            seg_test_acc_sum_op = tf.summary.scalar('seg_testing_acc', seg_testing_acc_ph)\n            seg_test_acc_avg_cat_op = tf.summary.scalar('seg_testing_acc_avg_cat', seg_testing_acc_avg_cat_ph)\n\n            tf.get_variable_scope().reuse_variables()\n\n            grads = trainer.compute_gradients(loss)\n\n            tower_grads.append(grads)\n\n    grads = average_gradients(tower_grads)\n\n    train_op = 
trainer.apply_gradients(grads, global_step=batch)\n\n    saver = tf.train.Saver(tf.global_variables(), sharded=True, max_to_keep=20)\n\n    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    config.allow_soft_placement = True\n    sess = tf.Session(config=config)\n    \n    init = tf.group(tf.global_variables_initializer(),\n             tf.local_variables_initializer())\n    sess.run(init)\n\n    train_writer = tf.summary.FileWriter(SUMMARIES_FOLDER + '/train', sess.graph)\n    test_writer = tf.summary.FileWriter(SUMMARIES_FOLDER + '/test')\n\n    train_file_list = provider.getDataFiles(TRAINING_FILE_LIST)\n    num_train_file = len(train_file_list)\n    test_file_list = provider.getDataFiles(TESTING_FILE_LIST)\n    num_test_file = len(test_file_list)\n\n    fcmd = open(os.path.join(LOG_STORAGE_PATH, 'cmd.txt'), 'w')\n    fcmd.write(str(FLAGS))\n    fcmd.close()\n\n    # write logs to the disk\n    flog = open(os.path.join(LOG_STORAGE_PATH, 'log.txt'), 'w')\n\n    def train_one_epoch(train_file_idx, epoch_num):\n      is_training = True\n\n      for i in range(num_train_file):\n        cur_train_filename = os.path.join(hdf5_data_dir, train_file_list[train_file_idx[i]])\n        printout(flog, 'Loading train file ' + cur_train_filename)\n\n        cur_data, cur_labels, cur_seg = provider.load_h5_data_label_seg(cur_train_filename)\n        cur_data, cur_labels, order = provider.shuffle_data(cur_data, np.squeeze(cur_labels))\n        cur_seg = cur_seg[order, ...]\n\n        cur_labels_one_hot = convert_label_to_one_hot(cur_labels)\n\n        num_data = len(cur_labels)\n        num_batch = num_data // (FLAGS.num_gpu * batch_size) # For all working gpus\n\n        total_loss = 0.0\n        total_seg_acc = 0.0\n\n        for j in range(num_batch):\n          begidx_0 = j * batch_size\n          endidx_0 = (j + 1) * batch_size\n          begidx_1 = (j + 1) * batch_size\n          endidx_1 = (j + 2) * batch_size\n\n          feed_dict = {\n          
    # For the first gpu\n              pointclouds_phs[0]: cur_data[begidx_0: endidx_0, ...], \n              input_label_phs[0]: cur_labels_one_hot[begidx_0: endidx_0, ...], \n              seg_phs[0]: cur_seg[begidx_0: endidx_0, ...],\n              is_training_phs[0]: is_training, \n              # For the second gpu\n              pointclouds_phs[1]: cur_data[begidx_1: endidx_1, ...], \n              input_label_phs[1]: cur_labels_one_hot[begidx_1: endidx_1, ...], \n              seg_phs[1]: cur_seg[begidx_1: endidx_1, ...],\n              is_training_phs[1]: is_training, \n              }\n\n\n          # train_op is for both gpus, and the others are for gpu_1\n          _, loss_val, per_instance_seg_loss_val, seg_pred_val, pred_seg_res \\\n              = sess.run([train_op, loss, per_instance_seg_loss, seg_pred, per_instance_seg_pred_res], \\\n              feed_dict=feed_dict)\n\n          per_instance_part_acc = np.mean(pred_seg_res == cur_seg[begidx_1: endidx_1, ...], axis=1)\n          average_part_acc = np.mean(per_instance_part_acc)\n\n          total_loss += loss_val\n          total_seg_acc += average_part_acc\n\n        total_loss = total_loss * 1.0 / num_batch\n        total_seg_acc = total_seg_acc * 1.0 / num_batch\n\n        lr_sum, bn_decay_sum, batch_sum, train_loss_sum, train_seg_acc_sum = sess.run(\\\n            [lr_op, bn_decay_op, batch_op, total_train_loss_sum_op, seg_train_acc_sum_op], \\\n            feed_dict={total_training_loss_ph: total_loss, seg_training_acc_ph: total_seg_acc})\n\n        train_writer.add_summary(train_loss_sum, i + epoch_num * num_train_file)\n        train_writer.add_summary(lr_sum, i + epoch_num * num_train_file)\n        train_writer.add_summary(bn_decay_sum, i + epoch_num * num_train_file)\n        train_writer.add_summary(train_seg_acc_sum, i + epoch_num * num_train_file)\n        train_writer.add_summary(batch_sum, i + epoch_num * num_train_file)\n\n        printout(flog, '\\tTraining Total Mean_loss: %f' % 
total_loss)\n        printout(flog, '\\t\\tTraining Seg Accuracy: %f' % total_seg_acc)\n\n    def eval_one_epoch(epoch_num):\n      is_training = False\n\n      total_loss = 0.0\n      total_seg_acc = 0.0\n      total_seen = 0\n\n      total_seg_acc_per_cat = np.zeros((NUM_CATEGORIES)).astype(np.float32)\n      total_seen_per_cat = np.zeros((NUM_CATEGORIES)).astype(np.int32)\n\n      for i in range(num_test_file):\n        cur_test_filename = os.path.join(hdf5_data_dir, test_file_list[i])\n        printout(flog, 'Loading test file ' + cur_test_filename)\n\n        cur_data, cur_labels, cur_seg = provider.load_h5_data_label_seg(cur_test_filename)\n        cur_labels = np.squeeze(cur_labels)\n\n        cur_labels_one_hot = convert_label_to_one_hot(cur_labels)\n\n        num_data = len(cur_labels)\n        num_batch = num_data // batch_size\n\n        # Run on gpu_1, since the tensors used for evaluation are defined on gpu_1\n        for j in range(num_batch):\n          begidx = j * batch_size\n          endidx = (j + 1) * batch_size\n          feed_dict = {\n              pointclouds_phs[1]: cur_data[begidx: endidx, ...], \n              input_label_phs[1]: cur_labels_one_hot[begidx: endidx, ...], \n              seg_phs[1]: cur_seg[begidx: endidx, ...],\n              is_training_phs[1]: is_training}\n\n          loss_val, per_instance_seg_loss_val, seg_pred_val, pred_seg_res \\\n              = sess.run([loss, per_instance_seg_loss, seg_pred, per_instance_seg_pred_res], \\\n              feed_dict=feed_dict)\n\n          per_instance_part_acc = np.mean(pred_seg_res == cur_seg[begidx: endidx, ...], axis=1)\n          average_part_acc = np.mean(per_instance_part_acc)\n\n          total_seen += 1\n          total_loss += loss_val\n          \n          total_seg_acc += average_part_acc\n\n          for shape_idx in range(begidx, endidx):\n            total_seen_per_cat[cur_labels[shape_idx]] += 1\n            total_seg_acc_per_cat[cur_labels[shape_idx]] += 
per_instance_part_acc[shape_idx - begidx]\n\n      total_loss = total_loss * 1.0 / total_seen\n      total_seg_acc = total_seg_acc * 1.0 / total_seen\n\n      test_loss_sum, test_seg_acc_sum = sess.run(\\\n          [total_test_loss_sum_op, seg_test_acc_sum_op], \\\n          feed_dict={total_testing_loss_ph: total_loss, \\\n          seg_testing_acc_ph: total_seg_acc})\n\n      test_writer.add_summary(test_loss_sum, (epoch_num+1) * num_train_file-1)\n      test_writer.add_summary(test_seg_acc_sum, (epoch_num+1) * num_train_file-1)\n\n      printout(flog, '\\tTesting Total Mean_loss: %f' % total_loss)\n      printout(flog, '\\t\\tTesting Seg Accuracy: %f' % total_seg_acc)\n\n      for cat_idx in range(NUM_CATEGORIES):\n        if total_seen_per_cat[cat_idx] > 0:\n          printout(flog, '\\n\\t\\tCategory %s Object Number: %d' % (all_obj_cats[cat_idx][0], total_seen_per_cat[cat_idx]))\n          printout(flog, '\\t\\tCategory %s Seg Accuracy: %f' % (all_obj_cats[cat_idx][0], total_seg_acc_per_cat[cat_idx]/total_seen_per_cat[cat_idx]))\n\n    if not os.path.exists(MODEL_STORAGE_PATH):\n      os.mkdir(MODEL_STORAGE_PATH)\n\n    for epoch in range(TRAINING_EPOCHES):\n      printout(flog, '\\n<<< Testing on the test dataset ...')\n      eval_one_epoch(epoch)\n\n      printout(flog, '\\n>>> Training for the epoch %d/%d ...' % (epoch, TRAINING_EPOCHES))\n\n      train_file_idx = np.arange(0, len(train_file_list))\n      np.random.shuffle(train_file_idx)\n\n      train_one_epoch(train_file_idx, epoch)\n\n      if epoch % 5 == 0:\n        cp_filename = saver.save(sess, os.path.join(MODEL_STORAGE_PATH, 'epoch_' + str(epoch)+'.ckpt'))\n        printout(flog, 'Successfully store the checkpoint model into ' + cp_filename)\n\n      flog.flush()\n\n    flog.close()\n\nif __name__=='__main__':\n  train()\n"
  },
  {
    "path": "dgcnn/tensorflow/provider.py",
    "content": "import os\nimport sys\nimport numpy as np\nimport h5py\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\n# Download dataset for point cloud classification\nDATA_DIR = os.path.join(BASE_DIR, 'data')\nif not os.path.exists(DATA_DIR):\n  os.mkdir(DATA_DIR)\nif not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n  www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n  zipfile = os.path.basename(www)\n  os.system('wget %s; unzip %s' % (www, zipfile))\n  os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n  os.system('rm %s' % (zipfile))\n\n\ndef shuffle_data(data, labels):\n  \"\"\" Shuffle data and labels.\n    Input:\n      data: B,N,... numpy array\n      label: B,... numpy array\n    Return:\n      shuffled data, label and shuffle indices\n  \"\"\"\n  idx = np.arange(len(labels))\n  np.random.shuffle(idx)\n  return data[idx, ...], labels[idx], idx\n\n\ndef rotate_point_cloud(batch_data):\n  \"\"\" Randomly rotate the point clouds to augument the dataset\n    rotation is per shape based along up direction\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, rotated batch of point clouds\n  \"\"\"\n  rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n  for k in xrange(batch_data.shape[0]):\n    rotation_angle = np.random.uniform() * 2 * np.pi\n    cosval = np.cos(rotation_angle)\n    sinval = np.sin(rotation_angle)\n    rotation_matrix = np.array([[cosval, 0, sinval],\n                  [0, 1, 0],\n                  [-sinval, 0, cosval]])\n    shape_pc = batch_data[k, ...]\n    rotated_data[k, ...] 
= np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n  return rotated_data\n\n\ndef rotate_point_cloud_by_angle(batch_data, rotation_angle):\n  \"\"\" Rotate the point cloud along up direction with certain angle.\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, rotated batch of point clouds\n  \"\"\"\n  rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n  for k in range(batch_data.shape[0]):\n    #rotation_angle = np.random.uniform() * 2 * np.pi\n    cosval = np.cos(rotation_angle)\n    sinval = np.sin(rotation_angle)\n    rotation_matrix = np.array([[cosval, 0, sinval],\n                  [0, 1, 0],\n                  [-sinval, 0, cosval]])\n    shape_pc = batch_data[k, ...]\n    rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n  return rotated_data\n\n\ndef rotate_perturbation_point_cloud(batch_data, angle_sigma=0.06, angle_clip=0.18):\n  \"\"\" Randomly perturb the point clouds by small rotations\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, rotated batch of point clouds\n  \"\"\"\n  rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n  for k in range(batch_data.shape[0]):\n    angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip)\n    Rx = np.array([[1,0,0],\n             [0,np.cos(angles[0]),-np.sin(angles[0])],\n             [0,np.sin(angles[0]),np.cos(angles[0])]])\n    Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])],\n             [0,1,0],\n             [-np.sin(angles[1]),0,np.cos(angles[1])]])\n    Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0],\n             [np.sin(angles[2]),np.cos(angles[2]),0],\n             [0,0,1]])\n    R = np.dot(Rz, np.dot(Ry,Rx))\n    shape_pc = batch_data[k, ...]\n    rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), R)\n  return rotated_data\n\n\ndef jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):\n  \"\"\" Randomly jitter points. 
jittering is per point.\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, jittered batch of point clouds\n  \"\"\"\n  B, N, C = batch_data.shape\n  assert(clip > 0)\n  jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)\n  jittered_data += batch_data\n  return jittered_data\n\ndef shift_point_cloud(batch_data, shift_range=0.1):\n  \"\"\" Randomly shift point cloud. Shift is per point cloud.\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, shifted batch of point clouds\n  \"\"\"\n  B, N, C = batch_data.shape\n  shifts = np.random.uniform(-shift_range, shift_range, (B,3))\n  for batch_index in range(B):\n    batch_data[batch_index,:,:] += shifts[batch_index,:]\n  return batch_data\n\n\ndef random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):\n  \"\"\" Randomly scale the point cloud. Scale is per point cloud.\n    Input:\n      BxNx3 array, original batch of point clouds\n    Return:\n      BxNx3 array, scaled batch of point clouds\n  \"\"\"\n  B, N, C = batch_data.shape\n  scales = np.random.uniform(scale_low, scale_high, B)\n  for batch_index in range(B):\n    batch_data[batch_index,:,:] *= scales[batch_index]\n  return batch_data\n\ndef getDataFiles(list_filename):\n  return [line.rstrip() for line in open(list_filename)]\n\ndef load_h5(h5_filename):\n  f = h5py.File(h5_filename, 'r')  # open read-only; avoid relying on h5py's deprecated default mode\n  data = f['data'][:]\n  label = f['label'][:]\n  return (data, label)\n\ndef loadDataFile(filename):\n  return load_h5(filename)\n\n\ndef load_h5_data_label_seg(h5_filename):\n  f = h5py.File(h5_filename, 'r')\n  data = f['data'][:] # (2048, 2048, 3)\n  label = f['label'][:] # (2048, 1)\n  seg = f['pid'][:] # (2048, 2048)\n  return (data, label, seg)\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/README.md",
    "content": "## Semantic segmentation of indoor scenes\n\n### Dataset\n\n1. Donwload prepared HDF5 data for training:\n```\nsh +x download_data.sh\n```\n2. Download 3D indoor parsing dataset (<a href=\"http://buildingparser.stanford.edu/dataset.html\">S3DIS Dataset</a>) for testing and visualization. \"Stanford3dDataset_v1.2_Aligned_Version.zip\" of the dataset is used. Unzip the downloaded file into \"dgcnn/data/\", and then run\n```\npython collect_indoor3d_data.py\n```\nto generate \"dgcnn/data/stanford_indoor3d\"\n\n### Train\n\nWe use 6-fold training, such that 6 models are trained leaving 1 of 6 areas as the testing area for each model. We keep using 2 GPUs for distributed training. To train 6 models sequentially, run\n```\nsh +x train_job.sh\n```\n\n### Evaluation\n\n1. To generate predicted results for all 6 areas, run \n```\nsh +x test_job.sh\n```\nThe model parameters are saved every 10 epochs, the saved model used to generate predited results can be changed by setting \"--model_path\" in \"test_job.sh\". For example, if you want to use the model saved after 70 epochs, you can set \"--model_path\" to \"log*n*/epoch_70.ckpt\" for *n* = 1, 2, ..., 6. To visualize the results, you can add \"--visu\" flag in the end of each line in \"test_job.sh\".\n\n2. To obtain overall quantitative evaluation results, run\n```\npython eval_iou_accuracy.py\n```\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/batch_inference.py",
    "content": "import argparse\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nfrom model import *\nimport indoor3d_util\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--batch_size', type=int, default=1, help='Batch Size during training [default: 1]')\nparser.add_argument('--num_point', type=int, default=4096, help='Point number [default: 4096]')\nparser.add_argument('--model_path', required=True, help='model checkpoint file path')\nparser.add_argument('--dump_dir', required=True, help='dump folder path')\nparser.add_argument('--output_filelist', required=True, help='TXT filename, filelist, each line is an output for a room')\nparser.add_argument('--room_data_filelist', required=True, help='TXT filename, filelist, each line is a test room data label file.')\nparser.add_argument('--no_clutter', action='store_true', help='If true, donot count the clutter class')\nparser.add_argument('--visu', action='store_true', help='Whether to output OBJ file for prediction visualization.')\nFLAGS = parser.parse_args()\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMODEL_PATH = FLAGS.model_path\nGPU_INDEX = FLAGS.gpu\nDUMP_DIR = FLAGS.dump_dir\nif not os.path.exists(DUMP_DIR): os.mkdir(DUMP_DIR)\nLOG_FOUT = open(os.path.join(DUMP_DIR, 'log_evaluate.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\nROOM_PATH_LIST = [os.path.join(ROOT_DIR,line.rstrip()) for line in open(FLAGS.room_data_filelist)]\n\nNUM_CLASSES = 13\n\ndef log_string(out_str):\n  LOG_FOUT.write(out_str+'\\n')\n  LOG_FOUT.flush()\n  print(out_str)\n\ndef evaluate():\n  is_training = False\n   \n  with tf.device('/gpu:'+str(GPU_INDEX)):\n    pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)\n    is_training_pl = tf.placeholder(tf.bool, shape=())\n\n    pred = get_model(pointclouds_pl, 
is_training_pl)\n    loss = get_loss(pred, labels_pl)\n    pred_softmax = tf.nn.softmax(pred)\n \n    saver = tf.train.Saver()\n    \n  config = tf.ConfigProto()\n  config.gpu_options.allow_growth = True\n  config.allow_soft_placement = True\n  sess = tf.Session(config=config)\n\n  saver.restore(sess, MODEL_PATH)\n  log_string(\"Model restored.\")\n\n  ops = {'pointclouds_pl': pointclouds_pl,\n       'labels_pl': labels_pl,\n       'is_training_pl': is_training_pl,\n       'pred': pred,\n       'pred_softmax': pred_softmax,\n       'loss': loss}\n  \n  total_correct = 0\n  total_seen = 0\n  fout_out_filelist = open(FLAGS.output_filelist, 'w')\n  for room_path in ROOM_PATH_LIST:\n    out_data_label_filename = os.path.basename(room_path)[:-4] + '_pred.txt'\n    out_data_label_filename = os.path.join(DUMP_DIR, out_data_label_filename)\n    out_gt_label_filename = os.path.basename(room_path)[:-4] + '_gt.txt'\n    out_gt_label_filename = os.path.join(DUMP_DIR, out_gt_label_filename)\n   \n    print(room_path, out_data_label_filename)\n    # Evaluate room one by one.\n    a, b = eval_one_epoch(sess, ops, room_path, out_data_label_filename, out_gt_label_filename)\n    total_correct += a\n    total_seen += b\n    fout_out_filelist.write(out_data_label_filename+'\\n')\n  fout_out_filelist.close()\n  log_string('all room eval accuracy: %f'% (total_correct / float(total_seen)))\n\ndef eval_one_epoch(sess, ops, room_path, out_data_label_filename, out_gt_label_filename):\n  error_cnt = 0\n  is_training = False\n  total_correct = 0\n  total_seen = 0\n  loss_sum = 0\n  total_seen_class = [0 for _ in range(NUM_CLASSES)]\n  total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n  if FLAGS.visu:\n    fout = open(os.path.join(DUMP_DIR, os.path.basename(room_path)[:-4]+'_pred.obj'), 'w')\n    fout_gt = open(os.path.join(DUMP_DIR, os.path.basename(room_path)[:-4]+'_gt.obj'), 'w')\n    fout_real_color = open(os.path.join(DUMP_DIR, os.path.basename(room_path)[:-4]+'_real_color.obj'), 
'w')\n  fout_data_label = open(out_data_label_filename, 'w')\n  fout_gt_label = open(out_gt_label_filename, 'w')\n  \n  current_data, current_label = indoor3d_util.room2blocks_wrapper_normalized(room_path, NUM_POINT)\n  current_data = current_data[:,0:NUM_POINT,:]\n  current_label = np.squeeze(current_label)\n  # Get room dimension..\n  data_label = np.load(room_path)\n  data = data_label[:,0:6]\n  max_room_x = max(data[:,0])\n  max_room_y = max(data[:,1])\n  max_room_z = max(data[:,2])\n  \n  file_size = current_data.shape[0]\n  num_batches = file_size // BATCH_SIZE\n  print(file_size)\n\n  \n  for batch_idx in range(num_batches):\n    start_idx = batch_idx * BATCH_SIZE\n    end_idx = (batch_idx+1) * BATCH_SIZE\n    cur_batch_size = end_idx - start_idx\n    \n    feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],\n           ops['labels_pl']: current_label[start_idx:end_idx],\n           ops['is_training_pl']: is_training}\n    loss_val, pred_val = sess.run([ops['loss'], ops['pred_softmax']],\n                    feed_dict=feed_dict)\n\n    if FLAGS.no_clutter:\n      pred_label = np.argmax(pred_val[:,:,0:12], 2) # BxN\n    else:\n      pred_label = np.argmax(pred_val, 2) # BxN\n    \n    # Save prediction labels to OBJ file\n    for b in range(BATCH_SIZE):\n      pts = current_data[start_idx+b, :, :]\n      l = current_label[start_idx+b,:]\n      pts[:,6] *= max_room_x\n      pts[:,7] *= max_room_y\n      pts[:,8] *= max_room_z\n      pts[:,3:6] *= 255.0\n      pred = pred_label[b, :]\n      for i in range(NUM_POINT):\n        color = indoor3d_util.g_label2color[pred[i]]\n        color_gt = indoor3d_util.g_label2color[current_label[start_idx+b, i]]\n        if FLAGS.visu:\n          fout.write('v %f %f %f %d %d %d\\n' % (pts[i,6], pts[i,7], pts[i,8], color[0], color[1], color[2]))\n          fout_gt.write('v %f %f %f %d %d %d\\n' % (pts[i,6], pts[i,7], pts[i,8], color_gt[0], color_gt[1], color_gt[2]))\n        fout_data_label.write('%f %f 
%f %d %d %d %f %d\\n' % (pts[i,6], pts[i,7], pts[i,8], pts[i,3], pts[i,4], pts[i,5], pred_val[b,i,pred[i]], pred[i]))\n        fout_gt_label.write('%d\\n' % (l[i]))\n    \n    correct = np.sum(pred_label == current_label[start_idx:end_idx,:])\n    total_correct += correct\n    total_seen += (cur_batch_size*NUM_POINT)\n    loss_sum += (loss_val*BATCH_SIZE)\n    for i in range(start_idx, end_idx):\n      for j in range(NUM_POINT):\n        l = current_label[i, j]\n        total_seen_class[l] += 1\n        total_correct_class[l] += (pred_label[i-start_idx, j] == l)\n\n  log_string('eval mean loss: %f' % (loss_sum / float(total_seen/NUM_POINT)))\n  log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n  fout_data_label.close()\n  fout_gt_label.close()\n  if FLAGS.visu:\n    fout.close()\n    fout_gt.close()\n  return total_correct, total_seen\n\n\nif __name__=='__main__':\n  with tf.Graph().as_default():\n    evaluate()\n  LOG_FOUT.close()\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/collect_indoor3d_data.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nimport indoor3d_util\n\nanno_paths = [line.rstrip() for line in open(os.path.join(BASE_DIR, 'meta/anno_paths.txt'))]\nanno_paths = [os.path.join(indoor3d_util.DATA_PATH, p) for p in anno_paths]\n\noutput_folder = os.path.join(ROOT_DIR, 'data/stanford_indoor3d') \nif not os.path.exists(output_folder):\n  os.mkdir(output_folder)\n\n# Note: there is an extra character in the v1.2 data in Area_5/hallway_6. It's fixed manually.\nfor anno_path in anno_paths:\n  print(anno_path)\n  try:\n    elements = anno_path.split('/')\n    out_filename = elements[-3]+'_'+elements[-2]+'.npy'\n    indoor3d_util.collect_point_label(anno_path, os.path.join(output_folder, out_filename), 'numpy')\n  except:\n    print(anno_path, 'ERROR!!')\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/download_data.sh",
    "content": "#!/bin/bash\n\n# Download HDF5 for indoor 3d semantic segmentation (around 1.6GB) -> 'indoor3d_sem_seg_hdf5_data'\nwget https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip\nunzip indoor3d_sem_seg_hdf5_data.zip\nrm indoor3d_sem_seg_hdf5_data.zip"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/eval_iou_accuracy.py",
    "content": "import numpy as np\n\npred_data_label_filenames = []\nfor i in range(1,7):\n  file_name = 'log{}/output_filelist.txt'.format(i)\n  pred_data_label_filenames += [line.rstrip() for line in open(file_name)]\n\ngt_label_filenames = [f.rstrip('_pred\\.txt') + '_gt.txt' for f in pred_data_label_filenames]\n\nnum_room = len(gt_label_filenames)\n\ngt_classes = [0 for _ in range(13)]\npositive_classes = [0 for _ in range(13)]\ntrue_positive_classes = [0 for _ in range(13)]\n\nfor i in range(num_room):\n  print(i)\n  data_label = np.loadtxt(pred_data_label_filenames[i])\n  pred_label = data_label[:,-1]\n  gt_label = np.loadtxt(gt_label_filenames[i])\n  print(gt_label.shape)\n  for j in xrange(gt_label.shape[0]):\n    gt_l = int(gt_label[j])\n    pred_l = int(pred_label[j])\n    gt_classes[gt_l] += 1\n    positive_classes[pred_l] += 1\n    true_positive_classes[gt_l] += int(gt_l==pred_l)\n\n\nprint(gt_classes)\nprint(positive_classes)\nprint(true_positive_classes)\n\nprint('Overall accuracy: {0}'.format(sum(true_positive_classes)/float(sum(positive_classes))))\n\nprint 'IoU:'\niou_list = []\nfor i in range(13):\n  iou = true_positive_classes[i]/float(gt_classes[i]+positive_classes[i]-true_positive_classes[i]) \n  print(iou)\n  iou_list.append(iou)\n\nprint 'avg IoU:'\nprint(sum(iou_list)/13.0)\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/indoor3d_util.py",
    "content": "import numpy as np\nimport glob\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\n\n# -----------------------------------------------------------------------------\n# CONSTANTS\n# -----------------------------------------------------------------------------\n\nDATA_PATH = os.path.join(ROOT_DIR, 'data', 'Stanford3dDataset_v1.2_Aligned_Version')\ng_classes = [x.rstrip() for x in open(os.path.join(BASE_DIR, 'meta/class_names.txt'))]\ng_class2label = {cls: i for i,cls in enumerate(g_classes)}\ng_class2color = {'ceiling': [0,255,0],\n         'floor': [0,0,255],\n         'wall':  [0,255,255],\n         'beam':        [255,255,0],\n         'column':      [255,0,255],\n         'window':      [100,100,255],\n         'door':        [200,200,100],\n         'table':       [170,120,200],\n         'chair':       [255,0,0],\n         'sofa':        [200,100,100],\n         'bookcase':    [10,200,100],\n         'board':       [200,200,200],\n         'clutter':     [50,50,50]} \ng_easy_view_labels = [7,8,9,10,11,1]\ng_label2color = {g_classes.index(cls): g_class2color[cls] for cls in g_classes}\n\n\n# -----------------------------------------------------------------------------\n# CONVERT ORIGINAL DATA TO OUR DATA_LABEL FILES\n# -----------------------------------------------------------------------------\n\ndef collect_point_label(anno_path, out_filename, file_format='txt'):\n  \"\"\" Convert original dataset files to data_label file (each line is XYZRGBL).\n    We aggregated all the points from each instance in the room.\n\n  Args:\n    anno_path: path to annotations. e.g. 
Area_1/office_2/Annotations/\n    out_filename: path to save collected points and labels (each line is XYZRGBL)\n    file_format: txt or numpy, determines what file format to save.\n  Returns:\n    None\n  Note:\n    the points are shifted before save, the most negative point is now at origin.\n  \"\"\"\n  points_list = []\n \n  for f in glob.glob(os.path.join(anno_path, '*.txt')):\n    cls = os.path.basename(f).split('_')[0]\n    if cls not in g_classes: # note: in some room there is 'staris' class..\n      cls = 'clutter'\n    points = np.loadtxt(f)\n    labels = np.ones((points.shape[0],1)) * g_class2label[cls]\n    points_list.append(np.concatenate([points, labels], 1)) # Nx7\n  \n\n  data_label = np.concatenate(points_list, 0)\n  xyz_min = np.amin(data_label, axis=0)[0:3]\n  data_label[:, 0:3] -= xyz_min\n  \n  if file_format=='txt':\n    fout = open(out_filename, 'w')\n    for i in range(data_label.shape[0]):\n      fout.write('%f %f %f %d %d %d %d\\n' % \\\n              (data_label[i,0], data_label[i,1], data_label[i,2],\n               data_label[i,3], data_label[i,4], data_label[i,5],\n               data_label[i,6]))\n    fout.close()\n  elif file_format=='numpy':\n    np.save(out_filename, data_label)\n  else:\n    print('ERROR!! Unknown file format: %s, please use txt or numpy.' 
% \\\n      (file_format))\n    exit()\n\ndef point_label_to_obj(input_filename, out_filename, label_color=True, easy_view=False, no_wall=False):\n  \"\"\" For visualization of a room from data_label file,\n  input_filename: each line is X Y Z R G B L\n  out_filename: OBJ filename,\n      visualize input file by coloring point with label color\n    easy_view: only visualize furniture and floor\n  \"\"\"\n  data_label = np.loadtxt(input_filename)\n  data = data_label[:, 0:6]\n  label = data_label[:, -1].astype(int)\n  fout = open(out_filename, 'w')\n  for i in range(data.shape[0]):\n    color = g_label2color[label[i]]\n    if easy_view and (label[i] not in g_easy_view_labels):\n      continue\n    if no_wall and ((label[i] == 2) or (label[i]==0)):\n      continue\n    if label_color:\n      fout.write('v %f %f %f %d %d %d\\n' % \\\n        (data[i,0], data[i,1], data[i,2], color[0], color[1], color[2]))\n    else:\n      fout.write('v %f %f %f %d %d %d\\n' % \\\n        (data[i,0], data[i,1], data[i,2], data[i,3], data[i,4], data[i,5]))\n  fout.close()\n \n\n\n# -----------------------------------------------------------------------------\n# PREPARE BLOCK DATA FOR DEEPNETS TRAINING/TESTING\n# -----------------------------------------------------------------------------\n\ndef sample_data(data, num_sample):\n  \"\"\" data is in N x ...\n    we want to keep num_sample x C of them.\n    if N > num_sample, we will randomly keep num_sample of them.\n    if N < num_sample, we will randomly duplicate samples.\n  \"\"\"\n  N = data.shape[0]\n  if (N == num_sample):\n    return data, list(range(N))\n  elif (N > num_sample):\n    sample = np.random.choice(N, num_sample)\n    return data[sample, ...], sample\n  else:\n    sample = np.random.choice(N, num_sample-N)\n    dup_data = data[sample, ...]\n    # range objects cannot be concatenated with + in Python 3, so convert to list first.\n    return np.concatenate([data, dup_data], 0), list(range(N))+list(sample)\n\ndef sample_data_label(data, label, num_sample):\n  new_data, sample_indices = sample_data(data, num_sample)\n  
new_label = label[sample_indices]\n  return new_data, new_label\n  \ndef room2blocks(data, label, num_point, block_size=1.0, stride=1.0,\n        random_sample=False, sample_num=None, sample_aug=1):\n  \"\"\" Prepare block training data.\n  Args:\n    data: N x 6 numpy array, 012 are XYZ in meters, 345 are RGB in [0,1]\n      assumes the data is shifted (min point is origin) and aligned\n      (aligned with XYZ axis)\n    label: N size uint8 numpy array from 0-12\n    num_point: int, how many points to sample in each block\n    block_size: float, physical size of the block in meters\n    stride: float, stride for block sweeping\n    random_sample: bool, if True, we will randomly sample blocks in the room\n    sample_num: int, if random sample, how many blocks to sample\n      [default: room area]\n    sample_aug: if random sample, how much aug\n  Returns:\n    block_datas: K x num_point x 6 np array of XYZRGB, RGB is in [0,1]\n    block_labels: K x num_point x 1 np array of uint8 labels\n    \n  TODO: for this version, blocking is in fixed, non-overlapping pattern.\n  \"\"\"\n  assert(stride<=block_size)\n\n  limit = np.amax(data, 0)[0:3]\n   \n  # Get the corner location for our sampling blocks    \n  xbeg_list = []\n  ybeg_list = []\n  if not random_sample:\n    num_block_x = int(np.ceil((limit[0] - block_size) / stride)) + 1\n    num_block_y = int(np.ceil((limit[1] - block_size) / stride)) + 1\n    for i in range(num_block_x):\n      for j in range(num_block_y):\n        xbeg_list.append(i*stride)\n        ybeg_list.append(j*stride)\n  else:\n    num_block_x = int(np.ceil(limit[0] / block_size))\n    num_block_y = int(np.ceil(limit[1] / block_size))\n    if sample_num is None:\n      sample_num = num_block_x * num_block_y * sample_aug\n    for _ in range(sample_num):\n      xbeg = np.random.uniform(-block_size, limit[0]) \n      ybeg = np.random.uniform(-block_size, limit[1]) \n      xbeg_list.append(xbeg)\n      ybeg_list.append(ybeg)\n\n  # Collect blocks\n  
block_data_list = []\n  block_label_list = []\n  idx = 0\n  for idx in range(len(xbeg_list)): \n     xbeg = xbeg_list[idx]\n     ybeg = ybeg_list[idx]\n     xcond = (data[:,0]<=xbeg+block_size) & (data[:,0]>=xbeg)\n     ycond = (data[:,1]<=ybeg+block_size) & (data[:,1]>=ybeg)\n     cond = xcond & ycond\n     if np.sum(cond) < 100: # discard block if there are less than 100 pts.\n       continue\n     \n     block_data = data[cond, :]\n     block_label = label[cond]\n     \n     # randomly subsample data\n     block_data_sampled, block_label_sampled = \\\n       sample_data_label(block_data, block_label, num_point)\n     block_data_list.append(np.expand_dims(block_data_sampled, 0))\n     block_label_list.append(np.expand_dims(block_label_sampled, 0))\n      \n  return np.concatenate(block_data_list, 0), \\\n       np.concatenate(block_label_list, 0)\n\n\ndef room2blocks_plus(data_label, num_point, block_size, stride,\n           random_sample, sample_num, sample_aug):\n  \"\"\" room2block with input filename and RGB preprocessing.\n  \"\"\"\n  data = data_label[:,0:6]\n  data[:,3:6] /= 255.0\n  label = data_label[:,-1].astype(np.uint8)\n  \n  return room2blocks(data, label, num_point, block_size, stride,\n             random_sample, sample_num, sample_aug)\n   \ndef room2blocks_wrapper(data_label_filename, num_point, block_size=1.0, stride=1.0,\n            random_sample=False, sample_num=None, sample_aug=1):\n  if data_label_filename[-3:] == 'txt':\n    data_label = np.loadtxt(data_label_filename)\n  elif data_label_filename[-3:] == 'npy':\n    data_label = np.load(data_label_filename)\n  else:\n    print('Unknown file type! 
exiting.')\n    exit()\n  return room2blocks_plus(data_label, num_point, block_size, stride,\n              random_sample, sample_num, sample_aug)\n\ndef room2blocks_plus_normalized(data_label, num_point, block_size, stride,\n                random_sample, sample_num, sample_aug):\n  \"\"\" room2block, with input filename and RGB preprocessing.\n    for each block centralize XYZ, add normalized XYZ as 678 channels\n  \"\"\"\n  data = data_label[:,0:6]\n  data[:,3:6] /= 255.0\n  label = data_label[:,-1].astype(np.uint8)\n  max_room_x = max(data[:,0])\n  max_room_y = max(data[:,1])\n  max_room_z = max(data[:,2])\n  \n  data_batch, label_batch = room2blocks(data, label, num_point, block_size, stride,\n                      random_sample, sample_num, sample_aug)\n  new_data_batch = np.zeros((data_batch.shape[0], num_point, 9))\n  for b in range(data_batch.shape[0]):\n    new_data_batch[b, :, 6] = data_batch[b, :, 0]/max_room_x\n    new_data_batch[b, :, 7] = data_batch[b, :, 1]/max_room_y\n    new_data_batch[b, :, 8] = data_batch[b, :, 2]/max_room_z\n    minx = min(data_batch[b, :, 0])\n    miny = min(data_batch[b, :, 1])\n    data_batch[b, :, 0] -= (minx+block_size/2)\n    data_batch[b, :, 1] -= (miny+block_size/2)\n  new_data_batch[:, :, 0:6] = data_batch\n  return new_data_batch, label_batch\n\n\ndef room2blocks_wrapper_normalized(data_label_filename, num_point, block_size=1.0, stride=1.0,\n                   random_sample=False, sample_num=None, sample_aug=1):\n  if data_label_filename[-3:] == 'txt':\n    data_label = np.loadtxt(data_label_filename)\n  elif data_label_filename[-3:] == 'npy':\n    data_label = np.load(data_label_filename)\n  else:\n    print('Unknown file type! 
exiting.')\n    exit()\n  return room2blocks_plus_normalized(data_label, num_point, block_size, stride,\n                     random_sample, sample_num, sample_aug)\n\ndef room2samples(data, label, sample_num_point):\n  \"\"\" Prepare whole room samples.\n\n  Args:\n    data: N x 6 numpy array, 012 are XYZ in meters, 345 are RGB in [0,1]\n      assumes the data is shifted (min point is origin) and\n      aligned (aligned with XYZ axis)\n    label: N size uint8 numpy array from 0-12\n    sample_num_point: int, how many points to sample in each sample\n  Returns:\n    sample_datas: K x sample_num_point x 9\n           numpy array of XYZRGBX'Y'Z', RGB is in [0,1]\n    sample_labels: K x sample_num_point x 1 np array of uint8 labels\n  \"\"\"\n  N = data.shape[0]\n  order = np.arange(N)\n  np.random.shuffle(order) \n  data = data[order, :]\n  label = label[order]\n\n  batch_num = int(np.ceil(N / float(sample_num_point)))\n  sample_datas = np.zeros((batch_num, sample_num_point, 6))\n  sample_labels = np.zeros((batch_num, sample_num_point, 1))\n\n  for i in range(batch_num):\n    beg_idx = i*sample_num_point\n    end_idx = min((i+1)*sample_num_point, N)\n    num = end_idx - beg_idx\n    sample_datas[i,0:num,:] = data[beg_idx:end_idx, :]\n    sample_labels[i,0:num,0] = label[beg_idx:end_idx]\n    if num < sample_num_point:\n      makeup_indices = np.random.choice(N, sample_num_point - num)\n      sample_datas[i,num:,:] = data[makeup_indices, :]\n      sample_labels[i,num:,0] = label[makeup_indices]\n  return sample_datas, sample_labels\n\ndef room2samples_plus_normalized(data_label, num_point):\n  \"\"\" room2sample, with input filename and RGB preprocessing.\n    for each block centralize XYZ, add normalized XYZ as 678 channels\n  \"\"\"\n  data = data_label[:,0:6]\n  data[:,3:6] /= 255.0\n  label = data_label[:,-1].astype(np.uint8)\n  max_room_x = max(data[:,0])\n  max_room_y = max(data[:,1])\n  max_room_z = max(data[:,2])\n  #print(max_room_x, max_room_y, max_room_z)\n 
 \n  data_batch, label_batch = room2samples(data, label, num_point)\n  new_data_batch = np.zeros((data_batch.shape[0], num_point, 9))\n  for b in range(data_batch.shape[0]):\n    new_data_batch[b, :, 6] = data_batch[b, :, 0]/max_room_x\n    new_data_batch[b, :, 7] = data_batch[b, :, 1]/max_room_y\n    new_data_batch[b, :, 8] = data_batch[b, :, 2]/max_room_z\n    #minx = min(data_batch[b, :, 0])\n    #miny = min(data_batch[b, :, 1])\n    #data_batch[b, :, 0] -= (minx+block_size/2)\n    #data_batch[b, :, 1] -= (miny+block_size/2)\n  new_data_batch[:, :, 0:6] = data_batch\n  return new_data_batch, label_batch\n\n\ndef room2samples_wrapper_normalized(data_label_filename, num_point):\n  if data_label_filename[-3:] == 'txt':\n    data_label = np.loadtxt(data_label_filename)\n  elif data_label_filename[-3:] == 'npy':\n    data_label = np.load(data_label_filename)\n  else:\n    print('Unknown file type! exiting.')\n    exit()\n  return room2samples_plus_normalized(data_label, num_point)\n\n\n# -----------------------------------------------------------------------------\n# EXTRACT INSTANCE BBOX FROM ORIGINAL DATA (for detection evaluation)\n# -----------------------------------------------------------------------------\n\ndef collect_bounding_box(anno_path, out_filename):\n  \"\"\" Compute bounding boxes from each instance in original dataset files on\n    one room. **We assume the bbox is aligned with XYZ coordinate.**\n  \n  Args:\n    anno_path: path to annotations. e.g. 
Area_1/office_2/Annotations/\n    out_filename: path to save instance bounding boxes for that room.\n      each line is x1 y1 z1 x2 y2 z2 label,\n      where (x1,y1,z1) is the point on the diagonal closer to origin\n  Returns:\n    None\n  Note:\n    room points are shifted, the most negative point is now at origin.\n  \"\"\"\n  bbox_label_list = []\n\n  for f in glob.glob(os.path.join(anno_path, '*.txt')):\n    cls = os.path.basename(f).split('_')[0]\n    if cls not in g_classes: # note: in some room there is 'staris' class..\n      cls = 'clutter'\n    points = np.loadtxt(f)\n    label = g_class2label[cls]\n    # Compute tightest axis aligned bounding box\n    xyz_min = np.amin(points[:, 0:3], axis=0)\n    xyz_max = np.amax(points[:, 0:3], axis=0)\n    ins_bbox_label = np.expand_dims(\n      np.concatenate([xyz_min, xyz_max, np.array([label])], 0), 0)\n    bbox_label_list.append(ins_bbox_label)\n\n  bbox_label = np.concatenate(bbox_label_list, 0)\n  room_xyz_min = np.amin(bbox_label[:, 0:3], axis=0)\n  bbox_label[:, 0:3] -= room_xyz_min \n  bbox_label[:, 3:6] -= room_xyz_min \n\n  fout = open(out_filename, 'w')\n  for i in range(bbox_label.shape[0]):\n    fout.write('%f %f %f %f %f %f %d\\n' % \\\n            (bbox_label[i,0], bbox_label[i,1], bbox_label[i,2],\n             bbox_label[i,3], bbox_label[i,4], bbox_label[i,5],\n             bbox_label[i,6]))\n  fout.close()\n\ndef bbox_label_to_obj(input_filename, out_filename_prefix, easy_view=False):\n  \"\"\" Visualization of bounding boxes.\n  \n  Args:\n    input_filename: each line is x1 y1 z1 x2 y2 z2 label\n    out_filename_prefix: OBJ filename prefix,\n      visualize object by g_label2color\n    easy_view: if True, only visualize furniture and floor\n  Returns:\n    output a list of OBJ file and MTL files with the same prefix\n  \"\"\"\n  bbox_label = np.loadtxt(input_filename)\n  bbox = bbox_label[:, 0:6]\n  label = bbox_label[:, -1].astype(int)\n  v_cnt = 0 # count vertex\n  ins_cnt = 0 # count 
instance\n  for i in range(bbox.shape[0]):\n    if easy_view and (label[i] not in g_easy_view_labels):\n      continue\n    obj_filename = out_filename_prefix+'_'+g_classes[label[i]]+'_'+str(ins_cnt)+'.obj'\n    mtl_filename = out_filename_prefix+'_'+g_classes[label[i]]+'_'+str(ins_cnt)+'.mtl'\n    fout_obj = open(obj_filename, 'w')\n    fout_mtl = open(mtl_filename, 'w')\n    fout_obj.write('mtllib %s\\n' % (os.path.basename(mtl_filename)))\n\n    length = bbox[i, 3:6] - bbox[i, 0:3]\n    a = length[0]\n    b = length[1]\n    c = length[2]\n    x = bbox[i, 0]\n    y = bbox[i, 1]\n    z = bbox[i, 2]\n    color = np.array(g_label2color[label[i]], dtype=float) / 255.0\n\n    material = 'material%d' % (ins_cnt)\n    fout_obj.write('usemtl %s\\n' % (material))\n    fout_obj.write('v %f %f %f\\n' % (x,y,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x,y+b,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y+b,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x,y,z))\n    fout_obj.write('v %f %f %f\\n' % (x,y+b,z))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y+b,z))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y,z))\n    fout_obj.write('g default\\n')\n    v_cnt = 0 # for individual box\n    fout_obj.write('f %d %d %d %d\\n' % (4+v_cnt, 3+v_cnt, 2+v_cnt, 1+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (1+v_cnt, 2+v_cnt, 6+v_cnt, 5+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (7+v_cnt, 6+v_cnt, 2+v_cnt, 3+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (4+v_cnt, 8+v_cnt, 7+v_cnt, 3+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (5+v_cnt, 8+v_cnt, 4+v_cnt, 1+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (5+v_cnt, 6+v_cnt, 7+v_cnt, 8+v_cnt))\n    fout_obj.write('\\n')\n\n    fout_mtl.write('newmtl %s\\n' % (material))\n    fout_mtl.write('Kd %f %f %f\\n' % (color[0], color[1], color[2]))\n    fout_mtl.write('\\n')\n    fout_obj.close()\n    fout_mtl.close() \n\n    v_cnt += 8\n    ins_cnt += 1\n\ndef 
bbox_label_to_obj_room(input_filename, out_filename_prefix, easy_view=False, permute=None, center=False, exclude_table=False):\n  \"\"\" Visualization of bounding boxes.\n  \n  Args:\n    input_filename: each line is x1 y1 z1 x2 y2 z2 label\n    out_filename_prefix: OBJ filename prefix,\n      visualize object by g_label2color\n    easy_view: if True, only visualize furniture and floor\n    permute: if not None, permute XYZ for rendering, e.g. [0 2 1]\n    center: if True, move obj to have zero origin\n  Returns:\n    output a list of OBJ file and MTL files with the same prefix\n  \"\"\"\n  bbox_label = np.loadtxt(input_filename)\n  bbox = bbox_label[:, 0:6]\n  if permute is not None:\n    assert(len(permute)==3)\n    permute = np.array(permute)\n    bbox[:,0:3] = bbox[:,permute]\n    bbox[:,3:6] = bbox[:,permute+3]\n  if center:\n    xyz_max = np.amax(bbox[:,3:6], 0)\n    bbox[:,0:3] -= (xyz_max/2.0)\n    bbox[:,3:6] -= (xyz_max/2.0)\n    bbox /= np.max(xyz_max/2.0)\n  label = bbox_label[:, -1].astype(int)\n  obj_filename = out_filename_prefix+'.obj' \n  mtl_filename = out_filename_prefix+'.mtl'\n\n  fout_obj = open(obj_filename, 'w')\n  fout_mtl = open(mtl_filename, 'w')\n  fout_obj.write('mtllib %s\\n' % (os.path.basename(mtl_filename)))\n  v_cnt = 0 # count vertex\n  ins_cnt = 0 # count instance\n  for i in range(bbox.shape[0]):\n    if easy_view and (label[i] not in g_easy_view_labels):\n      continue\n    if exclude_table and label[i] == g_classes.index('table'):\n      continue\n\n    length = bbox[i, 3:6] - bbox[i, 0:3]\n    a = length[0]\n    b = length[1]\n    c = length[2]\n    x = bbox[i, 0]\n    y = bbox[i, 1]\n    z = bbox[i, 2]\n    color = np.array(g_label2color[label[i]], dtype=float) / 255.0\n\n    material = 'material%d' % (ins_cnt)\n    fout_obj.write('usemtl %s\\n' % (material))\n    fout_obj.write('v %f %f %f\\n' % (x,y,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x,y+b,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y+b,z+c))\n    
fout_obj.write('v %f %f %f\\n' % (x+a,y,z+c))\n    fout_obj.write('v %f %f %f\\n' % (x,y,z))\n    fout_obj.write('v %f %f %f\\n' % (x,y+b,z))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y+b,z))\n    fout_obj.write('v %f %f %f\\n' % (x+a,y,z))\n    fout_obj.write('g default\\n')\n    fout_obj.write('f %d %d %d %d\\n' % (4+v_cnt, 3+v_cnt, 2+v_cnt, 1+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (1+v_cnt, 2+v_cnt, 6+v_cnt, 5+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (7+v_cnt, 6+v_cnt, 2+v_cnt, 3+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (4+v_cnt, 8+v_cnt, 7+v_cnt, 3+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (5+v_cnt, 8+v_cnt, 4+v_cnt, 1+v_cnt))\n    fout_obj.write('f %d %d %d %d\\n' % (5+v_cnt, 6+v_cnt, 7+v_cnt, 8+v_cnt))\n    fout_obj.write('\\n')\n\n    fout_mtl.write('newmtl %s\\n' % (material))\n    fout_mtl.write('Kd %f %f %f\\n' % (color[0], color[1], color[2]))\n    fout_mtl.write('\\n')\n\n    v_cnt += 8\n    ins_cnt += 1\n\n  fout_obj.close()\n  fout_mtl.close() \n\n\ndef collect_point_bounding_box(anno_path, out_filename, file_format):\n  \"\"\" Compute bounding boxes from each instance in original dataset files on\n    one room. **We assume the bbox is aligned with XYZ coordinate.**\n    Save both the point XYZRGB and the bounding box for the point's\n    parent element.\n \n  Args:\n    anno_path: path to annotations. e.g. Area_1/office_2/Annotations/\n    out_filename: path to save instance bounding boxes for each point,\n      plus the point's XYZRGBL\n      each line is XYZRGBL offsetX offsetY offsetZ a b c,\n      where cx = X+offsetX, cy=X+offsetY, cz=Z+offsetZ\n      where (cx,cy,cz) is center of the box, a,b,c are distances from center\n      to the surfaces of the box, i.e. 
x1 = cx-a, x2 = cx+a, y1=cy-b etc.\n    file_format: output file format, txt or numpy\n  Returns:\n    None\n\n  Note:\n    room points are shifted, the most negative point is now at origin.\n  \"\"\"\n  point_bbox_list = []\n\n  for f in glob.glob(os.path.join(anno_path, '*.txt')):\n    cls = os.path.basename(f).split('_')[0]\n    if cls not in g_classes: # note: in some room there is 'staris' class..\n      cls = 'clutter'\n    points = np.loadtxt(f) # Nx6\n    label = g_class2label[cls] # N,\n    # Compute tightest axis aligned bounding box\n    xyz_min = np.amin(points[:, 0:3], axis=0) # 3,\n    xyz_max = np.amax(points[:, 0:3], axis=0) # 3,\n    xyz_center = (xyz_min + xyz_max) / 2\n    dimension = (xyz_max - xyz_min) / 2\n\n    xyz_offsets = xyz_center - points[:,0:3] # Nx3\n    dimensions = np.ones((points.shape[0],3)) * dimension # Nx3\n    labels = np.ones((points.shape[0],1)) * label # N\n    point_bbox_list.append(np.concatenate([points, labels,\n                       xyz_offsets, dimensions], 1)) # Nx13\n\n  point_bbox = np.concatenate(point_bbox_list, 0) # KxNx13\n  room_xyz_min = np.amin(point_bbox[:, 0:3], axis=0)\n  point_bbox[:, 0:3] -= room_xyz_min \n\n  if file_format == 'txt':\n    fout = open(out_filename, 'w')\n    for i in range(point_bbox.shape[0]):\n      fout.write('%f %f %f %d %d %d %d %f %f %f %f %f %f\\n' % \\\n              (point_bbox[i,0], point_bbox[i,1], point_bbox[i,2],\n               point_bbox[i,3], point_bbox[i,4], point_bbox[i,5],\n               point_bbox[i,6],\n               point_bbox[i,7], point_bbox[i,8], point_bbox[i,9],\n               point_bbox[i,10], point_bbox[i,11], point_bbox[i,12]))\n    \n    fout.close()\n  elif file_format == 'numpy':\n    np.save(out_filename, point_bbox)\n  else:\n    print('ERROR!! Unknown file format: %s, please use txt or numpy.' % \\\n      (file_format))\n    exit()\n\n\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/all_data_label.txt",
    "content": "Area_1_conferenceRoom_1.npy\nArea_1_conferenceRoom_2.npy\nArea_1_copyRoom_1.npy\nArea_1_hallway_1.npy\nArea_1_hallway_2.npy\nArea_1_hallway_3.npy\nArea_1_hallway_4.npy\nArea_1_hallway_5.npy\nArea_1_hallway_6.npy\nArea_1_hallway_7.npy\nArea_1_hallway_8.npy\nArea_1_office_10.npy\nArea_1_office_11.npy\nArea_1_office_12.npy\nArea_1_office_13.npy\nArea_1_office_14.npy\nArea_1_office_15.npy\nArea_1_office_16.npy\nArea_1_office_17.npy\nArea_1_office_18.npy\nArea_1_office_19.npy\nArea_1_office_1.npy\nArea_1_office_20.npy\nArea_1_office_21.npy\nArea_1_office_22.npy\nArea_1_office_23.npy\nArea_1_office_24.npy\nArea_1_office_25.npy\nArea_1_office_26.npy\nArea_1_office_27.npy\nArea_1_office_28.npy\nArea_1_office_29.npy\nArea_1_office_2.npy\nArea_1_office_30.npy\nArea_1_office_31.npy\nArea_1_office_3.npy\nArea_1_office_4.npy\nArea_1_office_5.npy\nArea_1_office_6.npy\nArea_1_office_7.npy\nArea_1_office_8.npy\nArea_1_office_9.npy\nArea_1_pantry_1.npy\nArea_1_WC_1.npy\nArea_2_auditorium_1.npy\nArea_2_auditorium_2.npy\nArea_2_conferenceRoom_1.npy\nArea_2_hallway_10.npy\nArea_2_hallway_11.npy\nArea_2_hallway_12.npy\nArea_2_hallway_1.npy\nArea_2_hallway_2.npy\nArea_2_hallway_3.npy\nArea_2_hallway_4.npy\nArea_2_hallway_5.npy\nArea_2_hallway_6.npy\nArea_2_hallway_7.npy\nArea_2_hallway_8.npy\nArea_2_hallway_9.npy\nArea_2_office_10.npy\nArea_2_office_11.npy\nArea_2_office_12.npy\nArea_2_office_13.npy\nArea_2_office_14.npy\nArea_2_office_1.npy\nArea_2_office_2.npy\nArea_2_office_3.npy\nArea_2_office_4.npy\nArea_2_office_5.npy\nArea_2_office_6.npy\nArea_2_office_7.npy\nArea_2_office_8.npy\nArea_2_office_9.npy\nArea_2_storage_1.npy\nArea_2_storage_2.npy\nArea_2_storage_3.npy\nArea_2_storage_4.npy\nArea_2_storage_5.npy\nArea_2_storage_6.npy\nArea_2_storage_7.npy\nArea_2_storage_8.npy\nArea_2_storage_9.npy\nArea_2_WC_1.npy\nArea_2_WC_2.npy\nArea_3_conferenceRoom_1.npy\nArea_3_hallway_1.npy\nArea_3_hallway_2.npy\nArea_3_hallway_3.npy\nArea_3_hallway_4.npy\nArea_3_hallway_5.npy\n
Area_3_hallway_6.npy\nArea_3_lounge_1.npy\nArea_3_lounge_2.npy\nArea_3_office_10.npy\nArea_3_office_1.npy\nArea_3_office_2.npy\nArea_3_office_3.npy\nArea_3_office_4.npy\nArea_3_office_5.npy\nArea_3_office_6.npy\nArea_3_office_7.npy\nArea_3_office_8.npy\nArea_3_office_9.npy\nArea_3_storage_1.npy\nArea_3_storage_2.npy\nArea_3_WC_1.npy\nArea_3_WC_2.npy\nArea_4_conferenceRoom_1.npy\nArea_4_conferenceRoom_2.npy\nArea_4_conferenceRoom_3.npy\nArea_4_hallway_10.npy\nArea_4_hallway_11.npy\nArea_4_hallway_12.npy\nArea_4_hallway_13.npy\nArea_4_hallway_14.npy\nArea_4_hallway_1.npy\nArea_4_hallway_2.npy\nArea_4_hallway_3.npy\nArea_4_hallway_4.npy\nArea_4_hallway_5.npy\nArea_4_hallway_6.npy\nArea_4_hallway_7.npy\nArea_4_hallway_8.npy\nArea_4_hallway_9.npy\nArea_4_lobby_1.npy\nArea_4_lobby_2.npy\nArea_4_office_10.npy\nArea_4_office_11.npy\nArea_4_office_12.npy\nArea_4_office_13.npy\nArea_4_office_14.npy\nArea_4_office_15.npy\nArea_4_office_16.npy\nArea_4_office_17.npy\nArea_4_office_18.npy\nArea_4_office_19.npy\nArea_4_office_1.npy\nArea_4_office_20.npy\nArea_4_office_21.npy\nArea_4_office_22.npy\nArea_4_office_2.npy\nArea_4_office_3.npy\nArea_4_office_4.npy\nArea_4_office_5.npy\nArea_4_office_6.npy\nArea_4_office_7.npy\nArea_4_office_8.npy\nArea_4_office_9.npy\nArea_4_storage_1.npy\nArea_4_storage_2.npy\nArea_4_storage_3.npy\nArea_4_storage_4.npy\nArea_4_WC_1.npy\nArea_4_WC_2.npy\nArea_4_WC_3.npy\nArea_4_WC_4.npy\nArea_5_conferenceRoom_1.npy\nArea_5_conferenceRoom_2.npy\nArea_5_conferenceRoom_3.npy\nArea_5_hallway_10.npy\nArea_5_hallway_11.npy\nArea_5_hallway_12.npy\nArea_5_hallway_13.npy\nArea_5_hallway_14.npy\nArea_5_hallway_15.npy\nArea_5_hallway_1.npy\nArea_5_hallway_2.npy\nArea_5_hallway_3.npy\nArea_5_hallway_4.npy\nArea_5_hallway_5.npy\nArea_5_hallway_6.npy\nArea_5_hallway_7.npy\nArea_5_hallway_8.npy\nArea_5_hallway_9.npy\nArea_5_lobby_1.npy\nArea_5_office_10.npy\nArea_5_office_11.npy\nArea_5_office_12.npy\nArea_5_office_13.npy\nArea_5_office_14.npy\nArea_5_office_15.npy\nA
rea_5_office_16.npy\nArea_5_office_17.npy\nArea_5_office_18.npy\nArea_5_office_19.npy\nArea_5_office_1.npy\nArea_5_office_20.npy\nArea_5_office_21.npy\nArea_5_office_22.npy\nArea_5_office_23.npy\nArea_5_office_24.npy\nArea_5_office_25.npy\nArea_5_office_26.npy\nArea_5_office_27.npy\nArea_5_office_28.npy\nArea_5_office_29.npy\nArea_5_office_2.npy\nArea_5_office_30.npy\nArea_5_office_31.npy\nArea_5_office_32.npy\nArea_5_office_33.npy\nArea_5_office_34.npy\nArea_5_office_35.npy\nArea_5_office_36.npy\nArea_5_office_37.npy\nArea_5_office_38.npy\nArea_5_office_39.npy\nArea_5_office_3.npy\nArea_5_office_40.npy\nArea_5_office_41.npy\nArea_5_office_42.npy\nArea_5_office_4.npy\nArea_5_office_5.npy\nArea_5_office_6.npy\nArea_5_office_7.npy\nArea_5_office_8.npy\nArea_5_office_9.npy\nArea_5_pantry_1.npy\nArea_5_storage_1.npy\nArea_5_storage_2.npy\nArea_5_storage_3.npy\nArea_5_storage_4.npy\nArea_5_WC_1.npy\nArea_5_WC_2.npy\nArea_6_conferenceRoom_1.npy\nArea_6_copyRoom_1.npy\nArea_6_hallway_1.npy\nArea_6_hallway_2.npy\nArea_6_hallway_3.npy\nArea_6_hallway_4.npy\nArea_6_hallway_5.npy\nArea_6_hallway_6.npy\nArea_6_lounge_1.npy\nArea_6_office_10.npy\nArea_6_office_11.npy\nArea_6_office_12.npy\nArea_6_office_13.npy\nArea_6_office_14.npy\nArea_6_office_15.npy\nArea_6_office_16.npy\nArea_6_office_17.npy\nArea_6_office_18.npy\nArea_6_office_19.npy\nArea_6_office_1.npy\nArea_6_office_20.npy\nArea_6_office_21.npy\nArea_6_office_22.npy\nArea_6_office_23.npy\nArea_6_office_24.npy\nArea_6_office_25.npy\nArea_6_office_26.npy\nArea_6_office_27.npy\nArea_6_office_28.npy\nArea_6_office_29.npy\nArea_6_office_2.npy\nArea_6_office_30.npy\nArea_6_office_31.npy\nArea_6_office_32.npy\nArea_6_office_33.npy\nArea_6_office_34.npy\nArea_6_office_35.npy\nArea_6_office_36.npy\nArea_6_office_37.npy\nArea_6_office_3.npy\nArea_6_office_4.npy\nArea_6_office_5.npy\nArea_6_office_6.npy\nArea_6_office_7.npy\nArea_6_office_8.npy\nArea_6_office_9.npy\nArea_6_openspace_1.npy\nArea_6_pantry_1.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/anno_paths.txt",
    "content": "Area_1/conferenceRoom_1/Annotations\nArea_1/conferenceRoom_2/Annotations\nArea_1/copyRoom_1/Annotations\nArea_1/hallway_1/Annotations\nArea_1/hallway_2/Annotations\nArea_1/hallway_3/Annotations\nArea_1/hallway_4/Annotations\nArea_1/hallway_5/Annotations\nArea_1/hallway_6/Annotations\nArea_1/hallway_7/Annotations\nArea_1/hallway_8/Annotations\nArea_1/office_10/Annotations\nArea_1/office_11/Annotations\nArea_1/office_12/Annotations\nArea_1/office_13/Annotations\nArea_1/office_14/Annotations\nArea_1/office_15/Annotations\nArea_1/office_16/Annotations\nArea_1/office_17/Annotations\nArea_1/office_18/Annotations\nArea_1/office_19/Annotations\nArea_1/office_1/Annotations\nArea_1/office_20/Annotations\nArea_1/office_21/Annotations\nArea_1/office_22/Annotations\nArea_1/office_23/Annotations\nArea_1/office_24/Annotations\nArea_1/office_25/Annotations\nArea_1/office_26/Annotations\nArea_1/office_27/Annotations\nArea_1/office_28/Annotations\nArea_1/office_29/Annotations\nArea_1/office_2/Annotations\nArea_1/office_30/Annotations\nArea_1/office_31/Annotations\nArea_1/office_3/Annotations\nArea_1/office_4/Annotations\nArea_1/office_5/Annotations\nArea_1/office_6/Annotations\nArea_1/office_7/Annotations\nArea_1/office_8/Annotations\nArea_1/office_9/Annotations\nArea_1/pantry_1/Annotations\nArea_1/WC_1/Annotations\nArea_2/auditorium_1/Annotations\nArea_2/auditorium_2/Annotations\nArea_2/conferenceRoom_1/Annotations\nArea_2/hallway_10/Annotations\nArea_2/hallway_11/Annotations\nArea_2/hallway_12/Annotations\nArea_2/hallway_1/Annotations\nArea_2/hallway_2/Annotations\nArea_2/hallway_3/Annotations\nArea_2/hallway_4/Annotations\nArea_2/hallway_5/Annotations\nArea_2/hallway_6/Annotations\nArea_2/hallway_7/Annotations\nArea_2/hallway_8/Annotations\nArea_2/hallway_9/Annotations\nArea_2/office_10/Annotations\nArea_2/office_11/Annotations\nArea_2/office_12/Annotations\nArea_2/office_13/Annotations\nArea_2/office_14/Annotations\nArea_2/office_1/Annotations\nArea_2/office_2/Ann
otations\nArea_2/office_3/Annotations\nArea_2/office_4/Annotations\nArea_2/office_5/Annotations\nArea_2/office_6/Annotations\nArea_2/office_7/Annotations\nArea_2/office_8/Annotations\nArea_2/office_9/Annotations\nArea_2/storage_1/Annotations\nArea_2/storage_2/Annotations\nArea_2/storage_3/Annotations\nArea_2/storage_4/Annotations\nArea_2/storage_5/Annotations\nArea_2/storage_6/Annotations\nArea_2/storage_7/Annotations\nArea_2/storage_8/Annotations\nArea_2/storage_9/Annotations\nArea_2/WC_1/Annotations\nArea_2/WC_2/Annotations\nArea_3/conferenceRoom_1/Annotations\nArea_3/hallway_1/Annotations\nArea_3/hallway_2/Annotations\nArea_3/hallway_3/Annotations\nArea_3/hallway_4/Annotations\nArea_3/hallway_5/Annotations\nArea_3/hallway_6/Annotations\nArea_3/lounge_1/Annotations\nArea_3/lounge_2/Annotations\nArea_3/office_10/Annotations\nArea_3/office_1/Annotations\nArea_3/office_2/Annotations\nArea_3/office_3/Annotations\nArea_3/office_4/Annotations\nArea_3/office_5/Annotations\nArea_3/office_6/Annotations\nArea_3/office_7/Annotations\nArea_3/office_8/Annotations\nArea_3/office_9/Annotations\nArea_3/storage_1/Annotations\nArea_3/storage_2/Annotations\nArea_3/WC_1/Annotations\nArea_3/WC_2/Annotations\nArea_4/conferenceRoom_1/Annotations\nArea_4/conferenceRoom_2/Annotations\nArea_4/conferenceRoom_3/Annotations\nArea_4/hallway_10/Annotations\nArea_4/hallway_11/Annotations\nArea_4/hallway_12/Annotations\nArea_4/hallway_13/Annotations\nArea_4/hallway_14/Annotations\nArea_4/hallway_1/Annotations\nArea_4/hallway_2/Annotations\nArea_4/hallway_3/Annotations\nArea_4/hallway_4/Annotations\nArea_4/hallway_5/Annotations\nArea_4/hallway_6/Annotations\nArea_4/hallway_7/Annotations\nArea_4/hallway_8/Annotations\nArea_4/hallway_9/Annotations\nArea_4/lobby_1/Annotations\nArea_4/lobby_2/Annotations\nArea_4/office_10/Annotations\nArea_4/office_11/Annotations\nArea_4/office_12/Annotations\nArea_4/office_13/Annotations\nArea_4/office_14/Annotations\nArea_4/office_15/Annotations\nArea_4/office_16/An
notations\nArea_4/office_17/Annotations\nArea_4/office_18/Annotations\nArea_4/office_19/Annotations\nArea_4/office_1/Annotations\nArea_4/office_20/Annotations\nArea_4/office_21/Annotations\nArea_4/office_22/Annotations\nArea_4/office_2/Annotations\nArea_4/office_3/Annotations\nArea_4/office_4/Annotations\nArea_4/office_5/Annotations\nArea_4/office_6/Annotations\nArea_4/office_7/Annotations\nArea_4/office_8/Annotations\nArea_4/office_9/Annotations\nArea_4/storage_1/Annotations\nArea_4/storage_2/Annotations\nArea_4/storage_3/Annotations\nArea_4/storage_4/Annotations\nArea_4/WC_1/Annotations\nArea_4/WC_2/Annotations\nArea_4/WC_3/Annotations\nArea_4/WC_4/Annotations\nArea_5/conferenceRoom_1/Annotations\nArea_5/conferenceRoom_2/Annotations\nArea_5/conferenceRoom_3/Annotations\nArea_5/hallway_10/Annotations\nArea_5/hallway_11/Annotations\nArea_5/hallway_12/Annotations\nArea_5/hallway_13/Annotations\nArea_5/hallway_14/Annotations\nArea_5/hallway_15/Annotations\nArea_5/hallway_1/Annotations\nArea_5/hallway_2/Annotations\nArea_5/hallway_3/Annotations\nArea_5/hallway_4/Annotations\nArea_5/hallway_5/Annotations\nArea_5/hallway_6/Annotations\nArea_5/hallway_7/Annotations\nArea_5/hallway_8/Annotations\nArea_5/hallway_9/Annotations\nArea_5/lobby_1/Annotations\nArea_5/office_10/Annotations\nArea_5/office_11/Annotations\nArea_5/office_12/Annotations\nArea_5/office_13/Annotations\nArea_5/office_14/Annotations\nArea_5/office_15/Annotations\nArea_5/office_16/Annotations\nArea_5/office_17/Annotations\nArea_5/office_18/Annotations\nArea_5/office_19/Annotations\nArea_5/office_1/Annotations\nArea_5/office_20/Annotations\nArea_5/office_21/Annotations\nArea_5/office_22/Annotations\nArea_5/office_23/Annotations\nArea_5/office_24/Annotations\nArea_5/office_25/Annotations\nArea_5/office_26/Annotations\nArea_5/office_27/Annotations\nArea_5/office_28/Annotations\nArea_5/office_29/Annotations\nArea_5/office_2/Annotations\nArea_5/office_30/Annotations\nArea_5/office_31/Annotations\nArea_5/office_3
2/Annotations\nArea_5/office_33/Annotations\nArea_5/office_34/Annotations\nArea_5/office_35/Annotations\nArea_5/office_36/Annotations\nArea_5/office_37/Annotations\nArea_5/office_38/Annotations\nArea_5/office_39/Annotations\nArea_5/office_3/Annotations\nArea_5/office_40/Annotations\nArea_5/office_41/Annotations\nArea_5/office_42/Annotations\nArea_5/office_4/Annotations\nArea_5/office_5/Annotations\nArea_5/office_6/Annotations\nArea_5/office_7/Annotations\nArea_5/office_8/Annotations\nArea_5/office_9/Annotations\nArea_5/pantry_1/Annotations\nArea_5/storage_1/Annotations\nArea_5/storage_2/Annotations\nArea_5/storage_3/Annotations\nArea_5/storage_4/Annotations\nArea_5/WC_1/Annotations\nArea_5/WC_2/Annotations\nArea_6/conferenceRoom_1/Annotations\nArea_6/copyRoom_1/Annotations\nArea_6/hallway_1/Annotations\nArea_6/hallway_2/Annotations\nArea_6/hallway_3/Annotations\nArea_6/hallway_4/Annotations\nArea_6/hallway_5/Annotations\nArea_6/hallway_6/Annotations\nArea_6/lounge_1/Annotations\nArea_6/office_10/Annotations\nArea_6/office_11/Annotations\nArea_6/office_12/Annotations\nArea_6/office_13/Annotations\nArea_6/office_14/Annotations\nArea_6/office_15/Annotations\nArea_6/office_16/Annotations\nArea_6/office_17/Annotations\nArea_6/office_18/Annotations\nArea_6/office_19/Annotations\nArea_6/office_1/Annotations\nArea_6/office_20/Annotations\nArea_6/office_21/Annotations\nArea_6/office_22/Annotations\nArea_6/office_23/Annotations\nArea_6/office_24/Annotations\nArea_6/office_25/Annotations\nArea_6/office_26/Annotations\nArea_6/office_27/Annotations\nArea_6/office_28/Annotations\nArea_6/office_29/Annotations\nArea_6/office_2/Annotations\nArea_6/office_30/Annotations\nArea_6/office_31/Annotations\nArea_6/office_32/Annotations\nArea_6/office_33/Annotations\nArea_6/office_34/Annotations\nArea_6/office_35/Annotations\nArea_6/office_36/Annotations\nArea_6/office_37/Annotations\nArea_6/office_3/Annotations\nArea_6/office_4/Annotations\nArea_6/office_5/Annotations\nArea_6/office_6/Annot
ations\nArea_6/office_7/Annotations\nArea_6/office_8/Annotations\nArea_6/office_9/Annotations\nArea_6/openspace_1/Annotations\nArea_6/pantry_1/Annotations\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area1_data_label.txt",
    "content": "data/stanford_indoor3d/Area_1_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_1_conferenceRoom_2.npy\ndata/stanford_indoor3d/Area_1_copyRoom_1.npy\ndata/stanford_indoor3d/Area_1_hallway_1.npy\ndata/stanford_indoor3d/Area_1_hallway_2.npy\ndata/stanford_indoor3d/Area_1_hallway_3.npy\ndata/stanford_indoor3d/Area_1_hallway_4.npy\ndata/stanford_indoor3d/Area_1_hallway_5.npy\ndata/stanford_indoor3d/Area_1_hallway_6.npy\ndata/stanford_indoor3d/Area_1_hallway_7.npy\ndata/stanford_indoor3d/Area_1_hallway_8.npy\ndata/stanford_indoor3d/Area_1_office_10.npy\ndata/stanford_indoor3d/Area_1_office_11.npy\ndata/stanford_indoor3d/Area_1_office_12.npy\ndata/stanford_indoor3d/Area_1_office_13.npy\ndata/stanford_indoor3d/Area_1_office_14.npy\ndata/stanford_indoor3d/Area_1_office_15.npy\ndata/stanford_indoor3d/Area_1_office_16.npy\ndata/stanford_indoor3d/Area_1_office_17.npy\ndata/stanford_indoor3d/Area_1_office_18.npy\ndata/stanford_indoor3d/Area_1_office_19.npy\ndata/stanford_indoor3d/Area_1_office_1.npy\ndata/stanford_indoor3d/Area_1_office_20.npy\ndata/stanford_indoor3d/Area_1_office_21.npy\ndata/stanford_indoor3d/Area_1_office_22.npy\ndata/stanford_indoor3d/Area_1_office_23.npy\ndata/stanford_indoor3d/Area_1_office_24.npy\ndata/stanford_indoor3d/Area_1_office_25.npy\ndata/stanford_indoor3d/Area_1_office_26.npy\ndata/stanford_indoor3d/Area_1_office_27.npy\ndata/stanford_indoor3d/Area_1_office_28.npy\ndata/stanford_indoor3d/Area_1_office_29.npy\ndata/stanford_indoor3d/Area_1_office_2.npy\ndata/stanford_indoor3d/Area_1_office_30.npy\ndata/stanford_indoor3d/Area_1_office_31.npy\ndata/stanford_indoor3d/Area_1_office_3.npy\ndata/stanford_indoor3d/Area_1_office_4.npy\ndata/stanford_indoor3d/Area_1_office_5.npy\ndata/stanford_indoor3d/Area_1_office_6.npy\ndata/stanford_indoor3d/Area_1_office_7.npy\ndata/stanford_indoor3d/Area_1_office_8.npy\ndata/stanford_indoor3d/Area_1_office_9.npy\ndata/stanford_indoor3d/Area_1_pantry_1.npy\ndata/stanford_indoor3d/Area_1_WC_1.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area2_data_label.txt",
    "content": "data/stanford_indoor3d/Area_2_auditorium_1.npy\ndata/stanford_indoor3d/Area_2_auditorium_2.npy\ndata/stanford_indoor3d/Area_2_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_2_hallway_10.npy\ndata/stanford_indoor3d/Area_2_hallway_11.npy\ndata/stanford_indoor3d/Area_2_hallway_12.npy\ndata/stanford_indoor3d/Area_2_hallway_1.npy\ndata/stanford_indoor3d/Area_2_hallway_2.npy\ndata/stanford_indoor3d/Area_2_hallway_3.npy\ndata/stanford_indoor3d/Area_2_hallway_4.npy\ndata/stanford_indoor3d/Area_2_hallway_5.npy\ndata/stanford_indoor3d/Area_2_hallway_6.npy\ndata/stanford_indoor3d/Area_2_hallway_7.npy\ndata/stanford_indoor3d/Area_2_hallway_8.npy\ndata/stanford_indoor3d/Area_2_hallway_9.npy\ndata/stanford_indoor3d/Area_2_office_10.npy\ndata/stanford_indoor3d/Area_2_office_11.npy\ndata/stanford_indoor3d/Area_2_office_12.npy\ndata/stanford_indoor3d/Area_2_office_13.npy\ndata/stanford_indoor3d/Area_2_office_14.npy\ndata/stanford_indoor3d/Area_2_office_1.npy\ndata/stanford_indoor3d/Area_2_office_2.npy\ndata/stanford_indoor3d/Area_2_office_3.npy\ndata/stanford_indoor3d/Area_2_office_4.npy\ndata/stanford_indoor3d/Area_2_office_5.npy\ndata/stanford_indoor3d/Area_2_office_6.npy\ndata/stanford_indoor3d/Area_2_office_7.npy\ndata/stanford_indoor3d/Area_2_office_8.npy\ndata/stanford_indoor3d/Area_2_office_9.npy\ndata/stanford_indoor3d/Area_2_storage_1.npy\ndata/stanford_indoor3d/Area_2_storage_2.npy\ndata/stanford_indoor3d/Area_2_storage_3.npy\ndata/stanford_indoor3d/Area_2_storage_4.npy\ndata/stanford_indoor3d/Area_2_storage_5.npy\ndata/stanford_indoor3d/Area_2_storage_6.npy\ndata/stanford_indoor3d/Area_2_storage_7.npy\ndata/stanford_indoor3d/Area_2_storage_8.npy\ndata/stanford_indoor3d/Area_2_storage_9.npy\ndata/stanford_indoor3d/Area_2_WC_1.npy\ndata/stanford_indoor3d/Area_2_WC_2.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area3_data_label.txt",
    "content": "data/stanford_indoor3d/Area_3_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_3_hallway_1.npy\ndata/stanford_indoor3d/Area_3_hallway_2.npy\ndata/stanford_indoor3d/Area_3_hallway_3.npy\ndata/stanford_indoor3d/Area_3_hallway_4.npy\ndata/stanford_indoor3d/Area_3_hallway_5.npy\ndata/stanford_indoor3d/Area_3_hallway_6.npy\ndata/stanford_indoor3d/Area_3_lounge_1.npy\ndata/stanford_indoor3d/Area_3_lounge_2.npy\ndata/stanford_indoor3d/Area_3_office_10.npy\ndata/stanford_indoor3d/Area_3_office_1.npy\ndata/stanford_indoor3d/Area_3_office_2.npy\ndata/stanford_indoor3d/Area_3_office_3.npy\ndata/stanford_indoor3d/Area_3_office_4.npy\ndata/stanford_indoor3d/Area_3_office_5.npy\ndata/stanford_indoor3d/Area_3_office_6.npy\ndata/stanford_indoor3d/Area_3_office_7.npy\ndata/stanford_indoor3d/Area_3_office_8.npy\ndata/stanford_indoor3d/Area_3_office_9.npy\ndata/stanford_indoor3d/Area_3_storage_1.npy\ndata/stanford_indoor3d/Area_3_storage_2.npy\ndata/stanford_indoor3d/Area_3_WC_1.npy\ndata/stanford_indoor3d/Area_3_WC_2.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area4_data_label.txt",
    "content": "data/stanford_indoor3d/Area_4_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_4_conferenceRoom_2.npy\ndata/stanford_indoor3d/Area_4_conferenceRoom_3.npy\ndata/stanford_indoor3d/Area_4_hallway_10.npy\ndata/stanford_indoor3d/Area_4_hallway_11.npy\ndata/stanford_indoor3d/Area_4_hallway_12.npy\ndata/stanford_indoor3d/Area_4_hallway_13.npy\ndata/stanford_indoor3d/Area_4_hallway_14.npy\ndata/stanford_indoor3d/Area_4_hallway_1.npy\ndata/stanford_indoor3d/Area_4_hallway_2.npy\ndata/stanford_indoor3d/Area_4_hallway_3.npy\ndata/stanford_indoor3d/Area_4_hallway_4.npy\ndata/stanford_indoor3d/Area_4_hallway_5.npy\ndata/stanford_indoor3d/Area_4_hallway_6.npy\ndata/stanford_indoor3d/Area_4_hallway_7.npy\ndata/stanford_indoor3d/Area_4_hallway_8.npy\ndata/stanford_indoor3d/Area_4_hallway_9.npy\ndata/stanford_indoor3d/Area_4_lobby_1.npy\ndata/stanford_indoor3d/Area_4_lobby_2.npy\ndata/stanford_indoor3d/Area_4_office_10.npy\ndata/stanford_indoor3d/Area_4_office_11.npy\ndata/stanford_indoor3d/Area_4_office_12.npy\ndata/stanford_indoor3d/Area_4_office_13.npy\ndata/stanford_indoor3d/Area_4_office_14.npy\ndata/stanford_indoor3d/Area_4_office_15.npy\ndata/stanford_indoor3d/Area_4_office_16.npy\ndata/stanford_indoor3d/Area_4_office_17.npy\ndata/stanford_indoor3d/Area_4_office_18.npy\ndata/stanford_indoor3d/Area_4_office_19.npy\ndata/stanford_indoor3d/Area_4_office_1.npy\ndata/stanford_indoor3d/Area_4_office_20.npy\ndata/stanford_indoor3d/Area_4_office_21.npy\ndata/stanford_indoor3d/Area_4_office_22.npy\ndata/stanford_indoor3d/Area_4_office_2.npy\ndata/stanford_indoor3d/Area_4_office_3.npy\ndata/stanford_indoor3d/Area_4_office_4.npy\ndata/stanford_indoor3d/Area_4_office_5.npy\ndata/stanford_indoor3d/Area_4_office_6.npy\ndata/stanford_indoor3d/Area_4_office_7.npy\ndata/stanford_indoor3d/Area_4_office_8.npy\ndata/stanford_indoor3d/Area_4_office_9.npy\ndata/stanford_indoor3d/Area_4_storage_1.npy\ndata/stanford_indoor3d/Area_4_storage_2.npy\ndata/stanford_indoor3d/Area_4_storag
e_3.npy\ndata/stanford_indoor3d/Area_4_storage_4.npy\ndata/stanford_indoor3d/Area_4_WC_1.npy\ndata/stanford_indoor3d/Area_4_WC_2.npy\ndata/stanford_indoor3d/Area_4_WC_3.npy\ndata/stanford_indoor3d/Area_4_WC_4.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area5_data_label.txt",
    "content": "data/stanford_indoor3d/Area_5_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_5_conferenceRoom_2.npy\ndata/stanford_indoor3d/Area_5_conferenceRoom_3.npy\ndata/stanford_indoor3d/Area_5_hallway_10.npy\ndata/stanford_indoor3d/Area_5_hallway_11.npy\ndata/stanford_indoor3d/Area_5_hallway_12.npy\ndata/stanford_indoor3d/Area_5_hallway_13.npy\ndata/stanford_indoor3d/Area_5_hallway_14.npy\ndata/stanford_indoor3d/Area_5_hallway_15.npy\ndata/stanford_indoor3d/Area_5_hallway_1.npy\ndata/stanford_indoor3d/Area_5_hallway_2.npy\ndata/stanford_indoor3d/Area_5_hallway_3.npy\ndata/stanford_indoor3d/Area_5_hallway_4.npy\ndata/stanford_indoor3d/Area_5_hallway_5.npy\ndata/stanford_indoor3d/Area_5_hallway_6.npy\ndata/stanford_indoor3d/Area_5_hallway_7.npy\ndata/stanford_indoor3d/Area_5_hallway_8.npy\ndata/stanford_indoor3d/Area_5_hallway_9.npy\ndata/stanford_indoor3d/Area_5_lobby_1.npy\ndata/stanford_indoor3d/Area_5_office_10.npy\ndata/stanford_indoor3d/Area_5_office_11.npy\ndata/stanford_indoor3d/Area_5_office_12.npy\ndata/stanford_indoor3d/Area_5_office_13.npy\ndata/stanford_indoor3d/Area_5_office_14.npy\ndata/stanford_indoor3d/Area_5_office_15.npy\ndata/stanford_indoor3d/Area_5_office_16.npy\ndata/stanford_indoor3d/Area_5_office_17.npy\ndata/stanford_indoor3d/Area_5_office_18.npy\ndata/stanford_indoor3d/Area_5_office_19.npy\ndata/stanford_indoor3d/Area_5_office_1.npy\ndata/stanford_indoor3d/Area_5_office_20.npy\ndata/stanford_indoor3d/Area_5_office_21.npy\ndata/stanford_indoor3d/Area_5_office_22.npy\ndata/stanford_indoor3d/Area_5_office_23.npy\ndata/stanford_indoor3d/Area_5_office_24.npy\ndata/stanford_indoor3d/Area_5_office_25.npy\ndata/stanford_indoor3d/Area_5_office_26.npy\ndata/stanford_indoor3d/Area_5_office_27.npy\ndata/stanford_indoor3d/Area_5_office_28.npy\ndata/stanford_indoor3d/Area_5_office_29.npy\ndata/stanford_indoor3d/Area_5_office_2.npy\ndata/stanford_indoor3d/Area_5_office_30.npy\ndata/stanford_indoor3d/Area_5_office_31.npy\ndata/stanford_indoor3d/Are
a_5_office_32.npy\ndata/stanford_indoor3d/Area_5_office_33.npy\ndata/stanford_indoor3d/Area_5_office_34.npy\ndata/stanford_indoor3d/Area_5_office_35.npy\ndata/stanford_indoor3d/Area_5_office_36.npy\ndata/stanford_indoor3d/Area_5_office_37.npy\ndata/stanford_indoor3d/Area_5_office_38.npy\ndata/stanford_indoor3d/Area_5_office_39.npy\ndata/stanford_indoor3d/Area_5_office_3.npy\ndata/stanford_indoor3d/Area_5_office_40.npy\ndata/stanford_indoor3d/Area_5_office_41.npy\ndata/stanford_indoor3d/Area_5_office_42.npy\ndata/stanford_indoor3d/Area_5_office_4.npy\ndata/stanford_indoor3d/Area_5_office_5.npy\ndata/stanford_indoor3d/Area_5_office_6.npy\ndata/stanford_indoor3d/Area_5_office_7.npy\ndata/stanford_indoor3d/Area_5_office_8.npy\ndata/stanford_indoor3d/Area_5_office_9.npy\ndata/stanford_indoor3d/Area_5_pantry_1.npy\ndata/stanford_indoor3d/Area_5_storage_1.npy\ndata/stanford_indoor3d/Area_5_storage_2.npy\ndata/stanford_indoor3d/Area_5_storage_3.npy\ndata/stanford_indoor3d/Area_5_storage_4.npy\ndata/stanford_indoor3d/Area_5_WC_1.npy\ndata/stanford_indoor3d/Area_5_WC_2.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/area6_data_label.txt",
    "content": "data/stanford_indoor3d/Area_6_conferenceRoom_1.npy\ndata/stanford_indoor3d/Area_6_copyRoom_1.npy\ndata/stanford_indoor3d/Area_6_hallway_1.npy\ndata/stanford_indoor3d/Area_6_hallway_2.npy\ndata/stanford_indoor3d/Area_6_hallway_3.npy\ndata/stanford_indoor3d/Area_6_hallway_4.npy\ndata/stanford_indoor3d/Area_6_hallway_5.npy\ndata/stanford_indoor3d/Area_6_hallway_6.npy\ndata/stanford_indoor3d/Area_6_lounge_1.npy\ndata/stanford_indoor3d/Area_6_office_10.npy\ndata/stanford_indoor3d/Area_6_office_11.npy\ndata/stanford_indoor3d/Area_6_office_12.npy\ndata/stanford_indoor3d/Area_6_office_13.npy\ndata/stanford_indoor3d/Area_6_office_14.npy\ndata/stanford_indoor3d/Area_6_office_15.npy\ndata/stanford_indoor3d/Area_6_office_16.npy\ndata/stanford_indoor3d/Area_6_office_17.npy\ndata/stanford_indoor3d/Area_6_office_18.npy\ndata/stanford_indoor3d/Area_6_office_19.npy\ndata/stanford_indoor3d/Area_6_office_1.npy\ndata/stanford_indoor3d/Area_6_office_20.npy\ndata/stanford_indoor3d/Area_6_office_21.npy\ndata/stanford_indoor3d/Area_6_office_22.npy\ndata/stanford_indoor3d/Area_6_office_23.npy\ndata/stanford_indoor3d/Area_6_office_24.npy\ndata/stanford_indoor3d/Area_6_office_25.npy\ndata/stanford_indoor3d/Area_6_office_26.npy\ndata/stanford_indoor3d/Area_6_office_27.npy\ndata/stanford_indoor3d/Area_6_office_28.npy\ndata/stanford_indoor3d/Area_6_office_29.npy\ndata/stanford_indoor3d/Area_6_office_2.npy\ndata/stanford_indoor3d/Area_6_office_30.npy\ndata/stanford_indoor3d/Area_6_office_31.npy\ndata/stanford_indoor3d/Area_6_office_32.npy\ndata/stanford_indoor3d/Area_6_office_33.npy\ndata/stanford_indoor3d/Area_6_office_34.npy\ndata/stanford_indoor3d/Area_6_office_35.npy\ndata/stanford_indoor3d/Area_6_office_36.npy\ndata/stanford_indoor3d/Area_6_office_37.npy\ndata/stanford_indoor3d/Area_6_office_3.npy\ndata/stanford_indoor3d/Area_6_office_4.npy\ndata/stanford_indoor3d/Area_6_office_5.npy\ndata/stanford_indoor3d/Area_6_office_6.npy\ndata/stanford_indoor3d/Area_6_office_7.npy\ndata
/stanford_indoor3d/Area_6_office_8.npy\ndata/stanford_indoor3d/Area_6_office_9.npy\ndata/stanford_indoor3d/Area_6_openspace_1.npy\ndata/stanford_indoor3d/Area_6_pantry_1.npy\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/meta/class_names.txt",
    "content": "ceiling\nfloor\nwall\nbeam\ncolumn\nwindow\ndoor\ntable\nchair\nsofa\nbookcase\nboard\nclutter\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/model.py",
    "content": "import tensorflow as tf\nimport math\nimport time\nimport numpy as np\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nsys.path.append(os.path.join(BASE_DIR, '../models'))\nimport tf_util\n\ndef placeholder_inputs(batch_size, num_point):\n  pointclouds_pl = tf.placeholder(tf.float32,\n                   shape=(batch_size, num_point, 9))\n  labels_pl = tf.placeholder(tf.int32,\n                shape=(batch_size, num_point))\n  return pointclouds_pl, labels_pl\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n  \"\"\" ConvNet baseline, input is BxNx9 gray image \"\"\"\n  batch_size = point_cloud.get_shape()[0].value\n  num_point = point_cloud.get_shape()[1].value\n  input_image = tf.expand_dims(point_cloud, -1)\n\n  k = 20\n\n  adj = tf_util.pairwise_distance(point_cloud[:, :, 6:])\n  nn_idx = tf_util.knn(adj, k=k) # (batch, num_points, k)\n  edge_feature = tf_util.get_edge_feature(input_image, nn_idx=nn_idx, k=k)\n\n  out1 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv1', bn_decay=bn_decay, is_dist=True)\n  \n  out2 = tf_util.conv2d(out1, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv2', bn_decay=bn_decay, is_dist=True)\n\n  net_1 = tf.reduce_max(out2, axis=-2, keep_dims=True)\n\n\n\n  adj = tf_util.pairwise_distance(net_1)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature = tf_util.get_edge_feature(net_1, nn_idx=nn_idx, k=k)\n\n  out3 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, 
weight_decay=weight_decay,\n                       scope='adj_conv3', bn_decay=bn_decay, is_dist=True)\n\n  out4 = tf_util.conv2d(out3, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv4', bn_decay=bn_decay, is_dist=True)\n  \n  net_2 = tf.reduce_max(out4, axis=-2, keep_dims=True)\n  \n  \n\n  adj = tf_util.pairwise_distance(net_2)\n  nn_idx = tf_util.knn(adj, k=k)\n  edge_feature = tf_util.get_edge_feature(net_2, nn_idx=nn_idx, k=k)\n\n  out5 = tf_util.conv2d(edge_feature, 64, [1,1],\n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training, weight_decay=weight_decay,\n                       scope='adj_conv5', bn_decay=bn_decay, is_dist=True)\n\n  # out6 = tf_util.conv2d(out5, 64, [1,1],\n  #                      padding='VALID', stride=[1,1],\n  #                      bn=True, is_training=is_training, weight_decay=weight_decay,\n  #                      scope='adj_conv6', bn_decay=bn_decay, is_dist=True)\n\n  net_3 = tf.reduce_max(out5, axis=-2, keep_dims=True)\n\n\n\n  out7 = tf_util.conv2d(tf.concat([net_1, net_2, net_3], axis=-1), 1024, [1, 1], \n                       padding='VALID', stride=[1,1],\n                       bn=True, is_training=is_training,\n                       scope='adj_conv7', bn_decay=bn_decay, is_dist=True)\n\n  out_max = tf_util.max_pool2d(out7, [num_point, 1], padding='VALID', scope='maxpool')\n\n\n  expand = tf.tile(out_max, [1, num_point, 1, 1])\n\n  concat = tf.concat(axis=3, values=[expand, \n                                     net_1,\n                                     net_2,\n                                     net_3])\n\n  # CONV \n  net = tf_util.conv2d(concat, 512, [1,1], padding='VALID', stride=[1,1],\n             bn=True, is_training=is_training, scope='seg/conv1', is_dist=True)\n  net = tf_util.conv2d(net, 256, [1,1], 
padding='VALID', stride=[1,1],\n             bn=True, is_training=is_training, scope='seg/conv2', is_dist=True)\n  net = tf_util.dropout(net, keep_prob=0.7, is_training=is_training, scope='dp1')\n  net = tf_util.conv2d(net, 13, [1,1], padding='VALID', stride=[1,1],\n             activation_fn=None, scope='seg/conv3', is_dist=True)\n  net = tf.squeeze(net, [2])\n\n  return net\n\ndef get_loss(pred, label):\n  \"\"\" pred: B,N,13; label: B,N \"\"\"\n  loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n  return tf.reduce_mean(loss)\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/test_job.sh",
    "content": "python batch_inference.py --model_path log1/epoch_60.ckpt --dump_dir log1/dump --output_filelist log1/output_filelist.txt --room_data_filelist meta/area1_data_label.txt\npython batch_inference.py --model_path log2/epoch_60.ckpt --dump_dir log2/dump --output_filelist log2/output_filelist.txt --room_data_filelist meta/area2_data_label.txt\npython batch_inference.py --model_path log3/epoch_60.ckpt --dump_dir log3/dump --output_filelist log3/output_filelist.txt --room_data_filelist meta/area3_data_label.txt\npython batch_inference.py --model_path log4/epoch_60.ckpt --dump_dir log4/dump --output_filelist log4/output_filelist.txt --room_data_filelist meta/area4_data_label.txt\npython batch_inference.py --model_path log5/epoch_60.ckpt --dump_dir log5/dump --output_filelist log5/output_filelist.txt --room_data_filelist meta/area5_data_label.txt\npython batch_inference.py --model_path log6/epoch_60.ckpt --dump_dir log6/dump --output_filelist log6/output_filelist.txt --room_data_filelist meta/area6_data_label.txt"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/train.py",
    "content": "import argparse\nimport math\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nsys.path.append(ROOT_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nfrom model import *\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--num_gpu', type=int, default=2, help='the number of GPUs to use [default: 2]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=4096, help='Point number [default: 4096]')\nparser.add_argument('--max_epoch', type=int, default=101, help='Epoch to run [default: 50]')\nparser.add_argument('--batch_size', type=int, default=12, help='Batch Size during training for each GPU [default: 24]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=300000, help='Decay step for lr decay [default: 300000]')\nparser.add_argument('--decay_rate', type=float, default=0.5, help='Decay rate for lr decay [default: 0.5]')\nparser.add_argument('--test_area', type=int, default=6, help='Which area to use for test, option: 1-6 [default: 6]')\nFLAGS = parser.parse_args()\n\nTOWER_NAME = 'tower'\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nNUM_POINT = FLAGS.num_point\nBASE_LEARNING_RATE = FLAGS.learning_rate\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): 
os.mkdir(LOG_DIR)\nos.system('cp model.py %s' % (LOG_DIR)) \nos.system('cp train.py %s' % (LOG_DIR)) \nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nMAX_NUM_POINT = 4096\nNUM_CLASSES = 13\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nALL_FILES = provider.getDataFiles('indoor3d_sem_seg_hdf5_data/all_files.txt') \nroom_filelist = [line.rstrip() for line in open('indoor3d_sem_seg_hdf5_data/room_filelist.txt')] \nprint len(room_filelist)\n\n# Load ALL data\ndata_batch_list = []\nlabel_batch_list = []\nfor h5_filename in ALL_FILES:\n  data_batch, label_batch = provider.loadDataFile(h5_filename)\n  data_batch_list.append(data_batch)\n  label_batch_list.append(label_batch)\ndata_batches = np.concatenate(data_batch_list, 0)\nlabel_batches = np.concatenate(label_batch_list, 0)\nprint(data_batches.shape)\nprint(label_batches.shape)\n\ntest_area = 'Area_'+str(FLAGS.test_area)\ntrain_idxs = []\ntest_idxs = []\nfor i,room_name in enumerate(room_filelist):\n  if test_area in room_name:\n    test_idxs.append(i)\n  else:\n    train_idxs.append(i)\n\ntrain_data = data_batches[train_idxs,...]\ntrain_label = label_batches[train_idxs]\ntest_data = data_batches[test_idxs,...]\ntest_label = label_batches[test_idxs]\nprint(train_data.shape, train_label.shape)\nprint(test_data.shape, test_label.shape)\n\n\ndef log_string(out_str):\n  LOG_FOUT.write(out_str+'\\n')\n  LOG_FOUT.flush()\n  print(out_str)\n\n\ndef get_learning_rate(batch):\n  learning_rate = tf.train.exponential_decay(\n            BASE_LEARNING_RATE,  # Base learning rate.\n            batch * BATCH_SIZE,  # Current index into the dataset.\n            DECAY_STEP,          # Decay step.\n            DECAY_RATE,          # Decay rate.\n            staircase=True)\n  learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!!\n  return learning_rate        
\n\ndef get_bn_decay(batch):\n  bn_momentum = tf.train.exponential_decay(\n            BN_INIT_DECAY,\n            batch*BATCH_SIZE,\n            BN_DECAY_DECAY_STEP,\n            BN_DECAY_DECAY_RATE,\n            staircase=True)\n  bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n  return bn_decay\n\ndef average_gradients(tower_grads):\n  \"\"\"Calculate average gradient for each shared variable across all towers.\n\n  Note that this function provides a synchronization point across all towers.\n\n  Args:\n    tower_grads: List of lists of (gradient, variable) tuples. The outer list\n    is over individual gradients. The inner list is over the gradient\n    calculation for each tower.\n  Returns:\n     List of pairs of (gradient, variable) where the gradient has been \n     averaged across all towers.\n  \"\"\"\n  average_grads = []\n  for grad_and_vars in zip(*tower_grads):\n    # Note that each grad_and_vars looks like the following:\n    #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))\n    grads = []\n    for g, _ in grad_and_vars:\n      expanded_g = tf.expand_dims(g, 0)\n      grads.append(expanded_g)\n\n    # Average over the 'tower' dimension.\n    grad = tf.concat(grads, 0)\n    grad = tf.reduce_mean(grad, 0)\n\n    # Keep in mind that the Variables are redundant because they are shared\n    # across towers. So .. 
we will just return the first tower's pointer to\n    # the Variable.\n    v = grad_and_vars[0][1]\n    grad_and_var = (grad, v)\n    average_grads.append(grad_and_var)\n  return average_grads\n\ndef train():\n  with tf.Graph().as_default(), tf.device('/cpu:0'):\n    batch = tf.Variable(0, trainable=False)\n    \n    bn_decay = get_bn_decay(batch)\n    tf.summary.scalar('bn_decay', bn_decay)\n\n    learning_rate = get_learning_rate(batch)\n    tf.summary.scalar('learning_rate', learning_rate)\n    \n    trainer = tf.train.AdamOptimizer(learning_rate)\n    \n    tower_grads = []\n    pointclouds_phs = []\n    labels_phs = []\n    is_training_phs = []\n\n    with tf.variable_scope(tf.get_variable_scope()):\n      for i in range(FLAGS.num_gpu):\n        with tf.device('/gpu:%d' % i):\n          with tf.name_scope('%s_%d' % (TOWER_NAME, i)) as scope:\n      \n            pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            \n            pointclouds_phs.append(pointclouds_pl)\n            labels_phs.append(labels_pl)\n            is_training_phs.append(is_training_pl)\n      \n            pred = get_model(pointclouds_phs[-1], is_training_phs[-1], bn_decay=bn_decay)\n            loss = get_loss(pred, labels_phs[-1])\n            tf.summary.scalar('loss', loss)\n\n            correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_phs[-1]))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)\n            tf.summary.scalar('accuracy', accuracy)\n\n            tf.get_variable_scope().reuse_variables()\n\n            grads = trainer.compute_gradients(loss)\n\n            tower_grads.append(grads)\n    \n    grads = average_gradients(tower_grads)\n\n    train_op = trainer.apply_gradients(grads, global_step=batch)\n    \n    saver = tf.train.Saver(tf.global_variables(), sharded=True, max_to_keep=10)\n    \n    # Create a session\n
    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    config.allow_soft_placement = True\n    sess = tf.Session(config=config)\n\n    # Add summary writers\n    merged = tf.summary.merge_all()\n    train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'),\n                  sess.graph)\n    test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'))\n\n    # Init variables for two GPUs\n    init = tf.group(tf.global_variables_initializer(),\n             tf.local_variables_initializer())\n    sess.run(init)\n\n    ops = {'pointclouds_phs': pointclouds_phs,\n         'labels_phs': labels_phs,\n         'is_training_phs': is_training_phs,\n         'pred': pred,\n         'loss': loss,\n         'train_op': train_op,\n         'merged': merged,\n         'step': batch}\n\n    for epoch in range(MAX_EPOCH):\n      log_string('**** EPOCH %03d ****' % (epoch))\n      sys.stdout.flush()\n       \n      train_one_epoch(sess, ops, train_writer)\n      \n      # Save the variables to disk.\n      if epoch % 10 == 0:\n        save_path = saver.save(sess, os.path.join(LOG_DIR,'epoch_' + str(epoch)+'.ckpt'))\n        log_string(\"Model saved in file: %s\" % save_path)\n\n\n\ndef train_one_epoch(sess, ops, train_writer):\n  \"\"\" ops: dict mapping from string to tf ops \"\"\"\n  is_training = True\n  \n  log_string('----')\n  current_data, current_label, _ = provider.shuffle_data(train_data[:,0:NUM_POINT,:], train_label) \n  \n  file_size = current_data.shape[0]\n  num_batches = file_size // (FLAGS.num_gpu * BATCH_SIZE) \n  \n  total_correct = 0\n  total_seen = 0\n  loss_sum = 0\n  \n  for batch_idx in range(num_batches):\n    if batch_idx % 100 == 0:\n      print('Current batch/total batch num: %d/%d'%(batch_idx,num_batches))\n    # Each step consumes two consecutive, non-overlapping batches, one per GPU tower.\n    start_idx_0 = (2*batch_idx) * BATCH_SIZE\n    end_idx_0 = (2*batch_idx+1) * BATCH_SIZE\n    start_idx_1 = (2*batch_idx+1) * BATCH_SIZE\n    end_idx_1 = (2*batch_idx+2) * BATCH_SIZE\n    \n    \n    feed_dict = {ops['pointclouds_phs'][0]: current_data[start_idx_0:end_idx_0, :, :],\n                 ops['pointclouds_phs'][1]: current_data[start_idx_1:end_idx_1, :, :],\n                 ops['labels_phs'][0]: current_label[start_idx_0:end_idx_0],\n                 ops['labels_phs'][1]: current_label[start_idx_1:end_idx_1],\n                 ops['is_training_phs'][0]: is_training,\n                 ops['is_training_phs'][1]: is_training}\n    summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']],\n                     feed_dict=feed_dict)\n    train_writer.add_summary(summary, step)\n    pred_val = np.argmax(pred_val, 2)\n    correct = np.sum(pred_val == current_label[start_idx_1:end_idx_1])\n    total_correct += correct\n    total_seen += (BATCH_SIZE*NUM_POINT)\n    loss_sum += loss_val\n  \n  log_string('mean loss: %f' % (loss_sum / float(num_batches)))\n  log_string('accuracy: %f' % (total_correct / float(total_seen)))\n\nif __name__ == \"__main__\":\n  train()\n  LOG_FOUT.close()\n"
  },
  {
    "path": "dgcnn/tensorflow/sem_seg/train_job.sh",
    "content": "python train.py --log_dir log1 --test_area 1\npython train.py --log_dir log2 --test_area 2\npython train.py --log_dir log3 --test_area 3\npython train.py --log_dir log4 --test_area 4\npython train.py --log_dir log5 --test_area 5\npython train.py --log_dir log6 --test_area 6"
  },
  {
    "path": "dgcnn/tensorflow/train.py",
    "content": "import argparse\nimport math\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, 'models'))\nsys.path.append(os.path.join(BASE_DIR, 'utils'))\nimport provider\nimport tf_util\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='dgcnn', help='Model name: dgcnn')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=1024, help='Point Number [256/512/1024/2048] [default: 1024]')\nparser.add_argument('--max_epoch', type=int, default=250, help='Epoch to run [default: 250]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Momentum for momentum optimizer [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')\nFLAGS = parser.parse_args()\n\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nGPU_INDEX = FLAGS.gpu\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = importlib.import_module(FLAGS.model) # import network module\nMODEL_FILE = os.path.join(BASE_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not 
os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nMAX_NUM_POINT = 2048\nNUM_CLASSES = 40\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\n# ModelNet40 official train/test split\nTRAIN_FILES = provider.getDataFiles( \\\n    os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/train_files.txt'))\nTEST_FILES = provider.getDataFiles(\\\n    os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/test_files.txt'))\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate        \n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef train():\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            
print(is_training_pl)\n            \n            # Note the global_step=batch parameter to minimize. \n            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.\n            batch = tf.Variable(0)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            # Get model and loss \n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)\n            loss = MODEL.get_loss(pred, labels_pl, end_points)\n            tf.summary.scalar('loss', loss)\n\n            correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)\n            tf.summary.scalar('accuracy', accuracy)\n\n            # Get training operator\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n            train_op = optimizer.minimize(loss, global_step=batch)\n            \n            # Add ops to save and restore all the variables.\n            saver = tf.train.Saver()\n            \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = tf.Session(config=config)\n\n        # Add summary writers\n        #merged = tf.merge_all_summaries()\n        merged = tf.summary.merge_all()\n        train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'),\n                                  sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'))\n\n        # Init variables\n       
 init = tf.global_variables_initializer()\n        # To fix the bug introduced in TF 0.12.1 as in\n        # http://stackoverflow.com/questions/41543774/invalidargumenterror-for-tensor-bool-tensorflow-0-12-1\n        #sess.run(init)\n        sess.run(init, {is_training_pl: True})\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch}\n\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n             \n            train_one_epoch(sess, ops, train_writer)\n            eval_one_epoch(sess, ops, test_writer)\n            \n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\n\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    \n    # Shuffle train files\n    train_file_idxs = np.arange(0, len(TRAIN_FILES))\n    np.random.shuffle(train_file_idxs)\n    \n    for fn in range(len(TRAIN_FILES)):\n        log_string('----' + str(fn) + '-----')\n        current_data, current_label = provider.loadDataFile(TRAIN_FILES[train_file_idxs[fn]])\n        current_data = current_data[:,0:NUM_POINT,:]\n        current_data, current_label, _ = provider.shuffle_data(current_data, np.squeeze(current_label))            \n        current_label = np.squeeze(current_label)\n        \n        file_size = current_data.shape[0]\n        num_batches = file_size // BATCH_SIZE\n        \n        total_correct = 0\n        total_seen = 0\n        loss_sum = 0\n       \n        for batch_idx in range(num_batches):\n    
        start_idx = batch_idx * BATCH_SIZE\n            end_idx = (batch_idx+1) * BATCH_SIZE\n            \n            # Augment batched point clouds by rotation and jittering\n            rotated_data = provider.rotate_point_cloud(current_data[start_idx:end_idx, :, :])\n            jittered_data = provider.jitter_point_cloud(rotated_data)\n            jittered_data = provider.random_scale_point_cloud(jittered_data)\n            jittered_data = provider.rotate_perturbation_point_cloud(jittered_data)\n            jittered_data = provider.shift_point_cloud(jittered_data)\n\n            feed_dict = {ops['pointclouds_pl']: jittered_data,\n                         ops['labels_pl']: current_label[start_idx:end_idx],\n                         ops['is_training_pl']: is_training,}\n            summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n                ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n            train_writer.add_summary(summary, step)\n            pred_val = np.argmax(pred_val, 1)\n            correct = np.sum(pred_val == current_label[start_idx:end_idx])\n            total_correct += correct\n            total_seen += BATCH_SIZE\n            loss_sum += loss_val\n        \n        log_string('mean loss: %f' % (loss_sum / float(num_batches)))\n        log_string('accuracy: %f' % (total_correct / float(total_seen)))\n\n        \ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = False\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n    \n    for fn in range(len(TEST_FILES)):\n        log_string('----' + str(fn) + '-----')\n        current_data, current_label = provider.loadDataFile(TEST_FILES[fn])\n        current_data = current_data[:,0:NUM_POINT,:]\n        current_label = np.squeeze(current_label)\n        
\n        file_size = current_data.shape[0]\n        num_batches = file_size // BATCH_SIZE\n        \n        for batch_idx in range(num_batches):\n            start_idx = batch_idx * BATCH_SIZE\n            end_idx = (batch_idx+1) * BATCH_SIZE\n\n            feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],\n                         ops['labels_pl']: current_label[start_idx:end_idx],\n                         ops['is_training_pl']: is_training}\n            summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n                ops['loss'], ops['pred']], feed_dict=feed_dict)\n            pred_val = np.argmax(pred_val, 1)\n            correct = np.sum(pred_val == current_label[start_idx:end_idx])\n            total_correct += correct\n            total_seen += BATCH_SIZE\n            loss_sum += (loss_val*BATCH_SIZE)\n            for i in range(start_idx, end_idx):\n                l = current_label[i]\n                total_seen_class[l] += 1\n                total_correct_class[l] += (pred_val[i-start_idx] == l)\n            \n    log_string('eval mean loss: %f' % (loss_sum / float(total_seen)))\n    log_string('eval accuracy: %f' % (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class, dtype=float))))\n         \n\n\nif __name__ == \"__main__\":\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "dgcnn/tensorflow/utils/data_prep_util.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nfrom plyfile import (PlyData, PlyElement, make2d, PlyParseError, PlyProperty)\nimport numpy as np\nimport h5py\n\nSAMPLING_BIN = os.path.join(BASE_DIR, 'third_party/mesh_sampling/build/pcsample')\n\nSAMPLING_POINT_NUM = 2048\nSAMPLING_LEAF_SIZE = 0.005\n\nMODELNET40_PATH = '../datasets/modelnet40'\ndef export_ply(pc, filename):\n\tvertex = np.zeros(pc.shape[0], dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])\n\tfor i in range(pc.shape[0]):\n\t\tvertex[i] = (pc[i][0], pc[i][1], pc[i][2])\n\tply_out = PlyData([PlyElement.describe(vertex, 'vertex', comments=['vertices'])])\n\tply_out.write(filename)\n\n# Sample points on the obj shape\ndef get_sampling_command(obj_filename, ply_filename):\n    cmd = SAMPLING_BIN + ' ' + obj_filename\n    cmd += ' ' + ply_filename\n    cmd += ' -n_samples %d ' % SAMPLING_POINT_NUM\n    cmd += ' -leaf_size %f ' % SAMPLING_LEAF_SIZE\n    return cmd\n\n# --------------------------------------------------------------\n# Following are the helper functions to load MODELNET40 shapes\n# --------------------------------------------------------------\n\n# Read in the list of categories in MODELNET40\ndef get_category_names():\n    shape_names_file = os.path.join(MODELNET40_PATH, 'shape_names.txt')\n    shape_names = [line.rstrip() for line in open(shape_names_file)]\n    return shape_names\n\n# Return all the filepaths for the shapes in MODELNET40 \ndef get_obj_filenames():\n    obj_filelist_file = os.path.join(MODELNET40_PATH, 'filelist.txt')\n    obj_filenames = [os.path.join(MODELNET40_PATH, line.rstrip()) for line in open(obj_filelist_file)]\n    print('Got %d obj files in modelnet40.' 
% len(obj_filenames))\n    return obj_filenames\n\n# Helper function to create the parent folder and all subdir folders if they do not exist\ndef batch_mkdir(output_folder, subdir_list):\n    if not os.path.exists(output_folder):\n        os.mkdir(output_folder)\n    for subdir in subdir_list:\n        if not os.path.exists(os.path.join(output_folder, subdir)):\n            os.mkdir(os.path.join(output_folder, subdir))\n\n# ----------------------------------------------------------------\n# Following are the helper functions to save/load HDF5 files\n# ----------------------------------------------------------------\n\n# Write numpy array data and label to h5_filename\ndef save_h5_data_label_normal(h5_filename, data, label, normal, \n\t\tdata_dtype='float32', label_dtype='uint8', normal_dtype='float32'):\n    h5_fout = h5py.File(h5_filename, 'w')\n    h5_fout.create_dataset(\n            'data', data=data,\n            compression='gzip', compression_opts=4,\n            dtype=data_dtype)\n    h5_fout.create_dataset(\n            'normal', data=normal,\n            compression='gzip', compression_opts=4,\n            dtype=normal_dtype)\n    h5_fout.create_dataset(\n            'label', data=label,\n            compression='gzip', compression_opts=1,\n            dtype=label_dtype)\n    h5_fout.close()\n\n\n# Write numpy array data and label to h5_filename\ndef save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):\n    h5_fout = h5py.File(h5_filename, 'w')\n    h5_fout.create_dataset(\n            'data', data=data,\n            compression='gzip', compression_opts=4,\n            dtype=data_dtype)\n    h5_fout.create_dataset(\n            'label', data=label,\n            compression='gzip', compression_opts=1,\n            dtype=label_dtype)\n    h5_fout.close()\n\n# Read numpy array data and label from h5_filename\ndef load_h5_data_label_normal(h5_filename):\n    f = h5py.File(h5_filename, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    normal = 
f['normal'][:]\n    return (data, label, normal)\n\n# Read numpy array data and label from h5_filename\ndef load_h5_data_label_seg(h5_filename):\n    f = h5py.File(h5_filename, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    seg = f['pid'][:]\n    return (data, label, seg)\n\n# Read numpy array data and label from h5_filename\ndef load_h5(h5_filename):\n    f = h5py.File(h5_filename, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    return (data, label)\n\n# ----------------------------------------------------------------\n# Following are the helper functions to save/load PLY files\n# ----------------------------------------------------------------\n\n# Load PLY file\ndef load_ply_data(filename, point_num):\n    plydata = PlyData.read(filename)\n    pc = plydata['vertex'].data[:point_num]\n    pc_array = np.array([[x, y, z] for x,y,z in pc])\n    return pc_array\n\n# Load PLY file\ndef load_ply_normal(filename, point_num):\n    plydata = PlyData.read(filename)\n    pc = plydata['normal'].data[:point_num]\n    pc_array = np.array([[x, y, z] for x,y,z in pc])\n    return pc_array\n\n# Make up rows for Nxk array\n# Input Pad is 'edge' or 'constant'\ndef pad_arr_rows(arr, row, pad='edge'):\n    assert(len(arr.shape) == 2)\n    assert(arr.shape[0] <= row)\n    assert(pad == 'edge' or pad == 'constant')\n    if arr.shape[0] == row:\n        return arr\n    if pad == 'edge':\n        return np.lib.pad(arr, ((0, row-arr.shape[0]), (0, 0)), 'edge')\n    if pad == 'constant':\n        return np.lib.pad(arr, ((0, row-arr.shape[0]), (0, 0)), 'constant', constant_values=(0, 0))\n\n\n"
  },
  {
    "path": "dgcnn/tensorflow/utils/eulerangles.py",
    "content": "# emacs: -*- mode: python-mode; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##\n#\n#   See COPYING file distributed along with the NiBabel package for the\n#   copyright and license terms.\n#\n### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##\n''' Module implementing Euler angle rotations and their conversions\n\nSee:\n\n* http://en.wikipedia.org/wiki/Rotation_matrix\n* http://en.wikipedia.org/wiki/Euler_angles\n* http://mathworld.wolfram.com/EulerAngles.html\n\nSee also: *Representing Attitude with Euler Angles and Quaternions: A\nReference* (2006) by James Diebel. A cached PDF link last found here:\n\nhttp://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.5134\n\nEuler's rotation theorem tells us that any rotation in 3D can be\ndescribed by 3 angles.  Let's call the 3 angles the *Euler angle vector*\nand call the angles in the vector :math:`alpha`, :math:`beta` and\n:math:`gamma`.  The vector is [ :math:`alpha`,\n:math:`beta`. :math:`gamma` ] and, in this description, the order of the\nparameters specifies the order in which the rotations occur (so the\nrotation corresponding to :math:`alpha` is applied first).\n\nIn order to specify the meaning of an *Euler angle vector* we need to\nspecify the axes around which each of the rotations corresponding to\n:math:`alpha`, :math:`beta` and :math:`gamma` will occur.\n\nThere are therefore three axes for the rotations :math:`alpha`,\n:math:`beta` and :math:`gamma`; let's call them :math:`i` :math:`j`,\n:math:`k`.\n\nLet us express the rotation :math:`alpha` around axis `i` as a 3 by 3\nrotation matrix `A`.  Similarly :math:`beta` around `j` becomes 3 x 3\nmatrix `B` and :math:`gamma` around `k` becomes matrix `G`.  Then the\nwhole rotation expressed by the Euler angle vector [ :math:`alpha`,\n:math:`beta`. 
:math:`gamma` ], `R` is given by::\n\n   R = np.dot(G, np.dot(B, A))\n\nSee http://mathworld.wolfram.com/EulerAngles.html\n\nThe order :math:`G B A` expresses the fact that the rotations are\nperformed in the order of the vector (:math:`alpha` around axis `i` =\n`A` first).\n\nTo convert a given Euler angle vector to a meaningful rotation, and a\nrotation matrix, we need to define:\n\n* the axes `i`, `j`, `k`\n* whether a rotation matrix should be applied on the left of a vector to\n  be transformed (vectors are column vectors) or on the right (vectors\n  are row vectors).\n* whether the rotations move the axes as they are applied (intrinsic\n  rotations) - compared the situation where the axes stay fixed and the\n  vectors move within the axis frame (extrinsic)\n* the handedness of the coordinate system\n\nSee: http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities\n\nWe are using the following conventions:\n\n* axes `i`, `j`, `k` are the `z`, `y`, and `x` axes respectively.  Thus\n  an Euler angle vector [ :math:`alpha`, :math:`beta`. 
:math:`gamma` ]\n  in our convention implies a :math:`alpha` radian rotation around the\n  `z` axis, followed by a :math:`beta` rotation around the `y` axis,\n  followed by a :math:`gamma` rotation around the `x` axis.\n* the rotation matrix applies on the left, to column vectors on the\n  right, so if `R` is the rotation matrix, and `v` is a 3 x N matrix\n  with N column vectors, the transformed vector set `vdash` is given by\n  ``vdash = np.dot(R, v)``.\n* extrinsic rotations - the axes are fixed, and do not move with the\n  rotations.\n* a right-handed coordinate system\n\nThe convention of rotation around ``z``, followed by rotation around\n``y``, followed by rotation around ``x``, is known (confusingly) as\n\"xyz\", pitch-roll-yaw, Cardan angles, or Tait-Bryan angles.\n'''\n\nimport math\n\nimport sys\nif sys.version_info >= (3,0):\n    from functools import reduce\n\nimport numpy as np\n\n\n_FLOAT_EPS_4 = np.finfo(float).eps * 4.0\n\n\ndef euler2mat(z=0, y=0, x=0):\n    ''' Return matrix for rotations around z, y and x axes\n\n    Uses the z, then y, then x convention above\n\n    Parameters\n    ----------\n    z : scalar\n       Rotation angle in radians around z-axis (performed first)\n    y : scalar\n       Rotation angle in radians around y-axis\n    x : scalar\n       Rotation angle in radians around x-axis (performed last)\n\n    Returns\n    -------\n    M : array shape (3,3)\n       Rotation matrix giving same rotation as for given angles\n\n    Examples\n    --------\n    >>> zrot = 1.3 # radians\n    >>> yrot = -0.1\n    >>> xrot = 0.2\n    >>> M = euler2mat(zrot, yrot, xrot)\n    >>> M.shape == (3, 3)\n    True\n\n    The output rotation matrix is equal to the composition of the\n    individual rotations\n\n    >>> M1 = euler2mat(zrot)\n    >>> M2 = euler2mat(0, yrot)\n    >>> M3 = euler2mat(0, 0, xrot)\n    >>> composed_M = np.dot(M3, np.dot(M2, M1))\n    >>> np.allclose(M, composed_M)\n    True\n\n    You can specify rotations by named 
arguments\n\n    >>> np.all(M3 == euler2mat(x=xrot))\n    True\n\n    When applying M to a vector, the vector should column vector to the\n    right of M.  If the right hand side is a 2D array rather than a\n    vector, then each column of the 2D array represents a vector.\n\n    >>> vec = np.array([1, 0, 0]).reshape((3,1))\n    >>> v2 = np.dot(M, vec)\n    >>> vecs = np.array([[1, 0, 0],[0, 1, 0]]).T # giving 3x2 array\n    >>> vecs2 = np.dot(M, vecs)\n\n    Rotations are counter-clockwise.\n\n    >>> zred = np.dot(euler2mat(z=np.pi/2), np.eye(3))\n    >>> np.allclose(zred, [[0, -1, 0],[1, 0, 0], [0, 0, 1]])\n    True\n    >>> yred = np.dot(euler2mat(y=np.pi/2), np.eye(3))\n    >>> np.allclose(yred, [[0, 0, 1],[0, 1, 0], [-1, 0, 0]])\n    True\n    >>> xred = np.dot(euler2mat(x=np.pi/2), np.eye(3))\n    >>> np.allclose(xred, [[1, 0, 0],[0, 0, -1], [0, 1, 0]])\n    True\n\n    Notes\n    -----\n    The direction of rotation is given by the right-hand rule (orient\n    the thumb of the right hand along the axis around which the rotation\n    occurs, with the end of the thumb at the positive end of the axis;\n    curl your fingers; the direction your fingers curl is the direction\n    of rotation).  
Therefore, the rotations are counterclockwise if\n    looking along the axis of rotation from positive to negative.\n    '''\n    Ms = []\n    if z:\n        cosz = math.cos(z)\n        sinz = math.sin(z)\n        Ms.append(np.array(\n                [[cosz, -sinz, 0],\n                 [sinz, cosz, 0],\n                 [0, 0, 1]]))\n    if y:\n        cosy = math.cos(y)\n        siny = math.sin(y)\n        Ms.append(np.array(\n                [[cosy, 0, siny],\n                 [0, 1, 0],\n                 [-siny, 0, cosy]]))\n    if x:\n        cosx = math.cos(x)\n        sinx = math.sin(x)\n        Ms.append(np.array(\n                [[1, 0, 0],\n                 [0, cosx, -sinx],\n                 [0, sinx, cosx]]))\n    if Ms:\n        return reduce(np.dot, Ms[::-1])\n    return np.eye(3)\n\n\ndef mat2euler(M, cy_thresh=None):\n    ''' Discover Euler angle vector from 3x3 matrix\n\n    Uses the conventions above.\n\n    Parameters\n    ----------\n    M : array-like, shape (3,3)\n    cy_thresh : None or scalar, optional\n       threshold below which to give up on straightforward arctan for\n       estimating x rotation.  
If None (default), estimate from\n       precision of input.\n\n    Returns\n    -------\n    z : scalar\n    y : scalar\n    x : scalar\n       Rotations in radians around z, y, x axes, respectively\n\n    Notes\n    -----\n    If there was no numerical error, the routine could be derived using\n    Sympy expression for z then y then x rotation matrix, which is::\n\n      [                       cos(y)*cos(z),                       -cos(y)*sin(z),         sin(y)],\n      [cos(x)*sin(z) + cos(z)*sin(x)*sin(y), cos(x)*cos(z) - sin(x)*sin(y)*sin(z), -cos(y)*sin(x)],\n      [sin(x)*sin(z) - cos(x)*cos(z)*sin(y), cos(z)*sin(x) + cos(x)*sin(y)*sin(z),  cos(x)*cos(y)]\n\n    with the obvious derivations for z, y, and x\n\n       z = atan2(-r12, r11)\n       y = asin(r13)\n       x = atan2(-r23, r33)\n\n    Problems arise when cos(y) is close to zero, because both of::\n\n       z = atan2(cos(y)*sin(z), cos(y)*cos(z))\n       x = atan2(cos(y)*sin(x), cos(x)*cos(y))\n\n    will be close to atan2(0, 0), and highly unstable.\n\n    The ``cy`` fix for numerical instability below is from: *Graphics\n    Gems IV*, Paul Heckbert (editor), Academic Press, 1994, ISBN:\n    0123361559.  
Specifically it comes from EulerAngles.c by Ken\n    Shoemake, and deals with the case where cos(y) is close to zero:\n\n    See: http://www.graphicsgems.org/\n\n    The code appears to be licensed (from the website) as \"can be used\n    without restrictions\".\n    '''\n    M = np.asarray(M)\n    if cy_thresh is None:\n        try:\n            cy_thresh = np.finfo(M.dtype).eps * 4\n        except ValueError:\n            cy_thresh = _FLOAT_EPS_4\n    r11, r12, r13, r21, r22, r23, r31, r32, r33 = M.flat\n    # cy: sqrt((cos(y)*cos(z))**2 + (cos(x)*cos(y))**2)\n    cy = math.sqrt(r33*r33 + r23*r23)\n    if cy > cy_thresh: # cos(y) not close to zero, standard form\n        z = math.atan2(-r12,  r11) # atan2(cos(y)*sin(z), cos(y)*cos(z))\n        y = math.atan2(r13,  cy) # atan2(sin(y), cy)\n        x = math.atan2(-r23, r33) # atan2(cos(y)*sin(x), cos(x)*cos(y))\n    else: # cos(y) (close to) zero, so x -> 0.0 (see above)\n        # so r21 -> sin(z), r22 -> cos(z) and\n        z = math.atan2(r21,  r22)\n        y = math.atan2(r13,  cy) # atan2(sin(y), cy)\n        x = 0.0\n    return z, y, x\n\n\ndef euler2quat(z=0, y=0, x=0):\n    ''' Return quaternion corresponding to these Euler angles\n\n    Uses the z, then y, then x convention above\n\n    Parameters\n    ----------\n    z : scalar\n       Rotation angle in radians around z-axis (performed first)\n    y : scalar\n       Rotation angle in radians around y-axis\n    x : scalar\n       Rotation angle in radians around x-axis (performed last)\n\n    Returns\n    -------\n    quat : array shape (4,)\n       Quaternion in w, x, y z (real, then vector) format\n\n    Notes\n    -----\n    We can derive this formula in Sympy using:\n\n    1. Formula giving quaternion corresponding to rotation of theta radians\n       about arbitrary axis:\n       http://mathworld.wolfram.com/EulerParameters.html\n    2. Generated formulae from 1.) 
for quaternions corresponding to\n       theta radians rotations about ``x, y, z`` axes\n    3. Apply quaternion multiplication formula -\n       http://en.wikipedia.org/wiki/Quaternions#Hamilton_product - to\n       formulae from 2.) to give formula for combined rotations.\n    '''\n    z = z/2.0\n    y = y/2.0\n    x = x/2.0\n    cz = math.cos(z)\n    sz = math.sin(z)\n    cy = math.cos(y)\n    sy = math.sin(y)\n    cx = math.cos(x)\n    sx = math.sin(x)\n    return np.array([\n             cx*cy*cz - sx*sy*sz,\n             cx*sy*sz + cy*cz*sx,\n             cx*cz*sy - sx*cy*sz,\n             cx*cy*sz + sx*cz*sy])\n\n\ndef quat2euler(q):\n    ''' Return Euler angles corresponding to quaternion `q`\n\n    Parameters\n    ----------\n    q : 4 element sequence\n       w, x, y, z of quaternion\n\n    Returns\n    -------\n    z : scalar\n       Rotation angle in radians around z-axis (performed first)\n    y : scalar\n       Rotation angle in radians around y-axis\n    x : scalar\n       Rotation angle in radians around x-axis (performed last)\n\n    Notes\n    -----\n    It's possible to reduce the amount of calculation a little, by\n    combining parts of the ``quat2mat`` and ``mat2euler`` functions, but\n    the reduction in computation is small, and the code repetition is\n    large.\n    '''\n    # delayed import to avoid cyclic dependencies\n    import nibabel.quaternions as nq\n    return mat2euler(nq.quat2mat(q))\n\n\ndef euler2angle_axis(z=0, y=0, x=0):\n    ''' Return angle, axis corresponding to these Euler angles\n\n    Uses the z, then y, then x convention above\n\n    Parameters\n    ----------\n    z : scalar\n       Rotation angle in radians around z-axis (performed first)\n    y : scalar\n       Rotation angle in radians around y-axis\n    x : scalar\n       Rotation angle in radians around x-axis (performed last)\n\n    Returns\n    -------\n    theta : scalar\n       angle of rotation\n    vector : array shape (3,)\n       axis around which 
rotation occurs\n\n    Examples\n    --------\n    >>> theta, vec = euler2angle_axis(0, 1.5, 0)\n    >>> print(theta)\n    1.5\n    >>> np.allclose(vec, [0, 1, 0])\n    True\n    '''\n    # delayed import to avoid cyclic dependencies\n    import nibabel.quaternions as nq\n    return nq.quat2angle_axis(euler2quat(z, y, x))\n\n\ndef angle_axis2euler(theta, vector, is_normalized=False):\n    ''' Convert angle, axis pair to Euler angles\n\n    Parameters\n    ----------\n    theta : scalar\n       angle of rotation\n    vector : 3 element sequence\n       vector specifying axis for rotation.\n    is_normalized : bool, optional\n       True if vector is already normalized (has norm of 1).  Default\n       False\n\n    Returns\n    -------\n    z : scalar\n    y : scalar\n    x : scalar\n       Rotations in radians around z, y, x axes, respectively\n\n    Examples\n    --------\n    >>> z, y, x = angle_axis2euler(0, [1, 0, 0])\n    >>> np.allclose((z, y, x), 0)\n    True\n\n    Notes\n    -----\n    It's possible to reduce the amount of calculation a little, by\n    combining parts of the ``angle_axis2mat`` and ``mat2euler``\n    functions, but the reduction in computation is small, and the code\n    repetition is large.\n    '''\n    # delayed import to avoid cyclic dependencies\n    import nibabel.quaternions as nq\n    M = nq.angle_axis2mat(theta, vector, is_normalized)\n    return mat2euler(M)\n"
  },
  {
    "path": "dgcnn/tensorflow/utils/pc_util.py",
    "content": "\"\"\" Utility functions for processing point clouds.\n\nAuthor: Charles R. Qi, Hao Su\nDate: November 2016\n\"\"\"\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\n# Draw point cloud\nfrom eulerangles import euler2mat\n\n# Point cloud IO\nimport numpy as np\nfrom plyfile import PlyData, PlyElement\n\n \n# ----------------------------------------\n# Point Cloud/Volume Conversions\n# ----------------------------------------\n\ndef point_cloud_to_volume_batch(point_clouds, vsize=12, radius=1.0, flatten=True):\n    \"\"\" Input is BxNx3 batch of point cloud\n        Output is Bx(vsize^3)\n    \"\"\"\n    vol_list = []\n    for b in range(point_clouds.shape[0]):\n        vol = point_cloud_to_volume(np.squeeze(point_clouds[b,:,:]), vsize, radius)\n        if flatten:\n            vol_list.append(vol.flatten())\n        else:\n            vol_list.append(np.expand_dims(np.expand_dims(vol, -1), 0))\n    if flatten:\n        return np.vstack(vol_list)\n    else:\n        return np.concatenate(vol_list, 0)\n\n\ndef point_cloud_to_volume(points, vsize, radius=1.0):\n    \"\"\" input is Nx3 points.\n        output is vsize*vsize*vsize\n        assumes points are in range [-radius, radius]\n    \"\"\"\n    vol = np.zeros((vsize,vsize,vsize))\n    voxel = 2*radius/float(vsize)\n    locations = (points + radius)/voxel\n    locations = locations.astype(int)\n    vol[locations[:,0],locations[:,1],locations[:,2]] = 1.0\n    return vol\n\n#a = np.zeros((16,1024,3))\n#print point_cloud_to_volume_batch(a, 12, 1.0, False).shape\n\ndef volume_to_point_cloud(vol):\n    \"\"\" vol is occupancy grid (value = 0 or 1) of size vsize*vsize*vsize\n        return Nx3 numpy array.\n    \"\"\"\n    vsize = vol.shape[0]\n    assert(vol.shape[1] == vsize and vol.shape[1] == vsize)\n    points = []\n    for a in range(vsize):\n        for b in range(vsize):\n            for c in range(vsize):\n                if vol[a,b,c] 
== 1:\n                    points.append(np.array([a,b,c]))\n    if len(points) == 0:\n        return np.zeros((0,3))\n    points = np.vstack(points)\n    return points\n\n# ----------------------------------------\n# Point cloud IO\n# ----------------------------------------\n\ndef read_ply(filename):\n    \"\"\" read XYZ point cloud from filename PLY file \"\"\"\n    plydata = PlyData.read(filename)\n    pc = plydata['vertex'].data\n    pc_array = np.array([[x, y, z] for x,y,z in pc])\n    return pc_array\n\n\ndef write_ply(points, filename, text=True):\n    \"\"\" input: Nx3, write points to filename as PLY format. \"\"\"\n    points = [(points[i,0], points[i,1], points[i,2]) for i in range(points.shape[0])]\n    vertex = np.array(points, dtype=[('x', 'f4'), ('y', 'f4'),('z', 'f4')])\n    el = PlyElement.describe(vertex, 'vertex', comments=['vertices'])\n    PlyData([el], text=text).write(filename)\n\n\n# ----------------------------------------\n# Simple Point cloud and Volume Renderers\n# ----------------------------------------\n\ndef draw_point_cloud(input_points, canvasSize=500, space=200, diameter=25,\n                     xrot=0, yrot=0, zrot=0, switch_xyz=[0,1,2], normalize=True):\n    \"\"\" Render point cloud to image with alpha channel.\n        Input:\n            points: Nx3 numpy array (+y is up direction)\n        Output:\n            gray image as numpy array of size canvasSizexcanvasSize\n    \"\"\"\n    image = np.zeros((canvasSize, canvasSize))\n    if input_points is None or input_points.shape[0] == 0:\n        return image\n\n    points = input_points[:, switch_xyz]\n    M = euler2mat(zrot, yrot, xrot)\n    points = (np.dot(M, points.transpose())).transpose()\n\n    # Normalize the point cloud\n    # We normalize scale to fit points in a unit sphere\n    if normalize:\n        centroid = np.mean(points, axis=0)\n        points -= centroid\n        furthest_distance = np.max(np.sqrt(np.sum(abs(points)**2,axis=-1)))\n        points /= 
furthest_distance\n\n    # Pre-compute the Gaussian disk\n    radius = (diameter-1)/2.0\n    disk = np.zeros((diameter, diameter))\n    for i in range(diameter):\n        for j in range(diameter):\n            if (i - radius) * (i-radius) + (j-radius) * (j-radius) <= radius * radius:\n                disk[i, j] = np.exp((-(i-radius)**2 - (j-radius)**2)/(radius**2))\n    mask = np.argwhere(disk > 0)\n    dx = mask[:, 0]\n    dy = mask[:, 1]\n    dv = disk[disk > 0]\n    \n    # Order points by z-buffer\n    zorder = np.argsort(points[:, 2])\n    points = points[zorder, :]\n    points[:, 2] = (points[:, 2] - np.min(points[:, 2])) / (np.max(points[:, 2]) - np.min(points[:, 2]))\n    max_depth = np.max(points[:, 2])\n       \n    for i in range(points.shape[0]):\n        j = points.shape[0] - i - 1\n        x = points[j, 0]\n        y = points[j, 1]\n        xc = canvasSize/2 + (x*space)\n        yc = canvasSize/2 + (y*space)\n        xc = int(np.round(xc))\n        yc = int(np.round(yc))\n        \n        px = dx + xc\n        py = dy + yc\n        \n        image[px, py] = image[px, py] * 0.7 + dv * (max_depth - points[j, 2]) * 0.3\n    \n    image = image / np.max(image)\n    return image\n\ndef point_cloud_three_views(points):\n    \"\"\" input points Nx3 numpy array (+y is up direction).\n        return a numpy array gray image of size 500x1500. 
\"\"\" \n    # +y is up direction\n    # xrot is azimuth\n    # yrot is in-plane\n    # zrot is elevation\n    img1 = draw_point_cloud(points, zrot=110/180.0*np.pi, xrot=45/180.0*np.pi, yrot=0/180.0*np.pi)\n    img2 = draw_point_cloud(points, zrot=70/180.0*np.pi, xrot=135/180.0*np.pi, yrot=0/180.0*np.pi)\n    img3 = draw_point_cloud(points, zrot=180.0/180.0*np.pi, xrot=90/180.0*np.pi, yrot=0/180.0*np.pi)\n    image_large = np.concatenate([img1, img2, img3], 1)\n    return image_large\n\n\nfrom PIL import Image\ndef point_cloud_three_views_demo():\n    \"\"\" Demo for draw_point_cloud function \"\"\"\n    points = read_ply('../third_party/mesh_sampling/piano.ply')\n    im_array = point_cloud_three_views(points)\n    img = Image.fromarray(np.uint8(im_array*255.0))\n    img.save('piano.jpg')\n\nif __name__==\"__main__\":\n    point_cloud_three_views_demo()\n\n\nimport matplotlib.pyplot as plt\ndef pyplot_draw_point_cloud(points, output_filename):\n    \"\"\" points is a Nx3 numpy array \"\"\"\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.scatter(points[:,0], points[:,1], points[:,2])\n    ax.set_xlabel('x')\n    ax.set_ylabel('y')\n    ax.set_zlabel('z')\n    #savefig(output_filename)\n\ndef pyplot_draw_volume(vol, output_filename):\n    \"\"\" vol is of size vsize*vsize*vsize\n        output an image to output_filename\n    \"\"\"\n    points = volume_to_point_cloud(vol)\n    pyplot_draw_point_cloud(points, output_filename)\n"
  },
  {
    "path": "dgcnn/tensorflow/utils/plyfile.py",
    "content": "#   Copyright 2014 Darsh Ranjan\n#\n#   This file is part of python-plyfile.\n#\n#   python-plyfile is free software: you can redistribute it and/or\n#   modify it under the terms of the GNU General Public License as\n#   published by the Free Software Foundation, either version 3 of the\n#   License, or (at your option) any later version.\n#\n#   python-plyfile is distributed in the hope that it will be useful,\n#   but WITHOUT ANY WARRANTY; without even the implied warranty of\n#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n#   General Public License for more details.\n#\n#   You should have received a copy of the GNU General Public License\n#   along with python-plyfile.  If not, see\n#       <http://www.gnu.org/licenses/>.\n\nfrom itertools import islice as _islice\n\nimport numpy as _np\nfrom sys import byteorder as _byteorder\n\n\ntry:\n    _range = xrange\nexcept NameError:\n    _range = range\n\n\n# Many-many relation\n_data_type_relation = [\n    ('int8', 'i1'),\n    ('char', 'i1'),\n    ('uint8', 'u1'),\n    ('uchar', 'b1'),\n    ('uchar', 'u1'),\n    ('int16', 'i2'),\n    ('short', 'i2'),\n    ('uint16', 'u2'),\n    ('ushort', 'u2'),\n    ('int32', 'i4'),\n    ('int', 'i4'),\n    ('uint32', 'u4'),\n    ('uint', 'u4'),\n    ('float32', 'f4'),\n    ('float', 'f4'),\n    ('float64', 'f8'),\n    ('double', 'f8')\n]\n\n_data_types = dict(_data_type_relation)\n_data_type_reverse = dict((b, a) for (a, b) in _data_type_relation)\n\n_types_list = []\n_types_set = set()\nfor (_a, _b) in _data_type_relation:\n    if _a not in _types_set:\n        _types_list.append(_a)\n        _types_set.add(_a)\n    if _b not in _types_set:\n        _types_list.append(_b)\n        _types_set.add(_b)\n\n\n_byte_order_map = {\n    'ascii': '=',\n    'binary_little_endian': '<',\n    'binary_big_endian': '>'\n}\n\n_byte_order_reverse = {\n    '<': 'binary_little_endian',\n    '>': 'binary_big_endian'\n}\n\n_native_byte_order = {'little': '<', 
'big': '>'}[_byteorder]\n\n\ndef _lookup_type(type_str):\n    if type_str not in _data_type_reverse:\n        try:\n            type_str = _data_types[type_str]\n        except KeyError:\n            raise ValueError(\"field type %r not in %r\" %\n                             (type_str, _types_list))\n\n    return _data_type_reverse[type_str]\n\n\ndef _split_line(line, n):\n    fields = line.split(None, n)\n    if len(fields) == n:\n        fields.append('')\n\n    assert len(fields) == n + 1\n\n    return fields\n\n\ndef make2d(array, cols=None, dtype=None):\n    '''\n    Make a 2D array from an array of arrays.  The `cols' and `dtype'\n    arguments can be omitted if the array is not empty.\n\n    '''\n    if (cols is None or dtype is None) and not len(array):\n        raise RuntimeError(\"cols and dtype must be specified for empty \"\n                           \"array\")\n\n    if cols is None:\n        cols = len(array[0])\n\n    if dtype is None:\n        dtype = array[0].dtype\n\n    return _np.fromiter(array, [('_', dtype, (cols,))],\n                        count=len(array))['_']\n\n\nclass PlyParseError(Exception):\n\n    '''\n    Raised when a PLY file cannot be parsed.\n\n    The attributes `element', `row', `property', and `message' give\n    additional information.\n\n    '''\n\n    def __init__(self, message, element=None, row=None, prop=None):\n        self.message = message\n        self.element = element\n        self.row = row\n        self.prop = prop\n\n        s = ''\n        if self.element:\n            s += 'element %r: ' % self.element.name\n        if self.row is not None:\n            s += 'row %d: ' % self.row\n        if self.prop:\n            s += 'property %r: ' % self.prop.name\n        s += self.message\n\n        Exception.__init__(self, s)\n\n    def __repr__(self):\n        return ('PlyParseError(%r, element=%r, row=%r, prop=%r)' %\n                (self.message, self.element, self.row, self.prop))\n\n\nclass PlyData(object):\n\n 
   '''\n    PLY file header and data.\n\n    A PlyData instance is created in one of two ways: by the static\n    method PlyData.read (to read a PLY file), or directly from __init__\n    given a sequence of elements (which can then be written to a PLY\n    file).\n\n    '''\n\n    def __init__(self, elements=[], text=False, byte_order='=',\n                 comments=[], obj_info=[]):\n        '''\n        elements: sequence of PlyElement instances.\n\n        text: whether the resulting PLY file will be text (True) or\n            binary (False).\n\n        byte_order: '<' for little-endian, '>' for big-endian, or '='\n            for native.  This is only relevant if `text' is False.\n\n        comments: sequence of strings that will be placed in the header\n            between the 'ply' and 'format ...' lines.\n\n        obj_info: like comments, but will be placed in the header with\n            \"obj_info ...\" instead of \"comment ...\".\n\n        '''\n        if byte_order == '=' and not text:\n            byte_order = _native_byte_order\n\n        self.byte_order = byte_order\n        self.text = text\n\n        self.comments = list(comments)\n        self.obj_info = list(obj_info)\n        self.elements = elements\n\n    def _get_elements(self):\n        return self._elements\n\n    def _set_elements(self, elements):\n        self._elements = tuple(elements)\n        self._index()\n\n    elements = property(_get_elements, _set_elements)\n\n    def _get_byte_order(self):\n        return self._byte_order\n\n    def _set_byte_order(self, byte_order):\n        if byte_order not in ['<', '>', '=']:\n            raise ValueError(\"byte order must be '<', '>', or '='\")\n\n        self._byte_order = byte_order\n\n    byte_order = property(_get_byte_order, _set_byte_order)\n\n    def _index(self):\n        self._element_lookup = dict((elt.name, elt) for elt in\n                                    self._elements)\n        if len(self._element_lookup) != 
len(self._elements):\n            raise ValueError(\"two elements with same name\")\n\n    @staticmethod\n    def _parse_header(stream):\n        '''\n        Parse a PLY header from a readable file-like stream.\n\n        '''\n        lines = []\n        comments = {'comment': [], 'obj_info': []}\n        while True:\n            line = stream.readline().decode('ascii').strip()\n            fields = _split_line(line, 1)\n\n            if fields[0] == 'end_header':\n                break\n\n            elif fields[0] in comments.keys():\n                lines.append(fields)\n            else:\n                lines.append(line.split())\n\n        a = 0\n        if lines[a] != ['ply']:\n            raise PlyParseError(\"expected 'ply'\")\n\n        a += 1\n        while lines[a][0] in comments.keys():\n            comments[lines[a][0]].append(lines[a][1])\n            a += 1\n\n        if lines[a][0] != 'format':\n            raise PlyParseError(\"expected 'format'\")\n\n        if lines[a][2] != '1.0':\n            raise PlyParseError(\"expected version '1.0'\")\n\n        if len(lines[a]) != 3:\n            raise PlyParseError(\"too many fields after 'format'\")\n\n        fmt = lines[a][1]\n\n        if fmt not in _byte_order_map:\n            raise PlyParseError(\"don't understand format %r\" % fmt)\n\n        byte_order = _byte_order_map[fmt]\n        text = fmt == 'ascii'\n\n        a += 1\n        while a < len(lines) and lines[a][0] in comments.keys():\n            comments[lines[a][0]].append(lines[a][1])\n            a += 1\n\n        return PlyData(PlyElement._parse_multi(lines[a:]),\n                       text, byte_order,\n                       comments['comment'], comments['obj_info'])\n\n    @staticmethod\n    def read(stream):\n        '''\n        Read PLY data from a readable file-like object or filename.\n\n        '''\n        (must_close, stream) = _open_stream(stream, 'read')\n        try:\n            data = PlyData._parse_header(stream)\n   
         for elt in data:\n                elt._read(stream, data.text, data.byte_order)\n        finally:\n            if must_close:\n                stream.close()\n\n        return data\n\n    def write(self, stream):\n        '''\n        Write PLY data to a writeable file-like object or filename.\n\n        '''\n        (must_close, stream) = _open_stream(stream, 'write')\n        try:\n            stream.write(self.header.encode('ascii'))\n            stream.write(b'\\r\\n')\n            for elt in self:\n                elt._write(stream, self.text, self.byte_order)\n        finally:\n            if must_close:\n                stream.close()\n\n    @property\n    def header(self):\n        '''\n        Provide PLY-formatted metadata for the instance.\n\n        '''\n        lines = ['ply']\n\n        if self.text:\n            lines.append('format ascii 1.0')\n        else:\n            lines.append('format ' +\n                         _byte_order_reverse[self.byte_order] +\n                         ' 1.0')\n\n        # Some information is lost here, since all comments are placed\n        # between the 'format' line and the first element.\n        for c in self.comments:\n            lines.append('comment ' + c)\n\n        for c in self.obj_info:\n            lines.append('obj_info ' + c)\n\n        lines.extend(elt.header for elt in self.elements)\n        lines.append('end_header')\n        return '\\r\\n'.join(lines)\n\n    def __iter__(self):\n        return iter(self.elements)\n\n    def __len__(self):\n        return len(self.elements)\n\n    def __contains__(self, name):\n        return name in self._element_lookup\n\n    def __getitem__(self, name):\n        return self._element_lookup[name]\n\n    def __str__(self):\n        return self.header\n\n    def __repr__(self):\n        return ('PlyData(%r, text=%r, byte_order=%r, '\n                'comments=%r, obj_info=%r)' %\n                (self.elements, self.text, self.byte_order,\n               
  self.comments, self.obj_info))\n\n\ndef _open_stream(stream, read_or_write):\n    if hasattr(stream, read_or_write):\n        return (False, stream)\n    try:\n        return (True, open(stream, read_or_write[0] + 'b'))\n    except TypeError:\n        raise RuntimeError(\"expected open file or filename\")\n\n\nclass PlyElement(object):\n\n    '''\n    PLY file element.\n\n    A client of this library doesn't normally need to instantiate this\n    directly, so the following is only for the sake of documenting the\n    internals.\n\n    Creating a PlyElement instance is generally done in one of two ways:\n    as a byproduct of PlyData.read (when reading a PLY file) and by\n    PlyElement.describe (before writing a PLY file).\n\n    '''\n\n    def __init__(self, name, properties, count, comments=[]):\n        '''\n        This is not part of the public interface.  The preferred methods\n        of obtaining PlyElement instances are PlyData.read (to read from\n        a file) and PlyElement.describe (to construct from a numpy\n        array).\n\n        '''\n        self._name = str(name)\n        self._check_name()\n        self._count = count\n\n        self._properties = tuple(properties)\n        self._index()\n\n        self.comments = list(comments)\n\n        self._have_list = any(isinstance(p, PlyListProperty)\n                              for p in self.properties)\n\n    @property\n    def count(self):\n        return self._count\n\n    def _get_data(self):\n        return self._data\n\n    def _set_data(self, data):\n        self._data = data\n        self._count = len(data)\n        self._check_sanity()\n\n    data = property(_get_data, _set_data)\n\n    def _check_sanity(self):\n        for prop in self.properties:\n            if prop.name not in self._data.dtype.fields:\n                raise ValueError(\"dangling property %r\" % prop.name)\n\n    def _get_properties(self):\n        return self._properties\n\n    def _set_properties(self, 
properties):\n        self._properties = tuple(properties)\n        self._check_sanity()\n        self._index()\n\n    properties = property(_get_properties, _set_properties)\n\n    def _index(self):\n        self._property_lookup = dict((prop.name, prop)\n                                     for prop in self._properties)\n        if len(self._property_lookup) != len(self._properties):\n            raise ValueError(\"two properties with same name\")\n\n    def ply_property(self, name):\n        return self._property_lookup[name]\n\n    @property\n    def name(self):\n        return self._name\n\n    def _check_name(self):\n        if any(c.isspace() for c in self._name):\n            msg = \"element name %r contains spaces\" % self._name\n            raise ValueError(msg)\n\n    def dtype(self, byte_order='='):\n        '''\n        Return the numpy dtype of the in-memory representation of the\n        data.  (If there are no list properties, and the PLY format is\n        binary, then this also accurately describes the on-disk\n        representation of the element.)\n\n        '''\n        return [(prop.name, prop.dtype(byte_order))\n                for prop in self.properties]\n\n    @staticmethod\n    def _parse_multi(header_lines):\n        '''\n        Parse a list of PLY element definitions.\n\n        '''\n        elements = []\n        while header_lines:\n            (elt, header_lines) = PlyElement._parse_one(header_lines)\n            elements.append(elt)\n\n        return elements\n\n    @staticmethod\n    def _parse_one(lines):\n        '''\n        Consume one element definition.  
The unconsumed input is\n        returned along with a PlyElement instance.\n\n        '''\n        a = 0\n        line = lines[a]\n\n        if line[0] != 'element':\n            raise PlyParseError(\"expected 'element'\")\n        if len(line) > 3:\n            raise PlyParseError(\"too many fields after 'element'\")\n        if len(line) < 3:\n            raise PlyParseError(\"too few fields after 'element'\")\n\n        (name, count) = (line[1], int(line[2]))\n\n        comments = []\n        properties = []\n        while True:\n            a += 1\n            if a >= len(lines):\n                break\n\n            if lines[a][0] == 'comment':\n                comments.append(lines[a][1])\n            elif lines[a][0] == 'property':\n                properties.append(PlyProperty._parse_one(lines[a]))\n            else:\n                break\n\n        return (PlyElement(name, properties, count, comments),\n                lines[a:])\n\n    @staticmethod\n    def describe(data, name, len_types={}, val_types={},\n                 comments=[]):\n        '''\n        Construct a PlyElement from an array's metadata.\n\n        len_types and val_types can be given as mappings from list\n        property names to type strings (like 'u1', 'f4', etc., or\n        'int8', 'float32', etc.). These can be used to define the length\n        and value types of list properties.  
List property lengths\n        always default to type 'u1' (8-bit unsigned integer), and value\n        types default to 'i4' (32-bit integer).\n\n        '''\n        if not isinstance(data, _np.ndarray):\n            raise TypeError(\"only numpy arrays are supported\")\n\n        if len(data.shape) != 1:\n            raise ValueError(\"only one-dimensional arrays are \"\n                             \"supported\")\n\n        count = len(data)\n\n        properties = []\n        descr = data.dtype.descr\n\n        for t in descr:\n            if not isinstance(t[1], str):\n                raise ValueError(\"nested records not supported\")\n\n            if not t[0]:\n                raise ValueError(\"field with empty name\")\n\n            if len(t) != 2 or t[1][1] == 'O':\n                # non-scalar field, which corresponds to a list\n                # property in PLY.\n\n                if t[1][1] == 'O':\n                    if len(t) != 2:\n                        raise ValueError(\"non-scalar object fields not \"\n                                         \"supported\")\n\n                len_str = _data_type_reverse[len_types.get(t[0], 'u1')]\n                if t[1][1] == 'O':\n                    val_type = val_types.get(t[0], 'i4')\n                    val_str = _lookup_type(val_type)\n                else:\n                    val_str = _lookup_type(t[1][1:])\n\n                prop = PlyListProperty(t[0], len_str, val_str)\n            else:\n                val_str = _lookup_type(t[1][1:])\n                prop = PlyProperty(t[0], val_str)\n\n            properties.append(prop)\n\n        elt = PlyElement(name, properties, count, comments)\n        elt.data = data\n\n        return elt\n\n    def _read(self, stream, text, byte_order):\n        '''\n        Read the actual data from a PLY file.\n\n        '''\n        if text:\n            self._read_txt(stream)\n        else:\n            if self._have_list:\n                # There are list 
properties, so a simple load is\n                # impossible.\n                self._read_bin(stream, byte_order)\n            else:\n                # There are no list properties, so loading the data is\n                # much more straightforward.\n                self._data = _np.fromfile(stream,\n                                          self.dtype(byte_order),\n                                          self.count)\n\n        if len(self._data) < self.count:\n            k = len(self._data)\n            del self._data\n            raise PlyParseError(\"early end-of-file\", self, k)\n\n        self._check_sanity()\n\n    def _write(self, stream, text, byte_order):\n        '''\n        Write the data to a PLY file.\n\n        '''\n        if text:\n            self._write_txt(stream)\n        else:\n            if self._have_list:\n                # There are list properties, so serialization is\n                # slightly complicated.\n                self._write_bin(stream, byte_order)\n            else:\n                # no list properties, so serialization is\n                # straightforward.\n                self.data.astype(self.dtype(byte_order),\n                                 copy=False).tofile(stream)\n\n    def _read_txt(self, stream):\n        '''\n        Load a PLY element from an ASCII-format PLY file.  
The element\n        may contain list properties.\n\n        '''\n        self._data = _np.empty(self.count, dtype=self.dtype())\n\n        k = 0\n        for line in _islice(iter(stream.readline, b''), self.count):\n            fields = iter(line.strip().split())\n            for prop in self.properties:\n                try:\n                    self._data[prop.name][k] = prop._from_fields(fields)\n                except StopIteration:\n                    raise PlyParseError(\"early end-of-line\",\n                                        self, k, prop)\n                except ValueError:\n                    raise PlyParseError(\"malformed input\",\n                                        self, k, prop)\n            try:\n                next(fields)\n            except StopIteration:\n                pass\n            else:\n                raise PlyParseError(\"expected end-of-line\", self, k)\n            k += 1\n\n        if k < self.count:\n            del self._data\n            raise PlyParseError(\"early end-of-file\", self, k)\n\n    def _write_txt(self, stream):\n        '''\n        Save a PLY element to an ASCII-format PLY file.  The element may\n        contain list properties.\n\n        '''\n        for rec in self.data:\n            fields = []\n            for prop in self.properties:\n                fields.extend(prop._to_fields(rec[prop.name]))\n\n            _np.savetxt(stream, [fields], '%.18g', newline='\\r\\n')\n\n    def _read_bin(self, stream, byte_order):\n        '''\n        Load a PLY element from a binary PLY file.  
The element may\n        contain list properties.\n\n        '''\n        self._data = _np.empty(self.count, dtype=self.dtype(byte_order))\n\n        for k in _range(self.count):\n            for prop in self.properties:\n                try:\n                    self._data[prop.name][k] = \\\n                        prop._read_bin(stream, byte_order)\n                except StopIteration:\n                    raise PlyParseError(\"early end-of-file\",\n                                        self, k, prop)\n\n    def _write_bin(self, stream, byte_order):\n        '''\n        Save a PLY element to a binary PLY file.  The element may\n        contain list properties.\n\n        '''\n        for rec in self.data:\n            for prop in self.properties:\n                prop._write_bin(rec[prop.name], stream, byte_order)\n\n    @property\n    def header(self):\n        '''\n        Format this element's metadata as it would appear in a PLY\n        header.\n\n        '''\n        lines = ['element %s %d' % (self.name, self.count)]\n\n        # Some information is lost here, since all comments are placed\n        # between the 'element' line and the first property definition.\n        for c in self.comments:\n            lines.append('comment ' + c)\n\n        lines.extend(list(map(str, self.properties)))\n\n        return '\\r\\n'.join(lines)\n\n    def __getitem__(self, key):\n        return self.data[key]\n\n    def __setitem__(self, key, value):\n        self.data[key] = value\n\n    def __str__(self):\n        return self.header\n\n    def __repr__(self):\n        return ('PlyElement(%r, %r, count=%d, comments=%r)' %\n                (self.name, self.properties, self.count,\n                 self.comments))\n\n\nclass PlyProperty(object):\n\n    '''\n    PLY property description.  
This class is pure metadata; the data\n    itself is contained in PlyElement instances.\n\n    '''\n\n    def __init__(self, name, val_dtype):\n        self._name = str(name)\n        self._check_name()\n        self.val_dtype = val_dtype\n\n    def _get_val_dtype(self):\n        return self._val_dtype\n\n    def _set_val_dtype(self, val_dtype):\n        self._val_dtype = _data_types[_lookup_type(val_dtype)]\n\n    val_dtype = property(_get_val_dtype, _set_val_dtype)\n\n    @property\n    def name(self):\n        return self._name\n\n    def _check_name(self):\n        if any(c.isspace() for c in self._name):\n            msg = \"Error: property name %r contains spaces\" % self._name\n            raise RuntimeError(msg)\n\n    @staticmethod\n    def _parse_one(line):\n        assert line[0] == 'property'\n\n        if line[1] == 'list':\n            if len(line) > 5:\n                raise PlyParseError(\"too many fields after \"\n                                    \"'property list'\")\n            if len(line) < 5:\n                raise PlyParseError(\"too few fields after \"\n                                    \"'property list'\")\n\n            return PlyListProperty(line[4], line[2], line[3])\n\n        else:\n            if len(line) > 3:\n                raise PlyParseError(\"too many fields after \"\n                                    \"'property'\")\n            if len(line) < 3:\n                raise PlyParseError(\"too few fields after \"\n                                    \"'property'\")\n\n            return PlyProperty(line[2], line[1])\n\n    def dtype(self, byte_order='='):\n        '''\n        Return the numpy dtype description for this property (as a tuple\n        of strings).\n\n        '''\n        return byte_order + self.val_dtype\n\n    def _from_fields(self, fields):\n        '''\n        Parse from generator.  
Raise StopIteration if the property could\n        not be read.\n\n        '''\n        return _np.dtype(self.dtype()).type(next(fields))\n\n    def _to_fields(self, data):\n        '''\n        Return generator over one item.\n\n        '''\n        yield _np.dtype(self.dtype()).type(data)\n\n    def _read_bin(self, stream, byte_order):\n        '''\n        Read data from a binary stream.  Raise StopIteration if the\n        property could not be read.\n\n        '''\n        try:\n            return _np.fromfile(stream, self.dtype(byte_order), 1)[0]\n        except IndexError:\n            raise StopIteration\n\n    def _write_bin(self, data, stream, byte_order):\n        '''\n        Write data to a binary stream.\n\n        '''\n        _np.dtype(self.dtype(byte_order)).type(data).tofile(stream)\n\n    def __str__(self):\n        val_str = _data_type_reverse[self.val_dtype]\n        return 'property %s %s' % (val_str, self.name)\n\n    def __repr__(self):\n        return 'PlyProperty(%r, %r)' % (self.name,\n                                        _lookup_type(self.val_dtype))\n\n\nclass PlyListProperty(PlyProperty):\n\n    '''\n    PLY list property description.\n\n    '''\n\n    def __init__(self, name, len_dtype, val_dtype):\n        PlyProperty.__init__(self, name, val_dtype)\n\n        self.len_dtype = len_dtype\n\n    def _get_len_dtype(self):\n        return self._len_dtype\n\n    def _set_len_dtype(self, len_dtype):\n        self._len_dtype = _data_types[_lookup_type(len_dtype)]\n\n    len_dtype = property(_get_len_dtype, _set_len_dtype)\n\n    def dtype(self, byte_order='='):\n        '''\n        List properties always have a numpy dtype of \"object\".\n\n        '''\n        return '|O'\n\n    def list_dtype(self, byte_order='='):\n        '''\n        Return the pair (len_dtype, val_dtype) (both numpy-friendly\n        strings).\n\n        '''\n        return (byte_order + self.len_dtype,\n                byte_order + self.val_dtype)\n\n    def 
_from_fields(self, fields):\n        (len_t, val_t) = self.list_dtype()\n\n        n = int(_np.dtype(len_t).type(next(fields)))\n\n        data = _np.loadtxt(list(_islice(fields, n)), val_t, ndmin=1)\n        if len(data) < n:\n            raise StopIteration\n\n        return data\n\n    def _to_fields(self, data):\n        '''\n        Return generator over the (numerical) PLY representation of the\n        list data (length followed by actual data).\n\n        '''\n        (len_t, val_t) = self.list_dtype()\n\n        data = _np.asarray(data, dtype=val_t).ravel()\n\n        yield _np.dtype(len_t).type(data.size)\n        for x in data:\n            yield x\n\n    def _read_bin(self, stream, byte_order):\n        (len_t, val_t) = self.list_dtype(byte_order)\n\n        try:\n            n = _np.fromfile(stream, len_t, 1)[0]\n        except IndexError:\n            raise StopIteration\n\n        data = _np.fromfile(stream, val_t, n)\n        if len(data) < n:\n            raise StopIteration\n\n        return data\n\n    def _write_bin(self, data, stream, byte_order):\n        '''\n        Write data to a binary stream.\n\n        '''\n        (len_t, val_t) = self.list_dtype(byte_order)\n\n        data = _np.asarray(data, dtype=val_t).ravel()\n\n        _np.array(data.size, dtype=len_t).tofile(stream)\n        data.tofile(stream)\n\n    def __str__(self):\n        len_str = _data_type_reverse[self.len_dtype]\n        val_str = _data_type_reverse[self.val_dtype]\n        return 'property list %s %s %s' % (len_str, val_str, self.name)\n\n    def __repr__(self):\n        return ('PlyListProperty(%r, %r, %r)' %\n                (self.name,\n                 _lookup_type(self.len_dtype),\n                 _lookup_type(self.val_dtype)))\n"
  },
  {
    "path": "dgcnn/tensorflow/utils/tf_util.py",
    "content": "\"\"\" Wrapper functions for TensorFlow layers.\n\nAuthor: Charles R. Qi\nDate: November 2016\n\nUpdated by Yue Wang and Yongbin Sun\n\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\n\ndef _variable_on_cpu(name, shape, initializer, use_fp16=False, trainable=True):\n  \"\"\"Helper to create a Variable stored on CPU memory.\n  Args:\n    name: name of the variable\n    shape: list of ints\n    initializer: initializer for Variable\n  Returns:\n    Variable Tensor\n  \"\"\"\n  with tf.device('/cpu:0'):\n    dtype = tf.float16 if use_fp16 else tf.float32\n    var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype, trainable=trainable)\n  return var\n\ndef _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):\n  \"\"\"Helper to create an initialized Variable with weight decay.\n\n  Note that the Variable is initialized with a truncated normal distribution.\n  A weight decay is added only if one is specified.\n\n  Args:\n    name: name of the variable\n    shape: list of ints\n    stddev: standard deviation of a truncated Gaussian\n    wd: add L2Loss weight decay multiplied by this float. 
If None, weight\n        decay is not added for this Variable.\n    use_xavier: bool, whether to use xavier initializer\n\n  Returns:\n    Variable Tensor\n  \"\"\"\n  if use_xavier:\n    initializer = tf.contrib.layers.xavier_initializer()\n  else:\n    initializer = tf.truncated_normal_initializer(stddev=stddev)\n  var = _variable_on_cpu(name, shape, initializer)\n  if wd is not None:\n    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')\n    tf.add_to_collection('losses', weight_decay)\n  return var\n\n\ndef conv1d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=1,\n           padding='SAME',\n           use_xavier=True,\n           stddev=1e-3,\n           weight_decay=0.0,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None,\n           is_dist=False):\n  \"\"\" 1D convolution with non-linear operation.\n\n  Args:\n    inputs: 3-D tensor variable BxLxC\n    num_output_channels: int\n    kernel_size: int\n    scope: string\n    stride: int\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_in_channels = inputs.get_shape()[-1].value\n    kernel_shape = [kernel_size,\n                    num_in_channels, num_output_channels]\n    kernel = _variable_with_weight_decay('weights',\n                                         shape=kernel_shape,\n                                         use_xavier=use_xavier,\n                                         stddev=stddev,\n                                         wd=weight_decay)\n    outputs = tf.nn.conv1d(inputs, kernel,\n 
                          stride=stride,\n                           padding=padding)\n    biases = _variable_on_cpu('biases', [num_output_channels],\n                              tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases)\n\n    if bn:\n      outputs = batch_norm_for_conv1d(outputs, is_training,\n                                      bn_decay=bn_decay, scope='bn', is_dist=is_dist)\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return outputs\n\n\n\n\ndef conv2d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=[1, 1],\n           padding='SAME',\n           use_xavier=True,\n           stddev=1e-3,\n           weight_decay=0.0,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None,\n           is_dist=False):\n  \"\"\" 2D convolution with non-linear operation.\n\n  Args:\n    inputs: 4-D tensor variable BxHxWxC\n    num_output_channels: int\n    kernel_size: a list of 2 ints\n    scope: string\n    stride: a list of 2 ints\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n      kernel_h, kernel_w = kernel_size\n      num_in_channels = inputs.get_shape()[-1].value\n      kernel_shape = [kernel_h, kernel_w,\n                      num_in_channels, num_output_channels]\n      kernel = _variable_with_weight_decay('weights',\n                                           shape=kernel_shape,\n                                           use_xavier=use_xavier,\n                                           stddev=stddev,\n                   
                        wd=weight_decay)\n      stride_h, stride_w = stride\n      outputs = tf.nn.conv2d(inputs, kernel,\n                             [1, stride_h, stride_w, 1],\n                             padding=padding)\n      biases = _variable_on_cpu('biases', [num_output_channels],\n                                tf.constant_initializer(0.0))\n      outputs = tf.nn.bias_add(outputs, biases)\n\n      if bn:\n        outputs = batch_norm_for_conv2d(outputs, is_training,\n                                        bn_decay=bn_decay, scope='bn', is_dist=is_dist)\n\n      if activation_fn is not None:\n        outputs = activation_fn(outputs)\n      return outputs\n\n\ndef conv2d_transpose(inputs,\n                     num_output_channels,\n                     kernel_size,\n                     scope,\n                     stride=[1, 1],\n                     padding='SAME',\n                     use_xavier=True,\n                     stddev=1e-3,\n                     weight_decay=0.0,\n                     activation_fn=tf.nn.relu,\n                     bn=False,\n                     bn_decay=None,\n                     is_training=None,\n                     is_dist=False):\n  \"\"\" 2D convolution transpose with non-linear operation.\n\n  Args:\n    inputs: 4-D tensor variable BxHxWxC\n    num_output_channels: int\n    kernel_size: a list of 2 ints\n    scope: string\n    stride: a list of 2 ints\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n\n  Note: conv2d(conv2d_transpose(a, num_out, ksize, stride), a.shape[-1], ksize, stride) == a\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n      kernel_h, kernel_w = kernel_size\n      
num_in_channels = inputs.get_shape()[-1].value\n      kernel_shape = [kernel_h, kernel_w,\n                      num_output_channels, num_in_channels] # reversed to conv2d\n      kernel = _variable_with_weight_decay('weights',\n                                           shape=kernel_shape,\n                                           use_xavier=use_xavier,\n                                           stddev=stddev,\n                                           wd=weight_decay)\n      stride_h, stride_w = stride\n      \n      # from slim.convolution2d_transpose\n      def get_deconv_dim(dim_size, stride_size, kernel_size, padding):\n          dim_size *= stride_size\n\n          if padding == 'VALID' and dim_size is not None:\n            dim_size += max(kernel_size - stride_size, 0)\n          return dim_size\n\n      # caculate output shape\n      batch_size = inputs.get_shape()[0].value\n      height = inputs.get_shape()[1].value\n      width = inputs.get_shape()[2].value\n      out_height = get_deconv_dim(height, stride_h, kernel_h, padding)\n      out_width = get_deconv_dim(width, stride_w, kernel_w, padding)\n      output_shape = [batch_size, out_height, out_width, num_output_channels]\n\n      outputs = tf.nn.conv2d_transpose(inputs, kernel, output_shape,\n                             [1, stride_h, stride_w, 1],\n                             padding=padding)\n      biases = _variable_on_cpu('biases', [num_output_channels],\n                                tf.constant_initializer(0.0))\n      outputs = tf.nn.bias_add(outputs, biases)\n\n      if bn:\n        outputs = batch_norm_for_conv2d(outputs, is_training,\n                                        bn_decay=bn_decay, scope='bn', is_dist=is_dist)\n\n      if activation_fn is not None:\n        outputs = activation_fn(outputs)\n      return outputs\n\n   \n\ndef conv3d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=[1, 1, 1],\n           padding='SAME',\n  
         use_xavier=True,\n           stddev=1e-3,\n           weight_decay=0.0,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None,\n           is_dist=False):\n  \"\"\" 3D convolution with non-linear operation.\n\n  Args:\n    inputs: 5-D tensor variable BxDxHxWxC\n    num_output_channels: int\n    kernel_size: a list of 3 ints\n    scope: string\n    stride: a list of 3 ints\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_d, kernel_h, kernel_w = kernel_size\n    num_in_channels = inputs.get_shape()[-1].value\n    kernel_shape = [kernel_d, kernel_h, kernel_w,\n                    num_in_channels, num_output_channels]\n    kernel = _variable_with_weight_decay('weights',\n                                         shape=kernel_shape,\n                                         use_xavier=use_xavier,\n                                         stddev=stddev,\n                                         wd=weight_decay)\n    stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.conv3d(inputs, kernel,\n                           [1, stride_d, stride_h, stride_w, 1],\n                           padding=padding)\n    biases = _variable_on_cpu('biases', [num_output_channels],\n                              tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases)\n    \n    if bn:\n      outputs = batch_norm_for_conv3d(outputs, is_training,\n                                      bn_decay=bn_decay, scope='bn', is_dist=is_dist)\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return 
outputs\n\ndef fully_connected(inputs,\n                    num_outputs,\n                    scope,\n                    use_xavier=True,\n                    stddev=1e-3,\n                    weight_decay=0.0,\n                    activation_fn=tf.nn.relu,\n                    bn=False,\n                    bn_decay=None,\n                    is_training=None,\n                    is_dist=False):\n  \"\"\" Fully connected layer with non-linear operation.\n  \n  Args:\n    inputs: 2-D tensor BxN\n    num_outputs: int\n  \n  Returns:\n    Variable tensor of size B x num_outputs.\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_input_units = inputs.get_shape()[-1].value\n    weights = _variable_with_weight_decay('weights',\n                                          shape=[num_input_units, num_outputs],\n                                          use_xavier=use_xavier,\n                                          stddev=stddev,\n                                          wd=weight_decay)\n    outputs = tf.matmul(inputs, weights)\n    biases = _variable_on_cpu('biases', [num_outputs],\n                             tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases)\n     \n    if bn:\n      outputs = batch_norm_for_fc(outputs, is_training, bn_decay, 'bn', is_dist=is_dist)\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return outputs\n\n\ndef max_pool2d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2],\n               padding='VALID'):\n  \"\"\" 2D max pooling.\n\n  Args:\n    inputs: 4-D tensor BxHxWxC\n    kernel_size: a list of 2 ints\n    stride: a list of 2 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_h, kernel_w = kernel_size\n    stride_h, stride_w = stride\n    outputs = tf.nn.max_pool(inputs,\n                             ksize=[1, kernel_h, kernel_w, 1],\n                             strides=[1, 
stride_h, stride_w, 1],\n                             padding=padding,\n                             name=sc.name)\n    return outputs\n\ndef avg_pool2d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2],\n               padding='VALID'):\n  \"\"\" 2D avg pooling.\n\n  Args:\n    inputs: 4-D tensor BxHxWxC\n    kernel_size: a list of 2 ints\n    stride: a list of 2 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_h, kernel_w = kernel_size\n    stride_h, stride_w = stride\n    outputs = tf.nn.avg_pool(inputs,\n                             ksize=[1, kernel_h, kernel_w, 1],\n                             strides=[1, stride_h, stride_w, 1],\n                             padding=padding,\n                             name=sc.name)\n    return outputs\n\n\ndef max_pool3d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2, 2],\n               padding='VALID'):\n  \"\"\" 3D max pooling.\n\n  Args:\n    inputs: 5-D tensor BxDxHxWxC\n    kernel_size: a list of 3 ints\n    stride: a list of 3 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_d, kernel_h, kernel_w = kernel_size\n    stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.max_pool3d(inputs,\n                               ksize=[1, kernel_d, kernel_h, kernel_w, 1],\n                               strides=[1, stride_d, stride_h, stride_w, 1],\n                               padding=padding,\n                               name=sc.name)\n    return outputs\n\ndef avg_pool3d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2, 2],\n               padding='VALID'):\n  \"\"\" 3D avg pooling.\n\n  Args:\n    inputs: 5-D tensor BxDxHxWxC\n    kernel_size: a list of 3 ints\n    stride: a list of 3 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    
kernel_d, kernel_h, kernel_w = kernel_size\n    stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.avg_pool3d(inputs,\n                               ksize=[1, kernel_d, kernel_h, kernel_w, 1],\n                               strides=[1, stride_d, stride_h, stride_w, 1],\n                               padding=padding,\n                               name=sc.name)\n    return outputs\n\n\n\n\n\ndef batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay):\n  \"\"\" Batch normalization on convolutional maps and beyond...\n  Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow\n  \n  Args:\n      inputs:        Tensor, k-D input ... x C could be BC or BHWC or BDHWC\n      is_training:   boolean tf.Varialbe, true indicates training phase\n      scope:         string, variable scope\n      moments_dims:  a list of ints, indicating dimensions for moments calculation\n      bn_decay:      float or float tensor variable, controling moving average weight\n  Return:\n      normed:        batch-normalized maps\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_channels = inputs.get_shape()[-1].value\n    beta = tf.Variable(tf.constant(0.0, shape=[num_channels]),\n                       name='beta', trainable=True)\n    gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]),\n                        name='gamma', trainable=True)\n    batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments')\n    decay = bn_decay if bn_decay is not None else 0.9\n    ema = tf.train.ExponentialMovingAverage(decay=decay)\n    # Operator that maintains moving averages of variables.\n    ema_apply_op = tf.cond(is_training,\n                           lambda: ema.apply([batch_mean, batch_var]),\n                           lambda: tf.no_op())\n    \n    # Update moving average and return current batch's avg and var.\n    def mean_var_with_update():\n      with 
tf.control_dependencies([ema_apply_op]):\n        return tf.identity(batch_mean), tf.identity(batch_var)\n    \n    # ema.average returns the Variable holding the average of var.\n    mean, var = tf.cond(is_training,\n                        mean_var_with_update,\n                        lambda: (ema.average(batch_mean), ema.average(batch_var)))\n    normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)\n  return normed\n\n\ndef batch_norm_dist_template(inputs, is_training, scope, moments_dims, bn_decay):\n  \"\"\" The batch normalization for distributed training.\n  Args:\n      inputs:        Tensor, k-D input ... x C could be BC or BHWC or BDHWC\n      is_training:   boolean tf.Varialbe, true indicates training phase\n      scope:         string, variable scope\n      moments_dims:  a list of ints, indicating dimensions for moments calculation\n      bn_decay:      float or float tensor variable, controling moving average weight\n  Return:\n      normed:        batch-normalized maps\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_channels = inputs.get_shape()[-1].value\n    beta = _variable_on_cpu('beta', [num_channels], initializer=tf.zeros_initializer())\n    gamma = _variable_on_cpu('gamma', [num_channels], initializer=tf.ones_initializer())\n\n    pop_mean = _variable_on_cpu('pop_mean', [num_channels], initializer=tf.zeros_initializer(), trainable=False)\n    pop_var = _variable_on_cpu('pop_var', [num_channels], initializer=tf.ones_initializer(), trainable=False)\n\n    def train_bn_op():\n      batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments')\n      decay = bn_decay if bn_decay is not None else 0.9\n      train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) \n      train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))\n      with tf.control_dependencies([train_mean, train_var]):\n        return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, 
gamma, 1e-3)\n\n    def test_bn_op():\n      return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, gamma, 1e-3)\n\n    normed = tf.cond(is_training,\n                     train_bn_op,\n                     test_bn_op)\n    return normed\n\n\n\ndef batch_norm_for_fc(inputs, is_training, bn_decay, scope, is_dist=False):\n  \"\"\" Batch normalization on FC data.\n  \n  Args:\n      inputs:      Tensor, 2D BxC input\n      is_training: boolean tf.Varialbe, true indicates training phase\n      bn_decay:    float or float tensor variable, controling moving average weight\n      scope:       string, variable scope\n      is_dist:     true indicating distributed training scheme\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  if is_dist:\n    return batch_norm_dist_template(inputs, is_training, scope, [0,], bn_decay)\n  else:\n    return batch_norm_template(inputs, is_training, scope, [0,], bn_decay)\n\n\ndef batch_norm_for_conv1d(inputs, is_training, bn_decay, scope, is_dist=False):\n  \"\"\" Batch normalization on 1D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 3D BLC input maps\n      is_training: boolean tf.Varialbe, true indicates training phase\n      bn_decay:    float or float tensor variable, controling moving average weight\n      scope:       string, variable scope\n      is_dist:     true indicating distributed training scheme\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  if is_dist:\n    return batch_norm_dist_template(inputs, is_training, scope, [0,1], bn_decay)\n  else:\n    return batch_norm_template(inputs, is_training, scope, [0,1], bn_decay)\n\n\n\n  \ndef batch_norm_for_conv2d(inputs, is_training, bn_decay, scope, is_dist=False):\n  \"\"\" Batch normalization on 2D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 4D BHWC input maps\n      is_training: boolean tf.Varialbe, true indicates training phase\n      bn_decay:    float or float tensor variable, controling moving average 
weight\n      scope:       string, variable scope\n      is_dist:     true indicating distributed training scheme\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  if is_dist:\n    return batch_norm_dist_template(inputs, is_training, scope, [0,1,2], bn_decay)\n  else:\n    return batch_norm_template(inputs, is_training, scope, [0,1,2], bn_decay)\n\n\n\ndef batch_norm_for_conv3d(inputs, is_training, bn_decay, scope, is_dist=False):\n  \"\"\" Batch normalization on 3D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 5D BDHWC input maps\n      is_training: boolean tf.Varialbe, true indicates training phase\n      bn_decay:    float or float tensor variable, controling moving average weight\n      scope:       string, variable scope\n      is_dist:     true indicating distributed training scheme\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  if is_dist:\n    return batch_norm_dist_template(inputs, is_training, scope, [0,1,2,3], bn_decay)\n  else:\n    return batch_norm_template(inputs, is_training, scope, [0,1,2,3], bn_decay)\n\n\ndef dropout(inputs,\n            is_training,\n            scope,\n            keep_prob=0.5,\n            noise_shape=None):\n  \"\"\" Dropout layer.\n\n  Args:\n    inputs: tensor\n    is_training: boolean tf.Variable\n    scope: string\n    keep_prob: float in [0,1]\n    noise_shape: list of ints\n\n  Returns:\n    tensor variable\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    outputs = tf.cond(is_training,\n                      lambda: tf.nn.dropout(inputs, keep_prob, noise_shape),\n                      lambda: inputs)\n    return outputs\n\n\ndef pairwise_distance(point_cloud):\n  \"\"\"Compute pairwise distance of a point cloud.\n\n  Args:\n    point_cloud: tensor (batch_size, num_points, num_dims)\n\n  Returns:\n    pairwise distance: (batch_size, num_points, num_points)\n  \"\"\"\n  og_batch_size = point_cloud.get_shape().as_list()[0]\n  point_cloud = tf.squeeze(point_cloud)\n  
if og_batch_size == 1:\n    point_cloud = tf.expand_dims(point_cloud, 0)\n    \n  point_cloud_transpose = tf.transpose(point_cloud, perm=[0, 2, 1])\n  point_cloud_inner = tf.matmul(point_cloud, point_cloud_transpose)\n  point_cloud_inner = -2*point_cloud_inner\n  point_cloud_square = tf.reduce_sum(tf.square(point_cloud), axis=-1, keep_dims=True)\n  point_cloud_square_tranpose = tf.transpose(point_cloud_square, perm=[0, 2, 1])\n  return point_cloud_square + point_cloud_inner + point_cloud_square_tranpose\n\n\ndef knn(adj_matrix, k=20):\n  \"\"\"Get KNN based on the pairwise distance.\n  Args:\n    pairwise distance: (batch_size, num_points, num_points)\n    k: int\n\n  Returns:\n    nearest neighbors: (batch_size, num_points, k)\n  \"\"\"\n  neg_adj = -adj_matrix\n  _, nn_idx = tf.nn.top_k(neg_adj, k=k)\n  return nn_idx\n\n\ndef get_edge_feature(point_cloud, nn_idx, k=20):\n  \"\"\"Construct edge feature for each point\n  Args:\n    point_cloud: (batch_size, num_points, 1, num_dims)\n    nn_idx: (batch_size, num_points, k)\n    k: int\n\n  Returns:\n    edge features: (batch_size, num_points, k, num_dims)\n  \"\"\"\n  og_batch_size = point_cloud.get_shape().as_list()[0]\n  point_cloud = tf.squeeze(point_cloud)\n  if og_batch_size == 1:\n    point_cloud = tf.expand_dims(point_cloud, 0)\n\n  point_cloud_central = point_cloud\n\n  point_cloud_shape = point_cloud.get_shape()\n  batch_size = point_cloud_shape[0].value\n  num_points = point_cloud_shape[1].value\n  num_dims = point_cloud_shape[2].value\n\n  idx_ = tf.range(batch_size) * num_points\n  idx_ = tf.reshape(idx_, [batch_size, 1, 1]) \n\n  point_cloud_flat = tf.reshape(point_cloud, [-1, num_dims])\n  point_cloud_neighbors = tf.gather(point_cloud_flat, nn_idx+idx_)\n  point_cloud_central = tf.expand_dims(point_cloud_central, axis=-2)\n\n  point_cloud_central = tf.tile(point_cloud_central, [1, 1, k, 1])\n\n  edge_feature = tf.concat([point_cloud_central, point_cloud_neighbors-point_cloud_central], axis=-1)\n  
return edge_feature\n"
  },
  {
    "path": "download.sh",
    "content": "###\n # @Description: \n # @Autor: Jiachen Sun\n # @Date: 2022-02-16 22:23:16\n # @LastEditors: Jiachen Sun\n # @LastEditTime: 2022-02-17 13:09:42\n### \nwgetgdrive(){\n\n  # $1 = file ID\n  # $2 = file name\n\n  URL=\"https://docs.google.com/uc?export=download&id=$1\"\n\n  wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate $URL -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=$1\" -O $2 && rm -rf /tmp/cookies.txt\n}\n\nmkdir -p tmp\nkey=\"$1\"\ncase $key in\n\tpretrained)\n\t\twgetgdrive 1qSkMYYK1qkT4wMMeAXerSI2Q7AxWujsS tmp/pretrained.zip\n\t\tunzip -o tmp/pretrained.zip\n\t\t;;\n\truns)\n\t\tmkdir -p runs \n\t\tcd runs\n\t\tpython ../gdrivedl.py https://drive.google.com/drive/folders/1UT-OfAsQ1OGSa6HSLZcK6YyJeIkaJUfF?usp=sharing \n    \tcd ..\n\t\t;;\n\tcor_exp)\n\t\tmkdir -p cor_exp \n\t\tcd cor_exp\n\t\tpython ../gdrivedl.py https://drive.google.com/drive/folders/1iYcJwFCFm9JWSiL1puIVfjpEgNF2dSoy?usp=sharing \n    \tcd ..\t\n\t\t;;\n\tmodelnet40_c)\n\t\tmkdir -p data/modelnet40_c\n\t\tcd data/modelnet40_c\n\t\tpython ../../gdrivedl.py https://drive.google.com/drive/folders/10YeQRh92r_WdL-Dnog2zQfFr03UW4qXX?usp=sharing \n    \tcd ../..\n\t\t;;\n\tmodelnet40)\n\t\twget --no-check-certificate https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip\n\t\tunzip modelnet40_ply_hdf5_2048.zip\n\t\tmv modelnet40_ply_hdf5_2048 data\n\t\trm -r modelnet40_ply_hdf5_2048.zip\n    \t\twgetgdrive 1jXe7UR6He-pV3B7vIxMAjEt63Vhy1bV8 tmp/modelnet40_ply_hdf5_2048_valid_small.zip\n\t\tunzip -o tmp/modelnet40_ply_hdf5_2048_valid_small.zip\n\t\tmv modelnet40_ply_hdf5_2048_valid_small/* data/modelnet40_ply_hdf5_2048/\n\t\trm -r modelnet40_ply_hdf5_2048_valid_small\n\t\twget http://modelnet.cs.princeton.edu/ModelNet40.zip\n\t\tunzip ModelNet40.zip\n\t\tmv ModelNet40 data\n\t\trm -r ModelNet40.zip\n\t\trm -rf 
modelnet40_ply_hdf5_2048\n\t\t;;\n\tmesh)\n\t\twget --no-check-certificate http://modelnet.cs.princeton.edu/ModelNet40.zip\n\t\tunzip ModelNet40.zip\n\t\tmv ModelNet40 data\n\t\trm -r ModelNet40.zip\n\t\t;;\n    \t*)\n    \t\techo \"unknown argument $1\"\n\t\t;;\nesac\nrm -r tmp\n"
  },
  {
    "path": "emd/README.md",
    "content": "## Earth Mover's Distance of point clouds\n\n![](/emd/CDEMD.png)\n\nCompared to the Chamfer Distance (CD), the Earth Mover's Distance (EMD) distinguishes the visual quality of point clouds more reliably. See our [paper](http://cseweb.ucsd.edu/~mil070/projects/AAAI2020/paper.pdf) for more details.\n\nWe provide an EMD implementation for point cloud comparison, which needs only $O(n)$ memory and thus supports dense point clouds (10,000 points or more) and large batch sizes. It is based on an approximate algorithm (the auction algorithm) and cannot guarantee an exact bijective assignment, only a near-bijection. It employs a parameter $\\epsilon$ to balance the error rate against the speed of convergence. A smaller $\\epsilon$ achieves more accurate results but needs a longer time to converge. The time complexity is $O(n^2k)$, where $k$ is the number of iterations. We use $\\epsilon = 0.005, k = 50$ during training and $\\epsilon = 0.002, k = 10000$ during testing.\n\n### Compile\nRun `python3 setup.py install` to compile.\n\n### Example\nSee `emd_module.py/test_emd()` for examples.\n\n### Input\n\n- **xyz1, xyz2**: float tensors with shape `[#batch, #points, 3]`. xyz1 is the predicted point cloud and xyz2 is the ground truth point cloud. The two point clouds must have the same size and be normalized to [0, 1]. The number of points should be a multiple of 1024. The batch size should be no greater than 512. Since we only calculate gradients for xyz1, please do not swap xyz1 and xyz2.\n- **eps**: a float tensor, the parameter that balances the error rate against the speed of convergence.\n- **iters**: an int tensor, the number of iterations.\n\n### Output\n\n- **dist**: a float tensor with shape `[#batch, #points]`. sqrt(dist) gives the L2 distances between the matched pairs of points.\n- **assignment**: an int tensor with shape `[#batch, #points]`. The index of the matched point in the ground truth point cloud.\n"
  },
  {
    "path": "emd/emd.cpp",
    "content": "// EMD approximation module (based on auction algorithm)\n// author: Minghua Liu\n#include <torch/extension.h>\n#include <vector>\n\nint emd_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist, at::Tensor assignment, at::Tensor price, \n\t                 at::Tensor assignment_inv, at::Tensor bid, at::Tensor bid_increments, at::Tensor max_increments,\n\t                 at::Tensor unass_idx, at::Tensor unass_cnt, at::Tensor unass_cnt_sum, at::Tensor cnt_tmp, at::Tensor max_idx, float eps, int iters);\n\nint emd_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz, at::Tensor graddist, at::Tensor idx);\n\n\n\nint emd_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist, at::Tensor assignment, at::Tensor price, \n\t                 at::Tensor assignment_inv, at::Tensor bid, at::Tensor bid_increments, at::Tensor max_increments,\n\t                 at::Tensor unass_idx, at::Tensor unass_cnt, at::Tensor unass_cnt_sum, at::Tensor cnt_tmp, at::Tensor max_idx, float eps, int iters) {\n\treturn emd_cuda_forward(xyz1, xyz2, dist, assignment, price, assignment_inv, bid, bid_increments, max_increments, unass_idx, unass_cnt, unass_cnt_sum, cnt_tmp, max_idx, eps, iters);\n}\n\nint emd_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz, at::Tensor graddist, at::Tensor idx) {\n\n    return emd_cuda_backward(xyz1, xyz2, gradxyz, graddist, idx);\n}\n\n\n\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"forward\", &emd_forward, \"emd forward (CUDA)\");\n  m.def(\"backward\", &emd_backward, \"emd backward (CUDA)\");\n}"
  },
  {
    "path": "emd/emd_cuda.cu",
    "content": "// EMD approximation module (based on auction algorithm)\n// author: Minghua Liu\n#include <stdio.h>\n#include <ATen/ATen.h>\n\n#include <cuda.h>\n#include <iostream>\n#include <cuda_runtime.h>\n\n__device__ __forceinline__ float atomicMax(float *address, float val)\n{\n    int ret = __float_as_int(*address);\n    while(val > __int_as_float(ret))\n    {\n        int old = ret;\n        if((ret = atomicCAS((int *)address, old, __float_as_int(val))) == old)\n            break;\n    }\n    return __int_as_float(ret);\n}\n\n\n__global__ void clear(int b, int * cnt_tmp, int * unass_cnt) {\n\tfor (int i = threadIdx.x; i < b; i += blockDim.x) {\n\t\tcnt_tmp[i] = 0;\n\t\tunass_cnt[i] = 0;\n\t}\n}\n\n__global__ void calc_unass_cnt(int b, int n, int * assignment, int * unass_cnt) { \n\t// count the number of unassigned points in each batch\n\tconst int BLOCK_SIZE = 1024; \n\t__shared__ int scan_array[BLOCK_SIZE];\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tscan_array[threadIdx.x] = assignment[i * n + blockIdx.y * BLOCK_SIZE + threadIdx.x] == -1 ? 
1 : 0;\n\t\t__syncthreads();\n\t\t\n\t\tint stride = 1;\n\t\twhile(stride <= BLOCK_SIZE / 2) {\n\t\t\tint index = (threadIdx.x + 1) * stride * 2 - 1; \n\t\t\tif(index < BLOCK_SIZE)\n\t\t\t\tscan_array[index] += scan_array[index - stride]; \n\t\t\tstride = stride * 2;\n\t\t\t__syncthreads(); \n\t\t}\n\t\t__syncthreads();\n\t\t\n\t\tif (threadIdx.x == BLOCK_SIZE - 1) {\n\t\t\tatomicAdd(&unass_cnt[i], scan_array[threadIdx.x]);\n\t\t}\n\t\t__syncthreads();\n\t}\n}\n\n__global__ void calc_unass_cnt_sum(int b, int * unass_cnt, int * unass_cnt_sum) {\n\t// compute the cumulative sum over unass_cnt\n\tconst int BLOCK_SIZE = 512; // batch_size <= 512\n\t__shared__ int scan_array[BLOCK_SIZE];\n\tscan_array[threadIdx.x] = unass_cnt[threadIdx.x];\n\t__syncthreads();\n\t\n\tint stride = 1;\n\twhile(stride <= BLOCK_SIZE / 2) {\n\t\tint index = (threadIdx.x + 1) * stride * 2 - 1; \n\t\tif(index < BLOCK_SIZE)\n\t\t\tscan_array[index] += scan_array[index - stride]; \n\t\tstride = stride * 2;\n\t\t__syncthreads(); \n\t}\n\t__syncthreads();\n\tstride = BLOCK_SIZE / 4; \n\twhile(stride > 0) {\n\t\tint index = (threadIdx.x + 1) * stride * 2 - 1; \n\t\tif((index + stride) < BLOCK_SIZE)\n\t\t\tscan_array[index + stride] += scan_array[index];\n\t\tstride = stride / 2;\n\t\t__syncthreads(); \n\t}\n\t__syncthreads(); \n\t\n\t//printf(\"%d\\n\", unass_cnt_sum[b - 1]);\n\tunass_cnt_sum[threadIdx.x] = scan_array[threadIdx.x];\n}\n\n__global__ void calc_unass_idx(int b, int n, int * assignment, int * unass_idx, int * unass_cnt, int * unass_cnt_sum, int * cnt_tmp) {\n\t// list all the unassigned points\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tif (assignment[i * n + blockIdx.y * 1024 + threadIdx.x] == -1) {\n\t\t\tint idx = atomicAdd(&cnt_tmp[i], 1);\n\t\t\tunass_idx[unass_cnt_sum[i] - unass_cnt[i] + idx] = blockIdx.y * 1024 + threadIdx.x;\n\t\t} \n\t}\n}\n\n__global__ void Bid(int b, int n, const float * xyz1, const float * xyz2, float eps, int * assignment, int * 
assignment_inv, float * price, \n\t\t\t\t\tint * bid, float * bid_increments, float * max_increments, int * unass_cnt, int * unass_cnt_sum, int * unass_idx) {\n\tconst int batch = 2048, block_size = 1024, block_cnt = n / 1024;\n\t__shared__ float xyz2_buf[batch * 3];\n\t__shared__ float price_buf[batch];\n\t__shared__ float best_buf[block_size];\n\t__shared__ float better_buf[block_size];\n\t__shared__ int best_i_buf[block_size];\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tint _unass_cnt = unass_cnt[i];\n\t\tif (_unass_cnt == 0)\n\t\t\tcontinue;\n\t\tint _unass_cnt_sum = unass_cnt_sum[i];\n\t\tint unass_per_block = (_unass_cnt + block_cnt - 1) / block_cnt;\n\t\tint thread_per_unass = block_size / unass_per_block;\n\t\tint unass_this_block = max(min(_unass_cnt - (int) blockIdx.y * unass_per_block, unass_per_block), 0);\n\t\t\t\n\t\tfloat x1, y1, z1, best = -1e9, better = -1e9;\n\t\tint best_i = -1, _unass_id = -1, thread_in_unass;\n\n\t\tif (threadIdx.x < thread_per_unass * unass_this_block) {\n\t\t\t_unass_id = unass_per_block * blockIdx.y + threadIdx.x / thread_per_unass + _unass_cnt_sum - _unass_cnt;\n\t\t\t_unass_id = unass_idx[_unass_id];\n\t\t\tthread_in_unass = threadIdx.x % thread_per_unass;\n\n\t\t\tx1 = xyz1[(i * n + _unass_id) * 3 + 0];\n\t\t\ty1 = xyz1[(i * n + _unass_id) * 3 + 1];\n\t\t\tz1 = xyz1[(i * n + _unass_id) * 3 + 2];\n\t\t}\n\n\t\tfor (int k2 = 0; k2 < n; k2 += batch) {\n\t\t\tint end_k = min(n, k2 + batch) - k2;\n\t\t\tfor (int j = threadIdx.x; j < end_k * 3; j += blockDim.x) {\n\t\t\t\txyz2_buf[j] = xyz2[(i * n + k2) * 3 + j];\n\t\t\t}\n\t\t\tfor (int j = threadIdx.x; j < end_k; j += blockDim.x) {\n\t\t\t\tprice_buf[j] = price[i * n + k2 + j];\n\t\t\t}\n\t\t\t__syncthreads();\n\n\t\t\tif (_unass_id != -1) {\n\t\t\t\tint delta = (end_k + thread_per_unass - 1) / thread_per_unass;\n\t\t\t\tint l = thread_in_unass * delta;\n\t\t\t\tint r = min((thread_in_unass + 1) * delta, end_k);\n\t\t\t\tfor (int k = l; k < r; k++) 
\n\t\t\t\t//if (!last || assignment_inv[i * n + k + k2] == -1)\n\t\t\t\t{\n\t\t\t\t\tfloat x2 = xyz2_buf[k * 3 + 0] - x1;\n\t\t\t\t\tfloat y2 = xyz2_buf[k * 3 + 1] - y1;\n\t\t\t\t\tfloat z2 = xyz2_buf[k * 3 + 2] - z1;\n\t\t\t\t\t// the coordinates of points should be normalized to [0, 1]\n\t\t\t\t\tfloat d = 3.0 - sqrtf(x2 * x2 + y2 * y2 + z2 * z2) - price_buf[k];\n\t\t\t\t\tif (d > best) {\n\t\t\t\t\t\tbetter = best;\n\t\t\t\t\t\tbest = d;\n\t\t\t\t\t\tbest_i = k + k2;\n\t\t\t\t\t}\n\t\t\t\t\telse if (d > better) {\n\t\t\t\t\t\tbetter = d;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t__syncthreads();\n\t\t}\n\n\t\tbest_buf[threadIdx.x] = best;\n\t\tbetter_buf[threadIdx.x] = better;\n\t\tbest_i_buf[threadIdx.x] = best_i;\n\t\t__syncthreads();\n\t\t\n\t\tif (_unass_id != -1 && thread_in_unass == 0) {\n\t\t\tfor (int j = threadIdx.x + 1; j < threadIdx.x + thread_per_unass; j++) {\n\t\t\t\tif (best_buf[j] > best) {\n\t\t\t\t\tbetter = max(best, better_buf[j]);\n\t\t\t\t\tbest = best_buf[j];\n\t\t\t\t\tbest_i = best_i_buf[j];\n\t\t\t\t}\n\t\t\t\telse better = max(better, best_buf[j]);\n\t\t\t}\n\t\t\tbid[i * n + _unass_id] = best_i;\n\t\t\tbid_increments[i * n + _unass_id] = best - better + eps; \n\t\t\tatomicMax(&max_increments[i * n + best_i], best - better + eps);\n\t\t}\n\t}\n}\n\n__global__ void GetMax(int b, int n, int * assignment, int * bid, float * bid_increments, float * max_increments, int * max_idx) {\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tint j = threadIdx.x + blockIdx.y * blockDim.x;\n\t\tif (assignment[i * n + j] == -1) {\n\t\t\tint bid_id = bid[i * n + j];\n\t\t\tfloat bid_inc = bid_increments[i * n + j];\n\t\t\tfloat max_inc = max_increments[i * n + bid_id];\n\t\t\tif (bid_inc - 1e-6 <= max_inc && max_inc <= bid_inc + 1e-6) \n\t\t\t{\n\t\t\t\tmax_idx[i * n + bid_id] = j;\n\t\t\t}\n\t\t}\n\t}\n}\n\n__global__ void Assign(int b, int n, int * assignment, int * assignment_inv, float * price, int * bid, float * bid_increments, float * 
max_increments, int * max_idx, bool last) {\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tint j = threadIdx.x + blockIdx.y * blockDim.x;\n\t\tif (assignment[i * n + j] == -1) {\n\t\t\tint bid_id = bid[i * n + j];\n\t\t\tif (last || max_idx[i * n + bid_id] == j) \n\t\t\t{\n\t\t\t\tfloat bid_inc = bid_increments[i * n + j];\n\t\t\t\tint ass_inv = assignment_inv[i * n + bid_id];\n\t\t\t\tif (!last && ass_inv != -1) {\n\t\t\t\t\tassignment[i * n + ass_inv] = -1;\n\t\t\t\t}\n\t\t\t\tassignment_inv[i * n + bid_id] = j;\n\t\t\t\tassignment[i * n + j] = bid_id;\n\t\t\t\tprice[i * n + bid_id] += bid_inc;\n\t\t\t\tmax_increments[i * n + bid_id] = -1e9;\n\t\t\t}\n\t\t}\n\t}\n}\n\n__global__ void CalcDist(int b, int n, float * xyz1, float * xyz2, float * dist, int * assignment) {\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tint j = threadIdx.x + blockIdx.y * blockDim.x;\n\t\tint k = assignment[i * n + j];\n\t\tfloat deltax = xyz1[(i * n + j) * 3 + 0] - xyz2[(i * n + k) * 3 + 0];\n\t\tfloat deltay = xyz1[(i * n + j) * 3 + 1] - xyz2[(i * n + k) * 3 + 1];\n\t\tfloat deltaz = xyz1[(i * n + j) * 3 + 2] - xyz2[(i * n + k) * 3 + 2];\n\t\tdist[i * n + j] = deltax * deltax + deltay * deltay + deltaz * deltaz;\n\t}\n}\n\nint emd_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist, at::Tensor assignment, at::Tensor price, \n\t                 at::Tensor assignment_inv, at::Tensor bid, at::Tensor bid_increments, at::Tensor max_increments,\n\t                 at::Tensor unass_idx, at::Tensor unass_cnt, at::Tensor unass_cnt_sum, at::Tensor cnt_tmp, at::Tensor max_idx, float eps, int iters) {\n\n\tconst auto batch_size = xyz1.size(0);\n\tconst auto n = xyz1.size(1); //num_points point cloud A\n\tconst auto m = xyz2.size(1); //num_points point cloud B\n\t\n\tif (n != m) {\n\t\tprintf(\"Input Error! The two point clouds should have the same size.\\n\");\n\t\treturn -1;\n\t}\n\n\tif (batch_size > 512) {\n\t\tprintf(\"Input Error! 
The batch size should be no greater than 512.\\n\");\n\t\treturn -1;\n\t}\n\n\tif (n % 1024 != 0) {\n\t\tprintf(\"Input Error! The size of the point clouds should be a multiple of 1024.\\n\");\n\t\treturn -1;\n\t}\n\n\t//cudaEvent_t start,stop;\n\t//cudaEventCreate(&start);\n\t//cudaEventCreate(&stop);\n\t//cudaEventRecord(start);\n\t//int iters = 50;\n\tfor (int i = 0; i < iters; i++) {\n\t\tclear<<<1, batch_size>>>(batch_size, cnt_tmp.data<int>(), unass_cnt.data<int>());\n\t\tcalc_unass_cnt<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, assignment.data<int>(), unass_cnt.data<int>());\n\t\tcalc_unass_cnt_sum<<<1, batch_size>>>(batch_size, unass_cnt.data<int>(), unass_cnt_sum.data<int>());\n\t\tcalc_unass_idx<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, assignment.data<int>(), unass_idx.data<int>(), unass_cnt.data<int>(), \n\t\t\t\t\t\t\t\t\t\t\t unass_cnt_sum.data<int>(), cnt_tmp.data<int>());\n\t\tBid<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, xyz1.data<float>(), xyz2.data<float>(), eps, assignment.data<int>(), assignment_inv.data<int>(), \n\t\t\t                          price.data<float>(), bid.data<int>(), bid_increments.data<float>(), max_increments.data<float>(),\n\t\t\t                          unass_cnt.data<int>(), unass_cnt_sum.data<int>(), unass_idx.data<int>());\n\t\tGetMax<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, assignment.data<int>(), bid.data<int>(), bid_increments.data<float>(), max_increments.data<float>(), max_idx.data<int>());\n\t\tAssign<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, assignment.data<int>(), assignment_inv.data<int>(), price.data<float>(), bid.data<int>(),\n\t\t\t\t\t\t\t\t\t  bid_increments.data<float>(), max_increments.data<float>(), max_idx.data<int>(), i == iters - 1);\n\t}\n\tCalcDist<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, xyz1.data<float>(), xyz2.data<float>(), dist.data<float>(), 
assignment.data<int>());\n\t//cudaEventRecord(stop);\n\t//cudaEventSynchronize(stop);\n\t//float elapsedTime;\n\t//cudaEventElapsedTime(&elapsedTime,start,stop);\n\t//printf(\"%lf\\n\", elapsedTime);\n\n\tcudaError_t err = cudaGetLastError();\n\t  if (err != cudaSuccess) {\n\t    printf(\"error in emd forward: %s\\n\", cudaGetErrorString(err));\n\t    return 0;\n\t  }\n\t  return 1;\n}\n\n__global__ void NmDistanceGradKernel(int b, int n, const float * xyz1, const float * xyz2, const float * grad_dist, const int * idx, float * grad_xyz){\n\tfor (int i = blockIdx.x; i < b; i += gridDim.x) {\n\t\tfor (int j = threadIdx.x + blockIdx.y * blockDim.x; j < n; j += blockDim.x * gridDim.y) {\n\t\t\tfloat x1 = xyz1[(i * n + j) * 3 + 0];\n\t\t\tfloat y1 = xyz1[(i * n + j) * 3 + 1];\n\t\t\tfloat z1 = xyz1[(i * n + j) * 3 + 2];\n\t\t\tint j2 = idx[i * n + j];\n\t\t\tfloat x2 = xyz2[(i * n + j2) * 3 + 0];\n\t\t\tfloat y2 = xyz2[(i * n + j2) * 3 + 1];\n\t\t\tfloat z2 = xyz2[(i * n + j2) * 3 + 2];\n\t\t\tfloat g = grad_dist[i * n + j] * 2;\n\t\t\tatomicAdd(&(grad_xyz[(i * n + j) * 3 + 0]), g * (x1 - x2));\n\t\t\tatomicAdd(&(grad_xyz[(i * n + j) * 3 + 1]), g * (y1 - y2));\n\t\t\tatomicAdd(&(grad_xyz[(i * n + j) * 3 + 2]), g * (z1 - z2));\n\t\t}\n\t}\n}\n\nint emd_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz, at::Tensor graddist, at::Tensor idx){\n\tconst auto batch_size = xyz1.size(0);\n\tconst auto n = xyz1.size(1); \n\tconst auto m = xyz2.size(1); \n\n\tNmDistanceGradKernel<<<dim3(batch_size, n / 1024, 1), 1024>>>(batch_size, n, xyz1.data<float>(), xyz2.data<float>(), graddist.data<float>(), idx.data<int>(), gradxyz.data<float>());\n\t\n\tcudaError_t err = cudaGetLastError();\n\t  if (err != cudaSuccess) {\n\t    printf(\"error in emd backward: %s\\n\", cudaGetErrorString(err));\n\t    return 0;\n\t  }\n\t  return 1;\n\t\n}\n"
  },
  {
    "path": "emd/emd_module.py",
    "content": "# EMD approximation module (based on auction algorithm)\n# memory complexity: O(n)\n# time complexity: O(n^2 * iter) \n# author: Minghua Liu\n\n# Input:\n# xyz1, xyz2: [#batch, #points, 3]\n# where xyz1 is the predicted point cloud and xyz2 is the ground truth point cloud \n# two point clouds should have same size and be normalized to [0, 1]\n# #points should be a multiple of 1024\n# #batch should be no greater than 512\n# eps is a parameter which balances the error rate and the speed of convergence\n# iters is the number of iteration\n# we only calculate gradient for xyz1\n\n# Output:\n# dist: [#batch, #points],  sqrt(dist) -> L2 distance \n# assignment: [#batch, #points], index of the matched point in the ground truth point cloud\n# the result is an approximation and the assignment is not guranteed to be a bijection\n\nimport time\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torch.autograd import Function\nimport emd\n\n\n\n\nclass emdFunction(Function):\n    @staticmethod\n    def forward(ctx, xyz1, xyz2, eps, iters):\n\n        batchsize, n, _ = xyz1.size()\n        _, m, _ = xyz2.size()\n\n        assert(n == m)\n        assert(xyz1.size()[0] == xyz2.size()[0])\n        assert(n % 1024 == 0)\n        assert(batchsize <= 512)\n\n        xyz1 = xyz1.contiguous().float().cuda()\n        xyz2 = xyz2.contiguous().float().cuda()\n        dist = torch.zeros(batchsize, n, device='cuda').contiguous()\n        assignment = torch.zeros(batchsize, n, device='cuda', dtype=torch.int32).contiguous() - 1\n        assignment_inv = torch.zeros(batchsize, m, device='cuda', dtype=torch.int32).contiguous() - 1\n        price = torch.zeros(batchsize, m, device='cuda').contiguous()\n        bid = torch.zeros(batchsize, n, device='cuda', dtype=torch.int32).contiguous()\n        bid_increments = torch.zeros(batchsize, n, device='cuda').contiguous()\n        max_increments = torch.zeros(batchsize, m, device='cuda').contiguous()\n        unass_idx = 
torch.zeros(batchsize * n, device='cuda', dtype=torch.int32).contiguous()\n        max_idx = torch.zeros(batchsize * m, device='cuda', dtype=torch.int32).contiguous()\n        unass_cnt = torch.zeros(512, dtype=torch.int32, device='cuda').contiguous()\n        unass_cnt_sum = torch.zeros(512, dtype=torch.int32, device='cuda').contiguous()\n        cnt_tmp = torch.zeros(512, dtype=torch.int32, device='cuda').contiguous()\n\n        emd.forward(xyz1, xyz2, dist, assignment, price, assignment_inv, bid, bid_increments, max_increments, unass_idx, unass_cnt, unass_cnt_sum, cnt_tmp, max_idx, eps, iters)\n\n        ctx.save_for_backward(xyz1, xyz2, assignment)\n        return dist, assignment\n\n    @staticmethod\n    def backward(ctx, graddist, gradidx):\n        xyz1, xyz2, assignment = ctx.saved_tensors\n        graddist = graddist.contiguous()\n\n        gradxyz1 = torch.zeros(xyz1.size(), device='cuda').contiguous()\n        gradxyz2 = torch.zeros(xyz2.size(), device='cuda').contiguous()\n\n        emd.backward(xyz1, xyz2, gradxyz1, graddist, assignment)\n        return gradxyz1, gradxyz2, None, None\n\nclass emdModule(nn.Module):\n    def __init__(self):\n        super(emdModule, self).__init__()\n\n    def forward(self, input1, input2, eps, iters):\n        return emdFunction.apply(input1, input2, eps, iters)\n\ndef test_emd():\n    x1 = torch.rand(20, 8192, 3).cuda()\n    x2 = torch.rand(20, 8192, 3).cuda()\n    emd = emdModule()\n    start_time = time.perf_counter()\n    dis, assignment = emd(x1, x2, 0.05, 3000)\n    print(\"Input_size: \", x1.shape)\n    print(\"Runtime: %lfs\" % (time.perf_counter() - start_time))\n    print(\"EMD: %lf\" % np.sqrt(dis.cpu()).mean())\n    print(\"|set(assignment)|: %d\" % assignment.unique().numel())\n    # verify the reported distances by gathering the matched points in numpy\n    assignment = assignment.cpu().numpy()\n    assignment = np.expand_dims(assignment, -1)\n    x1 = x1.cpu().numpy()\n    x2 = x2.cpu().numpy()\n    x2 = np.take_along_axis(x2, assignment, axis = 1)\n    d = (x1 - x2) * (x1 - x2)\n    print(\"Verified EMD: %lf\" % np.sqrt(d.sum(-1)).mean())\n\n#test_emd()\n        "
  },
  {
    "path": "emd/setup.py",
    "content": "from setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\nsetup(\n    name='emd',\n    ext_modules=[\n        CUDAExtension('emd', [\n            'emd.cpp',\n            'emd_cuda.cu',\n        ]),\n    ],\n    cmdclass={\n        'build_ext': BuildExtension\n    })"
  },
  {
    "path": "eval_cor.sh",
    "content": "\n###\n # @Description: \n # @Autor: Jiachen Sun\n # @Date: 2022-02-16 22:23:16\n # @LastEditors: Jiachen Sun\n # @LastEditTime: 2022-02-23 17:20:27\n### \nif [ ! -d \"output\" ]; then\n    mkdir \"output\"\nfi\n\nfor model in 'gdanet'; do #'pointnet' 'pct' 'rscnn' 'pointnet2'  'simpleview' 'dgcnn'  'pointMLP' 'curvenet'; do\nfor cor in 'uniform' 'gaussian' 'background' 'impulse' 'upsampling' 'distortion_rbf' 'distortion_rbf_inv' 'density' 'density_inc' 'shear' 'rotation' 'cutout' 'distortion'  'occlusion' 'lidar'; do\n\nfor sev in 1 2 3 4 5; do\n\n# for aug in 'rsmix' 'cutmix_r' 'cutmix_k' 'mixup' 'pgd'; do\n\n# CUDA_VISIBLE_DEVICES=0 python main.py --entry test --model-path runs/${aug}_${model}_run_1/model_best_test.pth --exp-config configs/corruption/${model}.yaml --severity ${sev} --corruption ${cor} --output ./output/${model}_${aug}_${cor}_${sev}.txt\n\n# done\n\n# for adapt in 'tent' 'bn'; do\n\n# CUDA_VISIBLE_DEVICES=0 python main.py --entry test --model-path cor_exp/dgcnn_${model}_run_1/model_best_test.pth --exp-config configs/${adapt}/${model}.yaml --severity ${sev} --corruption ${cor} --output ./output/${model}_${adapt}_${cor}_${sev}.txt\n\n# done\n\nCUDA_VISIBLE_DEVICES=0 python main.py --entry test --model-path runs/dgcnn_${model}_run_1/model_best_test.pth --exp-config configs/corruption/${model}.yaml --severity ${sev} --corruption ${cor} --output ./output/${model}_none_${cor}_${sev}.txt\n\ndone\ndone\ndone\n"
  },
  {
    "path": "eval_og.sh",
    "content": "\nif [ ! -d \"output\" ]; then\n    mkdir \"output\"\nfi\n\nfor model in 'gdanet'; do #'dgcnn' 'rscnn' 'pct' 'pointnet' 'pointnet2'  'simpleview'  'curvenet' 'pointMLP';; do\n# for aug in 'pgd'; do\n\nCUDA_VISIBLE_DEVICES=1 python main.py --entry test --model-path runs/dgcnn_${model}_run_1/model_best_test.pth --exp-config configs/dgcnn_${model}_run_1.yaml --output ./output/${model}_clean.txt\n\ndone\n"
  },
  {
    "path": "eval_tent_cutmix.sh",
    "content": "\nif [ ! -d \"output\" ]; then\n    mkdir \"output\"\nfi\n\nfor model in 'rscnn' 'pct' 'pointnet' 'pointnet2'  'simpleview' 'dgcnn'; do\nfor cor in 'uniform' 'gaussian' 'background' 'impulse' 'upsampling' 'distortion_rbf' 'distortion_rbf_inv' 'density' 'density_inc' 'shear' 'rotation' 'cutout' 'distortion'  'occlusion' 'lidar'; do\n\nfor sev in 1 2 3 4 5; do\n\nCUDA_VISIBLE_DEVICES=0 python main.py --entry test --model-path runs/cutmix_r_${model}_run_1/model_best_test.pth --exp-config configs/tent_cutmix/${model}.yaml --severity ${sev} --corruption ${cor} --output ./output/${model}_megamerger_${cor}_${sev}.txt \n\ndone\ndone\ndone\n"
  },
  {
    "path": "gdrivedl.py",
    "content": "#!/usr/bin/env python\nfrom __future__ import unicode_literals\nimport json\nimport os\nimport re\nimport sys\nimport unicodedata\nimport argparse\nimport logging\n\ntry:\n    #Python3\n    from urllib.request import Request, urlopen, build_opener, HTTPCookieProcessor\n    from http.cookiejar import CookieJar\nexcept ImportError:\n    #Python2\n    from urllib2 import Request, urlopen, build_opener, HTTPCookieProcessor\n    from cookielib import CookieJar\n\nITEM_URL = 'https://drive.google.com/open?id={id}'\nFILE_URL = 'https://docs.google.com/uc?export=download&id={id}&confirm={confirm}'\nFOLDER_URL = 'https://drive.google.com/embeddedfolderview?id={id}#list'\nCHUNKSIZE = 4096\nUSER_AGENT = 'Mozilla/5.0'\n\nID_PATTERNS = [\n    re.compile('/file/d/([0-9A-Za-z_-]{10,})(?:/|$)', re.IGNORECASE),\n    re.compile('/folders/([0-9A-Za-z_-]{10,})(?:/|$)', re.IGNORECASE),\n    re.compile('id=([0-9A-Za-z_-]{10,})(?:&|$)', re.IGNORECASE),\n    re.compile('([0-9A-Za-z_-]{10,})', re.IGNORECASE)\n]\nFOLDER_PATTERN = re.compile('<a href=\"(https://drive.google.com/.*?)\".*?<div class=\"flip-entry-title\">(.*?)</div>',\n                            re.DOTALL | re.IGNORECASE)\nCONFIRM_PATTERN = re.compile(\"download_warning[0-9A-Za-z_-]+=([0-9A-Za-z_-]+);\",\n                             re.IGNORECASE)\nFILENAME_PATTERN = re.compile('attachment;filename=\"(.*?)\"',\n                             re.IGNORECASE)\n\ndef output(text):\n    try:\n        sys.stdout.write(text)\n    except UnicodeEncodeError:\n        sys.stdout.write(text.encode('utf8'))\n\n# Big thanks to leo_wallentin for below sanitize function (modified slightly for this script)\n# https://gitlab.com/jplusplus/sanitize-filename/-/blob/master/sanitize_filename/sanitize_filename.py\ndef sanitize(filename):\n    blacklist = [\"\\\\\", \"/\", \":\", \"*\", \"?\", \"\\\"\", \"<\", \">\", \"|\", \"\\0\"]\n    reserved = [\n        \"CON\", \"PRN\", \"AUX\", \"NUL\", \"COM1\", \"COM2\", \"COM3\", \"COM4\", 
\"COM5\",\n        \"COM6\", \"COM7\", \"COM8\", \"COM9\", \"LPT1\", \"LPT2\", \"LPT3\", \"LPT4\", \"LPT5\",\n        \"LPT6\", \"LPT7\", \"LPT8\", \"LPT9\",\n    ]\n\n    filename = \"\".join(c for c in filename if c not in blacklist)\n    filename = \"\".join(c for c in filename if 31 < ord(c))\n    filename = unicodedata.normalize(\"NFKD\", filename)\n    filename = filename.rstrip(\". \")\n    filename = filename.strip()\n\n    if all([x == \".\" for x in filename]):\n        filename = \"_\" + filename\n    if filename in reserved:\n        filename = \"_\" + filename\n    if len(filename) == 0:\n        filename = \"_\"\n    if len(filename) > 255:\n        parts = re.split(r\"/|\\\\\", filename)[-1].split(\".\")\n        if len(parts) > 1:\n            ext = \".\" + parts.pop()\n            filename = filename[:-len(ext)]\n        else:\n            ext = \"\"\n        if filename == \"\":\n            filename = \"_\"\n        if len(ext) > 254:\n            ext = ext[254:]\n        maxl = 255 - len(ext)\n        filename = filename[:maxl]\n        filename = filename + ext\n        filename = filename.rstrip(\". 
\")\n        if len(filename) == 0:\n            filename = \"_\"\n\n    return filename\n\ndef url_to_id(url):\n    for pattern in ID_PATTERNS:\n        match = pattern.search(url)\n        if match:\n            return match.group(1)\n\n    logging.error('Unable to get ID from {}'.format(url))\n    sys.exit(1)\n\nclass GDriveDL(object):\n    def __init__(self, quiet=False, overwrite=False):\n        self._quiet = quiet\n        self._overwrite = overwrite\n        self._create_empty_dirs = True\n        self._opener = build_opener(HTTPCookieProcessor(CookieJar()))\n\n    def _request(self, url):\n        logging.debug('Requesting: {}'.format(url))\n        req = Request(url, headers={'User-Agent': USER_AGENT})\n        return self._opener.open(req)\n\n    def process_url(self, url, directory, filename=None):\n        id = url_to_id(url)\n\n        if '://' not in url:\n            url = ITEM_URL.format(id=id)\n            resp = self._request(url)\n            url = resp.geturl()\n\n        if '/file/' in url.lower():\n            self.process_file(id, directory, filename=filename)\n        elif '/folders/' in url.lower():\n            if filename:\n                logging.warn(\"Ignoring --output-document option for folder download\")\n            self.process_folder(id, directory)\n        else:\n            logging.error('That id {} returned an unknown url {}'.format(id, url))\n            sys.exit(1)\n\n    def process_folder(self, id, directory):\n        url = FOLDER_URL.format(id=id)\n        resp = self._request(url)\n        html = resp.read().decode('utf-8')\n\n        matches = re.findall(FOLDER_PATTERN, html)\n\n        if not matches and 'ServiceLogin' in html:\n            logging.error('Folder: {} does not have link sharing enabled'.format(id))\n            sys.exit(1)\n\n        for match in matches:\n            url, item_name = match\n            id = url_to_id(url)\n\n            if '/file/' in url.lower():\n                
self.process_file(id, directory, filename=sanitize(item_name))\n            elif '/folders/' in url.lower():\n                self.process_folder(id, os.path.join(directory, sanitize(item_name)))\n\n        if self._create_empty_dirs and not os.path.exists(directory):\n            os.makedirs(directory)\n            logging.info('Directory: {directory} [Created]'.format(directory=directory))\n\n    def process_file(self, id, directory, filename=None, confirm=''):\n        file_path = None\n\n        if filename:\n            file_path = filename if os.path.isabs(filename) else os.path.join(directory, filename)\n            if not self._overwrite and os.path.exists(file_path):\n                logging.info('{file_path} [Exists]'.format(file_path=file_path))\n                return\n\n        url = FILE_URL.format(id=id, confirm=confirm)\n        resp = self._request(url)\n\n        if 'ServiceLogin' in resp.url:\n            logging.error('File: {} does not have link sharing enabled'.format(id))\n            sys.exit(1)\n\n        cookies = resp.headers.get('Set-Cookie') or ''\n        if not confirm and 'download_warning' in cookies:\n            confirm = CONFIRM_PATTERN.search(cookies)\n            return self.process_file(id, directory, filename=filename, confirm=confirm.group(1))\n\n        if not file_path:\n            filename = FILENAME_PATTERN.search(resp.headers.get('content-disposition')).group(1)\n            file_path = os.path.join(directory, sanitize(filename))\n            if not self._overwrite and os.path.exists(file_path):\n                logging.info('{file_path} [Exists]'.format(file_path=file_path))\n                return\n\n        directory = os.path.dirname(file_path)\n        if not os.path.exists(directory):\n            os.makedirs(directory)\n            logging.info('Directory: {directory} [Created]'.format(directory=directory))\n\n        try:\n            with open(file_path, 'wb') as f:\n                dl = 0\n                
last_out = 0\n                while True:\n                    chunk = resp.read(CHUNKSIZE)\n                    if not chunk:\n                        break\n\n                    if b'Too many users have viewed or downloaded this file recently' in chunk:\n                        logging.error('Quota exceeded for this file')\n                        sys.exit(1)\n\n                    dl += len(chunk)\n                    f.write(chunk)\n                    if not self._quiet and (not last_out or dl-last_out > 1048576):\n                        output(\"\\r{} {:.2f}MB\".format(\n                            file_path,\n                            dl / 1024 / 1024,\n                        ))\n                        last_out = dl\n                        sys.stdout.flush()\n        except:\n            if os.path.exists(file_path):\n                os.remove(file_path)\n            raise\n\n        if not self._quiet:\n            output('\\n')\n\n\ndef main(args=None):\n    parser = argparse.ArgumentParser(description='Download Google Drive files & folders')\n    parser.add_argument(\"url\", help=\"Shared Google Drive URL\")\n    parser.add_argument(\"-P\", \"--directory-prefix\", default='.', help=\"Output directory (default is current directory)\")\n    parser.add_argument(\"-O\", \"--output-document\", help=\"Output filename. Defaults to the GDrive filename. 
Not valid when downloading folders\")\n    parser.add_argument(\"-q\", \"--quiet\", help=\"Disable console output\", default=False, action=\"store_true\")\n    args = parser.parse_args(args)\n\n    if args.quiet:\n        logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.WARN)\n    else:\n        logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)\n\n    url = args.url\n    id = ''\n\n    for pattern in ID_PATTERNS:\n        match = pattern.search(url)\n        if match:\n            id = match.group(1)\n            break\n\n    if not id:\n        logging.error('Unable to get ID from {}'.format(url))\n        sys.exit(1)\n\n    gdrive = GDriveDL(quiet=args.quiet, overwrite=args.output_document is not None)\n    gdrive.process_url(url, directory=args.directory_prefix, filename=args.output_document)\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "main.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim.lr_scheduler as lr_scheduler\nimport random\nfrom dataloader import create_dataloader\nfrom time import time\nfrom datetime import datetime\nfrom progressbar import ProgressBar\nimport models\nfrom collections import defaultdict\nimport os\nimport numpy as np\nimport argparse\nfrom all_utils import (\n    TensorboardManager, PerfTrackTrain,\n    PerfTrackVal, TrackTrain, smooth_loss, DATASET_NUM_CLASS,\n    rscnn_voting_evaluate_cls, pn2_vote_evaluate_cls)\nfrom configs import get_cfg_defaults\nimport pprint\nfrom pointnet_pyt.pointnet.model import feature_transform_regularizer\nimport sys\nimport aug_utils\nfrom third_party import bn_helper, tent_helper\n\nDEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nif DEVICE.type == 'cpu':\n    print('WARNING: Using CPU')\n\ndef adapt_bn(data,model,cfg):\n    model = bn_helper.configure_model(model,eps=1e-5, momentum=0.1,reset_stats=False,no_stats=False)\n    for _ in range(cfg.ITER):\n        model(**data) \n    print(\"Adaptation Done ...\")\n    model.eval()\n    return model\n\ndef adapt_tent(data,model,cfg):\n    model = tent_helper.configure_model(model,eps=1e-5, momentum=0.1)\n    parameter,_ = tent_helper.collect_params(model)\n    optimizer_tent = torch.optim.SGD(parameter, lr=0.001,momentum=0.9)\n\n    for _ in range(cfg.ITER):\n        # index = np.random.choice(args.number,args.batch_size,replace=False)\n        tent_helper.forward_and_adapt(data,model,optimizer_tent)\n    print(\"Adaptation Done ...\")\n    model.eval()\n    return model\n\n\ndef check_inp_fmt(task, data_batch, dataset_name):\n    if task in ['cls', 'cls_trans']:\n        # assert set(data_batch.keys()) == {'pc', 'label'}\n        # print(data_batch['pc'],data_batch['label'])\n        pc, label = data_batch['pc'], data_batch['label']\n        # special case made for modelnet40_dgcnn to match the\n        # original 
implementation\n        # dgcnn loads torch.DoubleTensor for the test dataset\n        if dataset_name == 'modelnet40_dgcnn':\n            assert isinstance(pc, torch.FloatTensor) or isinstance(\n                pc, torch.DoubleTensor)\n        else:\n            assert isinstance(pc, torch.FloatTensor)\n        assert isinstance(label, torch.LongTensor)\n        assert len(pc.shape) == 3\n        assert len(label.shape) == 1\n        b1, _, y = pc.shape[0], pc.shape[1], pc.shape[2]\n        b2 = label.shape[0]\n        assert b1 == b2\n        assert y == 3\n        assert label.max().item() < DATASET_NUM_CLASS[dataset_name]\n        assert label.min().item() >= 0\n    else:\n        raise NotImplementedError(task)\n\n\ndef check_out_fmt(task, out, dataset_name):\n    if task == 'cls':\n        assert set(out.keys()) == {'logit'}\n        logit = out['logit']\n        assert isinstance(logit, torch.FloatTensor if DEVICE.type == 'cpu' else torch.cuda.FloatTensor)\n        assert len(logit.shape) == 2\n        assert DATASET_NUM_CLASS[dataset_name] == logit.shape[1]\n    elif task == 'cls_trans':\n        assert set(out.keys()) == {'logit', 'trans_feat'}\n        logit = out['logit']\n        trans_feat = out['trans_feat']\n        assert isinstance(logit, torch.FloatTensor if DEVICE.type == 'cpu' else torch.cuda.FloatTensor)\n        assert isinstance(trans_feat, torch.FloatTensor if DEVICE.type == 'cpu' else torch.cuda.FloatTensor)\n        assert len(logit.shape) == 2\n        assert len(trans_feat.shape) == 3\n        assert trans_feat.shape[0] == logit.shape[0]\n        # 64 coming from pointnet implementation\n        assert (trans_feat.shape[1] == trans_feat.shape[2]) and (trans_feat.shape[1] == 64)\n        assert DATASET_NUM_CLASS[dataset_name] == logit.shape[1]\n    else:\n        raise NotImplementedError(task)\n\n\ndef get_inp(task, model, data_batch, batch_proc, dataset_name):\n    check_inp_fmt(task, data_batch, dataset_name)\n    if batch_proc is not None:\n        
data_batch = batch_proc(data_batch, DEVICE)\n        check_inp_fmt(task, data_batch, dataset_name)\n\n    if isinstance(model, nn.DataParallel):\n        model_type = type(model.module)\n    else:\n        model_type = type(model)\n\n    if task in ['cls', 'cls_trans']:\n        pc = data_batch['pc']\n        inp = {'pc': pc}\n    else:\n        assert False\n\n    return  inp\n\n\ndef get_loss(task, loss_name, data_batch, out, dataset_name):\n    \"\"\"\n    Returns the tensor loss function\n    :param task:\n    :param loss_name:\n    :param data_batch: batched data; note not applied data_batch\n    :param out: output from the model\n    :param dataset_name:\n    :return: tensor\n    \"\"\"\n    check_out_fmt(task, out, dataset_name)\n    if task == 'cls':\n        label = data_batch['label'].to(out['logit'].device)\n        if loss_name == 'cross_entropy':\n            if 'label_2' in data_batch.keys():\n                label_2 = data_batch['label_2'].to(out['logit'].device)\n                if isinstance(data_batch['lam'],torch.Tensor):\n                    loss = 0\n                    for i in range(data_batch['pc'].shape[0]):\n                        loss_tmp = smooth_loss(out['logit'][i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - data_batch['lam'][i]) + smooth_loss(out['logit'][i].unsqueeze(0), label_2[i].unsqueeze(0).long()) * data_batch['lam'][i]\n                        loss += loss_tmp\n                    loss = loss / data_batch['pc'].shape[0]\n                else:\n                    loss = smooth_loss(out['logit'], label) * (1 - data_batch['lam']) + smooth_loss(out['logit'], label_2) * data_batch['lam']\n            else:\n                loss = F.cross_entropy(out['logit'], label)\n        # source: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/util.py\n        elif loss_name == 'smooth':\n            if 'label_2' in data_batch.keys():\n                label_2 = data_batch['label_2'].to(out['logit'].device)\n                if 
isinstance(data_batch['lam'],torch.Tensor):\n                    loss = 0\n                    for i in range(data_batch['pc'].shape[0]):\n                        loss_tmp = smooth_loss(out['logit'][i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - data_batch['lam'][i]) + smooth_loss(out['logit'][i].unsqueeze(0), label_2[i].unsqueeze(0).long()) * data_batch['lam'][i]\n                        loss += loss_tmp\n                    loss = loss / data_batch['pc'].shape[0]\n                else:\n                    loss = smooth_loss(out['logit'], label) * (1 - data_batch['lam']) + smooth_loss(out['logit'], label_2) * data_batch['lam']\n            else:\n                loss = smooth_loss(out['logit'], label)\n        else:\n            assert False\n    elif task == 'cls_trans':\n        label = data_batch['label'].to(out['logit'].device)\n        trans_feat = out['trans_feat']\n        logit = out['logit']\n        if loss_name == 'cross_entropy':\n            if 'label_2' in data_batch.keys():\n                label_2 = data_batch['label_2'].to(out['logit'].device)\n                if isinstance(data_batch['lam'],torch.Tensor):\n                    loss = 0\n                    for i in range(data_batch['pc'].shape[0]):\n                        loss_tmp = smooth_loss(out['logit'][i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - data_batch['lam'][i]) + smooth_loss(out['logit'][i].unsqueeze(0), label_2[i].unsqueeze(0).long()) * data_batch['lam'][i]\n                        loss += loss_tmp\n                    loss = loss / data_batch['pc'].shape[0]\n                else:\n                    loss = smooth_loss(out['logit'], label) * (1 - data_batch['lam']) + smooth_loss(out['logit'], label_2) * data_batch['lam']\n            else:\n                loss = F.cross_entropy(out['logit'], label)\n            loss += feature_transform_regularizer(trans_feat) * 0.001\n        elif loss_name == 'smooth':\n            if 'label_2' in data_batch.keys():\n             
   label_2 = data_batch['label_2'].to(out['logit'].device)\n                if isinstance(data_batch['lam'],torch.Tensor):\n                    loss = 0\n                    for i in range(data_batch['pc'].shape[0]):\n                        loss_tmp = smooth_loss(out['logit'][i].unsqueeze(0), label[i].unsqueeze(0).long()) * (1 - data_batch['lam'][i]) + smooth_loss(out['logit'][i].unsqueeze(0), label_2[i].unsqueeze(0).long()) * data_batch['lam'][i]\n                        loss += loss_tmp\n                    loss = loss / data_batch['pc'].shape[0]\n                else:\n                    loss = smooth_loss(out['logit'], label) * (1 - data_batch['lam']) + smooth_loss(out['logit'], label_2) * data_batch['lam']\n            else:\n                loss = smooth_loss(out['logit'], label)\n            loss += feature_transform_regularizer(trans_feat) * 0.001\n        else:\n            assert False\n    else:\n        assert False\n\n    return loss\n\n\ndef validate(task, loader, model, dataset_name, adapt = None, confusion = False):\n    model.eval()\n\n    def get_extra_param():\n        return None\n\n    perf = PerfTrackVal(task, extra_param=get_extra_param())\n    time_dl = 0\n    time_gi = 0\n    time_model = 0\n    time_upd = 0\n\n    with torch.no_grad():\n        bar = ProgressBar(max_value=len(loader))\n        time5  = time()\n        if confusion:\n            pred = []\n            ground = []\n        for i, data_batch in enumerate(loader):\n            time1 = time()\n            inp = get_inp(task, model, data_batch, loader.dataset.batch_proc, dataset_name)\n            time2 = time()\n\n            if adapt.METHOD == 'bn':\n                model = adapt_bn(inp,model,adapt)\n            elif adapt.METHOD == 'tent':\n                model = adapt_tent(inp,model,adapt)\n\n            out = model(**inp)\n\n            if confusion:\n                pred.append(out['logit'].squeeze().cpu())\n                
ground.append(data_batch['label'].squeeze().cpu())\n\n            time3 = time()\n            perf.update(data_batch=data_batch, out=out)\n            time4 = time()\n\n            time_dl += (time1 - time5)\n            time_gi += (time2 - time1)\n            time_model += (time3 - time2)\n            time_upd += (time4 - time3)\n\n            time5 = time()\n            bar.update(i)\n\n    print(f\"Time DL: {time_dl}, Time Get Inp: {time_gi}, Time Model: {time_model}, Time Update: {time_upd}\")\n    if not confusion:\n        return perf.agg()\n    else:\n        pred = np.argmax(torch.cat(pred).numpy(), axis=1)\n        # print(pred)\n        ground = torch.cat(ground).numpy()\n        # print(ground)\n        return perf.agg(), pred, ground\n\ndef train(task, loader, model, optimizer, loss_name, dataset_name, cfg):\n    model.train()\n\n    def get_extra_param():\n       return None\n\n    perf = PerfTrackTrain(task, extra_param=get_extra_param())\n    time_forward = 0\n    time_backward = 0\n    time_data_loading = 0\n\n    time3  = time()\n    for i, data_batch in enumerate(loader):\n        time1 = time()\n\n        if cfg.AUG.NAME == 'cutmix_r':\n            data_batch = aug_utils.cutmix_r(data_batch,cfg)\n        elif cfg.AUG.NAME == 'cutmix_k':\n            data_batch = aug_utils.cutmix_k(data_batch,cfg)\n        elif cfg.AUG.NAME == 'mixup':\n            data_batch = aug_utils.mixup(data_batch,cfg)\n        elif cfg.AUG.NAME == 'rsmix':\n            data_batch = aug_utils.rsmix(data_batch,cfg)\n        elif cfg.AUG.NAME == 'pgd':\n            data_batch = aug_utils.pgd(data_batch,model, task, loss_name, dataset_name)\n            model.train()\n        # print(data_batch)\n        inp = get_inp(task, model, data_batch, loader.dataset.batch_proc, dataset_name)\n        out = model(**inp)\n        loss = get_loss(task, loss_name, data_batch, out, dataset_name)\n\n        perf.update_all(data_batch=data_batch, out=out, loss=loss)\n        time2 = 
time()\n\n        if loss.ne(loss).any():\n            print(\"WARNING: avoiding step as nan in the loss\")\n        else:\n            optimizer.zero_grad()\n            loss.backward()\n            bad_grad = False\n            for x in model.parameters():\n                if x.grad is not None:\n                    if x.grad.ne(x.grad).any():\n                        print(\"WARNING: nan in a gradient\")\n                        bad_grad = True\n                        break\n                    if ((x.grad == float('inf')) | (x.grad == float('-inf'))).any():\n                        print(\"WARNING: inf in a gradient\")\n                        bad_grad = True\n                        break\n\n            if bad_grad:\n                print(\"WARNING: avoiding step as bad gradient\")\n            else:\n                optimizer.step()\n\n        time_data_loading += (time1 - time3)\n        time_forward += (time2 - time1)\n        time3 = time()\n        time_backward += (time3 - time2)\n\n        if i % 50 == 0:\n            print(\n                f\"[{i}/{len(loader)}] avg_loss: {perf.agg_loss()}, FW time = {round(time_forward, 2)}, \"\n                f\"BW time = {round(time_backward, 2)}, DL time = {round(time_data_loading, 2)}\")\n\n    return perf.agg(), perf.agg_loss()\n\n\ndef save_checkpoint(id, epoch, model, optimizer,  lr_sched, bnm_sched, test_perf, cfg):\n    model.cpu()\n    path = f\"./runs/{cfg.EXP.EXP_ID}/model_{id}.pth\"\n    torch.save({\n        'cfg': vars(cfg),\n        'epoch': epoch,\n        'model_state': model.state_dict(),\n        'optimizer_state': optimizer.state_dict(),\n        'lr_sched_state': lr_sched.state_dict(),\n        'bnm_sched_state': bnm_sched.state_dict() if bnm_sched is not None else None,\n        'test_perf': test_perf,\n    }, path)\n    print('Checkpoint saved to %s' % path)\n    model.to(DEVICE)\n\n\ndef load_best_checkpoint(model, cfg):\n    path = f\"./runs/{cfg.EXP.EXP_ID}/model_best.pth\"\n    
checkpoint = torch.load(path)\n    model.load_state_dict(checkpoint['model_state'])\n    print('Checkpoint loaded from %s' % path)\n\n\ndef load_model_opt_sched(model, optimizer, lr_sched, bnm_sched, model_path):\n    print(f'Recovering model and checkpoint from {model_path}')\n    checkpoint = torch.load(model_path)\n    try:\n        model.load_state_dict(checkpoint['model_state'])\n    except:\n        if isinstance(model, nn.DataParallel):\n            model.module.load_state_dict(checkpoint['model_state'])\n        else:\n            model = nn.DataParallel(model)\n            model.load_state_dict(checkpoint['model_state'])\n            model = model.module\n\n    optimizer.load_state_dict(checkpoint['optimizer_state'])\n    # for backward compatibility with saved models\n    if 'lr_sched_state' in checkpoint:\n        lr_sched.load_state_dict(checkpoint['lr_sched_state'])\n        if checkpoint['bnm_sched_state'] is not None:\n            bnm_sched.load_state_dict(checkpoint['bnm_sched_state'])\n    else:\n        print(\"WARNING: lr scheduler and bnm scheduler states are not loaded.\")\n\n    return model\n\n\ndef get_model(cfg):\n    if cfg.EXP.MODEL_NAME == 'simpleview':\n        model = models.MVModel(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET,\n            **cfg.MODEL.MV)\n    elif cfg.EXP.MODEL_NAME == 'rscnn':\n        model = models.RSCNN(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET,\n            **cfg.MODEL.RSCNN)\n    elif cfg.EXP.MODEL_NAME == 'pointnet2':\n        model = models.PointNet2(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET,\n            **cfg.MODEL.PN2)\n    elif cfg.EXP.MODEL_NAME == 'dgcnn':\n        model = models.DGCNN(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif cfg.EXP.MODEL_NAME == 'pointnet':\n        model = models.PointNet(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif 
cfg.EXP.MODEL_NAME == 'pct':\n        model = models.Pct(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif cfg.EXP.MODEL_NAME == 'pointMLP':\n        model = models.pointMLP(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif cfg.EXP.MODEL_NAME == 'pointMLP2':\n        model = models.pointMLP2(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif cfg.EXP.MODEL_NAME == 'curvenet':\n        model = models.CurveNet(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    elif cfg.EXP.MODEL_NAME == 'gdanet':\n        model = models.GDANET(\n            task=cfg.EXP.TASK,\n            dataset=cfg.EXP.DATASET)\n    else:\n        assert False\n\n    return model\n\n\ndef get_metric_from_perf(task, perf, metric_name):\n    if task in ['cls', 'cls_trans']:\n        assert metric_name in ['acc']\n        metric = perf[metric_name]\n    else:\n        assert False\n    return metric\n\n\ndef get_optimizer(optim_name, tr_arg, model):\n    if optim_name == 'vanilla':\n        optimizer = torch.optim.Adam(\n            model.parameters(),\n            lr=tr_arg.learning_rate,\n            weight_decay=tr_arg.l2)\n        lr_sched = lr_scheduler.ReduceLROnPlateau(\n            optimizer,\n            mode='min',\n            factor=tr_arg.lr_decay_factor,\n            patience=tr_arg.lr_reduce_patience,\n            verbose=True,\n            min_lr=tr_arg.lr_clip)\n        bnm_sched = None\n    elif optim_name == 'pct':\n        pass\n        optimizer = torch.optim.Adam(\n            model.parameters(),\n            lr=tr_arg.learning_rate,\n            weight_decay=tr_arg.l2)\n        lr_sched = lr_scheduler.CosineAnnealingLR(\n            optimizer,\n            tr_arg.num_epochs,\n            eta_min=tr_arg.learning_rate)\n        bnm_sched = None\n    else:\n        assert False\n\n    return optimizer, lr_sched, bnm_sched\n\n\ndef entry_train(cfg, resume=False, 
model_path=\"\"):\n    loader_train = create_dataloader(split='train', cfg=cfg)\n    loader_valid = create_dataloader(split='valid', cfg=cfg)\n    loader_test  = create_dataloader(split='test',  cfg=cfg)\n\n    model = get_model(cfg)\n    model.to(DEVICE)\n    print(model)\n    if torch.cuda.device_count() > 1:\n        model = nn.DataParallel(model)\n\n    optimizer, lr_sched, bnm_sched = get_optimizer(cfg.EXP.OPTIMIZER, cfg.TRAIN, model)\n\n    if resume:\n        model = load_model_opt_sched(model, optimizer, lr_sched, bnm_sched, model_path)\n    else:\n        assert model_path == \"\"\n\n\n    log_dir = f\"./runs/{cfg.EXP.EXP_ID}\"\n    if not os.path.exists(log_dir):\n        os.makedirs(log_dir)\n    tb = TensorboardManager(log_dir)\n    track_train = TrackTrain(early_stop_patience=cfg.TRAIN.early_stop)\n\n    for epoch in range(cfg.TRAIN.num_epochs):\n        print(f'Epoch {epoch}')\n\n        print('Training..')\n        train_perf, train_loss = train(cfg.EXP.TASK, loader_train, model, optimizer, cfg.EXP.LOSS_NAME, cfg.EXP.DATASET, cfg)\n        pprint.pprint(train_perf, width=80)\n        tb.update('train', epoch, train_perf)\n\n        if (not cfg.EXP_EXTRA.no_val) and epoch % cfg.EXP_EXTRA.val_eval_freq == 0:\n                print('\\nValidating..')\n                val_perf = validate(cfg.EXP.TASK, loader_valid, model, cfg.EXP.DATASET, cfg.ADAPT)\n                pprint.pprint(val_perf, width=80)\n                tb.update('val', epoch, val_perf)\n        else:\n            val_perf = defaultdict(float)\n\n        if (not cfg.EXP_EXTRA.no_test) and (epoch % cfg.EXP_EXTRA.test_eval_freq == 0):\n            print('\\nTesting..')\n            test_perf = validate(cfg.EXP.TASK, loader_test, model, cfg.EXP.DATASET, cfg.ADAPT)\n            pprint.pprint(test_perf, width=80)\n            tb.update('test', epoch, test_perf)\n        else:\n            test_perf = defaultdict(float)\n\n        track_train.record_epoch(\n            epoch_id=epoch,\n            
train_metric=get_metric_from_perf(cfg.EXP.TASK, train_perf, cfg.EXP.METRIC),\n            val_metric=get_metric_from_perf(cfg.EXP.TASK, val_perf, cfg.EXP.METRIC),\n            test_metric=get_metric_from_perf(cfg.EXP.TASK, test_perf, cfg.EXP.METRIC))\n\n        if (not cfg.EXP_EXTRA.no_val) and track_train.save_model(epoch_id=epoch, split='val'):\n            print('Saving best model on the validation set')\n            save_checkpoint('best_val', epoch, model, optimizer,  lr_sched, bnm_sched, test_perf, cfg)\n\n        if (not cfg.EXP_EXTRA.no_test) and track_train.save_model(epoch_id=epoch, split='test'):\n            print('Saving best model on the test set')\n            save_checkpoint('best_test', epoch, model, optimizer,  lr_sched, bnm_sched, test_perf, cfg)\n\n        if (not cfg.EXP_EXTRA.no_val) and track_train.early_stop(epoch_id=epoch):\n            print(f\"Early stopping at {epoch} as val acc did not improve for {cfg.TRAIN.early_stop} epochs.\")\n            break\n\n        if (not (cfg.EXP_EXTRA.save_ckp == 0)) and (epoch % cfg.EXP_EXTRA.save_ckp == 0):\n            save_checkpoint(f'{epoch}', epoch, model, optimizer,  lr_sched, bnm_sched, test_perf, cfg)\n\n        if cfg.EXP.OPTIMIZER == 'vanilla':\n            assert bnm_sched is None\n            lr_sched.step(train_loss)\n        else:\n            lr_sched.step()\n\n    print('Saving the final model')\n    save_checkpoint('final', epoch, model, optimizer,  lr_sched, bnm_sched, test_perf, cfg)\n\n    print('\\nTesting on the final model..')\n    last_test_perf = validate(cfg.EXP.TASK, loader_test, model, cfg.EXP.DATASET, cfg.ADAPT)\n    pprint.pprint(last_test_perf, width=80)\n\n    tb.close()\n\n\ndef entry_test(cfg, test_or_valid, model_path=\"\", confusion = False):\n    split = \"test\" if test_or_valid else \"valid\"\n    loader_test = create_dataloader(split=split, cfg=cfg)\n\n    model = get_model(cfg)\n    model.to(DEVICE)\n    print(model)\n    if torch.cuda.device_count() > 1:\n       
 model = nn.DataParallel(model)\n\n    optimizer, lr_sched, bnm_sched = get_optimizer(cfg.EXP.OPTIMIZER, cfg.TRAIN, model)\n    model = load_model_opt_sched(model, optimizer, lr_sched, bnm_sched, model_path)\n    model.eval()\n    if confusion:\n        test_perf, pred, ground = validate(cfg.EXP.TASK, loader_test, model, cfg.EXP.DATASET, cfg.ADAPT, confusion)\n        print(pred.shape, ground.shape)\n        #### some hardcoding #######\n        np.save('./output/' + cfg.EXP.MODEL_NAME + '_' +  cfg.DATALOADER.MODELNET40_C.corruption + '_' + str(cfg.DATALOADER.MODELNET40_C.severity)  + '_pred.npy',pred )\n        np.save('./output/' + cfg.EXP.MODEL_NAME + '_' +  cfg.DATALOADER.MODELNET40_C.corruption + '_' + str(cfg.DATALOADER.MODELNET40_C.severity)  + '_ground.npy',ground)\n        #### #### #### #### #### ####\n    else:\n        test_perf = validate(cfg.EXP.TASK, loader_test, model, cfg.EXP.DATASET, cfg.ADAPT, confusion)\n    print(\"Model: {} Corruption: {} Severity: {} Acc: {} Class Acc: {}\".format(cfg.EXP.MODEL_NAME, cfg.DATALOADER.MODELNET40_C.corruption, cfg.DATALOADER.MODELNET40_C.severity,test_perf['acc'],test_perf['class_acc']),file=file_object,flush=True)\n    pprint.pprint(test_perf, width=80)\n    return test_perf\n\n\ndef rscnn_vote_evaluation(cfg, model_path, log_file):\n    model = get_model(cfg)\n    checkpoint = torch.load(model_path)\n    try:\n        model.load_state_dict(checkpoint['model_state'])\n    except:\n        print(\"WARNING: using dataparallel to load data\")\n        model = nn.DataParallel(model)\n        model.load_state_dict(checkpoint['model_state'])\n    print(f\"Checkpoint loaded from {model_path}\")\n    model.to(DEVICE)\n    model.eval()\n\n    assert cfg.EXP.DATASET in [\"modelnet40_rscnn\"]\n    loader_test = create_dataloader(split='test', cfg=cfg)\n\n    rscnn_voting_evaluate_cls(\n        loader=loader_test,\n        model=model,\n        data_batch_to_points_target=lambda x: (x['pc'], x['label']),\n        
points_to_inp=lambda x: {'pc': x},\n        out_to_prob=lambda x: F.softmax(x['logit'], dim=1),\n        log_file=log_file\n    )\n\n\ndef pn2_vote_evaluation(cfg, model_path, log_file):\n    assert cfg.EXP.DATASET in [\"modelnet40_pn2\"]\n    loader_test = create_dataloader(split='test', cfg=cfg)\n\n    model = get_model(cfg)\n    checkpoint = torch.load(model_path)\n    try:\n        model.load_state_dict(checkpoint['model_state'])\n    except:\n        print(\"WARNING: using dataparallel to load data\")\n        model = nn.DataParallel(model)\n        model.load_state_dict(checkpoint['model_state'])\n    print(f\"Checkpoint loaded from {model_path}\")\n    model.to(DEVICE)\n    model.eval()\n\n    pn2_vote_evaluate_cls(loader_test, model, log_file)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.set_defaults(entry=lambda cmd_args: parser.print_help())\n    parser.add_argument('--entry', type=str, default=\"train\")\n    parser.add_argument('--exp-config', type=str, default=\"\")\n    parser.add_argument('--model-path', type=str, default=\"\")\n    parser.add_argument('--resume', action=\"store_true\", default=False)\n    # parser.add_argument('--gpu',type=str,default='0',\n                        # help=\"Which gpu to use\")\n    parser.add_argument('--corruption',type=str,default='uniform',\n                        help=\"Which corruption to use\")\n    parser.add_argument('--output',type=str,default='./test.txt',\n                        help=\"path to output file\")\n    parser.add_argument('--severity',type=int,default=1,\n                        help=\"Which severity to use\")\n\n    parser.add_argument('--confusion', action=\"store_true\", default=False,\n                        help=\"whether to output confusion matrix data\")\n\n\n    cmd_args = parser.parse_args()\n    # os.environ['CUDA_VISIBLE_DEVICES'] = cmd_args.gpu\n\n    if cmd_args.entry == \"train\":\n        assert not cmd_args.exp_config == \"\"\n        if 
not cmd_args.resume:\n            assert cmd_args.model_path == \"\"\n\n        cfg = get_cfg_defaults()\n        cfg.merge_from_file(cmd_args.exp_config)\n        if cfg.EXP.EXP_ID == \"\":\n            cfg.EXP.EXP_ID = str(datetime.now())[:-7].replace(' ', '-')\n        cfg.freeze()\n        print(cfg)\n\n        random.seed(cfg.EXP.SEED)\n        np.random.seed(cfg.EXP.SEED)\n        torch.manual_seed(cfg.EXP.SEED)\n\n        entry_train(cfg, cmd_args.resume, cmd_args.model_path)\n\n    elif cmd_args.entry in [\"test\", \"valid\"]:\n        file_object = open(cmd_args.output, 'a')\n        assert not cmd_args.exp_config == \"\"\n        assert not cmd_args.model_path == \"\"\n\n        cfg = get_cfg_defaults()\n        cfg.merge_from_file(cmd_args.exp_config)\n        if cfg.EXP.DATASET == \"modelnet40_c\":\n            cfg.DATALOADER.MODELNET40_C.corruption = cmd_args.corruption\n            cfg.DATALOADER.MODELNET40_C.severity = cmd_args.severity\n        cfg.freeze()\n        print(cfg)\n\n        random.seed(cfg.EXP.SEED)\n        np.random.seed(cfg.EXP.SEED)\n        torch.manual_seed(cfg.EXP.SEED)\n\n        test_or_valid = cmd_args.entry == \"test\"\n        entry_test(cfg, test_or_valid, cmd_args.model_path,cmd_args.confusion)\n\n    elif cmd_args.entry in [\"rscnn_vote\", \"pn2_vote\"]:\n        assert not cmd_args.exp_config == \"\"\n        assert not cmd_args.model_path == \"\"\n        log_file = f\"vote_log/{cmd_args.model_path.replace('/', '_')}_{cmd_args.entry.replace('/', '_')}.log\"\n\n        cfg = get_cfg_defaults()\n        cfg.merge_from_file(cmd_args.exp_config)\n        cfg.freeze()\n        print(cfg)\n\n        seed  = cfg.EXP.SEED\n        random.seed(seed)\n        np.random.seed(seed)\n        torch.manual_seed(seed)\n        torch.cuda.manual_seed(seed)\n        torch.cuda.manual_seed_all(seed)\n\n        torch.backends.cudnn.enabled = True\n        torch.backends.cudnn.benchmark = True\n        torch.backends.cudnn.deterministic = 
True\n\n        if cmd_args.entry == \"rscnn_vote\":\n            rscnn_vote_evaluation(cfg, cmd_args.model_path, log_file)\n        elif cmd_args.entry == \"pn2_vote\":\n            pn2_vote_evaluation(cfg, cmd_args.model_path, log_file)\n    else:\n        assert False\n"
  },
  {
    "path": "models/__init__.py",
    "content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-17 13:35:52\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-22 23:36:25\n'''\nfrom .mv import MVModel\nfrom .rscnn import RSCNN\nfrom .pointnet2 import PointNet2\nfrom .dgcnn import DGCNN\nfrom .pointnet import PointNet\nfrom .pct import Pct\nfrom .pointmlp import pointMLP\nfrom .pointmlp2 import pointMLP2\nfrom .curvenet import CurveNet\nfrom .gdanet import GDANET"
  },
  {
    "path": "models/curvenet.py",
    "content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-17 20:37:07\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-17 20:42:20\n'''\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom CurveNet.core.models.curvenet_cls import CurveNet as CurveNet_og\nfrom all_utils import DATASET_NUM_CLASS\n\nclass CurveNet(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n\n        if task == \"cls\":\n            num_classes = DATASET_NUM_CLASS[dataset]\n            self.model = CurveNet_og(num_classes=num_classes)\n\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.permute(0, 2, 1).contiguous()\n        if self.task == 'cls':\n            assert cls is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n\n        return out\n"
  },
  {
    "path": "models/dgcnn.py",
    "content": "\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom dgcnn.pytorch.model import DGCNN as DGCNN_original\nfrom all_utils import DATASET_NUM_CLASS\n\nclass DGCNN(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n\n        if task == \"cls\":\n            num_classes = DATASET_NUM_CLASS[dataset]\n            # default arguments\n            class Args:\n                def __init__(self):\n                    self.k = 20\n                    self.emb_dims = 1024\n                    self.dropout = 0.5\n                    self.leaky_relu = 1\n            args = Args()\n            self.model = DGCNN_original(args, output_channels=num_classes)\n\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.permute(0, 2, 1).contiguous()\n        if self.task == 'cls':\n            assert cls is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n\n        return out\n"
  },
  {
    "path": "models/gdanet.py",
    "content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-22 23:22:17\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-23 00:16:25\n'''\nimport torch\nimport torch.nn as nn\nfrom GDANet.model.GDANet_cls import GDANET as GDANET_og\nfrom all_utils import DATASET_NUM_CLASS\n\nclass GDANET(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        num_class = DATASET_NUM_CLASS[dataset]\n        if task == 'cls':\n            self.model = GDANET_og(number_class=num_class)\n        else:\n            assert False\n\n    def forward(self, pc, normal=None, cls=None):\n        # batch_size = pc.shape[0]\n        pc = pc.permute(0, 2, 1).contiguous()\n        pc = pc.to(next(self.parameters()).device)\n        if self.task == 'cls':\n            assert cls is None\n            assert normal is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n        return out\n"
  },
  {
    "path": "models/model_utils.py",
    "content": "import torch.nn as nn\n# from syncbn_pyt.modules.nn import BatchNorm2d as BatchNorm2dSync\n\nclass Squeeze(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n    def forward(self, inp):\n        return inp.squeeze()\n\nclass BatchNormPoint(nn.Module):\n    def __init__(self, feat_size, sync_bn=False):\n        super().__init__()\n        self.feat_size = feat_size\n        self.sync_bn = sync_bn\n        if self.sync_bn:\n            # requires the syncbn_pyt import above (currently commented out);\n            # NameError otherwise\n            self.bn = BatchNorm2dSync(feat_size)\n        else:\n            self.bn = nn.BatchNorm1d(feat_size)\n\n    def forward(self, x):\n        assert len(x.shape) == 3\n        s1, s2, s3 = x.shape[0], x.shape[1], x.shape[2]\n        assert s3 == self.feat_size\n        if self.sync_bn:\n            # 4d input for BatchNorm2dSync\n            x = x.view(s1 * s2, self.feat_size, 1, 1)\n            x = self.bn(x)\n        else:\n            x = x.view(s1 * s2, self.feat_size)\n            x = self.bn(x)\n        return x.view(s1, s2, s3)\n\n"
  },
  {
    "path": "models/mv.py",
    "content": "import torch\nimport torch.nn as nn\nfrom all_utils import DATASET_NUM_CLASS\nfrom models.model_utils import Squeeze, BatchNormPoint\nfrom models.mv_utils import PCViews\n\n\nclass MVModel(nn.Module):\n    def __init__(self, task, dataset, backbone,\n                 feat_size):\n\n        super().__init__()\n        assert task == 'cls'\n        self.task = task\n        self.num_class = DATASET_NUM_CLASS[dataset]\n        self.dropout_p = 0.5\n        self.feat_size = feat_size\n\n        pc_views = PCViews()\n        self.num_views = pc_views.num_views\n        self._get_img = pc_views.get_img\n\n        img_layers, in_features = self.get_img_layers(\n            backbone, feat_size=feat_size)\n        self.img_model = nn.Sequential(*img_layers)\n\n        self.final_fc = MVFC(\n            num_views=self.num_views,\n            in_features=in_features,\n            out_features=self.num_class,\n            dropout_p=self.dropout_p)\n\n    def forward(self, pc):\n        \"\"\"\n        :param pc:\n        :return:\n        \"\"\"\n\n        # move to the model's device instead of hard-coding .cuda(),\n        # so the model also runs on CPU\n        pc = pc.to(next(self.parameters()).device)\n        img = self.get_img(pc)\n        feat = self.img_model(img)\n        logit = self.final_fc(feat)\n        out = {'logit': logit}\n        return out\n\n    def get_img(self, pc):\n        img = self._get_img(pc)\n        img = torch.tensor(img).float()\n        img = img.to(next(self.parameters()).device)\n        assert len(img.shape) == 3\n        img = img.unsqueeze(3)\n        # [num_pc * num_views, 1, RESOLUTION, RESOLUTION]\n        img = img.permute(0, 3, 1, 2)\n\n        return img\n\n    @staticmethod\n    def get_img_layers(backbone, feat_size):\n        \"\"\"\n        Return layers for the image model\n        \"\"\"\n\n        from models.resnet import _resnet, BasicBlock\n        assert backbone == 'resnet18'\n        layers = [2, 2, 2, 2]\n        block = BasicBlock\n        backbone_mod = _resnet(\n            arch=None,\n            block=block,\n            
layers=layers,\n            pretrained=False,\n            progress=False,\n            feature_size=feat_size,\n            zero_init_residual=True)\n\n        all_layers = [x for x in backbone_mod.children()]\n        in_features = all_layers[-1].in_features\n\n        # all layers except the final fc layer and the initial conv layers\n        # WARNING: this is checked only for resnet models\n        main_layers = all_layers[4:-1]\n        img_layers = [\n            nn.Conv2d(1, feat_size, kernel_size=(3, 3), stride=(1, 1),\n                      padding=(1, 1), bias=False),\n            nn.BatchNorm2d(feat_size, eps=1e-05, momentum=0.1,\n                           affine=True, track_running_stats=True),\n            nn.ReLU(inplace=True),\n            *main_layers,\n            Squeeze()\n        ]\n\n        return img_layers, in_features\n\n\nclass MVFC(nn.Module):\n    \"\"\"\n    Final FC layers for the MV model\n    \"\"\"\n\n    def __init__(self, num_views, in_features, out_features, dropout_p):\n        super().__init__()\n        self.num_views = num_views\n        self.in_features = in_features\n        self.model = nn.Sequential(\n                BatchNormPoint(in_features),\n                # dropout before concatenation so that each view drops features independently\n                nn.Dropout(dropout_p),\n                nn.Flatten(),\n                nn.Linear(in_features=in_features * self.num_views,\n                          out_features=in_features),\n                nn.BatchNorm1d(in_features),\n                nn.ReLU(),\n                nn.Dropout(dropout_p),\n                nn.Linear(in_features=in_features, out_features=out_features,\n                          bias=True))\n\n    def forward(self, feat):\n        feat = feat.view((-1, self.num_views, self.in_features))\n        out = self.model(feat)\n        return out\n"
  },
  {
    "path": "models/mv_utils.py",
    "content": "import numpy as np\nimport torch\n\nRESOLUTION = 128\nTRANS = -1.4\n\ndef euler2mat(angle):\n    \"\"\"Convert euler angles to rotation matrix.\n     :param angle: [3] or [b, 3]\n     :return\n        rotmat: [3] or [b, 3, 3]\n    source\n    https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/inverse_warp.py\n    \"\"\"\n\n    if len(angle.size()) == 1:\n        x, y, z = angle[0], angle[1], angle[2]\n        _dim = 0\n        _view = [3, 3]\n    elif len(angle.size()) == 2:\n        b, _ = angle.size()\n        x, y, z = angle[:, 0], angle[:, 1], angle[:, 2]\n        _dim = 1\n        _view = [b, 3, 3]\n\n    else:\n        assert False\n\n    cosz = torch.cos(z)\n    sinz = torch.sin(z)\n\n    # zero = torch.zeros([b], requires_grad=False, device=angle.device)[0]\n    # one = torch.ones([b], requires_grad=False, device=angle.device)[0]\n    zero = z.detach()*0\n    one = zero.detach()+1\n    zmat = torch.stack([cosz, -sinz, zero,\n                        sinz, cosz, zero,\n                        zero, zero, one], dim=_dim).reshape(_view)\n\n    cosy = torch.cos(y)\n    siny = torch.sin(y)\n\n    ymat = torch.stack([cosy, zero, siny,\n                        zero, one, zero,\n                        -siny, zero, cosy], dim=_dim).reshape(_view)\n\n    cosx = torch.cos(x)\n    sinx = torch.sin(x)\n\n    xmat = torch.stack([one, zero, zero,\n                        zero, cosx, -sinx,\n                        zero, sinx, cosx], dim=_dim).reshape(_view)\n\n    rot_mat = xmat @ ymat @ zmat\n    # print(rot_mat)\n    return rot_mat\n\n\ndef distribute(depth, _x, _y, size_x, size_y, image_height, image_width):\n    \"\"\"\n    Distributes the depth associated with each point to the discrete coordinates (image_height, image_width) in a region\n    of size (size_x, size_y).\n    :param depth:\n    :param _x:\n    :param _y:\n    :param size_x:\n    :param size_y:\n    :param image_height:\n    :param image_width:\n    :return:\n    \"\"\"\n\n   
 assert size_x % 2 == 0 or size_x == 1\n    assert size_y % 2 == 0 or size_y == 1\n    batch, _ = depth.size()\n    epsilon = torch.tensor([1e-12], requires_grad=False, device=depth.device)\n    _i = torch.linspace(-size_x / 2, (size_x / 2) - 1, size_x, requires_grad=False, device=depth.device)\n    _j = torch.linspace(-size_y / 2, (size_y / 2) - 1, size_y, requires_grad=False, device=depth.device)\n\n    extended_x = _x.unsqueeze(2).repeat([1, 1, size_x]) + _i  # [batch, num_points, size_x]\n    extended_y = _y.unsqueeze(2).repeat([1, 1, size_y]) + _j  # [batch, num_points, size_y]\n\n    extended_x = extended_x.unsqueeze(3).repeat([1, 1, 1, size_y])  # [batch, num_points, size_x, size_y]\n    extended_y = extended_y.unsqueeze(2).repeat([1, 1, size_x, 1])  # [batch, num_points, size_x, size_y]\n\n    extended_x.ceil_()\n    extended_y.ceil_()\n\n    value = depth.unsqueeze(2).unsqueeze(3).repeat([1, 1, size_x, size_y])  # [batch, num_points, size_x, size_y]\n\n    # all points that will be finally used\n    masked_points = ((extended_x >= 0)\n                     * (extended_x <= image_height - 1)\n                     * (extended_y >= 0)\n                     * (extended_y <= image_width - 1)\n                     * (value >= 0))\n\n    true_extended_x = extended_x\n    true_extended_y = extended_y\n\n    # to prevent error\n    extended_x = (extended_x % image_height)\n    extended_y = (extended_y % image_width)\n\n    # [batch, num_points, size_x, size_y]\n    distance = torch.abs((extended_x - _x.unsqueeze(2).unsqueeze(3))\n                         * (extended_y - _y.unsqueeze(2).unsqueeze(3)))\n    weight = (masked_points.float()\n          * (1 / (value + epsilon)))  # [batch, num_points, size_x, size_y]\n    weighted_value = value * weight\n\n    weight = weight.view([batch, -1])\n    weighted_value = weighted_value.view([batch, -1])\n\n    coordinates = (extended_x.view([batch, -1]) * image_width) + extended_y.view(\n        [batch, -1])\n    coord_max = 
image_height * image_width\n    true_coordinates = (true_extended_x.view([batch, -1]) * image_width) + true_extended_y.view(\n        [batch, -1])\n    true_coordinates[~masked_points.view([batch, -1])] = coord_max\n    weight_scattered = torch.zeros(\n        [batch, image_width * image_height],\n        device=depth.device).scatter_add(1, coordinates.long(), weight)\n\n    masked_zero_weight_scattered = (weight_scattered == 0.0)\n    weight_scattered += masked_zero_weight_scattered.float()\n\n    weighed_value_scattered = torch.zeros(\n        [batch, image_width * image_height],\n        device=depth.device).scatter_add(1, coordinates.long(), weighted_value)\n\n    return weighed_value_scattered,  weight_scattered\n\n\ndef points2depth(points, image_height, image_width, size_x=4, size_y=4):\n    \"\"\"\n    :param points: [B, num_points, 3]\n    :param image_width:\n    :param image_height:\n    :param size_x:\n    :param size_y:\n    :return:\n        depth_recovered: [B, image_width, image_height]\n    \"\"\"\n\n    epsilon = torch.tensor([1e-12], requires_grad=False, device=points.device)\n    # epsilon not needed, kept here to ensure exact replication of old version\n    coord_x = (points[:, :, 0] / (points[:, :, 2] + epsilon)) * (image_width / image_height)  # [batch, num_points]\n    coord_y = (points[:, :, 1] / (points[:, :, 2] + epsilon))  # [batch, num_points]\n\n    batch, total_points, _ = points.size()\n    depth = points[:, :, 2]  # [batch, num_points]\n    # pdb.set_trace()\n    _x = ((coord_x + 1) * image_height) / 2\n    _y = ((coord_y + 1) * image_width) / 2\n\n    weighed_value_scattered, weight_scattered = distribute(\n        depth=depth,\n        _x=_x,\n        _y=_y,\n        size_x=size_x,\n        size_y=size_y,\n        image_height=image_height,\n        image_width=image_width)\n\n    depth_recovered = (weighed_value_scattered / weight_scattered).view([\n        batch, image_height, image_width\n    ])\n\n    return 
depth_recovered\n\n\n# source: https://discuss.pytorch.org/t/batched-index-select/9115/6\ndef batched_index_select(inp, dim, index):\n    \"\"\"\n    input: B x * x ... x *\n    dim: 0 < scalar\n    index: B x M\n    \"\"\"\n    views = [inp.shape[0]] + \\\n        [1 if i != dim else -1 for i in range(1, len(inp.shape))]\n    expanse = list(inp.shape)\n    expanse[0] = -1\n    expanse[dim] = -1\n    index = index.view(views).expand(expanse)\n    return torch.gather(inp, dim, index)\n\n\ndef point_fea_img_fea(point_fea, point_coo, h, w):\n    \"\"\"\n    each point_coo is of the form (x*w + h). points not in the canvas are removed\n    :param point_fea: [batch_size, num_points, feat_size]\n    :param point_coo: [batch_size, num_points]\n    :return:\n    \"\"\"\n    assert len(point_fea.shape) == 3\n    assert len(point_coo.shape) == 2\n    assert point_fea.shape[0:2] == point_coo.shape\n\n    coo_max = ((h - 1) * w) + (w - 1)\n    mask_point_coo = (point_coo >= 0) * (point_coo <= coo_max)\n    point_coo *= mask_point_coo.float()\n    point_fea *= mask_point_coo.float().unsqueeze(-1)\n\n    bs, _, fs = point_fea.shape\n    point_coo = point_coo.unsqueeze(2).repeat([1, 1, fs])\n    img_fea = torch.zeros([bs, h * w, fs], device=point_fea.device).scatter_add(1, point_coo.long(), point_fea)\n\n    return img_fea\n\n\ndef distribute_img_fea_points(img_fea, point_coord):\n    \"\"\"\n    :param img_fea: [B, C, H, W]\n    :param point_coord: [B, num_points], each coordinate  is a scalar value given by (x * W) + y\n    :return\n        point_fea: [B, num_points, C], for points with coordinates outside the image, we return 0\n    \"\"\"\n    B, C, H, W = list(img_fea.size())\n    img_fea = img_fea.permute(0, 2, 3, 1).view([B, H*W, C])\n\n    coord_max = ((H - 1) * W) + (W - 1)\n    mask_point_coord = (point_coord >= 0) * (point_coord <= coord_max)\n    mask_point_coord = mask_point_coord.float()\n    point_coord = mask_point_coord * point_coord\n    point_fea = 
batched_index_select(\n        inp=img_fea,\n        dim=1,\n        index=point_coord.long())\n    point_fea = mask_point_coord.unsqueeze(-1) * point_fea\n    return point_fea\n\n\nclass PCViews:\n    \"\"\"For creating images from a point cloud based on the view information. Faster, as\n    the repeated operations are done only once, during initialization.\n    \"\"\"\n\n    def __init__(self):\n        _views = np.asarray([\n            [[0 * np.pi / 2, 0, np.pi / 2], [0, 0, TRANS]],\n            [[1 * np.pi / 2, 0, np.pi / 2], [0, 0, TRANS]],\n            [[2 * np.pi / 2, 0, np.pi / 2], [0, 0, TRANS]],\n            [[3 * np.pi / 2, 0, np.pi / 2], [0, 0, TRANS]],\n            [[0, -np.pi / 2, np.pi / 2], [0, 0, TRANS]],\n            [[0, np.pi / 2, np.pi / 2], [0, 0, TRANS]]])\n        self.num_views = 6\n        angle = torch.tensor(_views[:, 0, :]).float().cuda()\n        self.rot_mat = euler2mat(angle).transpose(1, 2)\n        self.translation = torch.tensor(_views[:, 1, :]).float().cuda()\n        self.translation = self.translation.unsqueeze(1)\n\n    def get_img(self, points):\n        \"\"\"Get depth images based on the prespecified views.\n\n        Args:\n            points (torch.tensor): of size [B, _, 3]\n        Returns:\n            img (torch.tensor): of size [B * self.num_views, RESOLUTION,\n                RESOLUTION]\n        \"\"\"\n        b, _, _ = points.shape\n        v = self.translation.shape[0]\n\n        _points = self.point_transform(\n            points=torch.repeat_interleave(points, v, dim=0),\n            rot_mat=self.rot_mat.repeat(b, 1, 1),\n            translation=self.translation.repeat(b, 1, 1))\n\n        img = points2depth(\n            points=_points,\n            image_height=RESOLUTION,\n            image_width=RESOLUTION,\n            size_x=1,\n            size_y=1,\n        )\n        return img\n\n    @staticmethod\n    def point_transform(points, rot_mat, translation):\n        \"\"\"\n        :param points: [batch, num_points, 
3]\n        :param rot_mat: [batch, 3, 3]\n        :param translation: [batch, 1, 3]\n        :return:\n        \"\"\"\n\n        points = torch.matmul(points.to('cuda:0'), rot_mat.to('cuda:0'))\n        points = points - translation\n        return points\n"
  },
  {
    "path": "models/pct.py",
    "content": "import torch.nn as nn\nfrom PCT_Pytorch.model import Pct as Pct_original\nfrom all_utils import DATASET_NUM_CLASS\n\nclass Pct(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n\n        if task == \"cls\":\n            num_classes = DATASET_NUM_CLASS[dataset]\n            # default arguments\n            class Args:\n                def __init__(self):\n                    self.dropout = 0.5\n            args = Args()\n            self.model = Pct_original(args, output_channels=num_classes)\n\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.permute(0, 2, 1).contiguous()\n        if self.task == 'cls':\n            assert cls is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n\n        return out\n"
  },
  {
    "path": "models/pointmlp.py",
"content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-17 20:50:58\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-21 21:18:02\n'''\nimport torch.nn as nn\nfrom pointMLP.classification_ModelNet40.models.pointmlp import pointMLP as pointMLP_original\nfrom all_utils import DATASET_NUM_CLASS\n\n\nclass pointMLP(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n\n        if task == \"cls\":\n            num_classes = DATASET_NUM_CLASS[dataset]\n            self.model = pointMLP_original(num_classes=num_classes)\n\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.permute(0, 2, 1).contiguous()\n        if self.task == 'cls':\n            assert cls is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n\n        return out\n"
  },
  {
    "path": "models/pointmlp2.py",
"content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-21 21:16:25\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-21 21:17:57\n'''\nimport torch.nn as nn\nfrom pointMLP.classification_ModelNet40.models.pointmlp import pointMLPElite as pointMLP_original\nfrom all_utils import DATASET_NUM_CLASS\n\nclass pointMLP2(nn.Module):\n\n    def __init__(self, task, dataset):\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n\n        if task == \"cls\":\n            num_classes = DATASET_NUM_CLASS[dataset]\n            self.model = pointMLP_original(num_classes=num_classes)\n\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.permute(0, 2, 1).contiguous()\n        if self.task == 'cls':\n            assert cls is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n\n        return out\n"
  },
  {
    "path": "models/pointnet.py",
    "content": "# based on: https://github.com/fxia22/pointnet.pytorch/blob/master/utils/train_classification.py\nimport torch.nn as nn\nfrom pointnet_pyt.pointnet.model import PointNetCls\nfrom all_utils import DATASET_NUM_CLASS\n\nclass PointNet(nn.Module):\n\n    def __init__(self, dataset, task):\n        super().__init__()\n        self.task = task\n        num_class = DATASET_NUM_CLASS[dataset]\n        if self.task == 'cls_trans':\n            self.model =  PointNetCls(k=num_class, feature_transform=True)\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        pc = pc.transpose(2, 1).float()\n        if self.task == 'cls_trans':\n            logit, _, trans_feat = self.model(pc)\n        else:\n            assert False\n\n        out = {'logit': logit, 'trans_feat': trans_feat}\n        return out\n"
  },
  {
    "path": "models/pointnet2.py",
    "content": "'''\nDescription: \nAutor: Jiachen Sun\nDate: 2022-02-16 22:23:16\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-24 22:36:59\n'''\nimport torch\nimport torch.nn as nn\nfrom pointnet2_pyt.pointnet2.models.pointnet2_msg_cls import Pointnet2MSG\nfrom all_utils import DATASET_NUM_CLASS\n\nclass PointNet2(nn.Module):\n\n    def __init__(self, task, dataset, version_cls):\n        super().__init__()\n        self.task =  task\n        num_class = DATASET_NUM_CLASS[dataset]\n        if task == 'cls':\n            self.model = Pointnet2MSG(num_classes=num_class, input_channels=0, use_xyz=True, version=version_cls)\n        else:\n            assert False\n\n    def forward(self, pc, normal=None, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        if self.task == 'cls':\n            assert cls is None\n            assert normal is None\n            logit = self.model(pc)\n            out = {'logit': logit}\n        else:\n            assert False\n        return out\n"
  },
  {
    "path": "models/resnet.py",
    "content": "import torch\nimport torch.nn as nn\nfrom torchvision.models.utils import load_state_dict_from_url\n\n\n__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',\n           'resnet152', 'resnext50_32x4d', 'resnext101_32x8d',\n           'wide_resnet50_2', 'wide_resnet101_2']\n\n\nmodel_urls = {\n    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\n    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\n    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\n    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\n    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\n    'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',\n    'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',\n    'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',\n    'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',\n}\n\n\ndef conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=dilation, groups=groups, bias=False, dilation=dilation)\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,\n                 base_width=64, dilation=1, norm_layer=None):\n        super(BasicBlock, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        if groups != 1 or base_width != 64:\n            raise ValueError('BasicBlock only supports groups=1 and base_width=64')\n        
if dilation > 1:\n            raise NotImplementedError(\"Dilation > 1 not supported in BasicBlock\")\n        # Both self.conv1 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = norm_layer(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = norm_layer(planes)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)\n    # while original implementation places the stride at the first 1x1 convolution(self.conv1)\n    # according to \"Deep residual learning for image recognition\"https://arxiv.org/abs/1512.03385.\n    # This variant is also known as ResNet V1.5 and improves accuracy according to\n    # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.\n\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,\n                 base_width=64, dilation=1, norm_layer=None):\n        super(Bottleneck, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        width = int(planes * (base_width / 64.)) * groups\n        # Both self.conv2 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv1x1(inplanes, width)\n        self.bn1 = norm_layer(width)\n        self.conv2 = conv3x3(width, width, stride, groups, dilation)\n        self.bn2 = norm_layer(width)\n        
self.conv3 = conv1x1(width, planes * self.expansion)\n        self.bn3 = norm_layer(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass ResNet(nn.Module):\n\n    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,\n                 groups=1, width_per_group=64, replace_stride_with_dilation=None,\n                 norm_layer=None, feature_size=64):\n        super(ResNet, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        self._norm_layer = norm_layer\n\n        self.inplanes = feature_size\n        self.dilation = 1\n        if replace_stride_with_dilation is None:\n            # each element in the tuple indicates if we should replace\n            # the 2x2 stride with a dilated convolution instead\n            replace_stride_with_dilation = [False, False, False]\n        if len(replace_stride_with_dilation) != 3:\n            raise ValueError(\"replace_stride_with_dilation should be None \"\n                             \"or a 3-element tuple, got {}\".format(replace_stride_with_dilation))\n        self.groups = groups\n        self.base_width = width_per_group\n        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = norm_layer(self.inplanes)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, 
padding=1)\n        self.layer1 = self._make_layer(block, feature_size, layers[0])\n        self.layer2 = self._make_layer(block, feature_size * 2, layers[1], stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block, feature_size * 4, layers[2], stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n        self.layer4 = self._make_layer(block, feature_size * 8, layers[3], stride=2,\n                                       dilate=replace_stride_with_dilation[2])\n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        self.fc = nn.Linear(feature_size * 8 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n        # Zero-initialize the last BN in each residual branch,\n        # so that the residual branch starts with zeros, and each residual block behaves like an identity.\n        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, Bottleneck):\n                    nn.init.constant_(m.bn3.weight, 0)\n                elif isinstance(m, BasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        norm_layer = self._norm_layer\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, 
planes * block.expansion, stride),\n                norm_layer(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,\n                            self.base_width, previous_dilation, norm_layer))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(block(self.inplanes, planes, groups=self.groups,\n                                base_width=self.base_width, dilation=self.dilation,\n                                norm_layer=norm_layer))\n\n        return nn.Sequential(*layers)\n\n    def _forward_impl(self, x):\n        # See note [TorchScript super()]\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n\n        x = self.avgpool(x)\n        x = torch.flatten(x, 1)\n        x = self.fc(x)\n\n        return x\n\n    def forward(self, x):\n        return self._forward_impl(x)\n\n\ndef _resnet(arch, block, layers, pretrained, progress, **kwargs):\n    model = ResNet(block, layers, **kwargs)\n    if pretrained:\n        state_dict = load_state_dict_from_url(model_urls[arch],\n                                              progress=progress)\n        model.load_state_dict(state_dict)\n    return model\n\n\ndef resnet18(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-18 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet34(pretrained=False, progress=True, **kwargs):\n    
r\"\"\"ResNet-34 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet50(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-50 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet101(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-101 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet152(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-152 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnext50_32x4d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-50 32x4d model from\n    
`\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 4\n    return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef resnext101_32x8d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-101 32x8d model from\n    `\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 8\n    return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet50_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-50-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. 
last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet101_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-101-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)"
  },
  {
    "path": "models/rscnn.py",
"content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom rs_cnn.models import RSCNN_MSN_Seg, RSCNN_SSN_Cls\nfrom all_utils import DATASET_NUM_CLASS\n# distilled from:\n# https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/models/rscnn_ssn_cls.py\n# https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/models/rscnn_msn_seg.py\nclass RSCNN(nn.Module):\n\n    def __init__(self, task, dataset, ssn_or_msn):\n        \"\"\"\n        Returns a model\n        :param task: the task to perform; only 'cls' is supported\n        :param dataset: dataset name, used to look up the number of classes\n        :param ssn_or_msn: (bool) if True SSN else MSN\n        \"\"\"\n        super().__init__()\n        self.task = task\n        self.dataset = dataset\n        num_classes = DATASET_NUM_CLASS[self.dataset]\n        if self.task == 'cls':\n            assert ssn_or_msn\n            # source: https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/cfgs/config_ssn_cls.yaml\n            # source: https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/train_cls.py#L73\n            rscnn_params = {\n                'num_classes': num_classes,\n                'input_channels': 0,\n                'relation_prior': 1,\n                'use_xyz': True\n            }\n            self.model = RSCNN_SSN_Cls(**rscnn_params)\n        else:\n            assert False\n\n    def forward(self, pc, cls=None):\n        pc = pc.to(next(self.parameters()).device)\n        if self.task == 'cls':\n            assert cls is None\n            out = {'logit': self.model(pc)}\n        else:\n            assert False\n        return out\n"
  },
  {
    "path": "pc_utils.py",
"content": "import numpy as np\nimport torch\n\n# source: https://github.com/charlesq34/pointnet2/blob/74bb67f3702e8aec55a7b8765dd728b18456030c/utils/provider.py#L187-L198\ndef jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):\n    \"\"\" Randomly jitter points. Jittering is per point.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, jittered batch of point clouds\n    \"\"\"\n    B, N, C = batch_data.shape\n    assert(clip > 0)\n    jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)\n    jittered_data += batch_data\n    return jittered_data\n\n# source: https://github.com/charlesq34/pointnet2/blob/74bb67f3702e8aec55a7b8765dd728b18456030c/utils/provider.py#L32-L50\ndef rotate_point_cloud(batch_data):\n    \"\"\" Randomly rotate the point clouds to augment the dataset.\n        Rotation is per shape, about the up direction.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, rotated batch of point clouds\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, 0, sinval],\n                                    [0, 1, 0],\n                                    [-sinval, 0, cosval]])\n        shape_pc = batch_data[k, ...]\n        rotated_data[k, ...] 
= np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n    return rotated_data\n\n# source: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/data.py\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n\n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\n# based on https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/data/data_utils.py#L81\nclass PointcloudScaleAndTranslate(object):\n    def __init__(self, scale_low=2. / 3., scale_high=3. / 2., translate_range=0.2, no_z_aug=False):\n        \"\"\"\n        :param scale_low:\n        :param scale_high:\n        :param translate_range:\n        :param no_z: no translation and scaling along the z axis\n        \"\"\"\n        self.scale_low = scale_low\n        self.scale_high = scale_high\n        self.translate_range = translate_range\n        self.no_z_aug = no_z_aug\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            xyz1 = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])\n            xyz2 = np.random.uniform(low=-self.translate_range, high=self.translate_range, size=[3])\n            if self.no_z_aug:\n                xyz1[2] = 1.0\n                xyz2[2] = 0.0\n            pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], torch.from_numpy(xyz1).float().cuda()) + torch.from_numpy(xyz2).float().cuda()\n\n        return pc"
  },
  {
    "path": "pointMLP/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n.idea\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n.DS_Store\n"
  },
  {
    "path": "pointMLP/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "pointMLP/README.md",
    "content": "# Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework （ICLR 2022）\n\n\n\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rethinking-network-design-and-local-geometry-1/3d-point-cloud-classification-on-modelnet40)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40?p=rethinking-network-design-and-local-geometry-1)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rethinking-network-design-and-local-geometry-1/3d-point-cloud-classification-on-scanobjectnn)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-scanobjectnn?p=rethinking-network-design-and-local-geometry-1)\n\n\n<div align=\"left\">\n    <a><img src=\"images/smile.png\"  height=\"70px\" ></a>\n    <a><img src=\"images/neu.png\"  height=\"70px\" ></a>\n    <a><img src=\"images/columbia.png\"  height=\"70px\" ></a>\n</div>\n\n[Project Sites]() | [arXiv](https://arxiv.org/abs/2202.07123) | Primary contact: [Xu Ma](mailto:ma.xu1@northeastern.edu)\n\n<div align=\"center\">\n  <img src=\"images/overview.png\" width=\"650px\" height=\"300px\">\n</div>\n\nOverview of one stage in PointMLP. Given an input point cloud, PointMLP progressively extract local features using residual point MLP blocks. In each stage, we first transform local point using a geometric affine module, then local points are are extracted before and after aggregation respectively. 
By repeating multiple stages, PointMLP progressively enlarge the receptive field and model entire point cloud geometric information.\n\n\n## BibTeX\n\n    @inproceedings{\n        ma2022rethinking,\n        title={Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual {MLP} Framework},\n        author={Xu Ma and Can Qin and Haoxuan You and Haoxi Ran and Yun Fu},\n        booktitle={International Conference on Learning Representations},\n        year={2022},\n        url={https://openreview.net/forum?id=3Pbra-_u76D}\n    }\n\n## Model Zoo\n- The codes/models/logs for submission version (without bug fixed) can be found here [commit:d2b8dbaa](http://github.com/13952522076/pointMLP-pytorch/tree/d2b8dbaa06eb6176b222dcf2ad248f8438582026).\n\n- On ModelNet40, fixed pointMLP achieves a result of **91.5% mAcc** and **94.1% OA** without voting, logs and pretrained models can be found [[here]](https://web.northeastern.edu/smilelab/xuma/pointMLP/checkpoints/fixstd/modelnet40/pointMLP-20220209053148-404/).\n- On ScanObjectNN, fixed pointMLP achieves a result of **84.4% mAcc** and **86.1% OA** without voting, logs and pretrained models can be found [[here]](https://web.northeastern.edu/smilelab/xuma/pointMLP/checkpoints/fixstd/scanobjectnn/pointMLP-20220204021453/).\n- Stay tuned. More elite versions and voting results will be uploaded.\n\n\n\n## News & Updates:\n\n- [ ] updated more pretrained models\n- [ ] double check the part seg utils\n- [ ] project page\n- [x] update std bug (unstable testing in previous version)\n- [x] paper/codes release\n\n:point_right::point_right::point_right:**NOTE:** The codes/models/logs for submission version (without bug fixed) can be found here [commit:d2b8dbaa](http://github.com/13952522076/pointMLP-pytorch/tree/d2b8dbaa06eb6176b222dcf2ad248f8438582026).\n\n\n\n\n## Install\n\n```bash\n# 1. clone this repo\ngit clone https://github.com/ma-xu/pointMLP-pytorch.git\ncd pointMLP-pytorch\n\n# 2. 
create a conda virtual environment and activate it\nconda create -n pointmlp python=3.7 -y\nconda activate pointmlp\n\n# 3. install required libs, pytorch 1.8.1, torchvision 0.9.1, etc.\npip install -r requirements.txt\n\n# 4. install CUDA kernels\npip install pointnet2_ops_lib/.\n```\n\n\n## Useage\n\n### Classification ModelNet40\n**Train**: The dataset will be automatically downloaded, run following command to train.\n\nBy default, it will create a fold named \"checkpoints/{modelName}-{msg}-{randomseed}\", which includes args.txt, best_checkpoint.pth, last_checkpoint.pth, log.txt, out.txt.\n```bash\ncd pointMLP-pytorch/classification_ModelNet40\n# train pointMLP\npython main.py --model pointMLP\n# train pointMLP-elite\npython main.py --model pointMLPElite\n# please add other paramemters as you wish.\n```\n\n\nTo conduct voting testing, run\n```bash\n# please modify the msg accrodingly\npython voting.py --model pointMLP --msg demo\n```\n\n\n### Classification ScanObjectNN\n\nThe dataset will be automatically downloaded\n\n- Train pointMLP/pointMLPElite \n```bash\n# train pointMLP\npython main.py --model pointMLP\n# train pointMLP-elite\npython main.py --model pointMLPElite\n# please add other paramemters as you wish.\n```\nBy default, it will create a fold named \"checkpoints/{modelName}-{msg}-{randomseed}\", which includes args.txt, best_checkpoint.pth, last_checkpoint.pth, log.txt, out.txt.\n\n\n### Part segmentation\n\n- Make data folder and download the dataset\n```bash\ncd pointMLP-pytorch/part_segmentation\nmkdir data\ncd data\nwget https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip --no-check-certificate\nunzip shapenetcore_partanno_segmentation_benchmark_v0_normal.zip\n```\n\n- Train pointMLP\n```bash\n# train pointMLP\npython main.py --model pointMLP\n# please add other paramemters as you wish.\n```\n\n\n## Acknowledgment\n\nOur implementation is mainly based on the following codebases. 
We gratefully thank the authors for their wonderful works.\n\n[CurveNet](https://github.com/tiangexiang/CurveNet),\n[PAConv](https://github.com/CVMI-Lab/PAConv),\n[GDANet](https://github.com/mutianxu/GDANet),\n[Pointnet2_PyTorch](https://github.com/erikwijmans/Pointnet2_PyTorch)\n\n## LICENSE\nPointMLP is under the Apache-2.0 license. \nPlease contact the authors for commercial use.\n\n\n\n\n\n\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/data.py",
    "content": "import os\nimport glob\nimport h5py\nimport numpy as np\nfrom torch.utils.data import Dataset\nos.environ[\"HDF5_USE_FILE_LOCKING\"] = \"FALSE\"\n\ndef download():\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, 'data')\n    if not os.path.exists(DATA_DIR):\n        os.mkdir(DATA_DIR)\n    if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n        www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n        zipfile = os.path.basename(www)\n        os.system('wget %s  --no-check-certificate; unzip %s' % (www, zipfile))\n        os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n        os.system('rm %s' % (zipfile))\n\ndef load_data(partition):\n    download()\n    BASE_DIR = os.path.dirname(os.path.abspath(__file__))\n    DATA_DIR = os.path.join(BASE_DIR, 'data')\n    all_data = []\n    all_label = []\n    for h5_name in glob.glob(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048', 'ply_data_%s*.h5'%partition)):\n        # print(f\"h5_name: {h5_name}\")\n        f = h5py.File(h5_name,'r')\n        data = f['data'][:].astype('float32')\n        label = f['label'][:].astype('int64')\n        f.close()\n        all_data.append(data)\n        all_label.append(label)\n    all_data = np.concatenate(all_data, axis=0)\n    all_label = np.concatenate(all_label, axis=0)\n    return all_data, all_label\n\ndef random_point_dropout(pc, max_dropout_ratio=0.875):\n    ''' batch_pc: BxNx3 '''\n    # for b in range(batch_pc.shape[0]):\n    dropout_ratio = np.random.random()*max_dropout_ratio # 0~0.875    \n    drop_idx = np.where(np.random.random((pc.shape[0]))<=dropout_ratio)[0]\n    # print ('use random drop', len(drop_idx))\n\n    if len(drop_idx)>0:\n        pc[drop_idx,:] = pc[0,:] # set to the first point\n    return pc\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, 
high=0.2, size=[3])\n       \n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\ndef jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\nclass ModelNet40(Dataset):\n    def __init__(self, num_points, partition='train'):\n        self.data, self.label = load_data(partition)\n        self.num_points = num_points\n        self.partition = partition        \n\n    def __getitem__(self, item):\n        pointcloud = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.partition == 'train':\n            # pointcloud = random_point_dropout(pointcloud) # open for dgcnn not for our idea  for all\n            pointcloud = translate_pointcloud(pointcloud)\n            np.random.shuffle(pointcloud)\n        return pointcloud, label\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\nif __name__ == '__main__':\n    train = ModelNet40(1024)\n    test = ModelNet40(1024, 'test')\n    # for data, label in train:\n    #     print(data.shape)\n    #     print(label.shape)\n    from torch.utils.data import DataLoader\n    train_loader = DataLoader(ModelNet40(partition='train', num_points=1024), num_workers=4,\n                              batch_size=32, shuffle=True, drop_last=True)\n    for batch_idx, (data, label) in enumerate(train_loader):\n        print(f\"batch_idx: {batch_idx}  | data shape: {data.shape} | ;lable shape: {label.shape}\")\n\n    train_set = ModelNet40(partition='train', num_points=1024)\n    test_set = ModelNet40(partition='test', num_points=1024)\n    print(f\"train_set size {train_set.__len__()}\")\n    print(f\"test_set size {test_set.__len__()}\")\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/helper.py",
    "content": "import torch\nimport torch.nn.functional as F\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1)\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/main.py",
    "content": "\"\"\"\nUsage:\npython main.py --model PointMLP --msg demo\n\"\"\"\nimport argparse\nimport os\nimport logging\nimport datetime\nimport torch\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.optim\nimport torch.utils.data\nimport torch.utils.data.distributed\nfrom torch.utils.data import DataLoader\nimport models as models\nfrom utils import Logger, mkdir_p, progress_bar, save_model, save_args, cal_loss\nfrom data import ModelNet40\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\nimport sklearn.metrics as metrics\nimport numpy as np\n\n\ndef parse_args():\n    \"\"\"Parameters\"\"\"\n    parser = argparse.ArgumentParser('training')\n    parser.add_argument('-c', '--checkpoint', type=str, metavar='PATH',\n                        help='path to save checkpoint (default: checkpoint)')\n    parser.add_argument('--msg', type=str, help='message after checkpoint')\n    parser.add_argument('--batch_size', type=int, default=32, help='batch size in training')\n    parser.add_argument('--model', default='PointNet', help='model name [default: pointnet_cls]')\n    parser.add_argument('--epoch', default=300, type=int, help='number of epoch in training')\n    parser.add_argument('--num_points', type=int, default=1024, help='Point Number')\n    parser.add_argument('--learning_rate', default=0.1, type=float, help='learning rate in training')\n    parser.add_argument('--min_lr', default=0.005, type=float, help='min lr')\n    parser.add_argument('--weight_decay', type=float, default=2e-4, help='decay rate')\n    parser.add_argument('--seed', type=int, help='random seed')\n    parser.add_argument('--workers', default=8, type=int, help='workers')\n    return parser.parse_args()\n\n\ndef main():\n    args = parse_args()\n    if args.seed is None:\n        args.seed = np.random.randint(1, 10000)\n    os.environ[\"HDF5_USE_FILE_LOCKING\"] = \"FALSE\"\n\n    assert torch.cuda.is_available(), \"Please ensure codes are executed in cuda.\"\n 
   device = 'cuda'\n    if args.seed is not None:\n        torch.manual_seed(args.seed)\n        np.random.seed(args.seed)\n        torch.cuda.manual_seed_all(args.seed)\n        torch.cuda.manual_seed(args.seed)\n        torch.set_printoptions(10)\n        torch.backends.cudnn.benchmark = False\n        torch.backends.cudnn.deterministic = True\n        os.environ['PYTHONHASHSEED'] = str(args.seed)\n    time_str = str(datetime.datetime.now().strftime('-%Y%m%d%H%M%S'))\n    if args.msg is None:\n        message = time_str\n    else:\n        message = \"-\" + args.msg\n    args.checkpoint = 'checkpoints/' + args.model + message + '-' + str(args.seed)\n    if not os.path.isdir(args.checkpoint):\n        mkdir_p(args.checkpoint)\n\n    screen_logger = logging.getLogger(\"Model\")\n    screen_logger.setLevel(logging.INFO)\n    formatter = logging.Formatter('%(message)s')\n    file_handler = logging.FileHandler(os.path.join(args.checkpoint, \"out.txt\"))\n    file_handler.setLevel(logging.INFO)\n    file_handler.setFormatter(formatter)\n    screen_logger.addHandler(file_handler)\n\n    def printf(str):\n        screen_logger.info(str)\n        print(str)\n\n    # Model\n    printf(f\"args: {args}\")\n    printf('==> Building model..')\n    net = models.__dict__[args.model]()\n    criterion = cal_loss\n    net = net.to(device)\n    # criterion = criterion.to(device)\n    if device == 'cuda':\n        net = torch.nn.DataParallel(net)\n        cudnn.benchmark = True\n\n    best_test_acc = 0.  
# best test accuracy\n    best_train_acc = 0.\n    best_test_acc_avg = 0.\n    best_train_acc_avg = 0.\n    best_test_loss = float(\"inf\")\n    best_train_loss = float(\"inf\")\n    start_epoch = 0  # start from epoch 0 or last checkpoint epoch\n    optimizer_dict = None\n\n    if not os.path.isfile(os.path.join(args.checkpoint, \"last_checkpoint.pth\")):\n        save_args(args)\n        logger = Logger(os.path.join(args.checkpoint, 'log.txt'), title=\"ModelNet\" + args.model)\n        logger.set_names([\"Epoch-Num\", 'Learning-Rate',\n                          'Train-Loss', 'Train-acc-B', 'Train-acc',\n                          'Valid-Loss', 'Valid-acc-B', 'Valid-acc'])\n    else:\n        printf(f\"Resuming last checkpoint from {args.checkpoint}\")\n        checkpoint_path = os.path.join(args.checkpoint, \"last_checkpoint.pth\")\n        checkpoint = torch.load(checkpoint_path)\n        net.load_state_dict(checkpoint['net'])\n        start_epoch = checkpoint['epoch']\n        best_test_acc = checkpoint['best_test_acc']\n        best_train_acc = checkpoint['best_train_acc']\n        best_test_acc_avg = checkpoint['best_test_acc_avg']\n        best_train_acc_avg = checkpoint['best_train_acc_avg']\n        best_test_loss = checkpoint['best_test_loss']\n        best_train_loss = checkpoint['best_train_loss']\n        logger = Logger(os.path.join(args.checkpoint, 'log.txt'), title=\"ModelNet\" + args.model, resume=True)\n        optimizer_dict = checkpoint['optimizer']\n\n    printf('==> Preparing data..')\n    train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=args.workers,\n                              batch_size=args.batch_size, shuffle=True, drop_last=True)\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=args.workers,\n                             batch_size=args.batch_size // 2, shuffle=False, drop_last=False)\n\n    optimizer = torch.optim.SGD(net.parameters(), 
lr=args.learning_rate, momentum=0.9, weight_decay=args.weight_decay)\n    if optimizer_dict is not None:\n        optimizer.load_state_dict(optimizer_dict)\n    scheduler = CosineAnnealingLR(optimizer, args.epoch, eta_min=args.min_lr, last_epoch=start_epoch - 1)\n\n    for epoch in range(start_epoch, args.epoch):\n        printf('Epoch(%d/%s) Learning Rate %s:' % (epoch + 1, args.epoch, optimizer.param_groups[0]['lr']))\n        train_out = train(net, train_loader, optimizer, criterion, device)  # {\"loss\", \"acc\", \"acc_avg\", \"time\"}\n        test_out = validate(net, test_loader, criterion, device)\n        scheduler.step()\n\n        if test_out[\"acc\"] > best_test_acc:\n            best_test_acc = test_out[\"acc\"]\n            is_best = True\n        else:\n            is_best = False\n\n        best_test_acc = test_out[\"acc\"] if (test_out[\"acc\"] > best_test_acc) else best_test_acc\n        best_train_acc = train_out[\"acc\"] if (train_out[\"acc\"] > best_train_acc) else best_train_acc\n        best_test_acc_avg = test_out[\"acc_avg\"] if (test_out[\"acc_avg\"] > best_test_acc_avg) else best_test_acc_avg\n        best_train_acc_avg = train_out[\"acc_avg\"] if (train_out[\"acc_avg\"] > best_train_acc_avg) else best_train_acc_avg\n        best_test_loss = test_out[\"loss\"] if (test_out[\"loss\"] < best_test_loss) else best_test_loss\n        best_train_loss = train_out[\"loss\"] if (train_out[\"loss\"] < best_train_loss) else best_train_loss\n\n        save_model(\n            net, epoch, path=args.checkpoint, acc=test_out[\"acc\"], is_best=is_best,\n            best_test_acc=best_test_acc,  # best test accuracy\n            best_train_acc=best_train_acc,\n            best_test_acc_avg=best_test_acc_avg,\n            best_train_acc_avg=best_train_acc_avg,\n            best_test_loss=best_test_loss,\n            best_train_loss=best_train_loss,\n            optimizer=optimizer.state_dict()\n        )\n        logger.append([epoch, 
optimizer.param_groups[0]['lr'],\n                       train_out[\"loss\"], train_out[\"acc_avg\"], train_out[\"acc\"],\n                       test_out[\"loss\"], test_out[\"acc_avg\"], test_out[\"acc\"]])\n        printf(\n            f\"Training loss:{train_out['loss']} acc_avg:{train_out['acc_avg']}% acc:{train_out['acc']}% time:{train_out['time']}s\")\n        printf(\n            f\"Testing loss:{test_out['loss']} acc_avg:{test_out['acc_avg']}% \"\n            f\"acc:{test_out['acc']}% time:{test_out['time']}s [best test acc: {best_test_acc}%] \\n\\n\")\n    logger.close()\n\n    printf(f\"++++++++\" * 2 + \"Final results\" + \"++++++++\" * 2)\n    printf(f\"++  Last Train time: {train_out['time']} | Last Test time: {test_out['time']}  ++\")\n    printf(f\"++  Best Train loss: {best_train_loss} | Best Test loss: {best_test_loss}  ++\")\n    printf(f\"++  Best Train acc_B: {best_train_acc_avg} | Best Test acc_B: {best_test_acc_avg}  ++\")\n    printf(f\"++  Best Train acc: {best_train_acc} | Best Test acc: {best_test_acc}  ++\")\n    printf(f\"++++++++\" * 5)\n\n\ndef train(net, trainloader, optimizer, criterion, device):\n    net.train()\n    train_loss = 0\n    correct = 0\n    total = 0\n    train_pred = []\n    train_true = []\n    time_cost = datetime.datetime.now()\n    for batch_idx, (data, label) in enumerate(trainloader):\n        data, label = data.to(device), label.to(device).squeeze()\n        data = data.permute(0, 2, 1)  # so, the input data shape is [batch, 3, 1024]\n        optimizer.zero_grad()\n        logits = net(data)\n        loss = criterion(logits, label)\n        loss.backward()\n        torch.nn.utils.clip_grad_norm_(net.parameters(), 1)\n        optimizer.step()\n        train_loss += loss.item()\n        preds = logits.max(dim=1)[1]\n\n        train_true.append(label.cpu().numpy())\n        train_pred.append(preds.detach().cpu().numpy())\n\n        total += label.size(0)\n        correct += preds.eq(label).sum().item()\n\n        
progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'\n                     % (train_loss / (batch_idx + 1), 100. * correct / total, correct, total))\n\n    time_cost = int((datetime.datetime.now() - time_cost).total_seconds())\n    train_true = np.concatenate(train_true)\n    train_pred = np.concatenate(train_pred)\n    return {\n        \"loss\": float(\"%.3f\" % (train_loss / (batch_idx + 1))),\n        \"acc\": float(\"%.3f\" % (100. * metrics.accuracy_score(train_true, train_pred))),\n        \"acc_avg\": float(\"%.3f\" % (100. * metrics.balanced_accuracy_score(train_true, train_pred))),\n        \"time\": time_cost\n    }\n\n\ndef validate(net, testloader, criterion, device):\n    net.eval()\n    test_loss = 0\n    correct = 0\n    total = 0\n    test_true = []\n    test_pred = []\n    time_cost = datetime.datetime.now()\n    with torch.no_grad():\n        for batch_idx, (data, label) in enumerate(testloader):\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            logits = net(data)\n            loss = criterion(logits, label)\n            test_loss += loss.item()\n            preds = logits.max(dim=1)[1]\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n            total += label.size(0)\n            correct += preds.eq(label).sum().item()\n            progress_bar(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'\n                         % (test_loss / (batch_idx + 1), 100. * correct / total, correct, total))\n\n    time_cost = int((datetime.datetime.now() - time_cost).total_seconds())\n    test_true = np.concatenate(test_true)\n    test_pred = np.concatenate(test_pred)\n    return {\n        \"loss\": float(\"%.3f\" % (test_loss / (batch_idx + 1))),\n        \"acc\": float(\"%.3f\" % (100. * metrics.accuracy_score(test_true, test_pred))),\n        \"acc_avg\": float(\"%.3f\" % (100. 
* metrics.balanced_accuracy_score(test_true, test_pred))),\n        \"time\": time_cost\n    }\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/models/__init__.py",
    "content": "from __future__ import absolute_import\n\nfrom .pointmlp import pointMLP, pointMLPElite\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/models/pointmlp.py",
    "content": "\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n# from torch import einsum\n# from einops import rearrange, repeat\n\nfrom PCT_Pytorch.pointnet2_ops_lib.pointnet2_ops import pointnet2_utils\n# from pointnet2_ops import pointnet2_utils\n\n\ndef get_activation(activation):\n    if activation.lower() == 'gelu':\n        return nn.GELU()\n    elif activation.lower() == 'rrelu':\n        return nn.RReLU(inplace=True)\n    elif activation.lower() == 'selu':\n        return nn.SELU(inplace=True)\n    elif activation.lower() == 'silu':\n        return nn.SiLU(inplace=True)\n    elif activation.lower() == 'hardswish':\n        return nn.Hardswish(inplace=True)\n    elif activation.lower() == 'leakyrelu':\n        return nn.LeakyReLU(inplace=True)\n    else:\n        return nn.ReLU(inplace=True)\n\n\ndef square_distance(src, dst):\n    \"\"\"\n    Calculate Euclid distance between each two points.\n    src^T * dst = xn * xm + yn * ym + zn * zm；\n    sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn;\n    sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm;\n    dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2\n         = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst\n    Input:\n        src: source points, [B, N, C]\n        dst: target points, [B, M, C]\n    Output:\n        dist: per-point square distance, [B, N, M]\n    \"\"\"\n    B, N, _ = src.shape\n    _, M, _ = dst.shape\n    dist = -2 * torch.matmul(src, dst.permute(0, 2, 1))\n    dist += torch.sum(src ** 2, -1).view(B, N, 1)\n    dist += torch.sum(dst ** 2, -1).view(B, 1, M)\n    return dist\n\n\ndef index_points(points, idx):\n    \"\"\"\n    Input:\n        points: input points data, [B, N, C]\n        idx: sample index data, [B, S]\n    Return:\n        new_points:, indexed points data, [B, S, C]\n    \"\"\"\n    device = points.device\n    B = points.shape[0]\n    view_shape = list(idx.shape)\n    view_shape[1:] = [1] * (len(view_shape) - 1)\n    repeat_shape = list(idx.shape)\n    
repeat_shape[0] = 1\n    batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)\n    new_points = points[batch_indices, idx, :]\n    return new_points\n\n\ndef farthest_point_sample(xyz, npoint):\n    \"\"\"\n    Input:\n        xyz: pointcloud data, [B, N, 3]\n        npoint: number of samples\n    Return:\n        centroids: sampled pointcloud index, [B, npoint]\n    \"\"\"\n    device = xyz.device\n    B, N, C = xyz.shape\n    centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)\n    distance = torch.ones(B, N).to(device) * 1e10\n    farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device)\n    batch_indices = torch.arange(B, dtype=torch.long).to(device)\n    for i in range(npoint):\n        centroids[:, i] = farthest\n        centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)\n        dist = torch.sum((xyz - centroid) ** 2, -1)\n        distance = torch.min(distance, dist)\n        farthest = torch.max(distance, -1)[1]\n    return centroids\n\n\ndef query_ball_point(radius, nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        radius: local region radius\n        nsample: max sample number in local region\n        xyz: all points, [B, N, 3]\n        new_xyz: query points, [B, S, 3]\n    Return:\n        group_idx: grouped points index, [B, S, nsample]\n    \"\"\"\n    device = xyz.device\n    B, N, C = xyz.shape\n    _, S, _ = new_xyz.shape\n    group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])\n    sqrdists = square_distance(new_xyz, xyz)\n    group_idx[sqrdists > radius ** 2] = N\n    group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]\n    group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])\n    mask = group_idx == N\n    group_idx[mask] = group_first[mask]\n    return group_idx\n\n\ndef knn_point(nsample, xyz, new_xyz):\n    \"\"\"\n    Input:\n        nsample: max sample number in local region\n        xyz: all points, [B, N, 
C]\n        new_xyz: query points, [B, S, C]\n    Return:\n        group_idx: grouped points index, [B, S, nsample]\n    \"\"\"\n    sqrdists = square_distance(new_xyz, xyz)\n    _, group_idx = torch.topk(sqrdists, nsample, dim=-1, largest=False, sorted=False)\n    return group_idx\n\n\nclass LocalGrouper(nn.Module):\n    def __init__(self, channel, groups, kneighbors, use_xyz=True, normalize=\"center\", **kwargs):\n        \"\"\"\n        Give xyz[b,p,3] and fea[b,p,d], return new_xyz[b,g,3] and new_fea[b,g,k,d]\n        :param groups: groups number\n        :param kneighbors: k-neighbors\n        :param kwargs: others\n        \"\"\"\n        super(LocalGrouper, self).__init__()\n        self.groups = groups\n        self.kneighbors = kneighbors\n        self.use_xyz = use_xyz\n        if normalize is not None:\n            self.normalize = normalize.lower()\n        else:\n            self.normalize = None\n        if self.normalize not in [\"center\", \"anchor\"]:\n            print(f\"Unrecognized normalize parameter {self.normalize}, set to None. 
Should be one of [center, anchor].\")\n            self.normalize = None\n        if self.normalize is not None:\n            add_channel=3 if self.use_xyz else 0\n            self.affine_alpha = nn.Parameter(torch.ones([1,1,1,channel + add_channel]))\n            self.affine_beta = nn.Parameter(torch.zeros([1, 1, 1, channel + add_channel]))\n\n    def forward(self, xyz, points):\n        B, N, C = xyz.shape\n        S = self.groups\n        xyz = xyz.contiguous()  # xyz [batch, points, xyz]\n\n        # fps_idx = torch.multinomial(torch.linspace(0, N - 1, steps=N).repeat(B, 1).to(xyz.device), num_samples=self.groups, replacement=False).long()\n        # fps_idx = farthest_point_sample(xyz, self.groups).long()\n        fps_idx = pointnet2_utils.furthest_point_sample(xyz, self.groups).long()  # [B, npoint]\n        new_xyz = index_points(xyz, fps_idx)  # [B, npoint, 3]\n        new_points = index_points(points, fps_idx)  # [B, npoint, d]\n\n        idx = knn_point(self.kneighbors, xyz, new_xyz)\n        # idx = query_ball_point(radius, nsample, xyz, new_xyz)\n        grouped_xyz = index_points(xyz, idx)  # [B, npoint, k, 3]\n        grouped_points = index_points(points, idx)  # [B, npoint, k, d]\n        if self.use_xyz:\n            grouped_points = torch.cat([grouped_points, grouped_xyz],dim=-1)  # [B, npoint, k, d+3]\n        if self.normalize is not None:\n            if self.normalize ==\"center\":\n                mean = torch.mean(grouped_points, dim=2, keepdim=True)\n            if self.normalize ==\"anchor\":\n                mean = torch.cat([new_points, new_xyz],dim=-1) if self.use_xyz else new_points\n                mean = mean.unsqueeze(dim=-2)  # [B, npoint, 1, d+3]\n            std = torch.std((grouped_points-mean).reshape(B,-1),dim=-1,keepdim=True).unsqueeze(dim=-1).unsqueeze(dim=-1)\n            grouped_points = (grouped_points-mean)/(std + 1e-5)\n            grouped_points = self.affine_alpha*grouped_points + self.affine_beta\n\n        new_points 
= torch.cat([grouped_points, new_points.view(B, S, 1, -1).repeat(1, 1, self.kneighbors, 1)], dim=-1)\n        return new_xyz, new_points\n\n\nclass ConvBNReLU1D(nn.Module):\n    def __init__(self, in_channels, out_channels, kernel_size=1, bias=True, activation='relu'):\n        super(ConvBNReLU1D, self).__init__()\n        self.act = get_activation(activation)\n        self.net = nn.Sequential(\n            nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, bias=bias),\n            nn.BatchNorm1d(out_channels),\n            self.act\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\nclass ConvBNReLURes1D(nn.Module):\n    def __init__(self, channel, kernel_size=1, groups=1, res_expansion=1.0, bias=True, activation='relu'):\n        super(ConvBNReLURes1D, self).__init__()\n        self.act = get_activation(activation)\n        self.net1 = nn.Sequential(\n            nn.Conv1d(in_channels=channel, out_channels=int(channel * res_expansion),\n                      kernel_size=kernel_size, groups=groups, bias=bias),\n            nn.BatchNorm1d(int(channel * res_expansion)),\n            self.act\n        )\n        if groups > 1:\n            self.net2 = nn.Sequential(\n                nn.Conv1d(in_channels=int(channel * res_expansion), out_channels=channel,\n                          kernel_size=kernel_size, groups=groups, bias=bias),\n                nn.BatchNorm1d(channel),\n                self.act,\n                nn.Conv1d(in_channels=channel, out_channels=channel,\n                          kernel_size=kernel_size, bias=bias),\n                nn.BatchNorm1d(channel),\n            )\n        else:\n            self.net2 = nn.Sequential(\n                nn.Conv1d(in_channels=int(channel * res_expansion), out_channels=channel,\n                          kernel_size=kernel_size, bias=bias),\n                nn.BatchNorm1d(channel)\n            )\n\n    def forward(self, x):\n        return 
self.act(self.net2(self.net1(x)) + x)\n\n\nclass PreExtraction(nn.Module):\n    def __init__(self, channels, out_channels,  blocks=1, groups=1, res_expansion=1, bias=True,\n                 activation='relu', use_xyz=True):\n        \"\"\"\n        input: [b,g,k,d]: output:[b,d,g]\n        :param channels:\n        :param blocks:\n        \"\"\"\n        super(PreExtraction, self).__init__()\n        in_channels = 3+2*channels if use_xyz else 2*channels\n        self.transfer = ConvBNReLU1D(in_channels, out_channels, bias=bias, activation=activation)\n        operation = []\n        for _ in range(blocks):\n            operation.append(\n                ConvBNReLURes1D(out_channels, groups=groups, res_expansion=res_expansion,\n                                bias=bias, activation=activation)\n            )\n        self.operation = nn.Sequential(*operation)\n\n    def forward(self, x):\n        b, n, s, d = x.size()  # torch.Size([32, 512, 32, 6])\n        x = x.permute(0, 1, 3, 2)\n        x = x.reshape(-1, d, s)\n        x = self.transfer(x)\n        batch_size, _, _ = x.size()\n        x = self.operation(x)  # [b, d, k]\n        x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)\n        x = x.reshape(b, n, -1).permute(0, 2, 1)\n        return x\n\n\nclass PosExtraction(nn.Module):\n    def __init__(self, channels, blocks=1, groups=1, res_expansion=1, bias=True, activation='relu'):\n        \"\"\"\n        input[b,d,g]; output[b,d,g]\n        :param channels:\n        :param blocks:\n        \"\"\"\n        super(PosExtraction, self).__init__()\n        operation = []\n        for _ in range(blocks):\n            operation.append(\n                ConvBNReLURes1D(channels, groups=groups, res_expansion=res_expansion, bias=bias, activation=activation)\n            )\n        self.operation = nn.Sequential(*operation)\n\n    def forward(self, x):  # [b, d, g]\n        return self.operation(x)\n\n\nclass Model(nn.Module):\n    def __init__(self, points=1024, 
class_num=40, embed_dim=64, groups=1, res_expansion=1.0,\n                 activation=\"relu\", bias=True, use_xyz=True, normalize=\"center\",\n                 dim_expansion=[2, 2, 2, 2], pre_blocks=[2, 2, 2, 2], pos_blocks=[2, 2, 2, 2],\n                 k_neighbors=[32, 32, 32, 32], reducers=[2, 2, 2, 2], **kwargs):\n        super(Model, self).__init__()\n        self.stages = len(pre_blocks)\n        self.class_num = class_num\n        self.points = points\n        self.embedding = ConvBNReLU1D(3, embed_dim, bias=bias, activation=activation)\n        assert len(pre_blocks) == len(k_neighbors) == len(reducers) == len(pos_blocks) == len(dim_expansion), \\\n            \"Please check that stage numbers are consistent for pre_blocks, pos_blocks, k_neighbors, reducers.\"\n        self.local_grouper_list = nn.ModuleList()\n        self.pre_blocks_list = nn.ModuleList()\n        self.pos_blocks_list = nn.ModuleList()\n        last_channel = embed_dim\n        anchor_points = self.points\n        for i in range(len(pre_blocks)):\n            out_channel = last_channel * dim_expansion[i]\n            pre_block_num = pre_blocks[i]\n            pos_block_num = pos_blocks[i]\n            kneighbor = k_neighbors[i]\n            reduce = reducers[i]\n            anchor_points = anchor_points // reduce\n            # append local_grouper_list\n            local_grouper = LocalGrouper(last_channel, anchor_points, kneighbor, use_xyz, normalize)  # [b,g,k,d]\n            self.local_grouper_list.append(local_grouper)\n            # append pre_block_list\n            pre_block_module = PreExtraction(last_channel, out_channel, pre_block_num, groups=groups,\n                                             res_expansion=res_expansion,\n                                             bias=bias, activation=activation, use_xyz=use_xyz)\n            self.pre_blocks_list.append(pre_block_module)\n            # append pos_block_list\n            pos_block_module = PosExtraction(out_channel, pos_block_num, 
groups=groups,\n                                             res_expansion=res_expansion, bias=bias, activation=activation)\n            self.pos_blocks_list.append(pos_block_module)\n\n            last_channel = out_channel\n\n        self.act = get_activation(activation)\n        self.classifier = nn.Sequential(\n            nn.Linear(last_channel, 512),\n            nn.BatchNorm1d(512),\n            self.act,\n            nn.Dropout(0.5),\n            nn.Linear(512, 256),\n            nn.BatchNorm1d(256),\n            self.act,\n            nn.Dropout(0.5),\n            nn.Linear(256, self.class_num)\n        )\n\n    def forward(self, x):\n        xyz = x.permute(0, 2, 1)\n        batch_size, _, _ = x.size()\n        x = self.embedding(x)  # B,D,N\n        for i in range(self.stages):\n            # Give xyz[b, p, 3] and fea[b, p, d], return new_xyz[b, g, 3] and new_fea[b, g, k, d]\n            xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))  # [b,g,3]  [b,g,k,d]\n            x = self.pre_blocks_list[i](x)  # [b,d,g]\n            x = self.pos_blocks_list[i](x)  # [b,d,g]\n\n        x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)\n        x = self.classifier(x)\n        return x\n\n\n\n\ndef pointMLP(num_classes=40, **kwargs) -> Model:\n    return Model(points=1024, class_num=num_classes, embed_dim=64, groups=1, res_expansion=1.0,\n                   activation=\"relu\", bias=False, use_xyz=False, normalize=\"anchor\",\n                   dim_expansion=[2, 2, 2, 2], pre_blocks=[2, 2, 2, 2], pos_blocks=[2, 2, 2, 2],\n                   k_neighbors=[24, 24, 24, 24], reducers=[2, 2, 2, 2], **kwargs)\n\n\ndef pointMLPElite(num_classes=40, **kwargs) -> Model:\n    return Model(points=1024, class_num=num_classes, embed_dim=32, groups=1, res_expansion=0.25,\n                   activation=\"relu\", bias=False, use_xyz=False, normalize=\"anchor\",\n                   dim_expansion=[2, 2, 2, 1], pre_blocks=[1, 1, 2, 1], pos_blocks=[1, 1, 2, 1],\n              
     k_neighbors=[24,24,24,24], reducers=[2, 2, 2, 2], **kwargs)\n\nif __name__ == '__main__':\n    data = torch.rand(2, 3, 1024)\n    print(\"===> testing pointMLP ...\")\n    model = pointMLP()\n    out = model(data)\n    print(out.shape)\n\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/__init__.py",
    "content": "\"\"\"Useful utils\n\"\"\"\nfrom .misc import *\nfrom .logger import *\nfrom .progress.progress.bar import Bar as Bar\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/logger.py",
    "content": "# A simple torch style logger\n# (C) Wei YANG 2017\nfrom __future__ import absolute_import\nimport matplotlib.pyplot as plt\nimport os\nimport sys\nimport numpy as np\n\n__all__ = ['Logger', 'LoggerMonitor', 'savefig']\n\ndef savefig(fname, dpi=None):\n    dpi = 150 if dpi == None else dpi\n    plt.savefig(fname, dpi=dpi)\n    \ndef plot_overlap(logger, names=None):\n    names = logger.names if names == None else names\n    numbers = logger.numbers\n    for _, name in enumerate(names):\n        x = np.arange(len(numbers[name]))\n        plt.plot(x, np.asarray(numbers[name]))\n    return [logger.title + '(' + name + ')' for name in names]\n\nclass Logger(object):\n    '''Save training process to log file with simple plot function.'''\n    def __init__(self, fpath, title=None, resume=False): \n        self.file = None\n        self.resume = resume\n        self.title = '' if title == None else title\n        if fpath is not None:\n            if resume: \n                self.file = open(fpath, 'r') \n                name = self.file.readline()\n                self.names = name.rstrip().split('\\t')\n                self.numbers = {}\n                for _, name in enumerate(self.names):\n                    self.numbers[name] = []\n\n                for numbers in self.file:\n                    numbers = numbers.rstrip().split('\\t')\n                    for i in range(0, len(numbers)):\n                        self.numbers[self.names[i]].append(numbers[i])\n                self.file.close()\n                self.file = open(fpath, 'a')  \n            else:\n                self.file = open(fpath, 'w')\n\n    def set_names(self, names):\n        if self.resume: \n            pass\n        # initialize numbers as empty list\n        self.numbers = {}\n        self.names = names\n        for _, name in enumerate(self.names):\n            self.file.write(name)\n            self.file.write('\\t')\n            self.numbers[name] = []\n        
self.file.write('\\n')\n        self.file.flush()\n\n\n    def append(self, numbers):\n        assert len(self.names) == len(numbers), 'Numbers do not match names'\n        for index, num in enumerate(numbers):\n            self.file.write(\"{0:.6f}\".format(num))\n            self.file.write('\\t')\n            self.numbers[self.names[index]].append(num)\n        self.file.write('\\n')\n        self.file.flush()\n\n    def plot(self, names=None):   \n        names = self.names if names == None else names\n        numbers = self.numbers\n        for _, name in enumerate(names):\n            x = np.arange(len(numbers[name]))\n            plt.plot(x, np.asarray(numbers[name]))\n        plt.legend([self.title + '(' + name + ')' for name in names])\n        plt.grid(True)\n\n    def close(self):\n        if self.file is not None:\n            self.file.close()\n\nclass LoggerMonitor(object):\n    '''Load and visualize multiple logs.'''\n    def __init__(self, paths):\n        '''paths is a dictionary with {name: filepath} pairs'''\n        self.loggers = []\n        for title, path in paths.items():\n            logger = Logger(path, title=title, resume=True)\n            self.loggers.append(logger)\n\n    def plot(self, names=None):\n        plt.figure()\n        plt.subplot(121)\n        legend_text = []\n        for logger in self.loggers:\n            legend_text += plot_overlap(logger, names)\n        plt.legend(legend_text, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n        plt.grid(True)\n                    \nif __name__ == '__main__':\n    # # Example\n    # logger = Logger('test.txt')\n    # logger.set_names(['Train loss', 'Valid loss','Test loss'])\n\n    # length = 100\n    # t = np.arange(length)\n    # train_loss = np.exp(-t / 10.0) + np.random.rand(length) * 0.1\n    # valid_loss = np.exp(-t / 10.0) + np.random.rand(length) * 0.1\n    # test_loss = np.exp(-t / 10.0) + np.random.rand(length) * 0.1\n\n    # for i in range(0, length):\n    #     
logger.append([train_loss[i], valid_loss[i], test_loss[i]])\n    # logger.plot()\n\n    # Example: logger monitor\n    paths = {\n    'resadvnet20':'/home/wyang/code/pytorch-classification/checkpoint/cifar10/resadvnet20/log.txt', \n    'resadvnet32':'/home/wyang/code/pytorch-classification/checkpoint/cifar10/resadvnet32/log.txt',\n    'resadvnet44':'/home/wyang/code/pytorch-classification/checkpoint/cifar10/resadvnet44/log.txt',\n    }\n\n    field = ['Valid Acc.']\n\n    monitor = LoggerMonitor(paths)\n    monitor.plot(names=field)\n    savefig('test.eps')"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/misc.py",
    "content": "'''Some helper functions for PyTorch, including:\n    - get_mean_and_std: calculate the mean and std value of dataset.\n    - msr_init: net parameter initialization.\n    - progress_bar: progress bar mimic xlua.progress.\n'''\nimport errno\nimport os\nimport sys\nimport time\nimport math\nimport torch\nimport shutil\nimport numpy as np\nimport random\nimport torch.nn.functional as F\n\n\nimport torch.nn as nn\nimport torch.nn.init as init\nfrom torch.autograd import Variable\n\n__all__ = ['get_mean_and_std', 'init_params', 'mkdir_p', 'AverageMeter',\n           'progress_bar','save_model',\"save_args\",\"set_seed\", \"IOStream\", \"cal_loss\"]\n\n\ndef get_mean_and_std(dataset):\n    '''Compute the mean and std value of dataset.'''\n    dataloader = trainloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=2)\n\n    mean = torch.zeros(3)\n    std = torch.zeros(3)\n    print('==> Computing mean and std..')\n    for inputs, targets in dataloader:\n        for i in range(3):\n            mean[i] += inputs[:,i,:,:].mean()\n            std[i] += inputs[:,i,:,:].std()\n    mean.div_(len(dataset))\n    std.div_(len(dataset))\n    return mean, std\n\ndef init_params(net):\n    '''Init layer parameters.'''\n    for m in net.modules():\n        if isinstance(m, nn.Conv2d):\n            init.kaiming_normal(m.weight, mode='fan_out')\n            if m.bias:\n                init.constant(m.bias, 0)\n        elif isinstance(m, nn.BatchNorm2d):\n            init.constant(m.weight, 1)\n            init.constant(m.bias, 0)\n        elif isinstance(m, nn.Linear):\n            init.normal(m.weight, std=1e-3)\n            if m.bias:\n                init.constant(m.bias, 0)\n\ndef mkdir_p(path):\n    '''make dir if not exist'''\n    try:\n        os.makedirs(path)\n    except OSError as exc:  # Python >2.5\n        if exc.errno == errno.EEXIST and os.path.isdir(path):\n            pass\n        else:\n            raise\n\nclass 
AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\n       Imported from https://github.com/pytorch/examples/blob/master/imagenet/main.py#L247-L262\n    \"\"\"\n    def __init__(self):\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        self.val = val\n        self.sum += val * n\n        self.count += n\n        self.avg = self.sum / self.count\n\n\n\nTOTAL_BAR_LENGTH = 65.\nlast_time = time.time()\nbegin_time = last_time\ndef progress_bar(current, total, msg=None):\n    global last_time, begin_time\n    if current == 0:\n        begin_time = time.time()  # Reset for new bar.\n\n    cur_len = int(TOTAL_BAR_LENGTH*current/total)\n    rest_len = int(TOTAL_BAR_LENGTH - cur_len) - 1\n\n    sys.stdout.write(' [')\n    for i in range(cur_len):\n        sys.stdout.write('=')\n    sys.stdout.write('>')\n    for i in range(rest_len):\n        sys.stdout.write('.')\n    sys.stdout.write(']')\n\n    cur_time = time.time()\n    step_time = cur_time - last_time\n    last_time = cur_time\n    tot_time = cur_time - begin_time\n\n    L = []\n    L.append('  Step: %s' % format_time(step_time))\n    L.append(' | Tot: %s' % format_time(tot_time))\n    if msg:\n        L.append(' | ' + msg)\n\n    msg = ''.join(L)\n    sys.stdout.write(msg)\n    # for i in range(term_width-int(TOTAL_BAR_LENGTH)-len(msg)-3):\n    #     sys.stdout.write(' ')\n\n    # Go back to the center of the bar.\n    # for i in range(term_width-int(TOTAL_BAR_LENGTH/2)+2):\n    #     sys.stdout.write('\\b')\n    sys.stdout.write(' %d/%d ' % (current+1, total))\n\n    if current < total-1:\n        sys.stdout.write('\\r')\n    else:\n        sys.stdout.write('\\n')\n    sys.stdout.flush()\n\n\ndef format_time(seconds):\n    days = int(seconds / 3600/24)\n    seconds = seconds - days*3600*24\n    hours = int(seconds / 3600)\n    seconds = seconds - 
hours*3600\n    minutes = int(seconds / 60)\n    seconds = seconds - minutes*60\n    secondsf = int(seconds)\n    seconds = seconds - secondsf\n    millis = int(seconds*1000)\n\n    f = ''\n    i = 1\n    if days > 0:\n        f += str(days) + 'D'\n        i += 1\n    if hours > 0 and i <= 2:\n        f += str(hours) + 'h'\n        i += 1\n    if minutes > 0 and i <= 2:\n        f += str(minutes) + 'm'\n        i += 1\n    if secondsf > 0 and i <= 2:\n        f += str(secondsf) + 's'\n        i += 1\n    if millis > 0 and i <= 2:\n        f += str(millis) + 'ms'\n        i += 1\n    if f == '':\n        f = '0ms'\n    return f\n\n\ndef save_model(net, epoch, path, acc, is_best, **kwargs):\n    state = {\n        'net': net.state_dict(),\n        'epoch': epoch,\n        'acc': acc\n    }\n    for key, value in kwargs.items():\n        state[key] = value\n    filepath = os.path.join(path, \"last_checkpoint.pth\")\n    torch.save(state, filepath)\n    if is_best:\n        shutil.copyfile(filepath, os.path.join(path, 'best_checkpoint.pth'))\n\n\n\ndef save_args(args):\n    file = open(os.path.join(args.checkpoint, 'args.txt'), \"w\")\n    for k, v in vars(args).items():\n        file.write(f\"{k}:\\t {v}\\n\")\n    file.close()\n\n\n\ndef set_seed(seed=None):\n    if seed is None:\n        return\n    random.seed(seed)\n    os.environ['PYTHONHASHSEED'] = (\"%s\" % seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    torch.backends.cudnn.benchmark = False\n    torch.backends.cudnn.deterministic = True\n\n\n\n# create a file and write the text into it\nclass IOStream():\n    def __init__(self, path):\n        self.f = open(path, 'a')\n\n    def cprint(self, text):\n        print(text)\n        self.f.write(text+'\\n')\n        self.f.flush()\n\n    def close(self):\n        self.f.close()\n\n\ndef cal_loss(pred, gold, smoothing=True):\n    ''' Calculate cross entropy loss, apply label 
smoothing if needed. '''\n\n    gold = gold.contiguous().view(-1)\n\n    if smoothing:\n        eps = 0.2\n        n_class = pred.size(1)\n\n        one_hot = torch.zeros_like(pred).scatter(1, gold.view(-1, 1), 1)\n        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)\n        log_prb = F.log_softmax(pred, dim=1)\n\n        loss = -(one_hot * log_prb).sum(dim=1).mean()\n    else:\n        loss = F.cross_entropy(pred, gold, reduction='mean')\n\n    return loss\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/.gitignore",
    "content": "*.pyc\n*.egg-info\nbuild/\ndist/\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/LICENSE",
    "content": "# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/MANIFEST.in",
    "content": "include README.rst LICENSE\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/README.rst",
    "content": "Easy progress reporting for Python\n==================================\n\n|pypi|\n\n|demo|\n\n.. |pypi| image:: https://img.shields.io/pypi/v/progress.svg\n.. |demo| image:: https://raw.github.com/verigak/progress/master/demo.gif\n   :alt: Demo\n\nBars\n----\n\nThere are 7 progress bars to choose from:\n\n- ``Bar``\n- ``ChargingBar``\n- ``FillingSquaresBar``\n- ``FillingCirclesBar``\n- ``IncrementalBar``\n- ``PixelBar``\n- ``ShadyBar``\n\nTo use them, just call ``next`` to advance and ``finish`` to finish:\n\n.. code-block:: python\n\n    from progress.bar import Bar\n\n    bar = Bar('Processing', max=20)\n    for i in range(20):\n        # Do some work\n        bar.next()\n    bar.finish()\n\nThe result will be a bar like the following: ::\n\n    Processing |#############                   | 42/100\n\nTo simplify the common case where the work is done in an iterator, you can\nuse the ``iter`` method:\n\n.. code-block:: python\n\n    for i in Bar('Processing').iter(it):\n        # Do some work\n\nProgress bars are very customizable, you can change their width, their fill\ncharacter, their suffix and more:\n\n.. 
code-block:: python\n\n    bar = Bar('Loading', fill='@', suffix='%(percent)d%%')\n\nThis will produce a bar like the following: ::\n\n    Loading |@@@@@@@@@@@@@                   | 42%\n\nYou can use a number of template arguments in ``message`` and ``suffix``:\n\n==========  ================================\nName        Value\n==========  ================================\nindex       current value\nmax         maximum value\nremaining   max - index\nprogress    index / max\npercent     progress * 100\navg         simple moving average time per item (in seconds)\nelapsed     elapsed time in seconds\nelapsed_td  elapsed as a timedelta (useful for printing as a string)\neta         avg * remaining\neta_td      eta as a timedelta (useful for printing as a string)\n==========  ================================\n\nInstead of passing all configuration options on instantiation, you can create\nyour custom subclass:\n\n.. code-block:: python\n\n    class FancyBar(Bar):\n        message = 'Loading'\n        fill = '*'\n        suffix = '%(percent).1f%% - %(eta)ds'\n\nYou can also override any of the arguments or create your own:\n\n.. code-block:: python\n\n    class SlowBar(Bar):\n        suffix = '%(remaining_hours)d hours remaining'\n        @property\n        def remaining_hours(self):\n            return self.eta // 3600\n\n\nSpinners\n========\n\nFor actions with an unknown number of steps you can use a spinner:\n\n.. code-block:: python\n\n    from progress.spinner import Spinner\n\n    spinner = Spinner('Loading ')\n    while state != 'FINISHED':\n        # Do some work\n        spinner.next()\n\nThere are 5 predefined spinners:\n\n- ``Spinner``\n- ``PieSpinner``\n- ``MoonSpinner``\n- ``LineSpinner``\n- ``PixelSpinner``\n\n\nOther\n=====\n\nThere are a number of other classes available too, please check the source or\nsubclass one of them to create your own.\n\n\nLicense\n=======\n\nprogress is licensed under ISC\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/progress/__init__.py",
    "content": "# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\nfrom __future__ import division\n\nfrom collections import deque\nfrom datetime import timedelta\nfrom math import ceil\nfrom sys import stderr\nfrom time import time\n\n\n__version__ = '1.3'\n\n\nclass Infinite(object):\n    file = stderr\n    sma_window = 10         # Simple Moving Average window\n\n    def __init__(self, *args, **kwargs):\n        self.index = 0\n        self.start_ts = time()\n        self.avg = 0\n        self._ts = self.start_ts\n        self._xput = deque(maxlen=self.sma_window)\n        for key, val in kwargs.items():\n            setattr(self, key, val)\n\n    def __getitem__(self, key):\n        if key.startswith('_'):\n            return None\n        return getattr(self, key, None)\n\n    @property\n    def elapsed(self):\n        return int(time() - self.start_ts)\n\n    @property\n    def elapsed_td(self):\n        return timedelta(seconds=self.elapsed)\n\n    def update_avg(self, n, dt):\n        if n > 0:\n            self._xput.append(dt / n)\n            self.avg = sum(self._xput) / len(self._xput)\n\n    def update(self):\n        pass\n\n    def start(self):\n        pass\n\n    def finish(self):\n        pass\n\n    def next(self, 
n=1):\n        now = time()\n        dt = now - self._ts\n        self.update_avg(n, dt)\n        self._ts = now\n        self.index = self.index + n\n        self.update()\n\n    def iter(self, it):\n        try:\n            for x in it:\n                yield x\n                self.next()\n        finally:\n            self.finish()\n\n\nclass Progress(Infinite):\n    def __init__(self, *args, **kwargs):\n        super(Progress, self).__init__(*args, **kwargs)\n        self.max = kwargs.get('max', 100)\n\n    @property\n    def eta(self):\n        return int(ceil(self.avg * self.remaining))\n\n    @property\n    def eta_td(self):\n        return timedelta(seconds=self.eta)\n\n    @property\n    def percent(self):\n        return self.progress * 100\n\n    @property\n    def progress(self):\n        return min(1, self.index / self.max)\n\n    @property\n    def remaining(self):\n        return max(self.max - self.index, 0)\n\n    def start(self):\n        self.update()\n\n    def goto(self, index):\n        incr = index - self.index\n        self.next(incr)\n\n    def iter(self, it):\n        try:\n            self.max = len(it)\n        except TypeError:\n            pass\n\n        try:\n            for x in it:\n                yield x\n                self.next()\n        finally:\n            self.finish()\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/progress/bar.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\nfrom __future__ import unicode_literals\nfrom . import Progress\nfrom .helpers import WritelnMixin\n\n\nclass Bar(WritelnMixin, Progress):\n    width = 32\n    message = ''\n    suffix = '%(index)d/%(max)d'\n    bar_prefix = ' |'\n    bar_suffix = '| '\n    empty_fill = ' '\n    fill = '#'\n    hide_cursor = True\n\n    def update(self):\n        filled_length = int(self.width * self.progress)\n        empty_length = self.width - filled_length\n\n        message = self.message % self\n        bar = self.fill * filled_length\n        empty = self.empty_fill * empty_length\n        suffix = self.suffix % self\n        line = ''.join([message, self.bar_prefix, bar, empty, self.bar_suffix,\n                        suffix])\n        self.writeln(line)\n\n\nclass ChargingBar(Bar):\n    suffix = '%(percent)d%%'\n    bar_prefix = ' '\n    bar_suffix = ' '\n    empty_fill = '∙'\n    fill = '█'\n\n\nclass FillingSquaresBar(ChargingBar):\n    empty_fill = '▢'\n    fill = '▣'\n\n\nclass FillingCirclesBar(ChargingBar):\n    empty_fill = '◯'\n    fill = '◉'\n\n\nclass IncrementalBar(Bar):\n    phases = (' ', '▏', '▎', '▍', '▌', '▋', '▊', '▉', '█')\n\n    def 
update(self):\n        nphases = len(self.phases)\n        filled_len = self.width * self.progress\n        nfull = int(filled_len)                      # Number of full chars\n        phase = int((filled_len - nfull) * nphases)  # Phase of last char\n        nempty = self.width - nfull                  # Number of empty chars\n\n        message = self.message % self\n        bar = self.phases[-1] * nfull\n        current = self.phases[phase] if phase > 0 else ''\n        empty = self.empty_fill * max(0, nempty - len(current))\n        suffix = self.suffix % self\n        line = ''.join([message, self.bar_prefix, bar, current, empty,\n                        self.bar_suffix, suffix])\n        self.writeln(line)\n\n\nclass PixelBar(IncrementalBar):\n    phases = ('⡀', '⡄', '⡆', '⡇', '⣇', '⣧', '⣷', '⣿')\n\n\nclass ShadyBar(IncrementalBar):\n    phases = (' ', '░', '▒', '▓', '█')\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/progress/counter.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\nfrom __future__ import unicode_literals\nfrom . import Infinite, Progress\nfrom .helpers import WriteMixin\n\n\nclass Counter(WriteMixin, Infinite):\n    message = ''\n    hide_cursor = True\n\n    def update(self):\n        self.write(str(self.index))\n\n\nclass Countdown(WriteMixin, Progress):\n    hide_cursor = True\n\n    def update(self):\n        self.write(str(self.remaining))\n\n\nclass Stack(WriteMixin, Progress):\n    phases = (' ', '▁', '▂', '▃', '▄', '▅', '▆', '▇', '█')\n    hide_cursor = True\n\n    def update(self):\n        nphases = len(self.phases)\n        i = min(nphases - 1, int(self.progress * nphases))\n        self.write(self.phases[i])\n\n\nclass Pie(Stack):\n    phases = ('○', '◔', '◑', '◕', '●')\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/progress/helpers.py",
    "content": "# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\nfrom __future__ import print_function\n\n\nHIDE_CURSOR = '\\x1b[?25l'\nSHOW_CURSOR = '\\x1b[?25h'\n\n\nclass WriteMixin(object):\n    hide_cursor = False\n\n    def __init__(self, message=None, **kwargs):\n        super(WriteMixin, self).__init__(**kwargs)\n        self._width = 0\n        if message:\n            self.message = message\n\n        if self.file.isatty():\n            if self.hide_cursor:\n                print(HIDE_CURSOR, end='', file=self.file)\n            print(self.message, end='', file=self.file)\n            self.file.flush()\n\n    def write(self, s):\n        if self.file.isatty():\n            b = '\\b' * self._width\n            c = s.ljust(self._width)\n            print(b + c, end='', file=self.file)\n            self._width = max(self._width, len(s))\n            self.file.flush()\n\n    def finish(self):\n        if self.file.isatty() and self.hide_cursor:\n            print(SHOW_CURSOR, end='', file=self.file)\n\n\nclass WritelnMixin(object):\n    hide_cursor = False\n\n    def __init__(self, message=None, **kwargs):\n        super(WritelnMixin, self).__init__(**kwargs)\n        if message:\n            self.message = message\n\n        if 
self.file.isatty() and self.hide_cursor:\n            print(HIDE_CURSOR, end='', file=self.file)\n\n    def clearln(self):\n        if self.file.isatty():\n            print('\\r\\x1b[K', end='', file=self.file)\n\n    def writeln(self, line):\n        if self.file.isatty():\n            self.clearln()\n            print(line, end='', file=self.file)\n            self.file.flush()\n\n    def finish(self):\n        if self.file.isatty():\n            print(file=self.file)\n            if self.hide_cursor:\n                print(SHOW_CURSOR, end='', file=self.file)\n\n\nfrom signal import signal, SIGINT\nfrom sys import exit\n\n\nclass SigIntMixin(object):\n    \"\"\"Registers a signal handler that calls finish on SIGINT\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super(SigIntMixin, self).__init__(*args, **kwargs)\n        signal(SIGINT, self._sigint_handler)\n\n    def _sigint_handler(self, signum, frame):\n        self.finish()\n        exit(0)\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/progress/spinner.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2012 Giorgos Verigakis <verigak@gmail.com>\n#\n# Permission to use, copy, modify, and distribute this software for any\n# purpose with or without fee is hereby granted, provided that the above\n# copyright notice and this permission notice appear in all copies.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\nfrom __future__ import unicode_literals\nfrom . import Infinite\nfrom .helpers import WriteMixin\n\n\nclass Spinner(WriteMixin, Infinite):\n    message = ''\n    phases = ('-', '\\\\', '|', '/')\n    hide_cursor = True\n\n    def update(self):\n        i = self.index % len(self.phases)\n        self.write(self.phases[i])\n\n\nclass PieSpinner(Spinner):\n    phases = ['◷', '◶', '◵', '◴']\n\n\nclass MoonSpinner(Spinner):\n    phases = ['◑', '◒', '◐', '◓']\n\n\nclass LineSpinner(Spinner):\n    phases = ['⎺', '⎻', '⎼', '⎽', '⎼', '⎻']\n\nclass PixelSpinner(Spinner):\n    phases = ['⣾','⣷', '⣯', '⣟', '⡿', '⢿', '⣻', '⣽']\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/setup.py",
    "content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport progress\n\n\nsetup(\n    name='progress',\n    version=progress.__version__,\n    description='Easy to use progress bars',\n    long_description=open('README.rst').read(),\n    author='Giorgos Verigakis',\n    author_email='verigak@gmail.com',\n    url='http://github.com/verigak/progress/',\n    license='ISC',\n    packages=['progress'],\n    classifiers=[\n        'Environment :: Console',\n        'Intended Audience :: Developers',\n        'License :: OSI Approved :: ISC License (ISCL)',\n        'Programming Language :: Python :: 2.6',\n        'Programming Language :: Python :: 2.7',\n        'Programming Language :: Python :: 3.3',\n        'Programming Language :: Python :: 3.4',\n        'Programming Language :: Python :: 3.5',\n        'Programming Language :: Python :: 3.6',\n    ]\n)\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/utils/progress/test_progress.py",
    "content": "#!/usr/bin/env python\n\nfrom __future__ import print_function\n\nimport random\nimport time\n\nfrom progress.bar import (Bar, ChargingBar, FillingSquaresBar,\n                          FillingCirclesBar, IncrementalBar, PixelBar,\n                          ShadyBar)\nfrom progress.spinner import (Spinner, PieSpinner, MoonSpinner, LineSpinner,\n                              PixelSpinner)\nfrom progress.counter import Counter, Countdown, Stack, Pie\n\n\ndef sleep():\n    t = 0.01\n    t += t * random.uniform(-0.1, 0.1)  # Add some variance\n    time.sleep(t)\n\n\nfor bar_cls in (Bar, ChargingBar, FillingSquaresBar, FillingCirclesBar):\n    suffix = '%(index)d/%(max)d [%(elapsed)d / %(eta)d / %(eta_td)s]'\n    bar = bar_cls(bar_cls.__name__, suffix=suffix)\n    for i in bar.iter(range(200)):\n        sleep()\n\nfor bar_cls in (IncrementalBar, PixelBar, ShadyBar):\n    suffix = '%(percent)d%% [%(elapsed_td)s / %(eta)d / %(eta_td)s]'\n    bar = bar_cls(bar_cls.__name__, suffix=suffix)\n    for i in bar.iter(range(200)):\n        sleep()\n\nfor spin in (Spinner, PieSpinner, MoonSpinner, LineSpinner, PixelSpinner):\n    for i in spin(spin.__name__ + ' ').iter(range(100)):\n        sleep()\n    print()\n\nfor singleton in (Counter, Countdown, Stack, Pie):\n    for i in singleton(singleton.__name__ + ' ').iter(range(100)):\n        sleep()\n    print()\n\nbar = IncrementalBar('Random', suffix='%(index)d')\nfor i in range(100):\n    bar.goto(random.randint(0, 100))\n    sleep()\nbar.finish()\n"
  },
  {
    "path": "pointMLP/classification_ModelNet40/voting.py",
    "content": "import argparse\nimport os\nimport datetime\nimport torch\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.optim\nimport torch.utils.data\nimport torch.utils.data.distributed\nfrom torch.utils.data import DataLoader\nimport models as models\nfrom utils import progress_bar, IOStream\nfrom data import ModelNet40\nimport sklearn.metrics as metrics\nfrom helper import cal_loss\nimport numpy as np\nimport torch.nn.functional as F\n\nmodel_names = sorted(name for name in models.__dict__\n                     if callable(models.__dict__[name]))\n\n\ndef parse_args():\n    \"\"\"Parameters\"\"\"\n    parser = argparse.ArgumentParser('training')\n    parser.add_argument('-c', '--checkpoint', type=str, metavar='PATH',\n                        help='path to save checkpoint (default: checkpoint)')\n    parser.add_argument('--msg', type=str, help='message after checkpoint')\n    parser.add_argument('--batch_size', type=int, default=32, help='batch size in training')\n    parser.add_argument('--model', default='model31A', help='model name [default: pointnet_cls]')\n    parser.add_argument('--num_classes', default=40, type=int, choices=[10, 40], help='training on ModelNet10/40')\n    parser.add_argument('--num_points', type=int, default=1024, help='Point Number')\n    parser.add_argument('--seed', type=int, help='random seed (default: 1)')\n\n    # Voting evaluation, referring: https://github.com/CVMI-Lab/PAConv/blob/main/obj_cls/eval_voting.py\n    parser.add_argument('--NUM_PEPEAT', type=int, default=300)\n    parser.add_argument('--NUM_VOTE', type=int, default=10)\n\n    parser.add_argument('--validate', action='store_true', help='Validate the original testing result.')\n    return parser.parse_args()\n\n\nclass PointcloudScale(object):  # input random scaling\n    def __init__(self, scale_low=2. / 3., scale_high=3. 
/ 2.):\n        self.scale_low = scale_low\n        self.scale_high = scale_high\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            xyz1 = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])\n            pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], torch.from_numpy(xyz1).float().cuda())\n\n        return pc\n\n\ndef main():\n    args = parse_args()\n    print(f\"args: {args}\")\n    os.environ[\"HDF5_USE_FILE_LOCKING\"] = \"FALSE\"\n    if args.seed is None:\n        args.seed = np.random.randint(1, 10000)\n    print(f\"random seed is set to {args.seed}; deterministic mode will slow things down.\")\n    torch.manual_seed(args.seed)\n    np.random.seed(args.seed)\n    torch.cuda.manual_seed_all(args.seed)\n    torch.cuda.manual_seed(args.seed)\n    torch.set_printoptions(10)\n    torch.backends.cudnn.benchmark = False\n    torch.backends.cudnn.deterministic = True\n    os.environ['PYTHONHASHSEED'] = str(args.seed)\n    if torch.cuda.is_available():\n        device = 'cuda'\n    else:\n        device = 'cpu'\n    print(f\"==> Using device: {device}\")\n    if args.msg is None:\n        message = str(datetime.datetime.now().strftime('-%Y%m%d%H%M%S'))\n    else:\n        message = \"-\" + args.msg\n    args.checkpoint = 'checkpoints/' + args.model + message\n\n    print('==> Preparing data..')\n    test_loader = DataLoader(ModelNet40(partition='test', num_points=args.num_points), num_workers=4,\n                             batch_size=args.batch_size // 2, shuffle=False, drop_last=False)\n    # Model\n    print('==> Building model..')\n    net = models.__dict__[args.model]()\n    criterion = cal_loss\n    net = net.to(device)\n    checkpoint_path = os.path.join(args.checkpoint, 'best_checkpoint.pth')\n    checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))\n    # criterion = criterion.to(device)\n    if device == 'cuda':\n        net = torch.nn.DataParallel(net)\n        cudnn.benchmark = 
True\n    net.load_state_dict(checkpoint['net'])\n\n    if args.validate:\n        test_out = validate(net, test_loader, criterion, device)\n        print(f\"Vanilla out: {test_out}\")\n        print(f\"Note 1: Please also pass the random seed used for training (if forgotten, see out.txt).\\n\"\n              f\"Note 2: This result may vary slightly across GPUs (and numbers of GPUs); we tested 2080Ti, P100, and V100.\\n\"\n              f\"[Note: the original result was obtained on V100 GPUs.]\\n\\n\\n\")\n        # Interestingly, we reproduce the original best_test_acc on 4 V100 GPUs, although this model was trained on a single V100 GPU.\n        # Both OA and mean_acc vary slightly across GPU models and GPU counts.\n        # The batch size also affects the testing results; we could not determine why.\n\n    print(\"===> start voting evaluation...\")\n    voting(net, test_loader, device, args)\n\n\ndef validate(net, testloader, criterion, device):\n    net.eval()\n    test_loss = 0\n    correct = 0\n    total = 0\n    test_true = []\n    test_pred = []\n    time_cost = datetime.datetime.now()\n    with torch.no_grad():\n        for batch_idx, (data, label) in enumerate(testloader):\n            data, label = data.to(device), label.to(device).squeeze()\n            data = data.permute(0, 2, 1)\n            logits = net(data)\n            loss = criterion(logits, label)\n            test_loss += loss.item()\n            preds = logits.max(dim=1)[1]\n            test_true.append(label.cpu().numpy())\n            test_pred.append(preds.detach().cpu().numpy())\n            total += label.size(0)\n            correct += preds.eq(label).sum().item()\n            progress_bar(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'\n                         % (test_loss / (batch_idx + 1), 100. 
* correct / total, correct, total))\n\n    time_cost = int((datetime.datetime.now() - time_cost).total_seconds())\n    test_true = np.concatenate(test_true)\n    test_pred = np.concatenate(test_pred)\n    return {\n        \"loss\": float(\"%.3f\" % (test_loss / (batch_idx + 1))),\n        \"acc\": float(\"%.3f\" % (100. * metrics.accuracy_score(test_true, test_pred))),\n        \"acc_avg\": float(\"%.3f\" % (100. * metrics.balanced_accuracy_score(test_true, test_pred))),\n        \"time\": time_cost\n    }\n\n\ndef voting(net, testloader, device, args):\n    name = '/evaluate_voting' + str(datetime.datetime.now().strftime('-%Y%m%d%H%M%S')) + 'seed_' + str(\n        args.seed) + '.log'\n    io = IOStream(args.checkpoint + name)\n    io.cprint(str(args))\n\n    net.eval()\n    best_acc = 0\n    best_mean_acc = 0\n    # pointscale = PointcloudScale(scale_low=0.8, scale_high=1.18)  # set the range of scaling\n    # pointscale = PointcloudScale()\n    pointscale = PointcloudScale(scale_low=0.85, scale_high=1.15)\n\n    for i in range(args.NUM_PEPEAT):\n        test_true = []\n        test_pred = []\n\n        for batch_idx, (data, label) in enumerate(testloader):\n            data, label = data.to(device), label.to(device).squeeze()\n            pred = 0\n            for v in range(args.NUM_VOTE):\n                new_data = data\n                # batch_size = data.size()[0]\n                if v > 0:\n                    new_data.data = pointscale(new_data.data)\n                with torch.no_grad():\n                    pred += F.softmax(net(new_data.permute(0, 2, 1)), dim=1)  # sum 10 preds\n            pred /= args.NUM_VOTE  # avg the preds!\n            label = label.view(-1)\n            pred_choice = pred.max(dim=1)[1]\n            test_true.append(label.cpu().numpy())\n            test_pred.append(pred_choice.detach().cpu().numpy())\n        test_true = np.concatenate(test_true)\n        test_pred = np.concatenate(test_pred)\n        test_acc = 100. 
* metrics.accuracy_score(test_true, test_pred)\n        test_mean_acc = 100. * metrics.balanced_accuracy_score(test_true, test_pred)\n        if test_acc > best_acc:\n            best_acc = test_acc\n        if test_mean_acc > best_mean_acc:\n            best_mean_acc = test_mean_acc\n        outstr = 'Voting %d, test acc: %.3f, test mean acc: %.3f,  [current best(all_acc: %.3f mean_acc: %.3f)]' % \\\n                 (i, test_acc, test_mean_acc, best_acc, best_mean_acc)\n        io.cprint(outstr)\n\n    # best_acc is already a percentage; multiplying by 100 again would misreport it\n    final_outstr = 'Final voting test acc: %.6f' % best_acc\n    io.cprint(final_outstr)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "pointnet2_pyt/.gitignore",
    "content": "__pycache__\n*.pth*\n.autoenv*\nruns\nbuild\ncheckpoints\n*.prof\n.lvimrc\n.vimtags\n.ccls\n.ccls-cache/\ndist/\npointnet2.egg-info/\n*.zip\n*.so\n.tox/\n.mypy_cache\n**/*.pyc\n"
  },
  {
    "path": "pointnet2_pyt/.pre-commit-config.yaml",
    "content": "exclude: 'build|egg-info|dist'\n\nrepos:\n-   repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v1.2.3\n    hooks:\n    -   id: trailing-whitespace\n    -   id: check-added-large-files\n    -   id: end-of-file-fixer\n\n-   repo: https://github.com/ambv/black\n    rev: stable\n    hooks:\n    - id: black\n      language_version: python3.6\n\n-   repo: local\n    hooks:\n    - id: clang-format\n      name: Run clang-format\n      entry: clang-format --style google -i\n      types: [text]\n      files: '.*\\.cpp$|.*\\.h$|.*\\.cu$|.*\\.hpp$'\n      language: system\n"
  },
  {
    "path": "pointnet2_pyt/.travis.yml",
    "content": "dist: trusty\n\nlanguage: python\n\npython:\n  - \"3.6\"\ninstall:\n  - pip install black\nscript:\n  - black --check .\n  - find . -not -path '*/\\.*' | grep -E \".*\\.cpp$|.*\\.h$|.*\\.cu$|.*\\.hpp$\" | xargs -I {} bash -c \"diff -u <(cat {}) <(clang-format --style google {})\"\n"
  },
  {
    "path": "pointnet2_pyt/MANIFEST.in",
    "content": "graft pointnet2/_ext-src/include/\n"
  },
  {
    "path": "pointnet2_pyt/README.rst",
    "content": "Pointnet2/Pointnet++ PyTorch\n============================\n\n* Implemention of Pointnet2/Pointnet++ written in `PyTorch <http://pytorch.org>`_.\n\n* Supports Multi-GPU via `nn.DataParallel <https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel>`_.\n\n* Supports PyTorch version >= 1.0.0.  Use `v1.0 <https://github.com/erikwijmans/Pointnet2_PyTorch/releases/tag/v1.0>`_\n  for support of older versions of PyTorch.\n\n\nSee the official code release for the paper (in tensorflow), `charlesq34/pointnet2 <https://github.com/charlesq34/pointnet2>`_,\nfor official model definitions and hyper-parameters.\n\nThe custom ops used by Pointnet++ are currently **ONLY** supported on the GPU using CUDA.\n\nSetup\n-----\n\n* Install ``python`` -- This repo is tested with ``2.7``, ``3.5``, and ``3.6``\n\n\n* Install dependencies\n\n  ::\n\n    pip install -r requirements.txt\n\n\n* Building `_ext` module\n\n  ::\n\n    python setup.py build_ext --inplace\n\n\n* Optionally, you can also install this repo as a package\n\n  ::\n\n    pip install -e .\n\n\nExample training\n------------------\n\nTwo training examples are provided by ``pointnet2/train/train_sem_seg.py`` and ``pointnet2/train/train_cls.py``.\nThe datasets for both will be downloaded automatically by default.\n\n\nThey can be run via\n\n::\n\n  python -m pointnet2.train.train_cls\n\n  python -m pointnet2.train.train_sem_seg\n\n\nBoth scripts will print training progress after every epoch to the command line.  Use the ``--visdom`` flag to\nenable logging to visdom and more detailed logging of training progress.\n\n\nContributing\n------------\n\nThis repository uses `black <https://github.com/ambv/black>`_ for linting and style enforcement on python code.\nFor c++/cuda code,\n`clang-format <https://clang.llvm.org/docs/ClangFormat.html>`_ is used for style.  
The simplest way to\ncomply with style is via `pre-commit <https://pre-commit.com/>`_\n\n::\n\n  pip install pre-commit\n  pre-commit install\n\n\n\nCitation\n--------\n\n::\n\n  @article{pytorchpointnet++,\n        Author = {Erik Wijmans},\n        Title = {Pointnet++ Pytorch},\n        Journal = {https://github.com/erikwijmans/Pointnet2_PyTorch},\n        Year = {2018}\n  }\n\n  @inproceedings{qi2017pointnet++,\n      title={Pointnet++: Deep hierarchical feature learning on point sets in a metric space},\n      author={Qi, Charles Ruizhongtai and Yi, Li and Su, Hao and Guibas, Leonidas J},\n      booktitle={Advances in Neural Information Processing Systems},\n      pages={5099--5108},\n      year={2017}\n  }\n"
  },
  {
    "path": "pointnet2_pyt/UNLICENSE",
    "content": "This is free and unencumbered software released into the public domain.\n\nAnyone is free to copy, modify, publish, use, compile, sell, or\ndistribute this software, either in source code form or as a compiled\nbinary, for any purpose, commercial or non-commercial, and by any\nmeans.\n\nIn jurisdictions that recognize copyright laws, the author or authors\nof this software dedicate any and all copyright interest in the\nsoftware to the public domain. We make this dedication for the benefit\nof the public at large and to the detriment of our heirs and\nsuccessors. We intend this dedication to be an overt act of\nrelinquishment in perpetuity of all present and future rights to this\nsoftware under copyright law.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR\nOTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,\nARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\nOTHER DEALINGS IN THE SOFTWARE.\n\nFor more information, please refer to <http://unlicense.org/>\n"
  },
  {
    "path": "pointnet2_pyt/__init__.py",
    "content": ""
  },
  {
    "path": "pointnet2_pyt/pointnet2/__init__.py",
    "content": "'''\nDescription: \nAutor: Jiachen Sun\nDate: 2022-02-16 22:23:16\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-24 23:12:32\n'''\nfrom __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\n\n__version__ = \"2.1.1\"\n\ntry:\n    __POINTNET2_SETUP__\nexcept NameError:\n    __POINTNET2_SETUP__ = False\n\nif not __POINTNET2_SETUP__:\n    from pointnet2_pyt.pointnet2 import utils\n    from pointnet2_pyt.pointnet2 import data\n    from pointnet2_pyt.pointnet2 import models\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/ball_query.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,\n                      const int nsample);\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/cuda_utils.h",
    "content": "#ifndef _CUDA_UTILS_H\n#define _CUDA_UTILS_H\n\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <cmath>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n#include <vector>\n\n#define TOTAL_THREADS 512\n\ninline int opt_n_threads(int work_size) {\n  const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);\n\n  return max(min(1 << pow_2, TOTAL_THREADS), 1);\n}\n\ninline dim3 opt_block_config(int x, int y) {\n  const int x_threads = opt_n_threads(x);\n  const int y_threads =\n      max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);\n  dim3 block_config(x_threads, y_threads, 1);\n\n  return block_config;\n}\n\n#define CUDA_CHECK_ERRORS()                                           \\\n  do {                                                                \\\n    cudaError_t err = cudaGetLastError();                             \\\n    if (cudaSuccess != err) {                                         \\\n      fprintf(stderr, \"CUDA kernel failed : %s\\n%s at L:%d in %s\\n\",  \\\n              cudaGetErrorString(err), __PRETTY_FUNCTION__, __LINE__, \\\n              __FILE__);                                              \\\n      exit(-1);                                                       \\\n    }                                                                 \\\n  } while (0)\n\n#endif\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/group_points.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor group_points(at::Tensor points, at::Tensor idx);\nat::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/interpolate.h",
    "content": "#pragma once\n\n#include <torch/extension.h>\n#include <vector>\n\nstd::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows);\nat::Tensor three_interpolate(at::Tensor points, at::Tensor idx,\n                             at::Tensor weight);\nat::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,\n                                  at::Tensor weight, const int m);\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/sampling.h",
    "content": "#pragma once\n#include <torch/extension.h>\n\nat::Tensor gather_points(at::Tensor points, at::Tensor idx);\nat::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx, const int n);\nat::Tensor furthest_point_sampling(at::Tensor points, const int nsamples);\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/include/utils.h",
    "content": "#pragma once\n#include <ATen/cuda/CUDAContext.h>\n#include <torch/extension.h>\n\n#define CHECK_CUDA(x)                                          \\\n  do {                                                         \\\n    AT_CHECK(x.type().is_cuda(), #x \" must be a CUDA tensor\"); \\\n  } while (0)\n\n#define CHECK_CONTIGUOUS(x)                                         \\\n  do {                                                              \\\n    AT_CHECK(x.is_contiguous(), #x \" must be a contiguous tensor\"); \\\n  } while (0)\n\n#define CHECK_IS_INT(x)                              \\\n  do {                                               \\\n    AT_CHECK(x.scalar_type() == at::ScalarType::Int, \\\n             #x \" must be an int tensor\");           \\\n  } while (0)\n\n#define CHECK_IS_FLOAT(x)                              \\\n  do {                                                 \\\n    AT_CHECK(x.scalar_type() == at::ScalarType::Float, \\\n             #x \" must be a float tensor\");            \\\n  } while (0)\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/ball_query.cpp",
    "content": "#include \"ball_query.h\"\n#include \"utils.h\"\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n                                     int nsample, const float *new_xyz,\n                                     const float *xyz, int *idx);\n\nat::Tensor ball_query(at::Tensor new_xyz, at::Tensor xyz, const float radius,\n                      const int nsample) {\n  CHECK_CONTIGUOUS(new_xyz);\n  CHECK_CONTIGUOUS(xyz);\n  CHECK_IS_FLOAT(new_xyz);\n  CHECK_IS_FLOAT(xyz);\n\n  if (new_xyz.type().is_cuda()) {\n    CHECK_CUDA(xyz);\n  }\n\n  at::Tensor idx =\n      torch::zeros({new_xyz.size(0), new_xyz.size(1), nsample},\n                   at::device(new_xyz.device()).dtype(at::ScalarType::Int));\n\n  if (new_xyz.type().is_cuda()) {\n    query_ball_point_kernel_wrapper(xyz.size(0), xyz.size(1), new_xyz.size(1),\n                                    radius, nsample, new_xyz.data<float>(),\n                                    xyz.data<float>(), idx.data<int>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return idx;\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/ball_query_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: new_xyz(b, m, 3) xyz(b, n, 3)\n// output: idx(b, m, nsample)\n__global__ void query_ball_point_kernel(int b, int n, int m, float radius,\n                                        int nsample,\n                                        const float *__restrict__ new_xyz,\n                                        const float *__restrict__ xyz,\n                                        int *__restrict__ idx) {\n  int batch_index = blockIdx.x;\n  xyz += batch_index * n * 3;\n  new_xyz += batch_index * m * 3;\n  idx += m * nsample * batch_index;\n\n  int index = threadIdx.x;\n  int stride = blockDim.x;\n\n  float radius2 = radius * radius;\n  for (int j = index; j < m; j += stride) {\n    float new_x = new_xyz[j * 3 + 0];\n    float new_y = new_xyz[j * 3 + 1];\n    float new_z = new_xyz[j * 3 + 2];\n    for (int k = 0, cnt = 0; k < n && cnt < nsample; ++k) {\n      float x = xyz[k * 3 + 0];\n      float y = xyz[k * 3 + 1];\n      float z = xyz[k * 3 + 2];\n      float d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) +\n                 (new_z - z) * (new_z - z);\n      if (d2 < radius2) {\n        if (cnt == 0) {\n          for (int l = 0; l < nsample; ++l) {\n            idx[j * nsample + l] = k;\n          }\n        }\n        idx[j * nsample + cnt] = k;\n        ++cnt;\n      }\n    }\n  }\n}\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n                                     int nsample, const float *new_xyz,\n                                     const float *xyz, int *idx) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  query_ball_point_kernel<<<b, opt_n_threads(m), 0, stream>>>(\n      b, n, m, radius, nsample, new_xyz, xyz, idx);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/bindings.cpp",
    "content": "#include \"ball_query.h\"\n#include \"group_points.h\"\n#include \"interpolate.h\"\n#include \"sampling.h\"\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"gather_points\", &gather_points);\n  m.def(\"gather_points_grad\", &gather_points_grad);\n  m.def(\"furthest_point_sampling\", &furthest_point_sampling);\n\n  m.def(\"three_nn\", &three_nn);\n  m.def(\"three_interpolate\", &three_interpolate);\n  m.def(\"three_interpolate_grad\", &three_interpolate_grad);\n\n  m.def(\"ball_query\", &ball_query);\n\n  m.def(\"group_points\", &group_points);\n  m.def(\"group_points_grad\", &group_points_grad);\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/group_points.cpp",
    "content": "#include \"group_points.h\"\n#include \"utils.h\"\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n                                 const float *points, const int *idx,\n                                 float *out);\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                      int nsample, const float *grad_out,\n                                      const int *idx, float *grad_points);\n\nat::Tensor group_points(at::Tensor points, at::Tensor idx) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n\n  if (points.type().is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1), idx.size(2)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.type().is_cuda()) {\n    group_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),\n                                idx.size(1), idx.size(2), points.data<float>(),\n                                idx.data<int>(), output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n\nat::Tensor group_points_grad(at::Tensor grad_out, at::Tensor idx, const int n) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n\n  if (grad_out.type().is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), n},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.type().is_cuda()) {\n    group_points_grad_kernel_wrapper(\n        grad_out.size(0), grad_out.size(1), n, idx.size(1), idx.size(2),\n        grad_out.data<float>(), idx.data<int>(), output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/group_points_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: points(b, c, n) idx(b, npoints, nsample)\n// output: out(b, c, npoints, nsample)\n__global__ void group_points_kernel(int b, int c, int n, int npoints,\n                                    int nsample,\n                                    const float *__restrict__ points,\n                                    const int *__restrict__ idx,\n                                    float *__restrict__ out) {\n  int batch_index = blockIdx.x;\n  points += batch_index * n * c;\n  idx += batch_index * npoints * nsample;\n  out += batch_index * npoints * nsample * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * npoints; i += stride) {\n    const int l = i / npoints;\n    const int j = i % npoints;\n    for (int k = 0; k < nsample; ++k) {\n      int ii = idx[j * nsample + k];\n      out[(l * npoints + j) * nsample + k] = points[l * n + ii];\n    }\n  }\n}\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n                                 const float *points, const int *idx,\n                                 float *out) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  group_points_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n      b, c, n, npoints, nsample, points, idx, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, npoints, nsample), idx(b, npoints, nsample)\n// output: grad_points(b, c, n)\n__global__ void group_points_grad_kernel(int b, int c, int n, int npoints,\n                                         int nsample,\n                                         const float *__restrict__ grad_out,\n                                         const int *__restrict__ idx,\n                                         float *__restrict__ grad_points) {\n  int batch_index = blockIdx.x;\n  grad_out += batch_index * npoints * 
nsample * c;\n  idx += batch_index * npoints * nsample;\n  grad_points += batch_index * n * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * npoints; i += stride) {\n    const int l = i / npoints;\n    const int j = i % npoints;\n    for (int k = 0; k < nsample; ++k) {\n      int ii = idx[j * nsample + k];\n      atomicAdd(grad_points + l * n + ii,\n                grad_out[(l * npoints + j) * nsample + k]);\n    }\n  }\n}\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                      int nsample, const float *grad_out,\n                                      const int *idx, float *grad_points) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  group_points_grad_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n      b, c, n, npoints, nsample, grad_out, idx, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/interpolate.cpp",
    "content": "#include \"interpolate.h\"\n#include \"utils.h\"\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n                             const float *known, float *dist2, int *idx);\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n                                      const float *points, const int *idx,\n                                      const float *weight, float *out);\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n                                           const float *grad_out,\n                                           const int *idx, const float *weight,\n                                           float *grad_points);\n\nstd::vector<at::Tensor> three_nn(at::Tensor unknowns, at::Tensor knows) {\n  CHECK_CONTIGUOUS(unknowns);\n  CHECK_CONTIGUOUS(knows);\n  CHECK_IS_FLOAT(unknowns);\n  CHECK_IS_FLOAT(knows);\n\n  if (unknowns.type().is_cuda()) {\n    CHECK_CUDA(knows);\n  }\n\n  at::Tensor idx =\n      torch::zeros({unknowns.size(0), unknowns.size(1), 3},\n                   at::device(unknowns.device()).dtype(at::ScalarType::Int));\n  at::Tensor dist2 =\n      torch::zeros({unknowns.size(0), unknowns.size(1), 3},\n                   at::device(unknowns.device()).dtype(at::ScalarType::Float));\n\n  if (unknowns.type().is_cuda()) {\n    three_nn_kernel_wrapper(unknowns.size(0), unknowns.size(1), knows.size(1),\n                            unknowns.data<float>(), knows.data<float>(),\n                            dist2.data<float>(), idx.data<int>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return {dist2, idx};\n}\n\nat::Tensor three_interpolate(at::Tensor points, at::Tensor idx,\n                             at::Tensor weight) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_CONTIGUOUS(weight);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n  CHECK_IS_FLOAT(weight);\n\n  if (points.type().is_cuda()) {\n    CHECK_CUDA(idx);\n    
CHECK_CUDA(weight);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.type().is_cuda()) {\n    three_interpolate_kernel_wrapper(\n        points.size(0), points.size(1), points.size(2), idx.size(1),\n        points.data<float>(), idx.data<int>(), weight.data<float>(),\n        output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\nat::Tensor three_interpolate_grad(at::Tensor grad_out, at::Tensor idx,\n                                  at::Tensor weight, const int m) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_CONTIGUOUS(weight);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n  CHECK_IS_FLOAT(weight);\n\n  if (grad_out.type().is_cuda()) {\n    CHECK_CUDA(idx);\n    CHECK_CUDA(weight);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), m},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.type().is_cuda()) {\n    three_interpolate_grad_kernel_wrapper(\n        grad_out.size(0), grad_out.size(1), grad_out.size(2), m,\n        grad_out.data<float>(), idx.data<int>(), weight.data<float>(),\n        output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/interpolate_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: unknown(b, n, 3) known(b, m, 3)\n// output: dist2(b, n, 3), idx(b, n, 3)\n__global__ void three_nn_kernel(int b, int n, int m,\n                                const float *__restrict__ unknown,\n                                const float *__restrict__ known,\n                                float *__restrict__ dist2,\n                                int *__restrict__ idx) {\n  int batch_index = blockIdx.x;\n  unknown += batch_index * n * 3;\n  known += batch_index * m * 3;\n  dist2 += batch_index * n * 3;\n  idx += batch_index * n * 3;\n\n  int index = threadIdx.x;\n  int stride = blockDim.x;\n  for (int j = index; j < n; j += stride) {\n    float ux = unknown[j * 3 + 0];\n    float uy = unknown[j * 3 + 1];\n    float uz = unknown[j * 3 + 2];\n\n    double best1 = 1e40, best2 = 1e40, best3 = 1e40;\n    int besti1 = 0, besti2 = 0, besti3 = 0;\n    for (int k = 0; k < m; ++k) {\n      float x = known[k * 3 + 0];\n      float y = known[k * 3 + 1];\n      float z = known[k * 3 + 2];\n      float d = (ux - x) * (ux - x) + (uy - y) * (uy - y) + (uz - z) * (uz - z);\n      if (d < best1) {\n        best3 = best2;\n        besti3 = besti2;\n        best2 = best1;\n        besti2 = besti1;\n        best1 = d;\n        besti1 = k;\n      } else if (d < best2) {\n        best3 = best2;\n        besti3 = besti2;\n        best2 = d;\n        besti2 = k;\n      } else if (d < best3) {\n        best3 = d;\n        besti3 = k;\n      }\n    }\n    dist2[j * 3 + 0] = best1;\n    dist2[j * 3 + 1] = best2;\n    dist2[j * 3 + 2] = best3;\n\n    idx[j * 3 + 0] = besti1;\n    idx[j * 3 + 1] = besti2;\n    idx[j * 3 + 2] = besti3;\n  }\n}\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n                             const float *known, float *dist2, int *idx) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_nn_kernel<<<b, 
opt_n_threads(n), 0, stream>>>(b, n, m, unknown, known,\n                                                      dist2, idx);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: points(b, c, m), idx(b, n, 3), weight(b, n, 3)\n// output: out(b, c, n)\n__global__ void three_interpolate_kernel(int b, int c, int m, int n,\n                                         const float *__restrict__ points,\n                                         const int *__restrict__ idx,\n                                         const float *__restrict__ weight,\n                                         float *__restrict__ out) {\n  int batch_index = blockIdx.x;\n  points += batch_index * m * c;\n\n  idx += batch_index * n * 3;\n  weight += batch_index * n * 3;\n\n  out += batch_index * n * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * n; i += stride) {\n    const int l = i / n;\n    const int j = i % n;\n    float w1 = weight[j * 3 + 0];\n    float w2 = weight[j * 3 + 1];\n    float w3 = weight[j * 3 + 2];\n\n    int i1 = idx[j * 3 + 0];\n    int i2 = idx[j * 3 + 1];\n    int i3 = idx[j * 3 + 2];\n\n    out[i] = points[l * m + i1] * w1 + points[l * m + i2] * w2 +\n             points[l * m + i3] * w3;\n  }\n}\n\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n                                      const float *points, const int *idx,\n                                      const float *weight, float *out) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_interpolate_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n      b, c, m, n, points, idx, weight, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, n), idx(b, n, 3), weight(b, n, 3)\n// output: grad_points(b, c, m)\n\n__global__ void three_interpolate_grad_kernel(\n    int b, int c, int n, int m, const float *__restrict__ grad_out,\n    const int *__restrict__ idx, const float *__restrict__ weight,\n   
 float *__restrict__ grad_points) {\n  int batch_index = blockIdx.x;\n  grad_out += batch_index * n * c;\n  idx += batch_index * n * 3;\n  weight += batch_index * n * 3;\n  grad_points += batch_index * m * c;\n\n  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n  const int stride = blockDim.y * blockDim.x;\n  for (int i = index; i < c * n; i += stride) {\n    const int l = i / n;\n    const int j = i % n;\n    float w1 = weight[j * 3 + 0];\n    float w2 = weight[j * 3 + 1];\n    float w3 = weight[j * 3 + 2];\n\n    int i1 = idx[j * 3 + 0];\n    int i2 = idx[j * 3 + 1];\n    int i3 = idx[j * 3 + 2];\n\n    atomicAdd(grad_points + l * m + i1, grad_out[i] * w1);\n    atomicAdd(grad_points + l * m + i2, grad_out[i] * w2);\n    atomicAdd(grad_points + l * m + i3, grad_out[i] * w3);\n  }\n}\n\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n                                           const float *grad_out,\n                                           const int *idx, const float *weight,\n                                           float *grad_points) {\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n  three_interpolate_grad_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n      b, c, n, m, grad_out, idx, weight, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/sampling.cpp",
    "content": "#include \"sampling.h\"\n#include \"utils.h\"\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n                                  const float *points, const int *idx,\n                                  float *out);\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                       const float *grad_out, const int *idx,\n                                       float *grad_points);\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n                                            const float *dataset, float *temp,\n                                            int *idxs);\n\nat::Tensor gather_points(at::Tensor points, at::Tensor idx) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(points);\n  CHECK_IS_INT(idx);\n\n  if (points.type().is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({points.size(0), points.size(1), idx.size(1)},\n                   at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.type().is_cuda()) {\n    gather_points_kernel_wrapper(points.size(0), points.size(1), points.size(2),\n                                 idx.size(1), points.data<float>(),\n                                 idx.data<int>(), output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n\nat::Tensor gather_points_grad(at::Tensor grad_out, at::Tensor idx,\n                              const int n) {\n  CHECK_CONTIGUOUS(grad_out);\n  CHECK_CONTIGUOUS(idx);\n  CHECK_IS_FLOAT(grad_out);\n  CHECK_IS_INT(idx);\n\n  if (grad_out.type().is_cuda()) {\n    CHECK_CUDA(idx);\n  }\n\n  at::Tensor output =\n      torch::zeros({grad_out.size(0), grad_out.size(1), n},\n                   at::device(grad_out.device()).dtype(at::ScalarType::Float));\n\n  if (grad_out.type().is_cuda()) {\n    gather_points_grad_kernel_wrapper(grad_out.size(0), grad_out.size(1), n,\n      
                                idx.size(1), grad_out.data<float>(),\n                                      idx.data<int>(), output.data<float>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\nat::Tensor furthest_point_sampling(at::Tensor points, const int nsamples) {\n  CHECK_CONTIGUOUS(points);\n  CHECK_IS_FLOAT(points);\n\n  at::Tensor output =\n      torch::zeros({points.size(0), nsamples},\n                   at::device(points.device()).dtype(at::ScalarType::Int));\n\n  at::Tensor tmp =\n      torch::full({points.size(0), points.size(1)}, 1e10,\n                  at::device(points.device()).dtype(at::ScalarType::Float));\n\n  if (points.type().is_cuda()) {\n    furthest_point_sampling_kernel_wrapper(\n        points.size(0), points.size(1), nsamples, points.data<float>(),\n        tmp.data<float>(), output.data<int>());\n  } else {\n    AT_CHECK(false, \"CPU not supported\");\n  }\n\n  return output;\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/_ext-src/src/sampling_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n\n// input: points(b, c, n) idx(b, m)\n// output: out(b, c, m)\n__global__ void gather_points_kernel(int b, int c, int n, int m,\n                                     const float *__restrict__ points,\n                                     const int *__restrict__ idx,\n                                     float *__restrict__ out) {\n  for (int i = blockIdx.x; i < b; i += gridDim.x) {\n    for (int l = blockIdx.y; l < c; l += gridDim.y) {\n      for (int j = threadIdx.x; j < m; j += blockDim.x) {\n        int a = idx[i * m + j];\n        out[(i * c + l) * m + j] = points[(i * c + l) * n + a];\n      }\n    }\n  }\n}\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n                                  const float *points, const int *idx,\n                                  float *out) {\n  gather_points_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,\n                         at::cuda::getCurrentCUDAStream()>>>(b, c, n, npoints,\n                                                             points, idx, out);\n\n  CUDA_CHECK_ERRORS();\n}\n\n// input: grad_out(b, c, m) idx(b, m)\n// output: grad_points(b, c, n)\n__global__ void gather_points_grad_kernel(int b, int c, int n, int m,\n                                          const float *__restrict__ grad_out,\n                                          const int *__restrict__ idx,\n                                          float *__restrict__ grad_points) {\n  for (int i = blockIdx.x; i < b; i += gridDim.x) {\n    for (int l = blockIdx.y; l < c; l += gridDim.y) {\n      for (int j = threadIdx.x; j < m; j += blockDim.x) {\n        int a = idx[i * m + j];\n        atomicAdd(grad_points + (i * c + l) * n + a,\n                  grad_out[(i * c + l) * m + j]);\n      }\n    }\n  }\n}\n\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n                                       const float *grad_out, 
const int *idx,\n                                       float *grad_points) {\n  gather_points_grad_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0,\n                              at::cuda::getCurrentCUDAStream()>>>(\n      b, c, n, npoints, grad_out, idx, grad_points);\n\n  CUDA_CHECK_ERRORS();\n}\n\n__device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i,\n                         int idx1, int idx2) {\n  const float v1 = dists[idx1], v2 = dists[idx2];\n  const int i1 = dists_i[idx1], i2 = dists_i[idx2];\n  dists[idx1] = max(v1, v2);\n  dists_i[idx1] = v2 > v1 ? i2 : i1;\n}\n\n// Input dataset: (b, n, 3), tmp: (b, n)\n// Output idxs (b, m)\ntemplate <unsigned int block_size>\n__global__ void furthest_point_sampling_kernel(\n    int b, int n, int m, const float *__restrict__ dataset,\n    float *__restrict__ temp, int *__restrict__ idxs) {\n  if (m <= 0) return;\n  __shared__ float dists[block_size];\n  __shared__ int dists_i[block_size];\n\n  int batch_index = blockIdx.x;\n  dataset += batch_index * n * 3;\n  temp += batch_index * n;\n  idxs += batch_index * m;\n\n  int tid = threadIdx.x;\n  const int stride = block_size;\n\n  int old = 0;\n  if (threadIdx.x == 0) idxs[0] = old;\n\n  __syncthreads();\n  for (int j = 1; j < m; j++) {\n    int besti = 0;\n    float best = -1;\n    float x1 = dataset[old * 3 + 0];\n    float y1 = dataset[old * 3 + 1];\n    float z1 = dataset[old * 3 + 2];\n    for (int k = tid; k < n; k += stride) {\n      float x2, y2, z2;\n      x2 = dataset[k * 3 + 0];\n      y2 = dataset[k * 3 + 1];\n      z2 = dataset[k * 3 + 2];\n      float mag = (x2 * x2) + (y2 * y2) + (z2 * z2);\n      if (mag <= 1e-3) continue;\n\n      float d =\n          (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) + (z2 - z1) * (z2 - z1);\n\n      float d2 = min(d, temp[k]);\n      temp[k] = d2;\n      besti = d2 > best ? k : besti;\n      best = d2 > best ? 
d2 : best;\n    }\n    dists[tid] = best;\n    dists_i[tid] = besti;\n    __syncthreads();\n\n    if (block_size >= 512) {\n      if (tid < 256) {\n        __update(dists, dists_i, tid, tid + 256);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 256) {\n      if (tid < 128) {\n        __update(dists, dists_i, tid, tid + 128);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 128) {\n      if (tid < 64) {\n        __update(dists, dists_i, tid, tid + 64);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 64) {\n      if (tid < 32) {\n        __update(dists, dists_i, tid, tid + 32);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 32) {\n      if (tid < 16) {\n        __update(dists, dists_i, tid, tid + 16);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 16) {\n      if (tid < 8) {\n        __update(dists, dists_i, tid, tid + 8);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 8) {\n      if (tid < 4) {\n        __update(dists, dists_i, tid, tid + 4);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 4) {\n      if (tid < 2) {\n        __update(dists, dists_i, tid, tid + 2);\n      }\n      __syncthreads();\n    }\n    if (block_size >= 2) {\n      if (tid < 1) {\n        __update(dists, dists_i, tid, tid + 1);\n      }\n      __syncthreads();\n    }\n\n    old = dists_i[0];\n    if (tid == 0) idxs[j] = old;\n  }\n}\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n                                            const float *dataset, float *temp,\n                                            int *idxs) {\n  unsigned int n_threads = opt_n_threads(n);\n\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  switch (n_threads) {\n    case 512:\n      furthest_point_sampling_kernel<512>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 256:\n      furthest_point_sampling_kernel<256>\n          <<<b, 
n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 128:\n      furthest_point_sampling_kernel<128>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 64:\n      furthest_point_sampling_kernel<64>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 32:\n      furthest_point_sampling_kernel<32>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 16:\n      furthest_point_sampling_kernel<16>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 8:\n      furthest_point_sampling_kernel<8>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 4:\n      furthest_point_sampling_kernel<4>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 2:\n      furthest_point_sampling_kernel<2>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    case 1:\n      furthest_point_sampling_kernel<1>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n      break;\n    default:\n      furthest_point_sampling_kernel<512>\n          <<<b, n_threads, 0, stream>>>(b, n, m, dataset, temp, idxs);\n  }\n\n  CUDA_CHECK_ERRORS();\n}\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/data/.gitignore",
    "content": "indoor3d_sem_seg_hdf5_data\nmodelnet40_ply_hdf5_2048\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/data/Indoor3DSemSegLoader.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.utils.data as data\nimport numpy as np\nimport os\nimport h5py\nimport subprocess\nimport shlex\n\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\ndef _get_data_files(list_filename):\n    with open(list_filename) as f:\n        return [line.rstrip() for line in f]\n\n\ndef _load_data_file(name):\n    f = h5py.File(name)\n    data = f[\"data\"][:]\n    label = f[\"label\"][:]\n    return data, label\n\n\nclass Indoor3DSemSeg(data.Dataset):\n    def __init__(self, num_points, train=True, download=True, data_precent=1.0):\n        super().__init__()\n        self.data_precent = data_precent\n        self.folder = \"indoor3d_sem_seg_hdf5_data\"\n        self.data_dir = os.path.join(BASE_DIR, self.folder)\n        self.url = (\n            \"https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip\"\n        )\n\n        if download and not os.path.exists(self.data_dir):\n            zipfile = os.path.join(BASE_DIR, os.path.basename(self.url))\n            subprocess.check_call(\n                shlex.split(\"curl {} -o {}\".format(self.url, zipfile))\n            )\n\n            subprocess.check_call(\n                shlex.split(\"unzip {} -d {}\".format(zipfile, BASE_DIR))\n            )\n\n            subprocess.check_call(shlex.split(\"rm {}\".format(zipfile)))\n\n        self.train, self.num_points = train, num_points\n\n        all_files = _get_data_files(os.path.join(self.data_dir, \"all_files.txt\"))\n        room_filelist = _get_data_files(\n            os.path.join(self.data_dir, \"room_filelist.txt\")\n        )\n\n        data_batchlist, label_batchlist = [], []\n        for f in all_files:\n            data, label = _load_data_file(os.path.join(BASE_DIR, f))\n            data_batchlist.append(data)\n            label_batchlist.append(label)\n\n        
data_batches = np.concatenate(data_batchlist, 0)\n        labels_batches = np.concatenate(label_batchlist, 0)\n\n        test_area = \"Area_5\"\n        train_idxs, test_idxs = [], []\n        for i, room_name in enumerate(room_filelist):\n            if test_area in room_name:\n                test_idxs.append(i)\n            else:\n                train_idxs.append(i)\n\n        if self.train:\n            self.points = data_batches[train_idxs, ...]\n            self.labels = labels_batches[train_idxs, ...]\n        else:\n            self.points = data_batches[test_idxs, ...]\n            self.labels = labels_batches[test_idxs, ...]\n\n    def __getitem__(self, idx):\n        pt_idxs = np.arange(0, self.num_points)\n        np.random.shuffle(pt_idxs)\n\n        current_points = torch.from_numpy(self.points[idx, pt_idxs].copy()).type(\n            torch.FloatTensor\n        )\n        current_labels = torch.from_numpy(self.labels[idx, pt_idxs].copy()).type(\n            torch.LongTensor\n        )\n\n        return current_points, current_labels\n\n    def __len__(self):\n        return int(self.points.shape[0] * self.data_precent)\n\n    def set_num_points(self, pts):\n        self.num_points = pts\n\n    def randomize(self):\n        pass\n\n\nif __name__ == \"__main__\":\n    dset = Indoor3DSemSeg(16, \"./\", train=True)\n    print(dset[0])\n    print(len(dset))\n    dloader = torch.utils.data.DataLoader(dset, batch_size=32, shuffle=True)\n    for i, data in enumerate(dloader, 0):\n        inputs, labels = data\n        if i == len(dloader) - 1:\n            print(inputs.size())\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/data/ModelNet40Loader.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.utils.data as data\nimport numpy as np\nimport os\nimport h5py\nimport subprocess\nimport shlex\n\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\ndef _get_data_files(list_filename):\n    with open(list_filename) as f:\n        return [line.rstrip()[5:] for line in f]\n\n\ndef _load_data_file(name):\n    f = h5py.File(name)\n    data = f[\"data\"][:]\n    label = f[\"label\"][:]\n    return data, label\n\n\nclass ModelNet40Cls(data.Dataset):\n    def __init__(self, num_points, transforms=None, train=True, download=True):\n        super().__init__()\n\n        self.transforms = transforms\n\n        self.folder = \"modelnet40_ply_hdf5_2048\"\n        self.data_dir = os.path.join(BASE_DIR, self.folder)\n        self.url = \"https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip\"\n\n        if download and not os.path.exists(self.data_dir):\n            zipfile = os.path.join(BASE_DIR, os.path.basename(self.url))\n            subprocess.check_call(\n                shlex.split(\"curl {} -o {}\".format(self.url, zipfile))\n            )\n\n            subprocess.check_call(\n                shlex.split(\"unzip {} -d {}\".format(zipfile, BASE_DIR))\n            )\n\n            subprocess.check_call(shlex.split(\"rm {}\".format(zipfile)))\n\n        self.train = train\n        if self.train:\n            self.files = _get_data_files(os.path.join(self.data_dir, \"train_files.txt\"))\n        else:\n            self.files = _get_data_files(os.path.join(self.data_dir, \"test_files.txt\"))\n\n        point_list, label_list = [], []\n        for f in self.files:\n            points, labels = _load_data_file(os.path.join(BASE_DIR, f))\n            point_list.append(points)\n            label_list.append(labels)\n\n        self.points = np.concatenate(point_list, 0)\n        
self.labels = np.concatenate(label_list, 0)\n        self.set_num_points(num_points)\n\n    def __getitem__(self, idx):\n        pt_idxs = np.arange(0, self.num_points)\n        np.random.shuffle(pt_idxs)\n\n        current_points = self.points[idx, pt_idxs].copy()\n        label = torch.from_numpy(self.labels[idx]).type(torch.LongTensor)\n\n        if self.transforms is not None:\n            current_points = self.transforms(current_points)\n\n        return current_points, label\n\n    def __len__(self):\n        return self.points.shape[0]\n\n    def set_num_points(self, pts):\n        self.num_points = min(self.points.shape[1], pts)\n\n    def randomize(self):\n        pass\n\n\nif __name__ == \"__main__\":\n    from torchvision import transforms\n    import data_utils as d_utils\n\n    transforms = transforms.Compose(\n        [\n            d_utils.PointcloudToTensor(),\n            d_utils.PointcloudRotate(axis=np.array([1, 0, 0])),\n            d_utils.PointcloudScale(),\n            d_utils.PointcloudTranslate(),\n            d_utils.PointcloudJitter(),\n        ]\n    )\n    dset = ModelNet40Cls(16, train=True, transforms=transforms)\n    print(dset[0][0])\n    print(dset[0][1])\n    print(len(dset))\n    dloader = torch.utils.data.DataLoader(dset, batch_size=32, shuffle=True)\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/data/__init__.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom .ModelNet40Loader import ModelNet40Cls\nfrom .Indoor3DSemSegLoader import Indoor3DSemSeg\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/data/data_utils.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport numpy as np\n\n\ndef angle_axis(angle, axis):\n    # type: (float, np.ndarray) -> float\n    r\"\"\"Returns a 4x4 rotation matrix that performs a rotation around axis by angle\n\n    Parameters\n    ----------\n    angle : float\n        Angle to rotate by\n    axis: np.ndarray\n        Axis to rotate about\n\n    Returns\n    -------\n    torch.Tensor\n        3x3 rotation matrix\n    \"\"\"\n    u = axis / np.linalg.norm(axis)\n    cosval, sinval = np.cos(angle), np.sin(angle)\n\n    # yapf: disable\n    cross_prod_mat = np.array([[0.0, -u[2], u[1]],\n                                [u[2], 0.0, -u[0]],\n                                [-u[1], u[0], 0.0]])\n\n    R = torch.from_numpy(\n        cosval * np.eye(3)\n        + sinval * cross_prod_mat\n        + (1.0 - cosval) * np.outer(u, u)\n    )\n    # yapf: enable\n    return R.float()\n\n\nclass PointcloudScale(object):\n    def __init__(self, lo=0.8, hi=1.25):\n        self.lo, self.hi = lo, hi\n\n    def __call__(self, points):\n        scaler = np.random.uniform(self.lo, self.hi)\n        points[:, 0:3] *= scaler\n        return points\n\n\nclass PointcloudRotate(object):\n    def __init__(self, axis=np.array([0.0, 1.0, 0.0])):\n        self.axis = axis\n\n    def __call__(self, points):\n        rotation_angle = np.random.uniform() * 2 * np.pi\n        rotation_matrix = angle_axis(rotation_angle, self.axis)\n\n        normals = points.size(1) > 3\n        if not normals:\n            return torch.matmul(points, rotation_matrix.t())\n        else:\n            pc_xyz = points[:, 0:3]\n            pc_normals = points[:, 3:]\n            points[:, 0:3] = torch.matmul(pc_xyz, rotation_matrix.t())\n            points[:, 3:] = torch.matmul(pc_normals, rotation_matrix.t())\n\n            return points\n\n\nclass 
PointcloudRotatePerturbation(object):\n    def __init__(self, angle_sigma=0.06, angle_clip=0.18):\n        self.angle_sigma, self.angle_clip = angle_sigma, angle_clip\n\n    def _get_angles(self):\n        angles = np.clip(\n            self.angle_sigma * np.random.randn(3), -self.angle_clip, self.angle_clip\n        )\n\n        return angles\n\n    def __call__(self, points):\n        angles = self._get_angles()\n        Rx = angle_axis(angles[0], np.array([1.0, 0.0, 0.0]))\n        Ry = angle_axis(angles[1], np.array([0.0, 1.0, 0.0]))\n        Rz = angle_axis(angles[2], np.array([0.0, 0.0, 1.0]))\n\n        rotation_matrix = torch.matmul(torch.matmul(Rz, Ry), Rx)\n\n        normals = points.size(1) > 3\n        if not normals:\n            return torch.matmul(points, rotation_matrix.t())\n        else:\n            pc_xyz = points[:, 0:3]\n            pc_normals = points[:, 3:]\n            points[:, 0:3] = torch.matmul(pc_xyz, rotation_matrix.t())\n            points[:, 3:] = torch.matmul(pc_normals, rotation_matrix.t())\n\n            return points\n\n\nclass PointcloudJitter(object):\n    def __init__(self, std=0.01, clip=0.05):\n        self.std, self.clip = std, clip\n\n    def __call__(self, points):\n        jittered_data = (\n            points.new(points.size(0), 3)\n            .normal_(mean=0.0, std=self.std)\n            .clamp_(-self.clip, self.clip)\n        )\n        points[:, 0:3] += jittered_data\n        return points\n\n\nclass PointcloudTranslate(object):\n    def __init__(self, translate_range=0.1):\n        self.translate_range = translate_range\n\n    def __call__(self, points):\n        translation = np.random.uniform(-self.translate_range, self.translate_range)\n        points[:, 0:3] += translation\n        return points\n\n\nclass PointcloudToTensor(object):\n    def __call__(self, points):\n        return torch.from_numpy(points).float()\n\n\nclass PointcloudRandomInputDropout(object):\n    def __init__(self, 
max_dropout_ratio=0.875):\n        assert max_dropout_ratio >= 0 and max_dropout_ratio < 1\n        self.max_dropout_ratio = max_dropout_ratio\n\n    def __call__(self, points):\n        pc = points.numpy()\n\n        dropout_ratio = np.random.random() * self.max_dropout_ratio  # 0~0.875\n        drop_idx = np.where(np.random.random((pc.shape[0])) <= dropout_ratio)[0]\n        if len(drop_idx) > 0:\n            pc[drop_idx] = pc[0]  # set to the first point\n\n        return torch.from_numpy(pc).float()\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/models/__init__.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom .pointnet2_msg_sem import Pointnet2MSG as Pointnet2SemMSG\nfrom .pointnet2_ssg_sem import Pointnet2SSG as Pointnet2SemSSG\nfrom .pointnet2_msg_cls import Pointnet2MSG as Pointnet2ClsMSG\nfrom .pointnet2_ssg_cls import Pointnet2SSG as Pointnet2ClsSSG\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/models/pointnet2_msg_cls.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.nn as nn\nimport etw_pytorch_utils as pt_utils\nfrom collections import namedtuple\n\nfrom pointnet2.utils.pointnet2_modules import PointnetSAModuleMSG, PointnetSAModule\n\n\ndef model_fn_decorator(criterion):\n    ModelReturn = namedtuple(\"ModelReturn\", [\"preds\", \"loss\", \"acc\"])\n\n    def model_fn(model, data, epoch=0, eval=False):\n        with torch.set_grad_enabled(not eval):\n            inputs, labels = data\n            inputs = inputs.to(\"cuda\", non_blocking=True)\n            labels = labels.to(\"cuda\", non_blocking=True)\n\n            preds = model(inputs)\n            labels = labels.view(-1)\n            loss = criterion(preds, labels)\n\n            _, classes = torch.max(preds, -1)\n            acc = (classes == labels).float().sum() / labels.numel()\n\n            return ModelReturn(preds, loss, {\"acc\": acc.item(), \"loss\": loss.item()})\n\n    return model_fn\n\n\nclass Pointnet2MSG(nn.Module):\n    r\"\"\"\n        PointNet2 with multi-scale grouping\n        Classification network\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier\n        input_channels: int = 3\n            Number of input channels in the feature descriptor for each point.  
If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=3, use_xyz=True, version=1.0):\n        super(Pointnet2MSG, self).__init__()\n\n        self.SA_modules = nn.ModuleList()\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=512,\n                radii=[0.1, 0.2, 0.4],\n                nsamples=[16, 32, 128],\n                mlps=[\n                    [input_channels, 32, 32, 64],\n                    [input_channels, 64, 64, 128],\n                    [input_channels, 64, 96, 128],\n                ],\n                use_xyz=use_xyz,\n            )\n        )\n\n        input_channels = 64 + 128 + 128\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=128,\n                radii=[0.2, 0.4, 0.8],\n                nsamples=[32, 64, 128],\n                mlps=[\n                    [input_channels, 64, 64, 128],\n                    [input_channels, 128, 128, 256],\n                    [input_channels, 128, 128, 256],\n                ],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(mlp=[128 + 256 + 256, 256, 512, 1024], use_xyz=use_xyz)\n        )\n\n        if version == 1.0:\n            self.FC_layer = (\n                pt_utils.Seq(1024)\n                .fc(512, bn=True)\n                # potentially different for original one\n                # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_cls_msg.py#L34\n                .dropout(0.5)\n                .fc(256, bn=True)\n                # potentially different for original one\n                # 
https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_cls_msg.py#L34\n                .dropout(0.5)\n                .fc(num_classes, activation=None)\n            )\n        elif version == 2.0:\n            self.FC_layer = (\n                pt_utils.Seq(1024)\n                .fc(512, bn=True)\n                # potentially different for original one\n                # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_cls_msg.py#L34\n                .dropout(0.6)\n                .fc(256, bn=True)\n                # potentially different for original one\n                # https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_cls_msg.py#L34\n                .dropout(0.6)\n                .fc(num_classes, activation=None)\n            )\n        else:\n            assert False\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None\n\n        return xyz, features\n\n    def forward(self, pointcloud):\n        # type: (Pointnet2MSG, torch.cuda.FloatTensor) -> pt_utils.Seq\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predicts on\n                Each point in the point-cloud MUST\n                be formated as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n\n        for module in self.SA_modules:\n            xyz, features = module(xyz, features)\n\n        return self.FC_layer(features.squeeze(-1))\n\n\n# arguments found out based on https://github.com/charlesq34/pointnet2/commit/74c52aa30458d1695e093a179cd335b7885b3244\n# commit\nclass Pointnet2MSG5K(nn.Module):\n    r\"\"\"\n        PointNet2 with multi-scale grouping\n        Classification network\n\n  
      Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier\n        input_channels: int = 3\n            Number of input channels in the feature descriptor for each point.  If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=3, use_xyz=True):\n        super(Pointnet2MSG5K, self).__init__()\n\n        self.SA_modules = nn.ModuleList()\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=512,\n                radii=[0.1, 0.2, 0.4],\n                nsamples=[32,64,128],\n                mlps=[\n                    [input_channels, 32, 32, 64],\n                    [input_channels, 64, 64, 128],\n                    [input_channels, 64, 96, 128],\n                ],\n                use_xyz=use_xyz,\n            )\n        )\n\n        input_channels = 64 + 128 + 128\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=128,\n                radii=[0.2, 0.4, 0.8],\n                nsamples=[64,64,128],\n                mlps=[\n                    [input_channels, 64, 64, 128],\n                    [input_channels, 128, 128, 256],\n                    [input_channels, 128, 128, 256],\n                ],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(mlp=[128 + 256 + 256, 256, 512, 1024], use_xyz=use_xyz)\n        )\n\n        self.FC_layer = (\n            pt_utils.Seq(1024)\n            .fc(512, bn=True)\n            .dropout(0.5)\n            .fc(256, bn=True)\n            .dropout(0.5)\n            .fc(num_classes, activation=None)\n        )\n\n    def 
_break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None\n\n        return xyz, features\n\n    def forward(self, pointcloud):\n        # type: (Pointnet2MSG5K, torch.cuda.FloatTensor) -> pt_utils.Seq\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predictions on\n                Each point in the point-cloud MUST\n                be formatted as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n\n        for module in self.SA_modules:\n            xyz, features = module(xyz, features)\n\n        return self.FC_layer(features.squeeze(-1))\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/models/pointnet2_msg_sem.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.nn as nn\nimport etw_pytorch_utils as pt_utils\nfrom collections import namedtuple\n\nfrom pointnet2.utils.pointnet2_modules import PointnetFPModule, PointnetSAModuleMSG\n\n\ndef model_fn_decorator(criterion):\n    ModelReturn = namedtuple(\"ModelReturn\", [\"preds\", \"loss\", \"acc\"])\n\n    def model_fn(model, data, epoch=0, eval=False):\n        with torch.set_grad_enabled(not eval):\n            inputs, labels = data\n            inputs = inputs.to(\"cuda\", non_blocking=True)\n            labels = labels.to(\"cuda\", non_blocking=True)\n\n            preds = model(inputs)\n            loss = criterion(preds.view(labels.numel(), -1), labels.view(-1))\n\n            _, classes = torch.max(preds, -1)\n            acc = (classes == labels).float().sum() / labels.numel()\n\n        return ModelReturn(preds, loss, {\"acc\": acc.item(), \"loss\": loss.item()})\n\n    return model_fn\n\n\nclass Pointnet2MSG(nn.Module):\n    r\"\"\"\n        PointNet2 with multi-scale grouping\n        Semantic segmentation network that uses feature propogation layers\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier that run for each point\n        input_channels: int = 6\n            Number of input channels in the feature descriptor for each point.  
If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=6, use_xyz=True):\n        super(Pointnet2MSG, self).__init__()\n\n        self.SA_modules = nn.ModuleList()\n        c_in = input_channels\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=1024,\n                radii=[0.05, 0.1],\n                nsamples=[16, 32],\n                mlps=[[c_in, 16, 16, 32], [c_in, 32, 32, 64]],\n                use_xyz=use_xyz,\n            )\n        )\n        c_out_0 = 32 + 64\n\n        c_in = c_out_0\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=256,\n                radii=[0.1, 0.2],\n                nsamples=[16, 32],\n                mlps=[[c_in, 64, 64, 128], [c_in, 64, 96, 128]],\n                use_xyz=use_xyz,\n            )\n        )\n        c_out_1 = 128 + 128\n\n        c_in = c_out_1\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=64,\n                radii=[0.2, 0.4],\n                nsamples=[16, 32],\n                mlps=[[c_in, 128, 196, 256], [c_in, 128, 196, 256]],\n                use_xyz=use_xyz,\n            )\n        )\n        c_out_2 = 256 + 256\n\n        c_in = c_out_2\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=16,\n                radii=[0.4, 0.8],\n                nsamples=[16, 32],\n                mlps=[[c_in, 256, 256, 512], [c_in, 256, 384, 512]],\n                use_xyz=use_xyz,\n            )\n        )\n        c_out_3 = 512 + 512\n\n        self.FP_modules = nn.ModuleList()\n        self.FP_modules.append(PointnetFPModule(mlp=[256 + input_channels, 128, 128]))\n        
self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_0, 256, 256]))\n        self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_1, 512, 512]))\n        self.FP_modules.append(PointnetFPModule(mlp=[c_out_3 + c_out_2, 512, 512]))\n\n        self.FC_layer = (\n            pt_utils.Seq(128)\n            .conv1d(128, bn=True)\n            .dropout()\n            .conv1d(num_classes, activation=None)\n        )\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None\n\n        return xyz, features\n\n    def forward(self, pointcloud):\n        # type: (Pointnet2MSG, torch.cuda.FloatTensor) -> pt_utils.Seq\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predicts on\n                Each point in the point-cloud MUST\n                be formated as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n\n        l_xyz, l_features = [xyz], [features]\n        for i in range(len(self.SA_modules)):\n            li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])\n            l_xyz.append(li_xyz)\n            l_features.append(li_features)\n\n        for i in range(-1, -(len(self.FP_modules) + 1), -1):\n            l_features[i - 1] = self.FP_modules[i](\n                l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i]\n            )\n\n        return self.FC_layer(l_features[0]).transpose(1, 2).contiguous()\n\n\nif __name__ == \"__main__\":\n    from torch.autograd import Variable\n    import numpy as np\n    import torch.optim as optim\n\n    B = 2\n    N = 32\n    inputs = torch.randn(B, N, 6).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B * N)).view(B, N).cuda()\n    
model = Pointnet2MSG(3, input_channels=3)\n    model.cuda()\n\n    optimizer = optim.Adam(model.parameters(), lr=1e-2)\n\n    print(\"Testing with xyz\")\n    model_fn = model_fn_decorator(nn.CrossEntropyLoss())\n    for _ in range(5):\n        optimizer.zero_grad()\n        _, loss, _ = model_fn(model, (inputs, labels))\n        loss.backward()\n        print(loss.item())\n        optimizer.step()\n\n    # with use_xyz=False\n    inputs = torch.randn(B, N, 6).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B * N)).view(B, N).cuda()\n    model = Pointnet2MSG(3, input_channels=3, use_xyz=False)\n    model.cuda()\n\n    optimizer = optim.Adam(model.parameters(), lr=1e-2)\n\n    print(\"Testing without xyz\")\n    model_fn = model_fn_decorator(nn.CrossEntropyLoss())\n    for _ in range(5):\n        optimizer.zero_grad()\n        _, loss, _ = model_fn(model, (inputs, labels))\n        loss.backward()\n        print(loss.item())\n        optimizer.step()\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/models/pointnet2_ssg_cls.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.nn as nn\nimport etw_pytorch_utils as pt_utils\nfrom collections import namedtuple\n\nfrom pointnet2.utils.pointnet2_modules import PointnetSAModule\n\n\ndef model_fn_decorator(criterion):\n    ModelReturn = namedtuple(\"ModelReturn\", [\"preds\", \"loss\", \"acc\"])\n\n    def model_fn(model, data, epoch=0, eval=False):\n        with torch.set_grad_enabled(not eval):\n            inputs, labels = data\n            inputs = inputs.to(\"cuda\", non_blocking=True)\n            labels = labels.to(\"cuda\", non_blocking=True)\n\n            preds = model(inputs)\n            labels = labels.view(-1)\n            loss = criterion(preds, labels)\n\n            _, classes = torch.max(preds, -1)\n            acc = (classes == labels).float().sum() / labels.numel()\n\n            return ModelReturn(preds, loss, {\"acc\": acc.item(), \"loss\": loss.item()})\n\n    return model_fn\n\n\nclass Pointnet2SSG(nn.Module):\n    r\"\"\"\n        PointNet2 with single-scale grouping\n        Classification network\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier\n        input_channels: int = 3\n            Number of input channels in the feature descriptor for each point.  
If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=3, use_xyz=True):\n        super(Pointnet2SSG, self).__init__()\n\n        self.SA_modules = nn.ModuleList()\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=512,\n                radius=0.2,\n                nsample=64,\n                mlp=[input_channels, 64, 64, 128],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=128,\n                radius=0.4,\n                nsample=64,\n                mlp=[128, 128, 128, 256],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(mlp=[256, 256, 512, 1024], use_xyz=use_xyz)\n        )\n\n        self.FC_layer = (\n            pt_utils.Seq(1024)\n            .fc(512, bn=True)\n            .dropout(0.5)\n            .fc(256, bn=True)\n            .dropout(0.5)\n            .fc(num_classes, activation=None)\n        )\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None\n\n        return xyz, features\n\n    def forward(self, pointcloud):\n        # type: (Pointnet2SSG, torch.cuda.FloatTensor) -> pt_utils.Seq\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predicts on\n                Each point in the point-cloud MUST\n                be formated as (x, y, z, features...)\n        \"\"\"\n      
  xyz, features = self._break_up_pc(pointcloud)\n\n        for module in self.SA_modules:\n            xyz, features = module(xyz, features)\n\n        return self.FC_layer(features.squeeze(-1))\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/models/pointnet2_ssg_sem.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.nn as nn\nimport etw_pytorch_utils as pt_utils\nfrom collections import namedtuple\n\nfrom pointnet2.utils.pointnet2_modules import PointnetSAModule, PointnetFPModule\n\n\ndef model_fn_decorator(criterion):\n    ModelReturn = namedtuple(\"ModelReturn\", [\"preds\", \"loss\", \"acc\"])\n\n    def model_fn(model, data, epoch=0, eval=False):\n        with torch.set_grad_enabled(not eval):\n            inputs, labels = data\n            inputs = inputs.to(\"cuda\", non_blocking=True)\n            labels = labels.to(\"cuda\", non_blocking=True)\n\n            preds = model(inputs)\n            loss = criterion(preds.view(labels.numel(), -1), labels.view(-1))\n\n            _, classes = torch.max(preds, -1)\n            acc = (classes == labels).float().sum() / labels.numel()\n\n            return ModelReturn(preds, loss, {\"acc\": acc.item(), \"loss\": loss.item()})\n\n    return model_fn\n\n\nclass Pointnet2SSG(nn.Module):\n    r\"\"\"\n        PointNet2 with single-scale grouping\n        Semantic segmentation network that uses feature propogation layers\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier that run for each point\n        input_channels: int = 6\n            Number of input channels in the feature descriptor for each point.  
If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=3, use_xyz=True):\n        super(Pointnet2SSG, self).__init__()\n\n        self.SA_modules = nn.ModuleList()\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=1024,\n                radius=0.1,\n                nsample=32,\n                mlp=[input_channels, 32, 32, 64],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=256,\n                radius=0.2,\n                nsample=32,\n                mlp=[64, 64, 64, 128],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=64,\n                radius=0.4,\n                nsample=32,\n                mlp=[128, 128, 128, 256],\n                use_xyz=use_xyz,\n            )\n        )\n        self.SA_modules.append(\n            PointnetSAModule(\n                npoint=16,\n                radius=0.8,\n                nsample=32,\n                mlp=[256, 256, 256, 512],\n                use_xyz=use_xyz,\n            )\n        )\n\n        self.FP_modules = nn.ModuleList()\n        self.FP_modules.append(\n            PointnetFPModule(mlp=[128 + input_channels, 128, 128, 128])\n        )\n        self.FP_modules.append(PointnetFPModule(mlp=[256 + 64, 256, 128]))\n        self.FP_modules.append(PointnetFPModule(mlp=[256 + 128, 256, 256]))\n        self.FP_modules.append(PointnetFPModule(mlp=[512 + 256, 256, 256]))\n\n        self.FC_layer = (\n            pt_utils.Seq(128)\n            .conv1d(128, bn=True)\n            .dropout()\n            .conv1d(num_classes, 
activation=None)\n        )\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None\n\n        return xyz, features\n\n    def forward(self, pointcloud):\n        # type: (Pointnet2SSG, torch.cuda.FloatTensor) -> pt_utils.Seq\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predictions on\n                Each point in the point-cloud MUST\n                be formatted as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n\n        l_xyz, l_features = [xyz], [features]\n        for i in range(len(self.SA_modules)):\n            li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])\n            l_xyz.append(li_xyz)\n            l_features.append(li_features)\n\n        for i in range(-1, -(len(self.FP_modules) + 1), -1):\n            l_features[i - 1] = self.FP_modules[i](\n                l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i]\n            )\n\n        return self.FC_layer(l_features[0]).transpose(1, 2).contiguous()\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/train/__init__.py",
    "content": ""
  },
  {
    "path": "pointnet2_pyt/pointnet2/train/train_cls.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nimport etw_pytorch_utils as pt_utils\nimport pprint\nimport os.path as osp\nimport os\nimport argparse\n\nfrom pointnet2.models import Pointnet2ClsMSG as Pointnet\nfrom pointnet2.models.pointnet2_msg_cls import model_fn_decorator\nfrom pointnet2.data import ModelNet40Cls\nimport pointnet2.data.data_utils as d_utils\n\ntorch.backends.cudnn.enabled = True\ntorch.backends.cudnn.benchmark = True\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(\n        description=\"Arguments for cls training\",\n        formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n    )\n    parser.add_argument(\"-batch_size\", type=int, default=16, help=\"Batch size\")\n    parser.add_argument(\n        \"-num_points\", type=int, default=4096, help=\"Number of points to train with\"\n    )\n    parser.add_argument(\n        \"-weight_decay\", type=float, default=1e-5, help=\"L2 regularization coeff\"\n    )\n    parser.add_argument(\"-lr\", type=float, default=1e-2, help=\"Initial learning rate\")\n    parser.add_argument(\n        \"-lr_decay\", type=float, default=0.7, help=\"Learning rate decay gamma\"\n    )\n    parser.add_argument(\n        \"-decay_step\", type=float, default=2e5, help=\"Learning rate decay step\"\n    )\n    parser.add_argument(\n        \"-bn_momentum\", type=float, default=0.5, help=\"Initial batch norm momentum\"\n    )\n    parser.add_argument(\n        \"-bnm_decay\", type=float, default=0.5, help=\"Batch norm momentum decay gamma\"\n    )\n    parser.add_argument(\n        \"-checkpoint\", type=str, default=None, help=\"Checkpoint to start from\"\n    )\n    parser.add_argument(\n        \"-epochs\", type=int, 
default=200, help=\"Number of epochs to train for\"\n    )\n    parser.add_argument(\n        \"-run_name\",\n        type=str,\n        default=\"cls_run_1\",\n        help=\"Name for run in tensorboard_logger\",\n    )\n    parser.add_argument(\"--visdom-port\", type=int, default=8097)\n    parser.add_argument(\"--visdom\", action=\"store_true\")\n\n    return parser.parse_args()\n\n\nlr_clip = 1e-5\nbnm_clip = 1e-2\n\nif __name__ == \"__main__\":\n    args = parse_args()\n\n    transforms = transforms.Compose(\n        [\n            d_utils.PointcloudToTensor(),\n            d_utils.PointcloudScale(),\n            d_utils.PointcloudRotate(),\n            d_utils.PointcloudRotatePerturbation(),\n            d_utils.PointcloudTranslate(),\n            d_utils.PointcloudJitter(),\n            d_utils.PointcloudRandomInputDropout(),\n        ]\n    )\n\n    test_set = ModelNet40Cls(args.num_points, transforms=transforms, train=False)\n    test_loader = DataLoader(\n        test_set,\n        batch_size=args.batch_size,\n        shuffle=True,\n        num_workers=2,\n        pin_memory=True,\n    )\n\n    train_set = ModelNet40Cls(args.num_points, transforms=transforms)\n    train_loader = DataLoader(\n        train_set,\n        batch_size=args.batch_size,\n        shuffle=True,\n        num_workers=2,\n        pin_memory=True,\n    )\n\n    model = Pointnet(input_channels=0, num_classes=40, use_xyz=True)\n    model.cuda()\n    optimizer = optim.Adam(\n        model.parameters(), lr=args.lr, weight_decay=args.weight_decay\n    )\n    lr_lbmd = lambda it: max(\n        args.lr_decay ** (int(it * args.batch_size / args.decay_step)),\n        lr_clip / args.lr,\n    )\n    bn_lbmd = lambda it: max(\n        args.bn_momentum\n        * args.bnm_decay ** (int(it * args.batch_size / args.decay_step)),\n        bnm_clip,\n    )\n\n    # default value\n    it = -1  # for the initialize value of `LambdaLR` and `BNMomentumScheduler`\n    best_loss = 1e10\n    start_epoch = 
1\n\n    # load status from checkpoint\n    if args.checkpoint is not None:\n        checkpoint_status = pt_utils.load_checkpoint(\n            model, optimizer, filename=args.checkpoint.split(\".\")[0]\n        )\n        if checkpoint_status is not None:\n            it, start_epoch, best_loss = checkpoint_status\n\n    lr_scheduler = lr_sched.LambdaLR(optimizer, lr_lambda=lr_lbmd, last_epoch=it)\n    bnm_scheduler = pt_utils.BNMomentumScheduler(\n        model, bn_lambda=bn_lbmd, last_epoch=it\n    )\n\n    it = max(it, 0)  # for the initialize value of `trainer.train`\n\n    model_fn = model_fn_decorator(nn.CrossEntropyLoss())\n\n    if args.visdom:\n        viz = pt_utils.VisdomViz(port=args.visdom_port)\n    else:\n        viz = pt_utils.CmdLineViz()\n\n    viz.text(pprint.pformat(vars(args)))\n\n    if not osp.isdir(\"checkpoints\"):\n        os.makedirs(\"checkpoints\")\n\n    trainer = pt_utils.Trainer(\n        model,\n        model_fn,\n        optimizer,\n        checkpoint_name=\"checkpoints/pointnet2_cls\",\n        best_name=\"checkpoints/pointnet2_cls_best\",\n        lr_scheduler=lr_scheduler,\n        bnm_scheduler=bnm_scheduler,\n        viz=viz,\n    )\n\n    trainer.train(\n        it, start_epoch, args.epochs, train_loader, test_loader, best_loss=best_loss\n    )\n\n    if start_epoch == args.epochs:\n        _ = trainer.eval_epoch(test_loader)\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/train/train_sem_seg.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nimport etw_pytorch_utils as pt_utils\nimport pprint\nimport os.path as osp\nimport os\nimport argparse\n\nfrom pointnet2.models import Pointnet2SemMSG as Pointnet\nfrom pointnet2.models.pointnet2_msg_sem import model_fn_decorator\nfrom pointnet2.data import Indoor3DSemSeg\n\nparser = argparse.ArgumentParser(description=\"Arg parser\")\nparser.add_argument(\n    \"-batch_size\", type=int, default=32, help=\"Batch size [default: 32]\"\n)\nparser.add_argument(\n    \"-num_points\",\n    type=int,\n    default=4096,\n    help=\"Number of points to train with [default: 4096]\",\n)\nparser.add_argument(\n    \"-weight_decay\",\n    type=float,\n    default=0,\n    help=\"L2 regularization coeff [default: 0.0]\",\n)\nparser.add_argument(\n    \"-lr\", type=float, default=1e-2, help=\"Initial learning rate [default: 1e-2]\"\n)\nparser.add_argument(\n    \"-lr_decay\",\n    type=float,\n    default=0.5,\n    help=\"Learning rate decay gamma [default: 0.5]\",\n)\nparser.add_argument(\n    \"-decay_step\",\n    type=float,\n    default=2e5,\n    help=\"Learning rate decay step [default: 20]\",\n)\nparser.add_argument(\n    \"-bn_momentum\",\n    type=float,\n    default=0.9,\n    help=\"Initial batch norm momentum [default: 0.9]\",\n)\nparser.add_argument(\n    \"-bn_decay\",\n    type=float,\n    default=0.5,\n    help=\"Batch norm momentum decay gamma [default: 0.5]\",\n)\nparser.add_argument(\n    \"-checkpoint\", type=str, default=None, help=\"Checkpoint to start from\"\n)\nparser.add_argument(\n    \"-epochs\", type=int, default=200, help=\"Number of epochs to train for\"\n)\nparser.add_argument(\n    \"-run_name\",\n    type=str,\n    default=\"sem_seg_run_1\",\n    help=\"Name for run in 
tensorboard_logger\",\n)\nparser.add_argument(\"--visdom-port\", type=int, default=8097)\nparser.add_argument(\"--visdom\", action=\"store_true\")\n\nlr_clip = 1e-5\nbnm_clip = 1e-2\n\nif __name__ == \"__main__\":\n    args = parser.parse_args()\n\n    test_set = Indoor3DSemSeg(args.num_points, train=False)\n    test_loader = DataLoader(\n        test_set,\n        batch_size=args.batch_size,\n        shuffle=True,\n        pin_memory=True,\n        num_workers=2,\n    )\n\n    train_set = Indoor3DSemSeg(args.num_points)\n    train_loader = DataLoader(\n        train_set,\n        batch_size=args.batch_size,\n        pin_memory=True,\n        num_workers=2,\n        shuffle=True,\n    )\n\n    model = Pointnet(num_classes=13, input_channels=6, use_xyz=True)\n    model.cuda()\n    optimizer = optim.Adam(\n        model.parameters(), lr=args.lr, weight_decay=args.weight_decay\n    )\n\n    lr_lbmd = lambda it: max(\n        args.lr_decay ** (int(it * args.batch_size / args.decay_step)),\n        lr_clip / args.lr,\n    )\n    bnm_lmbd = lambda it: max(\n        args.bn_momentum\n        * args.bn_decay ** (int(it * args.batch_size / args.decay_step)),\n        bnm_clip,\n    )\n\n    # default value\n    it = -1  # for the initialize value of `LambdaLR` and `BNMomentumScheduler`\n    best_loss = 1e10\n    start_epoch = 1\n\n    # load status from checkpoint\n    if args.checkpoint is not None:\n        checkpoint_status = pt_utils.load_checkpoint(\n            model, optimizer, filename=args.checkpoint.split(\".\")[0]\n        )\n        if checkpoint_status is not None:\n            it, start_epoch, best_loss = checkpoint_status\n\n    lr_scheduler = lr_sched.LambdaLR(optimizer, lr_lambda=lr_lbmd, last_epoch=it)\n    bnm_scheduler = pt_utils.BNMomentumScheduler(\n        model, bn_lambda=bnm_lmbd, last_epoch=it\n    )\n\n    it = max(it, 0)  # for the initialize value of `trainer.train`\n\n    model_fn = model_fn_decorator(nn.CrossEntropyLoss())\n\n    if 
args.visdom:\n        viz = pt_utils.VisdomViz(port=args.visdom_port)\n    else:\n        viz = pt_utils.CmdLineViz()\n\n    viz.text(pprint.pformat(vars(args)))\n\n    if not osp.isdir(\"checkpoints\"):\n        os.makedirs(\"checkpoints\")\n\n    trainer = pt_utils.Trainer(\n        model,\n        model_fn,\n        optimizer,\n        checkpoint_name=\"checkpoints/pointnet2_semseg\",\n        best_name=\"checkpoints/pointnet2_semseg_best\",\n        lr_scheduler=lr_scheduler,\n        bnm_scheduler=bnm_scheduler,\n        viz=viz,\n    )\n\n    trainer.train(\n        it, start_epoch, args.epochs, train_loader, test_loader, best_loss=best_loss\n    )\n\n    if start_epoch == args.epochs:\n        _ = trainer.eval_epoch(test_loader)\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/utils/.gitignore",
    "content": "build\n_ext\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/utils/__init__.py",
"content": "'''\nDescription: \nAuthor: Jiachen Sun\nDate: 2022-02-16 22:23:16\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-23 16:50:08\n'''\nfrom __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom PCT_Pytorch.pointnet2_ops_lib.pointnet2_ops import pointnet2_utils\nfrom PCT_Pytorch.pointnet2_ops_lib.pointnet2_ops import pointnet2_modules\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/utils/linalg_utils.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nfrom enum import Enum\nimport numpy as np\n\nPDist2Order = Enum(\"PDist2Order\", \"d_first d_second\")\n\n\ndef pdist2(X, Z=None, order=PDist2Order.d_second):\n    # type: (torch.Tensor, torch.Tensor, PDist2Order) -> torch.Tensor\n    r\"\"\" Calculates the pairwise distance between X and Z\n\n    D[b, i, j] = l2 distance X[b, i] and Z[b, j]\n\n    Parameters\n    ---------\n    X : torch.Tensor\n        X is a (B, N, d) tensor.  There are B batches, and N vectors of dimension d\n    Z: torch.Tensor\n        Z is a (B, M, d) tensor.  If Z is None, then Z = X\n\n    Returns\n    -------\n    torch.Tensor\n        Distance matrix is size (B, N, M)\n    \"\"\"\n\n    if order == PDist2Order.d_second:\n        if X.dim() == 2:\n            X = X.unsqueeze(0)\n        if Z is None:\n            Z = X\n            G = np.matmul(X, Z.transpose(-2, -1))\n            S = (X * X).sum(-1, keepdim=True)\n            R = S.transpose(-2, -1)\n        else:\n            if Z.dim() == 2:\n                Z = Z.unsqueeze(0)\n            G = np.matmul(X, Z.transpose(-2, -1))\n            S = (X * X).sum(-1, keepdim=True)\n            R = (Z * Z).sum(-1, keepdim=True).transpose(-2, -1)\n    else:\n        if X.dim() == 2:\n            X = X.unsqueeze(0)\n        if Z is None:\n            Z = X\n            G = np.matmul(X.transpose(-2, -1), Z)\n            R = (X * X).sum(-2, keepdim=True)\n            S = R.transpose(-2, -1)\n        else:\n            if Z.dim() == 2:\n                Z = Z.unsqueeze(0)\n            G = np.matmul(X.transpose(-2, -1), Z)\n            S = (X * X).sum(-2, keepdim=True).transpose(-2, -1)\n            R = (Z * Z).sum(-2, keepdim=True)\n\n    return torch.abs(R + S - 2 * G).squeeze(0)\n\n\ndef pdist2_slow(X, Z=None):\n    if Z is None:\n        Z = X\n    D = torch.zeros(X.size(0), X.size(2), 
Z.size(2))\n\n    for b in range(D.size(0)):\n        for i in range(D.size(1)):\n            for j in range(D.size(2)):\n                D[b, i, j] = torch.dist(X[b, :, i], Z[b, :, j])\n    return D\n\n\nif __name__ == \"__main__\":\n    X = torch.randn(2, 3, 5)\n    Z = torch.randn(2, 3, 3)\n\n    print(pdist2(X, order=PDist2Order.d_first))\n    print(pdist2_slow(X))\n    print(torch.dist(pdist2(X, order=PDist2Order.d_first), pdist2_slow(X)))\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/utils/pointnet2_modules.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport etw_pytorch_utils as pt_utils\n\nfrom pointnet2.utils import pointnet2_utils\n\nif False:\n    # Workaround for type hints without depending on the `typing` module\n    from typing import *\n\n\nclass _PointnetSAModuleBase(nn.Module):\n    def __init__(self):\n        super(_PointnetSAModuleBase, self).__init__()\n        self.npoint = None\n        self.groupers = None\n        self.mlps = None\n\n    def forward(self, xyz, features=None):\n        # type: (_PointnetSAModuleBase, torch.Tensor, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor of the xyz coordinates of the features\n        features : torch.Tensor\n            (B, C, N) tensor of the descriptors of the the features\n\n        Returns\n        -------\n        new_xyz : torch.Tensor\n            (B, npoint, 3) tensor of the new features' xyz\n        new_features : torch.Tensor\n            (B,  \\sum_k(mlps[k][-1]), npoint) tensor of the new_features descriptors\n        \"\"\"\n\n        new_features_list = []\n        B = xyz.shape[0]\n        xyz_flipped = xyz.transpose(1, 2).contiguous()\n        new_xyz = (\n            pointnet2_utils.gather_operation(\n                xyz_flipped, pointnet2_utils.furthest_point_sample(xyz, self.npoint)\n            )\n            .transpose(1, 2)\n            .contiguous()\n            if self.npoint is not None\n            else torch.zeros((B, 1, 3)).to(xyz.device)\n        )\n\n        for i in range(len(self.groupers)):\n            new_features = self.groupers[i](\n                xyz, new_xyz, features\n            )  # (B, C, npoint, nsample)\n\n            new_features = self.mlps[i](new_features)  # (B, mlp[-1], 
npoint, nsample)\n            new_features = F.max_pool2d(\n                new_features, kernel_size=[1, new_features.size(3)]\n            )  # (B, mlp[-1], npoint, 1)\n            new_features = new_features.squeeze(-1)  # (B, mlp[-1], npoint)\n\n            new_features_list.append(new_features)\n\n        return new_xyz, torch.cat(new_features_list, dim=1)\n\n\nclass PointnetSAModuleMSG(_PointnetSAModuleBase):\n    r\"\"\"Pointnet set abstraction layer with multiscale grouping\n\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    radii : list of float32\n        list of radii to group with\n    nsamples : list of int32\n        Number of samples in each ball query\n    mlps : list of list of int32\n        Spec of the pointnet before the global max_pool for each scale\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, npoint, radii, nsamples, mlps, bn=True, use_xyz=True):\n        # type: (PointnetSAModuleMSG, int, List[float], List[int], List[List[int]], bool, bool) -> None\n        super(PointnetSAModuleMSG, self).__init__()\n\n        assert len(radii) == len(nsamples) == len(mlps)\n\n        self.npoint = npoint\n        self.groupers = nn.ModuleList()\n        self.mlps = nn.ModuleList()\n        for i in range(len(radii)):\n            radius = radii[i]\n            nsample = nsamples[i]\n            self.groupers.append(\n                pointnet2_utils.QueryAndGroup(radius, nsample, use_xyz=use_xyz)\n                if npoint is not None\n                else pointnet2_utils.GroupAll(use_xyz)\n            )\n            mlp_spec = mlps[i]\n            if use_xyz:\n                mlp_spec[0] += 3\n\n            self.mlps.append(pt_utils.SharedMLP(mlp_spec, bn=bn))\n\n\nclass PointnetSAModule(PointnetSAModuleMSG):\n    r\"\"\"Pointnet set abstraction layer\n\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    radius : float\n        Radius of ball\n    nsample : int\n   
     Number of samples in the ball query\n    mlp : list\n        Spec of the pointnet before the global max_pool\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n        self, mlp, npoint=None, radius=None, nsample=None, bn=True, use_xyz=True\n    ):\n        # type: (PointnetSAModule, List[int], int, float, int, bool, bool) -> None\n        super(PointnetSAModule, self).__init__(\n            mlps=[mlp],\n            npoint=npoint,\n            radii=[radius],\n            nsamples=[nsample],\n            bn=bn,\n            use_xyz=use_xyz,\n        )\n\n\nclass PointnetFPModule(nn.Module):\n    r\"\"\"Propagates the features of one set to another\n\n    Parameters\n    ----------\n    mlp : list\n        Pointnet module parameters\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, mlp, bn=True):\n        # type: (PointnetFPModule, List[int], bool) -> None\n        super(PointnetFPModule, self).__init__()\n        self.mlp = pt_utils.SharedMLP(mlp, bn=bn)\n\n    def forward(self, unknown, known, unknow_feats, known_feats):\n        # type: (PointnetFPModule, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of the xyz positions of the unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of the xyz positions of the known features\n        unknow_feats : torch.Tensor\n            (B, C1, n) tensor of the features to be propagated to\n        known_feats : torch.Tensor\n            (B, C2, m) tensor of features to be propagated\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, mlp[-1], n) tensor of the features of the unknown features\n        \"\"\"\n\n        if known is not None:\n            dist, idx = pointnet2_utils.three_nn(unknown, known)\n            dist_recip = 1.0 / (dist + 1e-8)\n            norm = 
torch.sum(dist_recip, dim=2, keepdim=True)\n            weight = dist_recip / norm\n\n            interpolated_feats = pointnet2_utils.three_interpolate(\n                known_feats, idx, weight\n            )\n        else:\n            # known_feats.size() is a torch.Size (a tuple), so concatenate a\n            # tuple here -- adding a list would raise a TypeError\n            interpolated_feats = known_feats.expand(\n                *(known_feats.size()[0:2] + (unknown.size(1),))\n            )\n\n        if unknow_feats is not None:\n            new_features = torch.cat(\n                [interpolated_feats, unknow_feats], dim=1\n            )  # (B, C2 + C1, n)\n        else:\n            new_features = interpolated_feats\n\n        new_features = new_features.unsqueeze(-1)\n        new_features = self.mlp(new_features)\n\n        return new_features.squeeze(-1)\n\n\nif __name__ == \"__main__\":\n    from torch.autograd import Variable\n\n    torch.manual_seed(1)\n    torch.cuda.manual_seed_all(1)\n    xyz = Variable(torch.randn(2, 9, 3).cuda(), requires_grad=True)\n    xyz_feats = Variable(torch.randn(2, 9, 6).cuda(), requires_grad=True)\n\n    test_module = PointnetSAModuleMSG(\n        npoint=2, radii=[5.0, 10.0], nsamples=[6, 3], mlps=[[9, 3], [9, 6]]\n    )\n    test_module.cuda()\n    print(test_module(xyz, xyz_feats))\n\n    #  test_module = PointnetFPModule(mlp=[6, 6])\n    #  test_module.cuda()\n    #  from torch.autograd import gradcheck\n    #  inputs = (xyz, xyz, None, xyz_feats)\n    #  test = gradcheck(test_module, inputs, eps=1e-6, atol=1e-4)\n    #  print(test)\n\n    for _ in range(1):\n        _, new_features = test_module(xyz, xyz_feats)\n        new_features.backward(torch.cuda.FloatTensor(*new_features.size()).fill_(1))\n        print(new_features)\n        print(xyz.grad)\n"
  },
  {
    "path": "pointnet2_pyt/pointnet2/utils/pointnet2_utils.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport torch\nfrom torch.autograd import Function\nimport torch.nn as nn\nimport etw_pytorch_utils as pt_utils\nimport sys\n\ntry:\n    import builtins\nexcept:\n    import __builtin__ as builtins\n\ntry:\n    import pointnet2._ext as _ext\nexcept ImportError:\n    if not getattr(builtins, \"__POINTNET2_SETUP__\", False):\n        raise ImportError(\n            \"Could not import _ext module.\\n\"\n            \"Please see the setup instructions in the README: \"\n            \"https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/README.rst\"\n        )\n\nif False:\n    # Workaround for type hints without depending on the `typing` module\n    from typing import *\n\n\nclass RandomDropout(nn.Module):\n    def __init__(self, p=0.5, inplace=False):\n        super(RandomDropout, self).__init__()\n        self.p = p\n        self.inplace = inplace\n\n    def forward(self, X):\n        theta = torch.Tensor(1).uniform_(0, self.p)[0]\n        return pt_utils.feature_dropout_no_scaling(X, theta, self.train, self.inplace)\n\n\nclass FurthestPointSampling(Function):\n    @staticmethod\n    def forward(ctx, xyz, npoint):\n        # type: (Any, torch.Tensor, int) -> torch.Tensor\n        r\"\"\"\n        Uses iterative furthest point sampling to select a set of npoint features that have the largest\n        minimum distance\n\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor where N > npoint\n        npoint : int32\n            number of features in the sampled set\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint) tensor containing the set\n        \"\"\"\n        return _ext.furthest_point_sampling(xyz, npoint)\n\n    @staticmethod\n    def backward(xyz, a=None):\n        return None, None\n\n\nfurthest_point_sample = 
FurthestPointSampling.apply\n\n\nclass GatherOperation(Function):\n    @staticmethod\n    def forward(ctx, features, idx):\n        # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, N) tensor\n\n        idx : torch.Tensor\n            (B, npoint) tensor of the features to gather\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint) tensor\n        \"\"\"\n\n        _, C, N = features.size()\n\n        ctx.for_backwards = (idx, C, N)\n\n        return _ext.gather_points(features, idx)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        idx, C, N = ctx.for_backwards\n\n        grad_features = _ext.gather_points_grad(grad_out.contiguous(), idx, N)\n        return grad_features, None\n\n\ngather_operation = GatherOperation.apply\n\n\nclass ThreeNN(Function):\n    @staticmethod\n    def forward(ctx, unknown, known):\n        # type: (Any, torch.Tensor, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]\n        r\"\"\"\n            Find the three nearest neighbors of unknown in known\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of known features\n\n        Returns\n        -------\n        dist : torch.Tensor\n            (B, n, 3) l2 distance to the three nearest neighbors\n        idx : torch.Tensor\n            (B, n, 3) index of 3 nearest neighbors\n        \"\"\"\n        dist2, idx = _ext.three_nn(unknown, known)\n\n        return torch.sqrt(dist2), idx\n\n    @staticmethod\n    def backward(ctx, a=None, b=None):\n        return None, None\n\n\nthree_nn = ThreeNN.apply\n\n\nclass ThreeInterpolate(Function):\n    @staticmethod\n    def forward(ctx, features, idx, weight):\n        # type: (Any, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n   
         Performs weighted linear interpolation on 3 features\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, c, m) Feature descriptors to be interpolated from\n        idx : torch.Tensor\n            (B, n, 3) three nearest neighbors of the target features in features\n        weight : torch.Tensor\n            (B, n, 3) weights\n\n        Returns\n        -------\n        torch.Tensor\n            (B, c, n) tensor of the interpolated features\n        \"\"\"\n        B, c, m = features.size()\n        n = idx.size(1)\n\n        ctx.three_interpolate_for_backward = (idx, weight, m)\n\n        return _ext.three_interpolate(features, idx, weight)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        # type: (Any, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]\n        r\"\"\"\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, c, n) tensor with gradients of outputs\n\n        Returns\n        -------\n        grad_features : torch.Tensor\n            (B, c, m) tensor with gradients of features\n\n        None\n\n        None\n        \"\"\"\n        idx, weight, m = ctx.three_interpolate_for_backward\n\n        grad_features = _ext.three_interpolate_grad(\n            grad_out.contiguous(), idx, weight, m\n        )\n\n        return grad_features, None, None\n\n\nthree_interpolate = ThreeInterpolate.apply\n\n\nclass GroupingOperation(Function):\n    @staticmethod\n    def forward(ctx, features, idx):\n        # type: (Any, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, N) tensor of features to group\n        idx : torch.Tensor\n            (B, npoint, nsample) tensor containing the indices of features to group with\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint, nsample) tensor\n        \"\"\"\n        B, nfeatures, 
nsample = idx.size()\n        _, C, N = features.size()\n\n        ctx.for_backwards = (idx, N)\n\n        return _ext.group_points(features, idx)\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        # type: (Any, torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]\n        r\"\"\"\n\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, C, npoint, nsample) tensor of the gradients of the output from forward\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, N) gradient of the features\n        None\n        \"\"\"\n        idx, N = ctx.for_backwards\n\n        grad_features = _ext.group_points_grad(grad_out.contiguous(), idx, N)\n\n        return grad_features, None\n\n\ngrouping_operation = GroupingOperation.apply\n\n\nclass BallQuery(Function):\n    @staticmethod\n    def forward(ctx, radius, nsample, xyz, new_xyz):\n        # type: (Any, float, int, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n\n        Parameters\n        ----------\n        radius : float\n            radius of the balls\n        nsample : int\n            maximum number of features in the balls\n        xyz : torch.Tensor\n            (B, N, 3) xyz coordinates of the features\n        new_xyz : torch.Tensor\n            (B, npoint, 3) centers of the ball query\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint, nsample) tensor with the indices of the features that form the query balls\n        \"\"\"\n        return _ext.ball_query(new_xyz, xyz, radius, nsample)\n\n    @staticmethod\n    def backward(ctx, a=None):\n        return None, None, None, None\n\n\nball_query = BallQuery.apply\n\n\nclass QueryAndGroup(nn.Module):\n    r\"\"\"\n    Groups with a ball query of radius\n\n    Parameters\n    ----------\n    radius : float32\n        Radius of ball\n    nsample : int32\n        Maximum number of features to gather in the ball\n    \"\"\"\n\n    def __init__(self, radius, 
nsample, use_xyz=True):\n        # type: (QueryAndGroup, float, int, bool) -> None\n        super(QueryAndGroup, self).__init__()\n        self.radius, self.nsample, self.use_xyz = radius, nsample, use_xyz\n\n    def forward(self, xyz, new_xyz, features=None):\n        # type: (QueryAndGroup, torch.Tensor, torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            centroids (B, npoint, 3)\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, 3 + C, npoint, nsample) tensor\n        \"\"\"\n\n        idx = ball_query(self.radius, self.nsample, xyz, new_xyz)\n        xyz_trans = xyz.transpose(1, 2).contiguous()\n        grouped_xyz = grouping_operation(xyz_trans, idx)  # (B, 3, npoint, nsample)\n        grouped_xyz -= new_xyz.transpose(1, 2).unsqueeze(-1)\n\n        if features is not None:\n            grouped_features = grouping_operation(features, idx)\n            if self.use_xyz:\n                new_features = torch.cat(\n                    [grouped_xyz, grouped_features], dim=1\n                )  # (B, C + 3, npoint, nsample)\n            else:\n                new_features = grouped_features\n        else:\n            assert (\n                self.use_xyz\n            ), \"Cannot have features=None while use_xyz=False!\"\n            new_features = grouped_xyz\n\n        return new_features\n\n\nclass GroupAll(nn.Module):\n    r\"\"\"\n    Groups all features\n\n    Parameters\n    ---------\n    \"\"\"\n\n    def __init__(self, use_xyz=True):\n        # type: (GroupAll, bool) -> None\n        super(GroupAll, self).__init__()\n        self.use_xyz = use_xyz\n\n    def forward(self, xyz, new_xyz, features=None):\n        # type: (GroupAll, torch.Tensor, 
torch.Tensor, torch.Tensor) -> torch.Tensor\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            Ignored\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, C + 3, 1, N) tensor\n        \"\"\"\n\n        grouped_xyz = xyz.transpose(1, 2).unsqueeze(2)\n        if features is not None:\n            grouped_features = features.unsqueeze(2)\n            if self.use_xyz:\n                new_features = torch.cat(\n                    [grouped_xyz, grouped_features], dim=1\n                )  # (B, 3 + C, 1, N)\n            else:\n                new_features = grouped_features\n        else:\n            new_features = grouped_xyz\n\n        return new_features\n"
  },
  {
    "path": "pointnet2_pyt/setup.py",
    "content": "'''\nDescription: \nAutor: Jiachen Sun\nDate: 2022-02-16 22:23:16\nLastEditors: Jiachen Sun\nLastEditTime: 2022-02-24 23:16:38\n'''\nfrom __future__ import division, absolute_import, with_statement, print_function\nfrom setuptools import setup, find_packages\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\nimport glob\n\ntry:\n    import builtins\nexcept ImportError:\n    import __builtin__ as builtins\n\nbuiltins.__POINTNET2_SETUP__ = True\nimport pointnet2\n\n# _ext_src_root = \"pointnet2/_ext-src\"\n# _ext_sources = glob.glob(\"{}/src/*.cpp\".format(_ext_src_root)) + glob.glob(\n#     \"{}/src/*.cu\".format(_ext_src_root)\n# )\n# _ext_headers = glob.glob(\"{}/include/*\".format(_ext_src_root))\n\nrequirements = [\"etw_pytorch_utils==1.1.1\", \"h5py\", \"enum34\", \"future\"]\n\nsetup(\n    name=\"pointnet2\",\n    version=pointnet2.__version__,\n    author=\"Erik Wijmans\",\n    packages=find_packages(),\n    install_requires=requirements,\n    # ext_modules=[\n    #     CUDAExtension(\n    #         name=\"pointnet2._ext\",\n    #         sources=_ext_sources,\n    #         extra_compile_args={\n    #             \"cxx\": [\"-O2\", \"-I{}\".format(\"{}/include\".format(_ext_src_root))],\n    #             \"nvcc\": [\"-O2\", \"-I{}\".format(\"{}/include\".format(_ext_src_root))],\n    #         },\n    #     )\n    # ],\n    # cmdclass={\"build_ext\": BuildExtension},\n)\n"
  },
  {
    "path": "pointnet2_pyt/tests/conftest.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nimport pytest\nimport torch\nimport numpy as np\n\npytest_plugins = [\"helpers_namespace\"]\n\n\ndef _test_loop(model, model_fn, inputs, labels):\n    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)\n\n    prev_loss = 1e10\n    for _ in range(5):\n        optimizer.zero_grad()\n        _, loss, _ = model_fn(model, (inputs, labels))\n        loss.backward()\n        optimizer.step()\n\n        assert loss.item() < prev_loss + 1.0, \"Loss spiked upwards\"\n\n        prev_loss = loss.item()\n\n\n@pytest.helpers.register\ndef cls_test_xyz(model, model_fn):\n    B, N = 4, 2048\n    inputs = torch.randn(B, N, 6).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B)).cuda()\n    model.cuda()\n\n    _test_loop(model, model_fn, inputs, labels)\n\n\n@pytest.helpers.register\ndef cls_test_no_xyz(model, model_fn):\n    B, N = 4, 2048\n    inputs = torch.randn(B, N, 3).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B)).cuda()\n    model.cuda()\n\n    _test_loop(model, model_fn, inputs, labels)\n\n\n@pytest.helpers.register\ndef semseg_test_xyz(model, model_fn):\n    B, N = 4, 2048\n    inputs = torch.randn(B, N, 6).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B * N)).view(B, N).cuda()\n    model.cuda()\n\n    _test_loop(model, model_fn, inputs, labels)\n\n\n@pytest.helpers.register\ndef semseg_test_no_xyz(model, model_fn):\n    B, N = 4, 2048\n    inputs = torch.randn(B, N, 3).cuda()\n    labels = torch.from_numpy(np.random.randint(0, 3, size=B * N)).view(B, N).cuda()\n    model.cuda()\n\n    _test_loop(model, model_fn, inputs, labels)\n"
  },
  {
    "path": "pointnet2_pyt/tests/test_cls_msg.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom pointnet2.models.pointnet2_msg_cls import model_fn_decorator, Pointnet2MSG\nimport torch.nn as nn\nimport pytest\n\n\ndef test_xyz():\n    model = Pointnet2MSG(3, input_channels=3)\n    pytest.helpers.cls_test_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n\n\ndef test_no_xyz():\n    model = Pointnet2MSG(3, input_channels=0)\n    pytest.helpers.cls_test_no_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n"
  },
  {
    "path": "pointnet2_pyt/tests/test_cls_ssg.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom pointnet2.models.pointnet2_ssg_cls import model_fn_decorator, Pointnet2SSG\nimport torch.nn as nn\nimport pytest\n\n\ndef test_xyz():\n    model = Pointnet2SSG(3, input_channels=3)\n    pytest.helpers.cls_test_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n\n\ndef test_no_xyz():\n    model = Pointnet2SSG(3, input_channels=0)\n    pytest.helpers.cls_test_no_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n"
  },
  {
    "path": "pointnet2_pyt/tests/test_semseg_msg.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom pointnet2.models.pointnet2_msg_sem import model_fn_decorator, Pointnet2MSG\nimport torch.nn as nn\nimport pytest\n\n\ndef test_xyz():\n    model = Pointnet2MSG(3, input_channels=3)\n    pytest.helpers.semseg_test_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n\n\ndef test_no_xyz():\n    model = Pointnet2MSG(3, input_channels=0)\n    pytest.helpers.semseg_test_no_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n"
  },
  {
    "path": "pointnet2_pyt/tests/test_semseg_ssg.py",
    "content": "from __future__ import (\n    division,\n    absolute_import,\n    with_statement,\n    print_function,\n    unicode_literals,\n)\nfrom pointnet2.models.pointnet2_ssg_sem import model_fn_decorator, Pointnet2SSG\nimport torch.nn as nn\nimport pytest\n\n\ndef test_xyz():\n    model = Pointnet2SSG(3, input_channels=3)\n    pytest.helpers.semseg_test_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n\n\ndef test_no_xyz():\n    model = Pointnet2SSG(3, input_channels=0)\n    pytest.helpers.semseg_test_no_xyz(model, model_fn_decorator(nn.CrossEntropyLoss()))\n"
  },
  {
    "path": "pointnet2_pyt/tox.ini",
    "content": "[tox]\nenvlist =\n    py27\n    py35\n    py36\n\n[testenv]\n# install pytest in the virtualenv where commands will be executed\ndeps =\n    numpy\n    torch>=1.0\n    git+git://github.com/erikwijmans/etw_pytorch_utils.git@v1.1.1#egg=etw_pytorch_utils\n    pytest\n    pytest-helpers-namespace\ncommands =\n    pytest\n"
  },
  {
    "path": "pointnet2_tf/LICENSE",
    "content": "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space.\n\nCopyright (c) 2017, Geometric Computation Group of Stanford University\n\nThe MIT License (MIT)\n\nCopyright (c) 2017 Charles R. Qi\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "pointnet2_tf/README.md",
    "content": "### PointNet++: *Deep Hierarchical Feature Learning on Point Sets in a Metric Space*\nCreated by <a href=\"http://charlesrqi.com\" target=\"_blank\">Charles R. Qi</a>, <a href=\"http://stanford.edu/~ericyi\">Li (Eric) Yi</a>, <a href=\"http://ai.stanford.edu/~haosu/\" target=\"_blank\">Hao Su</a>, <a href=\"http://geometry.stanford.edu/member/guibas/\" target=\"_blank\">Leonidas J. Guibas</a> from Stanford University.\n\n![prediction example](https://github.com/charlesq34/pointnet2/blob/master/doc/teaser.jpg)\n\n### Citation\nIf you find our work useful in your research, please consider citing:\n\n        @article{qi2017pointnetplusplus,\n          title={PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},\n          author={Qi, Charles R and Yi, Li and Su, Hao and Guibas, Leonidas J},\n          journal={arXiv preprint arXiv:1706.02413},\n          year={2017}\n        }\n\n### Introduction\nThis work is based on our NIPS'17 paper. You can find the arXiv version of the paper <a href=\"https://arxiv.org/pdf/1706.02413.pdf\">here</a> or check the <a href=\"http://stanford.edu/~rqi/pointnet2\">project webpage</a> for a quick overview. PointNet++ is a follow-up project that builds on and extends <a href=\"https://github.com/charlesq34/pointnet\">PointNet</a>. It is version 2.0 of the PointNet architecture.\n\nPointNet (the v1 model) either transforms features of *individual points* independently or processes global features of the *entire point set*. However, in many cases there are well-defined distance metrics, such as Euclidean distance for 3D point clouds collected by 3D sensors or geodesic distance for manifolds like isometric shape surfaces. In PointNet++ we want to respect the *spatial localities* of those point sets. PointNet++ learns hierarchical features with increasing scales of context, just as convolutional neural networks do. 
Besides, we also observe one challenge that is not present in convnets (with images) -- non-uniform densities in natural point clouds. To deal with those non-uniform densities, we further propose special layers that are able to intelligently aggregate information from different scales.\n\nIn this repository we release code and data for our PointNet++ classification and segmentation networks as well as a few utility scripts for training, testing and data processing and visualization.\n\n### Installation\n\nInstall <a href=\"https://www.tensorflow.org/install/\">TensorFlow</a>. The code is tested under TF1.2 GPU version and Python 2.7 (version 3 should also work) on Ubuntu 14.04. There are also some dependencies for a few Python libraries for data processing and visualizations like `cv2`, `h5py` etc. It's highly recommended that you have access to GPUs.\n\n#### Compile Customized TF Operators\nThe TF operators are included under `tf_ops`, you need to compile them (check `tf_xxx_compile.sh` under each ops subfolder) first. Update `nvcc` and `python` path if necessary. The code is tested under TF1.2.0. 
If you are using an earlier version, you may need to remove the `-D_GLIBCXX_USE_CXX11_ABI=0` flag from the g++ command in order to compile correctly.\n\nTo compile the operators with TF version >=1.4, you need to modify the compile scripts slightly.\n\nFirst, find the TensorFlow include and library paths.\n\n        TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')\n        TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')\n        \nThen, add the flags `-I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework` to the `g++` commands.\n\n### Usage\n\n#### Shape Classification\n\nTo train a PointNet++ model to classify ModelNet40 shapes (using point clouds with XYZ coordinates):\n\n        python train.py\n\nTo see all optional arguments for training:\n\n        python train.py -h\n\nIf you have multiple GPUs on your machine, you can also run the multi-GPU version of training (our implementation is similar to the TensorFlow <a href=\"https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10\">cifar10 tutorial</a>):\n\n        CUDA_VISIBLE_DEVICES=0,1 python train_multi_gpu.py --num_gpus 2\n\nAfter training, to evaluate the classification accuracies (with optional multi-angle voting):\n\n        python evaluate.py --num_votes 12\n\n<i>Side Note:</i> For the XYZ+normal experiment reported in our paper: (1) 5000 points are used, (2) a further random data dropout augmentation is used during training (see the commented line after `augment_batch_data` in `train.py`), and (3) the model architecture is updated such that `nsample=128` in the first two set abstraction levels, which is suited to the larger point density of 5000-point samplings.\n\nTo use normal features for classification: you can get our sampled point clouds of ModelNet40 (XYZ and normal from mesh, 10k points per shape) <a href=\"https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip\">here (1.6GB)</a>. 
Move the uncompressed data folder to `data/modelnet40_normal_resampled`\n\n#### Object Part Segmentation\n\nTo train a model to segment object parts for ShapeNet models:\n\n        cd part_seg\n        python train.py\n\nPreprocessed ShapeNetPart dataset (XYZ, normal and part labels) can be found <a href=\"https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip\">here (674MB)</a>. Move the uncompressed data folder to `data/shapenetcore_partanno_segmentation_benchmark_v0_normal`\n\n#### Semantic Scene Parsing\n\nSee `scannet/README` and `scannet/train.py` for details.\n\n#### Visualization Tools\nWe have provided a handy point cloud visualization tool under `utils`. Run `sh compile_render_balls_so.sh` to compile it and then you can try the demo with `python show3d_balls.py` The original code is from <a href=\"http://github.com/fanhqme/PointSetGeneration\">here</a>.\n\n#### Prepare Your Own Data\nYou can refer to <a href=\"https://github.com/charlesq34/3dmodel_feature/blob/master/io/write_hdf5.py\">here</a> on how to prepare your own HDF5 files for either classification or segmentation. Or you can refer to `modelnet_dataset.py` on how to read raw data files and prepare mini-batches from them. A more advanced way is to use TensorFlow's dataset APIs, for which you can find more documentations <a href=\"https://www.tensorflow.org/programmers_guide/datasets\">here</a>.\n\n### License\nOur code is released under MIT License (see LICENSE file for details).\n\n### Updates\n* 02/23/2018: Added support for multi-gpu training for the classification task.\n* 02/23/2018: Adopted a new way for data loading. 
No longer require manual data downloading to train a classification network.\n* 02/06/2018: Added sample training code for ScanNet semantic segmentation.\n\n### Related Projects\n\n* <a href=\"http://stanford.edu/~rqi/pointnet\" target=\"_blank\">PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation</a> by Qi et al. (CVPR 2017 Oral Presentation). Code and data released in <a href=\"https://github.com/charlesq34/pointnet\">GitHub</a>.\n* <a href=\"https://arxiv.org/abs/1711.08488\" target=\"_blank\">Frustum PointNets for 3D Object Detection from RGB-D Data</a> by Qi et al. (CVPR 2018) A novel framework for 3D object detection with RGB-D data. Based on 2D boxes from a 2D object detector on RGB images, we extrude the depth maps in 2D boxes to point clouds in 3D space and then realize instance segmentation and 3D bounding box estimation using PointNet/PointNet++. The method proposed has achieved first place on KITTI 3D object detection benchmark on all categories (last checked on 11/30/2017). Code and data release TBD.\n"
  },
  {
    "path": "pointnet2_tf/data/README.md",
    "content": "#### Point Cloud Data\nYou can get our sampled point clouds of ModelNet40 (XYZ and normal from mesh, 10k points per shape) at this <a href=\"https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip\">link</a>. The ShapeNetPart dataset (XYZ, normal and part labels) can be found <a href=\"https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip\">here</a>.\n\nUncompress the downloaded data in this directory.\n"
  },
  {
    "path": "pointnet2_tf/evaluate.py",
    "content": "'''\n    Evaluate classification performance with optional voting.\n    Uses the H5 dataset by default. If --normal is set, uses the normal-resampled dataset instead.\n'''\nimport tensorflow as tf\nimport numpy as np\nimport argparse\nimport socket\nimport importlib\nimport time\nimport os\nimport scipy.misc\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = BASE_DIR\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport modelnet_dataset\nimport modelnet_h5_dataset\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='pointnet2_cls_ssg', help='Model name. [default: pointnet2_cls_ssg]')\nparser.add_argument('--batch_size', type=int, default=16, help='Batch Size during training [default: 16]')\nparser.add_argument('--num_point', type=int, default=1024, help='Point Number [256/512/1024/2048] [default: 1024]')\nparser.add_argument('--model_path', default='log/model.ckpt', help='model checkpoint file path [default: log/model.ckpt]')\nparser.add_argument('--dump_dir', default='dump', help='dump folder path [dump]')\nparser.add_argument('--normal', action='store_true', help='Whether to use normal information')\nparser.add_argument('--num_votes', type=int, default=1, help='Aggregate classification scores from multiple rotations [default: 1]')\nFLAGS = parser.parse_args()\n\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMODEL_PATH = FLAGS.model_path\nGPU_INDEX = FLAGS.gpu\nMODEL = importlib.import_module(FLAGS.model) # import network module\nDUMP_DIR = FLAGS.dump_dir\nif not os.path.exists(DUMP_DIR): os.mkdir(DUMP_DIR)\nLOG_FOUT = open(os.path.join(DUMP_DIR, 'log_evaluate.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nNUM_CLASSES = 40\nSHAPE_NAMES = [line.rstrip() for line in \\\n    open(os.path.join(ROOT_DIR, 
'data/modelnet40_ply_hdf5_2048/shape_names.txt'))] \n\nHOSTNAME = socket.gethostname()\n\n# Shapenet official train/test split\nif FLAGS.normal:\n    assert(NUM_POINT<=10000)\n    DATA_PATH = os.path.join(ROOT_DIR, 'data/modelnet40_normal_resampled')\n    TRAIN_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='train', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\n    TEST_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='test', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\nelse:\n    assert(NUM_POINT<=2048)\n    TRAIN_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/train_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=True)\n    TEST_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/test_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=False)\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef evaluate(num_votes):\n    is_training = False\n     \n    with tf.device('/gpu:'+str(GPU_INDEX)):\n        pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n        is_training_pl = tf.placeholder(tf.bool, shape=())\n\n        # simple model\n        pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl)\n        MODEL.get_loss(pred, labels_pl, end_points)\n        losses = tf.get_collection('losses')\n        total_loss = tf.add_n(losses, name='total_loss')\n        \n        # Add ops to save and restore all the variables.\n        saver = tf.train.Saver()\n        \n    # Create a session\n    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    config.allow_soft_placement = True\n    config.log_device_placement = False\n    sess = tf.Session(config=config)\n\n    # Restore variables from disk.\n    saver.restore(sess, MODEL_PATH)\n    
log_string(\"Model restored.\")\n\n    ops = {'pointclouds_pl': pointclouds_pl,\n           'labels_pl': labels_pl,\n           'is_training_pl': is_training_pl,\n           'pred': pred,\n           'loss': total_loss}\n\n    eval_one_epoch(sess, ops, num_votes)\n\ndef eval_one_epoch(sess, ops, num_votes=1, topk=1):\n    is_training = False\n\n    # Make sure batch data is of same size\n    cur_batch_data = np.zeros((BATCH_SIZE,NUM_POINT,TEST_DATASET.num_channel()))\n    cur_batch_label = np.zeros((BATCH_SIZE), dtype=np.int32)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    batch_idx = 0\n    shape_ious = []\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    while TEST_DATASET.has_next_batch():\n        batch_data, batch_label = TEST_DATASET.next_batch(augment=False)\n        bsize = batch_data.shape[0]\n        print('Batch: %03d, batch size: %d'%(batch_idx, bsize))\n        # for the last batch in the epoch, the bsize:end are from last batch\n        cur_batch_data[0:bsize,...] 
= batch_data\n        cur_batch_label[0:bsize] = batch_label\n\n        batch_pred_sum = np.zeros((BATCH_SIZE, NUM_CLASSES)) # score for classes\n        for vote_idx in range(num_votes):\n            # Shuffle point order to achieve different farthest samplings\n            shuffled_indices = np.arange(NUM_POINT)\n            np.random.shuffle(shuffled_indices)\n            if FLAGS.normal:\n                rotated_data = provider.rotate_point_cloud_by_angle_with_normal(cur_batch_data[:, shuffled_indices, :],\n                    vote_idx/float(num_votes) * np.pi * 2)\n            else:\n                rotated_data = provider.rotate_point_cloud_by_angle(cur_batch_data[:, shuffled_indices, :],\n                    vote_idx/float(num_votes) * np.pi * 2)\n            feed_dict = {ops['pointclouds_pl']: rotated_data,\n                         ops['labels_pl']: cur_batch_label,\n                         ops['is_training_pl']: is_training}\n            loss_val, pred_val = sess.run([ops['loss'], ops['pred']], feed_dict=feed_dict)\n            batch_pred_sum += pred_val\n        pred_val = np.argmax(batch_pred_sum, 1)\n        correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n        total_correct += correct\n        total_seen += bsize\n        loss_sum += loss_val\n        batch_idx += 1\n        for i in range(bsize):\n            l = batch_label[i]\n            total_seen_class[l] += 1\n            total_correct_class[l] += (pred_val[i] == l)\n    \n    log_string('eval mean loss: %f' % (loss_sum / float(batch_idx)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n\n    class_accuracies = np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float)\n    for i, name in enumerate(SHAPE_NAMES):\n        log_string('%10s:\\t%0.3f' % (name, class_accuracies[i]))\n\n\nif __name__=='__main__':\n    
with tf.Graph().as_default():\n        evaluate(num_votes=FLAGS.num_votes)\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/modelnet_dataset.py",
    "content": "'''\n    ModelNet dataset. Support ModelNet40, ModelNet10, XYZ and normal channels. Up to 10000 points.\n'''\n\nimport os\nimport os.path\nimport json\nimport numpy as np\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = BASE_DIR\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\n\ndef pc_normalize(pc):\n    l = pc.shape[0]\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n    pc = pc / m\n    return pc\n\nclass ModelNetDataset():\n    def __init__(self, root, batch_size = 32, npoints = 1024, split='train', normalize=True, normal_channel=False, modelnet10=False, cache_size=15000, shuffle=None):\n        self.root = root\n        self.batch_size = batch_size\n        self.npoints = npoints\n        self.normalize = normalize\n        if modelnet10:\n            self.catfile = os.path.join(self.root, 'modelnet10_shape_names.txt')\n        else:\n            self.catfile = os.path.join(self.root, 'shape_names.txt')\n        self.cat = [line.rstrip() for line in open(self.catfile)]\n        self.classes = dict(zip(self.cat, range(len(self.cat))))  \n        self.normal_channel = normal_channel\n        \n        shape_ids = {}\n        if modelnet10:\n            shape_ids['train'] = [line.rstrip() for line in open(os.path.join(self.root, 'modelnet10_train.txt'))] \n            shape_ids['test']= [line.rstrip() for line in open(os.path.join(self.root, 'modelnet10_test.txt'))]\n        else:\n            shape_ids['train'] = [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_train.txt'))] \n            shape_ids['test']= [line.rstrip() for line in open(os.path.join(self.root, 'modelnet40_test.txt'))]\n        assert(split=='train' or split=='test')\n        shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids[split]]\n        # list of (shape_name, shape_txt_file_path) tuple\n        self.datapath = [(shape_names[i], 
os.path.join(self.root, shape_names[i], shape_ids[split][i])+'.txt') for i in range(len(shape_ids[split]))]\n\n        self.cache_size = cache_size # how many data points to cache in memory\n        self.cache = {} # from index to (point_set, cls) tuple\n\n        if shuffle is None:\n            if split == 'train': self.shuffle = True\n            else: self.shuffle = False\n        else:\n            self.shuffle = shuffle\n\n        self.reset()\n\n    def _augment_batch_data(self, batch_data):\n        if self.normal_channel:\n            rotated_data = provider.rotate_point_cloud_with_normal(batch_data)\n            rotated_data = provider.rotate_perturbation_point_cloud_with_normal(rotated_data)\n        else:\n            rotated_data = provider.rotate_point_cloud(batch_data)\n            rotated_data = provider.rotate_perturbation_point_cloud(rotated_data)\n    \n        jittered_data = provider.random_scale_point_cloud(rotated_data[:,:,0:3])\n        jittered_data = provider.shift_point_cloud(jittered_data)\n        jittered_data = provider.jitter_point_cloud(jittered_data)\n        rotated_data[:,:,0:3] = jittered_data\n        return provider.shuffle_points(rotated_data)\n\n\n    def _get_item(self, index): \n        if index in self.cache:\n            point_set, cls = self.cache[index]\n        else:\n            fn = self.datapath[index]\n            cls = self.classes[self.datapath[index][0]]\n            cls = np.array([cls]).astype(np.int32)\n            point_set = np.loadtxt(fn[1],delimiter=',').astype(np.float32)\n            # Take the first npoints\n            point_set = point_set[0:self.npoints,:]\n            if self.normalize:\n                point_set[:,0:3] = pc_normalize(point_set[:,0:3])\n            if not self.normal_channel:\n                point_set = point_set[:,0:3]\n            if len(self.cache) < self.cache_size:\n                self.cache[index] = (point_set, cls)\n        return point_set, cls\n        \n    def 
__getitem__(self, index):\n        return self._get_item(index)\n\n    def __len__(self):\n        return len(self.datapath)\n\n    def num_channel(self):\n        if self.normal_channel:\n            return 6\n        else:\n            return 3\n\n    def reset(self):\n        self.idxs = np.arange(0, len(self.datapath))\n        if self.shuffle:\n            np.random.shuffle(self.idxs)\n        self.num_batches = (len(self.datapath)+self.batch_size-1) // self.batch_size\n        self.batch_idx = 0\n\n    def has_next_batch(self):\n        return self.batch_idx < self.num_batches\n\n    def next_batch(self, augment=False):\n        ''' returned dimension may be smaller than self.batch_size '''\n        start_idx = self.batch_idx * self.batch_size\n        end_idx = min((self.batch_idx+1) * self.batch_size, len(self.datapath))\n        bsize = end_idx - start_idx\n        batch_data = np.zeros((bsize, self.npoints, self.num_channel()))\n        batch_label = np.zeros((bsize), dtype=np.int32)\n        for i in range(bsize):\n            ps,cls = self._get_item(self.idxs[i+start_idx])\n            batch_data[i] = ps\n            batch_label[i] = cls\n        self.batch_idx += 1\n        if augment: batch_data = self._augment_batch_data(batch_data)\n        return batch_data, batch_label\n    \nif __name__ == '__main__':\n    d = ModelNetDataset(root = '../data/modelnet40_normal_resampled', split='test')\n    print(d.shuffle)\n    print(len(d))\n    import time\n    tic = time.time()\n    for i in range(10):\n        ps, cls = d[i]\n    print(time.time() - tic)\n    print(ps.shape, type(ps), cls)\n\n    print(d.has_next_batch())\n    ps_batch, cls_batch = d.next_batch(True)\n    print(ps_batch.shape)\n    print(cls_batch.shape)\n"
  },
  {
    "path": "pointnet2_tf/modelnet_h5_dataset.py",
    "content": "'''\n    ModelNet dataset. Support ModelNet40, XYZ channels. Up to 2048 points.\n    Faster IO than ModelNetDataset in the first epoch.\n'''\n\nimport os\nimport sys\nimport numpy as np\nimport h5py\nfrom .utils import provider\n\n# updated datapath\nDATA_DIR = 'data'\nif not os.path.exists(DATA_DIR):\n    os.mkdir(DATA_DIR)\nif not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):\n    www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'\n    zipfile = os.path.basename(www)\n    os.system('wget %s; unzip %s' % (www, zipfile))\n    os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))\n    os.system('rm %s' % (zipfile))\n\n\ndef shuffle_data(data, labels):\n    \"\"\" Shuffle data and labels.\n        Input:\n          data: B,N,... numpy array\n          label: B,... numpy array\n        Return:\n          shuffled data, label and shuffle indices\n    \"\"\"\n    idx = np.arange(len(labels))\n    np.random.shuffle(idx)\n    return data[idx, ...], labels[idx], idx\n\ndef getDataFiles(list_filename):\n    return [line.rstrip() for line in open(list_filename)]\n\ndef load_h5(h5_filename):\n    f = h5py.File(h5_filename, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    return (data, label)\n\ndef loadDataFile(filename):\n    return load_h5(filename)\n\n\nclass ModelNetH5Dataset(object):\n    def __init__(self, list_filename, batch_size = 32, npoints = 1024, shuffle=True):\n        self.list_filename = list_filename\n        self.batch_size = batch_size\n        self.npoints = npoints\n        self.shuffle = shuffle\n        self.h5_files = getDataFiles(self.list_filename)\n        self.reset()\n\n    def reset(self):\n        ''' reset order of h5 files '''\n        self.file_idxs = np.arange(0, len(self.h5_files))\n        if self.shuffle: np.random.shuffle(self.file_idxs)\n        self.current_data = None\n        self.current_label = None\n        self.current_file_idx = 0\n        self.batch_idx = 
0\n\n    def _augment_batch_data(self, batch_data):\n        rotated_data = provider.rotate_point_cloud(batch_data)\n        rotated_data = provider.rotate_perturbation_point_cloud(rotated_data)\n        jittered_data = provider.random_scale_point_cloud(rotated_data[:,:,0:3])\n        jittered_data = provider.shift_point_cloud(jittered_data)\n        jittered_data = provider.jitter_point_cloud(jittered_data)\n        rotated_data[:,:,0:3] = jittered_data\n        return provider.shuffle_points(rotated_data)\n\n\n    def _get_data_filename(self):\n        return self.h5_files[self.file_idxs[self.current_file_idx]]\n\n    def _load_data_file(self, filename):\n        self.current_data,self.current_label = load_h5(filename)\n        self.current_label = np.squeeze(self.current_label)\n        self.batch_idx = 0\n        if self.shuffle:\n            self.current_data, self.current_label, _ = shuffle_data(self.current_data,self.current_label)\n\n    def _has_next_batch_in_file(self):\n        return self.batch_idx*self.batch_size < self.current_data.shape[0]\n\n    def num_channel(self):\n        return 3\n\n    def has_next_batch(self):\n        # TODO: add backend thread to load data\n        if (self.current_data is None) or (not self._has_next_batch_in_file()):\n            if self.current_file_idx >= len(self.h5_files):\n                return False\n            self._load_data_file(self._get_data_filename())\n            self.batch_idx = 0\n            self.current_file_idx += 1\n        return self._has_next_batch_in_file()\n\n    def next_batch(self, augment=False):\n        ''' returned dimension may be smaller than self.batch_size '''\n        start_idx = self.batch_idx * self.batch_size\n        end_idx = min((self.batch_idx+1) * self.batch_size, self.current_data.shape[0])\n        bsize = end_idx - start_idx\n        batch_label = np.zeros((bsize), dtype=np.int32)\n        data_batch = self.current_data[start_idx:end_idx, 0:self.npoints, :].copy()\n        
label_batch = self.current_label[start_idx:end_idx].copy()\n        self.batch_idx += 1\n        if augment: data_batch = self._augment_batch_data(data_batch)\n        return data_batch, label_batch\n\nif __name__=='__main__':\n    d = ModelNetH5Dataset('data/modelnet40_ply_hdf5_2048/train_files.txt')\n    print(d.shuffle)\n    print(d.has_next_batch())\n    ps_batch, cls_batch = d.next_batch(True)\n    print(ps_batch.shape)\n    print(cls_batch.shape)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet2_cls_msg.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\nfrom pointnet_util import pointnet_sa_module, pointnet_sa_module_msg\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 3))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size))\n    return pointclouds_pl, labels_pl\n\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n    \"\"\" Classification PointNet, input is BxNx3, output Bx40 \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n\n    l0_xyz = point_cloud\n    l0_points = None\n\n    # Set abstraction layers\n    l1_xyz, l1_points = pointnet_sa_module_msg(l0_xyz, l0_points, 512, [0.1,0.2,0.4], [16,32,128], [[32,32,64], [64,64,128], [64,96,128]], is_training, bn_decay, scope='layer1', use_nchw=True)\n    l2_xyz, l2_points = pointnet_sa_module_msg(l1_xyz, l1_points, 128, [0.2,0.4,0.8], [32,64,128], [[64,64,128], [128,128,256], [128,128,256]], is_training, bn_decay, scope='layer2')\n    l3_xyz, l3_points, _ = pointnet_sa_module(l2_xyz, l2_points, npoint=None, radius=None, nsample=None, mlp=[256,512,1024], mlp2=None, group_all=True, is_training=is_training, bn_decay=bn_decay, scope='layer3')\n\n    # Fully connected layers\n    net = tf.reshape(l3_points, [batch_size, -1])\n    net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training, scope='fc1', bn_decay=bn_decay)\n    net = tf_util.dropout(net, keep_prob=0.4, is_training=is_training, scope='dp1')\n    net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training, scope='fc2', bn_decay=bn_decay)\n    net = tf_util.dropout(net, keep_prob=0.4, is_training=is_training, scope='dp2')\n    net = tf_util.fully_connected(net, 40, activation_fn=None, 
scope='fc3')\n\n    return net, end_points\n\n\ndef get_loss(pred, label, end_points):\n    \"\"\" pred: B*NUM_CLASSES,\n        label: B, \"\"\"\n    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n    classify_loss = tf.reduce_mean(loss)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,1024,3))\n        net, _ = get_model(inputs, tf.constant(True))\n        print(net)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet2_cls_ssg.py",
    "content": "\"\"\"\n    PointNet++ Model for point clouds classification\n\"\"\"\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\nfrom pointnet_util import pointnet_sa_module\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 3))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size))\n    return pointclouds_pl, labels_pl\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n    \"\"\" Classification PointNet, input is BxNx3, output Bx40 \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n    l0_xyz = point_cloud\n    l0_points = None\n    end_points['l0_xyz'] = l0_xyz\n\n    # Set abstraction layers\n    # Note: When using NCHW for layer 2, we see increased GPU memory usage (in TF1.4).\n    # So we only use NCHW for layer 1 until this issue can be resolved.\n    l1_xyz, l1_points, l1_indices = pointnet_sa_module(l0_xyz, l0_points, npoint=512, radius=0.2, nsample=32, mlp=[64,64,128], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer1', use_nchw=True)\n    l2_xyz, l2_points, l2_indices = pointnet_sa_module(l1_xyz, l1_points, npoint=128, radius=0.4, nsample=64, mlp=[128,128,256], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer2')\n    l3_xyz, l3_points, l3_indices = pointnet_sa_module(l2_xyz, l2_points, npoint=None, radius=None, nsample=None, mlp=[256,512,1024], mlp2=None, group_all=True, is_training=is_training, bn_decay=bn_decay, scope='layer3')\n\n    # Fully connected layers\n    net = tf.reshape(l3_points, [batch_size, -1])\n    net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training, scope='fc1', bn_decay=bn_decay)\n    net = tf_util.dropout(net, 
keep_prob=0.5, is_training=is_training, scope='dp1')\n    net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training, scope='fc2', bn_decay=bn_decay)\n    net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training, scope='dp2')\n    net = tf_util.fully_connected(net, 40, activation_fn=None, scope='fc3')\n\n    return net, end_points\n\n\ndef get_loss(pred, label, end_points):\n    \"\"\" pred: B*NUM_CLASSES,\n        label: B, \"\"\"\n    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n    classify_loss = tf.reduce_mean(loss)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,1024,3))\n        output, _ = get_model(inputs, tf.constant(True))\n        print(output)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet2_part_seg.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\nfrom pointnet_util import pointnet_sa_module, pointnet_fp_module\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 6))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size, num_point))\n    return pointclouds_pl, labels_pl\n\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n    \"\"\" Part segmentation PointNet, input is BxNx6 (XYZ NormalX NormalY NormalZ), output Bx50 \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n    l0_xyz = tf.slice(point_cloud, [0,0,0], [-1,-1,3])\n    l0_points = tf.slice(point_cloud, [0,0,3], [-1,-1,3])\n\n    # Set Abstraction layers\n    l1_xyz, l1_points, l1_indices = pointnet_sa_module(l0_xyz, l0_points, npoint=512, radius=0.2, nsample=64, mlp=[64,64,128], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer1')\n    l2_xyz, l2_points, l2_indices = pointnet_sa_module(l1_xyz, l1_points, npoint=128, radius=0.4, nsample=64, mlp=[128,128,256], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer2')\n    l3_xyz, l3_points, l3_indices = pointnet_sa_module(l2_xyz, l2_points, npoint=None, radius=None, nsample=None, mlp=[256,512,1024], mlp2=None, group_all=True, is_training=is_training, bn_decay=bn_decay, scope='layer3')\n\n    # Feature Propagation layers\n    l2_points = pointnet_fp_module(l2_xyz, l3_xyz, l2_points, l3_points, [256,256], is_training, bn_decay, scope='fa_layer1')\n    l1_points = pointnet_fp_module(l1_xyz, l2_xyz, l1_points, l2_points, [256,128], is_training, bn_decay, scope='fa_layer2')\n    l0_points = pointnet_fp_module(l0_xyz, l1_xyz, tf.concat([l0_xyz,l0_points],axis=-1), 
l1_points, [128,128,128], is_training, bn_decay, scope='fa_layer3')\n\n    # FC layers\n    net = tf_util.conv1d(l0_points, 128, 1, padding='VALID', bn=True, is_training=is_training, scope='fc1', bn_decay=bn_decay)\n    end_points['feats'] = net \n    net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training, scope='dp1')\n    net = tf_util.conv1d(net, 50, 1, padding='VALID', activation_fn=None, scope='fc2')\n\n    return net, end_points\n\n\ndef get_loss(pred, label):\n    \"\"\" pred: BxNxC,\n        label: BxN, \"\"\"\n    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n    classify_loss = tf.reduce_mean(loss)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,2048,6))\n        net, _ = get_model(inputs, tf.constant(True))\n        print(net)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet2_part_seg_msg_one_hot.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\nfrom pointnet_util import pointnet_sa_module, pointnet_sa_module_msg, pointnet_fp_module\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 6))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size, num_point))\n    cls_labels_pl = tf.placeholder(tf.int32, shape=(batch_size))\n    return pointclouds_pl, labels_pl, cls_labels_pl\n\nNUM_CATEGORIES = 16\n\ndef get_model(point_cloud, cls_label, is_training, bn_decay=None):\n    \"\"\" Classification PointNet, input is BxNx3, output Bx40 \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n    l0_xyz = tf.slice(point_cloud, [0,0,0], [-1,-1,3])\n    l0_points = tf.slice(point_cloud, [0,0,3], [-1,-1,3])\n\n    # Set abstraction layers\n    l1_xyz, l1_points = pointnet_sa_module_msg(l0_xyz, l0_points, 512, [0.1,0.2,0.4], [32,64,128], [[32,32,64], [64,64,128], [64,96,128]], is_training, bn_decay, scope='layer1')\n    l2_xyz, l2_points = pointnet_sa_module_msg(l1_xyz, l1_points, 128, [0.4,0.8], [64,128], [[128,128,256],[128,196,256]], is_training, bn_decay, scope='layer2')\n    l3_xyz, l3_points, l3_indices = pointnet_sa_module(l2_xyz, l2_points, npoint=None, radius=None, nsample=None, mlp=[256,512,1024], mlp2=None, group_all=True, is_training=is_training, bn_decay=bn_decay, scope='layer3')\n\n    # Feature propagation layers\n    l2_points = pointnet_fp_module(l2_xyz, l3_xyz, l2_points, l3_points, [256,256], is_training, bn_decay, scope='fa_layer1')\n    l1_points = pointnet_fp_module(l1_xyz, l2_xyz, l1_points, l2_points, [256,128], is_training, bn_decay, scope='fa_layer2')\n\n    cls_label_one_hot = tf.one_hot(cls_label, depth=NUM_CATEGORIES, 
on_value=1.0, off_value=0.0)\n    cls_label_one_hot = tf.reshape(cls_label_one_hot, [batch_size, 1, NUM_CATEGORIES])\n    cls_label_one_hot = tf.tile(cls_label_one_hot, [1,num_point,1])\n    l0_points = pointnet_fp_module(l0_xyz, l1_xyz, tf.concat([cls_label_one_hot, l0_xyz, l0_points],axis=-1), l1_points, [128,128], is_training, bn_decay, scope='fp_layer3')\n\n    # FC layers\n    net = tf_util.conv1d(l0_points, 128, 1, padding='VALID', bn=True, is_training=is_training, scope='fc1', bn_decay=bn_decay)\n    end_points['feats'] = net \n    net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training, scope='dp1')\n    net = tf_util.conv1d(net, 50, 1, padding='VALID', activation_fn=None, scope='fc2')\n\n    return net, end_points\n\n\ndef get_loss(pred, label):\n    \"\"\" pred: BxNxC,\n        label: BxN, \"\"\"\n    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n    classify_loss = tf.reduce_mean(loss)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\n\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,2048,6))\n        cls_labels = tf.zeros((32),dtype=tf.int32)\n        output, ep = get_model(inputs, cls_labels, tf.constant(True))\n        print(output)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet2_sem_seg.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\nfrom pointnet_util import pointnet_sa_module, pointnet_fp_module\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 3))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size, num_point))\n    smpws_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point))\n    return pointclouds_pl, labels_pl, smpws_pl\n\n\ndef get_model(point_cloud, is_training, num_class, bn_decay=None):\n    \"\"\" Semantic segmentation PointNet, input is BxNx3, output Bxnum_class \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n    l0_xyz = point_cloud\n    l0_points = None\n    end_points['l0_xyz'] = l0_xyz\n\n    # Layer 1\n    l1_xyz, l1_points, l1_indices = pointnet_sa_module(l0_xyz, l0_points, npoint=1024, radius=0.1, nsample=32, mlp=[32,32,64], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer1')\n    l2_xyz, l2_points, l2_indices = pointnet_sa_module(l1_xyz, l1_points, npoint=256, radius=0.2, nsample=32, mlp=[64,64,128], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer2')\n    l3_xyz, l3_points, l3_indices = pointnet_sa_module(l2_xyz, l2_points, npoint=64, radius=0.4, nsample=32, mlp=[128,128,256], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer3')\n    l4_xyz, l4_points, l4_indices = pointnet_sa_module(l3_xyz, l3_points, npoint=16, radius=0.8, nsample=32, mlp=[256,256,512], mlp2=None, group_all=False, is_training=is_training, bn_decay=bn_decay, scope='layer4')\n\n    # Feature Propagation layers\n    l3_points = pointnet_fp_module(l3_xyz, l4_xyz, l3_points, l4_points, [256,256], is_training, 
bn_decay, scope='fa_layer1')\n    l2_points = pointnet_fp_module(l2_xyz, l3_xyz, l2_points, l3_points, [256,256], is_training, bn_decay, scope='fa_layer2')\n    l1_points = pointnet_fp_module(l1_xyz, l2_xyz, l1_points, l2_points, [256,128], is_training, bn_decay, scope='fa_layer3')\n    l0_points = pointnet_fp_module(l0_xyz, l1_xyz, l0_points, l1_points, [128,128,128], is_training, bn_decay, scope='fa_layer4')\n\n    # FC layers\n    net = tf_util.conv1d(l0_points, 128, 1, padding='VALID', bn=True, is_training=is_training, scope='fc1', bn_decay=bn_decay)\n    end_points['feats'] = net \n    net = tf_util.dropout(net, keep_prob=0.5, is_training=is_training, scope='dp1')\n    net = tf_util.conv1d(net, num_class, 1, padding='VALID', activation_fn=None, scope='fc2')\n\n    return net, end_points\n\n\ndef get_loss(pred, label, smpw):\n    \"\"\" pred: BxNxC,\n        label: BxN, \n\tsmpw: BxN \"\"\"\n    classify_loss = tf.losses.sparse_softmax_cross_entropy(labels=label, logits=pred, weights=smpw)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,2048,3))\n        net, _ = get_model(inputs, tf.constant(True), 10)\n        print(net)\n"
  },
  {
    "path": "pointnet2_tf/models/pointnet_cls_basic.py",
    "content": "'''\n    PointNet version 1 Model\n    Reference: https://github.com/charlesq34/pointnet\n'''\nimport tensorflow as tf\nimport numpy as np\nimport math\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../utils'))\nimport tf_util\n\ndef placeholder_inputs(batch_size, num_point):\n    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 3))\n    labels_pl = tf.placeholder(tf.int32, shape=(batch_size))\n    return pointclouds_pl, labels_pl\n\n\ndef get_model(point_cloud, is_training, bn_decay=None):\n    \"\"\" Classification PointNet, input is BxNx3, output Bx40 \"\"\"\n    batch_size = point_cloud.get_shape()[0].value\n    num_point = point_cloud.get_shape()[1].value\n    end_points = {}\n    input_image = tf.expand_dims(point_cloud, -1)\n    \n    # Point functions (MLP implemented as conv2d)\n    net = tf_util.conv2d(input_image, 64, [1,3],\n                         padding='VALID', stride=[1,1],\n                         bn=True, is_training=is_training,\n                         scope='conv1', bn_decay=bn_decay)\n    net = tf_util.conv2d(net, 64, [1,1],\n                         padding='VALID', stride=[1,1],\n                         bn=True, is_training=is_training,\n                         scope='conv2', bn_decay=bn_decay)\n    net = tf_util.conv2d(net, 64, [1,1],\n                         padding='VALID', stride=[1,1],\n                         bn=True, is_training=is_training,\n                         scope='conv3', bn_decay=bn_decay)\n    net = tf_util.conv2d(net, 128, [1,1],\n                         padding='VALID', stride=[1,1],\n                         bn=True, is_training=is_training,\n                         scope='conv4', bn_decay=bn_decay)\n    net = tf_util.conv2d(net, 1024, [1,1],\n                         padding='VALID', stride=[1,1],\n                         bn=True, is_training=is_training,\n               
          scope='conv5', bn_decay=bn_decay)\n\n    # Symmetric function: max pooling\n    net = tf_util.max_pool2d(net, [num_point,1],\n                             padding='VALID', scope='maxpool')\n    \n    # MLP on global point cloud vector\n    net = tf.reshape(net, [batch_size, -1])\n    net = tf_util.fully_connected(net, 512, bn=True, is_training=is_training,\n                                  scope='fc1', bn_decay=bn_decay)\n    net = tf_util.fully_connected(net, 256, bn=True, is_training=is_training,\n                                  scope='fc2', bn_decay=bn_decay)\n    net = tf_util.dropout(net, keep_prob=0.7, is_training=is_training,\n                          scope='dp1')\n    net = tf_util.fully_connected(net, 40, activation_fn=None, scope='fc3')\n\n    return net, end_points\n\n\ndef get_loss(pred, label, end_points):\n    \"\"\" pred: B*NUM_CLASSES,\n        label: B, \"\"\"\n    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)\n    classify_loss = tf.reduce_mean(loss)\n    tf.summary.scalar('classify loss', classify_loss)\n    tf.add_to_collection('losses', classify_loss)\n    return classify_loss\n\n\nif __name__=='__main__':\n    with tf.Graph().as_default():\n        inputs = tf.zeros((32,1024,3))\n        outputs = get_model(inputs, tf.constant(True))\n        print(outputs)\n"
  },
  {
    "path": "pointnet2_tf/part_seg/command.sh",
    "content": "python train.py --model pointnet2_part_seg --log_dir log --gpu 1 --max_epoch 201 > log.txt 2>&1 &\n"
  },
  {
    "path": "pointnet2_tf/part_seg/command_one_hot.sh",
    "content": "python train_one_hot.py --batch_size 8 --model pointnet2_part_seg_msg_one_hot --log_dir log_msg_one_hot --gpu 0 --max_epoch 201 > log_msg_one_hot.txt 2>&1 &\n"
  },
  {
    "path": "pointnet2_tf/part_seg/evaluate.py",
    "content": "import argparse\nimport math\nfrom datetime import datetime\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport tf_util\nimport part_dataset_all_normal\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='pointnet2_part_seg', help='Model name [default: pointnet2_part_seg]')\nparser.add_argument('--model_path', default='log/model.ckpt', help='model checkpoint file path [default: log/model.ckpt]')\nparser.add_argument('--log_dir', default='log_eval', help='Log dir [default: log_eval]')\nparser.add_argument('--num_point', type=int, default=2048, help='Point Number [default: 2048]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nFLAGS = parser.parse_args()\n\n\nVOTE_NUM = 12\n\n\nEPOCH_CNT = 0\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nGPU_INDEX = FLAGS.gpu\n\nMODEL_PATH = FLAGS.model_path\nMODEL = importlib.import_module(FLAGS.model) # import network module\nMODEL_FILE = os.path.join(ROOT_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\nNUM_CLASSES = 50\n\n# Shapenet official train/test split\nDATA_PATH = os.path.join(ROOT_DIR, 'data', 'shapenetcore_partanno_segmentation_benchmark_v0_normal')\nTEST_DATASET = part_dataset_all_normal.PartNormalDataset(root=DATA_PATH, npoints=NUM_POINT, 
classification=False, split='test')\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef evaluate():\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            print(is_training_pl)\n\n            print(\"--- Get model and loss\")\n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl)\n            loss = MODEL.get_loss(pred, labels_pl)\n            saver = tf.train.Saver()\n\n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        sess = tf.Session(config=config)\n        # Restore variables from disk.\n        saver.restore(sess, MODEL_PATH)\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss}\n\n        eval_one_epoch(sess, ops)\n\ndef get_batch(dataset, idxs, start_idx, end_idx):\n    bsize = end_idx-start_idx\n    batch_data = np.zeros((bsize, NUM_POINT, 6))\n    batch_label = np.zeros((bsize, NUM_POINT), dtype=np.int32)\n    for i in range(bsize):\n        ps,normal,seg = dataset[idxs[i+start_idx]]\n        batch_data[i,:,0:3] = ps\n        batch_data[i,:,3:6] = normal\n        batch_label[i,:] = seg\n    return batch_data, batch_label\n\ndef eval_one_epoch(sess, ops):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = False\n    test_idxs = np.arange(0, len(TEST_DATASET))\n    # Test on all data: last batch might be smaller than BATCH_SIZE\n    num_batches = (len(TEST_DATASET)+BATCH_SIZE-1)//BATCH_SIZE\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ 
in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    seg_classes = TEST_DATASET.seg_classes\n    shape_ious = {cat:[] for cat in seg_classes.keys()}\n    seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ...49:Table}\n    for cat in seg_classes.keys():\n        for label in seg_classes[cat]:\n            seg_label_to_cat[label] = cat\n\n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n    \n    batch_data = np.zeros((BATCH_SIZE, NUM_POINT, 6))\n    batch_label = np.zeros((BATCH_SIZE, NUM_POINT)).astype(np.int32)\n    for batch_idx in range(num_batches):\n        if batch_idx %20==0:\n            log_string('%03d/%03d'%(batch_idx, num_batches))\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = min(len(TEST_DATASET), (batch_idx+1) * BATCH_SIZE)\n        cur_batch_size = end_idx-start_idx\n        cur_batch_data, cur_batch_label = get_batch(TEST_DATASET, test_idxs, start_idx, end_idx)\n        if cur_batch_size == BATCH_SIZE:\n            batch_data = cur_batch_data\n            batch_label = cur_batch_label\n        else:\n            batch_data[0:cur_batch_size] = cur_batch_data\n            batch_label[0:cur_batch_size] = cur_batch_label\n\n        # ---------------------------------------------------------------------\n        loss_val = 0\n        pred_val = np.zeros((BATCH_SIZE, NUM_POINT, NUM_CLASSES))\n        for _ in range(VOTE_NUM):\n            feed_dict = {ops['pointclouds_pl']: batch_data,\n                         ops['labels_pl']: batch_label,\n                         ops['is_training_pl']: is_training}\n            temp_loss_val, temp_pred_val = sess.run([ops['loss'], ops['pred']], feed_dict=feed_dict)\n            loss_val += temp_loss_val\n            pred_val += temp_pred_val\n        loss_val /= float(VOTE_NUM)\n        # ---------------------------------------------------------------------\n    \n        # Select valid data\n        cur_pred_val = 
pred_val[0:cur_batch_size]\n        # Constrain pred to the groundtruth classes (selected by seg_classes[cat])\n        cur_pred_val_logits = cur_pred_val\n        cur_pred_val = np.zeros((cur_batch_size, NUM_POINT)).astype(np.int32)\n        for i in range(cur_batch_size):\n            cat = seg_label_to_cat[cur_batch_label[i,0]]\n            logits = cur_pred_val_logits[i,:,:]\n            cur_pred_val[i,:] = np.argmax(logits[:,seg_classes[cat]], 1) + seg_classes[cat][0]\n        correct = np.sum(cur_pred_val == cur_batch_label)\n        total_correct += correct\n        total_seen += (cur_batch_size*NUM_POINT)\n        if cur_batch_size==BATCH_SIZE:\n            loss_sum += loss_val\n        for l in range(NUM_CLASSES):\n            total_seen_class[l] += np.sum(cur_batch_label==l)\n            total_correct_class[l] += (np.sum((cur_pred_val==l) & (cur_batch_label==l)))\n\n        for i in range(cur_batch_size):\n            segp = cur_pred_val[i,:]\n            segl = cur_batch_label[i,:]\n            cat = seg_label_to_cat[segl[0]]\n            part_ious = [0.0 for _ in range(len(seg_classes[cat]))]\n            for l in seg_classes[cat]:\n                if (np.sum(segl==l) == 0) and (np.sum(segp==l) == 0): # part neither present in groundtruth nor predicted\n                    part_ious[l-seg_classes[cat][0]] = 1.0\n                else:\n                    part_ious[l-seg_classes[cat][0]] = np.sum((segl==l) & (segp==l)) / float(np.sum((segl==l) | (segp==l)))\n            shape_ious[cat].append(np.mean(part_ious))\n\n    all_shape_ious = []\n    for cat in shape_ious.keys():\n        for iou in shape_ious[cat]:\n            all_shape_ious.append(iou)\n        shape_ious[cat] = np.mean(shape_ious[cat])\n    print(len(all_shape_ious))\n    mean_shape_ious = np.mean(list(shape_ious.values()))\n    log_string('eval mean loss: %f' % (loss_sum / float(len(TEST_DATASET)//BATCH_SIZE)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    
log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float64))))\n    for cat in sorted(shape_ious.keys()):\n        log_string('eval mIoU of %s:\\t %f'%(cat, shape_ious[cat]))\n    log_string('eval mean mIoU: %f' % (mean_shape_ious))\n    log_string('eval mean mIoU (all shapes): %f' % (np.mean(all_shape_ious)))\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    evaluate()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/part_seg/part_dataset.py",
    "content": "'''\n    Dataset for shapenet part segmentaion.\n'''\n\nimport os\nimport os.path\nimport json\nimport numpy as np\nimport sys\n\ndef pc_normalize(pc):\n    l = pc.shape[0]\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n    pc = pc / m\n    return pc\n\nclass PartDataset():\n    def __init__(self, root, npoints = 2500, classification = False, class_choice = None, split='train', normalize=True):\n        self.npoints = npoints\n        self.root = root\n        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')\n        self.cat = {}\n        \n        self.classification = classification\n        self.normalize = normalize\n        \n        with open(self.catfile, 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = ls[1]\n        #print(self.cat)\n        if not class_choice is  None:\n            self.cat = {k:v for k,v in self.cat.items() if k in class_choice}\n            \n        self.meta = {}\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:\n            train_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:\n            val_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:\n            test_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        for item in self.cat:\n            #print('category', item)\n            self.meta[item] = []\n            dir_point = os.path.join(self.root, self.cat[item], 'points')\n            dir_seg = os.path.join(self.root, self.cat[item], 'points_label')\n            #print(dir_point, dir_seg)\n            fns = sorted(os.listdir(dir_point))\n            #print(fns[0][0:-4])\n  
          if split=='trainval':\n                fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]\n            elif split=='train':\n                fns = [fn for fn in fns if fn[0:-4] in train_ids]\n            elif split=='val':\n                fns = [fn for fn in fns if fn[0:-4] in val_ids]\n            elif split=='test':\n                fns = [fn for fn in fns if fn[0:-4] in test_ids]\n            else:\n                print('Unknown split: %s. Exiting..'%(split))\n                exit(-1)\n                \n            #print(os.path.basename(fns))\n            for fn in fns:\n                token = (os.path.splitext(os.path.basename(fn))[0]) \n                self.meta[item].append((os.path.join(dir_point, token + '.pts'), os.path.join(dir_seg, token + '.seg')))\n        \n        self.datapath = []\n        for item in self.cat:\n            for fn in self.meta[item]:\n                self.datapath.append((item, fn[0], fn[1]))\n            \n         \n        self.classes = dict(zip(self.cat, range(len(self.cat))))  \n        self.num_seg_classes = 0\n        if not self.classification:\n            for i in range(len(self.datapath)/50):\n                l = len(np.unique(np.loadtxt(self.datapath[i][-1]).astype(np.uint8)))\n                if l > self.num_seg_classes:\n                    self.num_seg_classes = l\n        #print(self.num_seg_classes)\n        \n        self.cache = {} # from index to (point_set, cls, seg) tuple\n        self.cache_size = 10000\n               \n    def __getitem__(self, index):\n        if index in self.cache:\n            point_set, seg, cls = self.cache[index]\n        else:\n            fn = self.datapath[index]\n            cls = self.classes[self.datapath[index][0]]\n            cls = np.array([cls]).astype(np.int32)\n            point_set = np.loadtxt(fn[1]).astype(np.float32)\n            if self.normalize:\n                point_set = pc_normalize(point_set)\n            seg = 
np.loadtxt(fn[2]).astype(np.int64) - 1\n            #print(point_set.shape, seg.shape)\n            if len(self.cache) < self.cache_size:\n                self.cache[index] = (point_set, seg, cls)\n                \n        \n        choice = np.random.choice(len(seg), self.npoints, replace=True)\n        #resample\n        point_set = point_set[choice, :]\n        seg = seg[choice]\n        if self.classification:\n            return point_set, cls\n        else:\n            return point_set, seg\n        \n    def __len__(self):\n        return len(self.datapath)\n\n\nif __name__ == '__main__':\n    d = PartDataset(root = '../data/shapenetcore_partanno_segmentation_benchmark_v0', class_choice = ['Airplane'], split='test')\n    print(len(d))\n    import time\n    tic = time.time()\n    for i in range(100):\n        ps, seg = d[i]\n        print np.max(seg), np.min(seg)\n    print(time.time() - tic)\n    print(ps.shape, type(ps), seg.shape,type(seg))\n    \n    d = PartDataset(root = '../data/shapenetcore_partanno_segmentation_benchmark_v0', classification = True)\n    print(len(d))\n    ps, cls = d[0]\n    print(ps.shape, type(ps), cls.shape,type(cls))\n\n"
  },
  {
    "path": "pointnet2_tf/part_seg/part_dataset_all_normal.py",
    "content": "'''\n    Dataset for ShapeNetPart segmentation\n'''\n\nimport os\nimport os.path\nimport json\nimport numpy as np\nimport sys\n\ndef pc_normalize(pc):\n    l = pc.shape[0]\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n    pc = pc / m\n    return pc\n\nclass PartNormalDataset():\n    def __init__(self, root, npoints = 2500, classification = False, split='train', normalize=True, return_cls_label = False):\n        self.npoints = npoints\n        self.root = root\n        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')\n        self.cat = {}\n        \n        self.classification = classification\n        self.normalize = normalize\n        self.return_cls_label = return_cls_label\n        \n        with open(self.catfile, 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = ls[1]\n        self.cat = {k:v for k,v in self.cat.items()}\n        #print(self.cat)\n            \n        self.meta = {}\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:\n            train_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:\n            val_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:\n            test_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        for item in self.cat:\n            #print('category', item)\n            self.meta[item] = []\n            dir_point = os.path.join(self.root, self.cat[item])\n            fns = sorted(os.listdir(dir_point))\n            #print(fns[0][0:-4])\n            if split=='trainval':\n                fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]\n    
        elif split=='train':\n                fns = [fn for fn in fns if fn[0:-4] in train_ids]\n            elif split=='val':\n                fns = [fn for fn in fns if fn[0:-4] in val_ids]\n            elif split=='test':\n                fns = [fn for fn in fns if fn[0:-4] in test_ids]\n            else:\n                print('Unknown split: %s. Exiting..'%(split))\n                exit(-1)\n                \n            #print(os.path.basename(fns))\n            for fn in fns:\n                token = (os.path.splitext(os.path.basename(fn))[0]) \n                self.meta[item].append(os.path.join(dir_point, token + '.txt'))\n        \n        self.datapath = []\n        for item in self.cat:\n            for fn in self.meta[item]:\n                self.datapath.append((item, fn))\n            \n         \n        self.classes = dict(zip(self.cat, range(len(self.cat))))  \n        # Mapping from category ('Chair') to a list of int [10,11,12,13] as segmentation labels\n        self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43], 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46], 'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27], 'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40], 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]}\n\n        for cat in sorted(self.seg_classes.keys()):\n            print(cat, self.seg_classes[cat])\n        \n        self.cache = {} # from index to (point_set, cls, seg) tuple\n        self.cache_size = 20000\n        \n    def __getitem__(self, index):\n        if index in self.cache:\n            point_set, normal, seg, cls = self.cache[index]\n        else:\n            fn = self.datapath[index]\n            cat = self.datapath[index][0]\n            cls = self.classes[cat]\n            cls = np.array([cls]).astype(np.int32)\n            data = np.loadtxt(fn[1]).astype(np.float32)\n         
   point_set = data[:,0:3]\n            if self.normalize:\n                point_set = pc_normalize(point_set)\n            normal = data[:,3:6]\n            seg = data[:,-1].astype(np.int32)\n            if len(self.cache) < self.cache_size:\n                self.cache[index] = (point_set, normal, seg, cls)\n                \n        \n        choice = np.random.choice(len(seg), self.npoints, replace=True)\n        #resample\n        point_set = point_set[choice, :]\n        seg = seg[choice]\n        normal = normal[choice,:]\n        if self.classification:\n            return point_set, normal, cls\n        else:\n            if self.return_cls_label:\n                return point_set, normal, seg, cls\n            else:\n                return point_set, normal, seg\n        \n    def __len__(self):\n        return len(self.datapath)\n\n\nif __name__ == '__main__':\n    d = PartNormalDataset(root = '../data/shapenetcore_partanno_segmentation_benchmark_v0_normal', split='trainval', npoints=3000)\n    print(len(d))\n\n    i = 500\n    ps, normal, seg = d[i]\n    print(d.datapath[i])\n    print(np.max(seg), np.min(seg))\n    print(ps.shape, seg.shape, normal.shape)\n    print(ps)\n    print(normal)\n    \n    sys.path.append('../utils')\n    import show3d_balls\n    show3d_balls.showpoints(ps, normal+1, ballradius=8)\n\n    d = PartNormalDataset(root = '../data/shapenetcore_partanno_segmentation_benchmark_v0_normal', classification = True)\n    print(len(d))\n    ps, normal, cls = d[0]\n    print(ps.shape, type(ps), cls.shape,type(cls))\n\n"
  },
  {
    "path": "pointnet2_tf/part_seg/test.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport argparse\nimport socket\nimport importlib\nimport time\nimport os\nimport scipy.misc\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, 'models'))\nsys.path.append(os.path.join(BASE_DIR, 'utils'))\nimport provider\nimport show3d_balls\nsys.path.append(os.path.join(ROOT_DIR, 'data_prep'))\nimport part_dataset\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--num_point', type=int, default=2048, help='Point Number [default: 2048]')\nparser.add_argument('--category', default='Airplane', help='Which single class to train on [default: Airplane]')\nparser.add_argument('--model', default='pointnet2_part_seg', help='Model name [default: pointnet2_part_seg]')\nparser.add_argument('--model_path', default='log/model.ckpt', help='model checkpoint file path [default: log/model.ckpt]')\nFLAGS = parser.parse_args()\n\n\nMODEL_PATH = FLAGS.model_path\nGPU_INDEX = FLAGS.gpu\nNUM_POINT = FLAGS.num_point\nMODEL = importlib.import_module(FLAGS.model) # import network module\nNUM_CLASSES = 4\nDATA_PATH = os.path.join(ROOT_DIR, 'data', 'shapenetcore_partanno_segmentation_benchmark_v0_normal')\nTEST_DATASET = part_dataset.PartDataset(root=DATA_PATH, npoints=NUM_POINT, classification=False, class_choice=FLAGS.category, split='test')\n\ndef get_model(batch_size, num_point):\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(batch_size, num_point)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl)\n            loss = MODEL.get_loss(pred, labels_pl, end_points)\n            saver = tf.train.Saver()\n        # Create a session\n        config = 
tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        sess = tf.Session(config=config)\n        # Restore variables from disk.\n        saver.restore(sess, MODEL_PATH)\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss}\n        return sess, ops\n\ndef inference(sess, ops, pc, batch_size):\n    ''' pc: BxNx3 array, return BxN pred '''\n    assert pc.shape[0]%batch_size == 0\n    num_batches = pc.shape[0]/batch_size\n    logits = np.zeros((pc.shape[0], pc.shape[1], NUM_CLASSES))\n    for i in range(num_batches):\n        feed_dict = {ops['pointclouds_pl']: pc[i*batch_size:(i+1)*batch_size,...],\n                     ops['is_training_pl']: False}\n        batch_logits = sess.run(ops['pred'], feed_dict=feed_dict)\n        logits[i*batch_size:(i+1)*batch_size,...] = batch_logits\n    return np.argmax(logits, 2)\n\nif __name__=='__main__':\n\n    import matplotlib.pyplot as plt\n    cmap = plt.cm.get_cmap(\"hsv\", 4)\n    cmap = np.array([cmap(i) for i in range(10)])[:,:3]\n\n    for i in range(len(TEST_DATASET)):\n        ps, seg = TEST_DATASET[i]\n        sess, ops = get_model(batch_size=1, num_point=ps.shape[0])\n        segp = inference(sess, ops, np.expand_dims(ps,0), batch_size=1)\n        segp = segp.squeeze()\n\n        gt = cmap[seg, :]\n        pred = cmap[segp, :]\n        show3d_balls.showpoints(ps, gt, pred, ballradius=8)\n"
  },
  {
    "path": "pointnet2_tf/part_seg/train.py",
    "content": "import argparse\nimport math\nfrom datetime import datetime\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nimport part_dataset_all_normal\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='model', help='Model name [default: model]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=2048, help='Point Number [default: 2048]')\nparser.add_argument('--max_epoch', type=int, default=201, help='Epoch to run [default: 201]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')\nFLAGS = parser.parse_args()\n\nEPOCH_CNT = 0\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nGPU_INDEX = FLAGS.gpu\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = importlib.import_module(FLAGS.model) # import network 
module\nMODEL_FILE = os.path.join(ROOT_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nNUM_CLASSES = 50\n\n# Shapenet official train/test split\nDATA_PATH = os.path.join(ROOT_DIR, 'data', 'shapenetcore_partanno_segmentation_benchmark_v0_normal')\nTRAIN_DATASET = part_dataset_all_normal.PartNormalDataset(root=DATA_PATH, npoints=NUM_POINT, classification=False, split='trainval')\nTEST_DATASET = part_dataset_all_normal.PartNormalDataset(root=DATA_PATH, npoints=NUM_POINT, classification=False, split='test')\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate        \n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef train():\n    with tf.Graph().as_default():\n        with 
tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            \n            # Note the global_step=batch parameter to minimize. \n            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.\n            batch = tf.Variable(0)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            print \"--- Get model and loss\"\n            # Get model and loss \n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)\n            loss = MODEL.get_loss(pred, labels_pl)\n            tf.summary.scalar('loss', loss)\n\n            correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)\n            tf.summary.scalar('accuracy', accuracy)\n\n            print \"--- Get training operator\"\n            # Get training operator\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n            train_op = optimizer.minimize(loss, global_step=batch)\n            \n            # Add ops to save and restore all the variables.\n            saver = tf.train.Saver()\n        \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = tf.Session(config=config)\n\n        # Add summary writers\n        merged = tf.summary.merge_all()\n        train_writer = 
tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'), sess.graph)\n\n        # Init variables\n        init = tf.global_variables_initializer()\n        sess.run(init)\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch,\n               'end_points': end_points}\n\n        best_acc = -1\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n             \n            train_one_epoch(sess, ops, train_writer)\n            eval_one_epoch(sess, ops, test_writer)\n\n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\ndef get_batch(dataset, idxs, start_idx, end_idx):\n    bsize = end_idx-start_idx\n    batch_data = np.zeros((bsize, NUM_POINT, 6))\n    batch_label = np.zeros((bsize, NUM_POINT), dtype=np.int32)\n    for i in range(bsize):\n        ps,normal,seg = dataset[idxs[i+start_idx]]\n        batch_data[i,:,0:3] = ps\n        batch_data[i,:,3:6] = normal\n        batch_label[i,:] = seg\n    return batch_data, batch_label\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    \n    # Shuffle train samples\n    train_idxs = np.arange(0, len(TRAIN_DATASET))\n    np.random.shuffle(train_idxs)\n    num_batches = len(TRAIN_DATASET)/BATCH_SIZE\n    \n    log_string(str(datetime.now()))\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    for batch_idx in range(num_batches):\n        start_idx = 
batch_idx * BATCH_SIZE\n        end_idx = (batch_idx+1) * BATCH_SIZE\n        batch_data, batch_label = get_batch(TRAIN_DATASET, train_idxs, start_idx, end_idx)\n        # Augment batched point clouds by rotation and jittering\n        #aug_data = batch_data\n        #aug_data = provider.random_scale_point_cloud(batch_data)\n        batch_data[:,:,0:3] = provider.jitter_point_cloud(batch_data[:,:,0:3])\n        feed_dict = {ops['pointclouds_pl']: batch_data,\n                     ops['labels_pl']: batch_label,\n                     ops['is_training_pl']: is_training,}\n        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n        train_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 2)\n        correct = np.sum(pred_val == batch_label)\n        total_correct += correct\n        total_seen += (BATCH_SIZE*NUM_POINT)\n        loss_sum += loss_val\n\n        if (batch_idx+1)%10 == 0:\n            log_string(' -- %03d / %03d --' % (batch_idx+1, num_batches))\n            log_string('mean loss: %f' % (loss_sum / 10))\n            log_string('accuracy: %f' % (total_correct / float(total_seen)))\n            total_correct = 0\n            total_seen = 0\n            loss_sum = 0\n        \n\n        \ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n    test_idxs = np.arange(0, len(TEST_DATASET))\n    # Test on all data: last batch might be smaller than BATCH_SIZE\n    num_batches = (len(TEST_DATASET)+BATCH_SIZE-1)/BATCH_SIZE\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    seg_classes = TEST_DATASET.seg_classes\n    shape_ious = {cat:[] for cat in seg_classes.keys()}\n    seg_label_to_cat = {} # 
{0:Airplane, 1:Airplane, ...49:Table}\n    for cat in seg_classes.keys():\n        for label in seg_classes[cat]:\n            seg_label_to_cat[label] = cat\n\n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n    \n    batch_data = np.zeros((BATCH_SIZE, NUM_POINT, 3))\n    batch_label = np.zeros((BATCH_SIZE, NUM_POINT)).astype(np.int32)\n    for batch_idx in range(num_batches):\n        if batch_idx %20==0:\n            log_string('%03d/%03d'%(batch_idx, num_batches))\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = min(len(TEST_DATASET), (batch_idx+1) * BATCH_SIZE)\n        cur_batch_size = end_idx-start_idx\n        cur_batch_data, cur_batch_label = get_batch(TEST_DATASET, test_idxs, start_idx, end_idx)\n        if cur_batch_size == BATCH_SIZE:\n            batch_data = cur_batch_data\n            batch_label = cur_batch_label\n        else:\n            batch_data[0:cur_batch_size] = cur_batch_data\n            batch_label[0:cur_batch_size] = cur_batch_label\n\n        # ---------------------------------------------------------------------\n        feed_dict = {ops['pointclouds_pl']: batch_data,\n                     ops['labels_pl']: batch_label,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['loss'], ops['pred']], feed_dict=feed_dict)\n        test_writer.add_summary(summary, step)\n        # ---------------------------------------------------------------------\n    \n        # Select valid data\n        cur_pred_val = pred_val[0:cur_batch_size]\n        # Constrain pred to the groundtruth classes (selected by seg_classes[cat])\n        cur_pred_val_logits = cur_pred_val\n        cur_pred_val = np.zeros((cur_batch_size, NUM_POINT)).astype(np.int32)\n        for i in range(cur_batch_size):\n            cat = seg_label_to_cat[cur_batch_label[i,0]]\n            logits = 
cur_pred_val_logits[i,:,:]\n            cur_pred_val[i,:] = np.argmax(logits[:,seg_classes[cat]], 1) + seg_classes[cat][0]\n        correct = np.sum(cur_pred_val == cur_batch_label)\n        total_correct += correct\n        total_seen += (cur_batch_size*NUM_POINT)\n        if cur_batch_size==BATCH_SIZE:\n            loss_sum += loss_val\n        for l in range(NUM_CLASSES):\n            total_seen_class[l] += np.sum(cur_batch_label==l)\n            total_correct_class[l] += (np.sum((cur_pred_val==l) & (cur_batch_label==l)))\n\n        for i in range(cur_batch_size):\n            segp = cur_pred_val[i,:]\n            segl = cur_batch_label[i,:] \n            cat = seg_label_to_cat[segl[0]]\n            part_ious = [0.0 for _ in range(len(seg_classes[cat]))]\n            for l in seg_classes[cat]:\n                if (np.sum(segl==l) == 0) and (np.sum(segp==l) == 0): # part is not present, no prediction as well\n                    part_ious[l-seg_classes[cat][0]] = 1.0\n                else:\n                    part_ious[l-seg_classes[cat][0]] = np.sum((segl==l) & (segp==l)) / float(np.sum((segl==l) | (segp==l)))\n            shape_ious[cat].append(np.mean(part_ious))\n\n    all_shape_ious = []\n    for cat in shape_ious.keys():\n        for iou in shape_ious[cat]:\n            all_shape_ious.append(iou)\n        shape_ious[cat] = np.mean(shape_ious[cat])\n    mean_shape_ious = np.mean(shape_ious.values())\n    log_string('eval mean loss: %f' % (loss_sum / float(len(TEST_DATASET)/BATCH_SIZE)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n    for cat in sorted(shape_ious.keys()):\n        log_string('eval mIoU of %s:\\t %f'%(cat, shape_ious[cat]))\n    log_string('eval mean mIoU: %f' % (mean_shape_ious))\n    log_string('eval mean mIoU (all shapes): %f' % (np.mean(all_shape_ious)))\n         \n    EPOCH_CNT += 
1\n    return total_correct/float(total_seen)\n\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/part_seg/train_one_hot.py",
    "content": "import argparse\nimport math\nfrom datetime import datetime\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nimport part_dataset_all_normal\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='model', help='Model name [default: model]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=2048, help='Point Number [default: 2048]')\nparser.add_argument('--max_epoch', type=int, default=201, help='Epoch to run [default: 201]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=16881*20, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.5, help='Decay rate for lr decay [default: 0.7]')\nFLAGS = parser.parse_args()\n\nEPOCH_CNT = 0\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nGPU_INDEX = FLAGS.gpu\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = importlib.import_module(FLAGS.model) # import network 
module\nMODEL_FILE = os.path.join(ROOT_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train_one_hot.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nNUM_CLASSES = 50\n\n# Shapenet official train/test split\nDATA_PATH = os.path.join(ROOT_DIR, 'data', 'shapenetcore_partanno_segmentation_benchmark_v0_normal')\nTRAIN_DATASET = part_dataset_all_normal.PartNormalDataset(root=DATA_PATH, npoints=NUM_POINT, classification=False, split='trainval', return_cls_label=True)\nTEST_DATASET = part_dataset_all_normal.PartNormalDataset(root=DATA_PATH, npoints=NUM_POINT, classification=False, split='test', return_cls_label=True)\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate        \n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef train():\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl, cls_labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            print is_training_pl\n            \n            # Note the global_step=batch parameter to minimize. \n            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.\n            batch = tf.Variable(0)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            print \"--- Get model and loss\"\n            # Get model and loss \n            pred, end_points = MODEL.get_model(pointclouds_pl, cls_labels_pl, is_training_pl, bn_decay=bn_decay)\n            loss = MODEL.get_loss(pred, labels_pl)\n            tf.summary.scalar('loss', loss)\n\n            correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)\n            tf.summary.scalar('accuracy', accuracy)\n\n            print \"--- Get training operator\"\n            # Get training operator\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n            train_op = optimizer.minimize(loss, global_step=batch)\n            \n            # Add ops to save and restore all the variables.\n            saver = tf.train.Saver()\n        \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = 
tf.Session(config=config)\n\n        # Add summary writers\n        merged = tf.summary.merge_all()\n        train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'), sess.graph)\n\n        # Init variables\n        init = tf.global_variables_initializer()\n        sess.run(init)\n        #sess.run(init, {is_training_pl: True})\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'cls_labels_pl': cls_labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch,\n               'end_points': end_points}\n\n        best_acc = -1\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n             \n            train_one_epoch(sess, ops, train_writer)\n            eval_one_epoch(sess, ops, test_writer)\n\n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\ndef get_batch(dataset, idxs, start_idx, end_idx):\n    bsize = end_idx-start_idx\n    batch_data = np.zeros((bsize, NUM_POINT, 6))\n    batch_label = np.zeros((bsize, NUM_POINT), dtype=np.int32)\n    batch_cls_label = np.zeros((bsize,), dtype=np.int32)\n    for i in range(bsize):\n        ps,normal,seg,cls = dataset[idxs[i+start_idx]]\n        batch_data[i,:,0:3] = ps\n        batch_data[i,:,3:6] = normal\n        batch_label[i,:] = seg\n        batch_cls_label[i] = cls\n    return batch_data, batch_label, batch_cls_label\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    
\n    # Shuffle train samples\n    train_idxs = np.arange(0, len(TRAIN_DATASET))\n    np.random.shuffle(train_idxs)\n    num_batches = len(TRAIN_DATASET)/BATCH_SIZE\n    \n    log_string(str(datetime.now()))\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    for batch_idx in range(num_batches):\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = (batch_idx+1) * BATCH_SIZE\n        batch_data, batch_label, batch_cls_label = get_batch(TRAIN_DATASET, train_idxs, start_idx, end_idx)\n        # Augment batched point clouds by rotation and jittering\n        #aug_data = batch_data\n        #aug_data = provider.random_scale_point_cloud(batch_data)\n        batch_data[:,:,0:3] = provider.jitter_point_cloud(batch_data[:,:,0:3])\n        feed_dict = {ops['pointclouds_pl']: batch_data,\n                     ops['labels_pl']: batch_label,\n                     ops['cls_labels_pl']: batch_cls_label,\n                     ops['is_training_pl']: is_training,}\n        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n        train_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 2)\n        correct = np.sum(pred_val == batch_label)\n        total_correct += correct\n        total_seen += (BATCH_SIZE*NUM_POINT)\n        loss_sum += loss_val\n\n        if (batch_idx+1)%10 == 0:\n            log_string(' -- %03d / %03d --' % (batch_idx+1, num_batches))\n            log_string('mean loss: %f' % (loss_sum / 10))\n            log_string('accuracy: %f' % (total_correct / float(total_seen)))\n            total_correct = 0\n            total_seen = 0\n            loss_sum = 0\n        \n\n        \ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n    test_idxs = np.arange(0, len(TEST_DATASET))\n    # Test on all data: last batch 
might be smaller than BATCH_SIZE\n    num_batches = (len(TEST_DATASET)+BATCH_SIZE-1)/BATCH_SIZE\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    seg_classes = TEST_DATASET.seg_classes\n    shape_ious = {cat:[] for cat in seg_classes.keys()}\n    seg_label_to_cat = {} # {0:Airplane, 1:Airplane, ...49:Table}\n    for cat in seg_classes.keys():\n        for label in seg_classes[cat]:\n            seg_label_to_cat[label] = cat\n\n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n    \n    batch_data = np.zeros((BATCH_SIZE, NUM_POINT, 6)) # XYZ + normal, matching get_batch output\n    batch_label = np.zeros((BATCH_SIZE, NUM_POINT)).astype(np.int32)\n    batch_cls_label = np.zeros((BATCH_SIZE,)).astype(np.int32)\n    for batch_idx in range(num_batches):\n        if batch_idx % 20 == 0:\n            log_string('%03d/%03d'%(batch_idx, num_batches))\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = min(len(TEST_DATASET), (batch_idx+1) * BATCH_SIZE)\n        cur_batch_size = end_idx-start_idx\n        cur_batch_data, cur_batch_label, cur_batch_cls_label = get_batch(TEST_DATASET, test_idxs, start_idx, end_idx)\n        if cur_batch_size == BATCH_SIZE:\n            batch_data = cur_batch_data\n            batch_label = cur_batch_label\n            batch_cls_label = cur_batch_cls_label\n        else:\n            batch_data[0:cur_batch_size] = cur_batch_data\n            batch_label[0:cur_batch_size] = cur_batch_label\n            batch_cls_label[0:cur_batch_size] = cur_batch_cls_label\n\n        # ---------------------------------------------------------------------\n        feed_dict = {ops['pointclouds_pl']: batch_data,\n                     ops['labels_pl']: batch_label,\n                     ops['cls_labels_pl']: batch_cls_label,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, 
pred_val = sess.run([ops['merged'], ops['step'],\n            ops['loss'], ops['pred']], feed_dict=feed_dict)\n        test_writer.add_summary(summary, step)\n        # ---------------------------------------------------------------------\n    \n        # Select valid data\n        cur_pred_val = pred_val[0:cur_batch_size]\n        # Constrain pred to the groundtruth classes (selected by seg_classes[cat])\n        cur_pred_val_logits = cur_pred_val\n        cur_pred_val = np.zeros((cur_batch_size, NUM_POINT)).astype(np.int32)\n        for i in range(cur_batch_size):\n            cat = seg_label_to_cat[cur_batch_label[i,0]]\n            logits = cur_pred_val_logits[i,:,:]\n            cur_pred_val[i,:] = np.argmax(logits[:,seg_classes[cat]], 1) + seg_classes[cat][0]\n        correct = np.sum(cur_pred_val == cur_batch_label)\n        total_correct += correct\n        total_seen += (cur_batch_size*NUM_POINT)\n        if cur_batch_size==BATCH_SIZE:\n            loss_sum += loss_val\n        for l in range(NUM_CLASSES):\n            total_seen_class[l] += np.sum(cur_batch_label==l)\n            total_correct_class[l] += (np.sum((cur_pred_val==l) & (cur_batch_label==l)))\n\n        for i in range(cur_batch_size):\n            segp = cur_pred_val[i,:]\n            segl = cur_batch_label[i,:] \n            cat = seg_label_to_cat[segl[0]]\n            part_ious = [0.0 for _ in range(len(seg_classes[cat]))]\n            for l in seg_classes[cat]:\n                if (np.sum(segl==l) == 0) and (np.sum(segp==l) == 0): # part is not present, no prediction as well\n                    part_ious[l-seg_classes[cat][0]] = 1.0\n                else:\n                    part_ious[l-seg_classes[cat][0]] = np.sum((segl==l) & (segp==l)) / float(np.sum((segl==l) | (segp==l)))\n            shape_ious[cat].append(np.mean(part_ious))\n\n    all_shape_ious = []\n    for cat in shape_ious.keys():\n        for iou in shape_ious[cat]:\n            all_shape_ious.append(iou)\n        
shape_ious[cat] = np.mean(shape_ious[cat])\n    mean_shape_ious = np.mean(shape_ious.values())\n    log_string('eval mean loss: %f' % (loss_sum / float(len(TEST_DATASET)/BATCH_SIZE)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n    for cat in sorted(shape_ious.keys()):\n        log_string('eval mIoU of %s:\\t %f'%(cat, shape_ious[cat]))\n    log_string('eval mean mIoU: %f' % (mean_shape_ious))\n    log_string('eval mean mIoU (all shapes): %f' % (np.mean(all_shape_ious)))\n         \n    EPOCH_CNT += 1\n    return total_correct/float(total_seen)\n\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/scannet/README.md",
    "content": "### ScanNet Data\n\nOriginal dataset website: <a href=\"http://www.scan-net.org/\">http://www.scan-net.org/</a>\n\nYou can download our preprocessed data <a href=\"https://shapenet.cs.stanford.edu/media/scannet_data_pointnet2.zip\">here (1.72GB)</a>; refer to the code in `scannet_util.py` for data loading. Note that the virtual scan data is generated on the fly from our preprocessed data.\n\nSome of the code we used for ScanNet preprocessing is also included in the `preprocessing` folder. You have to download the original ScanNet data and make small modifications to the paths in order to run it.\n\nNote: To use ScanNetV2 data, change the tsv file to `scannetv2-labels.combined.tsv` and also update `scannet_util.py` to read the raw class and NYU40 names from the right columns (shifted by 1 compared to the V1 tsv).\n"
  },
  {
    "path": "pointnet2_tf/scannet/pc_util.py",
    "content": "\"\"\" Utility functions for processing point clouds.\n\nAuthor: Charles R. Qi, Hao Su\nDate: November 2016\n\"\"\"\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\n# Draw point cloud\nfrom eulerangles import euler2mat\n\n# Point cloud IO\nimport numpy as np\nfrom plyfile import PlyData, PlyElement\n\n\n# ----------------------------------------\n# Point Cloud/Volume Conversions\n# ----------------------------------------\ndef point_cloud_label_to_surface_voxel_label(point_cloud, label, res=0.0484):\n    coordmax = np.max(point_cloud,axis=0)\n    coordmin = np.min(point_cloud,axis=0)\n    nvox = np.ceil((coordmax-coordmin)/res)\n    vidx = np.ceil((point_cloud-coordmin)/res)\n    vidx = vidx[:,0]+vidx[:,1]*nvox[0]+vidx[:,2]*nvox[0]*nvox[1]\n    uvidx = np.unique(vidx)\n    if label.ndim==1:\n        uvlabel = [np.argmax(np.bincount(label[vidx==uv].astype(np.uint32))) for uv in uvidx]\n    else:\n        assert(label.ndim==2)\n        uvlabel = np.zeros((len(uvidx), label.shape[1]))\n        for i in range(label.shape[1]):\n            uvlabel[:,i] = np.array([np.argmax(np.bincount(label[vidx==uv,i].astype(np.uint32))) for uv in uvidx])\n    return uvidx, uvlabel, nvox\n\ndef point_cloud_label_to_surface_voxel_label_fast(point_cloud, label, res=0.0484):\n    coordmax = np.max(point_cloud,axis=0)\n    coordmin = np.min(point_cloud,axis=0)\n    nvox = np.ceil((coordmax-coordmin)/res)\n    vidx = np.ceil((point_cloud-coordmin)/res)\n    vidx = vidx[:,0]+vidx[:,1]*nvox[0]+vidx[:,2]*nvox[0]*nvox[1]\n    uvidx, vpidx = np.unique(vidx,return_index=True)\n    if label.ndim==1:\n        uvlabel = label[vpidx]\n    else:\n        assert(label.ndim==2)\n        uvlabel = label[vpidx,:]\n    return uvidx, uvlabel, nvox\n\ndef point_cloud_to_volume_batch(point_clouds, vsize=12, radius=1.0, flatten=True):\n    \"\"\" Input is a BxNx3 batch of point clouds\n        Output is Bx(vsize^3)\n    \"\"\"\n    vol_list = []\n    for b in range(point_clouds.shape[0]):\n        vol = point_cloud_to_volume(np.squeeze(point_clouds[b,:,:]), vsize, radius)\n        if flatten:\n            vol_list.append(vol.flatten())\n        else:\n            vol_list.append(np.expand_dims(np.expand_dims(vol, -1), 0))\n    if flatten:\n        return np.vstack(vol_list)\n    else:\n        return np.concatenate(vol_list, 0)\n\n\ndef point_cloud_to_volume(points, vsize, radius=1.0):\n    \"\"\" input is Nx3 points.\n        output is vsize*vsize*vsize\n        assumes points are in range [-radius, radius]\n    \"\"\"\n    vol = np.zeros((vsize,vsize,vsize))\n    voxel = 2*radius/float(vsize)\n    locations = (points + radius)/voxel\n    locations = locations.astype(int)\n    vol[locations[:,0],locations[:,1],locations[:,2]] = 1.0\n    return vol\n\n#a = np.zeros((16,1024,3))\n#print point_cloud_to_volume_batch(a, 12, 1.0, False).shape\n\ndef volume_to_point_cloud(vol):\n    \"\"\" vol is occupancy grid (value = 0 or 1) of size vsize*vsize*vsize\n        return Nx3 numpy array.\n    \"\"\"\n    vsize = vol.shape[0]\n    assert(vol.shape[1] == vsize and vol.shape[2] == vsize)\n    points = []\n    for a in range(vsize):\n        for b in range(vsize):\n            for c in range(vsize):\n                if vol[a,b,c] == 1:\n                    points.append(np.array([a,b,c]))\n    if len(points) == 0:\n        return np.zeros((0,3))\n    points = np.vstack(points)\n    return points\n\ndef point_cloud_to_volume_v2_batch(point_clouds, vsize=12, radius=1.0, num_sample=128):\n    \"\"\" Input is a BxNx3 batch of point clouds\n        Output is BxVxVxVxnum_samplex3\n        Added on Feb 19\n    \"\"\"\n    vol_list = []\n    for b in range(point_clouds.shape[0]):\n        vol = point_cloud_to_volume_v2(point_clouds[b,:,:], vsize, radius, num_sample)\n        vol_list.append(np.expand_dims(vol, 0))\n    return np.concatenate(vol_list, 0)\n\ndef point_cloud_to_volume_v2(points, vsize, radius=1.0, num_sample=128):\n    \"\"\" 
input is Nx3 points\n        output is vsize*vsize*vsize*num_sample*3\n        assumes points are in range [-radius, radius]\n        samples num_sample points in each voxel, if there are less than\n        num_sample points, replicate the points\n        Added on Feb 19\n    \"\"\"\n    vol = np.zeros((vsize,vsize,vsize,num_sample,3))\n    voxel = 2*radius/float(vsize)\n    locations = (points + radius)/voxel\n    locations = locations.astype(int)\n    loc2pc = {}\n    for n in range(points.shape[0]):\n        loc = tuple(locations[n,:])\n        if loc not in loc2pc:\n            loc2pc[loc] = []\n        loc2pc[loc].append(points[n,:])\n    #print loc2pc\n\n    for i in range(vsize):\n        for j in range(vsize):\n            for k in range(vsize):\n                if (i,j,k) not in loc2pc:\n                    vol[i,j,k,:,:] = np.zeros((num_sample,3))\n                else:\n                    pc = loc2pc[(i,j,k)] # a list of (3,) arrays\n                    pc = np.vstack(pc) # kx3\n                    # Sample/pad to num_sample points\n                    if pc.shape[0]>num_sample:\n                        choices = np.random.choice(pc.shape[0], num_sample, replace=False)\n                        pc = pc[choices,:]\n                    elif pc.shape[0]<num_sample:\n                        pc = np.lib.pad(pc, ((0,num_sample-pc.shape[0]),(0,0)), 'edge')\n                    # Normalize\n                    pc_center = (np.array([i,j,k])+0.5)*voxel - radius\n                    #print 'pc center: ', pc_center\n                    pc = (pc - pc_center) / voxel # shift and scale\n                    vol[i,j,k,:,:] = pc \n                #print (i,j,k), vol[i,j,k,:,:]\n    return vol\n\ndef point_cloud_to_image_batch(point_clouds, imgsize, radius=1.0, num_sample=128):\n    \"\"\" Input is BxNx3 a batch of point cloud\n        Output is BxIxIxnum_samplex3\n        Added on Feb 19\n    \"\"\"\n    img_list = []\n    for b in range(point_clouds.shape[0]):\n        
img = point_cloud_to_image(point_clouds[b,:,:], imgsize, radius, num_sample)\n        img_list.append(np.expand_dims(img, 0))\n    return np.concatenate(img_list, 0)\n\n\ndef point_cloud_to_image(points, imgsize, radius=1.0, num_sample=128):\n    \"\"\" input is Nx3 points\n        output is imgsize*imgsize*num_sample*3\n        assumes points are in range [-radius, radius]\n        samples num_sample points in each pixel, if there are less than\n        num_sample points, replicate the points\n        Added on Feb 19\n    \"\"\"\n    img = np.zeros((imgsize, imgsize, num_sample, 3))\n    pixel = 2*radius/float(imgsize)\n    locations = (points[:,0:2] + radius)/pixel # Nx2\n    locations = locations.astype(int)\n    loc2pc = {}\n    for n in range(points.shape[0]):\n        loc = tuple(locations[n,:])\n        if loc not in loc2pc:\n            loc2pc[loc] = []\n        loc2pc[loc].append(points[n,:])\n    for i in range(imgsize):\n        for j in range(imgsize):\n            if (i,j) not in loc2pc:\n                img[i,j,:,:] = np.zeros((num_sample,3))\n            else:\n                pc = loc2pc[(i,j)]\n                pc = np.vstack(pc)\n                if pc.shape[0]>num_sample:\n                    choices = np.random.choice(pc.shape[0], num_sample, replace=False)\n                    pc = pc[choices,:]\n                elif pc.shape[0]<num_sample:\n                    pc = np.lib.pad(pc, ((0,num_sample-pc.shape[0]),(0,0)), 'edge')\n                pc_center = (np.array([i,j])+0.5)*pixel - radius\n                pc[:,0:2] = (pc[:,0:2] - pc_center)/pixel\n                img[i,j,:,:] = pc\n    return img\n# ----------------------------------------\n# Point cloud IO\n# ----------------------------------------\n\ndef read_ply(filename):\n    \"\"\" read XYZ point cloud from filename PLY file \"\"\"\n    plydata = PlyData.read(filename)\n    pc = plydata['vertex'].data\n    pc_array = np.array([[x, y, z] for x,y,z in pc])\n    return pc_array\n\ndef 
read_ply_xyz(filename):\n    \"\"\" read XYZ point cloud from filename PLY file \"\"\"\n    assert(os.path.isfile(filename))\n    with open(filename, 'rb') as f:\n        plydata = PlyData.read(f)\n        num_verts = plydata['vertex'].count\n        vertices = np.zeros(shape=[num_verts, 3], dtype=np.float32)\n        vertices[:,0] = plydata['vertex'].data['x']\n        vertices[:,1] = plydata['vertex'].data['y']\n        vertices[:,2] = plydata['vertex'].data['z']\n    return vertices\n\ndef read_ply_xyzrgb(filename):\n    \"\"\" read XYZRGB point cloud from filename PLY file \"\"\"\n    assert(os.path.isfile(filename))\n    with open(filename, 'rb') as f:\n        plydata = PlyData.read(f)\n        num_verts = plydata['vertex'].count\n        vertices = np.zeros(shape=[num_verts, 6], dtype=np.float32)\n        vertices[:,0] = plydata['vertex'].data['x']\n        vertices[:,1] = plydata['vertex'].data['y']\n        vertices[:,2] = plydata['vertex'].data['z']\n        vertices[:,3] = plydata['vertex'].data['red']\n        vertices[:,4] = plydata['vertex'].data['green']\n        vertices[:,5] = plydata['vertex'].data['blue']\n    return vertices\n\ndef write_ply(points, filename, text=True):\n    \"\"\" input: Nx3, write points to filename as PLY format. 
\"\"\"\n    points = [(points[i,0], points[i,1], points[i,2]) for i in range(points.shape[0])]\n    vertex = np.array(points, dtype=[('x', 'f4'), ('y', 'f4'),('z', 'f4')])\n    el = PlyElement.describe(vertex, 'vertex', comments=['vertices'])\n    PlyData([el], text=text).write(filename)\n\n\n# ----------------------------------------\n# Simple Point cloud and Volume Renderers\n# ----------------------------------------\n\ndef draw_point_cloud(input_points, canvasSize=500, space=200, diameter=25,\n                     xrot=0, yrot=0, zrot=0, switch_xyz=[0,1,2], normalize=True):\n    \"\"\" Render point cloud to image with alpha channel.\n        Input:\n            points: Nx3 numpy array (+y is up direction)\n        Output:\n            gray image as numpy array of size canvasSizexcanvasSize\n    \"\"\"\n    image = np.zeros((canvasSize, canvasSize))\n    if input_points is None or input_points.shape[0] == 0:\n        return image\n\n    points = input_points[:, switch_xyz]\n    M = euler2mat(zrot, yrot, xrot)\n    points = (np.dot(M, points.transpose())).transpose()\n\n    # Normalize the point cloud\n    # We normalize scale to fit points in a unit sphere\n    if normalize:\n        centroid = np.mean(points, axis=0)\n        points -= centroid\n        furthest_distance = np.max(np.sqrt(np.sum(abs(points)**2,axis=-1)))\n        points /= furthest_distance\n\n    # Pre-compute the Gaussian disk\n    radius = (diameter-1)/2.0\n    disk = np.zeros((diameter, diameter))\n    for i in range(diameter):\n        for j in range(diameter):\n            if (i - radius) * (i-radius) + (j-radius) * (j-radius) <= radius * radius:\n                disk[i, j] = np.exp((-(i-radius)**2 - (j-radius)**2)/(radius**2))\n    mask = np.argwhere(disk > 0)\n    dx = mask[:, 0]\n    dy = mask[:, 1]\n    dv = disk[disk > 0]\n    \n    # Order points by z-buffer\n    zorder = np.argsort(points[:, 2])\n    points = points[zorder, :]\n    points[:, 2] = (points[:, 2] - np.min(points[:, 
2])) / (np.max(points[:, 2] - np.min(points[:, 2])))\n    max_depth = np.max(points[:, 2])\n       \n    for i in range(points.shape[0]):\n        j = points.shape[0] - i - 1\n        x = points[j, 0]\n        y = points[j, 1]\n        xc = canvasSize/2 + (x*space)\n        yc = canvasSize/2 + (y*space)\n        xc = int(np.round(xc))\n        yc = int(np.round(yc))\n        \n        px = dx + xc\n        py = dy + yc\n        \n        image[px, py] = image[px, py] * 0.7 + dv * (max_depth - points[j, 2]) * 0.3\n    \n    image = image / np.max(image)\n    return image\n\ndef point_cloud_three_views(points):\n    \"\"\" input points Nx3 numpy array (+y is up direction).\n        return an numpy array gray image of size 500x1500. \"\"\" \n    # +y is up direction\n    # xrot is azimuth\n    # yrot is in-plane\n    # zrot is elevation\n    img1 = draw_point_cloud(points, zrot=110/180.0*np.pi, xrot=45/180.0*np.pi, yrot=0/180.0*np.pi)\n    img2 = draw_point_cloud(points, zrot=70/180.0*np.pi, xrot=135/180.0*np.pi, yrot=0/180.0*np.pi)\n    img3 = draw_point_cloud(points, zrot=180.0/180.0*np.pi, xrot=90/180.0*np.pi, yrot=0/180.0*np.pi)\n    image_large = np.concatenate([img1, img2, img3], 1)\n    return image_large\n\n\ndef point_cloud_three_views_demo():\n    \"\"\" Demo for draw_point_cloud function \"\"\"\n    from PIL import Image\n    points = read_ply('../third_party/mesh_sampling/piano.ply')\n    im_array = point_cloud_three_views(points)\n    img = Image.fromarray(np.uint8(im_array*255.0))\n    img.save('piano.jpg')\n\nif __name__==\"__main__\":\n    point_cloud_three_views_demo()\n\n\ndef pyplot_draw_point_cloud(points, output_filename):\n    \"\"\" points is a Nx3 numpy array \"\"\"\n    import matplotlib.pyplot as plt\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.scatter(points[:,0], points[:,1], points[:,2])\n    ax.set_xlabel('x')\n    ax.set_ylabel('y')\n    ax.set_zlabel('z')\n    #savefig(output_filename)\n\ndef 
pyplot_draw_volume(vol, output_filename):\n    \"\"\" vol is of size vsize*vsize*vsize\n        output an image to output_filename\n    \"\"\"\n    points = volume_to_point_cloud(vol)\n    pyplot_draw_point_cloud(points, output_filename)\n\ndef write_ply_color(points, labels, out_filename, num_classes=None):\n    \"\"\" Color (N,3) points with labels (N) within range 0 ~ num_classes-1 as OBJ file \"\"\"\n    import matplotlib.pyplot as pyplot\n    labels = labels.astype(int)\n    N = points.shape[0]\n    if num_classes is None:\n        num_classes = np.max(labels)+1\n    else:\n        assert(num_classes>np.max(labels))\n    fout = open(out_filename, 'w')\n    colors = [pyplot.cm.hsv(i/float(num_classes)) for i in range(num_classes)]\n    for i in range(N):\n        c = colors[labels[i]]\n        c = [int(x*255) for x in c]\n        fout.write('v %f %f %f %d %d %d\\n' % (points[i,0],points[i,1],points[i,2],c[0],c[1],c[2]))\n    fout.close()\n\ndef write_ply_rgb(points, colors, out_filename, num_classes=None):\n    \"\"\" Color (N,3) points with RGB colors (N,3) within range [0,255] as OBJ file \"\"\"\n    colors = colors.astype(int)\n    N = points.shape[0]\n    fout = open(out_filename, 'w')\n    for i in range(N):\n        c = colors[i,:]\n        fout.write('v %f %f %f %d %d %d\\n' % (points[i,0],points[i,1],points[i,2],c[0],c[1],c[2]))\n    fout.close()\n"
  },
  {
    "path": "pointnet2_tf/scannet/preprocessing/collect_scannet_scenes.py",
    "content": "import scannet_util\n\nCLASS_NAMES = scannet_util.g_label_names\nRAW2SCANNET = scannet_util.g_raw2scannet\n\nimport os\nimport json\nimport sys\nimport numpy as np\nBASE_DIR = os.path.dirname(__file__)\n\nsys.path.append(BASE_DIR)\nsys.path.append('../')\nimport pc_util\n\nSCANNET_DIR = 'scannet_clean_2'\nSCENE_NAMES = [line.rstrip() for line in open('scannet_all.txt')]\n\ndef collect_one_scene_data_label(scene_name, out_filename):\n    # Over-segmented segments: maps from segment to vertex/point IDs\n    data_folder = os.path.join(SCANNET_DIR, scene_name)\n    mesh_seg_filename = os.path.join(data_folder, '%s_vh_clean_2.0.010000.segs.json'%(scene_name))\n    #print mesh_seg_filename\n    with open(mesh_seg_filename) as jsondata:\n        d = json.load(jsondata)\n        seg = d['segIndices']\n        #print len(seg)\n    segid_to_pointid = {}\n    for i in range(len(seg)):\n        if seg[i] not in segid_to_pointid:\n            segid_to_pointid[seg[i]] = []\n        segid_to_pointid[seg[i]].append(i)\n    \n    # Raw points in XYZRGBA\n    ply_filename = os.path.join(data_folder, '%s_vh_clean_2.ply' % (scene_name))\n    points = pc_util.read_ply_xyzrgb(ply_filename)\n    log_string(str(points.shape))\n    \n    # Instances over-segmented segment IDs: annotation on segments\n    instance_segids = []\n    labels = []\n    annotation_filename = os.path.join(data_folder, '%s.aggregation.json'%(scene_name))\n    #print annotation_filename\n    with open(annotation_filename) as jsondata:\n        d = json.load(jsondata)\n        for x in d['segGroups']:\n            instance_segids.append(x['segments'])\n            labels.append(x['label'])\n    \n    #print len(instance_segids)\n    #print labels\n    \n    # Each instance's points\n    instance_points_list = []\n    instance_labels_list = []\n    semantic_labels_list = []\n    for i in range(len(instance_segids)):\n       segids = instance_segids[i]\n       pointids = []\n       for segid in 
segids:\n           pointids += segid_to_pointid[segid]\n       instance_points = points[np.array(pointids),:]\n       instance_points_list.append(instance_points)\n       instance_labels_list.append(np.ones((instance_points.shape[0], 1))*i)   \n       if labels[i] not in RAW2SCANNET:\n           label = 'unannotated'\n       else:\n           label = RAW2SCANNET[labels[i]]\n       label = CLASS_NAMES.index(label)\n       semantic_labels_list.append(np.ones((instance_points.shape[0], 1))*label)\n       \n    # Refactor data format\n    scene_points = np.concatenate(instance_points_list, 0)\n    scene_points = scene_points[:,0:6] # XYZRGB, disregarding the A\n    instance_labels = np.concatenate(instance_labels_list, 0) \n    semantic_labels = np.concatenate(semantic_labels_list, 0)\n    data = np.concatenate((scene_points, instance_labels, semantic_labels), 1)\n    np.save(out_filename, data)\n\n\nLOG_FOUT = open('log.txt','w')\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\n\nif __name__=='__main__':\n    output_folder = 'scannet_scenes'\n    if not os.path.exists(output_folder):\n        os.mkdir(output_folder)\n    \n    for scene_name in SCENE_NAMES:\n        log_string(scene_name)\n        try:\n            out_filename = scene_name+'.npy' # scene0000_00.npy\n            collect_one_scene_data_label(scene_name, os.path.join(output_folder, out_filename))\n        except Exception, e:\n            log_string(scene_name+'ERROR!!')\n            log_string(str(e))\n    \n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/scannet/preprocessing/demo.py",
    "content": "import sys\nimport os\n\nBASE_DIR = os.path.dirname(__file__)\n\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, '../'))\n\nimport numpy as np\nimport pc_util\n\ndata = np.load('scannet_scenes/scene0001_01.npy')\nscene_points = data[:,0:3]\ncolors = data[:,3:6]\ninstance_labels = data[:,6]\nsemantic_labels = data[:,7]\n\n\noutput_folder = 'demo_output'\nif not os.path.exists(output_folder):\n    os.mkdir(output_folder)\n\n# Write scene as OBJ file for visualization\npc_util.write_ply_rgb(scene_points, colors, os.path.join(output_folder, 'scene.obj'))\npc_util.write_ply_color(scene_points, instance_labels, os.path.join(output_folder, 'scene_instance.obj'))\npc_util.write_ply_color(scene_points, semantic_labels, os.path.join(output_folder, 'scene_semantic.obj'))\n"
  },
  {
    "path": "pointnet2_tf/scannet/preprocessing/fetch_label_names.py",
    "content": "''' scanning through annotation files for all the scenes to get a complete list of categories '''\n\nimport os\nimport json\nscannet_dir = './scannet/'\nscene_names = [line.rstrip() for line in open('scannet_all.txt')]\n\nlabels = set()\nfor scene_name in scene_names:\n    path = os.path.join(scannet_dir, scene_name)\n    agg_filename = os.path.join(path, scene_name+'.aggregation.json')\n    with open(agg_filename) as jsondata:\n        d = json.load(jsondata)\n        for x in d['segGroups']:\n            labels.add(x['label']) \n\nfout = open('class_names.txt', 'w')\nfor label in list(labels):\n    print label\n    try:\n        fout.write(label+'\\n')\n    except:\n        pass\nfout.close()\n"
  },
  {
    "path": "pointnet2_tf/scannet/preprocessing/scannet-labels.combined.tsv",
    "content": "category\tcount\tnyuId\tnyu40id\teigen13id\tnyuClass\tnyu40class\teigen13class\tModelNet40\tModelNet10\tShapeNetCore55\tsynsetoffset\twnsynsetid\twnsynsetkey\r\nwall\t7274\t21\t1\t12\twall\twall\tWall\t\t\t\t\tn04546855\twall.n.01\r\nchair\t5419\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nfloor\t3910\t11\t2\t5\tfloor\tfloor\tFloor\t\t\t\t\tn03365592\tfloor.n.01\r\ntable\t2664\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ndoor\t1400\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\tn03221720\tdoor.n.01\r\ncouch\t1222\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\ncabinet\t1106\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\nshelf\t889\t42\t15\t6\tshelves\tshelves\tFurniture\tbookshelf\t\tbookshelf\t02871439\tn02871439\tbookshelf.n.01\r\ndesk\t862\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\ttable\t04379243\tn03179701\tdesk.n.01\r\noffice chair\t837\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn04373704\tswivel_chair.n.01\r\nbed\t814\t157\t4\t1\tbed\tbed\tBed\tbed\tbed\tbed\t02818832\tn02818832\tbed.n.01\r\ntrashcan\t688\t12\t39\t6\tgarbage bin\totherfurniture\tFurniture\t\t\ttrash_bin\t02747177\tn02747177\tashcan.n.01\r\npillow\t608\t119\t18\t7\tpillow\tpillow\tObjects\t\t\tpillow\t03938244\tn03938244\tpillow.n.01\r\nsink\t504\t24\t34\t7\tsink\tsink\tObjects\tsink\t\t\t\tn04223580\tsink.n.01\r\npicture\t467\t64\t11\t8\tpicture\tpicture\tPicture\t\t\t\t\tn03931044\tpicture.n.01\r\nwindow\t432\t59\t9\t13\twindow\twindow\tWindow\t\t\t\t\tn04587648\twindow.n.01\r\ntoilet\t402\t124\t33\t7\ttoilet\ttoilet\tObjects\ttoilet\ttoilet\t\t\tn04446276\ttoilet.n.01\r\nbookshelf\t400\t88\t10\t6\tbookshelf\tbookshelf\tFurniture\tbookshelf\t\tbookshelf\t02871439\tn02871439\tbookshelf.n.01\r\nmonitor\t395\t49\t40\t7\tmonitor\totherprop\tObjects\tmonitor\tmonitor\ttv or 
monitor\t03211117\tn03782190\tmonitor.n.04\r\ncomputer\t369\t46\t40\t7\tcomputer\totherprop\tObjects\t\t\t\t\tn03082979\tcomputer.n.01\r\ncurtain\t356\t89\t16\t13\tcurtain\tcurtain\tWindow\tcurtain\t\t\t\tn03151077\tcurtain.n.01\r\nbook\t335\t1\t23\t2\tbook\tbooks\tBooks\t\t\t\t\tn02870526\tbook.n.11\r\narmchair\t318\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn02738535\tarmchair.n.01\r\ncoffee table\t303\t356\t39\t6\tcoffee table\totherfurniture\tFurniture\ttable\ttable\ttable\t04379243\tn03063968\tcoffee_table.n.01\r\ndrawer\t290\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\tn03233905\tdrawer.n.01\r\nbox\t283\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\tn02883344\tbox.n.01\r\nrefrigerator\t269\t17\t24\t6\trefridgerator\trefridgerator\tFurniture\t\t\t\t\tn04070727\trefrigerator.n.01\r\nlamp\t255\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nkitchen cabinet\t252\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\t\t\tn02933112\tcabinet.n.01\r\ndining chair\t242\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ntowel\t222\t135\t27\t7\ttowel\ttowel\tObjects\t\t\t\t\tn04459362\ttowel.n.01\r\nclothes\t214\t141\t21\t7\tclothes\tclothes\tObjects\t\t\t\t\tn02728440\tapparel.n.01\r\ntv\t210\t172\t25\t11\ttelevision\ttelevision\tTV\t\t\ttv or monitor\t03211117\tn03211117\tdisplay.n.06\r\nnightstand\t206\t158\t32\t6\tnight stand\tnight 
stand\tFurniture\tnight_stand\tnight_stand\t\t\tn03015254\tchest_of_drawers.n.01\r\ncounter\t196\t7\t12\t6\tcounter\tcounter\tFurniture\ttable\ttable\ttable\t04379243\tn03116530\tcounter.n.01\r\ndresser\t180\t169\t17\t6\tdresser\tdresser\tFurniture\tdresser\tdresser\t\t\tn03015254\tchest_of_drawers.n.01\r\ncountertop\t176\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\tn03118245\tcountertop.n.01\r\nstool\t165\t150\t40\t7\tstool\totherprop\tObjects\tstool\t\t\t\tn04326896\tstool.n.01\r\ncushion\t141\t119\t18\t7\tpillow\tpillow\tObjects\t\t\t\t\tn03151500\tcushion.n.03\r\nplant\t139\t82\t40\t7\tplant\totherprop\tObjects\tplant\t\t\t\tn00017222\tplant.n.02\r\nceiling\t134\t4\t22\t3\tceiling\tceiling\tCeiling\t\t\t\t\tn02990373\tceiling.n.01\r\nbathtub\t134\t136\t36\t7\tbathtub\tbathtub\tObjects\tbathtub\tbathtub\ttub\t02808440\tn02808440\tbathtub.n.01\r\nbedframe\t132\t157\t4\t1\tbed\tbed\tBed\t\t\t\t\tn02822579\tbedstead.n.01\r\nend table\t125\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ndining table\t123\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nkeyboard\t118\t47\t40\t7\tkeyboard\totherprop\tObjects\tkeyboard\t\tcomputer keyboard\t03085013\tn03085013\tcomputer_keyboard.n.01\r\nbag\t116\t55\t37\t7\tbag\tbag\tObjects\t\t\tsuitcase\t02773838\tn02773838\tbag.n.06\r\nbackpack\t114\t206\t40\t7\tbackpack\totherprop\tObjects\t\t\t\t\tn02769748\tbackpack.n.01\r\ntoilet paper\t113\t139\t40\t7\ttoilet paper\totherprop\tObjects\t\t\t\t\tn15075141\ttoilet_tissue.n.01\r\nprinter\t111\t66\t40\t7\tprinter\totherprop\tObjects\t\t\tprinter\t04004475\tn04004475\tprinter.n.03\r\ntv stand\t103\t291\t39\t6\ttv 
stand\totherfurniture\tFurniture\ttv_stand\t\t\t\tn03290653\tentertainment_center.n.01\r\nwhiteboard\t102\t45\t30\t7\twhiteboard\twhiteboard\tObjects\t\t\t\t\tn03211616\tdisplay_panel.n.01\r\ncarpet\t99\t130\t40\t7\trug\totherprop\tObjects\t\t\t\t\tn04118021\trug.n.01\r\nblanket\t99\t312\t40\t7\tblanket\totherprop\tObjects\t\t\t\t\tn02849154\tblanket.n.01\r\nshower curtain\t99\t123\t28\t7\tshower curtain\tshower curtain\tObjects\tcurtain\t\t\t\tn04209239\tshower_curtain.n.01\r\ntrash can\t94\t12\t39\t6\tgarbage bin\totherfurniture\tFurniture\t\t\ttrash_bin\t02747177\tn02747177\tashcan.n.01\r\ncloset\t94\t772\t39\t6\twardrobe\totherfurniture\tFurniture\twardrobe\t\t\t\t\t\r\nstair\t89\t215\t38\t7\tstairs\totherstructure\tObjects\tstairs\t\t\t\tn04314914\tstep.n.04\r\nmicrowave\t88\t13\t40\t7\tmicrowave\totherprop\tObjects\t\t\tmicrowave\t03761084\tn03761084\tmicrowave.n.02\r\nwashbasin\t86\t24\t34\t7\tsink\tsink\tObjects\tsink\t\t\t\tn04553920\twashbasin.n.01\r\nrug\t85\t130\t40\t7\trug\totherprop\tObjects\t\t\t\t\tn04118021\trug.n.01\r\nstove\t78\t242\t38\t7\tstove\totherstructure\tObjects\t\t\tstove\t04330267\tn04330267\tstove.n.02\r\nshoe\t68\t149\t40\t7\tshoe\totherprop\tObjects\t\t\t\t\tn04199027\tshoe.n.01\r\ncomputer tower\t68\t46\t40\t7\tcomputer\totherprop\tObjects\t\t\t\t\tn03082979\tcomputer.n.01\r\nbottle\t66\t2\t40\t7\tbottle\totherprop\tObjects\tbottle\t\tbottle\t02876657\tn02876657\tbottle.n.01\r\nbin\t64\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\tn02839910\tbin.n.01\r\nottoman\t63\t359\t39\t6\tottoman\totherfurniture\tFurniture\tstool\t\t\t\tn03380724\tfootstool.n.01\r\nbench\t63\t204\t39\t6\tbench\totherfurniture\tFurniture\tbench\t\tbench\t02828884\tn02828884\tbench.n.01\r\nboard\t63\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\nwashing machine\t62\t278\t39\t6\twashing 
machine\totherfurniture\tFurniture\t\t\twashing_machine\t04554684\tn04554684\twasher.n.03\r\nmirror\t62\t122\t19\t7\tmirror\tmirror\tObjects\t\t\t\t\tn03773035\tmirror.n.01\r\ncopier\t61\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03257586\tduplicator.n.01\r\nbasket\t60\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn02801938\tbasket.n.01\r\nsofa chair\t59\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nfile cabinet\t54\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\nfan\t52\t74\t40\t7\tfan\totherprop\tObjects\t\t\t\t\tn03320046\tfan.n.01\r\nlaptop\t52\t37\t40\t7\tlaptop\totherprop\tObjects\tlaptop\t\tlaptop\t03642806\tn03642806\tlaptop.n.01\r\nshower\t49\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn04208936\tshower.n.01\r\npaper\t48\t15\t26\t7\tpaper\tpaper\tObjects\t\t\t\t\tn14974264\tpaper.n.01\r\nperson\t48\t331\t31\t7\tperson\tperson\tObjects\tperson\t\t\t\tn05217688\tperson.n.02\r\nheadboard\t47\t161\t39\t6\theadboard\totherfurniture\tFurniture\t\t\t\t\tn03502200\theadboard.n.01\r\npaper towel dispenser\t47\t14\t40\t7\tpaper towel 
dispenser\totherprop\tObjects\t\t\t\t\t\t\r\nfaucet\t45\t9\t40\t7\tfaucet\totherprop\tObjects\t\t\tfaucet\t03325088\tn03325088\tfaucet.n.01\r\noven\t43\t238\t38\t7\toven\totherstructure\tObjects\t\t\t\t\tn03862676\toven.n.01\r\nfootstool\t42\t359\t39\t6\tottoman\totherfurniture\tFurniture\tstool\t\t\t\tn03380724\tfootstool.n.01\r\nblinds\t42\t80\t13\t13\tblinds\tblinds\tWindow\t\t\t\t\tn02851099\tblind.n.03\r\nrack\t41\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn04038440\track.n.05\r\nplate\t39\t233\t40\t7\tplate\totherprop\tObjects\t\t\t\t\tn03959485\tplate.n.04\r\nblackboard\t38\t225\t38\t7\tblackboard\totherstructure\tObjects\t\t\t\t\tn02846511\tblackboard.n.01\r\npiano\t38\t298\t39\t6\tpiano\totherfurniture\tFurniture\tpiano\t\tpiano\t03928116\tn03928116\tpiano.n.01\r\nheater\t38\t111\t39\t6\theater\totherfurniture\tFurniture\t\t\t\t\tn03508101\theater.n.01\r\nsoap\t37\t133\t40\t7\tsoap\totherprop\tObjects\t\t\t\t\t\t\r\nluggage\t36\t783\t40\t7\tluggage\totherprop\tObjects\t\t\t\t\tn02774630\tbaggage.n.01\r\ncomputer desk\t36\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\ttable\t04379243\tn03179701\tdesk.n.01\r\nrail\t36\t497\t38\t7\trailing\totherstructure\tObjects\t\t\t\t\t\t\r\nradiator\t36\t236\t39\t6\tradiator\totherfurniture\tFurniture\t\t\t\t\tn04041069\tradiator.n.02\r\nrecycle bin\t35\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\ncontainer\t34\t140\t40\t7\tcontainer\totherprop\tObjects\t\t\t\t\tn03094503\tcontainer.n.01\r\nwardrobe\t34\t772\t39\t6\twardrobe\totherfurniture\tFurniture\twardrobe\t\t\t\tn04550184\twardrobe.n.01\r\nsoap 
dispenser\t32\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04254120\tsoap_dispenser.n.01\r\ntelephone\t32\t32\t40\t7\ttelephone\totherprop\tObjects\t\t\ttelephone\t04401088\tn04401088\ttelephone.n.01\r\nbucket\t32\t427\t40\t7\tbucket\totherprop\tObjects\t\t\t\t\tn02909870\tbucket.n.01\r\nclock\t31\t56\t40\t7\tclock\totherprop\tObjects\t\t\tclock\t03046257\tn03046257\tclock.n.01\r\nstand\t29\t50\t39\t6\tstand\totherfurniture\tFurniture\ttable\ttable\ttable\t04379243\tn04301000\tstand.n.04\r\nlight\t27\t62\t38\t7\tlight\totherstructure\tObjects\t\t\t\t\tn03665366\tlight.n.02\r\nlaundry basket\t27\t164\t40\t7\tlaundry basket\totherprop\tObjects\t\t\tbasket\t02801938\tn03050864\tclothes_hamper.n.01\r\npipe\t27\t41\t40\t7\tpipe\totherprop\tObjects\t\t\t\t\t\t\r\nround table\t26\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04114554\tround_table.n.02\r\nroof\t25\t4\t22\t3\tceiling\tceiling\tCeiling\t\t\t\t\tn04105068\troof.n.01\r\nclothes dryer\t25\t\t39\t6\t\totherfurniture\tFurniture\t\t\t\t\tn03251766\tdryer.n.01\r\ncoat\t23\t324\t40\t7\tjacket\totherprop\tObjects\t\t\t\t\tn03057021\tcoat.n.01\r\nguitar\t23\t300\t40\t7\tguitar\totherprop\tObjects\tguitar\t\tguitar\t03467517\tn03467517\tguitar.n.01\r\ndesk chair\t23\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nsheet\t22\t559\t40\t7\tsheet\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet paper holder\t22\t647\t40\t7\ttoilet paper holder\totherprop\tObjects\t\t\t\t\t\t\r\nseat\t22\t524\t39\t6\tfurniture\totherfurniture\tFurniture\t\t\t\t\tn04161981\tseat.n.03\r\nstep\t21\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn04314914\tstep.n.04\r\nspeaker\t20\t54\t40\t7\tspeaker\totherprop\tObjects\t\t\tspeaker\t03691459\tn03691459\tloudspeaker.n.01\r\nvending 
machine\t19\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\tn04525305\tvending_machine.n.01\r\ncolumn\t19\t94\t38\t7\tcolumn\totherstructure\tObjects\t\t\t\t\tn03074380\tcolumn.n.06\r\nbicycle\t18\t189\t40\t7\tbicycle\totherprop\tObjects\t\t\tbicycle\t02834778\tn02834778\tbicycle.n.01\r\nladder\t18\t48\t39\t6\tladder\totherfurniture\tFurniture\tstairs\t\t\t\tn03632277\tladder.n.01\r\ncover\t18\t312\t40\t7\tblanket\totherprop\tObjects\t\t\t\t\t\t\r\nhandle\t18\t758\t40\t7\thandle\totherprop\tObjects\t\t\t\t\tn03485997\thandle.n.01\r\nbathroom stall\t18\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn02873839\tbooth.n.02\r\nfoosball table\t17\t510\t39\t6\tfoosball table\totherfurniture\tFurniture\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ntable lamp\t17\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn04380533\ttable_lamp.n.01\r\nshower wall\t17\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nchest\t17\t344\t39\t6\tchest\totherfurniture\tFurniture\tdresser\tdresser\t\t\t\t\r\ncup\t17\t35\t40\t7\tcup\totherprop\tObjects\tcup\t\tcup or mug\t03797390\tn03797390\tmug.n.04\r\njacket\t16\t324\t40\t7\tjacket\totherprop\tObjects\t\t\t\t\tn03589791\tjacket.n.01\r\nstorage bin\t16\t812\t40\t7\tstorage bin\totherprop\tObjects\t\t\t\t\t\t\r\nscreen\t16\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncoffee maker\t16\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03063338\tcoffee_maker.n.01\r\nhamper\t15\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn03482405\thamper.n.02\r\ndishwasher\t15\t8\t38\t7\tdishwasher\totherstructure\tObjects\t\t\tdishwasher\t03207941\tn03207941\tdishwasher.n.01\r\nwindow frame\t15\t477\t38\t7\twindow frame\totherstructure\tObjects\t\t\t\t\tn04589593\twindow_frame.n.01\r\npaper towel\t15\t113\t40\t7\tpaper towel\totherprop\tObjects\t\t\t\t\tn03887697\tpaper_towel.n.01\r\nmachine\t15\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\tn03699975\tmachine.n.01\r\nmat\t15\t143\t20\t5\tfloor mat\tfloor 
mat\tFloor\t\t\t\t\tn03727837\tmat.n.01\r\nwindowsill\t14\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn04590263\twindowsill.n.01\r\ntap\t14\t9\t40\t7\tfaucet\totherprop\tObjects\t\t\tfaucet\t03325088\tn04559451\twater_faucet.n.01\r\npool table\t14\t515\t39\t6\tpool table\totherfurniture\tFurniture\ttable\ttable\ttable\t04379243\tn03982430\tpool_table.n.01\r\nhand dryer\t14\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbar\t14\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\tn02788689\tbar.n.03\r\nframe\t14\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\nrolling chair\t14\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ntoaster\t14\t251\t40\t7\ttoaster\totherprop\tObjects\t\t\t\t\tn04442312\ttoaster.n.02\r\nwall frame\t14\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\nhanger\t13\t211\t40\t7\thanger\totherprop\tObjects\t\t\t\t\tn03490884\thanger.n.02\r\nconference table\t13\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn03090000\tconference_table.n.01\r\nhandrail\t13\t453\t38\t7\tbanister\totherstructure\tObjects\t\t\t\t\tn02788148\tbannister.n.02\r\ntreadmill\t13\t458\t39\t6\ttreadmill\totherfurniture\tFurniture\t\t\t\t\tn04477387\ttreadmill.n.01\r\nbulletin board\t13\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\nironing board\t13\t313\t39\t6\tironing board\totherfurniture\tFurniture\t\t\t\t\tn03586090\tironing_board.n.01\r\nfireplace\t12\t372\t38\t7\tfireplace\totherstructure\tObjects\t\t\t\t\tn03346455\tfireplace.n.01\r\nsoap dish\t12\t638\t40\t7\tsoap dish\totherprop\tObjects\t\t\t\t\tn04254009\tsoap_dish.n.01\r\nfabric\t12\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03309808\tfabric.n.01\r\nkitchen counter\t12\t7\t12\t6\tcounter\tcounter\tFurniture\ttable\ttable\ttable\t04379243\tn03116530\tcounter.n.01\r\nglass\t12\t612\t38\t7\tglass\totherstructure\tObjects\t\t\t\t\tn03438257\tglass.n.02\r\ndoorframe\t11\t615\t38\t7\tdoor 
frame\totherstructure\tObjects\t\t\t\t\tn03222722\tdoorframe.n.01\r\ntable cushion\t11\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet paper dispenser\t11\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nslab\t11\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn04233405\tslab.n.01\r\nmini fridge\t11\t17\t24\t6\trefridgerator\trefridgerator\tFurniture\t\t\t\t\tn03273913\telectric_refrigerator.n.01\r\nfire extinguisher\t11\t10\t40\t7\tfire extinguisher\totherprop\tObjects\t\t\t\t\tn03345837\tfire_extinguisher.n.01\r\nshampoo\t11\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nball\t11\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\t\t\r\nhat\t11\t193\t40\t7\that\totherprop\tObjects\t\t\t\t\tn03497657\that.n.01\r\nshower curtain rod\t11\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\njunk\t11\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn14857897\tdebris.n.01\r\nsoap holder\t10\t506\t40\t7\tsoap holder\totherprop\tObjects\t\t\t\t\t\t\r\nstaircase\t10\t215\t38\t7\tstairs\totherstructure\tObjects\t\t\t\t\tn04298308\tstairway.n.01\r\ntoiletry\t10\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04447443\ttoiletry.n.01\r\nstall door\t10\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\nframed picture\t10\t64\t11\t8\tpicture\tpicture\tPicture\t\t\t\t\t\t\r\nwater cooler\t10\t509\t39\t6\twater cooler\totherfurniture\tFurniture\t\t\t\t\tn04559166\twater_cooler.n.01\r\nbags\t10\t\t40\t7\t\totherprop\tObjects\t\t\tsuitcase\t02773838\tn02773838\tbag.n.06\r\ndesk lamp\t10\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\npaper cutter\t10\t108\t40\t7\tpaper cutter\totherprop\tObjects\t\t\t\t\tn03886940\tpaper_cutter.n.01\r\nled tv\t9\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nswitch\t9\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04372370\tswitch.n.01\r\nbed sheet\t9\t559\t40\t7\tsheet\totherprop\tObjects\t\t\t\t\tn04188179\tsheet.n.03\r\nroof 
frame\t9\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\ntray\t9\t179\t40\t7\ttray\totherprop\tObjects\t\t\t\t\tn04476259\ttray.n.01\r\ncomforter\t9\t484\t40\t7\tcomforter\totherprop\tObjects\t\t\t\t\tn04033995\tquilt.n.01\r\nair conditioner\t9\t79\t38\t7\tair conditioner\totherstructure\tObjects\t\t\t\t\tn02686379\tair_conditioner.n.01\r\nshower door\t9\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\nshirt\t9\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04197391\tshirt.n.01\r\nswivel chair\t9\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn04373704\tswivel_chair.n.01\r\npillar\t9\t94\t38\t7\tcolumn\totherstructure\tObjects\t\t\t\t\tn03073977\tcolumn.n.07\r\ndetergent\t9\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nledge\t9\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn09337253\tledge.n.01\r\nvase\t8\t78\t40\t7\tvase\totherprop\tObjects\tvase\t\tjar\t03593526\tn04522168\tvase.n.01\r\ntoaster oven\t8\t275\t40\t7\ttoaster oven\totherprop\tObjects\t\t\t\t\tn04442441\ttoaster_oven.n.01\r\nbedpost\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02821415\tbedpost.n.01\r\nfood\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn00021265\tfood.n.01\r\npicture frame\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03931765\tpicture_frame.n.01\r\npoltrone\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstudy table\t8\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\noffice table\t8\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nmouse\t8\t103\t40\t7\tmouse\totherprop\tObjects\t\t\t\t\tn03793489\tmouse.n.04\r\nstorage\t8\t\t\t\t\t\t\t\t\t\t\tn03744276\tmemory.n.04\r\nnerf gun\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntable chair\t8\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nnight table\t8\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ncomputer 
chair\t8\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ntoilet seat liner dispenser\t8\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbackrest\t7\t5\t5\t4\tchair\tchair\tChair\t\t\t\t\tn02767433\tback.n.08\r\nchair seat\t7\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsink cabinet\t7\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\ncan\t7\t329\t40\t7\tcan\totherprop\tObjects\t\t\tcan\t02946921\tn02946921\tcan.n.01\r\nfurniture\t7\t524\t39\t6\tfurniture\totherfurniture\tFurniture\t\t\t\t\tn03405725\tfurniture.n.01\r\ncart\t7\t305\t40\t7\tcart\totherprop\tObjects\t\t\t\t\tn03484083\thandcart.n.01\r\nstool chair\t7\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nstep stool\t7\t276\t40\t7\tstep stool\totherprop\tObjects\tstool\t\t\t\tn04315713\tstep_stool.n.01\r\nrobe\t7\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04097866\trobe.n.01\r\ntable stand\t7\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nstall\t7\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn02873839\tbooth.n.02\r\ndispenser\t7\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03210683\tdispenser.n.01\r\nstorage container\t7\t140\t40\t7\tcontainer\totherprop\tObjects\t\t\t\t\t\t\r\nside table\t7\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\npartition\t7\t21\t1\t12\twall\twall\tWall\t\t\t\t\tn03894379\tpartition.n.01\r\nappliance\t7\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlotion\t7\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03690938\tlotion.n.01\r\npot\t7\t16\t40\t7\tpot\totherprop\tObjects\t\t\t\t\t\t\r\nphoto\t7\t508\t40\t7\tphoto\totherprop\tObjects\t\t\t\t\tn03925226\tphotograph.n.01\r\ntoilet brush\t7\t630\t40\t7\ttoilet brush\totherprop\tObjects\t\t\t\t\t\t\r\nscale\t7\t639\t40\t7\tscale\totherprop\tObjects\t\t\t\t\tn04141975\tscale.n.07\r\ntissue box\t7\t138\t40\t7\ttissue 
box\totherprop\tObjects\t\t\t\t\t\t\r\nremote\t7\t\t40\t7\t\totherprop\tObjects\t\t\tremote_control\t04074963\tn04074963\tremote_control.n.01\r\nlight switch\t6\t301\t38\t7\tlight switch\totherstructure\tObjects\t\t\t\t\t\t\r\ncrate\t6\t183\t39\t6\tcrate\totherfurniture\tFurniture\t\t\t\t\tn03127925\tcrate.n.01\r\nping pong table\t6\t625\t39\t6\tping pong table\totherfurniture\tFurniture\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nplatform\t6\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\npantry\t6\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn03885535\tpantry.n.01\r\nbath cabinet\t6\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\nslipper\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04241394\tslipper.n.01\r\nsideboard\t6\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\t\t\r\nholder\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03525454\tholder.n.01\r\nworktop\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noutlet\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04548771\twall_socket.n.01\r\ngas cooker\t6\t242\t38\t7\tstove\totherstructure\tObjects\t\t\t\t\tn03425595\tgas_range.n.01\r\ndoorhandle\t6\t652\t40\t7\tknob\totherprop\tObjects\t\t\t\t\tn03222959\tdoorknob.n.01\r\ncutting board\t6\t247\t40\t7\tcutting board\totherprop\tObjects\t\t\t\t\tn03025513\tchopping_board.n.01\r\nbathroom sink\t6\t24\t34\t7\tsink\tsink\tObjects\tsink\t\t\t\tn04223580\tsink.n.01\r\ncontroller\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03096960\tcontrol.n.09\r\nbedding set\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmount\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndecoration\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03169390\tdecoration.n.01\r\ntablet\t6\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nreading table\t6\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nfloor 
covered\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncooker\t6\t267\t40\t7\tutensil\totherprop\tObjects\t\t\t\t\tn03101156\tcooker.n.01\r\nfile\t6\t75\t40\t7\tfile\totherprop\tObjects\t\t\tfiling_cabinet\t03337140\tn03337140\tfile.n.03\r\narchive\t6\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02735086\tarchive.n.01\r\ntrolley\t5\t504\t40\t7\ttrolley\totherprop\tObjects\t\t\t\t\tn04335435\tstreetcar.n.01\r\nwainscoting\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlampshade\t5\t859\t40\t7\tlamp shade\totherprop\tObjects\t\t\t\t\tn03637318\tlampshade.n.01\r\nchina\t5\t267\t40\t7\tutensil\totherprop\tObjects\t\t\t\t\tn03018209\tchina.n.02\r\nsign\t5\t208\t40\t7\tsign\totherprop\tObjects\t\t\t\t\tn04217882\tsignboard.n.01\r\nfax machine\t5\t68\t40\t7\tfax machine\totherprop\tObjects\t\t\t\t\t\t\r\nmirror frame\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nprojector\t5\t90\t40\t7\tprojector\totherprop\tObjects\t\t\t\t\tn04009552\tprojector.n.02\r\nsweater\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04370048\tsweater.n.01\r\npaint can\t5\t329\t40\t7\tcan\totherprop\tObjects\t\t\tcan\t02946921\tn02946921\tcan.n.01\r\nheat register\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen table\t5\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn03620967\tkitchen_table.n.01\r\nglobe\t5\t347\t40\t7\tglobe\totherprop\tObjects\t\t\t\t\t\t\r\ntoy\t5\t389\t40\t7\ttoy\totherprop\tObjects\t\t\t\t\tn03964744\tplaything.n.01\r\nkitchen worktop\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npaper roll\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmeeting table\t5\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nvaze\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall clock\t5\t56\t40\t7\tclock\totherprop\tObjects\t\t\tclock\t03046257\tn04548280\twall_clock.n.01\r\ncloset door\t5\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\npack\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndoormat\t5\t143\t20\t5\tfloor 
mat\tfloor mat\tFloor\t\t\t\t\tn03223299\tdoormat.n.02\r\ntissue\t5\t648\t40\t7\ttissue\totherprop\tObjects\t\t\t\t\t\t\r\nplastic container\t5\t140\t40\t7\tcontainer\totherprop\tObjects\t\t\t\t\t\t\r\nstatue\t5\t294\t40\t7\tsculpture\totherprop\tObjects\t\t\t\t\tn04306847\tstatue.n.01\r\ndollhouse\t5\t486\t39\t6\tdoll house\totherfurniture\tFurniture\t\t\t\t\tn03219483\tdollhouse.n.01\r\nvacuum\t5\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04517823\tvacuum.n.04\r\nwet floor sign\t5\t208\t40\t7\tsign\totherprop\tObjects\t\t\t\t\t\t\r\nvanity\t5\t169\t17\t6\tdresser\tdresser\tFurniture\tdresser\tdresser\ttable\t04379243\tn03238586\tdressing_table.n.01\r\ncandle\t5\t137\t40\t7\tcandle\totherprop\tObjects\tlamp\t\t\t\tn02948072\tcandle.n.01\r\nlibrary desk\t5\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\t\t\t\t\r\ncarton box\t5\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\neasel\t5\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn03262809\teasel.n.01\r\nwall lamp\t5\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nwall hanging\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03491178\thanging.n.01\r\nface wash\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncorner\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlounge chair\t4\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03262932\teasy_chair.n.01\r\nbeanbag\t4\t797\t39\t6\tbean bag\totherfurniture\tFurniture\t\t\t\t\tn02816656\tbeanbag.n.01\r\nmarker holder\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndumbell\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nping pong paddle\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlocker\t4\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\t\t\t\t\r\nplunger\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03970156\tplunger.n.03\r\nsoap bar\t4\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\t\t\r\nstudent chair\t4\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\noffice 
object\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstuffed animal\t4\t177\t40\t7\tstuffed animal\totherprop\tObjects\t\t\t\t\t\t\r\nwater fountain\t4\t339\t38\t7\twater fountain\totherstructure\tObjects\t\t\t\t\tn03241335\tdrinking_fountain.n.01\r\ndoorknob\t4\t27\t40\t7\tdoor knob\totherprop\tObjects\t\t\t\t\tn03222959\tdoorknob.n.01\r\nfootrest\t4\t163\t39\t6\tfoot rest\totherfurniture\tFurniture\tstool\t\t\t\tn03380724\tfootstool.n.01\r\nac unit\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsafe\t4\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\tn04125021\tsafe.n.01\r\ntile\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04435180\ttile.n.01\r\neasle\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nheadphone\t4\t160\t40\t7\theadphones\totherprop\tObjects\t\t\theadphone\t03261776\tn03261776\tearphone.n.01\r\ndress\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03236735\tdress.n.01\r\nrolling cart\t4\t305\t40\t7\tcart\totherprop\tObjects\t\t\t\t\t\t\r\nchest of drawers\t4\t524\t39\t6\tfurniture\totherfurniture\tFurniture\tdresser\tdresser\t\t\tn03015254\tchest_of_drawers.n.01\r\nplastic bin\t4\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\npail\t4\t427\t40\t7\tbucket\totherprop\tObjects\t\t\t\t\tn02909870\tbucket.n.01\r\ndry erase board\t4\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\ncoatrack\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03059103\tcoatrack.n.01\r\nrecliner\t4\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn04062428\trecliner.n.01\r\nroomba\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhighchair\t4\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03518445\thighchair.n.01\r\ndish rack\t4\t581\t40\t7\tdish rack\totherprop\tObjects\t\t\t\t\tn03207630\tdish_rack.n.01\r\ndartboard\t4\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn03162940\tdartboard.n.01\r\nbroom\t4\t328\t40\t7\tbroom\totherprop\tObjects\t\t\t\t\tn02906734\tbroom.n.01\r\nbook 
rack\t4\t224\t39\t6\tbookrack\totherfurniture\tFurniture\t\t\t\t\t\t\r\neraser\t4\t100\t40\t7\teraser\totherprop\tObjects\t\t\t\t\tn03294833\teraser.n.01\r\nbath mat\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02807401\tbath_mat.n.01\r\ntextile\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03309808\tfabric.n.01\r\npaper box\t4\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nguitar case\t4\t771\t40\t7\tguitar case\totherprop\tObjects\t\t\t\t\t\t\r\nmop\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04367480\tswab.n.02\r\nlavatory\t4\t\t40\t7\t\totherprop\tObjects\ttoilet\ttoilet\t\t\t\t\r\nserver\t4\t360\t40\t7\tserver\totherprop\tObjects\t\t\t\t\t\t\r\npaper towel holder\t4\t281\t40\t7\tpaper towel holder\totherprop\tObjects\t\t\t\t\t\t\r\noffice supply\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npanel\t4\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\ntoilet brush holder\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmagazine\t4\t71\t40\t7\tmagazine\totherprop\tObjects\t\t\t\t\tn06595351\tmagazine.n.01\r\nkitchen rack\t4\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn04038440\track.n.05\r\ntable object\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nrange hood\t4\t380\t38\t7\trange hood\totherstructure\tObjects\trange_hood\t\t\t\tn04053677\trange_hood.n.01\r\nbath\t4\t136\t36\t7\tbathtub\tbathtub\tObjects\tbathtub\tbathtub\ttub\t02808440\tn02808440\tbathtub.n.01\r\ntrim\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04484160\ttrimming.n.02\r\nscanner\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbathrobe\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02807616\tbathrobe.n.01\r\ndoor and post\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npouff\t4\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nspring curtain\t3\t89\t16\t13\tcurtain\tcurtain\tWindow\tcurtain\t\t\t\t\t\r\nrecycle\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfax\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03316105\tfacsimile.n.02\r\nrolling 
shelf\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nflat-screen television\t3\t172\t25\t11\ttelevision\ttelevision\tTV\t\t\t\t\t\t\r\nfuton\t3\t576\t39\t6\tmattress\totherfurniture\tFurniture\t\t\t\t\tn03408444\tfuton.n.01\r\nstack of chairs\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndustpan\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03259009\tdustpan.n.02\r\nhand towel\t3\t135\t27\t7\ttowel\ttowel\tObjects\t\t\t\t\tn03490006\thand_towel.n.01\r\nfloor lamp\t3\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03367059\tfloor_lamp.n.01\r\nmainboard\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen shelf\t3\t42\t15\t6\tshelves\tshelves\tFurniture\tbookshelf\t\tbookshelf\t02871439\tn02871439\tbookshelf.n.01\r\norganizer\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03918737\tpersonal_digital_assistant.n.01\r\nfreezer\t3\t17\t24\t6\trefridgerator\trefridgerator\tFurniture\t\t\t\t\tn03170635\tdeep-freeze.n.01\r\nfurnace\t3\t551\t39\t6\tfurnace\totherfurniture\tFurniture\t\t\t\t\tn03404449\tfurnace.n.01\r\nstock\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmap\t3\t107\t40\t7\tmap\totherprop\tObjects\t\t\t\t\tn03720163\tmap.n.01\r\nhelmet\t3\t\t40\t7\t\totherprop\tObjects\t\t\thelmet\t03513137\tn03513137\thelmet.n.02\r\nwallpaper\t3\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\nwall cabinet\t3\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\noffice equipment\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhair dryer\t3\t577\t40\t7\thair dryer\totherprop\tObjects\t\t\t\t\tn03483316\thand_blower.n.01\r\nbacksplash\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nexercise ball\t3\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\t\t\r\njeremiah\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn11082842\tjeremiah.n.01\r\nflush\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfridge just\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfolded clothes\t3\t141\t21\t7\tclothes\tclothes\tObjects\t\t\t\t\t\t\r\nwindow 
counter\t3\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\t\t\r\niron\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03584829\tiron.n.04\r\nstudio light\t3\t62\t38\t7\tlight\totherstructure\tObjects\t\t\t\t\t\t\r\nsconce\t3\t62\t38\t7\tlight\totherstructure\tObjects\t\t\t\t\tn04148703\tsconce.n.03\r\nsofa set\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbaseboard\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02800354\tbaseboard.n.01\r\nsink counter\t3\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\t\t\r\nkitchen slab\t3\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\tn04233405\tslab.n.01\r\ncabinet door\t3\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\nexercise machine\t3\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nwood\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nteatowels\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nworkbench\t3\t204\t39\t6\tbench\totherfurniture\tFurniture\tbench\t\ttable\t04379243\tn04600486\tworkbench.n.01\r\nbackwall\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncubby\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03144365\tcubby.n.01\r\nwater bottle\t3\t2\t40\t7\tbottle\totherprop\tObjects\tbottle\t\tbottle\t02876657\tn04557648\twater_bottle.n.01\r\nkitchen sink\t3\t24\t34\t7\tsink\tsink\tObjects\tsink\t\t\t\tn03620889\tkitchen_sink.n.01\r\nsink area\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhandicap bar\t3\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\t\t\r\npainter\t3\t594\t40\t7\tcat\totherprop\tObjects\t\t\t\t\tn02125311\tcougar.n.01\r\ntank\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwashstand\t3\t524\t39\t6\tfurniture\totherfurniture\tFurniture\t\t\t\t\tn04555400\twashstand.n.01\r\npurse\t3\t181\t40\t7\tpurse\totherprop\tObjects\t\t\t\t\tn02774152\tbag.n.04\r\nsurface\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02688443\tairfoil.n.01\r\ntowel 
rack\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04459773\ttowel_rack.n.01\r\ndecor\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03579355\tinterior_decoration.n.01\r\nhandwash\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbar stool\t3\t150\t40\t7\tstool\totherprop\tObjects\tstool\t\t\t\t\t\r\npan\t3\t589\t40\t7\tpan\totherprop\tObjects\t\t\t\t\tn03880531\tpan.n.01\r\nair propeller\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npaneling\t3\t21\t1\t12\twall\twall\tWall\t\t\t\t\tn03882611\tpaneling.n.01\r\nvent\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen junk\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn14857897\tdebris.n.01\r\npiano bench\t3\t460\t39\t6\tpiano bench\totherfurniture\tFurniture\tbench\t\tbench\t02828884\tn02828884\tbench.n.01\r\nbunk bed\t3\t804\t39\t6\tbunk bed\totherfurniture\tFurniture\tbed\tbed\tbed\t02818832\tn02920259\tbunk_bed.n.01\r\nbed lamp\t3\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nshower mat\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedding\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02820210\tbedclothes.n.01\r\nshoe rack\t3\t614\t40\t7\tshoe rack\totherprop\tObjects\t\t\t\t\t\t\r\nnotice board\t3\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn02916538\tbulletin_board.n.02\r\nshower floor\t3\t11\t2\t5\tfloor\tfloor\tFloor\t\t\t\t\t\t\r\nbrush\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02908217\tbrush.n.02\r\ntable football\t3\t166\t40\t7\tfootball\totherprop\tObjects\t\t\t\t\t\t\r\npadded bench\t3\t204\t39\t6\tbench\totherfurniture\tFurniture\tbench\t\tbench\t02828884\tn02828884\tbench.n.01\r\nbathroom carpet\t3\t130\t40\t7\trug\totherprop\tObjects\t\t\t\t\tn04118021\trug.n.01\r\nshowerhead\t3\t650\t40\t7\tshower head\totherprop\tObjects\t\t\t\t\tn04209383\tshowerhead.n.01\r\nloft\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchair 
w/table\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedhead\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nroll\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04101375\troll.n.04\r\ncomidin\t3\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncardboard box\t3\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\ncushion stool\t2\t150\t40\t7\tstool\totherprop\tObjects\tstool\t\t\t\t\t\r\nbed cabinet\t2\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\npile of clothes\t2\t141\t21\t7\tclothes\tclothes\tObjects\t\t\t\t\t\t\r\ncase\t2\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\t\t\r\nslep\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nswiffer\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstapler\t2\t67\t40\t7\tstapler\totherprop\tObjects\t\t\t\t\tn04303497\tstapler.n.01\r\ncable\t2\t450\t40\t7\tcables\totherprop\tObjects\t\t\t\t\t\t\r\nwork desk\t2\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\t\t\t\t\r\nfloor carpet\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedside\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn08649711\tbedside.n.01\r\ntrash bag\t2\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\t\t\r\nheating devce\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsofa table\t2\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nventilator\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04526964\tventilator.n.01\r\ncat\t2\t594\t40\t7\tcat\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen utensil\t2\t267\t40\t7\tutensil\totherprop\tObjects\t\t\t\t\tn03621049\tkitchen_utensil.n.01\r\ncounter top for sink\t2\t24\t34\t7\tsink\tsink\tObjects\tsink\t\t\t\t\t\r\nbathroom frame\t2\t\t38\t7\t\totherstructure\tObjects\t\t\t\t\t\t\r\nbanister\t2\t453\t38\t7\tbanister\totherstructure\tObjects\t\t\t\t\tn02788148\tbannister.n.02\r\ntuvalet kağıdı\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntrunk\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nblank 
screen\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntire\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04440749\ttire.n.01\r\nscreen curtain\t2\t89\t16\t13\tcurtain\tcurtain\tWindow\tcurtain\t\t\t\t\t\r\ncooking range\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndressware\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nblow up matress\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nshred bin\t2\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\nair matress\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfolder\t2\t69\t40\t7\tfolder\totherprop\tObjects\t\t\t\t\tn03376279\tfolder.n.02\r\nroom heater\t2\t111\t39\t6\theater\totherfurniture\tFurniture\t\t\t\t\t\t\r\ncar\t2\t530\t40\t7\tcar\totherprop\tObjects\tcar\t\tcar\t02958343\tn02958343\tcar.n.01\r\nmassage armchair\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwardrobe door\t2\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\ncoffee supply\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntissue holder\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntab\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nknickknack\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nindoor plant\t2\t82\t40\t7\tplant\totherprop\tObjects\tplant\t\t\t\t\t\r\nnotebook\t2\t210\t40\t7\tnotebook\totherprop\tObjects\t\t\t\t\tn03832673\tnotebook.n.02\r\nwater dispenser\t2\t507\t40\t7\twater dispenser\totherprop\tObjects\t\t\t\t\tn03210683\tdispenser.n.01\r\ncleaning supply\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlibrary table\t2\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nbin cover\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nteakettle\t2\t243\t40\t7\ttea kettle\totherprop\tObjects\t\t\t\t\tn04397768\tteakettle.n.01\r\nreservoir\t2\t263\t40\t7\tvessel\totherprop\tObjects\t\t\t\t\tn04078574\treservoir.n.03\r\narc sofa\t2\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\nbedside 
lamp\t2\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\npicture window\t2\t59\t9\t13\twindow\twindow\tWindow\t\t\t\t\tn03932080\tpicture_window.n.01\r\nmedicine cabinet\t2\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn03742115\tmedicine_chest.n.01\r\ncosmetic bag\t2\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\t\t\r\ncoffee\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbrush holder\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nduvet\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03266749\teiderdown.n.01\r\nflower stand\t2\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nbedcover\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02822220\tbedspread.n.01\r\nside\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn09437454\tslope.n.01\r\nweight\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04571292\tweight.n.02\r\npitcher\t2\t273\t40\t7\tpitcher\totherprop\tObjects\t\t\t\t\tn03950228\tpitcher.n.02\r\nschoolbag\t2\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\tn04146343\tschoolbag.n.01\r\nwall picture\t2\t64\t11\t8\tpicture\tpicture\tPicture\t\t\t\t\t\t\r\nmetal handrail\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nicebox\t2\t17\t24\t6\trefridgerator\trefridgerator\tFurniture\t\t\t\t\tn04070727\trefrigerator.n.01\r\nexercise equipment\t2\t457\t39\t6\texcercise equipment\totherfurniture\tFurniture\t\t\t\t\t\t\r\nloft bed\t2\t157\t4\t1\tbed\tbed\tBed\tbed\tbed\tbed\t02818832\tn02818832\tbed.n.01\r\ntennis table\t2\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nbookbag\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nscarf\t2\t240\t40\t7\tscarf\totherprop\tObjects\t\t\t\t\tn04143897\tscarf.n.01\r\ncabinet drawer\t2\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\ndouble door\t2\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\tn03226880\tdouble_door.n.01\r\nbooks in a book shelf\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nentertainment 
stand\t2\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\npotty\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03004275\tchamberpot.n.01\r\njohn\t2\t\t40\t7\t\totherprop\tObjects\ttoilet\ttoilet\t\t\tn04446276\ttoilet.n.01\r\ndesk accessory\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchair support\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nshopping bag\t2\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\tn04204081\tshopping_bag.n.01\r\ntree\t2\t82\t40\t7\tplant\totherprop\tObjects\tplant\t\t\t\tn13104059\ttree.n.01\r\ndustpan broom\t2\t328\t40\t7\tbroom\totherprop\tObjects\t\t\t\t\t\t\r\nstall wall\t2\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\ntường\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedside drawer\t2\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\nchaise\t2\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03002711\tchaise_longue.n.01\r\ncurtain rod\t2\t582\t38\t7\tcurtain rod\totherstructure\tObjects\t\t\t\t\t\t\r\nboardgame\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncccvurtains\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndevice\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03183080\tdevice.n.01\r\narmrest\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02741475\tarmrest.n.01\r\nalarm\t2\t525\t40\t7\talarm\totherprop\tObjects\t\t\tclock\t03046257\tn02694662\talarm_clock.n.01\r\ntowel rail\t2\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\tn04459909\ttowel_rail.n.01\r\nsliding wood doors\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbackbag\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbath curtain\t2\t89\t16\t13\tcurtain\tcurtain\tWindow\tcurtain\t\t\t\t\t\r\nwashcloth\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04554523\twashcloth.n.01\r\nbean bag chair\t2\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ntoolbox\t2\t344\t39\t6\tchest\totherfurniture\tFurniture\t\t\t\t\tn04452615\ttoolbox.n.01\r\nrack shelf\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkleenex 
box\t2\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nair freshner\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsewing machine\t2\t890\t40\t7\tsewing machine\totherprop\tObjects\t\t\t\t\tn04179913\tsewing_machine.n.01\r\nwashing and drying machine\t2\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nhairbrush\t2\t120\t40\t7\thair brush\totherprop\tObjects\t\t\t\t\tn03475581\thairbrush.n.01\r\nlap desk\t2\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\t\t\t\t\r\nhutch\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstack of flat items/possibly cardboard\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nunder-bed drawer\t2\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\nbleach\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncomidine\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbathmat\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncinder block\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03031957\tcinder_block.n.01\r\nmailbox\t2\t26\t29\t7\tbox\tbox\tObjects\t\t\tmailbox\t03710193\tn03710193\tmailbox.n.01\r\nwriting board\t2\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn04608127\twriting_board.n.01\r\nflooring\t2\t11\t2\t5\tfloor\tfloor\tFloor\t\t\t\t\tn03365592\tfloor.n.01\r\nwall of safety boxes\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvessel\t2\t263\t40\t7\tvessel\totherprop\tObjects\t\t\twatercraft\t04530566\tn04530566\tvessel.n.02\r\ncube sofa\t2\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\ntoothpaste\t2\t128\t40\t7\ttoothpaste\totherprop\tObjects\t\t\t\t\t\t\r\nliquid dispencer\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfolding 
chair\t2\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03376595\tfolding_chair.n.01\r\nsquash\t2\t717\t40\t7\tsquash\totherprop\tObjects\tplant\t\t\t\tn12158798\tsquash.n.01\r\ngrille\t2\t700\t38\t7\tgrill\totherstructure\tObjects\t\t\t\t\t\t\r\ncenterpiece\t2\t878\t40\t7\tcenterpiece\totherprop\tObjects\t\t\t\t\tn02994419\tcenterpiece.n.02\r\nwall folder\t2\t69\t40\t7\tfolder\totherprop\tObjects\t\t\t\t\t\t\r\ntowel hanger\t2\t211\t40\t7\thanger\totherprop\tObjects\t\t\t\t\tn03490884\thanger.n.02\r\ntoilet pot\t2\t16\t40\t7\tpot\totherprop\tObjects\t\t\t\t\t\t\r\naid\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nrope\t2\t560\t40\t7\trope\totherprop\tObjects\t\t\t\t\tn04108268\trope.n.01\r\nenvelop rack\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntissue roll\t2\t764\t40\t7\ttissue roll\totherprop\tObjects\t\t\t\t\t\t\r\nrostrum\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03159640\tdais.n.01\r\nowen\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nelectric panel\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbowl\t2\t22\t40\t7\tbowl\totherprop\tObjects\tbowl\t\tbowl\t02880940\tn02880940\tbowl.n.03\r\nboiler\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntile wall\t2\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nreflection\t2\t64\t11\t8\tpicture\tpicture\tPicture\t\t\t\t\tn04068976\treflection.n.05\r\ncrib\t2\t485\t39\t6\tcrib\totherfurniture\tFurniture\t\t\t\t\t\t\r\nshelves of stuff\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen gadget\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02729965\tappliance.n.01\r\nsliding door\t2\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\tn04239074\tsliding_door.n.01\r\npaper bag\t2\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\tn04122825\tsack.n.01\r\nwater heater\t2\t588\t40\t7\twater heater\totherprop\tObjects\t\t\t\t\tn04560113\twater_heater.n.01\r\nalarm clock\t2\t156\t40\t7\talarm clock\totherprop\tObjects\t\t\tclock\t03046257\tn02694662\talarm_clock.n.01\r\nchair 
rail\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncorkboard\t2\t34\t40\t7\tcork board\totherprop\tObjects\t\t\t\t\tn14823376\tcorkboard.n.01\r\nsàn nhà\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nconformer\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\neasy chair\t2\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03262932\teasy_chair.n.01\r\nsehpa\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlibrary\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbench seat\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmusic stand\t2\t820\t39\t6\tmusic stand\totherfurniture\tFurniture\t\t\t\t\tn03801760\tmusic_stand.n.01\r\noffice\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03841666\toffice.n.01\r\nclutter\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nflush for toilet\t2\t124\t33\t7\ttoilet\ttoilet\tObjects\ttoilet\ttoilet\t\t\t\t\r\nkleenex\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbox/storage\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npost\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03988170\tpost.n.04\r\nplug\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsocket\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04255163\tsocket.n.02\r\nisland\t2\t456\t38\t7\tkitchen island\totherstructure\tObjects\t\t\t\t\t\t\r\ninstrument case\t2\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\t\t\r\npaper tray\t2\t538\t40\t7\tpaper tray\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet paper package\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nantibacterial soap dispenser\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncubicle\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvaccuum\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nphotography light\t2\t62\t38\t7\tlight\totherstructure\tObjects\t\t\t\t\t\t\r\ntrump wall\t2\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nshredder\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04210120\tshredder.n.01\r\nsquare table\t2\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ntable 
support\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsink plumbing\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbasin\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npinball machine\t2\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\tn03941417\tpinball_machine.n.01\r\nmeeting\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn08542634\tconfluence.n.01\r\nmobile\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhatrack\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03059103\tcoatrack.n.01\r\njohnny\t2\t331\t31\t7\tperson\tperson\tObjects\tperson\t\t\t\tn10628368\trebel.n.01\r\nprojector screen\t2\t53\t38\t7\tprojector screen\totherstructure\tObjects\t\t\t\t\t\t\r\nvalance\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03111296\tcornice.n.01\r\neliptical\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nboot\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwindow ledge\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nclothes hamper\t2\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn03050864\tclothes_hamper.n.01\r\nwall side\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstairstepper\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbottle recycling bin\t2\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\nblend\t2\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntissue dispenser\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nentertainment center\t1\t524\t39\t6\tfurniture\totherfurniture\tFurniture\t\t\t\t\tn03290653\tentertainment_center.n.01\r\nkettle\t1\t16\t40\t7\tpot\totherprop\tObjects\t\t\t\t\tn03612814\tkettle.n.01\r\nlaundyr supply\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchain\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ngown\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nw.c.\t1\t\t40\t7\t\totherprop\tObjects\ttoilet\ttoilet\t\t\tn04558478\twater_closet.n.01\r\nshelf unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwalss\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmirror 
reflection\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbed curtain\t1\t89\t16\t13\tcurtain\tcurtain\tWindow\tcurtain\t\t\t\t\t\r\npapertray\t1\t538\t40\t7\tpaper tray\totherprop\tObjects\t\t\t\t\t\t\r\ntissue paper\t1\t15\t26\t7\tpaper\tpaper\tObjects\t\t\t\t\t\t\r\nmilk\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nleg rest sofa\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\ndouble decker\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndekw\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsink counter top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbreadbox\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02893692\tbread-bin.n.01\r\nfloor mat\t1\t143\t20\t5\tfloor mat\tfloor mat\tFloor\t\t\t\t\tn03727837\tmat.n.01\r\nelectrical machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nstudent seat\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nprivacy partition\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchairlegs\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmail tray\t1\t618\t40\t7\tmail tray\totherprop\tObjects\t\t\t\t\t\t\r\nover the door storage\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndoor hinge\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncahi\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nexcercise cycle\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhandsoap\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nglass door\t1\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\ndustbin box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\ntoy ship\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstorage rack\t1\t448\t39\t6\tstorage rack\totherfurniture\tFurniture\t\t\t\t\t\t\r\nwall outside room\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nask tray\t1\t179\t40\t7\ttray\totherprop\tObjects\t\t\t\t\t\t\r\npunching bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\tn04023962\tpunching_bag.n.02\r\nstorage drawer\t1\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\ncat litter 
box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nshower rod\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noffice desk\t1\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\t\t\t\t\r\nwater filter\t1\t731\t40\t7\twater filter\totherprop\tObjects\t\t\t\t\tn04559620\twater_filter.n.01\r\nnicknack\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02897692\tbric-a-brac.n.01\r\ntin of drink\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwork\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlustre\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npaper shredder\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nawllll\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbooth\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn02874214\tbooth.n.01\r\nfolded blanket\t1\t312\t40\t7\tblanket\totherprop\tObjects\t\t\t\t\t\t\r\nbed skirt\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncomputer speaker\t1\t54\t40\t7\tspeaker\totherprop\tObjects\t\t\tspeaker\t03691459\tn03691459\tloudspeaker.n.01\r\ndesktop computer\t1\t46\t40\t7\tcomputer\totherprop\tObjects\t\t\t\t\tn03180011\tdesktop_computer.n.01\r\nhadnwash\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncovered box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nlampbase\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nnet\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall pane\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndressing gown\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03237992\tdressing_gown.n.01\r\nfootstowindow 2ol\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndisplay/signs\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\narifact\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nleg\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nceiling fan\t1\t74\t40\t7\tfan\totherprop\tObjects\t\t\t\t\t\t\r\ncircular sofa\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\nfood/drink\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchair 
cushion\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwater\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04562658\twater_system.n.02\r\nsinl\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsing\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncardboard\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nplastic sliding drawers\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncelltech\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsink tap\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbinder\t1\t399\t40\t7\tbinder\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet plumbing\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmarble\t1\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\tn03721047\tmarble.n.02\r\ntable speaker\t1\t54\t40\t7\tspeaker\totherprop\tObjects\t\t\tspeaker\t03691459\tn03691459\tloudspeaker.n.01\r\ntalbetop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ngym cycle\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntable piece\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncounter panel\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nplug/outlet\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntill\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\tn02976939\tcashbox.n.01\r\nsink unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nurinary\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nball chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nof shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nshower rack\t1\t614\t40\t7\tshoe rack\totherprop\tObjects\t\t\t\t\t\t\r\nfull bed\t1\t157\t4\t1\tbed\tbed\tBed\tbed\tbed\tbed\t02818832\tn02818832\tbed.n.01\r\nsink cupboard\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhouse shoe\t1\t149\t40\t7\tshoe\totherprop\tObjects\t\t\t\t\t\t\r\ncha\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbeachball\t1\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\tn02814224\tbeach_ball.n.01\r\ngame table\t1\t429\t40\t7\tgame 
table\totherprop\tObjects\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ntiny table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nfeminine hygiene waste basket\t1\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn02801938\tbasket.n.01\r\ndicplay case\t1\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\t\t\r\nend table chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ncardbox\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nenvelope\t1\t476\t40\t7\tenvelope\totherprop\tObjects\t\t\t\t\tn03291819\tenvelope.n.01\r\nresevoir\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nrabbit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02324045\trabbit.n.01\r\nboard meeting table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nhold\t1\t758\t40\t7\thandle\totherprop\tObjects\t\t\t\t\t\t\r\nlaser printer\t1\t66\t40\t7\tprinter\totherprop\tObjects\t\t\tprinter\t04004475\tn03643737\tlaser_printer.n.01\r\nglasses case\t1\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\tn03438863\tglasses_case.n.01\r\nthermos\t1\t693\t40\t7\tflask\totherprop\tObjects\tbottle\t\tbottle\t02876657\tn04422727\tthermos.n.01\r\nshelving unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbath towel\t1\t135\t27\t7\ttowel\ttowel\tObjects\t\t\t\t\tn02808304\tbath_towel.n.01\r\nmonitor stand\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nbreakfast bar\t1\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\t\t\r\nflat screen television\t1\t172\t25\t11\ttelevision\ttelevision\tTV\t\t\t\t\t\t\r\ndressin table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ndress rack\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn03238762\tdress_rack.n.01\r\nplates of 
food\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfigure\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncredenza\t1\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\tn03129753\tcredenza.n.01\r\noejcts\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsinktop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall.table\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npenholder\t1\t464\t40\t7\tpen holder\totherprop\tObjects\t\t\t\t\t\t\r\nwall panel\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\tn04548503\twall_panel.n.01\r\nstair landing\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwallpapere\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwater dispencer\t1\t507\t40\t7\twater dispenser\totherprop\tObjects\t\t\t\t\t\t\r\ntherostat\t1\t110\t40\t7\tthermostat\totherprop\tObjects\t\t\t\t\t\t\r\nfrying pan\t1\t318\t40\t7\tfrying pan\totherprop\tObjects\t\t\t\t\tn03400231\tfrying_pan.n.01\r\nseparator\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02995998\tcentrifuge.n.01\r\ndivider\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nceiling lamp\t1\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nrod\t1\t\t40\t7\t\totherprop\tObjects\t\t\tpistol\t03948459\tn03427202\tgat.n.01\r\n;amps\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvaccum\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchair b'\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npaint bucket\t1\t427\t40\t7\tbucket\totherprop\tObjects\t\t\t\t\t\t\r\nlighting\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03667235\tlighting.n.02\r\ntablecloth\t1\t292\t40\t7\ttablecloth\totherprop\tObjects\t\t\t\t\tn04380143\ttablecloth.n.01\r\nplank\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn15101854\tboard.n.02\r\nsink pipe\t1\t41\t40\t7\tpipe\totherprop\tObjects\t\t\t\t\t\t\r\nconcrete slab\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmini couch\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\npost 
it\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstair down\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nspout\t1\t41\t40\t7\tpipe\totherprop\tObjects\t\t\t\t\tn04287153\tspout.n.01\r\ncutlery\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03154073\tcutter.n.06\r\nmagazine rack\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn03704549\tmagazine_rack.n.01\r\nmini printer\t1\t66\t40\t7\tprinter\totherprop\tObjects\t\t\tprinter\t04004475\tn04004475\tprinter.n.03\r\ntray box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nlevel 2 stair case\t1\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\t\t\r\nindustrial machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\ngiường\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ninduction cooktop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nartwork\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npillowcase\t1\t851\t40\t7\tcase\totherprop\tObjects\t\t\t\t\tn02975412\tcase.n.19\r\nwindow wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nminibar\t1\t7\t12\t6\tcounter\tcounter\tFurniture\t\t\t\t\tn03769610\tminibar.n.01\r\nlaundry detergent\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nserving plate\t1\t233\t40\t7\tplate\totherprop\tObjects\t\t\t\t\t\t\r\ndirty basket\t1\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn02801938\tbasket.n.01\r\nreading lamp\t1\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn04057981\treading_lamp.n.01\r\nopen cabinet\t1\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\nhand truck\t1\t305\t40\t7\tcart\totherprop\tObjects\t\t\t\t\tn03490119\thand_truck.n.01\r\npad\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03195485\tdiggings.n.02\r\nlaptop table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nventilator window\t1\t59\t9\t13\twindow\twindow\tWindow\t\t\t\t\t\t\r\nesll\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsofa 
bed\t1\t157\t4\t1\tbed\tbed\tBed\tbed\tbed\tbed\t02818832\tn02818832\tbed.n.01\r\nsafa\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nheating system\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03509025\theating_system.n.01\r\nfloor]floo\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfacial scrub\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvent hood\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noffice cupboard\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstand fan\t1\t74\t40\t7\tfan\totherprop\tObjects\t\t\t\t\t\t\r\nstorage shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhanging clothes\t1\t141\t21\t7\tclothes\tclothes\tObjects\t\t\t\t\t\t\r\nfuse box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\npizza\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npersonal effect\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndrawer organizer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmain board\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\nloofa\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nshower surround\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbycicle\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nall-in-one computer\t1\t46\t40\t7\tcomputer\totherprop\tObjects\t\t\t\t\t\t\r\nbox of tissue\t1\t648\t40\t7\ttissue\totherprop\tObjects\t\t\t\t\t\t\r\ndoorlock\t1\t646\t40\t7\tdoor lock\totherprop\tObjects\t\t\t\t\tn03223162\tdoorlock.n.01\r\nbase unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntennis ball\t1\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\tn04409515\ttennis_ball.n.01\r\nsnack machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nlaptop bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\t\t\r\nhallway wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nmsic\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfile organizer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfire hose\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03346004\tfire_hose.n.01\r\nmedia 
center\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\numbrella\t1\t203\t40\t7\tumbrella\totherprop\tObjects\t\t\t\t\tn04507155\tumbrella.n.01\r\nbarrier\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02796623\tbarrier.n.01\r\ndirt\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsubwoofer\t1\t54\t40\t7\tspeaker\totherprop\tObjects\t\t\tspeaker\t03691459\tn04349401\tsubwoofer.n.01\r\ntable tennis\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nprinter/scanner\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndrying rack\t1\t262\t39\t6\tdrying rack\totherfurniture\tFurniture\t\t\t\t\t\t\r\nwppd [ame;owood paneling\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet robe\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nprinter stand\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nshower screen\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwater bed\t1\t157\t4\t1\tbed\tbed\tBed\tbed\tbed\tbed\t02818832\tn04557522\twater_bed.n.01\r\ndisplay sign\t1\t208\t40\t7\tsign\totherprop\tObjects\t\t\t\t\t\t\r\ndiaper bin\t1\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\nrouter\t1\t303\t40\t7\trouter\totherprop\tObjects\t\t\t\t\t\t\r\nashtray\t1\t377\t40\t7\tashtray\totherprop\tObjects\t\t\t\t\tn02747802\tashtray.n.01\r\nfootrest/table\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncleaning brush\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbathroom desk\t1\t36\t14\t10\tdesk\tdesk\tTable\tdesk\tdesk\ttable\t04379243\tn03179701\tdesk.n.01\r\ntoilet commode\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstaicase handrail\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nclothes shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndink\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet seat protectors\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstuffware\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlow table for storage\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncovered 
stuff\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhood\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfloor reflection\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncan opener\t1\t279\t40\t7\tcan opener\totherprop\tObjects\t\t\t\t\tn02951585\tcan_opener.n.01\r\ntop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwheely bin\t1\t307\t40\t7\tbin\totherprop\tObjects\t\t\t\t\t\t\r\nbook bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\tn02870676\tbook_bag.n.01\r\nbody wash\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstudy chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nstepladder\t1\t48\t39\t6\tladder\totherfurniture\tFurniture\tstairs\t\t\t\tn04315599\tstep_ladder.n.01\r\npaper holder\t1\t470\t40\t7\tpaper holder\totherprop\tObjects\t\t\t\t\t\t\r\nwhite\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nyoga ball\t1\t60\t40\t7\tball\totherprop\tObjects\t\t\t\t\t\t\r\nconsol\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncloths container\t1\t140\t40\t7\tcontainer\totherprop\tObjects\t\t\t\t\t\t\r\nshorts\t1\t192\t40\t7\tshorts\totherprop\tObjects\t\t\t\t\tn04204755\tshort_circuit.n.01\r\nemergency exit windows\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ngym\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03472112\tgymnasium.n.02\r\nbox of tissues\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntrolly\t1\t221\t39\t6\ttrolly\totherfurniture\tFurniture\t\t\t\t\t\t\r\nwater purifyer\t1\t93\t40\t7\twater purifier\totherprop\tObjects\t\t\t\t\t\t\r\nclutter'\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwallend\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntissue `paper\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmop and bucket\t1\t427\t40\t7\tbucket\totherprop\tObjects\t\t\t\t\t\t\r\ngrocery\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03461385\tgrocery_store.n.01\r\nworktable\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04603729\tworktable.n.01\r\nair 
outlet\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfold\t1\t97\t40\t7\tpen\totherprop\tObjects\t\t\t\t\tn03376159\tfold.n.06\r\ntoilet commoed\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhandfold\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbusiness chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nllong table chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nbooks in a shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchaurs\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntennis table stand\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nkitchen water purifier\t1\t93\t40\t7\twater purifier\totherprop\tObjects\t\t\t\t\t\t\r\nlid\t1\t533\t40\t7\tlid\totherprop\tObjects\t\t\t\t\t\t\r\nelectric hob\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbaggage\t1\t783\t40\t7\tluggage\totherprop\tObjects\t\t\t\t\t\t\r\ncpnter\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstairs (exit)\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nequipment\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03294048\tequipment.n.01\r\nrocking chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn04099969\trocking_chair.n.01\r\nbunkbed\t1\t804\t39\t6\tbunk bed\totherfurniture\tFurniture\t\t\t\t\t\t\r\ndivan\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn03214966\tdivan.n.01\r\nbottle of hand sanitizer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsales stall\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndehumidifer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall cover\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nplumbing\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03969041\tplumbing.n.01\r\nelliptical machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nplayball\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsafety 
bar\t1\t51\t38\t7\tbar\totherstructure\tObjects\t\t\t\t\t\t\r\nstereo\t1\t84\t40\t7\tstereo\totherprop\tObjects\t\t\t\t\tn04315948\tstereo.n.01\r\nhavlu\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhandbag\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02774152\tbag.n.04\r\nfurnance\t1\t551\t39\t6\tfurnace\totherfurniture\tFurniture\t\t\t\t\t\t\r\nshleves\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncoat hanger\t1\t400\t40\t7\tcoat hanger\totherprop\tObjects\t\t\t\t\tn03057920\tcoat_hanger.n.01\r\nerar wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nbed bench\t1\t204\t39\t6\tbench\totherfurniture\tFurniture\tbench\t\tbench\t02828884\tn02828884\tbench.n.01\r\ntissu\t1\t648\t40\t7\ttissue\totherprop\tObjects\t\t\t\t\t\t\r\nplastic tub\t1\t232\t40\t7\tplastic tub\totherprop\tObjects\tbathtub\tbathtub\ttub\t02808440\tn02808440\tbathtub.n.01\r\npotholder\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03992115\tpotholder.n.01\r\ncoffee mug\t1\t263\t40\t7\tvessel\totherprop\tObjects\t\t\tcup or mug\t03797390\tn03063599\tcoffee_mug.n.01\r\ntennis rcket bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\t\t\r\nstand lamp\t1\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nbed/sofa\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlaundry supply\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndocument shredder\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ngas-range stove\t1\t242\t38\t7\tstove\totherstructure\tObjects\t\t\tstove\t04330267\tn04330267\tstove.n.02\r\ntable soccer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncookingrange\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbookbags\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndownstairs\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncoffeepot\t1\t893\t40\t7\tcoffee pot\totherprop\tObjects\t\t\t\t\tn03063689\tcoffeepot.n.01\r\njar\t1\t70\t40\t7\tjar\totherprop\tObjects\t\t\tjar\t03593526\tn03593526\tjar.n.01\r\nrear wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\npart 
of chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ntape\t1\t109\t40\t7\ttape\totherprop\tObjects\t\t\t\t\t\t\r\ndrawer table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nwoodenrack\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet floor\t1\t11\t2\t5\tfloor\tfloor\tFloor\t\t\t\t\t\t\r\ncushioned seating\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbulls eye\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsoft mat\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsnack box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\ndesk organizer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfootboard\t1\t559\t40\t7\tsheet\totherprop\tObjects\t\t\t\t\tn03379461\tfootboard.n.02\r\nwall hook\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchopping board\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn03025513\tchopping_board.n.01\r\nround picture\t1\t64\t11\t8\tpicture\tpicture\tPicture\t\t\t\t\t\t\r\nchimney\t1\t702\t38\t7\tchimney\totherstructure\tObjects\t\t\t\t\tn03017428\tchimney.n.01\r\nstudio screen\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npersonal belonging\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nroll of paper\t1\t15\t26\t7\tpaper\tpaper\tObjects\t\t\t\t\t\t\r\ngaming wheel\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlandlord\t1\t331\t31\t7\tperson\tperson\tObjects\tperson\t\t\t\tn10245236\tlandlord.n.01\r\nebd\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nheater radiator\t1\t236\t39\t6\tradiator\totherfurniture\tFurniture\t\t\t\t\t\t\r\ncabinet above\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nweighted plate\t1\t233\t40\t7\tplate\totherprop\tObjects\t\t\t\t\t\t\r\ntravelling bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\tsuitcase\t02773838\tn02773838\tbag.n.06\r\ndesk material\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndoor wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\ntraffic 
cone\t1\t6\t40\t7\tcone\totherprop\tObjects\tcone\t\t\t\t\t\r\ncomputer mouse\t1\t103\t40\t7\tmouse\totherprop\tObjects\t\t\t\t\tn03793489\tmouse.n.04\r\ncoathanger\t1\t400\t40\t7\tcoat hanger\totherprop\tObjects\t\t\t\t\t\t\r\nbureau\t1\t524\t39\t6\tfurniture\totherfurniture\tFurniture\tdresser\tdresser\t\t\tn03015254\tchest_of_drawers.n.01\r\ntyre\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04440749\ttire.n.01\r\narmchairchair\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noven range\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npants\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04489008\ttrouser.n.01\r\nchiropractic chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nkeg\t1\t343\t39\t6\tbarrel\totherfurniture\tFurniture\t\t\t\t\tn03610418\tkeg.n.02\r\nspray\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02754103\tatomizer.n.01\r\npaper trimmer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstanding whiteboard\t1\t45\t30\t7\twhiteboard\twhiteboard\tObjects\t\t\t\t\t\t\r\ndesk drawer\t1\t475\t39\t6\tdesk drawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\nwindow/windowed door\t1\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\nsoapbox\t1\t671\t40\t7\tsoap box\totherprop\tObjects\t\t\t\t\t\t\r\npillow sofa\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\ncentre table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\ndoorway\t1\t609\t38\t7\tdoor way\totherstructure\tObjects\tdoor\t\t\t\tn03224032\tdoorway.n.01\r\nwall and whiteboard\t1\t45\t30\t7\twhiteboard\twhiteboard\tObjects\t\t\t\t\t\t\r\nlaptop computer\t1\t46\t40\t7\tcomputer\totherprop\tObjects\tlaptop\t\tlaptop\t03642806\tn03642806\tlaptop.n.01\r\nscanner/copier\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsuitcase w/clothes\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npower pusher\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nshower faucet 
handle\t1\t758\t40\t7\thandle\totherprop\tObjects\t\t\t\t\t\t\r\nwalk\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04544979\twalk.n.05\r\nmatte\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\natm machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\ngarage door\t1\t850\t38\t7\tgarage door\totherstructure\tObjects\tdoor\t\t\t\t\t\r\nwals\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncabinet aisle\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntable light\t1\t62\t38\t7\tlight\totherstructure\tObjects\t\t\t\t\t\t\r\nguillotine paper trimmer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nround 2\\\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nteddy\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03013580\tchemise.n.01\r\nwhite board/divider\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwhite wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nmark\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04681387\tcrisscross.n.01\r\npartition wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nshag rug\t1\t130\t40\t7\trug\totherprop\tObjects\t\t\t\t\tn04183217\tshag_rug.n.01\r\nupstair way\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmusic stand'\t1\t820\t39\t6\tmusic stand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nrecamier\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nventhole\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04526241\tvent.n.01\r\ndining seat\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet cover\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npersonal item\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntallboy\t1\t524\t39\t6\tfurniture\totherfurniture\tFurniture\tdresser\tdresser\t\t\tn03518305\thighboy.n.01\r\ndrawers unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nteapot\t1\t678\t40\t7\ttea pot\totherprop\tObjects\t\t\t\t\tn04398044\tteapot.n.01\r\ncook cabinet\t1\t3\t3\t6\tcabinet\tcabinet\tFurniture\t\t\tcabinet\t02933112\tn02933112\tcabinet.n.01\r\nwok pan\t1\t589\t40\t7\tpan\totherprop\tObjects\t\t\t\t\t\t\r\ntv 
tray\t1\t179\t40\t7\ttray\totherprop\tObjects\t\t\t\t\t\t\r\nround chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\nsawhorse\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04140631\tsawhorse.n.01\r\nkitchen range\t1\t242\t38\t7\tstove\totherstructure\tObjects\t\t\t\t\tn04330340\tstove.n.01\r\nbusdrver\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbarricade\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall ornament\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncolor printer\t1\t66\t40\t7\tprinter\totherprop\tObjects\t\t\tprinter\t04004475\tn04004475\tprinter.n.03\r\nsticker\t1\t725\t40\t7\tsticker\totherprop\tObjects\t\t\t\t\tn07272545\tgummed_label.n.01\r\nexit sign\t1\t86\t40\t7\texit sign\totherprop\tObjects\t\t\t\t\t\t\r\ngas stove\t1\t242\t38\t7\tstove\totherstructure\tObjects\t\t\tstove\t04330267\tn04330267\tstove.n.02\r\nventa hood\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncopier/printer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall-mounted lamp\t1\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\nitem box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nwater puifyer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall papper\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsalt and peper\t1\t737\t40\t7\tsalt and pepper\totherprop\tObjects\t\t\t\t\t\t\r\nprinter four\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntowel fastener\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbasth\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nflipflops\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbonus\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitchen box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\tn02883344\tbox.n.01\r\ncentral heating unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhanging tubelight\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsoccer ball\t1\t837\t40\t7\tsoccer 
ball\totherprop\tObjects\t\t\t\t\tn04254680\tsoccer_ball.n.01\r\nalmarah\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncanopy\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmed box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\ndrain\t1\t567\t38\t7\tdrain\totherstructure\tObjects\t\t\t\t\t\t\r\npanelling\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\tn03882611\tpaneling.n.01\r\nbed stand\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\ndeal\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\tn15102622\tdeal.n.04\r\nmassage\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsafety rail\t1\t497\t38\t7\trailing\totherstructure\tObjects\t\t\t\t\tn04127395\tsafety_rail.n.01\r\nvacuumer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbinfl\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nlightbulb\t1\t566\t40\t7\tlight bulb\totherprop\tObjects\tlamp\t\t\t\tn03665924\tlight_bulb.n.01\r\ndoor hydraulic\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ninduction cook top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedstand\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncalander\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nset of seats\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchocolate bar dispenser\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall unit tv\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbroomstick\t1\t328\t40\t7\tbroom\totherprop\tObjects\t\t\t\t\tn02907082\tbroomstick.n.01\r\nbath faucet\t1\t9\t40\t7\tfaucet\totherprop\tObjects\t\t\tfaucet\t03325088\tn03325088\tfaucet.n.01\r\nfolded cloth\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsupply\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nunder oven drawer\t1\t174\t39\t6\tdrawer\totherfurniture\tFurniture\t\t\t\t\t\t\r\nkinect\t1\t823\t40\t7\tkinect\totherprop\tObjects\t\t\t\t\t\t\r\ncash\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn10886222\tcash.n.03\r\ndining side 
wall\t1\t21\t1\t12\twall\twall\tWall\t\t\t\t\t\t\r\nlog\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03686658\tlog.n.05\r\ngarden gnome\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncoucnb\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndart\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03162818\tdart.n.01\r\ndust pan and brush\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsmoke alarm\t1\t525\t40\t7\talarm\totherprop\tObjects\t\t\t\t\tn03343737\tfire_alarm.n.02\r\nkitchen top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoilet flush\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncooler\t1\t17\t24\t6\trefridgerator\trefridgerator\tFurniture\t\t\t\t\tn03102654\tcooler.n.01\r\nkitchen island\t1\t456\t38\t7\tkitchen island\totherstructure\tObjects\t\t\t\t\tn03620600\tkitchen_island.n.01\r\nbalcony\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nescape door\t1\t28\t8\t12\tdoor\tdoor\tWall\tdoor\t\t\t\t\t\r\nhammer\t1\t883\t40\t7\thammer\totherprop\tObjects\t\t\t\t\tn03481172\thammer.n.02\r\nwall and paiting\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nkitch shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nhandwasher\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvanity top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbodyboard\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmessenger bag\t1\t55\t37\t7\tbag\tbag\tObjects\t\t\t\t\t\t\r\nstationary bike\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncabinet countertop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nping pong padle\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nteapoy\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nclothes basket\t1\t39\t40\t7\tbasket\totherprop\tObjects\t\t\tbasket\t02801938\tn03050864\tclothes_hamper.n.01\r\nxbox\t1\t628\t40\t7\txbox\totherprop\tObjects\txbox\t\t\t\t\t\r\nboth\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfoosball\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsoad 
stand\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\t\t\r\nprop\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn02692086\tairplane_propeller.n.01\r\nbuddha\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nreflection in a mirror\t1\t122\t19\t7\tmirror\tmirror\tObjects\t\t\t\t\t\t\r\nbar stol\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noven/stove\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npatterned rug\t1\t130\t40\t7\trug\totherprop\tObjects\t\t\t\t\t\t\r\nwindow panel\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nvault\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndust bin cover\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nthrow\t1\t872\t40\t7\tthrow\totherprop\tObjects\t\t\t\t\tn04429971\tthrow.n.04\r\npainting and frame\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncovered piano\t1\t298\t39\t6\tpiano\totherfurniture\tFurniture\tpiano\t\tpiano\t03928116\tn03928116\tpiano.n.01\r\ndrawer unit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\naircon\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npackage\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03871083\tpackage.n.02\r\ngas vent\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nblock\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncloth container\t1\t140\t40\t7\tcontainer\totherprop\tObjects\t\t\t\t\t\t\r\nadditional printer\t1\t66\t40\t7\tprinter\totherprop\tObjects\t\t\tprinter\t04004475\tn04004475\tprinter.n.03\r\ndanger sign\t1\t208\t40\t7\tsign\totherprop\tObjects\t\t\t\t\t\t\r\ngame machine\t1\t220\t40\t7\tmachine\totherprop\tObjects\t\t\t\t\t\t\r\nlight fixture\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nutility\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04516874\tutility.n.06\r\nbase rack\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstaircase landing\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nszll\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npiano 
note\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbboks\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncabord\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncentral table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nsplash\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04682319\tsplash.n.04\r\nsuit\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn04350905\tsuit.n.01\r\ncook top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\njug\t1\t687\t40\t7\tjug\totherprop\tObjects\tbottle\t\tbottle\t02876657\tn03603722\tjug.n.01\r\nstepstool\t1\t276\t40\t7\tstep stool\totherprop\tObjects\t\t\t\t\t\t\r\ntripod\t1\t50\t39\t6\tstand\totherfurniture\tFurniture\t\t\t\t\tn04485082\ttripod.n.01\r\ncover box\t1\t26\t29\t7\tbox\tbox\tObjects\t\t\t\t\t\t\r\nbaby crib\t1\t485\t39\t6\tcrib\totherfurniture\tFurniture\t\t\t\t\t\t\r\nair condisnor\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwater softner\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchandelier\t1\t342\t38\t7\tchandelier\totherstructure\tObjects\t\t\t\t\tn03005285\tchandelier.n.01\r\nfloor patterning\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntablet top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsmoke detector\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbaseball cap\t1\t\t40\t7\t\totherprop\tObjects\t\t\tcap\t02954340\tn02799323\tbaseball_cap.n.01\r\ntissue roll holder\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ncase of water\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall-organizer\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\npiece\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\tn03343853\tfirearm.n.01\r\nwheelbarrel\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ndesktop item\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntv 
showcase\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nchelves\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\ntoothbrush\t1\t127\t40\t7\ttoothbrush\totherprop\tObjects\t\t\t\t\tn04453156\ttoothbrush.n.01\r\nchiffonière\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nleg towel\t1\t135\t27\t7\ttowel\ttowel\tObjects\t\t\t\t\t\t\r\nflowers/decorations\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nsnake toy\t1\t389\t40\t7\ttoy\totherprop\tObjects\t\t\t\t\t\t\r\ncabinet's side\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nbedroom chair\t1\t5\t5\t4\tchair\tchair\tChair\tchair\tchair\tchair\t03001627\tn03001627\tchair.n.01\r\ndrum\t1\t145\t40\t7\tdrum\totherprop\tObjects\t\t\t\t\tn03249569\tdrum.n.01\r\nliquid soap\t1\t133\t40\t7\tsoap\totherprop\tObjects\t\t\t\t\t\t\r\nset of bedding\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nnight lamp\t1\t144\t35\t7\tlamp\tlamp\tObjects\tlamp\t\tlamp\t03636649\tn03636649\tlamp.n.02\r\npost board\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\nmeasuring cup\t1\t730\t40\t7\tmeasuring cup\totherprop\tObjects\tcup\t\t\t\tn03733805\tmeasuring_cup.n.01\r\nbaseboard heater\t1\t111\t39\t6\theater\totherfurniture\tFurniture\t\t\t\t\t\t\r\npaper shelf\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nalert sheet\t1\t559\t40\t7\tsheet\totherprop\tObjects\t\t\t\t\t\t\r\nduster\t1\t115\t40\t7\tduster\totherprop\tObjects\t\t\t\t\tn03258330\tdustcloth.n.01\r\nsnooker table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn03982430\tpool_table.n.01\r\nleg rest\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nwall storage\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\noffice board\t1\t408\t38\t7\tboard\totherstructure\tObjects\t\t\t\t\t\t\r\nbathroom counter\t1\t7\t12\t6\tcounter\tcounter\tFurniture\ttable\ttable\ttable\t04379243\tn03116530\tcounter.n.01\r\ntable sofa\t1\t83\t6\t9\tsofa\tsofa\tSofa\tsofa\tsofa\tsofa\t04256520\tn04256520\tsofa.n.01\r\nglass-topped 
table\t1\t19\t7\t10\ttable\ttable\tTable\ttable\ttable\ttable\t04379243\tn04379243\ttable.n.02\r\nracket bat\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nfridge handle\t1\t758\t40\t7\thandle\totherprop\tObjects\t\t\t\t\t\t\r\nstove top\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nmonitor from pc\t1\t\t40\t7\t\totherprop\tObjects\t\t\t\t\t\t\r\nstick\t1\t529\t40\t7\tstick\totherprop\tObjects\t\t\t\t\t\t\r\n"
  },
  {
    "path": "pointnet2_tf/scannet/preprocessing/scannet_util.py",
    "content": "\n\ng_label_names = ['unannotated', 'wall', 'floor', 'chair', 'table', 'desk', 'bed', 'bookshelf', 'sofa', 'sink', 'bathtub', 'toilet', 'curtain', 'counter', 'door', 'window', 'shower curtain', 'refridgerator', 'picture', 'cabinet', 'otherfurniture']\n\ndef get_raw2scannet_label_map():\n    lines = [line.rstrip() for line in open('scannet-labels.combined.tsv')]\n    lines = lines[1:]\n    raw2scannet = {}\n    for i in range(len(lines)):\n        label_classes_set = set(g_label_names)\n        elements = lines[i].split('\\t')\n        raw_name = elements[0]\n        nyu40_name = elements[6]\n        if nyu40_name not in label_classes_set:\n            raw2scannet[raw_name] = 'unannotated'\n        else:\n            raw2scannet[raw_name] = nyu40_name\n    return raw2scannet\n\n\ng_raw2scannet = get_raw2scannet_label_map()\n"
  },
  {
    "path": "pointnet2_tf/scannet/scannet_dataset.py",
    "content": "import pickle\nimport os\nimport sys\nimport numpy as np\nimport pc_util\nimport scene_util\n\nclass ScannetDataset():\n    def __init__(self, root, npoints=8192, split='train'):\n        self.npoints = npoints\n        self.root = root\n        self.split = split\n        self.data_filename = os.path.join(self.root, 'scannet_%s.pickle'%(split))\n        with open(self.data_filename,'rb') as fp:\n            self.scene_points_list = pickle.load(fp)\n            self.semantic_labels_list = pickle.load(fp)\n\tif split=='train':\n\t    labelweights = np.zeros(21)\n\t    for seg in self.semantic_labels_list:\n\t\ttmp,_ = np.histogram(seg,range(22))\n\t\tlabelweights += tmp\n\t    labelweights = labelweights.astype(np.float32)\n\t    labelweights = labelweights/np.sum(labelweights)\n\t    self.labelweights = 1/np.log(1.2+labelweights)\n\telif split=='test':\n\t    self.labelweights = np.ones(21)\n    def __getitem__(self, index):\n        point_set = self.scene_points_list[index]\n        semantic_seg = self.semantic_labels_list[index].astype(np.int32)\n        coordmax = np.max(point_set,axis=0)\n\tcoordmin = np.min(point_set,axis=0)\n\tsmpmin = np.maximum(coordmax-[1.5,1.5,3.0], coordmin)\n\tsmpmin[2] = coordmin[2]\n\tsmpsz = np.minimum(coordmax-smpmin,[1.5,1.5,3.0])\n\tsmpsz[2] = coordmax[2]-coordmin[2]\n\tisvalid = False\n\tfor i in range(10):\n\t    curcenter = point_set[np.random.choice(len(semantic_seg),1)[0],:]\n\t    curmin = curcenter-[0.75,0.75,1.5]\n\t    curmax = curcenter+[0.75,0.75,1.5]\n\t    curmin[2] = coordmin[2]\n\t    curmax[2] = coordmax[2]\n\t    curchoice = np.sum((point_set>=(curmin-0.2))*(point_set<=(curmax+0.2)),axis=1)==3\n\t    cur_point_set = point_set[curchoice,:]\n\t    cur_semantic_seg = semantic_seg[curchoice]\n\t    if len(cur_semantic_seg)==0:\n\t\tcontinue\n\t    mask = np.sum((cur_point_set>=(curmin-0.01))*(cur_point_set<=(curmax+0.01)),axis=1)==3\n\t    vidx = 
np.ceil((cur_point_set[mask,:]-curmin)/(curmax-curmin)*[31.0,31.0,62.0])\n\t    vidx = np.unique(vidx[:,0]*31.0*62.0+vidx[:,1]*62.0+vidx[:,2])\n\t    isvalid = np.sum(cur_semantic_seg>0)/len(cur_semantic_seg)>=0.7 and len(vidx)/31.0/31.0/62.0>=0.02\n\t    if isvalid:\n\t\tbreak\n\tchoice = np.random.choice(len(cur_semantic_seg), self.npoints, replace=True)\n\tpoint_set = cur_point_set[choice,:]\n\tsemantic_seg = cur_semantic_seg[choice]\n\tmask = mask[choice]\n\tsample_weight = self.labelweights[semantic_seg]\n\tsample_weight *= mask\n        return point_set, semantic_seg, sample_weight\n    def __len__(self):\n        return len(self.scene_points_list)\n\nclass ScannetDatasetWholeScene():\n    def __init__(self, root, npoints=8192, split='train'):\n        self.npoints = npoints\n        self.root = root\n        self.split = split\n        self.data_filename = os.path.join(self.root, 'scannet_%s.pickle'%(split))\n        with open(self.data_filename,'rb') as fp:\n            self.scene_points_list = pickle.load(fp)\n            self.semantic_labels_list = pickle.load(fp)\n\tif split=='train':\n\t    labelweights = np.zeros(21)\n\t    for seg in self.semantic_labels_list:\n\t\ttmp,_ = np.histogram(seg,range(22))\n\t\tlabelweights += tmp\n\t    labelweights = labelweights.astype(np.float32)\n\t    labelweights = labelweights/np.sum(labelweights)\n\t    self.labelweights = 1/np.log(1.2+labelweights)\n\telif split=='test':\n\t    self.labelweights = np.ones(21)\n    def __getitem__(self, index):\n        point_set_ini = self.scene_points_list[index]\n        semantic_seg_ini = self.semantic_labels_list[index].astype(np.int32)\n        coordmax = np.max(point_set_ini,axis=0)\n\tcoordmin = np.min(point_set_ini,axis=0)\n\tnsubvolume_x = np.ceil((coordmax[0]-coordmin[0])/1.5).astype(np.int32)\n        nsubvolume_y = np.ceil((coordmax[1]-coordmin[1])/1.5).astype(np.int32)\n\tpoint_sets = list()\n\tsemantic_segs = list()\n\tsample_weights = list()\n\tisvalid = 
False\n\tfor i in range(nsubvolume_x):\n\t    for j in range(nsubvolume_y):\n\t\tcurmin = coordmin+[i*1.5,j*1.5,0]\n\t\tcurmax = coordmin+[(i+1)*1.5,(j+1)*1.5,coordmax[2]-coordmin[2]]\n\t\tcurchoice = np.sum((point_set_ini>=(curmin-0.2))*(point_set_ini<=(curmax+0.2)),axis=1)==3\n\t\tcur_point_set = point_set_ini[curchoice,:]\n\t        cur_semantic_seg = semantic_seg_ini[curchoice]\n\t        if len(cur_semantic_seg)==0:\n\t\t    continue\n\t\tmask = np.sum((cur_point_set>=(curmin-0.001))*(cur_point_set<=(curmax+0.001)),axis=1)==3\n\t\tchoice = np.random.choice(len(cur_semantic_seg), self.npoints, replace=True)\n\t\tpoint_set = cur_point_set[choice,:] # Nx3\n\t\tsemantic_seg = cur_semantic_seg[choice] # N\n\t\tmask = mask[choice]\n\t\tif sum(mask)/float(len(mask))<0.01:\n\t\t    continue\n\t\tsample_weight = self.labelweights[semantic_seg]\n\t\tsample_weight *= mask # N\n\t\tpoint_sets.append(np.expand_dims(point_set,0)) # 1xNx3\n\t\tsemantic_segs.append(np.expand_dims(semantic_seg,0)) # 1xN\n\t\tsample_weights.append(np.expand_dims(sample_weight,0)) # 1xN\n\tpoint_sets = np.concatenate(tuple(point_sets),axis=0)\n\tsemantic_segs = np.concatenate(tuple(semantic_segs),axis=0)\n\tsample_weights = np.concatenate(tuple(sample_weights),axis=0)\n        return point_sets, semantic_segs, sample_weights\n    def __len__(self):\n        return len(self.scene_points_list)\n\nclass ScannetDatasetVirtualScan():\n    def __init__(self, root, npoints=8192, split='train'):\n        self.npoints = npoints\n        self.root = root\n        self.split = split\n        self.data_filename = os.path.join(self.root, 'scannet_%s.pickle'%(split))\n        with open(self.data_filename,'rb') as fp:\n            self.scene_points_list = pickle.load(fp)\n            self.semantic_labels_list = pickle.load(fp)\n\tif split=='train':\n\t    labelweights = np.zeros(21)\n\t    for seg in self.semantic_labels_list:\n\t\ttmp,_ = np.histogram(seg,range(22))\n\t\tlabelweights += tmp\n\t    
labelweights = labelweights.astype(np.float32)\n\t    labelweights = labelweights/np.sum(labelweights)\n\t    self.labelweights = 1/np.log(1.2+labelweights)\n\telif split=='test':\n\t    self.labelweights = np.ones(21)\n    def __getitem__(self, index):\n        point_set_ini = self.scene_points_list[index]\n        semantic_seg_ini = self.semantic_labels_list[index].astype(np.int32)\n\tsample_weight_ini = self.labelweights[semantic_seg_ini]\n\tpoint_sets = list()\n\tsemantic_segs = list()\n\tsample_weights = list()\n\tfor i in xrange(8):\n\t    smpidx = scene_util.virtual_scan(point_set_ini,mode=i)\n\t    if len(smpidx)<300:\n\t\tcontinue\n            point_set = point_set_ini[smpidx,:]\n\t    semantic_seg = semantic_seg_ini[smpidx]\n\t    sample_weight = sample_weight_ini[smpidx]\n\t    choice = np.random.choice(len(semantic_seg), self.npoints, replace=True)\n\t    point_set = point_set[choice,:] # Nx3\n\t    semantic_seg = semantic_seg[choice] # N\n\t    sample_weight = sample_weight[choice] # N\n\t    point_sets.append(np.expand_dims(point_set,0)) # 1xNx3\n\t    semantic_segs.append(np.expand_dims(semantic_seg,0)) # 1xN\n\t    sample_weights.append(np.expand_dims(sample_weight,0)) # 1xN\n\tpoint_sets = np.concatenate(tuple(point_sets),axis=0)\n\tsemantic_segs = np.concatenate(tuple(semantic_segs),axis=0)\n\tsample_weights = np.concatenate(tuple(sample_weights),axis=0)\n        return point_sets, semantic_segs, sample_weights\n    def __len__(self):\n        return len(self.scene_points_list)\n\nif __name__=='__main__':\n    d = ScannetDatasetWholeScene(root = './data', split='test', npoints=8192)\n    labelweights_vox = np.zeros(21)\n    for ii in xrange(len(d)):\n\tprint ii\n        ps,seg,smpw = d[ii]\n        for b in xrange(ps.shape[0]):\n    \t    _, uvlabel, _ = pc_util.point_cloud_label_to_surface_voxel_label_fast(ps[b,smpw[b,:]>0,:], seg[b,smpw[b,:]>0], res=0.02)\n\t    tmp,_ = np.histogram(uvlabel,range(22))\n\t    labelweights_vox += tmp\n    print 
labelweights_vox[1:].astype(np.float32)/np.sum(labelweights_vox[1:].astype(np.float32))\n    exit()\n\n\n"
  },
  {
    "path": "pointnet2_tf/scannet/scene_util.py",
    "content": "import os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\nfrom numpy import linalg as la\nimport scipy.io as sio\n\ndef cart2sph(xyz):\n  xy = xyz[:,0]**2+xyz[:,1]**2\n  aer = np.zeros(xyz.shape)\n  aer[:,2] = np.sqrt(xy+xyz[:,2]**2)\n  aer[:,1] = np.arctan2(xyz[:,2],np.sqrt(xy))\n  aer[:,0] = np.arctan2(xyz[:,1],xyz[:,0])\n  return aer\n\n# generate virtual scan of a scene by subsampling the point cloud\ndef virtual_scan(xyz, mode=-1):\n  camloc = np.mean(xyz,axis=0)\n  camloc[2] = 1.5 # human height\n  if mode==-1:\n    view_dr = np.array([2*np.pi*np.random.random(), np.pi/10*(np.random.random()-0.75)])\n    camloc[:2] -= (0.8+0.7*np.random.random())*np.array([np.cos(view_dr[0]),np.sin(view_dr[0])])\n  else:\n    view_dr = np.array([np.pi/4*mode, 0])\n    camloc[:2] -= np.array([np.cos(view_dr[0]),np.sin(view_dr[0])])\n  ct_ray_dr = np.array([np.cos(view_dr[1])*np.cos(view_dr[0]), np.cos(view_dr[1])*np.sin(view_dr[0]), np.sin(view_dr[1])])\n  hr_dr = np.cross(ct_ray_dr, np.array([0,0,1]))\n  hr_dr /= la.norm(hr_dr)\n  vt_dr = np.cross(hr_dr, ct_ray_dr)\n  vt_dr /= la.norm(vt_dr)\n  xx = np.linspace(-0.6,0.6,200) #200\n  yy = np.linspace(-0.45,0.45,150) #150\n  xx, yy = np.meshgrid(xx,yy)\n  xx = xx.reshape(-1,1)\n  yy = yy.reshape(-1,1)\n  rays = xx*hr_dr.reshape(1,-1)+yy*vt_dr.reshape(1,-1)+ct_ray_dr.reshape(1,-1)\n  rays_aer = cart2sph(rays)\n  local_xyz = xyz-camloc.reshape(1,-1)\n  local_aer = cart2sph(local_xyz)\n  nbrs = NearestNeighbors(n_neighbors=1, algorithm='kd_tree').fit(rays_aer[:,:2])\n  mindd, minidx = nbrs.kneighbors(local_aer[:,:2])\n  mindd = mindd.reshape(-1)\n  minidx = minidx.reshape(-1)\n\n  sub_idx = mindd<0.01\n  if sum(sub_idx)<100:\n    return np.ones(0)\n  sub_r = local_aer[sub_idx,2]\n  sub_minidx = minidx[sub_idx]\n  min_r = float('inf')*np.ones(np.max(sub_minidx)+1)\n  for i in xrange(len(sub_r)):\n    
if sub_r[i]<min_r[sub_minidx[i]]:\n      min_r[sub_minidx[i]] = sub_r[i]\n  sub_smpidx = np.ones(len(sub_r))\n  for i in xrange(len(sub_r)):\n    if sub_r[i]>min_r[sub_minidx[i]]:\n      sub_smpidx[i] = 0\n  smpidx = np.where(sub_idx)[0]\n  smpidx = smpidx[sub_smpidx==1]\n  return smpidx\n\nif __name__=='__main__':\n  pc = np.load('scannet_dataset/scannet_scenes/scene0015_00.npy')\n  print pc.shape\n  xyz = pc[:,:3]\n  seg = pc[:,7]\n  smpidx = virtual_scan(xyz,mode=2)\n  xyz = xyz[smpidx,:]\n  seg = seg[smpidx]\n  sio.savemat('tmp.mat',{'pc':xyz,'seg':seg})\n"
  },
  {
    "path": "pointnet2_tf/scannet/train.py",
    "content": "import argparse\nimport math\nfrom datetime import datetime\n#import h5pyprovider\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(BASE_DIR) # model\nsys.path.append(ROOT_DIR) # provider\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nimport pc_util\nsys.path.append(os.path.join(ROOT_DIR, 'data_prep'))\nimport scannet_dataset\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='model', help='Model name [default: model]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=8192, help='Point Number [default: 8192]')\nparser.add_argument('--max_epoch', type=int, default=201, help='Epoch to run [default: 201]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')\nFLAGS = parser.parse_args()\n\nEPOCH_CNT = 0\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nGPU_INDEX = FLAGS.gpu\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = 
importlib.import_module(FLAGS.model) # import network module\nMODEL_FILE = os.path.join(BASE_DIR, FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nNUM_CLASSES = 21\n\n# ScanNet official train/test split\nDATA_PATH = os.path.join(ROOT_DIR,'data','scannet_data_pointnet2')\nTRAIN_DATASET = scannet_dataset.ScannetDataset(root=DATA_PATH, npoints=NUM_POINT, split='train')\nTEST_DATASET = scannet_dataset.ScannetDataset(root=DATA_PATH, npoints=NUM_POINT, split='test')\nTEST_DATASET_WHOLE_SCENE = scannet_dataset.ScannetDatasetWholeScene(root=DATA_PATH, npoints=NUM_POINT, split='test')\n\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate\n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef 
train():\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl, smpws_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            print is_training_pl\n            \n            # Note the global_step=batch parameter to minimize. \n            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.\n            batch = tf.Variable(0)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            print \"--- Get model and loss\"\n            # Get model and loss \n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl, NUM_CLASSES, bn_decay=bn_decay)\n            loss = MODEL.get_loss(pred, labels_pl, smpws_pl)\n            tf.summary.scalar('loss', loss)\n\n            correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)\n            tf.summary.scalar('accuracy', accuracy)\n\n            print \"--- Get training operator\"\n            # Get training operator\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n            train_op = optimizer.minimize(loss, global_step=batch)\n            \n            # Add ops to save and restore all the variables.\n            saver = tf.train.Saver()\n        \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = 
tf.Session(config=config)\n\n        # Add summary writers\n        merged = tf.summary.merge_all()\n        train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'), sess.graph)\n\n        # Init variables\n        init = tf.global_variables_initializer()\n        sess.run(init)\n        #sess.run(init, {is_training_pl: True})\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n\t       'smpws_pl': smpws_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch,\n               'end_points': end_points}\n\n        best_acc = -1\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n\n            train_one_epoch(sess, ops, train_writer)\n\t    if epoch%5==0:\n            \tacc = eval_one_epoch(sess, ops, test_writer)\n\t\tacc = eval_whole_scene_one_epoch(sess, ops, test_writer)\n            if acc > best_acc:\n                best_acc = acc\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"best_model_epoch_%03d.ckpt\"%(epoch)))\n                log_string(\"Model saved in file: %s\" % save_path)\n\n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\ndef get_batch_wdp(dataset, idxs, start_idx, end_idx):\n    bsize = end_idx-start_idx\n    batch_data = np.zeros((bsize, NUM_POINT, 3))\n    batch_label = np.zeros((bsize, NUM_POINT), dtype=np.int32)\n    batch_smpw = np.zeros((bsize, NUM_POINT), dtype=np.float32)\n    for i in range(bsize):\n        ps,seg,smpw = dataset[idxs[i+start_idx]]\n        
batch_data[i,...] = ps\n        batch_label[i,:] = seg\n\tbatch_smpw[i,:] = smpw\n\n\tdropout_ratio = np.random.random()*0.875 # 0-0.875\n        drop_idx = np.where(np.random.random((ps.shape[0]))<=dropout_ratio)[0]\n\tbatch_data[i,drop_idx,:] = batch_data[i,0,:]\n\tbatch_label[i,drop_idx] = batch_label[i,0]\n\tbatch_smpw[i,drop_idx] *= 0\n    return batch_data, batch_label, batch_smpw\n\ndef get_batch(dataset, idxs, start_idx, end_idx):\n    bsize = end_idx-start_idx\n    batch_data = np.zeros((bsize, NUM_POINT, 3))\n    batch_label = np.zeros((bsize, NUM_POINT), dtype=np.int32)\n    batch_smpw = np.zeros((bsize, NUM_POINT), dtype=np.float32)\n    for i in range(bsize):\n        ps,seg,smpw = dataset[idxs[i+start_idx]]\n        batch_data[i,...] = ps\n        batch_label[i,:] = seg\n\tbatch_smpw[i,:] = smpw\n    return batch_data, batch_label, batch_smpw\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    \n    # Shuffle train samples\n    train_idxs = np.arange(0, len(TRAIN_DATASET))\n    np.random.shuffle(train_idxs)\n    num_batches = len(TRAIN_DATASET)/BATCH_SIZE\n    \n    log_string(str(datetime.now()))\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    for batch_idx in range(num_batches):\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = (batch_idx+1) * BATCH_SIZE\n        batch_data, batch_label, batch_smpw = get_batch_wdp(TRAIN_DATASET, train_idxs, start_idx, end_idx)\n        # Augment batched point clouds by rotation\n\taug_data = provider.rotate_point_cloud_z(batch_data)\n        feed_dict = {ops['pointclouds_pl']: aug_data,\n                     ops['labels_pl']: batch_label,\n\t\t     ops['smpws_pl']:batch_smpw,\n                     ops['is_training_pl']: is_training,}\n        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n     
   train_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 2)\n        correct = np.sum(pred_val == batch_label)\n        total_correct += correct\n        total_seen += (BATCH_SIZE*NUM_POINT)\n        loss_sum += loss_val\n        if (batch_idx+1)%10 == 0:\n            log_string(' -- %03d / %03d --' % (batch_idx+1, num_batches))\n            log_string('mean loss: %f' % (loss_sum / 10))\n            log_string('accuracy: %f' % (total_correct / float(total_seen)))\n            total_correct = 0\n            total_seen = 0\n            loss_sum = 0\n\n# evaluate on randomly chopped scenes\ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n    test_idxs = np.arange(0, len(TEST_DATASET))\n    num_batches = len(TEST_DATASET)/BATCH_SIZE\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    total_correct_vox = 0\n    total_seen_vox = 0\n    total_seen_class_vox = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class_vox = [0 for _ in range(NUM_CLASSES)]\n    \n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n\n    labelweights = np.zeros(21)\n    labelweights_vox = np.zeros(21)\n    for batch_idx in range(num_batches):\n        start_idx = batch_idx * BATCH_SIZE\n        end_idx = (batch_idx+1) * BATCH_SIZE\n        batch_data, batch_label, batch_smpw = get_batch(TEST_DATASET, test_idxs, start_idx, end_idx)\n\n\taug_data = provider.rotate_point_cloud_z(batch_data)\n\n        feed_dict = {ops['pointclouds_pl']: aug_data,\n                     ops['labels_pl']: batch_label,\n\t  \t     ops['smpws_pl']: batch_smpw,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            
ops['loss'], ops['pred']], feed_dict=feed_dict)\n        test_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 2) # BxN\n        correct = np.sum((pred_val == batch_label) & (batch_label>0) & (batch_smpw>0)) # evaluate only on 20 categories but not unknown\n        total_correct += correct\n        total_seen += np.sum((batch_label>0) & (batch_smpw>0))\n        loss_sum += loss_val\n\ttmp,_ = np.histogram(batch_label,range(22))\n\tlabelweights += tmp\n        for l in range(NUM_CLASSES):\n            total_seen_class[l] += np.sum((batch_label==l) & (batch_smpw>0))\n            total_correct_class[l] += np.sum((pred_val==l) & (batch_label==l) & (batch_smpw>0))\n\n\tfor b in xrange(batch_label.shape[0]):\n\t    _, uvlabel, _ = pc_util.point_cloud_label_to_surface_voxel_label_fast(aug_data[b,batch_smpw[b,:]>0,:], np.concatenate((np.expand_dims(batch_label[b,batch_smpw[b,:]>0],1),np.expand_dims(pred_val[b,batch_smpw[b,:]>0],1)),axis=1), res=0.02)\n\t    total_correct_vox += np.sum((uvlabel[:,0]==uvlabel[:,1])&(uvlabel[:,0]>0))\n            total_seen_vox += np.sum(uvlabel[:,0]>0)\n\t    tmp,_ = np.histogram(uvlabel[:,0],range(22))\n\t    labelweights_vox += tmp\n\t    for l in range(NUM_CLASSES):\n                total_seen_class_vox[l] += np.sum(uvlabel[:,0]==l)\n                total_correct_class_vox[l] += np.sum((uvlabel[:,0]==l) & (uvlabel[:,1]==l))\n\n    log_string('eval mean loss: %f' % (loss_sum / float(num_batches)))\n    log_string('eval point accuracy vox: %f'% (total_correct_vox / float(total_seen_vox)))\n    log_string('eval point avg class acc vox: %f' % (np.mean(np.array(total_correct_class_vox[1:])/(np.array(total_seen_class_vox[1:],dtype=np.float)+1e-6))))\n    log_string('eval point accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval point avg class acc: %f' % (np.mean(np.array(total_correct_class[1:])/(np.array(total_seen_class[1:],dtype=np.float)+1e-6))))\n    labelweights_vox = 
labelweights_vox[1:].astype(np.float32)/np.sum(labelweights_vox[1:].astype(np.float32))\n    caliweights = np.array([0.388,0.357,0.038,0.033,0.017,0.02,0.016,0.025,0.002,0.002,0.002,0.007,0.006,0.022,0.004,0.0004,0.003,0.002,0.024,0.029])\n    log_string('eval point calibrated average acc: %f' % (np.average(np.array(total_correct_class[1:])/(np.array(total_seen_class[1:],dtype=np.float)+1e-6),weights=caliweights)))\n    per_class_str = 'vox based --------'\n    for l in range(1,NUM_CLASSES):\n\tper_class_str += 'class %d weight: %f, acc: %f; ' % (l,labelweights_vox[l-1],total_correct_class[l]/float(total_seen_class[l]))\n    log_string(per_class_str)\n    EPOCH_CNT += 1\n    return total_correct/float(total_seen)\n\n# evaluate on whole scenes to generate numbers provided in the paper\ndef eval_whole_scene_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n    test_idxs = np.arange(0, len(TEST_DATASET_WHOLE_SCENE))\n    num_batches = len(TEST_DATASET_WHOLE_SCENE)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n\n    total_correct_vox = 0\n    total_seen_vox = 0\n    total_seen_class_vox = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class_vox = [0 for _ in range(NUM_CLASSES)]\n    \n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION WHOLE SCENE----'%(EPOCH_CNT))\n\n    labelweights = np.zeros(21)\n    labelweights_vox = np.zeros(21)\n    is_continue_batch = False\n    \n    extra_batch_data = np.zeros((0,NUM_POINT,3))\n    extra_batch_label = np.zeros((0,NUM_POINT))\n    extra_batch_smpw = np.zeros((0,NUM_POINT))\n    for batch_idx in range(num_batches):\n\tif not is_continue_batch:\n            batch_data, batch_label, batch_smpw = TEST_DATASET_WHOLE_SCENE[batch_idx]\n\t    batch_data = 
np.concatenate((batch_data,extra_batch_data),axis=0)\n\t    batch_label = np.concatenate((batch_label,extra_batch_label),axis=0)\n\t    batch_smpw = np.concatenate((batch_smpw,extra_batch_smpw),axis=0)\n\telse:\n\t    batch_data_tmp, batch_label_tmp, batch_smpw_tmp = TEST_DATASET_WHOLE_SCENE[batch_idx]\n\t    batch_data = np.concatenate((batch_data,batch_data_tmp),axis=0)\n\t    batch_label = np.concatenate((batch_label,batch_label_tmp),axis=0)\n\t    batch_smpw = np.concatenate((batch_smpw,batch_smpw_tmp),axis=0)\n\tif batch_data.shape[0]<BATCH_SIZE:\n\t    is_continue_batch = True\n\t    continue\n\telif batch_data.shape[0]==BATCH_SIZE:\n\t    is_continue_batch = False\n\t    extra_batch_data = np.zeros((0,NUM_POINT,3))\n    \t    extra_batch_label = np.zeros((0,NUM_POINT))\n    \t    extra_batch_smpw = np.zeros((0,NUM_POINT))\n\telse:\n\t    is_continue_batch = False\n\t    extra_batch_data = batch_data[BATCH_SIZE:,:,:]\n    \t    extra_batch_label = batch_label[BATCH_SIZE:,:]\n    \t    extra_batch_smpw = batch_smpw[BATCH_SIZE:,:]\n\t    batch_data = batch_data[:BATCH_SIZE,:,:]\n    \t    batch_label = batch_label[:BATCH_SIZE,:]\n    \t    batch_smpw = batch_smpw[:BATCH_SIZE,:]\n\n\taug_data = batch_data\n        feed_dict = {ops['pointclouds_pl']: aug_data,\n                     ops['labels_pl']: batch_label,\n\t  \t     ops['smpws_pl']: batch_smpw,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['loss'], ops['pred']], feed_dict=feed_dict)\n\ttest_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 2) # BxN\n        correct = np.sum((pred_val == batch_label) & (batch_label>0) & (batch_smpw>0)) # evaluate only on 20 categories but not unknown\n        total_correct += correct\n        total_seen += np.sum((batch_label>0) & (batch_smpw>0))\n        loss_sum += loss_val\n\ttmp,_ = np.histogram(batch_label,range(22))\n\tlabelweights 
+= tmp\n        for l in range(NUM_CLASSES):\n            total_seen_class[l] += np.sum((batch_label==l) & (batch_smpw>0))\n            total_correct_class[l] += np.sum((pred_val==l) & (batch_label==l) & (batch_smpw>0))\n\n\tfor b in xrange(batch_label.shape[0]):\n\t    _, uvlabel, _ = pc_util.point_cloud_label_to_surface_voxel_label_fast(aug_data[b,batch_smpw[b,:]>0,:], np.concatenate((np.expand_dims(batch_label[b,batch_smpw[b,:]>0],1),np.expand_dims(pred_val[b,batch_smpw[b,:]>0],1)),axis=1), res=0.02)\n\t    total_correct_vox += np.sum((uvlabel[:,0]==uvlabel[:,1])&(uvlabel[:,0]>0))\n            total_seen_vox += np.sum(uvlabel[:,0]>0)\n\t    tmp,_ = np.histogram(uvlabel[:,0],range(22))\n\t    labelweights_vox += tmp\n\t    for l in range(NUM_CLASSES):\n                total_seen_class_vox[l] += np.sum(uvlabel[:,0]==l)\n                total_correct_class_vox[l] += np.sum((uvlabel[:,0]==l) & (uvlabel[:,1]==l))\n\n    log_string('eval whole scene mean loss: %f' % (loss_sum / float(num_batches)))\n    log_string('eval whole scene point accuracy vox: %f'% (total_correct_vox / float(total_seen_vox)))\n    log_string('eval whole scene point avg class acc vox: %f' % (np.mean(np.array(total_correct_class_vox[1:])/(np.array(total_seen_class_vox[1:],dtype=np.float)+1e-6))))\n    log_string('eval whole scene point accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval whole scene point avg class acc: %f' % (np.mean(np.array(total_correct_class[1:])/(np.array(total_seen_class[1:],dtype=np.float)+1e-6))))\n    labelweights = labelweights[1:].astype(np.float32)/np.sum(labelweights[1:].astype(np.float32))\n    labelweights_vox = labelweights_vox[1:].astype(np.float32)/np.sum(labelweights_vox[1:].astype(np.float32))\n    caliweights = np.array([0.388,0.357,0.038,0.033,0.017,0.02,0.016,0.025,0.002,0.002,0.002,0.007,0.006,0.022,0.004,0.0004,0.003,0.002,0.024,0.029])\n    caliacc = 
np.average(np.array(total_correct_class_vox[1:])/(np.array(total_seen_class_vox[1:],dtype=np.float)+1e-6),weights=caliweights)\n    log_string('eval whole scene point calibrated average acc vox: %f' % caliacc)\n\n    per_class_str = 'vox based --------'\n    for l in range(1,NUM_CLASSES):\n\tper_class_str += 'class %d weight: %f, acc: %f; ' % (l,labelweights_vox[l-1],total_correct_class_vox[l]/float(total_seen_class_vox[l]))\n    log_string(per_class_str)\n    EPOCH_CNT += 1\n    return caliacc\n\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/interpolate.cpp",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n\n// Find three nearest neigbors with square distance\n// input: xyz1 (b,n,3), xyz2(b,m,3)\n// output: dist (b,n,3), idx (b,n,3)\nvoid threenn_cpu(int b, int n, int m, const float *xyz1, const float *xyz2, float *dist, int *idx) {\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n\t    float x1=xyz1[j*3+0];\n\t    float y1=xyz1[j*3+1];\n\t    float z1=xyz1[j*3+2];\n            double best1=1e40; double best2=1e40; double best3=1e40;\n            int besti1=0; int besti2=0; int besti3=0;\n            for (int k=0;k<m;++k) {\n                float x2=xyz2[k*3+0];\n\t        float y2=xyz2[k*3+1];\n\t        float z2=xyz2[k*3+2];\n\t\t//float d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n\t\tdouble d=x2*x2+y2*y2+z2*z2;\n                if (d<best1) {\n                    best3=best2;\n                    besti3=besti2;\n                    best2=best1;\n                    besti2=besti1;\n                    best1=d;\n                    besti1=k;\n                } else if (d<best2) {\n                    best3=best2;\n                    besti3=besti2;\n                    best2=d;\n                    besti2=k;\n                } else if (d<best3) {\n                    best3=d;\n                    besti3=k;\n                }\n            } \n            dist[j*3]=best1;\n            idx[j*3]=besti1;\n            dist[j*3+1]=best2;\n            idx[j*3+1]=besti2;\n            dist[j*3+2]=best3;\n            idx[j*3+2]=besti3;\n        } \n        xyz1+=n*3;\n        xyz2+=m*3;\n        dist+=n*3;\n        idx+=n*3;\n    }\n} \n\n// 
CONSTANT WEIGHT TODO\n// input: dist (b,n,3)\n// output: weight (b,n,3)\nvoid get_weights_cpu(int b, int n, const float *dist, float *weight) {\n    const float w = 1.0/3.0;\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n            weight[j*3]=w;\n            weight[j*3+1]=w;\n            weight[j*3+2]=w;\n        } \n        dist+=n*3;\n        weight+=n*3;\n    }\n}\n\n// input: points (b,m,c), idx (b,n,3), weight (b,n,3)\n// output: out (b,n,c)\nvoid interpolate_cpu(int b, int m, int c, int n, const float *points, const int *idx, const float *weight, float *out) {\n     float w1,w2,w3;\n     int i1,i2,i3;\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n            w1=weight[j*3];\n            w2=weight[j*3+1];\n            w3=weight[j*3+2]; \n            i1=idx[j*3];\n            i2=idx[j*3+1];\n            i3=idx[j*3+2];\n            for (int l=0;l<c;++l) {\n                out[j*c+l] = points[i1*c+l]*w1 + points[i2*c+l]*w2 + points[i3*c+l]*w3;\n            }\n        } \n        points+=m*c;\n        idx+=n*3;\n        weight+=n*3;\n        out+=n*c;\n    }\n}\n\n// input: grad_out (b,n,c), idx (b,n,3), weight (b,n,3)\n// output: grad_points (b,m,c)\nvoid interpolate_grad_cpu(int b, int n, int c, int m, const float *grad_out, const int *idx, const float *weight, float *grad_points) {\n     float w1,w2,w3;\n     int i1,i2,i3;\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n            w1=weight[j*3];\n            w2=weight[j*3+1];\n            w3=weight[j*3+2]; \n            i1=idx[j*3];\n            i2=idx[j*3+1];\n            i3=idx[j*3+2];\n            for (int l=0;l<c;++l) {\n                grad_points[i1*c+l] += grad_out[j*c+l]*w1;\n                grad_points[i2*c+l] += grad_out[j*c+l]*w2;\n                grad_points[i3*c+l] += grad_out[j*c+l]*w3;\n            }\n        } \n        grad_out+=n*c;\n        idx+=n*3;\n        weight+=n*3;\n        grad_points+=m*c;\n    }\n}\n\nint main()\n{\n    int 
b=32,n=512,m=128,c=64;\n    float *xyz1=new float[b*n*3];\n    float *xyz2=new float[b*m*3];\n    float *dist=new float[b*n*3];\n    int *idx=new int[b*n*3];\n    memset(idx, 0, sizeof(int)*b*n*3);\n    float *weight=new float[b*n*3];\n    float *points=new float[b*m*c];\n    float *out=new float[b*n*c];\n    float *grad_out=new float[b*n*c]; // grad to out\n    memset(grad_out, 0, sizeof(float)*b*n*c);\n    float *grad_points=new float[b*m*c]; // grad to points\n    memset(grad_points, 0, sizeof(float)*b*m*c); // must be zeroed: interpolate_grad_cpu accumulates with +=\n    for (int i=0;i<b*n*3;i++)\n        xyz1[i]=randomf();\n    for (int i=0;i<b*m*3;i++)\n        xyz2[i]=randomf();\n    for (int i=0;i<b*m*c;i++)\n        points[i]=randomf();\n\n    double t0=get_time();\n    threenn_cpu(b,n,m,xyz1,xyz2,dist,idx);\n    printf(\"threenn cpu time %f\\n\",get_time()-t0);\n    \n    t0=get_time();\n    get_weights_cpu(b,n,dist,weight);\n    printf(\"get_weights_cpu cpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    interpolate_cpu(b,m,c,n,points,idx,weight,out);\n    printf(\"interpolate_cpu cpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    interpolate_grad_cpu(b,n,c,m,grad_out,idx,weight,grad_points);\n    printf(\"interpolate_grad_cpu cpu time %f\\n\",get_time()-t0);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/tf_interpolate.cpp",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include \"tensorflow/core/framework/shape_inference.h\"\n#include \"tensorflow/core/framework/common_shape_fns.h\"\nusing namespace tensorflow;\n\nREGISTER_OP(\"ThreeNN\")\n    .Input(\"xyz1: float32\")\n    .Input(\"xyz2: float32\")\n    .Output(\"dist: float32\")\n    .Output(\"idx: int32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        c->set_output(0, c->input(0));\n        c->set_output(1, c->input(0));\n        return Status::OK();\n    });\nREGISTER_OP(\"ThreeInterpolate\")\n    .Input(\"points: float32\")\n    .Input(\"idx: int32\")\n    .Input(\"weight: float32\")\n    .Output(\"out: float32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        ::tensorflow::shape_inference::ShapeHandle dims1; // (b,m,c)\n        c->WithRank(c->input(0), 3, &dims1);\n        ::tensorflow::shape_inference::ShapeHandle dims2; // (b,n,3)\n        c->WithRank(c->input(1), 3, &dims2);\n        // (b,n,c)\n        ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), c->Dim(dims2, 1), c->Dim(dims1, 2)});\n        c->set_output(0, output);\n        return Status::OK();\n    });\nREGISTER_OP(\"ThreeInterpolateGrad\")\n    .Input(\"points: float32\")\n    .Input(\"idx: int32\")\n    .Input(\"weight: float32\")\n    .Input(\"grad_out: float32\")\n    .Output(\"grad_points: float32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        c->set_output(0, c->input(0));\n        return Status::OK();\n    });\n\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n\n// Find three 
nearest neighbors with square distance\n// input: xyz1 (b,n,3), xyz2(b,m,3)\n// output: dist (b,n,3), idx (b,n,3)\nvoid threenn_cpu(int b, int n, int m, const float *xyz1, const float *xyz2, float *dist, int *idx) {\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n\t    float x1=xyz1[j*3+0];\n\t    float y1=xyz1[j*3+1];\n\t    float z1=xyz1[j*3+2];\n            double best1=1e40; double best2=1e40; double best3=1e40;\n            int besti1=0; int besti2=0; int besti3=0;\n            for (int k=0;k<m;++k) {\n                float x2=xyz2[k*3+0];\n\t        float y2=xyz2[k*3+1];\n\t        float z2=xyz2[k*3+2];\n\t\t//float d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n\t\tdouble d=(x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1);\n                if (d<best1) {\n                    best3=best2;\n                    besti3=besti2;\n                    best2=best1;\n                    besti2=besti1;\n                    best1=d;\n                    besti1=k;\n                } else if (d<best2) {\n                    best3=best2;\n                    besti3=besti2;\n                    best2=d;\n                    besti2=k;\n                } else if (d<best3) {\n                    best3=d;\n                    besti3=k;\n                }\n            } \n            dist[j*3]=best1;\n            idx[j*3]=besti1;\n            dist[j*3+1]=best2;\n            idx[j*3+1]=besti2;\n            dist[j*3+2]=best3;\n            idx[j*3+2]=besti3;\n        } \n        xyz1+=n*3;\n        xyz2+=m*3;\n        dist+=n*3;\n        idx+=n*3;\n    }\n} \n\n// input: points (b,m,c), idx (b,n,3), weight (b,n,3)\n// output: out (b,n,c)\nvoid threeinterpolate_cpu(int b, int m, int c, int n, const float *points, const int *idx, const float *weight, float *out) {\n     float w1,w2,w3;\n     int i1,i2,i3;\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n            w1=weight[j*3];\n            w2=weight[j*3+1];\n            
w3=weight[j*3+2]; \n            i1=idx[j*3];\n            i2=idx[j*3+1];\n            i3=idx[j*3+2];\n            for (int l=0;l<c;++l) {\n                out[j*c+l] = points[i1*c+l]*w1 + points[i2*c+l]*w2 + points[i3*c+l]*w3;\n            }\n        } \n        points+=m*c;\n        idx+=n*3;\n        weight+=n*3;\n        out+=n*c;\n    }\n}\n\n// input: grad_out (b,n,c), idx (b,n,3), weight (b,n,3)\n// output: grad_points (b,m,c)\nvoid threeinterpolate_grad_cpu(int b, int n, int c, int m, const float *grad_out, const int *idx, const float *weight, float *grad_points) {\n     float w1,w2,w3;\n     int i1,i2,i3;\n     for (int i=0;i<b;++i) {\n        for (int j=0;j<n;++j) {\n            w1=weight[j*3];\n            w2=weight[j*3+1];\n            w3=weight[j*3+2]; \n            i1=idx[j*3];\n            i2=idx[j*3+1];\n            i3=idx[j*3+2];\n            for (int l=0;l<c;++l) {\n                grad_points[i1*c+l] += grad_out[j*c+l]*w1;\n                grad_points[i2*c+l] += grad_out[j*c+l]*w2;\n                grad_points[i3*c+l] += grad_out[j*c+l]*w3;\n            }\n        } \n        grad_out+=n*c;\n        idx+=n*3;\n        weight+=n*3;\n        grad_points+=m*c;\n    }\n}\n\n\n\nclass ThreeNNOp : public OpKernel {\n    public:\n        explicit ThreeNNOp(OpKernelConstruction* context) : OpKernel(context) {}\n\n        void Compute(OpKernelContext* context) override {\n            const Tensor& xyz1_tensor = context->input(0);\n            OP_REQUIRES(context, xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"ThreeNN expects (b,n,3) xyz1 shape.\"));\n            int b = xyz1_tensor.shape().dim_size(0);\n            int n = xyz1_tensor.shape().dim_size(1);\n\n            const Tensor& xyz2_tensor = context->input(1);\n            OP_REQUIRES(context, xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"ThreeNN expects (b,m,3) xyz2 shape.\"));\n            int m = 
xyz2_tensor.shape().dim_size(1);\n\n            Tensor *dist_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,n,3}, &dist_tensor));\n            Tensor *idx_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,n,3}, &idx_tensor));\n\n            auto xyz1_flat = xyz1_tensor.flat<float>();\n            const float *xyz1 = &(xyz1_flat(0));\n            auto xyz2_flat = xyz2_tensor.flat<float>();\n            const float *xyz2 = &(xyz2_flat(0));\n            auto dist_flat = dist_tensor->flat<float>();\n            float *dist = &(dist_flat(0));\n            auto idx_flat = idx_tensor->flat<int>();\n            int *idx = &(idx_flat(0));\n            threenn_cpu(b,n,m,xyz1,xyz2,dist,idx);\n        }\n};\nREGISTER_KERNEL_BUILDER(Name(\"ThreeNN\").Device(DEVICE_CPU), ThreeNNOp);\n\n\n\nclass ThreeInterpolateOp: public OpKernel{\n    public:\n        explicit ThreeInterpolateOp(OpKernelConstruction * context):OpKernel(context){}\n\n        void Compute(OpKernelContext * context) override {\n            const Tensor& points_tensor=context->input(0);\n            OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument(\"ThreeInterpolate expects (b,m,c) points shape\"));\n            int b = points_tensor.shape().dim_size(0);\n            int m = points_tensor.shape().dim_size(1);\n            int c = points_tensor.shape().dim_size(2);\n\n            const Tensor& idx_tensor=context->input(1);\n            OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b && idx_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"ThreeInterpolate expects (b,n,3) idx shape\"));\n            int n = idx_tensor.shape().dim_size(1);\n            const Tensor& weight_tensor=context->input(2);\n            OP_REQUIRES(context,weight_tensor.dims()==3 && weight_tensor.shape().dim_size(0)==b && weight_tensor.shape().dim_size(1)==n && 
weight_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"ThreeInterpolate expects (b,n,3) weight shape\"));\n\n            Tensor * out_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,n,c}, &out_tensor));\n\n            auto points_flat = points_tensor.flat<float>();\n            const float *points = &(points_flat(0));\n            auto idx_flat = idx_tensor.flat<int>();\n            const int *idx = &(idx_flat(0));\n            auto weight_flat = weight_tensor.flat<float>();\n            const float *weight = &(weight_flat(0));\n            auto out_flat = out_tensor->flat<float>();\n            float *out = &(out_flat(0));\n            threeinterpolate_cpu(b,m,c,n,points,idx,weight,out);\n        }\n};\nREGISTER_KERNEL_BUILDER(Name(\"ThreeInterpolate\").Device(DEVICE_CPU),ThreeInterpolateOp);\n\n\nclass ThreeInterpolateGradOp: public OpKernel{\n    public:\n        explicit ThreeInterpolateGradOp(OpKernelConstruction * context):OpKernel(context){}\n\n        void Compute(OpKernelContext * context) override {\n            const Tensor& points_tensor=context->input(0);\n            OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument(\"ThreeInterpolateGrad expects (b,m,c) points shape\"));\n            int b = points_tensor.shape().dim_size(0);\n            int m = points_tensor.shape().dim_size(1);\n            int c = points_tensor.shape().dim_size(2);\n\n            const Tensor& idx_tensor=context->input(1);\n            OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument(\"ThreeInterpolateGrad expects (b,n,3) idx shape\"));\n            int n = idx_tensor.shape().dim_size(1);\n            const Tensor& weight_tensor=context->input(2);\n            OP_REQUIRES(context,weight_tensor.dims()==3 && weight_tensor.shape().dim_size(0)==b && weight_tensor.shape().dim_size(1)==n && weight_tensor.shape().dim_size(2)==3, 
errors::InvalidArgument(\"ThreeInterpolateGrad expects (b,n,3) weight shape\"));\n\n            const Tensor& grad_out_tensor=context->input(3);\n            OP_REQUIRES(context,grad_out_tensor.dims()==3 && grad_out_tensor.shape().dim_size(0)==b && grad_out_tensor.shape().dim_size(1)==n && grad_out_tensor.shape().dim_size(2)==c, errors::InvalidArgument(\"ThreeInterpolateGrad expects (b,n,c) grad_out shape\"));\n\n            Tensor * grad_points_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,m,c}, &grad_points_tensor));\n\n            auto points_flat = points_tensor.flat<float>();\n            const float *points = &(points_flat(0));\n            auto idx_flat = idx_tensor.flat<int>();\n            const int *idx = &(idx_flat(0));\n            auto weight_flat = weight_tensor.flat<float>();\n            const float *weight = &(weight_flat(0));\n            auto grad_out_flat = grad_out_tensor.flat<float>();\n            const float *grad_out = &(grad_out_flat(0));\n            auto grad_points_flat = grad_points_tensor->flat<float>();\n            float *grad_points = &(grad_points_flat(0));\n            memset(grad_points, 0, sizeof(float)*b*m*c);\n            threeinterpolate_grad_cpu(b,n,c,m,grad_out,idx,weight,grad_points);\n        }\n};\nREGISTER_KERNEL_BUILDER(Name(\"ThreeInterpolateGrad\").Device(DEVICE_CPU),ThreeInterpolateGradOp);\n\n\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/tf_interpolate.py",
    "content": "import tensorflow as tf\nfrom tensorflow.python.framework import ops\nimport sys\nimport os\nBASE_DIR = os.path.dirname(__file__)\nsys.path.append(BASE_DIR)\ninterpolate_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_interpolate_so.so'))\ndef three_nn(xyz1, xyz2):\n    '''\n    Input:\n        xyz1: (b,n,3) float32 array, unknown points\n        xyz2: (b,m,3) float32 array, known points\n    Output:\n        dist: (b,n,3) float32 array, distances to known points\n        idx: (b,n,3) int32 array, indices to known points\n    '''\n    return interpolate_module.three_nn(xyz1, xyz2)\nops.NoGradient('ThreeNN')\ndef three_interpolate(points, idx, weight):\n    '''\n    Input:\n        points: (b,m,c) float32 array, known points\n        idx: (b,n,3) int32 array, indices to known points\n        weight: (b,n,3) float32 array, weights on known points\n    Output:\n        out: (b,n,c) float32 array, interpolated point values\n    '''\n    return interpolate_module.three_interpolate(points, idx, weight)\n@tf.RegisterGradient('ThreeInterpolate')\ndef _three_interpolate_grad(op, grad_out):\n    points = op.inputs[0]\n    idx = op.inputs[1]\n    weight = op.inputs[2]\n    return [interpolate_module.three_interpolate_grad(points, idx, weight, grad_out), None, None]\n\nif __name__=='__main__':\n    import numpy as np\n    import time\n    np.random.seed(100)\n    pts = np.random.random((32,128,64)).astype('float32')\n    tmp1 = np.random.random((32,512,3)).astype('float32')\n    tmp2 = np.random.random((32,128,3)).astype('float32')\n    with tf.device('/cpu:0'):\n        points = tf.constant(pts)\n        xyz1 = tf.constant(tmp1)\n        xyz2 = tf.constant(tmp2)\n        dist, idx = three_nn(xyz1, xyz2)\n        weight = tf.ones_like(dist)/3.0\n        interpolated_points = three_interpolate(points, idx, weight)\n    with tf.Session('') as sess:\n        now = time.time() \n        for _ in range(100):\n            ret = sess.run(interpolated_points)\n    
    print time.time() - now\n        print ret.shape, ret.dtype\n        #print ret\n    \n    \n    \n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/tf_interpolate_compile.sh",
    "content": "# TF1.2\ng++ -std=c++11 tf_interpolate.cpp -o tf_interpolate_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -lcudart -L /usr/local/cuda-8.0/lib64/ -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n\n# TF1.4\n#g++ -std=c++11 tf_interpolate.cpp -o tf_interpolate_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -I /usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public -lcudart -L /usr/local/cuda-8.0/lib64/ -L/usr/local/lib/python2.7/dist-packages/tensorflow -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/tf_interpolate_op_test.py",
    "content": "import tensorflow as tf\nimport numpy as np\nfrom tf_interpolate import three_nn, three_interpolate\n\nclass GroupPointTest(tf.test.TestCase):\n  def test(self):\n    pass\n\n  def test_grad(self):\n    with self.test_session():\n      points = tf.constant(np.random.random((1,8,16)).astype('float32'))\n      print points\n      xyz1 = tf.constant(np.random.random((1,128,3)).astype('float32'))\n      xyz2 = tf.constant(np.random.random((1,8,3)).astype('float32'))\n      dist, idx = three_nn(xyz1, xyz2)\n      weight = tf.ones_like(dist)/3.0\n      interpolated_points = three_interpolate(points, idx, weight)\n      print interpolated_points\n      err = tf.test.compute_gradient_error(points, (1,8,16), interpolated_points, (1,128,16))\n      print err\n      self.assertLess(err, 1e-4) \n\nif __name__=='__main__':\n  tf.test.main() \n"
  },
  {
    "path": "pointnet2_tf/tf_ops/3d_interpolation/visu_interpolation.py",
    "content": "''' Visualize part segmentation '''\nimport os\nimport sys\nROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nsys.path.append('/home/rqi/Projects/toolkits/visualization')\nfrom show3d_balls import showpoints\nimport numpy as np\nfrom tf_interpolate import three_nn, three_interpolate\nimport tensorflow as tf\n\n\npts2 = np.array([[0,0,1],[1,0,0],[0,1,0],[1,1,0]]).astype('float32')\nxyz1 = np.random.random((100,3)).astype('float32')\nxyz2 = np.array([[0,0,0],[1,0,0],[0,1,0],[1,1,1]]).astype('float32')\n\ndef fun(xyz1,xyz2,pts2):\n    with tf.device('/cpu:0'):\n        points = tf.constant(np.expand_dims(pts2,0))\n        xyz1 = tf.constant(np.expand_dims(xyz1,0))\n        xyz2 = tf.constant(np.expand_dims(xyz2,0))\n        dist, idx = three_nn(xyz1, xyz2)\n        #weight = tf.ones_like(dist)/3.0\n        dist = tf.maximum(dist, 1e-10)\n        norm = tf.reduce_sum((1.0/dist),axis=2,keep_dims=True)\n        norm = tf.tile(norm, [1,1,3])\n        print norm\n        weight = (1.0/dist) / norm\n        interpolated_points = three_interpolate(points, idx, weight)\n    with tf.Session('') as sess:\n        tmp,pts1,d,w = sess.run([xyz1, interpolated_points, dist, weight])\n        #print w\n        pts1 = pts1.squeeze()\n    return pts1\n\npts1 = fun(xyz1,xyz2,pts2) \nall_pts = np.zeros((104,3))\nall_pts[0:100,:] = pts1\nall_pts[100:,:] = pts2\nall_xyz = np.zeros((104,3))\nall_xyz[0:100,:]=xyz1\nall_xyz[100:,:]=xyz2\nshowpoints(xyz2, pts2, ballradius=8)\nshowpoints(xyz1, pts1, ballradius=8)\nshowpoints(all_xyz, all_pts, ballradius=8)\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/.gitignore",
    "content": "a.out\nquery_ball_point\nquery_ball_point_block\nquery_ball_point_cuda\nquery_ball_point_grid\ntf_grouping_g.cu.o\ntf_grouping_so.so\nselection_sort\nselection_sort_cuda\nselection_sort_const_cuda\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/compile.sh",
    "content": "g++ query_ball_point.cpp -o query_ball_point\nnvcc query_ball_point.cu -o query_ball_point_cuda\nnvcc query_ball_point_block.cu -o query_ball_point_block\nnvcc query_ball_point_grid.cu -o query_ball_point_grid\ng++ -Wall selection_sort.cpp -o selection_sort\nnvcc selection_sort.cu -o selection_sort_cuda\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/query_ball_point.cpp",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n// input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)\n// output: idx (b,m,nsample)\nvoid query_ball_point_cpu(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            int cnt = 0;\n            for (int k=0;k<n;++k) {\n                if (cnt == nsample)\n                    break; // only pick the FIRST nsample points in the ball\n\t        float x2=xyz2[j*3+0];\n\t        float y2=xyz2[j*3+1];\n\t        float z2=xyz2[j*3+2];\n\t        float x1=xyz1[k*3+0];\n\t        float y1=xyz1[k*3+1];\n\t        float z1=xyz1[k*3+2];\n\t\tfloat d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n                if (d<radius) {\n                    if (cnt==0) { // set ALL indices to k, s.t. 
if there are less points in ball than nsample, we still have valid (repeating) indices\n                        for (int l=0;l<nsample;++l)\n                            idx[j*nsample+l] = k;\n                    }\n                    idx[j*nsample+cnt] = k;\n                    cnt+=1;\n                }\n            }\n        }\n        xyz1+=n*3;\n        xyz2+=m*3;\n        idx+=m*nsample;\n    }\n}\n\n\n// input: points (b,n,c), idx (b,m,nsample)\n// output: out (b,m,nsample,c)\nvoid group_point_cpu(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int k=0;k<nsample;++k) {\n                int ii = idx[j*nsample+k];\n                for (int l=0;l<c;++l) {\n                    out[j*nsample*c+k*c+l] = points[ii*c+l];\n                }\n            }\n        }\n        points+=n*c;\n        idx+=m*nsample;\n        out+=m*nsample*c;\n    }\n}\n\n// input: grad_out (b,m,nsample,c), idx (b,m,nsample), \n// output: grad_points (b,n,c)\nvoid group_point_grad_cpu(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int k=0;k<nsample;++k) {\n                int ii = idx[j*nsample+k];\n                for (int l=0;l<c;++l) {\n                     grad_points[ii*c+l] += grad_out[j*nsample*c+k*c+l];\n                }\n            }\n        }\n        idx+=m*nsample;\n        grad_out+=m*nsample*c;\n        grad_points+=n*c;\n    }\n}\n\nint main()\n{\n    int b=32,n=512,m=128,nsample=64,c=64;\n    float radius=0.1;\n    float *xyz1=new float[b*n*3];\n    float *xyz2=new float[b*m*3];\n    float *points=new float[b*n*c];\n    int *idx=new int[b*m*nsample];\n    memset(idx, 0, sizeof(int)*b*m*nsample);\n    float *out=new float[b*m*nsample*c];\n    float *grad_out=new float[b*m*nsample*c]; // grad to out\n    
memset(grad_out, 0, sizeof(float)*b*m*nsample*c);\n    float *grad_points=new float[b*n*c]; // grad to points\n    memset(grad_points, 0, sizeof(float)*b*n*c); // accumulated with +=, so must start at zero\n    for (int i=0;i<b*n*3;i++)\n        xyz1[i]=randomf();\n    for (int i=0;i<b*m*3;i++)\n        xyz2[i]=randomf();\n    for (int i=0;i<b*n*c;i++)\n        points[i]=randomf();\n\n    double t0=get_time();\n    query_ball_point_cpu(b,n,m,radius,nsample,xyz1,xyz2,idx);\n    printf(\"query_ball_point cpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_cpu(b,n,c,m,nsample,points,idx,out);\n    printf(\"group_point cpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_grad_cpu(b,n,c,m,nsample,grad_out,idx,grad_points);\n    printf(\"group_point_grad cpu time %f\\n\",get_time()-t0);\n\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/query_ball_point.cu",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n// input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)\n// output: idx (b,m,nsample)\n__global__ void query_ball_point_gpu(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            int cnt = 0;\n            for (int k=0;k<n;++k) {\n                if (cnt == nsample)\n                    break; // only pick the FIRST nsample points in the ball\n\t        float x2=xyz2[j*3+0];\n\t        float y2=xyz2[j*3+1];\n\t        float z2=xyz2[j*3+2];\n\t        float x1=xyz1[k*3+0];\n\t        float y1=xyz1[k*3+1];\n\t        float z1=xyz1[k*3+2];\n\t\tfloat d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n                if (d<radius) {\n                    if (cnt==0) { // set ALL indices to k, s.t. 
if there are less points in ball than nsample, we still have valid (repeating) indices\n                        for (int l=0;l<nsample;++l)\n                            idx[j*nsample+l] = k;\n                    }\n                    idx[j*nsample+cnt] = k;\n                    cnt+=1;\n                }\n            }\n        }\n        xyz1+=n*3;\n        xyz2+=m*3;\n        idx+=m*nsample;\n    }\n}\n\n\n// input: points (b,n,c), idx (b,m,nsample)\n// output: out (b,m,nsample,c)\n__global__ void group_point_gpu(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int k=0;k<nsample;++k) {\n                int ii = idx[j*nsample+k];\n                for (int l=0;l<c;++l) {\n                    out[j*nsample*c+k*c+l] = points[ii*c+l];\n                }\n            }\n        }\n        points+=n*c;\n        idx+=m*nsample;\n        out+=m*nsample*c;\n    }\n}\n\n// input: grad_out (b,m,nsample,c), idx (b,m,nsample), \n// output: grad_points (b,n,c)\n__global__ void group_point_grad_gpu(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points) {\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int k=0;k<nsample;++k) {\n                int ii = idx[j*nsample+k];\n                for (int l=0;l<c;++l) {\n                     grad_points[ii*c+l] += grad_out[j*nsample*c+k*c+l];\n                }\n            }\n        }\n        idx+=m*nsample;\n        grad_out+=m*nsample*c;\n        grad_points+=n*c;\n    }\n}\n\nint main()\n{\n    int b=32,n=512,m=128,nsample=64,c=64;\n    float radius=0.1;\n    float *xyz1, *xyz2, *points;\n    cudaMallocManaged(&xyz1, b*n*3*sizeof(float));\n    cudaMallocManaged(&xyz2, b*m*3*sizeof(float));\n    cudaMallocManaged(&points, b*n*c*sizeof(float));\n    int *idx;\n    cudaMallocManaged(&idx, b*m*nsample*sizeof(int));\n    memset(idx, 0, 
sizeof(int)*b*m*nsample);\n    float *out, *grad_out;\n    cudaMallocManaged(&out, b*m*nsample*c*sizeof(float));\n    cudaMallocManaged(&grad_out, b*m*nsample*c*sizeof(float));\n    memset(grad_out, 0, sizeof(float)*b*m*nsample*c);\n    float *grad_points;\n    cudaMallocManaged(&grad_points, b*n*c*sizeof(float));\n    memset(grad_points, 0, sizeof(float)*b*n*c); // accumulated with +=, so must start at zero\n\n    for (int i=0;i<b*n*3;i++)\n        xyz1[i]=randomf();\n    for (int i=0;i<b*m*3;i++)\n        xyz2[i]=randomf();\n    for (int i=0;i<b*n*c;i++)\n        points[i]=randomf();\n\n    double t0=get_time();\n    query_ball_point_gpu<<<1,1>>>(b,n,m,radius,nsample,xyz1,xyz2,idx);\n    cudaDeviceSynchronize();\n    printf(\"query_ball_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_gpu<<<1,1>>>(b,n,c,m,nsample,points,idx,out);\n    cudaDeviceSynchronize();\n    printf(\"group_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_grad_gpu<<<1,1>>>(b,n,c,m,nsample,grad_out,idx,grad_points);\n    cudaDeviceSynchronize();\n    printf(\"group_point_grad gpu time %f\\n\",get_time()-t0);\n\n    cudaFree(xyz1);\n    cudaFree(xyz2);\n    cudaFree(points);\n    cudaFree(idx);\n    cudaFree(out);\n    cudaFree(grad_out);\n    cudaFree(grad_points);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/query_ball_point_block.cu",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n// input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)\n// output: idx (b,m,nsample)\n__global__ void query_ball_point_gpu(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx) {\n    int index = threadIdx.x;\n    xyz1 += n*3*index;\n    xyz2 += m*3*index;\n    idx += m*nsample*index;\n\n    for (int j=0;j<m;++j) {\n        int cnt = 0;\n        for (int k=0;k<n;++k) {\n            if (cnt == nsample)\n                break; // only pick the FIRST nsample points in the ball\n            float x2=xyz2[j*3+0];\n            float y2=xyz2[j*3+1];\n            float z2=xyz2[j*3+2];\n            float x1=xyz1[k*3+0];\n            float y1=xyz1[k*3+1];\n            float z1=xyz1[k*3+2];\n    \tfloat d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n            if (d<radius) {\n                if (cnt==0) { // set ALL indices to k, s.t. 
if there are less points in ball than nsample, we still have valid (repeating) indices\n                    for (int l=0;l<nsample;++l)\n                        idx[j*nsample+l] = k;\n                }\n                idx[j*nsample+cnt] = k;\n                cnt+=1;\n            }\n        }\n    }\n}\n\n\n// input: points (b,n,c), idx (b,m,nsample)\n// output: out (b,m,nsample,c)\n__global__ void group_point_gpu(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out) {\n    int index = threadIdx.x;\n    points += n*c*index;\n    idx += m*nsample*index;\n    out += m*nsample*c*index;\n\n    for (int j=0;j<m;++j) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                out[j*nsample*c+k*c+l] = points[ii*c+l];\n            }\n        }\n    }\n}\n\n// input: grad_out (b,m,nsample,c), idx (b,m,nsample), \n// output: grad_points (b,n,c)\n__global__ void group_point_grad_gpu(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points) {\n    int index = threadIdx.x;\n    idx += m*nsample*index;\n    grad_out += m*nsample*c*index;\n    grad_points += n*c*index;\n\n    for (int j=0;j<m;++j) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                 grad_points[ii*c+l] += grad_out[j*nsample*c+k*c+l];\n            }\n        }\n    }\n}\n\nint main()\n{\n    int b=32,n=512,m=128,nsample=64,c=64;\n    float radius=0.1;\n    float *xyz1, *xyz2, *points;\n    cudaMallocManaged(&xyz1, b*n*3*sizeof(float));\n    cudaMallocManaged(&xyz2, b*m*3*sizeof(float));\n    cudaMallocManaged(&points, b*n*c*sizeof(float));\n    int *idx;\n    cudaMallocManaged(&idx, b*m*nsample*sizeof(int));\n    memset(idx, 0, sizeof(int)*b*m*nsample);\n    float *out, *grad_out;\n    cudaMallocManaged(&out, b*m*nsample*c*sizeof(float));\n    cudaMallocManaged(&grad_out, 
b*m*nsample*c*sizeof(float));\n    memset(grad_out, 0, sizeof(float)*b*m*nsample*c);\n    float *grad_points;\n    cudaMallocManaged(&grad_points, b*n*c*sizeof(float));\n    memset(grad_points, 0, sizeof(float)*b*n*c); // accumulated with +=, so must start at zero\n\n    for (int i=0;i<b*n*3;i++)\n        xyz1[i]=randomf();\n    for (int i=0;i<b*m*3;i++)\n        xyz2[i]=randomf();\n    for (int i=0;i<b*n*c;i++)\n        points[i]=randomf();\n\n    double t0=get_time();\n    query_ball_point_gpu<<<1,b>>>(b,n,m,radius,nsample,xyz1,xyz2,idx);\n    cudaDeviceSynchronize();\n    printf(\"query_ball_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_gpu<<<1,b>>>(b,n,c,m,nsample,points,idx,out);\n    cudaDeviceSynchronize();\n    printf(\"group_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_grad_gpu<<<1,b>>>(b,n,c,m,nsample,grad_out,idx,grad_points);\n    cudaDeviceSynchronize();\n    printf(\"group_point_grad gpu time %f\\n\",get_time()-t0);\n\n    cudaFree(xyz1);\n    cudaFree(xyz2);\n    cudaFree(points);\n    cudaFree(idx);\n    cudaFree(out);\n    cudaFree(grad_out);\n    cudaFree(grad_points);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/query_ball_point_grid.cu",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n// input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)\n// output: idx (b,m,nsample)\n__global__ void query_ball_point_gpu(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx) {\n    int batch_index = blockIdx.x;\n    xyz1 += n*3*batch_index;\n    xyz2 += m*3*batch_index;\n    idx += m*nsample*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n    \n    for (int j=index;j<m;j+=stride) {\n        int cnt = 0;\n        for (int k=0;k<n;++k) {\n            if (cnt == nsample)\n                break; // only pick the FIRST nsample points in the ball\n            float x2=xyz2[j*3+0];\n            float y2=xyz2[j*3+1];\n            float z2=xyz2[j*3+2];\n            float x1=xyz1[k*3+0];\n            float y1=xyz1[k*3+1];\n            float z1=xyz1[k*3+2];\n    \tfloat d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n            if (d<radius) {\n                if (cnt==0) { // set ALL indices to k, s.t. 
if there are less points in ball than nsample, we still have valid (repeating) indices\n                    for (int l=0;l<nsample;++l)\n                        idx[j*nsample+l] = k;\n                }\n                idx[j*nsample+cnt] = k;\n                cnt+=1;\n            }\n        }\n    }\n}\n\n\n// input: points (b,n,c), idx (b,m,nsample)\n// output: out (b,m,nsample,c)\n__global__ void group_point_gpu(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out) {\n    int batch_index = blockIdx.x;\n    points += n*c*batch_index;\n    idx += m*nsample*batch_index;\n    out += m*nsample*c*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n    \n    for (int j=index;j<m;j+=stride) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                out[j*nsample*c+k*c+l] = points[ii*c+l];\n            }\n        }\n    }\n}\n\n// input: grad_out (b,m,nsample,c), idx (b,m,nsample), \n// output: grad_points (b,n,c)\n__global__ void group_point_grad_gpu(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points) {\n    int batch_index = blockIdx.x;\n    idx += m*nsample*batch_index;\n    grad_out += m*nsample*c*batch_index;\n    grad_points += n*c*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    for (int j=index;j<m;j+=stride) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                 // Use atomic add to avoid race condition\n                 atomicAdd(&grad_points[ii*c+l], grad_out[j*nsample*c+k*c+l]);\n            }\n        }\n    }\n}\n\nint main()\n{\n    int b=32,n=512,m=128,nsample=64,c=64;\n    float radius=0.1;\n    float *xyz1, *xyz2, *points;\n    cudaMallocManaged(&xyz1, b*n*3*sizeof(float));\n    cudaMallocManaged(&xyz2, b*m*3*sizeof(float));\n    cudaMallocManaged(&points, 
b*n*c*sizeof(float));\n    int *idx;\n    cudaMallocManaged(&idx, b*m*nsample*sizeof(int));\n    memset(idx, 0, sizeof(int)*b*m*nsample);\n    float *out, *grad_out;\n    cudaMallocManaged(&out, b*m*nsample*c*sizeof(float));\n    cudaMallocManaged(&grad_out, b*m*nsample*c*sizeof(float));\n    memset(grad_out, 0, sizeof(float)*b*m*nsample*c);\n    float *grad_points;\n    cudaMallocManaged(&grad_points, b*n*c*sizeof(float));\n    memset(grad_points, 0, sizeof(float)*b*n*c); // accumulated via atomicAdd, so must start at zero\n\n    for (int i=0;i<b*n*3;i++)\n        xyz1[i]=randomf();\n    for (int i=0;i<b*m*3;i++)\n        xyz2[i]=randomf();\n    for (int i=0;i<b*n*c;i++)\n        points[i]=randomf();\n\n    double t0=get_time();\n    query_ball_point_gpu<<<b,256>>>(b,n,m,radius,nsample,xyz1,xyz2,idx);\n    cudaDeviceSynchronize();\n    printf(\"query_ball_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_gpu<<<b,256>>>(b,n,c,m,nsample,points,idx,out);\n    cudaDeviceSynchronize();\n    printf(\"group_point gpu time %f\\n\",get_time()-t0);\n\n    t0=get_time();\n    group_point_grad_gpu<<<b,256>>>(b,n,c,m,nsample,grad_out,idx,grad_points);\n    cudaDeviceSynchronize();\n    printf(\"group_point_grad gpu time %f\\n\",get_time()-t0);\n\n    cudaFree(xyz1);\n    cudaFree(xyz2);\n    cudaFree(points);\n    cudaFree(idx);\n    cudaFree(out);\n    cudaFree(grad_out);\n    cudaFree(grad_points);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/selection_sort.cpp",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n\n// input: k (1), distance matrix dist (b,m,n)\n// output: idx (b,m,n), val (b,m,n)\nvoid selection_sort_cpu(int b, int n, int m, int k, const float *dist, int *idx, float *val) {\n    float *p_dist;\n    float tmp;\n    int tmpi;\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int s=0;s<n;++s) {\n                val[i*m*n+j*n+s] = dist[i*m*n+j*n+s];\n                idx[i*m*n+j*n+s] = s;\n            }\n        }\n    }\n\n    for (int i=0;i<b;++i) {\n        for (int j=0;j<m;++j) {\n            for (int s=0;s<n;++s)\n                printf(\"%f \", dist[i*m*n+j*n+s]);\n            printf(\"\\n\");\n            p_dist = val+j*n;\n            // selection sort for the first k elements\n            for (int s=0;s<k;++s) {\n                int min=s; \n                // find the min\n                for (int t=s+1;t<n;++t) {\n                    if (p_dist[t]<p_dist[min]) {\n                        min = t;\n                    }\n                }\n                printf(\"%d\\n\", min);\n                // swap min-th and i-th element\n                if (min!=s) {\n                    tmp = p_dist[min];\n                    p_dist[min] = p_dist[s];\n                    p_dist[s] = tmp;\n                    tmpi = idx[j*n+min];\n                    idx[j*n+min] = idx[j*n+s];\n                    idx[j*n+s] = tmpi;\n                }       \n            }\n        }\n        idx+=m*n;\n        val+=m*n;\n    }\n}\n\nint main()\n{\n    //int b=32,n=10000,m=1000,k=128;\n    int b=2,n=4,m=2,k=3;\n    float *dist=new float[b*m*n];\n    int 
*idx=new int[b*m*n];\n    float *val=new float[b*m*n];\n    memset(idx, 0, sizeof(int)*b*m*n);\n    //for (int i=0;i<b*n*m;i++)\n    //    dist[i]=randomf();\n    for (int i=0;i<b*n*m;i++) {\n        dist[i] = float(10-i);\n        printf(\"%f \", dist[i]);\n    }\n    printf(\"\\n\");\n\n\n\n    double t0=get_time();\n    selection_sort_cpu(b,n,m,k,dist,idx,val);\n    printf(\"selection sort cpu time %f\\n\",get_time()-t0);\n\n    for (int i=0;i<b*n*m;++i)\n        printf(\"%d \", idx[i]);\n    printf(\"\\n\");\n    for (int i=0;i<b*n*m;++i)\n        printf(\"%f \", val[i]);\n    printf(\"\\n\");\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/selection_sort.cu",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n\n// input: k (1), distance matrix dist (b,m,n)\n// output: idx (b,m,k), val (b,m,k)\n__global__ void selection_sort_gpu(int b, int n, int m, int k, float *dist, int *idx, float *val) {\n    int batch_index = blockIdx.x;\n    dist+=m*n*batch_index;\n    idx+=m*k*batch_index;\n    val+=m*k*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    float *p_dist;\n    for (int j=index;j<m;j+=stride) {\n        p_dist = dist+j*n;\n        // selection sort for the first k elements\n        for (int s=0;s<k;++s) {\n            int min=s;\n            // find the min\n            for (int t=s+1;t<n;++t) {\n                if (p_dist[t]<p_dist[min]) {\n                    min = t;\n                }\n            }\n            // update idx and val; outputs are (b,m,k), so index with j*k+s, not j*n+s\n            idx[j*k+s] = min;\n            val[j*k+s] = p_dist[min];\n            // swap min-th and s-th element\n            float tmp = p_dist[min];\n            p_dist[min] = p_dist[s];\n            p_dist[s] = tmp;\n        }\n    }\n}\n\nint main()\n{\n    //int b=32,n=10000,m=1000,k=128;\n    int b=32,n=2048,m=512,k=128;\n    float *dist;\n    int *idx;\n    float *val;\n    cudaMallocManaged(&dist, b*m*n*sizeof(float));\n    cudaMallocManaged(&idx, b*m*k*sizeof(int));\n    cudaMallocManaged(&val, b*m*k*sizeof(float));\n    cudaMemset(idx, 0, sizeof(int)*b*m*k);\n    for (int i=0;i<b*n*m;i++)\n        dist[i]=randomf();\n\n    double t0=get_time();\n    selection_sort_gpu<<<b,256>>>(b,n,m,k,dist,idx,val);\n    cudaDeviceSynchronize();\n    printf(\"selection sort gpu time %f\\n\",get_time()-t0);\n\n    cudaFree(dist);\n    cudaFree(idx);\n    cudaFree(val);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/test/selection_sort_const.cu",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include <string>\n#include <vector>\nusing namespace std;\nfloat randomf(){\n    return (rand()+0.5)/(RAND_MAX+1.0);\n}\nstatic double get_time(){\n    timespec tp;\n    clock_gettime(CLOCK_MONOTONIC,&tp);\n    return tp.tv_sec+tp.tv_nsec*1e-9;\n}\n\n// input: k (1), distance matrix dist (b,m,n)\n// output: idx (b,m,n), dist_out (b,m,n)\n__global__ void selection_sort_gpu(int b, int n, int m, int k, const float *dist, int *outi, float *out) {\n    int batch_index = blockIdx.x;\n    dist+=m*n*batch_index;\n    outi+=m*n*batch_index;\n    out+=m*n*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    // copy from dist to dist_out\n    for (int j=index;j<m;j+=stride) {\n        for (int s=0;s<n;++s) {\n            out[j*n+s] = dist[j*n+s];\n            outi[j*n+s] = s;\n        }\n    }\n\n    float *p_dist;\n    for (int j=index;j<m;j+=stride) {\n        p_dist = out+j*n;\n        // selection sort for the first k elements\n        for (int s=0;s<k;++s) {\n            int min=s; \n            // find the min\n            for (int t=s+1;t<n;++t) {\n                if (p_dist[t]<p_dist[min]) {\n                    min = t;\n                }\n            }\n            // swap min-th and i-th element\n            if (min!=s) {\n                float tmp = p_dist[min];\n                p_dist[min] = p_dist[s];\n                p_dist[s] = tmp;\n                int tmpi = outi[j*n+min];\n                outi[j*n+min] = outi[j*n+s];\n                outi[j*n+s] = tmpi;\n            }\n        }\n    }\n}\n\nint main()\n{\n    //int b=32,n=10000,m=1000,k=128;\n    int b=32,n=2048,m=512,k=128;\n    //int b=2,n=4,m=2,k=3;\n    float *dist;\n    int *idx;\n    float *dist_out;\n    cudaMallocManaged(&dist, b*m*n*sizeof(float));\n    cudaMallocManaged(&idx, b*m*n*sizeof(int));\n    
cudaMallocManaged(&dist_out, b*m*n*sizeof(float));\n    cudaMemset(idx, 0, sizeof(int)*b*m*n);\n    for (int i=0;i<b*n*m;i++)\n        dist[i]=randomf();\n    //for (int i=0;i<b*n*m;i++) {\n    //    dist[i] = float(10-i);\n    //    printf(\"%f \", dist[i]);\n    //}\n    //printf(\"\\n\");\n\n    double t0=get_time();\n    selection_sort_gpu<<<b,256>>>(b,n,m,k,dist,idx,dist_out);\n    cudaDeviceSynchronize();\n    printf(\"selection sort gpu time %f\\n\",get_time()-t0);\n\n    //for (int i=0;i<b*n*m;++i)\n    //    printf(\"%d \", idx[i]);\n    //printf(\"\\n\");\n\n    cudaFree(dist);\n    cudaFree(idx);\n    cudaFree(dist_out);\n    return 0;\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/tf_grouping.cpp",
    "content": "#include <cstdio>\n#include <ctime>\n#include <cstring> // memset\n#include <cstdlib> // rand, RAND_MAX\n#include <cmath> // sqrtf\n#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include \"tensorflow/core/framework/shape_inference.h\"\n#include \"tensorflow/core/framework/common_shape_fns.h\"\n#include <cuda_runtime.h>\nusing namespace tensorflow;\n\nREGISTER_OP(\"QueryBallPoint\")\n    .Attr(\"radius: float\")\n    .Attr(\"nsample: int\")\n    .Input(\"xyz1: float32\")\n    .Input(\"xyz2: float32\")\n    .Output(\"idx: int32\")\n    .Output(\"pts_cnt: int32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoint * 3\n        c->WithRank(c->input(1), 3, &dims2);\n        int nsample;\n        TF_RETURN_IF_ERROR(c->GetAttr(\"nsample\", &nsample));\n        ::tensorflow::shape_inference::ShapeHandle output1 = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1), nsample});\n        c->set_output(0, output1);\n        ::tensorflow::shape_inference::ShapeHandle output2 = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1)});\n        c->set_output(1, output2);\n        return Status::OK();\n    });\nREGISTER_OP(\"SelectionSort\")\n    .Attr(\"k: int\")\n    .Input(\"dist: float32\")\n    .Output(\"outi: int32\")\n    .Output(\"out: float32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        c->set_output(0, c->input(0));\n        c->set_output(1, c->input(0));\n        return Status::OK();\n    });\nREGISTER_OP(\"GroupPoint\")\n    .Input(\"points: float32\")\n    .Input(\"idx: int32\")\n    .Output(\"out: float32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ndataset * channels\n        c->WithRank(c->input(0), 3, &dims1);\n        
::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoints * nsample\n        c->WithRank(c->input(1), 3, &dims2);\n        // batch_size * npoints * nsample * channels\n        ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1), c->Dim(dims2, 2), c->Dim(dims1, 2)});\n        c->set_output(0, output);\n        return Status::OK();\n    });\nREGISTER_OP(\"GroupPointGrad\")\n    .Input(\"points: float32\")\n    .Input(\"idx: int32\")\n    .Input(\"grad_out: float32\")\n    .Output(\"grad_points: float32\")\n    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n        c->set_output(0, c->input(0));\n        return Status::OK();\n    });\n\n\nvoid queryBallPointLauncher(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt);\nclass QueryBallPointGpuOp : public OpKernel {\n    public:\n        explicit QueryBallPointGpuOp(OpKernelConstruction* context) : OpKernel(context) {\n            OP_REQUIRES_OK(context, context->GetAttr(\"radius\", &radius_));\n            OP_REQUIRES(context, radius_ > 0, errors::InvalidArgument(\"QueryBallPoint expects positive radius\"));\n\n            OP_REQUIRES_OK(context, context->GetAttr(\"nsample\", &nsample_));\n            OP_REQUIRES(context, nsample_ > 0, errors::InvalidArgument(\"QueryBallPoint expects positive nsample\"));\n        }\n\n        void Compute(OpKernelContext* context) override {\n            const Tensor& xyz1_tensor = context->input(0);\n            OP_REQUIRES(context, xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"QueryBallPoint expects (batch_size, ndataset, 3) xyz1 shape.\"));\n            int b = xyz1_tensor.shape().dim_size(0);\n            int n = xyz1_tensor.shape().dim_size(1);\n\n            const Tensor& xyz2_tensor = context->input(1);\n            OP_REQUIRES(context, xyz2_tensor.dims()==3 && 
xyz2_tensor.shape().dim_size(2)==3, errors::InvalidArgument(\"QueryBallPoint expects (batch_size, npoint, 3) xyz2 shape.\"));\n            int m = xyz2_tensor.shape().dim_size(1);\n\n            Tensor *idx_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,m,nsample_}, &idx_tensor));\n            Tensor *pts_cnt_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,m}, &pts_cnt_tensor));\n\n            auto xyz1_flat = xyz1_tensor.flat<float>();\n            const float *xyz1 = &(xyz1_flat(0));\n            auto xyz2_flat = xyz2_tensor.flat<float>();\n            const float *xyz2 = &(xyz2_flat(0));\n            auto idx_flat = idx_tensor->flat<int>();\n            int *idx = &(idx_flat(0));\n            auto pts_cnt_flat = pts_cnt_tensor->flat<int>();\n            int *pts_cnt = &(pts_cnt_flat(0));\n            queryBallPointLauncher(b,n,m,radius_,nsample_,xyz1,xyz2,idx,pts_cnt);\n        }\n    private:\n        float radius_;\n        int nsample_;\n};\nREGISTER_KERNEL_BUILDER(Name(\"QueryBallPoint\").Device(DEVICE_GPU), QueryBallPointGpuOp);\n\nvoid selectionSortLauncher(int b, int n, int m, int k, const float *dist, int *outi, float *out);\nclass SelectionSortGpuOp : public OpKernel {\n    public:\n        explicit SelectionSortGpuOp(OpKernelConstruction* context) : OpKernel(context) {\n            OP_REQUIRES_OK(context, context->GetAttr(\"k\", &k_));\n            OP_REQUIRES(context, k_ > 0, errors::InvalidArgument(\"SelectionSort expects positive k\"));\n        }\n\n        void Compute(OpKernelContext* context) override {\n            const Tensor& dist_tensor = context->input(0);\n            OP_REQUIRES(context, dist_tensor.dims()==3, errors::InvalidArgument(\"SelectionSort expects (b,m,n) dist shape.\"));\n            int b = dist_tensor.shape().dim_size(0);\n            int m = dist_tensor.shape().dim_size(1);\n            int n = 
dist_tensor.shape().dim_size(2);\n\n            Tensor *outi_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,m,n}, &outi_tensor));\n            Tensor *out_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,m,n}, &out_tensor));\n\n            auto dist_flat = dist_tensor.flat<float>();\n            const float *dist = &(dist_flat(0));\n            auto outi_flat = outi_tensor->flat<int>();\n            int *outi = &(outi_flat(0));\n            auto out_flat = out_tensor->flat<float>();\n            float *out = &(out_flat(0));\n            selectionSortLauncher(b,n,m,k_,dist,outi,out);\n        }\n    private:\n        int k_;\n};\nREGISTER_KERNEL_BUILDER(Name(\"SelectionSort\").Device(DEVICE_GPU), SelectionSortGpuOp);\n\n\nvoid groupPointLauncher(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out);\nclass GroupPointGpuOp: public OpKernel{\n    public:\n        explicit GroupPointGpuOp(OpKernelConstruction * context):OpKernel(context){}\n\n        void Compute(OpKernelContext * context) override {\n            const Tensor& points_tensor=context->input(0);\n            OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument(\"GroupPoint expects (batch_size, num_points, channel) points shape\"));\n            int b = points_tensor.shape().dim_size(0);\n            int n = points_tensor.shape().dim_size(1);\n            int c = points_tensor.shape().dim_size(2);\n\n            const Tensor& idx_tensor=context->input(1);\n            OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument(\"GroupPoint expects (batch_size, npoints, nsample) idx shape\"));\n            int m = idx_tensor.shape().dim_size(1);\n            int nsample = idx_tensor.shape().dim_size(2);\n\n            Tensor * out_tensor = nullptr;\n            OP_REQUIRES_OK(context, 
context->allocate_output(0,TensorShape{b,m,nsample,c}, &out_tensor));\n\n            auto points_flat = points_tensor.flat<float>();\n            const float *points = &(points_flat(0));\n            auto idx_flat = idx_tensor.flat<int>();\n            const int *idx = &(idx_flat(0));\n            auto out_flat = out_tensor->flat<float>();\n            float *out = &(out_flat(0));\n            groupPointLauncher(b,n,c,m,nsample,points,idx,out);\n        }\n};\nREGISTER_KERNEL_BUILDER(Name(\"GroupPoint\").Device(DEVICE_GPU),GroupPointGpuOp);\n\nvoid groupPointGradLauncher(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points);\nclass GroupPointGradGpuOp: public OpKernel{\n    public:\n        explicit GroupPointGradGpuOp(OpKernelConstruction * context):OpKernel(context){}\n\n        void Compute(OpKernelContext * context) override {\n            const Tensor& points_tensor=context->input(0);\n            OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument(\"GroupPointGrad expects (batch_size, num_points, channel) points shape\"));\n            int b = points_tensor.shape().dim_size(0);\n            int n = points_tensor.shape().dim_size(1);\n            int c = points_tensor.shape().dim_size(2);\n\n            const Tensor& idx_tensor=context->input(1);\n            OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument(\"GroupPointGrad expects (batch_size, npoints, nsample) idx shape\"));\n            int m = idx_tensor.shape().dim_size(1);\n            int nsample = idx_tensor.shape().dim_size(2);\n\n            const Tensor& grad_out_tensor=context->input(2);\n            OP_REQUIRES(context,grad_out_tensor.dims()==4 && grad_out_tensor.shape().dim_size(0)==b && grad_out_tensor.shape().dim_size(1)==m && grad_out_tensor.shape().dim_size(2)==nsample && grad_out_tensor.shape().dim_size(3)==c, errors::InvalidArgument(\"GroupPointGrad expects (batch_size, 
npoints, nsample, channel) grad_out shape\"));\n\n            Tensor * grad_points_tensor = nullptr;\n            OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,n,c}, &grad_points_tensor));\n\n            auto points_flat = points_tensor.flat<float>();\n            const float *points = &(points_flat(0));\n            auto idx_flat = idx_tensor.flat<int>();\n            const int *idx = &(idx_flat(0));\n            auto grad_out_flat = grad_out_tensor.flat<float>();\n            const float *grad_out = &(grad_out_flat(0));\n            auto grad_points_flat = grad_points_tensor->flat<float>();\n            float *grad_points = &(grad_points_flat(0));\n            cudaMemset(grad_points, 0, sizeof(float)*b*n*c);\n            groupPointGradLauncher(b,n,c,m,nsample,grad_out,idx,grad_points);\n        }\n};\nREGISTER_KERNEL_BUILDER(Name(\"GroupPointGrad\").Device(DEVICE_GPU),GroupPointGradGpuOp);\n\n\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/tf_grouping.py",
    "content": "import tensorflow as tf\nfrom tensorflow.python.framework import ops\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\ngrouping_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_grouping_so.so'))\ndef query_ball_point(radius, nsample, xyz1, xyz2):\n    '''\n    Input:\n        radius: float32, ball search radius\n        nsample: int32, number of points selected in each ball region\n        xyz1: (batch_size, ndataset, 3) float32 array, input points\n        xyz2: (batch_size, npoint, 3) float32 array, query points\n    Output:\n        idx: (batch_size, npoint, nsample) int32 array, indices to input points\n        pts_cnt: (batch_size, npoint) int32 array, number of unique points in each local region\n    '''\n    #return grouping_module.query_ball_point(radius, nsample, xyz1, xyz2)\n    return grouping_module.query_ball_point(xyz1, xyz2, radius, nsample)\nops.NoGradient('QueryBallPoint')\ndef select_top_k(k, dist):\n    '''\n    Input:\n        k: int32, number of k SMALLEST elements selected\n        dist: (b,m,n) float32 array, distance matrix, m query points, n dataset points\n    Output:\n        idx: (b,m,n) int32 array, first k in n are indices to the top k\n        dist_out: (b,m,n) float32 array, first k in n are the top k\n    '''\n    return grouping_module.selection_sort(dist, k)\nops.NoGradient('SelectionSort')\ndef group_point(points, idx):\n    '''\n    Input:\n        points: (batch_size, ndataset, channel) float32 array, points to sample from\n        idx: (batch_size, npoint, nsample) int32 array, indices to points\n    Output:\n        out: (batch_size, npoint, nsample, channel) float32 array, values sampled from points\n    '''\n    return grouping_module.group_point(points, idx)\n@tf.RegisterGradient('GroupPoint')\ndef _group_point_grad(op, grad_out):\n    points = op.inputs[0]\n    idx = op.inputs[1]\n    return [grouping_module.group_point_grad(points, idx, 
grad_out), None]\n\ndef knn_point(k, xyz1, xyz2):\n    '''\n    Input:\n        k: int32, number of k in k-nn search\n        xyz1: (batch_size, ndataset, c) float32 array, input points\n        xyz2: (batch_size, npoint, c) float32 array, query points\n    Output:\n        val: (batch_size, npoint, k) float32 array, L2 distances\n        idx: (batch_size, npoint, k) int32 array, indices to input points\n    '''\n    b = xyz1.get_shape()[0].value\n    n = xyz1.get_shape()[1].value\n    c = xyz1.get_shape()[2].value\n    m = xyz2.get_shape()[1].value\n    print b, n, c, m\n    print xyz1, (b,1,n,c)\n    xyz1 = tf.tile(tf.reshape(xyz1, (b,1,n,c)), [1,m,1,1])\n    xyz2 = tf.tile(tf.reshape(xyz2, (b,m,1,c)), [1,1,n,1])\n    dist = tf.reduce_sum((xyz1-xyz2)**2, -1)\n    print dist, k\n    outi, out = select_top_k(k, dist)\n    idx = tf.slice(outi, [0,0,0], [-1,-1,k])\n    val = tf.slice(out, [0,0,0], [-1,-1,k])\n    print idx, val\n    #val, idx = tf.nn.top_k(-dist, k=k) # ONLY SUPPORT CPU\n    return val, idx\n\nif __name__=='__main__':\n    knn=True\n    import numpy as np\n    import time\n    np.random.seed(100)\n    pts = np.random.random((32,512,64)).astype('float32')\n    tmp1 = np.random.random((32,512,3)).astype('float32')\n    tmp2 = np.random.random((32,128,3)).astype('float32')\n    with tf.device('/gpu:1'):\n        points = tf.constant(pts)\n        xyz1 = tf.constant(tmp1)\n        xyz2 = tf.constant(tmp2)\n        radius = 0.1 \n        nsample = 64\n        if knn:\n            _, idx = knn_point(nsample, xyz1, xyz2)\n            grouped_points = group_point(points, idx)\n        else:\n            idx, _ = query_ball_point(radius, nsample, xyz1, xyz2)\n            grouped_points = group_point(points, idx)\n            #grouped_points_grad = tf.ones_like(grouped_points)\n            #points_grad = tf.gradients(grouped_points, points, grouped_points_grad)\n    with tf.Session('') as sess:\n        now = time.time() \n        for _ in range(100):\n        
    ret = sess.run(grouped_points)\n        print time.time() - now\n        print ret.shape, ret.dtype\n        print ret\n    \n    \n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/tf_grouping_compile.sh",
    "content": "#!/bin/bash\n/usr/local/cuda-8.0/bin/nvcc tf_grouping_g.cu -o tf_grouping_g.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n\n# TF1.2\ng++ -std=c++11 tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -lcudart -L /usr/local/cuda-8.0/lib64/ -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n\n# TF1.4\n#g++ -std=c++11 tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -I /usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public -lcudart -L /usr/local/cuda-8.0/lib64/ -L/usr/local/lib/python2.7/dist-packages/tensorflow -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/tf_grouping_g.cu",
    "content": "// input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)\n// output: idx (b,m,nsample), pts_cnt (b,m)\n__global__ void query_ball_point_gpu(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt) {\n    int batch_index = blockIdx.x;\n    xyz1 += n*3*batch_index;\n    xyz2 += m*3*batch_index;\n    idx += m*nsample*batch_index;\n    pts_cnt += m*batch_index; // counting how many unique points selected in local region\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n    \n    for (int j=index;j<m;j+=stride) {\n        int cnt = 0;\n        for (int k=0;k<n;++k) {\n            if (cnt == nsample)\n                break; // only pick the FIRST nsample points in the ball\n            float x2=xyz2[j*3+0];\n            float y2=xyz2[j*3+1];\n            float z2=xyz2[j*3+2];\n            float x1=xyz1[k*3+0];\n            float y1=xyz1[k*3+1];\n            float z1=xyz1[k*3+2];\n    \t    float d=max(sqrtf((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)),1e-20f);\n            if (d<radius) {\n                if (cnt==0) { // set ALL indices to k, s.t. 
if there are less points in ball than nsample, we still have valid (repeating) indices\n                    for (int l=0;l<nsample;++l)\n                        idx[j*nsample+l] = k;\n                }\n                idx[j*nsample+cnt] = k;\n                cnt+=1;\n            }\n        }\n        pts_cnt[j] = cnt;\n    }\n}\n\n// input: points (b,n,c), idx (b,m,nsample)\n// output: out (b,m,nsample,c)\n__global__ void group_point_gpu(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out) {\n    int batch_index = blockIdx.x;\n    points += n*c*batch_index;\n    idx += m*nsample*batch_index;\n    out += m*nsample*c*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n    \n    for (int j=index;j<m;j+=stride) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                out[j*nsample*c+k*c+l] = points[ii*c+l];\n            }\n        }\n    }\n}\n\n// input: grad_out (b,m,nsample,c), idx (b,m,nsample), \n// output: grad_points (b,n,c)\n__global__ void group_point_grad_gpu(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points) {\n    int batch_index = blockIdx.x;\n    idx += m*nsample*batch_index;\n    grad_out += m*nsample*c*batch_index;\n    grad_points += n*c*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    for (int j=index;j<m;j+=stride) {\n        for (int k=0;k<nsample;++k) {\n            int ii = idx[j*nsample+k];\n            for (int l=0;l<c;++l) {\n                 atomicAdd(&grad_points[ii*c+l], grad_out[j*nsample*c+k*c+l]);\n            }\n        }\n    }\n}\n\n// input: k (1), distance matrix dist (b,m,n)\n// output: idx (b,m,n), dist_out (b,m,n)\n// only the top k results within n are useful\n__global__ void selection_sort_gpu(int b, int n, int m, int k, const float *dist, int *outi, float *out) {\n    int batch_index = blockIdx.x;\n    
dist+=m*n*batch_index;\n    outi+=m*n*batch_index;\n    out+=m*n*batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    // copy from dist to dist_out\n    for (int j=index;j<m;j+=stride) {\n        for (int s=0;s<n;++s) {\n            out[j*n+s] = dist[j*n+s];\n            outi[j*n+s] = s;\n        }\n    }\n\n    float *p_dist;\n    for (int j=index;j<m;j+=stride) {\n        p_dist = out+j*n;\n        // selection sort for the first k elements\n        for (int s=0;s<k;++s) {\n            int min=s; \n            // find the min\n            for (int t=s+1;t<n;++t) {\n                if (p_dist[t]<p_dist[min]) {\n                    min = t;\n                }\n            }\n            // swap min-th and i-th element\n            if (min!=s) {\n                float tmp = p_dist[min];\n                p_dist[min] = p_dist[s];\n                p_dist[s] = tmp;\n                int tmpi = outi[j*n+min];\n                outi[j*n+min] = outi[j*n+s];\n                outi[j*n+s] = tmpi;\n            }\n        }\n    }\n}\n\nvoid queryBallPointLauncher(int b, int n, int m, float radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt) {\n    query_ball_point_gpu<<<b,256>>>(b,n,m,radius,nsample,xyz1,xyz2,idx,pts_cnt);\n    //cudaDeviceSynchronize();\n}\nvoid selectionSortLauncher(int b, int n, int m, int k, const float *dist, int *outi, float *out) {\n    selection_sort_gpu<<<b,256>>>(b,n,m,k,dist,outi,out); \n    //cudaDeviceSynchronize();\n}\nvoid groupPointLauncher(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out){\n    group_point_gpu<<<b,256>>>(b,n,c,m,nsample,points,idx,out);\n    //cudaDeviceSynchronize();\n}\nvoid groupPointGradLauncher(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points){\n    group_point_grad_gpu<<<b,256>>>(b,n,c,m,nsample,grad_out,idx,grad_points);\n    
//group_point_grad_gpu<<<1,1>>>(b,n,c,m,nsample,grad_out,idx,grad_points);\n    //cudaDeviceSynchronize();\n}\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/grouping/tf_grouping_op_test.py",
    "content": "import tensorflow as tf\nimport numpy as np\nfrom tf_grouping import query_ball_point, group_point\n\nclass GroupPointTest(tf.test.TestCase):\n  def test(self):\n    pass\n\n  def test_grad(self):\n    with tf.device('/gpu:0'):\n      points = tf.constant(np.random.random((1,128,16)).astype('float32'))\n      print points\n      xyz1 = tf.constant(np.random.random((1,128,3)).astype('float32'))\n      xyz2 = tf.constant(np.random.random((1,8,3)).astype('float32'))\n      radius = 0.3 \n      nsample = 32\n      idx, pts_cnt = query_ball_point(radius, nsample, xyz1, xyz2)\n      grouped_points = group_point(points, idx)\n      print grouped_points\n\n    with self.test_session():\n      print \"---- Going to compute gradient error\"\n      err = tf.test.compute_gradient_error(points, (1,128,16), grouped_points, (1,8,32,16))\n      print err\n      self.assertLess(err, 1e-4) \n\nif __name__=='__main__':\n  tf.test.main() \n"
  },
  {
    "path": "pointnet2_tf/tf_ops/sampling/.gitignore",
    "content": "*.o\n*.so\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/sampling/tf_sampling.cpp",
    "content": "/* Furthest point sampling\n * Original author: Haoqiang Fan\n * Modified by Charles R. Qi\n * All Rights Reserved. 2017. \n */\n#include \"tensorflow/core/framework/op.h\"\n#include \"tensorflow/core/framework/op_kernel.h\"\n#include \"tensorflow/core/framework/shape_inference.h\"\n#include \"tensorflow/core/framework/common_shape_fns.h\"\n#include <cuda_runtime.h>\n\nusing namespace tensorflow;\n\nREGISTER_OP(\"ProbSample\")\n  .Input(\"inp: float32\")\n  .Input(\"inpr: float32\")\n  .Output(\"out: int32\")\n  .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n    ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ncategory\n    c->WithRank(c->input(0), 2, &dims1);\n    ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoints\n    c->WithRank(c->input(1), 2, &dims2);\n    // batch_size * npoints\n    ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1)});\n    c->set_output(0, output);\n    return Status::OK();\n  });\nREGISTER_OP(\"FarthestPointSample\")\n  .Attr(\"npoint: int\")\n  .Input(\"inp: float32\")\n  .Output(\"out: int32\")\n  .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n    ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * npoint * 3\n    c->WithRank(c->input(0), 3, &dims1);\n    int npoint;\n    TF_RETURN_IF_ERROR(c->GetAttr(\"npoint\", &npoint));\n    ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), npoint});\n    c->set_output(0, output);\n    return Status::OK();\n  });\nREGISTER_OP(\"GatherPoint\")\n  .Input(\"inp: float32\")\n  .Input(\"idx: int32\")\n  .Output(\"out: float32\")\n  .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n    ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ndataset * 3\n    c->WithRank(c->input(0), 3, &dims1);\n    ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoints\n    
c->WithRank(c->input(1), 2, &dims2);\n    // batch_size * npoints * 3\n    ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), c->Dim(dims2, 1), c->Dim(dims1, 2)});\n    c->set_output(0, output);\n    return Status::OK();\n  });\nREGISTER_OP(\"GatherPointGrad\")\n  .Input(\"inp: float32\")\n  .Input(\"idx: int32\")\n  .Input(\"out_g: float32\")\n  .Output(\"inp_g: float32\")\n  .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {\n    c->set_output(0, c->input(0));\n    return Status::OK();\n  });\n\nvoid probsampleLauncher(int b,int n,int m,const float * inp_p,const float * inp_r,float * temp,int * out);\nclass ProbSampleGpuOp: public OpKernel{\n  public:\n    explicit ProbSampleGpuOp(OpKernelConstruction* context):OpKernel(context){}\n    void Compute(OpKernelContext * context)override{\n      const Tensor& inp_tensor=context->input(0);\n      const Tensor& inpr_tensor=context->input(1);\n      auto inp_flat=inp_tensor.flat<float>();\n      auto inpr_flat=inpr_tensor.flat<float>();\n      const float * inp=&(inp_flat(0));\n      const float * inpr=&(inpr_flat(0));\n      OP_REQUIRES(context,inp_tensor.dims()==2,errors::InvalidArgument(\"ProbSample expects (batch_size,num_choices) inp shape\"));\n      int b=inp_tensor.shape().dim_size(0);\n      int n=inp_tensor.shape().dim_size(1);\n      OP_REQUIRES(context,inpr_tensor.dims()==2 && inpr_tensor.shape().dim_size(0)==b,errors::InvalidArgument(\"ProbSample expects (batch_size,num_points) inpr shape\"));\n      int m=inpr_tensor.shape().dim_size(1);\n      Tensor * out_tensor=NULL;\n      OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m},&out_tensor));\n      auto out_flat=out_tensor->flat<int>();\n      int * out=&(out_flat(0));\n      Tensor temp_tensor;\n      OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{b,n},&temp_tensor));\n      auto temp_flat=temp_tensor.flat<float>();\n      float * 
temp=&(temp_flat(0));\n      probsampleLauncher(b,n,m,inp,inpr,temp,out);\n    }\n};\nREGISTER_KERNEL_BUILDER(Name(\"ProbSample\").Device(DEVICE_GPU), ProbSampleGpuOp);\n\nvoid farthestpointsamplingLauncher(int b,int n,int m,const float * inp,float * temp,int * out);\nclass FarthestPointSampleGpuOp: public OpKernel{\n  public:\n    explicit FarthestPointSampleGpuOp(OpKernelConstruction* context):OpKernel(context) {\n                    OP_REQUIRES_OK(context, context->GetAttr(\"npoint\", &npoint_));\n                    OP_REQUIRES(context, npoint_ > 0, errors::InvalidArgument(\"FarthestPointSample expects positive npoint\"));\n                }\n    void Compute(OpKernelContext * context)override{\n      int m = npoint_;\n\n      const Tensor& inp_tensor=context->input(0);\n      OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument(\"FarthestPointSample expects (batch_size,num_points,3) inp shape\"));\n      int b=inp_tensor.shape().dim_size(0);\n      int n=inp_tensor.shape().dim_size(1);\n      auto inp_flat=inp_tensor.flat<float>();\n      const float * inp=&(inp_flat(0));\n      Tensor * out_tensor;\n      OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m},&out_tensor));\n      auto out_flat=out_tensor->flat<int>();\n      int * out=&(out_flat(0));\n      Tensor temp_tensor;\n      OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{32,n},&temp_tensor));\n      auto temp_flat=temp_tensor.flat<float>();\n      float * temp=&(temp_flat(0));\n      farthestpointsamplingLauncher(b,n,m,inp,temp,out);\n    }\n    private:\n        int npoint_;\n};\nREGISTER_KERNEL_BUILDER(Name(\"FarthestPointSample\").Device(DEVICE_GPU),FarthestPointSampleGpuOp);\n\nvoid gatherpointLauncher(int b,int n,int m,const float * inp,const int * idx,float * out);\nclass GatherPointGpuOp: public OpKernel{\n  public:\n    explicit GatherPointGpuOp(OpKernelConstruction * 
context):OpKernel(context){}\n    void Compute(OpKernelContext * context)override{\n      const Tensor& inp_tensor=context->input(0);\n      OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument(\"GatherPoint expects (batch_size,num_points,3) inp shape\"));\n      int b=inp_tensor.shape().dim_size(0);\n      int n=inp_tensor.shape().dim_size(1);\n      const Tensor& idx_tensor=context->input(1);\n      OP_REQUIRES(context,idx_tensor.dims()==2 && idx_tensor.shape().dim_size(0)==b,errors::InvalidArgument(\"GatherPoint expects (batch_size,num_result) idx shape\"));\n      int m=idx_tensor.shape().dim_size(1);\n      auto inp_flat=inp_tensor.flat<float>();\n      const float * inp=&(inp_flat(0));\n      auto idx_flat=idx_tensor.flat<int>();\n      const int * idx=&(idx_flat(0));\n      Tensor * out_tensor=NULL;\n      OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,3},&out_tensor));\n      auto out_flat=out_tensor->flat<float>();\n      float * out=&(out_flat(0));\n      gatherpointLauncher(b,n,m,inp,idx,out);\n    }\n};\nREGISTER_KERNEL_BUILDER(Name(\"GatherPoint\").Device(DEVICE_GPU),GatherPointGpuOp);\n\nvoid scatteraddpointLauncher(int b,int n,int m,const float * out_g,const int * idx,float * inp_g);\nclass GatherPointGradGpuOp: public OpKernel{\n  public:\n    explicit GatherPointGradGpuOp(OpKernelConstruction * context):OpKernel(context){}\n    void Compute(OpKernelContext * context)override{\n      const Tensor& inp_tensor=context->input(0);\n      OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument(\"GatherPointGradGpuOp expects (batch_size,num_points,3) inp\"));\n      int b=inp_tensor.shape().dim_size(0);\n      int n=inp_tensor.shape().dim_size(1);\n      const Tensor& idx_tensor=context->input(1);\n      OP_REQUIRES(context,idx_tensor.dims()==2 && idx_tensor.shape().dim_size(0)==b,errors::InvalidArgument(\"GatherPointGradGpuOp expects 
(batch_size,num_result) idx shape\"));\n      int m=idx_tensor.shape().dim_size(1);\n      auto inp_flat=inp_tensor.flat<float>();\n      const float * inp=&(inp_flat(0));\n      auto idx_flat=idx_tensor.flat<int>();\n      const int * idx=&(idx_flat(0));\n      const Tensor& out_g_tensor=context->input(2);\n      OP_REQUIRES(context,out_g_tensor.dims()==3 && out_g_tensor.shape().dim_size(0)==b && out_g_tensor.shape().dim_size(1)==m && out_g_tensor.shape().dim_size(2)==3,errors::InvalidArgument(\"GatherPointGradGpuOp expects (batch_size,num_result,3) out_g shape\"));\n      auto out_g_flat=out_g_tensor.flat<float>();\n      const float * out_g=&(out_g_flat(0));\n      Tensor * inp_g_tensor=NULL;\n      OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&inp_g_tensor));\n      auto inp_g_flat=inp_g_tensor->flat<float>();\n      float * inp_g=&(inp_g_flat(0));\n      cudaMemset(inp_g,0,b*n*3*4);\n      scatteraddpointLauncher(b,n,m,out_g,idx,inp_g);\n    }\n};\nREGISTER_KERNEL_BUILDER(Name(\"GatherPointGrad\").Device(DEVICE_GPU),GatherPointGradGpuOp);\n\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/sampling/tf_sampling.py",
    "content": "''' Furthest point sampling\nOriginal author: Haoqiang Fan\nModified by Charles R. Qi\nAll Rights Reserved. 2017. \n'''\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsampling_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_sampling_so.so'))\ndef prob_sample(inp,inpr):\n    '''\ninput:\n    batch_size * ncategory float32\n    batch_size * npoints   float32\nreturns:\n    batch_size * npoints   int32\n    '''\n    return sampling_module.prob_sample(inp,inpr)\nops.NoGradient('ProbSample')\n# TF1.0 API requires set shape in C++\n#@tf.RegisterShape('ProbSample')\n#def _prob_sample_shape(op):\n#    shape1=op.inputs[0].get_shape().with_rank(2)\n#    shape2=op.inputs[1].get_shape().with_rank(2)\n#    return [tf.TensorShape([shape2.dims[0],shape2.dims[1]])]\ndef gather_point(inp,idx):\n    '''\ninput:\n    batch_size * ndataset * 3   float32\n    batch_size * npoints        int32\nreturns:\n    batch_size * npoints * 3    float32\n    '''\n    return sampling_module.gather_point(inp,idx)\n#@tf.RegisterShape('GatherPoint')\n#def _gather_point_shape(op):\n#    shape1=op.inputs[0].get_shape().with_rank(3)\n#    shape2=op.inputs[1].get_shape().with_rank(2)\n#    return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[2]])]\n@tf.RegisterGradient('GatherPoint')\ndef _gather_point_grad(op,out_g):\n    inp=op.inputs[0]\n    idx=op.inputs[1]\n    return [sampling_module.gather_point_grad(inp,idx,out_g),None]\ndef farthest_point_sample(npoint,inp):\n    '''\ninput:\n    int32\n    batch_size * ndataset * 3   float32\nreturns:\n    batch_size * npoint         int32\n    '''\n    return sampling_module.farthest_point_sample(inp, npoint)\nops.NoGradient('FarthestPointSample')\n    \n\nif __name__=='__main__':\n    import numpy as np\n    np.random.seed(100)\n    triangles=np.random.rand(1,5,3,3).astype('float32')\n    with 
tf.device('/gpu:1'):\n        inp=tf.constant(triangles)\n        tria=inp[:,:,0,:]\n        trib=inp[:,:,1,:]\n        tric=inp[:,:,2,:]\n        areas=tf.sqrt(tf.reduce_sum(tf.cross(trib-tria,tric-tria)**2,2)+1e-9)\n        randomnumbers=tf.random_uniform((1,8192))\n        triids=prob_sample(areas,randomnumbers)\n        tria_sample=gather_point(tria,triids)\n        trib_sample=gather_point(trib,triids)\n        tric_sample=gather_point(tric,triids)\n        us=tf.random_uniform((1,8192))\n        vs=tf.random_uniform((1,8192))\n        uplusv=1-tf.abs(us+vs-1)\n        uminusv=us-vs\n        us=(uplusv+uminusv)*0.5\n        vs=(uplusv-uminusv)*0.5\n        pt_sample=tria_sample+(trib_sample-tria_sample)*tf.expand_dims(us,-1)+(tric_sample-tria_sample)*tf.expand_dims(vs,-1)\n        print 'pt_sample: ', pt_sample\n        reduced_sample=gather_point(pt_sample,farthest_point_sample(1024,pt_sample))\n        print reduced_sample\n    with tf.Session('') as sess:\n        ret=sess.run(reduced_sample)\n    print ret.shape,ret.dtype\n    import cPickle as pickle\n    pickle.dump(ret,open('1.pkl','wb'),-1)\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/sampling/tf_sampling_compile.sh",
    "content": "#/bin/bash\n/usr/local/cuda-8.0/bin/nvcc tf_sampling_g.cu -o tf_sampling_g.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n\n# TF1.2\ng++ -std=c++11 tf_sampling.cpp tf_sampling_g.cu.o -o tf_sampling_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -lcudart -L /usr/local/cuda-8.0/lib64/ -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n\n# TF1.4\n#g++ -std=c++11 tf_sampling.cpp tf_sampling_g.cu.o -o tf_sampling_so.so -shared -fPIC -I /usr/local/lib/python2.7/dist-packages/tensorflow/include -I /usr/local/cuda-8.0/include -I /usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public -lcudart -L /usr/local/cuda-8.0/lib64/ -L/usr/local/lib/python2.7/dist-packages/tensorflow -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n"
  },
  {
    "path": "pointnet2_tf/tf_ops/sampling/tf_sampling_g.cu",
    "content": "/* Furthest point sampling GPU implementation\n * Original author: Haoqiang Fan\n * Modified by Charles R. Qi\n * All Rights Reserved. 2017. \n */\n\n__global__ void cumsumKernel(int b,int n,const float * __restrict__ inp,float * __restrict__ out){\n  const int BlockSize=2048;\n  const int paddingLevel=5;\n  __shared__ float buffer4[BlockSize*4];\n  __shared__ float buffer[BlockSize+(BlockSize>>paddingLevel)];\n  for (int i=blockIdx.x;i<b;i+=gridDim.x){\n    float runningsum=0,runningsum2=0;\n    for (int j=0;j<n;j+=BlockSize*4){\n      int n24_i=min(n-j,BlockSize*4);\n      int n24=(n24_i+3)&~3;\n      int n2=n24>>2;\n      for (int k=threadIdx.x*4;k<n24_i;k+=blockDim.x*4){\n        if (k+3<n24_i){\n          float v1=inp[i*n+j+k];\n          float v2=inp[i*n+j+k+1];\n          v2+=v1;\n          float v3=inp[i*n+j+k+2];\n          float v4=inp[i*n+j+k+3];\n          v4+=v3;\n          v3+=v2;\n          v4+=v2;\n          buffer4[k]=v1;\n          buffer4[k+1]=v2;\n          buffer4[k+2]=v3;\n          buffer4[k+3]=v4;\n          buffer[(k>>2)+(k>>(2+paddingLevel))]=v4;\n        }else{\n          float v=0;\n          for (int k2=k;k2<n24_i;k2++){\n            v+=inp[i*n+j+k2];\n            buffer4[k2]=v;\n          }\n          for (int k2=n24_i;k2<n24;k2++){\n            buffer4[k2]=v;\n          }\n          buffer[(k>>2)+(k>>(2+paddingLevel))]=v;\n        }\n      }\n      int u=0;\n      for (;(2<<u)<=n2;u++){\n        __syncthreads();\n        for (int k=threadIdx.x;k<int(n2>>(u+1));k+=blockDim.x){\n          int i1=(((k<<1)+2)<<u)-1;\n          int i2=(((k<<1)+1)<<u)-1;\n          i1+=i1>>paddingLevel;\n          i2+=i2>>paddingLevel;\n          buffer[i1]+=buffer[i2];\n        }\n      }\n      u--;\n      for (;u>=0;u--){\n        __syncthreads();\n        for (int k=threadIdx.x;k<int((n2-(1<<u))>>(u+1));k+=blockDim.x){\n          int i1=(((k<<1)+3)<<u)-1;\n          int i2=(((k<<1)+2)<<u)-1;\n          i1+=i1>>paddingLevel;\n          
i2+=i2>>paddingLevel;\n          buffer[i1]+=buffer[i2];\n        }\n      }\n      __syncthreads();\n      for (int k=threadIdx.x*4;k<n24;k+=blockDim.x*4){\n        if (k!=0){\n          int k2=((k>>2)-1)+(((k>>2)-1)>>paddingLevel);\n          buffer4[k]+=buffer[k2];\n          buffer4[k+1]+=buffer[k2];\n          buffer4[k+2]+=buffer[k2];\n          buffer4[k+3]+=buffer[k2];\n        }\n      }\n      __syncthreads();\n      for (int k=threadIdx.x;k<n24_i;k+=blockDim.x){\n        out[i*n+j+k]=buffer4[k]+runningsum;\n      }\n      float t=buffer[(n2-1)+((n2-1)>>paddingLevel)]+runningsum2;\n      float r2=runningsum+t;\n      runningsum2=t-(r2-runningsum);\n      runningsum=r2;\n      __syncthreads();\n    }\n  }\n}\n\n__global__ void binarysearchKernel(int b,int n,int m,const float * __restrict__ dataset,const float * __restrict__ query, int * __restrict__ result){\n  int base=1;\n  while (base<n)\n    base<<=1;\n  for (int i=blockIdx.x;i<b;i+=gridDim.x){\n    for (int j=blockIdx.y*blockDim.x+threadIdx.x;j<m;j+=blockDim.x*gridDim.y){\n      float q=query[i*m+j]*dataset[i*n+n-1];\n      int r=n-1;\n      for (int k=base;k>=1;k>>=1)\n        if (r>=k && dataset[i*n+r-k]>=q)\n          r-=k;\n      result[i*m+j]=r;\n    }\n  }\n}\n__global__ void farthestpointsamplingKernel(int b,int n,int m,const float * __restrict__ dataset,float * __restrict__ temp,int * __restrict__ idxs){\n  if (m<=0)\n    return;\n  const int BlockSize=512;\n  __shared__ float dists[BlockSize];\n  __shared__ int dists_i[BlockSize];\n  const int BufferSize=3072;\n  __shared__ float buf[BufferSize*3];\n  for (int i=blockIdx.x;i<b;i+=gridDim.x){\n    int old=0;\n    if (threadIdx.x==0)\n      idxs[i*m+0]=old;\n    for (int j=threadIdx.x;j<n;j+=blockDim.x){\n      temp[blockIdx.x*n+j]=1e38;\n    }\n    for (int j=threadIdx.x;j<min(BufferSize,n)*3;j+=blockDim.x){\n      buf[j]=dataset[i*n*3+j];\n    }\n    __syncthreads();\n    for (int j=1;j<m;j++){\n      int besti=0;\n      float best=-1;\n      
float x1=dataset[i*n*3+old*3+0];\n      float y1=dataset[i*n*3+old*3+1];\n      float z1=dataset[i*n*3+old*3+2];\n      for (int k=threadIdx.x;k<n;k+=blockDim.x){\n        float td=temp[blockIdx.x*n+k];\n        float x2,y2,z2;\n        if (k<BufferSize){\n          x2=buf[k*3+0];\n          y2=buf[k*3+1];\n          z2=buf[k*3+2];\n        }else{\n          x2=dataset[i*n*3+k*3+0];\n          y2=dataset[i*n*3+k*3+1];\n          z2=dataset[i*n*3+k*3+2];\n        }\n        float d=(x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1);\n        float d2=min(d,td);\n        if (d2!=td)\n          temp[blockIdx.x*n+k]=d2;\n        if (d2>best){\n          best=d2;\n          besti=k;\n        }\n      }\n      dists[threadIdx.x]=best;\n      dists_i[threadIdx.x]=besti;\n      for (int u=0;(1<<u)<blockDim.x;u++){\n        __syncthreads();\n        if (threadIdx.x<(blockDim.x>>(u+1))){\n          int i1=(threadIdx.x*2)<<u;\n          int i2=(threadIdx.x*2+1)<<u;\n          if (dists[i1]<dists[i2]){\n            dists[i1]=dists[i2];\n            dists_i[i1]=dists_i[i2];\n          }\n        }\n      }\n      __syncthreads();\n      old=dists_i[0];\n      if (threadIdx.x==0)\n        idxs[i*m+j]=old;\n    }\n  }\n}\n\n__global__ void gatherpointKernel(int b,int n,int m,const float * __restrict__ inp,const int * __restrict__ idx,float * __restrict__ out){\n  for (int i=blockIdx.x;i<b;i+=gridDim.x){\n    for (int j=blockIdx.y*blockDim.x+threadIdx.x;j<m;j+=blockDim.x*gridDim.y){\n      int a=idx[i*m+j];\n      out[(i*m+j)*3+0]=inp[(i*n+a)*3+0];\n      out[(i*m+j)*3+1]=inp[(i*n+a)*3+1];\n      out[(i*m+j)*3+2]=inp[(i*n+a)*3+2];\n    }\n  }\n}\n\n__global__ void scatteraddpointKernel(int b,int n,int m,const float * __restrict__ out_g,const int * __restrict__ idx,float * __restrict__ inp_g){\n  for (int i=blockIdx.x;i<b;i+=gridDim.x){\n    for (int j=blockIdx.y*blockDim.x+threadIdx.x;j<m;j+=blockDim.x*gridDim.y){\n      int a=idx[i*m+j];\n      
atomicAdd(&inp_g[(i*n+a)*3+0],out_g[(i*m+j)*3+0]);\n      atomicAdd(&inp_g[(i*n+a)*3+1],out_g[(i*m+j)*3+1]);\n      atomicAdd(&inp_g[(i*n+a)*3+2],out_g[(i*m+j)*3+2]);\n    }\n  }\n}\n\nvoid cumsumLauncher(int b,int n,const float * inp,float * out){\n  cumsumKernel<<<32,512>>>(b,n,inp,out);\n}\n//require b*n working space\nvoid probsampleLauncher(int b,int n,int m,const float * inp_p,const float * inp_r,float * temp,int * out){\n  cumsumKernel<<<32,512>>>(b,n,inp_p,temp);\n  binarysearchKernel<<<dim3(32,8,1),512>>>(b,n,m,temp,inp_r,out);\n}\n//require 32*n working space\nvoid farthestpointsamplingLauncher(int b,int n,int m,const float * inp,float * temp,int * out){\n  farthestpointsamplingKernel<<<32,512>>>(b,n,m,inp,temp,out);\n}\nvoid gatherpointLauncher(int b,int n,int m,const float * inp,const int * idx,float * out){\n  gatherpointKernel<<<dim3(2,8,1),512>>>(b,n,m,inp,idx,out);\n}\nvoid scatteraddpointLauncher(int b,int n,int m,const float * out_g,const int * idx,float * inp_g){\n  scatteraddpointKernel<<<dim3(2,8,1),512>>>(b,n,m,out_g,idx,inp_g);\n}\n\n"
  },
  {
    "path": "pointnet2_tf/train.py",
    "content": "'''\n    Single-GPU training.\n    Will use H5 dataset in default. If using normal, will shift to the normal dataset.\n'''\nimport argparse\nimport math\nfrom datetime import datetime\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = BASE_DIR\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nimport modelnet_dataset\nimport modelnet_h5_dataset\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')\nparser.add_argument('--model', default='pointnet2_cls_ssg', help='Model name [default: pointnet2_cls_ssg]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=1024, help='Point Number [default: 1024]')\nparser.add_argument('--max_epoch', type=int, default=251, help='Epoch to run [default: 251]')\nparser.add_argument('--batch_size', type=int, default=16, help='Batch Size during training [default: 16]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')\nparser.add_argument('--normal', action='store_true', help='Whether to use normal information')\nFLAGS = parser.parse_args()\n\nEPOCH_CNT = 0\n\nBATCH_SIZE = FLAGS.batch_size\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = 
FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nGPU_INDEX = FLAGS.gpu\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = importlib.import_module(FLAGS.model) # import network module\nMODEL_FILE = os.path.join(ROOT_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nNUM_CLASSES = 40\n\n# Shapenet official train/test split\nif FLAGS.normal:\n    assert(NUM_POINT<=10000)\n    DATA_PATH = os.path.join(ROOT_DIR, 'data/modelnet40_normal_resampled')\n    TRAIN_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='train', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\n    TEST_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='test', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\nelse:\n    assert(NUM_POINT<=2048)\n    TRAIN_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/train_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=True)\n    TEST_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/test_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=False)\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # 
Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate        \n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef train():\n    with tf.Graph().as_default():\n        with tf.device('/gpu:'+str(GPU_INDEX)):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            \n            # Note the global_step=batch parameter to minimize. 
\n            # That tells the optimizer to helpfully increment the 'batch' parameter\n            # for you every time it trains.\n            batch = tf.get_variable('batch', [],\n                initializer=tf.constant_initializer(0), trainable=False)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            # Get model and loss \n            pred, end_points = MODEL.get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)\n            MODEL.get_loss(pred, labels_pl, end_points)\n            losses = tf.get_collection('losses')\n            total_loss = tf.add_n(losses, name='total_loss')\n            tf.summary.scalar('total_loss', total_loss)\n            for l in losses + [total_loss]:\n                tf.summary.scalar(l.op.name, l)\n\n            correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)\n            tf.summary.scalar('accuracy', accuracy)\n\n            print \"--- Get training operator\"\n            # Get training operator\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n            train_op = optimizer.minimize(total_loss, global_step=batch)\n            \n            # Add ops to save and restore all the variables.\n            saver = tf.train.Saver()\n        \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = tf.Session(config=config)\n\n        # Add summary writers\n        merged = tf.summary.merge_all()\n        
train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'), sess.graph)\n\n        # Init variables\n        init = tf.global_variables_initializer()\n        sess.run(init)\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': total_loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch,\n               'end_points': end_points}\n\n        best_acc = -1\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n             \n            train_one_epoch(sess, ops, train_writer)\n            eval_one_epoch(sess, ops, test_writer)\n\n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    \n    log_string(str(datetime.now()))\n\n    # Make sure batch data is of same size\n    cur_batch_data = np.zeros((BATCH_SIZE,NUM_POINT,TRAIN_DATASET.num_channel()))\n    cur_batch_label = np.zeros((BATCH_SIZE), dtype=np.int32)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    batch_idx = 0\n    while TRAIN_DATASET.has_next_batch():\n        batch_data, batch_label = TRAIN_DATASET.next_batch(augment=True)\n        #batch_data = provider.random_point_dropout(batch_data)\n        bsize = batch_data.shape[0]\n        cur_batch_data[0:bsize,...] 
= batch_data\n        cur_batch_label[0:bsize] = batch_label\n\n        feed_dict = {ops['pointclouds_pl']: cur_batch_data,\n                     ops['labels_pl']: cur_batch_label,\n                     ops['is_training_pl']: is_training,}\n        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n        train_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 1)\n        correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n        total_correct += correct\n        total_seen += bsize\n        loss_sum += loss_val\n        if (batch_idx+1)%50 == 0:\n            log_string(' ---- batch: %03d ----' % (batch_idx+1))\n            log_string('mean loss: %f' % (loss_sum / 50))\n            log_string('accuracy: %f' % (total_correct / float(total_seen)))\n            total_correct = 0\n            total_seen = 0\n            loss_sum = 0\n        batch_idx += 1\n\n    TRAIN_DATASET.reset()\n        \ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n\n    # Make sure batch data is of same size\n    cur_batch_data = np.zeros((BATCH_SIZE,NUM_POINT,TEST_DATASET.num_channel()))\n    cur_batch_label = np.zeros((BATCH_SIZE), dtype=np.int32)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    batch_idx = 0\n    shape_ious = []\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n    \n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n    \n    while TEST_DATASET.has_next_batch():\n        batch_data, batch_label = TEST_DATASET.next_batch(augment=False)\n        bsize = batch_data.shape[0]\n        # for the last batch in the epoch, the bsize:end are from last batch\n        cur_batch_data[0:bsize,...] 
= batch_data\n        cur_batch_label[0:bsize] = batch_label\n\n        feed_dict = {ops['pointclouds_pl']: cur_batch_data,\n                     ops['labels_pl']: cur_batch_label,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['loss'], ops['pred']], feed_dict=feed_dict)\n        test_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 1)\n        correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n        total_correct += correct\n        total_seen += bsize\n        loss_sum += loss_val\n        batch_idx += 1\n        for i in range(0, bsize):\n            l = batch_label[i]\n            total_seen_class[l] += 1\n            total_correct_class[l] += (pred_val[i] == l)\n    \n    log_string('eval mean loss: %f' % (loss_sum / float(batch_idx)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n    EPOCH_CNT += 1\n\n    TEST_DATASET.reset()\n    return total_correct/float(total_seen)\n\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/train_multi_gpu.py",
    "content": "'''\n    Multi-GPU training.\n    Near linear scale acceleration for multi-gpus on a single machine.\n    Will use H5 dataset in default. If using normal, will shift to the normal dataset.\n'''\n\nimport argparse\nimport math\nfrom datetime import datetime\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport importlib\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = BASE_DIR\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'models'))\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nimport provider\nimport tf_util\nimport modelnet_dataset\nimport modelnet_h5_dataset\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--num_gpus', type=int, default=1, help='How many gpus to use [default: 1]')\nparser.add_argument('--model', default='pointnet2_cls_ssg', help='Model name [default: pointnet2_cls_ssg]')\nparser.add_argument('--log_dir', default='log', help='Log dir [default: log]')\nparser.add_argument('--num_point', type=int, default=1024, help='Point Number [default: 1024]')\nparser.add_argument('--max_epoch', type=int, default=251, help='Epoch to run [default: 251]')\nparser.add_argument('--batch_size', type=int, default=32, help='Batch Size during training [default: 32]')\nparser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')\nparser.add_argument('--momentum', type=float, default=0.9, help='Initial learning rate [default: 0.9]')\nparser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')\nparser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')\nparser.add_argument('--decay_rate', type=float, default=0.7, help='Decay rate for lr decay [default: 0.7]')\nparser.add_argument('--normal', action='store_true', help='Whether to use normal information')\nFLAGS = parser.parse_args()\n\nEPOCH_CNT = 0\n\nNUM_GPUS = 
FLAGS.num_gpus\nBATCH_SIZE = FLAGS.batch_size\nassert(BATCH_SIZE % NUM_GPUS == 0)\nDEVICE_BATCH_SIZE = BATCH_SIZE / NUM_GPUS\n\nNUM_POINT = FLAGS.num_point\nMAX_EPOCH = FLAGS.max_epoch\nBASE_LEARNING_RATE = FLAGS.learning_rate\nMOMENTUM = FLAGS.momentum\nOPTIMIZER = FLAGS.optimizer\nDECAY_STEP = FLAGS.decay_step\nDECAY_RATE = FLAGS.decay_rate\n\nMODEL = importlib.import_module(FLAGS.model) # import network module\nMODEL_FILE = os.path.join(ROOT_DIR, 'models', FLAGS.model+'.py')\nLOG_DIR = FLAGS.log_dir\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp %s %s' % (MODEL_FILE, LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\nLOG_FOUT.write(str(FLAGS)+'\\n')\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()\n\nNUM_CLASSES = 40\n\n# Shapenet official train/test split\nif FLAGS.normal:\n    assert(NUM_POINT<=10000)\n    DATA_PATH = os.path.join(ROOT_DIR, 'data/modelnet40_normal_resampled')\n    TRAIN_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='train', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\n    TEST_DATASET = modelnet_dataset.ModelNetDataset(root=DATA_PATH, npoints=NUM_POINT, split='test', normal_channel=FLAGS.normal, batch_size=BATCH_SIZE)\nelse:\n    assert(NUM_POINT<=2048)\n    TRAIN_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/train_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=True)\n    TEST_DATASET = modelnet_h5_dataset.ModelNetH5Dataset(os.path.join(BASE_DIR, 'data/modelnet40_ply_hdf5_2048/test_files.txt'), batch_size=BATCH_SIZE, npoints=NUM_POINT, shuffle=False)\n\ndef log_string(out_str):\n    LOG_FOUT.write(out_str+'\\n')\n    LOG_FOUT.flush()\n    print(out_str)\n\ndef average_gradients(tower_grads):\n  
\"\"\"Calculate the average gradient for each shared variable across all towers.\n  Note that this function provides a synchronization point across all towers.\n  From tensorflow tutorial: cifar10/cifar10_multi_gpu_train.py\n  Args:\n    tower_grads: List of lists of (gradient, variable) tuples. The outer list\n      is over individual gradients. The inner list is over the gradient\n      calculation for each tower.\n  Returns:\n     List of pairs of (gradient, variable) where the gradient has been averaged\n     across all towers.\n  \"\"\"\n  average_grads = []\n  for grad_and_vars in zip(*tower_grads):\n    # Note that each grad_and_vars looks like the following:\n    #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))\n    grads = []\n    #for g, _ in grad_and_vars:\n    for g, v in grad_and_vars:\n      # Add 0 dimension to the gradients to represent the tower.\n      expanded_g = tf.expand_dims(g, 0)\n\n      # Append on a 'tower' dimension which we will average over below.\n      grads.append(expanded_g)\n\n    # Average over the 'tower' dimension.\n    grad = tf.concat(axis=0, values=grads)\n    grad = tf.reduce_mean(grad, 0)\n\n    # Keep in mind that the Variables are redundant because they are shared\n    # across towers. So .. 
we will just return the first tower's pointer to\n    # the Variable.\n    v = grad_and_vars[0][1]\n    grad_and_var = (grad, v)\n    average_grads.append(grad_and_var)\n  return average_grads\n\n\ndef get_learning_rate(batch):\n    learning_rate = tf.train.exponential_decay(\n                        BASE_LEARNING_RATE,  # Base learning rate.\n                        batch * BATCH_SIZE,  # Current index into the dataset.\n                        DECAY_STEP,          # Decay step.\n                        DECAY_RATE,          # Decay rate.\n                        staircase=True)\n    learning_rate = tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!\n    return learning_rate        \n\ndef get_bn_decay(batch):\n    bn_momentum = tf.train.exponential_decay(\n                      BN_INIT_DECAY,\n                      batch*BATCH_SIZE,\n                      BN_DECAY_DECAY_STEP,\n                      BN_DECAY_DECAY_RATE,\n                      staircase=True)\n    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n    return bn_decay\n\ndef train():\n    with tf.Graph().as_default():\n        with tf.device('/cpu:0'):\n            pointclouds_pl, labels_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)\n            is_training_pl = tf.placeholder(tf.bool, shape=())\n            \n            # Note the global_step=batch parameter to minimize. 
\n            # That tells the optimizer to helpfully increment the 'batch' parameter\n            # for you every time it trains.\n            batch = tf.get_variable('batch', [],\n                initializer=tf.constant_initializer(0), trainable=False)\n            bn_decay = get_bn_decay(batch)\n            tf.summary.scalar('bn_decay', bn_decay)\n\n            # Set learning rate and optimizer\n            learning_rate = get_learning_rate(batch)\n            tf.summary.scalar('learning_rate', learning_rate)\n            if OPTIMIZER == 'momentum':\n                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n            elif OPTIMIZER == 'adam':\n                optimizer = tf.train.AdamOptimizer(learning_rate)\n\n            # -------------------------------------------\n            # Get model and loss on multiple GPU devices\n            # -------------------------------------------\n            # Allocating variables on CPU first will greatly accelerate multi-gpu training.\n            # Ref: https://github.com/kuza55/keras-extras/issues/21\n            MODEL.get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)\n            \n            tower_grads = []\n            pred_gpu = []\n            total_loss_gpu = []\n            for i in range(NUM_GPUS):\n                with tf.variable_scope(tf.get_variable_scope(), reuse=True):\n                    with tf.device('/gpu:%d'%(i)), tf.name_scope('gpu_%d'%(i)) as scope:\n                        # Evenly split input data to each GPU\n                        pc_batch = tf.slice(pointclouds_pl,\n                            [i*DEVICE_BATCH_SIZE,0,0], [DEVICE_BATCH_SIZE,-1,-1])\n                        label_batch = tf.slice(labels_pl,\n                            [i*DEVICE_BATCH_SIZE], [DEVICE_BATCH_SIZE])\n\n                        pred, end_points = MODEL.get_model(pc_batch,\n                            is_training=is_training_pl, bn_decay=bn_decay)\n\n                        
MODEL.get_loss(pred, label_batch, end_points)\n                        losses = tf.get_collection('losses', scope)\n                        total_loss = tf.add_n(losses, name='total_loss')\n                        for l in losses + [total_loss]:\n                            tf.summary.scalar(l.op.name, l)\n\n                        grads = optimizer.compute_gradients(total_loss)\n                        tower_grads.append(grads)\n\n                        pred_gpu.append(pred)\n                        total_loss_gpu.append(total_loss)\n            \n            # Merge pred and losses from multiple GPUs\n            pred = tf.concat(pred_gpu, 0)\n            total_loss = tf.reduce_mean(total_loss_gpu)\n\n            # Get training operator \n            grads = average_gradients(tower_grads)\n            train_op = optimizer.apply_gradients(grads, global_step=batch)\n\n            correct = tf.equal(tf.argmax(pred, 1), tf.to_int64(labels_pl))\n            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE)\n            tf.summary.scalar('accuracy', accuracy)\n\n        # Add ops to save and restore all the variables.\n        saver = tf.train.Saver()\n        \n        # Create a session\n        config = tf.ConfigProto()\n        config.gpu_options.allow_growth = True\n        config.allow_soft_placement = True\n        config.log_device_placement = False\n        sess = tf.Session(config=config)\n\n        # Add summary writers\n        merged = tf.summary.merge_all()\n        train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'), sess.graph)\n        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'), sess.graph)\n\n        # Init variables\n        init = tf.global_variables_initializer()\n        sess.run(init)\n\n        ops = {'pointclouds_pl': pointclouds_pl,\n               'labels_pl': labels_pl,\n               'is_training_pl': is_training_pl,\n               'pred': pred,\n               'loss': 
total_loss,\n               'train_op': train_op,\n               'merged': merged,\n               'step': batch,\n               'end_points': end_points}\n\n        best_acc = -1\n        for epoch in range(MAX_EPOCH):\n            log_string('**** EPOCH %03d ****' % (epoch))\n            sys.stdout.flush()\n             \n            train_one_epoch(sess, ops, train_writer)\n            eval_one_epoch(sess, ops, test_writer)\n\n            # Save the variables to disk.\n            if epoch % 10 == 0:\n                save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n                log_string(\"Model saved in file: %s\" % save_path)\n\n\ndef train_one_epoch(sess, ops, train_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    is_training = True\n    \n    log_string(str(datetime.now()))\n\n    # Make sure batch data is of same size\n    cur_batch_data = np.zeros((BATCH_SIZE,NUM_POINT,TRAIN_DATASET.num_channel()))\n    cur_batch_label = np.zeros((BATCH_SIZE), dtype=np.int32)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    batch_idx = 0\n    while TRAIN_DATASET.has_next_batch():\n        batch_data, batch_label = TRAIN_DATASET.next_batch(augment=True)\n        #batch_data = provider.random_point_dropout(batch_data)\n        bsize = batch_data.shape[0]\n        cur_batch_data[0:bsize,...] 
= batch_data\n        cur_batch_label[0:bsize] = batch_label\n\n        feed_dict = {ops['pointclouds_pl']: cur_batch_data,\n                     ops['labels_pl']: cur_batch_label,\n                     ops['is_training_pl']: is_training,}\n        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)\n        train_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 1)\n        correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n        total_correct += correct\n        total_seen += bsize\n        loss_sum += loss_val\n        if (batch_idx+1)%50 == 0:\n            log_string(' ---- batch: %03d ----' % (batch_idx+1))\n            log_string('mean loss: %f' % (loss_sum / 50))\n            log_string('accuracy: %f' % (total_correct / float(total_seen)))\n            total_correct = 0\n            total_seen = 0\n            loss_sum = 0\n        batch_idx += 1\n\n    TRAIN_DATASET.reset()\n        \ndef eval_one_epoch(sess, ops, test_writer):\n    \"\"\" ops: dict mapping from string to tf ops \"\"\"\n    global EPOCH_CNT\n    is_training = False\n\n    # Make sure batch data is of same size\n    cur_batch_data = np.zeros((BATCH_SIZE,NUM_POINT,TEST_DATASET.num_channel()))\n    cur_batch_label = np.zeros((BATCH_SIZE), dtype=np.int32)\n\n    total_correct = 0\n    total_seen = 0\n    loss_sum = 0\n    batch_idx = 0\n    shape_ious = []\n    total_seen_class = [0 for _ in range(NUM_CLASSES)]\n    total_correct_class = [0 for _ in range(NUM_CLASSES)]\n    \n    log_string(str(datetime.now()))\n    log_string('---- EPOCH %03d EVALUATION ----'%(EPOCH_CNT))\n    \n    while TEST_DATASET.has_next_batch():\n        batch_data, batch_label = TEST_DATASET.next_batch(augment=False)\n        bsize = batch_data.shape[0]\n        # for the last batch in the epoch, the bsize:end are from last batch\n        cur_batch_data[0:bsize,...] 
= batch_data\n        cur_batch_label[0:bsize] = batch_label\n\n        feed_dict = {ops['pointclouds_pl']: cur_batch_data,\n                     ops['labels_pl']: cur_batch_label,\n                     ops['is_training_pl']: is_training}\n        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'],\n            ops['loss'], ops['pred']], feed_dict=feed_dict)\n        test_writer.add_summary(summary, step)\n        pred_val = np.argmax(pred_val, 1)\n        correct = np.sum(pred_val[0:bsize] == batch_label[0:bsize])\n        total_correct += correct\n        total_seen += bsize\n        loss_sum += loss_val\n        batch_idx += 1\n        for i in range(0, bsize):\n            l = batch_label[i]\n            total_seen_class[l] += 1\n            total_correct_class[l] += (pred_val[i] == l)\n    \n    log_string('eval mean loss: %f' % (loss_sum / float(batch_idx)))\n    log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n    EPOCH_CNT += 1\n\n    TEST_DATASET.reset()\n    return total_correct/float(total_seen)\n\n\nif __name__ == \"__main__\":\n    log_string('pid: %s'%(str(os.getpid())))\n    train()\n    LOG_FOUT.close()\n"
  },
  {
    "path": "pointnet2_tf/utils/README.md",
    "content": "## Utilility Functions for 3D Point Cloud Deep Learning\n\n### visualization tool\n\n    sh compile_render_balls_so.sh\n    python show3d_balls.py\n"
  },
  {
    "path": "pointnet2_tf/utils/compile_render_balls_so.sh",
    "content": "g++ -std=c++11 render_balls_so.cpp -o render_balls_so.so -shared -fPIC -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n\n"
  },
  {
    "path": "pointnet2_tf/utils/pc_util.py",
    "content": "\"\"\" Utility functions for processing point clouds.\n\nAuthor: Charles R. Qi, Hao Su\nDate: November 2016\n\"\"\"\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\n# Draw point cloud\nfrom eulerangles import euler2mat\n\n# Point cloud IO\nimport numpy as np\nfrom plyfile import PlyData, PlyElement\n\n \n# ----------------------------------------\n# Point Cloud/Volume Conversions\n# ----------------------------------------\n\ndef point_cloud_to_volume_batch(point_clouds, vsize=12, radius=1.0, flatten=True):\n    \"\"\" Input is BxNx3 batch of point cloud\n        Output is Bx(vsize^3)\n    \"\"\"\n    vol_list = []\n    for b in range(point_clouds.shape[0]):\n        vol = point_cloud_to_volume(np.squeeze(point_clouds[b,:,:]), vsize, radius)\n        if flatten:\n            vol_list.append(vol.flatten())\n        else:\n            vol_list.append(np.expand_dims(np.expand_dims(vol, -1), 0))\n    if flatten:\n        return np.vstack(vol_list)\n    else:\n        return np.concatenate(vol_list, 0)\n\n\ndef point_cloud_to_volume(points, vsize, radius=1.0):\n    \"\"\" input is Nx3 points.\n        output is vsize*vsize*vsize\n        assumes points are in range [-radius, radius]\n    \"\"\"\n    vol = np.zeros((vsize,vsize,vsize))\n    voxel = 2*radius/float(vsize)\n    locations = (points + radius)/voxel\n    locations = locations.astype(int)\n    vol[locations[:,0],locations[:,1],locations[:,2]] = 1.0\n    return vol\n\n#a = np.zeros((16,1024,3))\n#print point_cloud_to_volume_batch(a, 12, 1.0, False).shape\n\ndef volume_to_point_cloud(vol):\n    \"\"\" vol is occupancy grid (value = 0 or 1) of size vsize*vsize*vsize\n        return Nx3 numpy array.\n    \"\"\"\n    vsize = vol.shape[0]\n    assert(vol.shape[1] == vsize and vol.shape[1] == vsize)\n    points = []\n    for a in range(vsize):\n        for b in range(vsize):\n            for c in range(vsize):\n                if vol[a,b,c] 
== 1:\n                    points.append(np.array([a,b,c]))\n    if len(points) == 0:\n        return np.zeros((0,3))\n    points = np.vstack(points)\n    return points\n\ndef point_cloud_to_volume_v2_batch(point_clouds, vsize=12, radius=1.0, num_sample=128):\n    \"\"\" Input is BxNx3 a batch of point cloud\n        Output is BxVxVxVxnum_samplex3\n        Added on Feb 19\n    \"\"\"\n    vol_list = []\n    for b in range(point_clouds.shape[0]):\n        vol = point_cloud_to_volume_v2(point_clouds[b,:,:], vsize, radius, num_sample)\n        vol_list.append(np.expand_dims(vol, 0))\n    return np.concatenate(vol_list, 0)\n\ndef point_cloud_to_volume_v2(points, vsize, radius=1.0, num_sample=128):\n    \"\"\" input is Nx3 points\n        output is vsize*vsize*vsize*num_sample*3\n        assumes points are in range [-radius, radius]\n        samples num_sample points in each voxel, if there are less than\n        num_sample points, replicate the points\n        Added on Feb 19\n    \"\"\"\n    vol = np.zeros((vsize,vsize,vsize,num_sample,3))\n    voxel = 2*radius/float(vsize)\n    locations = (points + radius)/voxel\n    locations = locations.astype(int)\n    loc2pc = {}\n    for n in range(points.shape[0]):\n        loc = tuple(locations[n,:])\n        if loc not in loc2pc:\n            loc2pc[loc] = []\n        loc2pc[loc].append(points[n,:])\n    #print loc2pc\n\n    for i in range(vsize):\n        for j in range(vsize):\n            for k in range(vsize):\n                if (i,j,k) not in loc2pc:\n                    vol[i,j,k,:,:] = np.zeros((num_sample,3))\n                else:\n                    pc = loc2pc[(i,j,k)] # a list of (3,) arrays\n                    pc = np.vstack(pc) # kx3\n                    # Sample/pad to num_sample points\n                    if pc.shape[0]>num_sample:\n                        choices = np.random.choice(pc.shape[0], num_sample, replace=False)\n                        pc = pc[choices,:]\n                    elif 
pc.shape[0]<num_sample:\n                        pc = np.lib.pad(pc, ((0,num_sample-pc.shape[0]),(0,0)), 'edge')\n                    # Normalize\n                    pc_center = (np.array([i,j,k])+0.5)*voxel - radius\n                    #print 'pc center: ', pc_center\n                    pc = (pc - pc_center) / voxel # shift and scale\n                    vol[i,j,k,:,:] = pc \n                #print (i,j,k), vol[i,j,k,:,:]\n    return vol\n\ndef point_cloud_to_image_batch(point_clouds, imgsize, radius=1.0, num_sample=128):\n    \"\"\" Input is BxNx3 a batch of point cloud\n        Output is BxIxIxnum_samplex3\n        Added on Feb 19\n    \"\"\"\n    img_list = []\n    for b in range(point_clouds.shape[0]):\n        img = point_cloud_to_image(point_clouds[b,:,:], imgsize, radius, num_sample)\n        img_list.append(np.expand_dims(img, 0))\n    return np.concatenate(img_list, 0)\n\n\ndef point_cloud_to_image(points, imgsize, radius=1.0, num_sample=128):\n    \"\"\" input is Nx3 points\n        output is imgsize*imgsize*num_sample*3\n        assumes points are in range [-radius, radius]\n        samples num_sample points in each pixel, if there are less than\n        num_sample points, replicate the points\n        Added on Feb 19\n    \"\"\"\n    img = np.zeros((imgsize, imgsize, num_sample, 3))\n    pixel = 2*radius/float(imgsize)\n    locations = (points[:,0:2] + radius)/pixel # Nx2\n    locations = locations.astype(int)\n    loc2pc = {}\n    for n in range(points.shape[0]):\n        loc = tuple(locations[n,:])\n        if loc not in loc2pc:\n            loc2pc[loc] = []\n        loc2pc[loc].append(points[n,:])\n    for i in range(imgsize):\n        for j in range(imgsize):\n            if (i,j) not in loc2pc:\n                img[i,j,:,:] = np.zeros((num_sample,3))\n            else:\n                pc = loc2pc[(i,j)]\n                pc = np.vstack(pc)\n                if pc.shape[0]>num_sample:\n                    choices = np.random.choice(pc.shape[0], 
num_sample, replace=False)\n                    pc = pc[choices,:]\n                elif pc.shape[0]<num_sample:\n                    pc = np.lib.pad(pc, ((0,num_sample-pc.shape[0]),(0,0)), 'edge')\n                pc_center = (np.array([i,j])+0.5)*pixel - radius\n                pc[:,0:2] = (pc[:,0:2] - pc_center)/pixel\n                img[i,j,:,:] = pc\n    return img\n# ----------------------------------------\n# Point cloud IO\n# ----------------------------------------\n\ndef read_ply(filename):\n    \"\"\" read XYZ point cloud from filename PLY file \"\"\"\n    plydata = PlyData.read(filename)\n    pc = plydata['vertex'].data\n    pc_array = np.array([[x, y, z] for x,y,z in pc])\n    return pc_array\n\n\ndef write_ply(points, filename, text=True):\n    \"\"\" input: Nx3, write points to filename as PLY format. \"\"\"\n    points = [(points[i,0], points[i,1], points[i,2]) for i in range(points.shape[0])]\n    vertex = np.array(points, dtype=[('x', 'f4'), ('y', 'f4'),('z', 'f4')])\n    el = PlyElement.describe(vertex, 'vertex', comments=['vertices'])\n    PlyData([el], text=text).write(filename)\n\n\n# ----------------------------------------\n# Simple Point cloud and Volume Renderers\n# ----------------------------------------\n\ndef draw_point_cloud(input_points, canvasSize=500, space=200, diameter=25,\n                     xrot=0, yrot=0, zrot=0, switch_xyz=[0,1,2], normalize=True):\n    \"\"\" Render point cloud to image with alpha channel.\n        Input:\n            points: Nx3 numpy array (+y is up direction)\n        Output:\n            gray image as numpy array of size canvasSizexcanvasSize\n    \"\"\"\n    image = np.zeros((canvasSize, canvasSize))\n    if input_points is None or input_points.shape[0] == 0:\n        return image\n\n    points = input_points[:, switch_xyz]\n    M = euler2mat(zrot, yrot, xrot)\n    points = (np.dot(M, points.transpose())).transpose()\n\n    # Normalize the point cloud\n    # We normalize scale to fit points in a unit 
sphere\n    if normalize:\n        centroid = np.mean(points, axis=0)\n        points -= centroid\n        furthest_distance = np.max(np.sqrt(np.sum(abs(points)**2,axis=-1)))\n        points /= furthest_distance\n\n    # Pre-compute the Gaussian disk\n    radius = (diameter-1)/2.0\n    disk = np.zeros((diameter, diameter))\n    for i in range(diameter):\n        for j in range(diameter):\n            if (i - radius) * (i-radius) + (j-radius) * (j-radius) <= radius * radius:\n                disk[i, j] = np.exp((-(i-radius)**2 - (j-radius)**2)/(radius**2))\n    mask = np.argwhere(disk > 0)\n    dx = mask[:, 0]\n    dy = mask[:, 1]\n    dv = disk[disk > 0]\n    \n    # Order points by z-buffer\n    zorder = np.argsort(points[:, 2])\n    points = points[zorder, :]\n    points[:, 2] = (points[:, 2] - np.min(points[:, 2])) / (np.max(points[:, 2] - np.min(points[:, 2])))\n    max_depth = np.max(points[:, 2])\n       \n    for i in range(points.shape[0]):\n        j = points.shape[0] - i - 1\n        x = points[j, 0]\n        y = points[j, 1]\n        xc = canvasSize/2 + (x*space)\n        yc = canvasSize/2 + (y*space)\n        xc = int(np.round(xc))\n        yc = int(np.round(yc))\n        \n        px = dx + xc\n        py = dy + yc\n        \n        image[px, py] = image[px, py] * 0.7 + dv * (max_depth - points[j, 2]) * 0.3\n    \n    image = image / np.max(image)\n    return image\n\ndef point_cloud_three_views(points):\n    \"\"\" input points Nx3 numpy array (+y is up direction).\n        return an numpy array gray image of size 500x1500. 
\"\"\" \n    # +y is up direction\n    # xrot is azimuth\n    # yrot is in-plane\n    # zrot is elevation\n    img1 = draw_point_cloud(points, zrot=110/180.0*np.pi, xrot=45/180.0*np.pi, yrot=0/180.0*np.pi)\n    img2 = draw_point_cloud(points, zrot=70/180.0*np.pi, xrot=135/180.0*np.pi, yrot=0/180.0*np.pi)\n    img3 = draw_point_cloud(points, zrot=180.0/180.0*np.pi, xrot=90/180.0*np.pi, yrot=0/180.0*np.pi)\n    image_large = np.concatenate([img1, img2, img3], 1)\n    return image_large\n\n\ndef point_cloud_three_views_demo():\n    \"\"\" Demo for draw_point_cloud function \"\"\"\n    from PIL import Image\n    points = read_ply('../third_party/mesh_sampling/piano.ply')\n    im_array = point_cloud_three_views(points)\n    img = Image.fromarray(np.uint8(im_array*255.0))\n    img.save('piano.jpg')\n\nif __name__==\"__main__\":\n    point_cloud_three_views_demo()\n\n\ndef pyplot_draw_point_cloud(points, output_filename):\n    \"\"\" points is a Nx3 numpy array \"\"\"\n    import matplotlib.pyplot as plt\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d')\n    ax.scatter(points[:,0], points[:,1], points[:,2])\n    ax.set_xlabel('x')\n    ax.set_ylabel('y')\n    ax.set_zlabel('z')\n    #savefig(output_filename)\n\ndef pyplot_draw_volume(vol, output_filename):\n    \"\"\" vol is of size vsize*vsize*vsize\n        output an image to output_filename\n    \"\"\"\n    points = volume_to_point_cloud(vol)\n    pyplot_draw_point_cloud(points, output_filename)\n\ndef write_ply_color(points, labels, out_filename, num_classes=None):\n    \"\"\" Color (N,3) points with labels (N) within range 0 ~ num_classes-1 as OBJ file \"\"\"\n    import matplotlib.pyplot as pyplot\n    labels = labels.astype(int)\n    N = points.shape[0]\n    if num_classes is None:\n        num_classes = np.max(labels)+1\n    else:\n        assert(num_classes>np.max(labels))\n    fout = open(out_filename, 'w')\n    #colors = [pyplot.cm.hsv(i/float(num_classes)) for i in range(num_classes)]\n   
 colors = [pyplot.cm.jet(i/float(num_classes)) for i in range(num_classes)]\n    for i in range(N):\n        c = colors[labels[i]]\n        c = [int(x*255) for x in c]\n        fout.write('v %f %f %f %d %d %d\\n' % (points[i,0],points[i,1],points[i,2],c[0],c[1],c[2]))\n    fout.close()\n"
  },
  {
    "path": "pointnet2_tf/utils/pointnet_util.py",
    "content": "\"\"\" PointNet++ Layers\n\nAuthor: Charles R. Qi\nDate: November 2017\n\"\"\"\n\nimport os\nimport sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nROOT_DIR = os.path.dirname(BASE_DIR)\nsys.path.append(os.path.join(ROOT_DIR, 'utils'))\nsys.path.append(os.path.join(ROOT_DIR, 'tf_ops/sampling'))\nsys.path.append(os.path.join(ROOT_DIR, 'tf_ops/grouping'))\nsys.path.append(os.path.join(ROOT_DIR, 'tf_ops/3d_interpolation'))\nfrom tf_sampling import farthest_point_sample, gather_point\nfrom tf_grouping import query_ball_point, group_point, knn_point\nfrom tf_interpolate import three_nn, three_interpolate\nimport tensorflow as tf\nimport numpy as np\nimport tf_util\n\ndef sample_and_group(npoint, radius, nsample, xyz, points, knn=False, use_xyz=True):\n    '''\n    Input:\n        npoint: int32\n        radius: float32\n        nsample: int32\n        xyz: (batch_size, ndataset, 3) TF tensor\n        points: (batch_size, ndataset, channel) TF tensor, if None will just use xyz as points\n        knn: bool, if True use kNN instead of radius search\n        use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features\n    Output:\n        new_xyz: (batch_size, npoint, 3) TF tensor\n        new_points: (batch_size, npoint, nsample, 3+channel) TF tensor\n        idx: (batch_size, npoint, nsample) TF tensor, indices of local points as in ndataset points\n        grouped_xyz: (batch_size, npoint, nsample, 3) TF tensor, normalized point XYZs\n            (subtracted by seed point XYZ) in local regions\n    '''\n\n    new_xyz = gather_point(xyz, farthest_point_sample(npoint, xyz)) # (batch_size, npoint, 3)\n    if knn:\n        _,idx = knn_point(nsample, xyz, new_xyz)\n    else:\n        idx, pts_cnt = query_ball_point(radius, nsample, xyz, new_xyz)\n    grouped_xyz = group_point(xyz, idx) # (batch_size, npoint, nsample, 3)\n    grouped_xyz -= tf.tile(tf.expand_dims(new_xyz, 2), [1,1,nsample,1]) # translation 
normalization\n    if points is not None:\n        grouped_points = group_point(points, idx) # (batch_size, npoint, nsample, channel)\n        if use_xyz:\n            new_points = tf.concat([grouped_xyz, grouped_points], axis=-1) # (batch_size, npoint, nample, 3+channel)\n        else:\n            new_points = grouped_points\n    else:\n        new_points = grouped_xyz\n\n    return new_xyz, new_points, idx, grouped_xyz\n\n\ndef sample_and_group_all(xyz, points, use_xyz=True):\n    '''\n    Inputs:\n        xyz: (batch_size, ndataset, 3) TF tensor\n        points: (batch_size, ndataset, channel) TF tensor, if None will just use xyz as points\n        use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features\n    Outputs:\n        new_xyz: (batch_size, 1, 3) as (0,0,0)\n        new_points: (batch_size, 1, ndataset, 3+channel) TF tensor\n    Note:\n        Equivalent to sample_and_group with npoint=1, radius=inf, use (0,0,0) as the centroid\n    '''\n    batch_size = xyz.get_shape()[0].value\n    nsample = xyz.get_shape()[1].value\n    new_xyz = tf.constant(np.tile(np.array([0,0,0]).reshape((1,1,3)), (batch_size,1,1)),dtype=tf.float32) # (batch_size, 1, 3)\n    idx = tf.constant(np.tile(np.array(range(nsample)).reshape((1,1,nsample)), (batch_size,1,1)))\n    grouped_xyz = tf.reshape(xyz, (batch_size, 1, nsample, 3)) # (batch_size, npoint=1, nsample, 3)\n    if points is not None:\n        if use_xyz:\n            new_points = tf.concat([xyz, points], axis=2) # (batch_size, 16, 259)\n        else:\n            new_points = points\n        new_points = tf.expand_dims(new_points, 1) # (batch_size, 1, 16, 259)\n    else:\n        new_points = grouped_xyz\n    return new_xyz, new_points, idx, grouped_xyz\n\n\ndef pointnet_sa_module(xyz, points, npoint, radius, nsample, mlp, mlp2, group_all, is_training, bn_decay, scope, bn=True, pooling='max', knn=False, use_xyz=True, use_nchw=False):\n    ''' PointNet Set Abstraction (SA) Module\n   
     Input:\n            xyz: (batch_size, ndataset, 3) TF tensor\n            points: (batch_size, ndataset, channel) TF tensor\n            npoint: int32 -- #points sampled in farthest point sampling\n            radius: float32 -- search radius in local region\n            nsample: int32 -- how many points in each local region\n            mlp: list of int32 -- output size for MLP on each point\n            mlp2: list of int32 -- output size for MLP on each region\n            group_all: bool -- group all points into one PC if set true, OVERRIDE\n                npoint, radius and nsample settings\n            use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features\n            use_nchw: bool, if True, use NCHW data format for conv2d, which is usually faster than NHWC format\n        Return:\n            new_xyz: (batch_size, npoint, 3) TF tensor\n            new_points: (batch_size, npoint, mlp[-1] or mlp2[-1]) TF tensor\n            idx: (batch_size, npoint, nsample) int32 -- indices for local regions\n    '''\n    data_format = 'NCHW' if use_nchw else 'NHWC'\n    with tf.variable_scope(scope) as sc:\n        # Sample and Grouping\n        if group_all:\n            nsample = xyz.get_shape()[1].value\n            new_xyz, new_points, idx, grouped_xyz = sample_and_group_all(xyz, points, use_xyz)\n        else:\n            new_xyz, new_points, idx, grouped_xyz = sample_and_group(npoint, radius, nsample, xyz, points, knn, use_xyz)\n\n        # Point Feature Embedding\n        if use_nchw: new_points = tf.transpose(new_points, [0,3,1,2])\n        for i, num_out_channel in enumerate(mlp):\n            new_points = tf_util.conv2d(new_points, num_out_channel, [1,1],\n                                        padding='VALID', stride=[1,1],\n                                        bn=bn, is_training=is_training,\n                                        scope='conv%d'%(i), bn_decay=bn_decay,\n                                        
data_format=data_format) \n        if use_nchw: new_points = tf.transpose(new_points, [0,2,3,1])\n\n        # Pooling in Local Regions\n        if pooling=='max':\n            new_points = tf.reduce_max(new_points, axis=[2], keep_dims=True, name='maxpool')\n        elif pooling=='avg':\n            new_points = tf.reduce_mean(new_points, axis=[2], keep_dims=True, name='avgpool')\n        elif pooling=='weighted_avg':\n            with tf.variable_scope('weighted_avg'):\n                dists = tf.norm(grouped_xyz,axis=-1,ord=2,keep_dims=True)\n                exp_dists = tf.exp(-dists * 5)\n                weights = exp_dists/tf.reduce_sum(exp_dists,axis=2,keep_dims=True) # (batch_size, npoint, nsample, 1)\n                new_points *= weights # (batch_size, npoint, nsample, mlp[-1])\n                new_points = tf.reduce_sum(new_points, axis=2, keep_dims=True)\n        elif pooling=='max_and_avg':\n            max_points = tf.reduce_max(new_points, axis=[2], keep_dims=True, name='maxpool')\n            avg_points = tf.reduce_mean(new_points, axis=[2], keep_dims=True, name='avgpool')\n            new_points = tf.concat([avg_points, max_points], axis=-1)\n\n        # [Optional] Further Processing \n        if mlp2 is not None:\n            if use_nchw: new_points = tf.transpose(new_points, [0,3,1,2])\n            for i, num_out_channel in enumerate(mlp2):\n                new_points = tf_util.conv2d(new_points, num_out_channel, [1,1],\n                                            padding='VALID', stride=[1,1],\n                                            bn=bn, is_training=is_training,\n                                            scope='conv_post_%d'%(i), bn_decay=bn_decay,\n                                            data_format=data_format) \n            if use_nchw: new_points = tf.transpose(new_points, [0,2,3,1])\n\n        new_points = tf.squeeze(new_points, [2]) # (batch_size, npoints, mlp2[-1])\n        return new_xyz, new_points, idx\n\ndef 
pointnet_sa_module_msg(xyz, points, npoint, radius_list, nsample_list, mlp_list, is_training, bn_decay, scope, bn=True, use_xyz=True, use_nchw=False):\n    ''' PointNet Set Abstraction (SA) module with Multi-Scale Grouping (MSG)\n        Input:\n            xyz: (batch_size, ndataset, 3) TF tensor\n            points: (batch_size, ndataset, channel) TF tensor\n            npoint: int32 -- #points sampled in farthest point sampling\n            radius: list of float32 -- search radius in local region\n            nsample: list of int32 -- how many points in each local region\n            mlp: list of list of int32 -- output size for MLP on each point\n            use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features\n            use_nchw: bool, if True, use NCHW data format for conv2d, which is usually faster than NHWC format\n        Return:\n            new_xyz: (batch_size, npoint, 3) TF tensor\n            new_points: (batch_size, npoint, \\sum_k{mlp[k][-1]}) TF tensor\n    '''\n    data_format = 'NCHW' if use_nchw else 'NHWC'\n    with tf.variable_scope(scope) as sc:\n        new_xyz = gather_point(xyz, farthest_point_sample(npoint, xyz))\n        new_points_list = []\n        for i in range(len(radius_list)):\n            radius = radius_list[i]\n            nsample = nsample_list[i]\n            idx, pts_cnt = query_ball_point(radius, nsample, xyz, new_xyz)\n            grouped_xyz = group_point(xyz, idx)\n            grouped_xyz -= tf.tile(tf.expand_dims(new_xyz, 2), [1,1,nsample,1])\n            if points is not None:\n                grouped_points = group_point(points, idx)\n                if use_xyz:\n                    grouped_points = tf.concat([grouped_points, grouped_xyz], axis=-1)\n            else:\n                grouped_points = grouped_xyz\n            if use_nchw: grouped_points = tf.transpose(grouped_points, [0,3,1,2])\n            for j,num_out_channel in enumerate(mlp_list[i]):\n                
grouped_points = tf_util.conv2d(grouped_points, num_out_channel, [1,1],\n                                                padding='VALID', stride=[1,1], bn=bn, is_training=is_training,\n                                                scope='conv%d_%d'%(i,j), bn_decay=bn_decay)\n            if use_nchw: grouped_points = tf.transpose(grouped_points, [0,2,3,1])\n            new_points = tf.reduce_max(grouped_points, axis=[2])\n            new_points_list.append(new_points)\n        new_points_concat = tf.concat(new_points_list, axis=-1)\n        return new_xyz, new_points_concat\n\n\ndef pointnet_fp_module(xyz1, xyz2, points1, points2, mlp, is_training, bn_decay, scope, bn=True):\n    ''' PointNet Feature Propagation (FP) Module\n        Input:\n            xyz1: (batch_size, ndataset1, 3) TF tensor\n            xyz2: (batch_size, ndataset2, 3) TF tensor, sparser than xyz1\n            points1: (batch_size, ndataset1, nchannel1) TF tensor\n            points2: (batch_size, ndataset2, nchannel2) TF tensor\n            mlp: list of int32 -- output size for MLP on each point\n        Return:\n            new_points: (batch_size, ndataset1, mlp[-1]) TF tensor\n    '''\n    with tf.variable_scope(scope) as sc:\n        dist, idx = three_nn(xyz1, xyz2)\n        dist = tf.maximum(dist, 1e-10)\n        norm = tf.reduce_sum((1.0/dist),axis=2,keep_dims=True)\n        norm = tf.tile(norm,[1,1,3])\n        weight = (1.0/dist) / norm\n        interpolated_points = three_interpolate(points2, idx, weight)\n\n        if points1 is not None:\n            new_points1 = tf.concat(axis=2, values=[interpolated_points, points1]) # B,ndataset1,nchannel1+nchannel2\n        else:\n            new_points1 = interpolated_points\n        new_points1 = tf.expand_dims(new_points1, 2)\n        for i, num_out_channel in enumerate(mlp):\n            new_points1 = tf_util.conv2d(new_points1, num_out_channel, [1,1],\n                                         padding='VALID', stride=[1,1],\n                                         bn=bn, is_training=is_training,\n                                         scope='conv_%d'%(i), bn_decay=bn_decay)\n        new_points1 = tf.squeeze(new_points1, [2]) # B,ndataset1,mlp[-1]\n        return new_points1\n"
  },
  {
    "path": "pointnet2_tf/utils/provider.py",
"content": "import os\nimport sys\nimport numpy as np\nimport h5py\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\ndef shuffle_data(data, labels):\n    \"\"\" Shuffle data and labels.\n        Input:\n          data: B,N,... numpy array\n          label: B,... numpy array\n        Return:\n          shuffled data, label and shuffle indices\n    \"\"\"\n    idx = np.arange(len(labels))\n    np.random.shuffle(idx)\n    return data[idx, ...], labels[idx], idx\n\ndef shuffle_points(batch_data):\n    \"\"\" Shuffle the order of points in each point cloud -- changes FPS behavior.\n        Use the same shuffling idx for the entire batch.\n        Input:\n            BxNxC array\n        Output:\n            BxNxC array\n    \"\"\"\n    idx = np.arange(batch_data.shape[1])\n    np.random.shuffle(idx)\n    return batch_data[:,idx,:]\n\ndef rotate_point_cloud(batch_data):\n    \"\"\" Randomly rotate the point clouds to augment the dataset.\n        Rotation is per shape, around the up direction.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, rotated batch of point clouds\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, 0, sinval],\n                                    [0, 1, 0],\n                                    [-sinval, 0, cosval]])\n        shape_pc = batch_data[k, ...]\n        rotated_data[k, ...] 
= np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n    return rotated_data\n\ndef rotate_point_cloud_z(batch_data):\n    \"\"\" Randomly rotate the point clouds to augment the dataset.\n        Rotation is per shape, around the z-axis.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, rotated batch of point clouds\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, sinval, 0],\n                                    [-sinval, cosval, 0],\n                                    [0, 0, 1]])\n        shape_pc = batch_data[k, ...]\n        rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n    return rotated_data\n\ndef rotate_point_cloud_with_normal(batch_xyz_normal):\n    ''' Randomly rotate an XYZ + normal point cloud.\n        Input:\n            batch_xyz_normal: B,N,6, first three channels are XYZ, last three are normals\n        Output:\n            B,N,6, rotated XYZ + normal point cloud\n    '''\n    for k in range(batch_xyz_normal.shape[0]):\n        rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, 0, sinval],\n                                    [0, 1, 0],\n                                    [-sinval, 0, cosval]])\n        shape_pc = batch_xyz_normal[k,:,0:3]\n        shape_normal = batch_xyz_normal[k,:,3:6]\n        batch_xyz_normal[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n        batch_xyz_normal[k,:,3:6] = np.dot(shape_normal.reshape((-1, 3)), rotation_matrix)\n    return batch_xyz_normal\n\ndef rotate_perturbation_point_cloud_with_normal(batch_data, angle_sigma=0.06, 
angle_clip=0.18):\n    \"\"\" Randomly perturb the point clouds by small rotations\n        Input:\n          BxNx6 array, original batch of point clouds and point normals\n        Return:\n          BxNx6 array, rotated batch of point clouds with normals\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip)\n        Rx = np.array([[1,0,0],\n                       [0,np.cos(angles[0]),-np.sin(angles[0])],\n                       [0,np.sin(angles[0]),np.cos(angles[0])]])\n        Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])],\n                       [0,1,0],\n                       [-np.sin(angles[1]),0,np.cos(angles[1])]])\n        Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0],\n                       [np.sin(angles[2]),np.cos(angles[2]),0],\n                       [0,0,1]])\n        R = np.dot(Rz, np.dot(Ry,Rx))\n        shape_pc = batch_data[k,:,0:3]\n        shape_normal = batch_data[k,:,3:6]\n        rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), R)\n        rotated_data[k,:,3:6] = np.dot(shape_normal.reshape((-1, 3)), R)\n    return rotated_data\n\n\ndef rotate_point_cloud_by_angle(batch_data, rotation_angle):\n    \"\"\" Rotate the point cloud around the up direction by a given angle.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, rotated batch of point clouds\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        #rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, 0, sinval],\n                                    [0, 1, 0],\n                                    [-sinval, 0, cosval]])\n        shape_pc = batch_data[k,:,0:3]\n        rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n    return rotated_data\n\ndef rotate_point_cloud_by_angle_with_normal(batch_data, rotation_angle):\n    \"\"\" Rotate the point cloud around the up direction by a given angle.\n        Input:\n          BxNx6 array, original batch of point clouds with normals\n          scalar, angle of rotation\n        Return:\n          BxNx6 array, rotated batch of point clouds with normals\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        #rotation_angle = np.random.uniform() * 2 * np.pi\n        cosval = np.cos(rotation_angle)\n        sinval = np.sin(rotation_angle)\n        rotation_matrix = np.array([[cosval, 0, sinval],\n                                    [0, 1, 0],\n                                    [-sinval, 0, cosval]])\n        shape_pc = batch_data[k,:,0:3]\n        shape_normal = batch_data[k,:,3:6]\n        rotated_data[k,:,0:3] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)\n        rotated_data[k,:,3:6] = np.dot(shape_normal.reshape((-1,3)), rotation_matrix)\n    return rotated_data\n\n\n\ndef rotate_perturbation_point_cloud(batch_data, angle_sigma=0.06, angle_clip=0.18):\n    \"\"\" Randomly perturb the point clouds by small rotations\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, rotated batch of point clouds\n    \"\"\"\n    rotated_data = np.zeros(batch_data.shape, dtype=np.float32)\n    for k in range(batch_data.shape[0]):\n        angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip)\n        Rx = np.array([[1,0,0],\n                       [0,np.cos(angles[0]),-np.sin(angles[0])],\n                       [0,np.sin(angles[0]),np.cos(angles[0])]])\n        Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])],\n                       [0,1,0],\n                       [-np.sin(angles[1]),0,np.cos(angles[1])]])\n        
Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0],\n                       [np.sin(angles[2]),np.cos(angles[2]),0],\n                       [0,0,1]])\n        R = np.dot(Rz, np.dot(Ry,Rx))\n        shape_pc = batch_data[k, ...]\n        rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), R)\n    return rotated_data\n\n\ndef jitter_point_cloud(batch_data, sigma=0.01, clip=0.05):\n    \"\"\" Randomly jitter points. jittering is per point.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, jittered batch of point clouds\n    \"\"\"\n    B, N, C = batch_data.shape\n    assert(clip > 0)\n    jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip)\n    jittered_data += batch_data\n    return jittered_data\n\ndef shift_point_cloud(batch_data, shift_range=0.1):\n    \"\"\" Randomly shift point cloud. Shift is per point cloud.\n        Input:\n          BxNx3 array, original batch of point clouds\n        Return:\n          BxNx3 array, shifted batch of point clouds\n    \"\"\"\n    B, N, C = batch_data.shape\n    shifts = np.random.uniform(-shift_range, shift_range, (B,3))\n    for batch_index in range(B):\n        batch_data[batch_index,:,:] += shifts[batch_index,:]\n    return batch_data\n\n\ndef random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25):\n    \"\"\" Randomly scale the point cloud. 
Scale is per point cloud.\n        Input:\n            BxNx3 array, original batch of point clouds\n        Return:\n            BxNx3 array, scaled batch of point clouds\n    \"\"\"\n    B, N, C = batch_data.shape\n    scales = np.random.uniform(scale_low, scale_high, B)\n    for batch_index in range(B):\n        batch_data[batch_index,:,:] *= scales[batch_index]\n    return batch_data\n\ndef random_point_dropout(batch_pc, max_dropout_ratio=0.875):\n    ''' batch_pc: BxNx3 '''\n    for b in range(batch_pc.shape[0]):\n        dropout_ratio = np.random.random()*max_dropout_ratio # 0~0.875\n        drop_idx = np.where(np.random.random((batch_pc.shape[1]))<=dropout_ratio)[0]\n        if len(drop_idx)>0:\n            batch_pc[b,drop_idx,:] = batch_pc[b,0,:] # set to the first point\n    return batch_pc\n\n\ndef getDataFiles(list_filename):\n    return [line.rstrip() for line in open(list_filename)]\n\ndef load_h5(h5_filename):\n    f = h5py.File(h5_filename, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    return (data, label)\n\ndef loadDataFile(filename):\n    return load_h5(filename)\n"
  },
  {
    "path": "pointnet2_tf/utils/render_balls_so.cpp",
    "content": "#include <cstdio>\n#include <vector>\n#include <algorithm>\n#include <math.h>\nusing namespace std;\n\nstruct PointInfo{\n    int x,y,z;\n    float r,g,b;\n};\n\nextern \"C\"{\n\nvoid render_ball(int h,int w,unsigned char * show,int n,int * xyzs,float * c0,float * c1,float * c2,int r){\n    r=max(r,1);\n    vector<int> depth(h*w,-2100000000);\n    vector<PointInfo> pattern;\n    for (int dx=-r;dx<=r;dx++)\n        for (int dy=-r;dy<=r;dy++)\n            if (dx*dx+dy*dy<r*r){\n                double dz=sqrt(double(r*r-dx*dx-dy*dy));\n                PointInfo pinfo;\n                pinfo.x=dx;\n                pinfo.y=dy;\n                pinfo.z=dz;\n                pinfo.r=dz/r;\n                pinfo.g=dz/r;\n                pinfo.b=dz/r;\n                pattern.push_back(pinfo);\n            }\n    double zmin=0,zmax=0;\n    for (int i=0;i<n;i++){\n        if (i==0){\n            zmin=xyzs[i*3+2]-r;\n            zmax=xyzs[i*3+2]+r;\n        }else{\n            zmin=min(zmin,double(xyzs[i*3+2]-r));\n            zmax=max(zmax,double(xyzs[i*3+2]+r));\n        }\n    }\n    for (int i=0;i<n;i++){\n        int x=xyzs[i*3+0],y=xyzs[i*3+1],z=xyzs[i*3+2];\n        for (int j=0;j<int(pattern.size());j++){\n            int x2=x+pattern[j].x;\n            int y2=y+pattern[j].y;\n            int z2=z+pattern[j].z;\n            if (!(x2<0 || x2>=h || y2<0 || y2>=w) && depth[x2*w+y2]<z2){\n                depth[x2*w+y2]=z2;\n                double intensity=min(1.0,(z2-zmin)/(zmax-zmin)*0.7+0.3);\n                show[(x2*w+y2)*3+0]=pattern[j].b*c2[i]*intensity;\n                show[(x2*w+y2)*3+1]=pattern[j].g*c0[i]*intensity;\n                show[(x2*w+y2)*3+2]=pattern[j].r*c1[i]*intensity;\n            }\n        }\n    }\n}\n\n}//extern \"C\"\n"
  },
  {
    "path": "pointnet2_tf/utils/show3d_balls.py",
    "content": "\"\"\" Original Author: Haoqiang Fan \"\"\"\nimport numpy as np\nimport ctypes as ct\nimport cv2\nimport sys\nimport os\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nshowsz=800\nmousex,mousey=0.5,0.5\nzoom=1.0\nchanged=True\ndef onmouse(*args):\n    global mousex,mousey,changed\n    y=args[1]\n    x=args[2]\n    mousex=x/float(showsz)\n    mousey=y/float(showsz)\n    changed=True\ncv2.namedWindow('show3d')\ncv2.moveWindow('show3d',0,0)\ncv2.setMouseCallback('show3d',onmouse)\n\ndll=np.ctypeslib.load_library(os.path.join(BASE_DIR, 'render_balls_so'),'.')\n\ndef showpoints(xyz,c_gt=None, c_pred = None ,waittime=0,showrot=False,magnifyBlue=0,freezerot=False,background=(0,0,0),normalizecolor=True,ballradius=10):\n    global showsz,mousex,mousey,zoom,changed\n    xyz=xyz-xyz.mean(axis=0)\n    radius=((xyz**2).sum(axis=-1)**0.5).max()\n    xyz/=(radius*2.2)/showsz\n    if c_gt is None:\n        c0=np.zeros((len(xyz),),dtype='float32')+255\n        c1=np.zeros((len(xyz),),dtype='float32')+255\n        c2=np.zeros((len(xyz),),dtype='float32')+255\n    else:\n        c0=c_gt[:,0]\n        c1=c_gt[:,1]\n        c2=c_gt[:,2]\n\n\n    if normalizecolor:\n        c0/=(c0.max()+1e-14)/255.0\n        c1/=(c1.max()+1e-14)/255.0\n        c2/=(c2.max()+1e-14)/255.0\n\n\n    c0=np.require(c0,'float32','C')\n    c1=np.require(c1,'float32','C')\n    c2=np.require(c2,'float32','C')\n\n    show=np.zeros((showsz,showsz,3),dtype='uint8')\n    def render():\n        rotmat=np.eye(3)\n        if not freezerot:\n            xangle=(mousey-0.5)*np.pi*1.2\n        else:\n            xangle=0\n        rotmat=rotmat.dot(np.array([\n            [1.0,0.0,0.0],\n            [0.0,np.cos(xangle),-np.sin(xangle)],\n            [0.0,np.sin(xangle),np.cos(xangle)],\n            ]))\n        if not freezerot:\n            yangle=(mousex-0.5)*np.pi*1.2\n        else:\n            yangle=0\n        rotmat=rotmat.dot(np.array([\n            [np.cos(yangle),0.0,-np.sin(yangle)],\n    
        [0.0,1.0,0.0],\n            [np.sin(yangle),0.0,np.cos(yangle)],\n            ]))\n        rotmat*=zoom\n        nxyz=xyz.dot(rotmat)+[showsz/2,showsz/2,0]\n\n        ixyz=nxyz.astype('int32')\n        show[:]=background\n        dll.render_ball(\n            ct.c_int(show.shape[0]),\n            ct.c_int(show.shape[1]),\n            show.ctypes.data_as(ct.c_void_p),\n            ct.c_int(ixyz.shape[0]),\n            ixyz.ctypes.data_as(ct.c_void_p),\n            c0.ctypes.data_as(ct.c_void_p),\n            c1.ctypes.data_as(ct.c_void_p),\n            c2.ctypes.data_as(ct.c_void_p),\n            ct.c_int(ballradius)\n        )\n\n        if magnifyBlue>0:\n            show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],1,axis=0))\n            if magnifyBlue>=2:\n                show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],-1,axis=0))\n            show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],1,axis=1))\n            if magnifyBlue>=2:\n                show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],-1,axis=1))\n        if showrot:\n            # red in BGR order; cv2.cv.CV_RGB was removed in OpenCV >= 3\n            cv2.putText(show,'xangle %d'%(int(xangle/np.pi*180)),(30,showsz-30),0,0.5,(0,0,255))\n            cv2.putText(show,'yangle %d'%(int(yangle/np.pi*180)),(30,showsz-50),0,0.5,(0,0,255))\n            cv2.putText(show,'zoom %d%%'%(int(zoom*100)),(30,showsz-70),0,0.5,(0,0,255))\n    changed=True\n    while True:\n        if changed:\n            render()\n            changed=False\n        cv2.imshow('show3d',show)\n        if waittime==0:\n            cmd=cv2.waitKey(10)%256\n        else:\n            cmd=cv2.waitKey(waittime)%256\n        if cmd==ord('q'):\n            break\n        elif cmd==ord('Q'):\n            sys.exit(0)\n\n        if cmd==ord('t') or cmd == ord('p'):\n            if cmd == ord('t'):\n                if c_gt is None:\n                    c0=np.zeros((len(xyz),),dtype='float32')+255\n                    
c1=np.zeros((len(xyz),),dtype='float32')+255\n                    c2=np.zeros((len(xyz),),dtype='float32')+255\n                else:\n                    c0=c_gt[:,0]\n                    c1=c_gt[:,1]\n                    c2=c_gt[:,2]\n            else:\n                if c_pred is None:\n                    c0=np.zeros((len(xyz),),dtype='float32')+255\n                    c1=np.zeros((len(xyz),),dtype='float32')+255\n                    c2=np.zeros((len(xyz),),dtype='float32')+255\n                else:\n                    c0=c_pred[:,0]\n                    c1=c_pred[:,1]\n                    c2=c_pred[:,2]\n            if normalizecolor:\n                c0/=(c0.max()+1e-14)/255.0\n                c1/=(c1.max()+1e-14)/255.0\n                c2/=(c2.max()+1e-14)/255.0\n            c0=np.require(c0,'float32','C')\n            c1=np.require(c1,'float32','C')\n            c2=np.require(c2,'float32','C')\n            changed = True\n\n\n\n        if cmd==ord('n'):\n            zoom*=1.1\n            changed=True\n        elif cmd==ord('m'):\n            zoom/=1.1\n            changed=True\n        elif cmd==ord('r'):\n            zoom=1.0\n            changed=True\n        elif cmd==ord('s'):\n            cv2.imwrite('show3d.png',show)\n        if waittime!=0:\n            break\n    return cmd\nif __name__=='__main__':\n    np.random.seed(100)\n    showpoints(np.random.randn(2500,3))\n\n"
  },
  {
    "path": "pointnet2_tf/utils/tf_util.py",
    "content": "\"\"\" Wrapper functions for TensorFlow layers.\n\nAuthor: Charles R. Qi\nDate: November 2017\n\"\"\"\n\nimport numpy as np\nimport tensorflow as tf\n\ndef _variable_on_cpu(name, shape, initializer, use_fp16=False):\n  \"\"\"Helper to create a Variable stored on CPU memory.\n  Args:\n    name: name of the variable\n    shape: list of ints\n    initializer: initializer for Variable\n  Returns:\n    Variable Tensor\n  \"\"\"\n  with tf.device(\"/cpu:0\"):\n    dtype = tf.float16 if use_fp16 else tf.float32\n    var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)\n  return var\n\ndef _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):\n  \"\"\"Helper to create an initialized Variable with weight decay.\n\n  Note that the Variable is initialized with a truncated normal distribution.\n  A weight decay is added only if one is specified.\n\n  Args:\n    name: name of the variable\n    shape: list of ints\n    stddev: standard deviation of a truncated Gaussian\n    wd: add L2Loss weight decay multiplied by this float. 
If None, weight\n        decay is not added for this Variable.\n    use_xavier: bool, whether to use xavier initializer\n\n  Returns:\n    Variable Tensor\n  \"\"\"\n  if use_xavier:\n    initializer = tf.contrib.layers.xavier_initializer()\n  else:\n    initializer = tf.truncated_normal_initializer(stddev=stddev)\n  var = _variable_on_cpu(name, shape, initializer)\n  if wd is not None:\n    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')\n    tf.add_to_collection('losses', weight_decay)\n  return var\n\n\ndef conv1d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=1,\n           padding='SAME',\n           data_format='NHWC',\n           use_xavier=True,\n           stddev=1e-3,\n           weight_decay=None,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None):\n  \"\"\" 1D convolution with non-linear operation.\n\n  Args:\n    inputs: 3-D tensor variable BxLxC\n    num_output_channels: int\n    kernel_size: int\n    scope: string\n    stride: int\n    padding: 'SAME' or 'VALID'\n    data_format: 'NHWC' or 'NCHW'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    assert(data_format=='NHWC' or data_format=='NCHW')\n    if data_format == 'NHWC':\n      num_in_channels = inputs.get_shape()[-1].value\n    elif data_format=='NCHW':\n      num_in_channels = inputs.get_shape()[1].value\n    kernel_shape = [kernel_size,\n                    num_in_channels, num_output_channels]\n    kernel = _variable_with_weight_decay('weights',\n                                         shape=kernel_shape,\n            
                             use_xavier=use_xavier,\n                                         stddev=stddev,\n                                         wd=weight_decay)\n    outputs = tf.nn.conv1d(inputs, kernel,\n                           stride=stride,\n                           padding=padding,\n                           data_format=data_format)\n    biases = _variable_on_cpu('biases', [num_output_channels],\n                              tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases, data_format=data_format)\n\n    if bn:\n      outputs = batch_norm_for_conv1d(outputs, is_training,\n                                      bn_decay=bn_decay, scope='bn',\n                                      data_format=data_format)\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return outputs\n\n\n\n\ndef conv2d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=[1, 1],\n           padding='SAME',\n           data_format='NHWC',\n           use_xavier=True,\n           stddev=1e-3,\n           weight_decay=None,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None):\n  \"\"\" 2D convolution with non-linear operation.\n\n  Args:\n    inputs: 4-D tensor variable BxHxWxC\n    num_output_channels: int\n    kernel_size: a list of 2 ints\n    scope: string\n    stride: a list of 2 ints\n    padding: 'SAME' or 'VALID'\n    data_format: 'NHWC' or 'NCHW'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n      kernel_h, kernel_w = kernel_size\n      assert(data_format=='NHWC' or 
data_format=='NCHW')\n      if data_format == 'NHWC':\n        num_in_channels = inputs.get_shape()[-1].value\n      elif data_format=='NCHW':\n        num_in_channels = inputs.get_shape()[1].value\n      kernel_shape = [kernel_h, kernel_w,\n                      num_in_channels, num_output_channels]\n      kernel = _variable_with_weight_decay('weights',\n                                           shape=kernel_shape,\n                                           use_xavier=use_xavier,\n                                           stddev=stddev,\n                                           wd=weight_decay)\n      stride_h, stride_w = stride\n      outputs = tf.nn.conv2d(inputs, kernel,\n                             [1, stride_h, stride_w, 1],\n                             padding=padding,\n                             data_format=data_format)\n      biases = _variable_on_cpu('biases', [num_output_channels],\n                                tf.constant_initializer(0.0))\n      outputs = tf.nn.bias_add(outputs, biases, data_format=data_format)\n\n      if bn:\n        outputs = batch_norm_for_conv2d(outputs, is_training,\n                                        bn_decay=bn_decay, scope='bn',\n                                        data_format=data_format)\n\n      if activation_fn is not None:\n        outputs = activation_fn(outputs)\n      return outputs\n\n\ndef conv2d_transpose(inputs,\n                     num_output_channels,\n                     kernel_size,\n                     scope,\n                     stride=[1, 1],\n                     padding='SAME',\n                     use_xavier=True,\n                     stddev=1e-3,\n                     weight_decay=None,\n                     activation_fn=tf.nn.relu,\n                     bn=False,\n                     bn_decay=None,\n                     is_training=None):\n  \"\"\" 2D convolution transpose with non-linear operation.\n\n  Args:\n    inputs: 4-D tensor variable BxHxWxC\n    
num_output_channels: int\n    kernel_size: a list of 2 ints\n    scope: string\n    stride: a list of 2 ints\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n\n  Note: conv2d(conv2d_transpose(a, num_out, ksize, stride), a.shape[-1], ksize, stride) == a\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n      kernel_h, kernel_w = kernel_size\n      num_in_channels = inputs.get_shape()[-1].value\n      kernel_shape = [kernel_h, kernel_w,\n                      num_output_channels, num_in_channels] # reversed compared to conv2d\n      kernel = _variable_with_weight_decay('weights',\n                                           shape=kernel_shape,\n                                           use_xavier=use_xavier,\n                                           stddev=stddev,\n                                           wd=weight_decay)\n      stride_h, stride_w = stride\n\n      # from slim.convolution2d_transpose\n      def get_deconv_dim(dim_size, stride_size, kernel_size, padding):\n          dim_size *= stride_size\n\n          if padding == 'VALID' and dim_size is not None:\n            dim_size += max(kernel_size - stride_size, 0)\n          return dim_size\n\n      # calculate output shape\n      batch_size = inputs.get_shape()[0].value\n      height = inputs.get_shape()[1].value\n      width = inputs.get_shape()[2].value\n      out_height = get_deconv_dim(height, stride_h, kernel_h, padding)\n      out_width = get_deconv_dim(width, stride_w, kernel_w, padding)\n      output_shape = [batch_size, out_height, out_width, num_output_channels]\n\n      outputs = tf.nn.conv2d_transpose(inputs, kernel, output_shape,\n                             [1, stride_h, stride_w, 
1],\n                             padding=padding)\n      biases = _variable_on_cpu('biases', [num_output_channels],\n                                tf.constant_initializer(0.0))\n      outputs = tf.nn.bias_add(outputs, biases)\n\n      if bn:\n        outputs = batch_norm_for_conv2d(outputs, is_training,\n                                        bn_decay=bn_decay, scope='bn')\n\n      if activation_fn is not None:\n        outputs = activation_fn(outputs)\n      return outputs\n\n   \n\ndef conv3d(inputs,\n           num_output_channels,\n           kernel_size,\n           scope,\n           stride=[1, 1, 1],\n           padding='SAME',\n           use_xavier=True,\n           stddev=1e-3,\n           weight_decay=None,\n           activation_fn=tf.nn.relu,\n           bn=False,\n           bn_decay=None,\n           is_training=None):\n  \"\"\" 3D convolution with non-linear operation.\n\n  Args:\n    inputs: 5-D tensor variable BxDxHxWxC\n    num_output_channels: int\n    kernel_size: a list of 3 ints\n    scope: string\n    stride: a list of 3 ints\n    padding: 'SAME' or 'VALID'\n    use_xavier: bool, use xavier_initializer if true\n    stddev: float, stddev for truncated_normal init\n    weight_decay: float\n    activation_fn: function\n    bn: bool, whether to use batch norm\n    bn_decay: float or float tensor variable in [0,1]\n    is_training: bool Tensor variable\n\n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_d, kernel_h, kernel_w = kernel_size\n    num_in_channels = inputs.get_shape()[-1].value\n    kernel_shape = [kernel_d, kernel_h, kernel_w,\n                    num_in_channels, num_output_channels]\n    kernel = _variable_with_weight_decay('weights',\n                                         shape=kernel_shape,\n                                         use_xavier=use_xavier,\n                                         stddev=stddev,\n                                         wd=weight_decay)\n    
stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.conv3d(inputs, kernel,\n                           [1, stride_d, stride_h, stride_w, 1],\n                           padding=padding)\n    biases = _variable_on_cpu('biases', [num_output_channels],\n                              tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases)\n    \n    if bn:\n      outputs = batch_norm_for_conv3d(outputs, is_training,\n                                      bn_decay=bn_decay, scope='bn')\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return outputs\n\ndef fully_connected(inputs,\n                    num_outputs,\n                    scope,\n                    use_xavier=True,\n                    stddev=1e-3,\n                    weight_decay=None,\n                    activation_fn=tf.nn.relu,\n                    bn=False,\n                    bn_decay=None,\n                    is_training=None):\n  \"\"\" Fully connected layer with non-linear operation.\n  \n  Args:\n    inputs: 2-D tensor BxN\n    num_outputs: int\n  \n  Returns:\n    Variable tensor of size B x num_outputs.\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_input_units = inputs.get_shape()[-1].value\n    weights = _variable_with_weight_decay('weights',\n                                          shape=[num_input_units, num_outputs],\n                                          use_xavier=use_xavier,\n                                          stddev=stddev,\n                                          wd=weight_decay)\n    outputs = tf.matmul(inputs, weights)\n    biases = _variable_on_cpu('biases', [num_outputs],\n                             tf.constant_initializer(0.0))\n    outputs = tf.nn.bias_add(outputs, biases)\n     \n    if bn:\n      outputs = batch_norm_for_fc(outputs, is_training, bn_decay, 'bn')\n\n    if activation_fn is not None:\n      outputs = activation_fn(outputs)\n    return outputs\n\n\ndef max_pool2d(inputs,\n  
             kernel_size,\n               scope,\n               stride=[2, 2],\n               padding='VALID'):\n  \"\"\" 2D max pooling.\n\n  Args:\n    inputs: 4-D tensor BxHxWxC\n    kernel_size: a list of 2 ints\n    stride: a list of 2 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_h, kernel_w = kernel_size\n    stride_h, stride_w = stride\n    outputs = tf.nn.max_pool(inputs,\n                             ksize=[1, kernel_h, kernel_w, 1],\n                             strides=[1, stride_h, stride_w, 1],\n                             padding=padding,\n                             name=sc.name)\n    return outputs\n\ndef avg_pool2d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2],\n               padding='VALID'):\n  \"\"\" 2D avg pooling.\n\n  Args:\n    inputs: 4-D tensor BxHxWxC\n    kernel_size: a list of 2 ints\n    stride: a list of 2 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_h, kernel_w = kernel_size\n    stride_h, stride_w = stride\n    outputs = tf.nn.avg_pool(inputs,\n                             ksize=[1, kernel_h, kernel_w, 1],\n                             strides=[1, stride_h, stride_w, 1],\n                             padding=padding,\n                             name=sc.name)\n    return outputs\n\n\ndef max_pool3d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2, 2],\n               padding='VALID'):\n  \"\"\" 3D max pooling.\n\n  Args:\n    inputs: 5-D tensor BxDxHxWxC\n    kernel_size: a list of 3 ints\n    stride: a list of 3 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_d, kernel_h, kernel_w = kernel_size\n    stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.max_pool3d(inputs,\n                               ksize=[1, kernel_d, kernel_h, kernel_w, 1],\n                      
         strides=[1, stride_d, stride_h, stride_w, 1],\n                               padding=padding,\n                               name=sc.name)\n    return outputs\n\ndef avg_pool3d(inputs,\n               kernel_size,\n               scope,\n               stride=[2, 2, 2],\n               padding='VALID'):\n  \"\"\" 3D avg pooling.\n\n  Args:\n    inputs: 5-D tensor BxDxHxWxC\n    kernel_size: a list of 3 ints\n    stride: a list of 3 ints\n  \n  Returns:\n    Variable tensor\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    kernel_d, kernel_h, kernel_w = kernel_size\n    stride_d, stride_h, stride_w = stride\n    outputs = tf.nn.avg_pool3d(inputs,\n                               ksize=[1, kernel_d, kernel_h, kernel_w, 1],\n                               strides=[1, stride_d, stride_h, stride_w, 1],\n                               padding=padding,\n                               name=sc.name)\n    return outputs\n\n\ndef batch_norm_template_unused(inputs, is_training, scope, moments_dims, bn_decay):\n  \"\"\" NOTE: this is older version of the util func. it is deprecated.\n  Batch normalization on convolutional maps and beyond...\n  Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow\n  \n  Args:\n      inputs:        Tensor, k-D input ... 
x C could be BC or BHWC or BDHWC\n      is_training:   boolean tf.Variable, true indicates training phase\n      scope:         string, variable scope\n      moments_dims:  a list of ints, indicating dimensions for moments calculation\n      bn_decay:      float or float tensor variable, controlling moving average weight\n  Return:\n      normed:        batch-normalized maps\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    num_channels = inputs.get_shape()[-1].value\n    beta = _variable_on_cpu(name='beta', shape=[num_channels],\n                            initializer=tf.constant_initializer(0))\n    gamma = _variable_on_cpu(name='gamma', shape=[num_channels],\n                            initializer=tf.constant_initializer(1.0))\n    batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments')\n    decay = bn_decay if bn_decay is not None else 0.9\n    ema = tf.train.ExponentialMovingAverage(decay=decay)\n    # Operator that maintains moving averages of variables.\n    # Need to set reuse=False; otherwise, on reuse, moments_1/mean/ExponentialMovingAverage/ does not exist.\n    # https://github.com/shekkizh/WassersteinGAN.tensorflow/issues/3\n    with tf.variable_scope(tf.get_variable_scope(), reuse=False):\n        ema_apply_op = tf.cond(is_training,\n                               lambda: ema.apply([batch_mean, batch_var]),\n                               lambda: tf.no_op())\n\n    # Update the moving averages and return the current batch's mean and var.\n    def mean_var_with_update():\n      with tf.control_dependencies([ema_apply_op]):\n        return tf.identity(batch_mean), tf.identity(batch_var)\n\n    # ema.average returns the Variable holding the average of var.\n    mean, var = tf.cond(is_training,\n                        mean_var_with_update,\n                        lambda: (ema.average(batch_mean), ema.average(batch_var)))\n    normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3)\n  return normed\n\n\ndef batch_norm_template(inputs, is_training, scope, moments_dims_unused, bn_decay, data_format='NHWC'):\n  \"\"\" Batch normalization on convolutional maps and beyond...\n  Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow\n  \n  Args:\n      inputs:        Tensor, k-D input ... x C could be BC or BHWC or BDHWC\n      is_training:   boolean tf.Variable, true indicates training phase\n      scope:         string, variable scope\n      moments_dims:  a list of ints, indicating dimensions for moments calculation\n      bn_decay:      float or float tensor variable, controlling moving average weight\n      data_format:   'NHWC' or 'NCHW'\n  Return:\n      normed:        batch-normalized maps\n  \"\"\"\n  bn_decay = bn_decay if bn_decay is not None else 0.9\n  return tf.contrib.layers.batch_norm(inputs,\n                                      center=True, scale=True,\n                                      is_training=is_training, decay=bn_decay, updates_collections=None,\n                                      scope=scope,\n                                      data_format=data_format)\n\n\ndef batch_norm_for_fc(inputs, is_training, bn_decay, scope):\n  \"\"\" Batch normalization on FC data.\n  \n  Args:\n      inputs:      Tensor, 2D BxC input\n      is_training: boolean tf.Variable, true indicates training phase\n      bn_decay:    float or float tensor variable, controlling moving average weight\n      scope:       string, variable scope\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  return batch_norm_template(inputs, is_training, scope, [0,], bn_decay)\n\n\ndef batch_norm_for_conv1d(inputs, is_training, bn_decay, scope, data_format):\n  \"\"\" Batch normalization on 1D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 3D BLC input maps\n      is_training: boolean tf.Variable, true indicates training phase\n      bn_decay:    float or float tensor variable, controlling moving average weight\n      scope:       string, variable scope\n      data_format: 'NHWC' or 'NCHW'\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  return batch_norm_template(inputs, is_training, scope, [0,1], bn_decay, data_format)\n\n\ndef batch_norm_for_conv2d(inputs, is_training, bn_decay, scope, data_format):\n  \"\"\" Batch normalization on 2D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 4D BHWC input maps\n      is_training: boolean tf.Variable, true indicates training phase\n      bn_decay:    float or float tensor variable, controlling moving average weight\n      scope:       string, variable scope\n      data_format: 'NHWC' or 'NCHW'\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  return batch_norm_template(inputs, is_training, scope, [0,1,2], bn_decay, data_format)\n\n\ndef batch_norm_for_conv3d(inputs, is_training, bn_decay, scope):\n  \"\"\" Batch normalization on 3D convolutional maps.\n  \n  Args:\n      inputs:      Tensor, 5D BDHWC input maps\n      is_training: boolean tf.Variable, true indicates training phase\n      bn_decay:    float or float tensor variable, controlling moving average weight\n      scope:       string, variable scope\n  Return:\n      normed:      batch-normalized maps\n  \"\"\"\n  return batch_norm_template(inputs, is_training, scope, [0,1,2,3], bn_decay)\n\n\ndef dropout(inputs,\n            is_training,\n            scope,\n            keep_prob=0.5,\n            noise_shape=None):\n  \"\"\" Dropout layer.\n\n  Args:\n    inputs: tensor\n    is_training: boolean tf.Variable\n    scope: string\n    keep_prob: float in [0,1]\n    noise_shape: list of ints\n\n  Returns:\n    tensor variable\n  \"\"\"\n  with tf.variable_scope(scope) as sc:\n    outputs = tf.cond(is_training,\n                      lambda: tf.nn.dropout(inputs, keep_prob, noise_shape),\n                      lambda: inputs)\n    return outputs\n
  },
  {
    "path": "pointnet_pyt/.gitignore",
    "content": ".ipynb_checkpoints/\ndata\n*.pyc\n*.ipynb\nshapenetcore_partanno_segmentation_benchmark_v0/\n*.so\n.idea*\ncls/\nseg/\n*.egg-info/\n"
  },
  {
    "path": "pointnet_pyt/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2017 Fei Xia\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "pointnet_pyt/README.md",
    "content": "# PointNet.pytorch\nThis repo is implementation for PointNet(https://arxiv.org/abs/1612.00593) in pytorch. The model is in `pointnet/model.py`.\n\nIt is tested with pytorch-1.0.\n\n# Download data and running\n\n```\ngit clone https://github.com/fxia22/pointnet.pytorch\ncd pointnet.pytorch\npip install -e .\n```\n\nDownload and build visualization tool\n```\ncd script\nbash build.sh #build C++ code for visualization\nbash download.sh #download dataset\n```\n\nTraining \n```\ncd utils\npython train_classification.py --dataset <dataset path> --nepoch=<number epochs> --dataset_type <modelnet40 | shapenet>\npython train_segmentation.py --dataset <dataset path> --nepoch=<number epochs> \n```\n\nUse `--feature_transform` to use feature transform.\n\n# Performance\n\n## Classification performance\n\nOn ModelNet40:\n\n|  | Overall Acc | \n| :---: | :---: | \n| Original implementation | 89.2 | \n| this implementation(w/o feature transform) | 86.4 | \n| this implementation(w/ feature transform) | 87.0 | \n\nOn [A subset of shapenet](http://web.stanford.edu/~ericyi/project_page/part_annotation/index.html)\n\n|  | Overall Acc | \n| :---: | :---: | \n| Original implementation | N/A | \n| this implementation(w/o feature transform) | 98.1 | \n| this implementation(w/ feature transform) | 97.7 | \n\n## Segmentation performance\n\nSegmentation on  [A subset of shapenet](http://web.stanford.edu/~ericyi/project_page/part_annotation/index.html).\n\n| Class(mIOU) | Airplane | Bag| Cap|Car|Chair|Earphone|Guitar|Knife|Lamp|Laptop|Motorbike|Mug|Pistol|Rocket|Skateboard|Table\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \n| Original implementation |  83.4 | 78.7 | 82.5| 74.9 |89.6| 73.0| 91.5| 85.9| 80.8| 95.3| 65.2| 93.0| 81.2| 57.9| 72.8| 80.6| \n| this implementation(w/o feature transform) | 73.5 | 71.3 | 64.3 | 61.1 | 87.2 | 69.5 | 86.1|81.6| 
77.4|92.7|41.3|86.5|78.2|41.2|61.0|81.1|\n| this implementation (w/ feature transform) |  |  |  |  | 87.6 |  | | | | | | | | | |81.0|\n\nNote that this implementation trains each class separately, so classes with less data will have slightly lower performance than the reference implementation.\n\nSample segmentation result:\n![seg](https://raw.githubusercontent.com/fxia22/pointnet.pytorch/master/misc/show3d.png?token=AE638Oy51TL2HDCaeCF273X_-Bsy6-E2ks5Y_BUzwA%3D%3D)\n\n# Links\n\n- [Project Page](http://stanford.edu/~rqi/pointnet/)\n- [Tensorflow implementation](https://github.com/charlesq34/pointnet)\n"
  },
  {
    "path": "pointnet_pyt/misc/modelnet_id.txt",
    "content": "airplane\t0\nbathtub\t1\nbed\t2\nbench\t3\nbookshelf\t4\nbottle\t5\nbowl\t6\ncar\t7\nchair\t8\ncone\t9\ncup\t10\ncurtain\t11\ndesk\t12\ndoor\t13\ndresser\t14\nflower_pot\t15\nglass_box\t16\nguitar\t17\nkeyboard\t18\nlamp\t19\nlaptop\t20\nmantel\t21\nmonitor\t22\nnight_stand\t23\nperson\t24\npiano\t25\nplant\t26\nradio\t27\nrange_hood\t28\nsink\t29\nsofa\t30\nstairs\t31\nstool\t32\ntable\t33\ntent\t34\ntoilet\t35\ntv_stand\t36\nvase\t37\nwardrobe\t38\nxbox\t39\n"
  },
  {
    "path": "pointnet_pyt/misc/num_seg_classes.txt",
    "content": "Airplane\t4\nBag\t2\nCap\t2\nCar\t4\nChair\t4\nEarphone\t3\nGuitar\t3\nKnife\t2\nLamp\t4\nLaptop\t2\nMotorbike\t6\nMug\t2\nPistol\t3\nRocket\t3\nSkateboard\t3\nTable\t3\n"
  },
  {
    "path": "pointnet_pyt/pointnet/__init__.py",
    "content": ""
  },
  {
    "path": "pointnet_pyt/pointnet/dataset.py",
    "content": "from __future__ import print_function\nimport torch.utils.data as data\nimport os\nimport os.path\nimport torch\nimport numpy as np\nimport sys\nfrom tqdm import tqdm \nimport json\nfrom plyfile import PlyData, PlyElement\n\ndef get_segmentation_classes(root):\n    catfile = os.path.join(root, 'synsetoffset2category.txt')\n    cat = {}\n    meta = {}\n\n    with open(catfile, 'r') as f:\n        for line in f:\n            ls = line.strip().split()\n            cat[ls[0]] = ls[1]\n\n    for item in cat:\n        dir_seg = os.path.join(root, cat[item], 'points_label')\n        dir_point = os.path.join(root, cat[item], 'points')\n        fns = sorted(os.listdir(dir_point))\n        meta[item] = []\n        for fn in fns:\n            token = (os.path.splitext(os.path.basename(fn))[0])\n            meta[item].append((os.path.join(dir_point, token + '.pts'), os.path.join(dir_seg, token + '.seg')))\n    \n    with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/num_seg_classes.txt'), 'w') as f:\n        for item in cat:\n            datapath = []\n            num_seg_classes = 0\n            for fn in meta[item]:\n                datapath.append((item, fn[0], fn[1]))\n\n            for i in tqdm(range(len(datapath))):\n                l = len(np.unique(np.loadtxt(datapath[i][-1]).astype(np.uint8)))\n                if l > num_seg_classes:\n                    num_seg_classes = l\n\n            print(\"category {} num segmentation classes {}\".format(item, num_seg_classes))\n            f.write(\"{}\\t{}\\n\".format(item, num_seg_classes))\n\ndef gen_modelnet_id(root):\n    classes = []\n    with open(os.path.join(root, 'train.txt'), 'r') as f:\n        for line in f:\n            classes.append(line.strip().split('/')[0])\n    classes = np.unique(classes)\n    with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/modelnet_id.txt'), 'w') as f:\n        for i in range(len(classes)):\n            
f.write('{}\\t{}\\n'.format(classes[i], i))\n\nclass ShapeNetDataset(data.Dataset):\n    def __init__(self,\n                 root,\n                 npoints=2500,\n                 classification=False,\n                 class_choice=None,\n                 split='train',\n                 data_augmentation=True):\n        self.npoints = npoints\n        self.root = root\n        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')\n        self.cat = {}\n        self.data_augmentation = data_augmentation\n        self.classification = classification\n        self.seg_classes = {}\n        \n        with open(self.catfile, 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = ls[1]\n        #print(self.cat)\n        if not class_choice is None:\n            self.cat = {k: v for k, v in self.cat.items() if k in class_choice}\n\n        self.id2cat = {v: k for k, v in self.cat.items()}\n\n        self.meta = {}\n        splitfile = os.path.join(self.root, 'train_test_split', 'shuffled_{}_file_list.json'.format(split))\n        #from IPython import embed; embed()\n        filelist = json.load(open(splitfile, 'r'))\n        for item in self.cat:\n            self.meta[item] = []\n\n        for file in filelist:\n            _, category, uuid = file.split('/')\n            if category in self.cat.values():\n                self.meta[self.id2cat[category]].append((os.path.join(self.root, category, 'points', uuid+'.pts'),\n                                        os.path.join(self.root, category, 'points_label', uuid+'.seg')))\n\n        self.datapath = []\n        for item in self.cat:\n            for fn in self.meta[item]:\n                self.datapath.append((item, fn[0], fn[1]))\n\n        self.classes = dict(zip(sorted(self.cat), range(len(self.cat))))\n        print(self.classes)\n        with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/num_seg_classes.txt'), 
'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.seg_classes[ls[0]] = int(ls[1])\n        self.num_seg_classes = self.seg_classes[list(self.cat.keys())[0]]\n        print(self.seg_classes, self.num_seg_classes)\n\n    def __getitem__(self, index):\n        fn = self.datapath[index]\n        cls = self.classes[self.datapath[index][0]]\n        point_set = np.loadtxt(fn[1]).astype(np.float32)\n        seg = np.loadtxt(fn[2]).astype(np.int64)\n        #print(point_set.shape, seg.shape)\n\n        choice = np.random.choice(len(seg), self.npoints, replace=True)\n        #resample\n        point_set = point_set[choice, :]\n\n        point_set = point_set - np.expand_dims(np.mean(point_set, axis = 0), 0) # center\n        dist = np.max(np.sqrt(np.sum(point_set ** 2, axis = 1)),0)\n        point_set = point_set / dist #scale\n\n        if self.data_augmentation:\n            theta = np.random.uniform(0,np.pi*2)\n            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])\n            point_set[:,[0,2]] = point_set[:,[0,2]].dot(rotation_matrix) # random rotation\n            point_set += np.random.normal(0, 0.02, size=point_set.shape) # random jitter\n\n        seg = seg[choice]\n        point_set = torch.from_numpy(point_set)\n        seg = torch.from_numpy(seg)\n        cls = torch.from_numpy(np.array([cls]).astype(np.int64))\n\n        if self.classification:\n            return point_set, cls\n        else:\n            return point_set, seg\n\n    def __len__(self):\n        return len(self.datapath)\n\nclass ModelNetDataset(data.Dataset):\n    def __init__(self,\n                 root,\n                 npoints=2500,\n                 split='train',\n                 data_augmentation=True):\n        self.npoints = npoints\n        self.root = root\n        self.split = split\n        self.data_augmentation = data_augmentation\n        self.fns = []\n        with 
open(os.path.join(root, '{}.txt'.format(self.split)), 'r') as f:\n            for line in f:\n                self.fns.append(line.strip())\n\n        self.cat = {}\n        with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/modelnet_id.txt'), 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = int(ls[1])\n\n        print(self.cat)\n        self.classes = list(self.cat.keys())\n\n    def __getitem__(self, index):\n        fn = self.fns[index]\n        cls = self.cat[fn.split('/')[0]]\n        with open(os.path.join(self.root, fn), 'rb') as f:\n            plydata = PlyData.read(f)\n        pts = np.vstack([plydata['vertex']['x'], plydata['vertex']['y'], plydata['vertex']['z']]).T\n        choice = np.random.choice(len(pts), self.npoints, replace=True)\n        point_set = pts[choice, :]\n\n        point_set = point_set - np.expand_dims(np.mean(point_set, axis=0), 0)  # center\n        dist = np.max(np.sqrt(np.sum(point_set ** 2, axis=1)), 0)\n        point_set = point_set / dist  # scale\n\n        if self.data_augmentation:\n            theta = np.random.uniform(0, np.pi * 2)\n            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])\n            point_set[:, [0, 2]] = point_set[:, [0, 2]].dot(rotation_matrix)  # random rotation\n            point_set += np.random.normal(0, 0.02, size=point_set.shape)  # random jitter\n\n        point_set = torch.from_numpy(point_set.astype(np.float32))\n        cls = torch.from_numpy(np.array([cls]).astype(np.int64))\n        return point_set, cls\n\n\n    def __len__(self):\n        return len(self.fns)\n\nif __name__ == '__main__':\n    dataset = sys.argv[1]\n    datapath = sys.argv[2]\n\n    if dataset == 'shapenet':\n        d = ShapeNetDataset(root = datapath, class_choice = ['Chair'])\n        print(len(d))\n        ps, seg = d[0]\n        print(ps.size(), ps.type(), 
seg.size(),seg.type())\n\n        d = ShapeNetDataset(root = datapath, classification = True)\n        print(len(d))\n        ps, cls = d[0]\n        print(ps.size(), ps.type(), cls.size(),cls.type())\n        # get_segmentation_classes(datapath)\n\n    if dataset == 'modelnet':\n        gen_modelnet_id(datapath)\n        d = ModelNetDataset(root=datapath)\n        print(len(d))\n        print(d[0])\n\n"
  },
  {
    "path": "pointnet_pyt/pointnet/model.py",
    "content": "from __future__ import print_function\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.utils.data\nfrom torch.autograd import Variable\nimport numpy as np\nimport torch.nn.functional as F\n\n\nclass STN3d(nn.Module):\n    def __init__(self):\n        super(STN3d, self).__init__()\n        self.conv1 = torch.nn.Conv1d(3, 64, 1)\n        self.conv2 = torch.nn.Conv1d(64, 128, 1)\n        self.conv3 = torch.nn.Conv1d(128, 1024, 1)\n        self.fc1 = nn.Linear(1024, 512)\n        self.fc2 = nn.Linear(512, 256)\n        self.fc3 = nn.Linear(256, 9)\n        self.relu = nn.ReLU()\n\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(128)\n        self.bn3 = nn.BatchNorm1d(1024)\n        self.bn4 = nn.BatchNorm1d(512)\n        self.bn5 = nn.BatchNorm1d(256)\n\n\n    def forward(self, x):\n        batchsize = x.size()[0]\n        x = F.relu(self.bn1(self.conv1(x)))\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = F.relu(self.bn3(self.conv3(x)))\n        x = torch.max(x, 2, keepdim=True)[0]\n        x = x.view(-1, 1024)\n\n        x = F.relu(self.bn4(self.fc1(x)))\n        x = F.relu(self.bn5(self.fc2(x)))\n        x = self.fc3(x)\n\n        iden = Variable(torch.from_numpy(np.array([1,0,0,0,1,0,0,0,1]).astype(np.float32))).view(1,9).repeat(batchsize,1)\n        if x.is_cuda:\n            iden = iden.cuda()\n        x = x + iden\n        x = x.view(-1, 3, 3)\n        return x\n\n\nclass STNkd(nn.Module):\n    def __init__(self, k=64):\n        super(STNkd, self).__init__()\n        self.conv1 = torch.nn.Conv1d(k, 64, 1)\n        self.conv2 = torch.nn.Conv1d(64, 128, 1)\n        self.conv3 = torch.nn.Conv1d(128, 1024, 1)\n        self.fc1 = nn.Linear(1024, 512)\n        self.fc2 = nn.Linear(512, 256)\n        self.fc3 = nn.Linear(256, k*k)\n        self.relu = nn.ReLU()\n\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(128)\n        self.bn3 = nn.BatchNorm1d(1024)\n        
self.bn4 = nn.BatchNorm1d(512)\n        self.bn5 = nn.BatchNorm1d(256)\n\n        self.k = k\n\n    def forward(self, x):\n        batchsize = x.size()[0]\n        x = F.relu(self.bn1(self.conv1(x)))\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = F.relu(self.bn3(self.conv3(x)))\n        x = torch.max(x, 2, keepdim=True)[0]\n        x = x.view(-1, 1024)\n\n        x = F.relu(self.bn4(self.fc1(x)))\n        x = F.relu(self.bn5(self.fc2(x)))\n        x = self.fc3(x)\n\n        iden = Variable(torch.from_numpy(np.eye(self.k).flatten().astype(np.float32))).view(1,self.k*self.k).repeat(batchsize,1)\n        if x.is_cuda:\n            iden = iden.cuda()\n        x = x + iden\n        x = x.view(-1, self.k, self.k)\n        return x\n\nclass PointNetfeat(nn.Module):\n    def __init__(self, global_feat = True, feature_transform = False):\n        super(PointNetfeat, self).__init__()\n        self.stn = STN3d()\n        self.conv1 = torch.nn.Conv1d(3, 64, 1)\n        self.conv2 = torch.nn.Conv1d(64, 128, 1)\n        self.conv3 = torch.nn.Conv1d(128, 1024, 1)\n        self.bn1 = nn.BatchNorm1d(64)\n        self.bn2 = nn.BatchNorm1d(128)\n        self.bn3 = nn.BatchNorm1d(1024)\n        self.global_feat = global_feat\n        self.feature_transform = feature_transform\n        if self.feature_transform:\n            self.fstn = STNkd(k=64)\n\n    def forward(self, x):\n        n_pts = x.size()[2]\n        trans = self.stn(x)\n        x = x.transpose(2, 1)\n        x = torch.bmm(x, trans)\n        x = x.transpose(2, 1)\n        x = F.relu(self.bn1(self.conv1(x)))\n\n        if self.feature_transform:\n            trans_feat = self.fstn(x)\n            x = x.transpose(2,1)\n            x = torch.bmm(x, trans_feat)\n            x = x.transpose(2,1)\n        else:\n            trans_feat = None\n\n        pointfeat = x\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = self.bn3(self.conv3(x))\n        x = torch.max(x, 2, keepdim=True)[0]\n        x = x.view(-1, 
1024)\n        if self.global_feat:\n            return x, trans, trans_feat\n        else:\n            x = x.view(-1, 1024, 1).repeat(1, 1, n_pts)\n            return torch.cat([x, pointfeat], 1), trans, trans_feat\n\nclass PointNetCls(nn.Module):\n    def __init__(self, k=2, feature_transform=False):\n        super(PointNetCls, self).__init__()\n        self.feature_transform = feature_transform\n        self.feat = PointNetfeat(global_feat=True, feature_transform=feature_transform)\n        self.fc1 = nn.Linear(1024, 512)\n        self.fc2 = nn.Linear(512, 256)\n        self.fc3 = nn.Linear(256, k)\n        self.dropout = nn.Dropout(p=0.3)\n        self.bn1 = nn.BatchNorm1d(512)\n        self.bn2 = nn.BatchNorm1d(256)\n        self.relu = nn.ReLU()\n\n    def forward(self, x):\n        x, trans, trans_feat = self.feat(x)\n        x = F.relu(self.bn1(self.fc1(x)))\n        x = F.relu(self.bn2(self.dropout(self.fc2(x))))\n        x = self.fc3(x)\n        return F.log_softmax(x, dim=1), trans, trans_feat\n\n\nclass PointNetDenseCls(nn.Module):\n    def __init__(self, k = 2, feature_transform=False):\n        super(PointNetDenseCls, self).__init__()\n        self.k = k\n        self.feature_transform=feature_transform\n        self.feat = PointNetfeat(global_feat=False, feature_transform=feature_transform)\n        self.conv1 = torch.nn.Conv1d(1088, 512, 1)\n        self.conv2 = torch.nn.Conv1d(512, 256, 1)\n        self.conv3 = torch.nn.Conv1d(256, 128, 1)\n        self.conv4 = torch.nn.Conv1d(128, self.k, 1)\n        self.bn1 = nn.BatchNorm1d(512)\n        self.bn2 = nn.BatchNorm1d(256)\n        self.bn3 = nn.BatchNorm1d(128)\n\n    def forward(self, x):\n        batchsize = x.size()[0]\n        n_pts = x.size()[2]\n        x, trans, trans_feat = self.feat(x)\n        x = F.relu(self.bn1(self.conv1(x)))\n        x = F.relu(self.bn2(self.conv2(x)))\n        x = F.relu(self.bn3(self.conv3(x)))\n        x = self.conv4(x)\n        x = x.transpose(2,1).contiguous()\n  
      x = F.log_softmax(x.view(-1,self.k), dim=-1)\n        x = x.view(batchsize, n_pts, self.k)\n        return x, trans, trans_feat\n\ndef feature_transform_regularizer(trans):\n    d = trans.size()[1]\n    batchsize = trans.size()[0]\n    I = torch.eye(d)[None, :, :]\n    if trans.is_cuda:\n        I = I.cuda()\n    loss = torch.mean(torch.norm(torch.bmm(trans, trans.transpose(2,1)) - I, dim=(1,2), p=2))\n    return loss\n\nif __name__ == '__main__':\n    sim_data = Variable(torch.rand(32,3,2500))\n    trans = STN3d()\n    out = trans(sim_data)\n    print('stn', out.size())\n    print('loss', feature_transform_regularizer(out))\n\n    sim_data_64d = Variable(torch.rand(32, 64, 2500))\n    trans = STNkd(k=64)\n    out = trans(sim_data_64d)\n    print('stn64d', out.size())\n    print('loss', feature_transform_regularizer(out))\n\n    pointfeat = PointNetfeat(global_feat=True)\n    out, _, _ = pointfeat(sim_data)\n    print('global feat', out.size())\n\n    pointfeat = PointNetfeat(global_feat=False)\n    out, _, _ = pointfeat(sim_data)\n    print('point feat', out.size())\n\n    cls = PointNetCls(k = 5)\n    out, _, _ = cls(sim_data)\n    print('class', out.size())\n\n    seg = PointNetDenseCls(k = 3)\n    out, _, _ = seg(sim_data)\n    print('seg', out.size())\n"
  },
  {
    "path": "pointnet_pyt/scripts/build.sh",
    "content": "SCRIPT=`realpath $0`\nSCRIPTPATH=`dirname $SCRIPT`\necho $SCRIPTPATH\n\ng++ -std=c++11 $SCRIPTPATH/../utils/render_balls_so.cpp -o $SCRIPTPATH/../utils/render_balls_so.so -shared -fPIC -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n"
  },
  {
    "path": "pointnet_pyt/scripts/download.sh",
    "content": "SCRIPT=`realpath $0`\nSCRIPTPATH=`dirname $SCRIPT`\n\ncd $SCRIPTPATH/..\nwget https://shapenet.cs.stanford.edu/ericyi/shapenetcore_partanno_segmentation_benchmark_v0.zip --no-check-certificate\nunzip shapenetcore_partanno_segmentation_benchmark_v0.zip\nrm shapenetcore_partanno_segmentation_benchmark_v0.zip\ncd -\n"
  },
  {
    "path": "pointnet_pyt/setup.py",
    "content": "# install using 'pip install -e .'\n\nfrom setuptools import setup\n\nsetup(name='pointnet',\n      packages=['pointnet'],\n      package_dir={'pointnet': 'pointnet'},\n      install_requires=['torch',\n                        'tqdm',\n                        'plyfile'],\n    version='0.0.1')\n"
  },
  {
    "path": "pointnet_pyt/utils/render_balls_so.cpp",
    "content": "#include <cstdio>\n#include <vector>\n#include <algorithm>\n#include <math.h>\nusing namespace std;\n\nstruct PointInfo{\n\tint x,y,z;\n\tfloat r,g,b;\n};\n\nextern \"C\"{\n\nvoid render_ball(int h,int w,unsigned char * show,int n,int * xyzs,float * c0,float * c1,float * c2,int r){\n\tr=max(r,1);\n\tvector<int> depth(h*w,-2100000000);\n\tvector<PointInfo> pattern;\n\tfor (int dx=-r;dx<=r;dx++)\n\t\tfor (int dy=-r;dy<=r;dy++)\n\t\t\tif (dx*dx+dy*dy<r*r){\n\t\t\t\tdouble dz=sqrt(double(r*r-dx*dx-dy*dy));\n\t\t\t\tPointInfo pinfo;\n\t\t\t\tpinfo.x=dx;\n\t\t\t\tpinfo.y=dy;\n\t\t\t\tpinfo.z=dz;\n\t\t\t\tpinfo.r=dz/r;\n\t\t\t\tpinfo.g=dz/r;\n\t\t\t\tpinfo.b=dz/r;\n\t\t\t\tpattern.push_back(pinfo);\n\t\t\t}\n\tdouble zmin=0,zmax=0;\n\tfor (int i=0;i<n;i++){\n\t\tif (i==0){\n\t\t\tzmin=xyzs[i*3+2]-r;\n\t\t\tzmax=xyzs[i*3+2]+r;\n\t\t}else{\n\t\t\tzmin=min(zmin,double(xyzs[i*3+2]-r));\n\t\t\tzmax=max(zmax,double(xyzs[i*3+2]+r));\n\t\t}\n\t}\n\tfor (int i=0;i<n;i++){\n\t\tint x=xyzs[i*3+0],y=xyzs[i*3+1],z=xyzs[i*3+2];\n\t\tfor (int j=0;j<int(pattern.size());j++){\n\t\t\tint x2=x+pattern[j].x;\n\t\t\tint y2=y+pattern[j].y;\n\t\t\tint z2=z+pattern[j].z;\n\t\t\tif (!(x2<0 || x2>=h || y2<0 || y2>=w) && depth[x2*w+y2]<z2){\n\t\t\t\tdepth[x2*w+y2]=z2;\n\t\t\t\tdouble intensity=min(1.0,(z2-zmin)/(zmax-zmin)*0.7+0.3);\n\t\t\t\tshow[(x2*w+y2)*3+0]=pattern[j].b*c2[i]*intensity;\n\t\t\t\tshow[(x2*w+y2)*3+1]=pattern[j].g*c0[i]*intensity;\n\t\t\t\tshow[(x2*w+y2)*3+2]=pattern[j].r*c1[i]*intensity;\n\t\t\t}\n\t\t}\n\t}\n}\n\n}//extern \"C\"\n"
  },
  {
    "path": "pointnet_pyt/utils/show3d_balls.py",
    "content": "import numpy as np\nimport ctypes as ct\nimport cv2\nimport sys\nshowsz = 800\nmousex, mousey = 0.5, 0.5\nzoom = 1.0\nchanged = True\n\ndef onmouse(*args):\n    global mousex, mousey, changed\n    y = args[1]\n    x = args[2]\n    mousex = x / float(showsz)\n    mousey = y / float(showsz)\n    changed = True\n\ncv2.namedWindow('show3d')\ncv2.moveWindow('show3d', 0, 0)\ncv2.setMouseCallback('show3d', onmouse)\n\ndll = np.ctypeslib.load_library('render_balls_so', '.')\n\ndef showpoints(xyz,c_gt=None, c_pred = None, waittime=0, \n    showrot=False, magnifyBlue=0, freezerot=False, background=(0,0,0), \n    normalizecolor=True, ballradius=10):\n    global showsz, mousex, mousey, zoom, changed\n    xyz=xyz-xyz.mean(axis=0)\n    radius=((xyz**2).sum(axis=-1)**0.5).max()\n    xyz/=(radius*2.2)/showsz\n    if c_gt is None:\n        c0 = np.zeros((len(xyz), ), dtype='float32') + 255\n        c1 = np.zeros((len(xyz), ), dtype='float32') + 255\n        c2 = np.zeros((len(xyz), ), dtype='float32') + 255\n    else:\n        c0 = c_gt[:, 0]\n        c1 = c_gt[:, 1]\n        c2 = c_gt[:, 2]\n\n\n    if normalizecolor:\n        c0 /= (c0.max() + 1e-14) / 255.0\n        c1 /= (c1.max() + 1e-14) / 255.0\n        c2 /= (c2.max() + 1e-14) / 255.0\n\n\n    c0 = np.require(c0, 'float32', 'C')\n    c1 = np.require(c1, 'float32', 'C')\n    c2 = np.require(c2, 'float32', 'C')\n\n    show = np.zeros((showsz, showsz, 3), dtype='uint8')\n    def render():\n        rotmat=np.eye(3)\n        if not freezerot:\n            xangle=(mousey-0.5)*np.pi*1.2\n        else:\n            xangle=0\n        rotmat = rotmat.dot(\n            np.array([\n                [1.0, 0.0, 0.0],\n                [0.0, np.cos(xangle), -np.sin(xangle)],\n                [0.0, np.sin(xangle), np.cos(xangle)],\n            ]))\n        if not freezerot:\n            yangle = (mousex - 0.5) * np.pi * 1.2\n        else:\n            yangle = 0\n        rotmat = rotmat.dot(\n            np.array([\n          
      [np.cos(yangle), 0.0, -np.sin(yangle)],\n                [0.0, 1.0, 0.0],\n                [np.sin(yangle), 0.0, np.cos(yangle)],\n            ]))\n        rotmat *= zoom\n        nxyz = xyz.dot(rotmat) + [showsz / 2, showsz / 2, 0]\n\n        ixyz = nxyz.astype('int32')\n        show[:] = background\n        dll.render_ball(\n            ct.c_int(show.shape[0]), ct.c_int(show.shape[1]),\n            show.ctypes.data_as(ct.c_void_p), ct.c_int(ixyz.shape[0]),\n            ixyz.ctypes.data_as(ct.c_void_p), c0.ctypes.data_as(ct.c_void_p),\n            c1.ctypes.data_as(ct.c_void_p), c2.ctypes.data_as(ct.c_void_p),\n            ct.c_int(ballradius))\n\n        if magnifyBlue > 0:\n            show[:, :, 0] = np.maximum(show[:, :, 0], np.roll(\n                show[:, :, 0], 1, axis=0))\n            if magnifyBlue >= 2:\n                show[:, :, 0] = np.maximum(show[:, :, 0],\n                                           np.roll(show[:, :, 0], -1, axis=0))\n            show[:, :, 0] = np.maximum(show[:, :, 0], np.roll(\n                show[:, :, 0], 1, axis=1))\n            if magnifyBlue >= 2:\n                show[:, :, 0] = np.maximum(show[:, :, 0],\n                                           np.roll(show[:, :, 0], -1, axis=1))\n        if showrot:\n            # cv2.cv was removed in OpenCV 3; pass a BGR color tuple directly (red text)\n            cv2.putText(show, 'xangle %d' % (int(xangle / np.pi * 180)),\n                        (30, showsz - 30), 0, 0.5, (0, 0, 255))\n            cv2.putText(show, 'yangle %d' % (int(yangle / np.pi * 180)),\n                        (30, showsz - 50), 0, 0.5, (0, 0, 255))\n            cv2.putText(show, 'zoom %d%%' % (int(zoom * 100)), (30, showsz - 70), 0,\n                        0.5, (0, 0, 255))\n    changed = True\n    while True:\n        if changed:\n            render()\n            changed = False\n        cv2.imshow('show3d', show)\n        if waittime == 0:\n            cmd = cv2.waitKey(10) % 256\n        else:\n            cmd = cv2.waitKey(waittime) % 256\n        if cmd == ord('q'):\n            break\n        elif cmd == ord('Q'):\n            sys.exit(0)\n\n        if cmd == ord('t') or cmd == ord('p'):\n            if cmd == ord('t'):\n                if c_gt is None:\n                    c0 = np.zeros((len(xyz), ), dtype='float32') + 255\n                    c1 = np.zeros((len(xyz), ), dtype='float32') + 255\n                    c2 = np.zeros((len(xyz), ), dtype='float32') + 255\n                else:\n                    c0 = c_gt[:, 0]\n                    c1 = c_gt[:, 1]\n                    c2 = c_gt[:, 2]\n            else:\n                if c_pred is None:\n                    c0 = np.zeros((len(xyz), ), dtype='float32') + 255\n                    c1 = np.zeros((len(xyz), ), dtype='float32') + 255\n                    c2 = np.zeros((len(xyz), ), dtype='float32') + 255\n                else:\n                    c0 = c_pred[:, 0]\n                    c1 = c_pred[:, 1]\n                    c2 = c_pred[:, 2]\n            if normalizecolor:\n                c0 /= (c0.max() + 1e-14) / 255.0\n                c1 /= (c1.max() + 1e-14) / 255.0\n                c2 /= (c2.max() + 1e-14) / 255.0\n            c0 = np.require(c0, 'float32', 'C')\n            c1 = np.require(c1, 'float32', 'C')\n            c2 = np.require(c2, 'float32', 'C')\n            changed = True\n\n        if cmd == ord('n'):\n            zoom *= 1.1\n            changed = True\n        elif cmd == ord('m'):\n            zoom /= 1.1\n            changed = True\n        elif cmd == ord('r'):\n            zoom = 1.0\n            changed = True\n        elif cmd == ord('s'):\n            cv2.imwrite('show3d.png', show)\n        if waittime != 0:\n            break\n    return cmd\n\nif __name__ == '__main__':\n    np.random.seed(100)\n    showpoints(np.random.randn(2500, 3))\n"
  },
  {
    "path": "pointnet_pyt/utils/show_cls.py",
    "content": "from __future__ import print_function\nimport argparse\nimport torch\nimport torch.nn.parallel\nimport torch.utils.data\nfrom torch.autograd import Variable\nfrom pointnet.dataset import ShapeNetDataset\nfrom pointnet.model import PointNetCls\nimport torch.nn.functional as F\n\n\n#showpoints(np.random.randn(2500,3), c1 = np.random.uniform(0,1,size = (2500)))\n\nparser = argparse.ArgumentParser()\n\nparser.add_argument('--model', type=str, default = '',  help='model path')\nparser.add_argument('--num_points', type=int, default=2500, help='input batch size')\n\n\nopt = parser.parse_args()\nprint(opt)\n\ntest_dataset = ShapeNetDataset(\n    root='shapenetcore_partanno_segmentation_benchmark_v0',\n    split='test',\n    classification=True,\n    npoints=opt.num_points,\n    data_augmentation=False)\n\ntestdataloader = torch.utils.data.DataLoader(\n    test_dataset, batch_size=32, shuffle=True)\n\nclassifier = PointNetCls(k=len(test_dataset.classes))\nclassifier.cuda()\nclassifier.load_state_dict(torch.load(opt.model))\nclassifier.eval()\n\n\nfor i, data in enumerate(testdataloader, 0):\n    points, target = data\n    points, target = Variable(points), Variable(target[:, 0])\n    points = points.transpose(2, 1)\n    points, target = points.cuda(), target.cuda()\n    pred, _, _ = classifier(points)\n    loss = F.nll_loss(pred, target)\n\n    pred_choice = pred.data.max(1)[1]\n    correct = pred_choice.eq(target.data).cpu().sum()\n    print('i:%d  loss: %f accuracy: %f' % (i, loss.data.item(), correct / float(32)))\n"
  },
  {
    "path": "pointnet_pyt/utils/show_seg.py",
    "content": "from __future__ import print_function\nfrom show3d_balls import showpoints\nimport argparse\nimport numpy as np\nimport torch\nimport torch.nn.parallel\nimport torch.utils.data\nfrom torch.autograd import Variable\nfrom pointnet.dataset import ShapeNetDataset\nfrom pointnet.model import PointNetDenseCls\nimport matplotlib.pyplot as plt\n\n\n#showpoints(np.random.randn(2500,3), c1 = np.random.uniform(0,1,size = (2500)))\n\nparser = argparse.ArgumentParser()\n\nparser.add_argument('--model', type=str, default='', help='model path')\nparser.add_argument('--idx', type=int, default=0, help='model index')\nparser.add_argument('--dataset', type=str, default='', help='dataset path')\nparser.add_argument('--class_choice', type=str, default='', help='class choice')\n\nopt = parser.parse_args()\nprint(opt)\n\nd = ShapeNetDataset(\n    root=opt.dataset,\n    class_choice=[opt.class_choice],\n    split='test',\n    data_augmentation=False)\n\nidx = opt.idx\n\nprint(\"model %d/%d\" % (idx, len(d)))\npoint, seg = d[idx]\nprint(point.size(), seg.size())\npoint_np = point.numpy()\n\ncmap = plt.cm.get_cmap(\"hsv\", 10)\ncmap = np.array([cmap(i) for i in range(10)])[:, :3]\ngt = cmap[seg.numpy() - 1, :]\n\nstate_dict = torch.load(opt.model)\nclassifier = PointNetDenseCls(k= state_dict['conv4.weight'].size()[0])\nclassifier.load_state_dict(state_dict)\nclassifier.eval()\n\npoint = point.transpose(1, 0).contiguous()\n\npoint = Variable(point.view(1, point.size()[0], point.size()[1]))\npred, _, _ = classifier(point)\npred_choice = pred.data.max(2)[1]\nprint(pred_choice)\n\n#print(pred_choice.size())\npred_color = cmap[pred_choice.numpy()[0], :]\n\n#print(pred_color.shape)\nshowpoints(point_np, gt, pred_color)\n"
  },
  {
    "path": "pointnet_pyt/utils/train_classification.py",
    "content": "from __future__ import print_function\nimport argparse\nimport os\nimport random\nimport torch\nimport torch.nn.parallel\nimport torch.optim as optim\nimport torch.utils.data\nfrom pointnet.dataset import ShapeNetDataset, ModelNetDataset\nfrom pointnet.model import PointNetCls, feature_transform_regularizer\nimport torch.nn.functional as F\nfrom tqdm import tqdm\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n    '--batchSize', type=int, default=32, help='input batch size')\nparser.add_argument(\n    '--num_points', type=int, default=2500, help='input batch size')\nparser.add_argument(\n    '--workers', type=int, help='number of data loading workers', default=4)\nparser.add_argument(\n    '--nepoch', type=int, default=250, help='number of epochs to train for')\nparser.add_argument('--outf', type=str, default='cls', help='output folder')\nparser.add_argument('--model', type=str, default='', help='model path')\nparser.add_argument('--dataset', type=str, required=True, help=\"dataset path\")\nparser.add_argument('--dataset_type', type=str, default='shapenet', help=\"dataset type shapenet|modelnet40\")\nparser.add_argument('--feature_transform', action='store_true', help=\"use feature transform\")\n\nopt = parser.parse_args()\nprint(opt)\n\nblue = lambda x: '\\033[94m' + x + '\\033[0m'\n\nopt.manualSeed = random.randint(1, 10000)  # fix seed\nprint(\"Random Seed: \", opt.manualSeed)\nrandom.seed(opt.manualSeed)\ntorch.manual_seed(opt.manualSeed)\n\nif opt.dataset_type == 'shapenet':\n    dataset = ShapeNetDataset(\n        root=opt.dataset,\n        classification=True,\n        npoints=opt.num_points)\n\n    test_dataset = ShapeNetDataset(\n        root=opt.dataset,\n        classification=True,\n        split='test',\n        npoints=opt.num_points,\n        data_augmentation=False)\nelif opt.dataset_type == 'modelnet40':\n    dataset = ModelNetDataset(\n        root=opt.dataset,\n        npoints=opt.num_points,\n        
split='trainval')\n\n    test_dataset = ModelNetDataset(\n        root=opt.dataset,\n        split='test',\n        npoints=opt.num_points,\n        data_augmentation=False)\nelse:\n    exit('wrong dataset type')\n\n\ndataloader = torch.utils.data.DataLoader(\n    dataset,\n    batch_size=opt.batchSize,\n    shuffle=True,\n    num_workers=int(opt.workers))\n\ntestdataloader = torch.utils.data.DataLoader(\n        test_dataset,\n        batch_size=opt.batchSize,\n        shuffle=True,\n        num_workers=int(opt.workers))\n\nprint(len(dataset), len(test_dataset))\nnum_classes = len(dataset.classes)\nprint('classes', num_classes)\n\ntry:\n    os.makedirs(opt.outf)\nexcept OSError:\n    pass\n\nclassifier = PointNetCls(k=num_classes, feature_transform=opt.feature_transform)\n\nif opt.model != '':\n    classifier.load_state_dict(torch.load(opt.model))\n\n\noptimizer = optim.Adam(classifier.parameters(), lr=0.001, betas=(0.9, 0.999))\nscheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)\nclassifier.cuda()\n\nnum_batch = len(dataset) / opt.batchSize\n\nfor epoch in range(opt.nepoch):\n    scheduler.step()\n    for i, data in enumerate(dataloader, 0):\n        points, target = data\n        target = target[:, 0]\n        points = points.transpose(2, 1)\n        points, target = points.cuda(), target.cuda()\n        optimizer.zero_grad()\n        classifier = classifier.train()\n        pred, trans, trans_feat = classifier(points)\n        loss = F.nll_loss(pred, target)\n        if opt.feature_transform:\n            loss += feature_transform_regularizer(trans_feat) * 0.001\n        loss.backward()\n        optimizer.step()\n        pred_choice = pred.data.max(1)[1]\n        correct = pred_choice.eq(target.data).cpu().sum()\n        print('[%d: %d/%d] train loss: %f accuracy: %f' % (epoch, i, num_batch, loss.item(), correct.item() / float(opt.batchSize)))\n\n        if i % 10 == 0:\n            j, data = next(enumerate(testdataloader, 0))\n           
 points, target = data\n            target = target[:, 0]\n            points = points.transpose(2, 1)\n            points, target = points.cuda(), target.cuda()\n            classifier = classifier.eval()\n            pred, _, _ = classifier(points)\n            loss = F.nll_loss(pred, target)\n            pred_choice = pred.data.max(1)[1]\n            correct = pred_choice.eq(target.data).cpu().sum()\n            print('[%d: %d/%d] %s loss: %f accuracy: %f' % (epoch, i, num_batch, blue('test'), loss.item(), correct.item()/float(opt.batchSize)))\n\n    torch.save(classifier.state_dict(), '%s/cls_model_%d.pth' % (opt.outf, epoch))\n\ntotal_correct = 0\ntotal_testset = 0\nfor i,data in tqdm(enumerate(testdataloader, 0)):\n    points, target = data\n    target = target[:, 0]\n    points = points.transpose(2, 1)\n    points, target = points.cuda(), target.cuda()\n    classifier = classifier.eval()\n    pred, _, _ = classifier(points)\n    pred_choice = pred.data.max(1)[1]\n    correct = pred_choice.eq(target.data).cpu().sum()\n    total_correct += correct.item()\n    total_testset += points.size()[0]\n\nprint(\"final accuracy {}\".format(total_correct / float(total_testset)))"
  },
  {
    "path": "pointnet_pyt/utils/train_segmentation.py",
    "content": "from __future__ import print_function\nimport argparse\nimport os\nimport random\nimport torch\nimport torch.nn.parallel\nimport torch.optim as optim\nimport torch.utils.data\nfrom pointnet.dataset import ShapeNetDataset\nfrom pointnet.model import PointNetDenseCls, feature_transform_regularizer\nimport torch.nn.functional as F\nfrom tqdm import tqdm\nimport numpy as np\n\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n    '--batchSize', type=int, default=32, help='input batch size')\nparser.add_argument(\n    '--workers', type=int, help='number of data loading workers', default=4)\nparser.add_argument(\n    '--nepoch', type=int, default=25, help='number of epochs to train for')\nparser.add_argument('--outf', type=str, default='seg', help='output folder')\nparser.add_argument('--model', type=str, default='', help='model path')\nparser.add_argument('--dataset', type=str, required=True, help=\"dataset path\")\nparser.add_argument('--class_choice', type=str, default='Chair', help=\"class_choice\")\nparser.add_argument('--feature_transform', action='store_true', help=\"use feature transform\")\n\nopt = parser.parse_args()\nprint(opt)\n\nopt.manualSeed = random.randint(1, 10000)  # fix seed\nprint(\"Random Seed: \", opt.manualSeed)\nrandom.seed(opt.manualSeed)\ntorch.manual_seed(opt.manualSeed)\n\ndataset = ShapeNetDataset(\n    root=opt.dataset,\n    classification=False,\n    class_choice=[opt.class_choice])\ndataloader = torch.utils.data.DataLoader(\n    dataset,\n    batch_size=opt.batchSize,\n    shuffle=True,\n    num_workers=int(opt.workers))\n\ntest_dataset = ShapeNetDataset(\n    root=opt.dataset,\n    classification=False,\n    class_choice=[opt.class_choice],\n    split='test',\n    data_augmentation=False)\ntestdataloader = torch.utils.data.DataLoader(\n    test_dataset,\n    batch_size=opt.batchSize,\n    shuffle=True,\n    num_workers=int(opt.workers))\n\nprint(len(dataset), len(test_dataset))\nnum_classes = 
dataset.num_seg_classes\nprint('classes', num_classes)\ntry:\n    os.makedirs(opt.outf)\nexcept OSError:\n    pass\n\nblue = lambda x: '\\033[94m' + x + '\\033[0m'\n\nclassifier = PointNetDenseCls(k=num_classes, feature_transform=opt.feature_transform)\n\nif opt.model != '':\n    classifier.load_state_dict(torch.load(opt.model))\n\noptimizer = optim.Adam(classifier.parameters(), lr=0.001, betas=(0.9, 0.999))\nscheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)\nclassifier.cuda()\n\nnum_batch = len(dataset) / opt.batchSize\n\nfor epoch in range(opt.nepoch):\n    scheduler.step()\n    for i, data in enumerate(dataloader, 0):\n        points, target = data\n        points = points.transpose(2, 1)\n        points, target = points.cuda(), target.cuda()\n        optimizer.zero_grad()\n        classifier = classifier.train()\n        pred, trans, trans_feat = classifier(points)\n        pred = pred.view(-1, num_classes)\n        target = target.view(-1, 1)[:, 0] - 1\n        #print(pred.size(), target.size())\n        loss = F.nll_loss(pred, target)\n        if opt.feature_transform:\n            loss += feature_transform_regularizer(trans_feat) * 0.001\n        loss.backward()\n        optimizer.step()\n        pred_choice = pred.data.max(1)[1]\n        correct = pred_choice.eq(target.data).cpu().sum()\n        print('[%d: %d/%d] train loss: %f accuracy: %f' % (epoch, i, num_batch, loss.item(), correct.item()/float(opt.batchSize * 2500)))\n\n        if i % 10 == 0:\n            j, data = next(enumerate(testdataloader, 0))\n            points, target = data\n            points = points.transpose(2, 1)\n            points, target = points.cuda(), target.cuda()\n            classifier = classifier.eval()\n            pred, _, _ = classifier(points)\n            pred = pred.view(-1, num_classes)\n            target = target.view(-1, 1)[:, 0] - 1\n            loss = F.nll_loss(pred, target)\n            pred_choice = pred.data.max(1)[1]\n            
correct = pred_choice.eq(target.data).cpu().sum()\n            print('[%d: %d/%d] %s loss: %f accuracy: %f' % (epoch, i, num_batch, blue('test'), loss.item(), correct.item()/float(opt.batchSize * 2500)))\n\n    torch.save(classifier.state_dict(), '%s/seg_model_%s_%d.pth' % (opt.outf, opt.class_choice, epoch))\n\n## benchmark mIOU\nshape_ious = []\nfor i,data in tqdm(enumerate(testdataloader, 0)):\n    points, target = data\n    points = points.transpose(2, 1)\n    points, target = points.cuda(), target.cuda()\n    classifier = classifier.eval()\n    pred, _, _ = classifier(points)\n    pred_choice = pred.data.max(2)[1]\n\n    pred_np = pred_choice.cpu().data.numpy()\n    target_np = target.cpu().data.numpy() - 1\n\n    for shape_idx in range(target_np.shape[0]):\n        parts = range(num_classes)#np.unique(target_np[shape_idx])\n        part_ious = []\n        for part in parts:\n            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))\n            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))\n            if U == 0:\n                iou = 1 #If the union of groundtruth and prediction points is empty, then count part IoU as 1\n            else:\n                iou = I / float(U)\n            part_ious.append(iou)\n        shape_ious.append(np.mean(part_ious))\n\nprint(\"mIOU for class {}: {}\".format(opt.class_choice, np.mean(shape_ious)))"
  },
  {
    "path": "requirements.txt",
    "content": "git+https://github.com/imankgoyal/etw_pytorch_utils.git@v1.1.1#egg=etw_pytorch_utils\nenum34\nfuture\nh5py==2.10.0\nprogressbar2==3.50.0\ntensorboardX==2.0\n-f https://download.pytorch.org/whl/torch_stable.html\ntorch==1.4.0+cu100\n-f https://download.pytorch.org/whl/torch_stable.html\ntorchvision==0.5.0+cu100\nyacs==0.1.6\nopen3d==0.13.0\n"
  },
  {
    "path": "rs_cnn/.gitignore",
    "content": "cls/*.pth\nseg/*.pth\n"
  },
  {
    "path": "rs_cnn/CMakeLists.txt",
    "content": "project(PointNet2)\ncmake_minimum_required(VERSION 2.8)\n\nfind_package(CUDA REQUIRED)\n\ninclude_directories(\"${CMAKE_CURRENT_SOURCE_DIR}/utils/cinclude\")\ncuda_include_directories(\"${CMAKE_CURRENT_SOURCE_DIR}/utils/cinclude\")\nfile(GLOB cuda_kernels_src \"${CMAKE_CURRENT_SOURCE_DIR}/utils/csrc/*.cu\")\ncuda_compile(cuda_kernels SHARED ${cuda_kernels_src} OPTIONS -O3)\n\nset(BUILD_CMD python \"${CMAKE_CURRENT_SOURCE_DIR}/utils/build_ffi.py\")\nfile(GLOB wrapper_headers \"${CMAKE_CURRENT_SOURCE_DIR}/utils/cinclude/*wrapper.h\")\nfile(GLOB wrapper_sources \"${CMAKE_CURRENT_SOURCE_DIR}/utils/csrs/*.c\")\nadd_custom_command(OUTPUT \"${CMAKE_CURRENT_SOURCE_DIR}/utils/_ext/pointnet2/_pointnet2.so\"\n\t\t   WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/utils\n\t\t   COMMAND ${BUILD_CMD} --build --objs ${cuda_kernels}\n\t\t   DEPENDS ${cuda_kernels}\n\t\t   DEPENDS ${wrapper_headers}\n\t\t   DEPENDS ${wrapper_sources}\n\t\t   VERBATIM)\n\nadd_custom_target(pointnet2_ext ALL\n\t\t  DEPENDS \"${CMAKE_CURRENT_SOURCE_DIR}/utils/_ext/pointnet2/_pointnet2.so\")\n\n"
  },
  {
    "path": "rs_cnn/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Yongcheng Liu\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "rs_cnn/README.md",
    "content": "Relation-Shape Convolutional Neural Network for Point Cloud Analysis\n===\nThis repository contains the author's implementation in Pytorch for the paper:\n\n__Relation-Shape Convolutional Neural Network for Point Cloud Analysis__ [[arXiv](https://arxiv.org/abs/1904.07601)] [[CVF](http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Relation-Shape_Convolutional_Neural_Network_for_Point_Cloud_Analysis_CVPR_2019_paper.pdf)]\n<br>\n[Yongcheng Liu](https://yochengliu.github.io/), [Bin Fan](http://www.nlpr.ia.ac.cn/fanbin/), [Shiming Xiang](https://scholar.google.com/citations?user=0ggsACEAAAAJ&hl=zh-CN) and [Chunhong Pan](http://people.ucas.ac.cn/~0005314)\n<br>\n[__CVPR 2019 Oral & Best paper finalist__](http://cvpr2019.thecvf.com/) &nbsp;&nbsp;&nbsp; __Project Page__: [https://yochengliu.github.io/Relation-Shape-CNN/](https://yochengliu.github.io/Relation-Shape-CNN/)\n\n## Citation\n\nIf our paper is helpful for your research, please consider citing:   \n```BibTex\n        @inproceedings{liu2019rscnn,   \n            author = {Yongcheng Liu and    \n                            Bin Fan and    \n                      Shiming Xiang and   \n                           Chunhong Pan},   \n            title = {Relation-Shape Convolutional Neural Network for Point Cloud Analysis},   \n            booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},    \n            pages = {8895--8904},  \n            year = {2019}   \n        }   \n```\n## Usage: Preparation\n\n### Requirement\n\n- Ubuntu 14.04\n- Python 3 (recommend Anaconda3)\n- Pytorch 0.3.\\*/0.4.\\*\n- CMake > 2.8\n- CUDA 8.0 + cuDNN 5.1\n\n### Building Kernel\n\n    git clone https://github.com/Yochengliu/Relation-Shape-CNN.git \n    cd Relation-Shape-CNN\n\n- mkdir build && cd build\n- cmake .. && make\n\n### Dataset\n__Shape Classification__\n\nDownload and unzip [ModelNet40](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip) (415M). 
Replace `$data_root$` in `cfgs/config_*_cls.yaml` with the dataset parent path.\n\n__ShapeNet Part Segmentation__\n\nDownload and unzip [ShapeNet Part](https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip) (674M). Replace `$data_root$` in `cfgs/config_*_partseg.yaml` with the dataset path.\n\n## Usage: Training\n### Shape Classification\n\n    sh train_cls.sh\n\nYou can modify `relation_prior` in `cfgs/config_*_cls.yaml`. We have trained a Single-Scale-Neighborhood classification model in the `cls` folder, whose accuracy is 92.38%.\n\n### Shape Part Segmentation\n\n    sh train_partseg.sh\n\nWe have trained a Multi-Scale-Neighborhood part segmentation model in the `seg` folder, whose class mIoU and instance mIoU are 84.18% and 85.81%, respectively.\n\n## Usage: Evaluation\n### Shape Classification\n\n    Voting script: voting_evaluate_cls.py\n\nYou can use our model `cls/model_cls_ssn_iter_16218_acc_0.923825.pth` as the checkpoint in `config_ssn_cls.yaml`; after voting, you should obtain an accuracy of 92.71%.\n\n### Shape Part Segmentation\n\n    Voting script: voting_evaluate_partseg.py\n\nYou can use our model `seg/model_seg_msn_iter_57585_ins_0.858054_cls_0.841787.pth` as the checkpoint in `config_msn_partseg.yaml`.\n\n## License\n\nThe code is released under the MIT License (see the LICENSE file for details).\n\n## Acknowledgement\n\nThe code borrows heavily from [Pointnet2_PyTorch](https://github.com/erikwijmans/Pointnet2_PyTorch).\n\n## Contact\n\nIf you have ideas or questions about our research, please contact <yongcheng.liu@nlpr.ia.ac.cn>.\n"
  },
  {
    "path": "rs_cnn/cfgs/config_msn_partseg.yaml",
    "content": "common:\n    workers: 4\n\n    num_points: 2048\n    num_classes: 50\n    batch_size: 28\n    \n    base_lr: 0.001\n    lr_clip: 0.00001\n    lr_decay: 0.5\n    decay_step: 21\n    epochs: 200\n\n    weight_decay: 0\n    bn_momentum: 0.9\n    bnm_clip: 0.01\n    bn_decay: 0.5\n    \n    evaluate: 1           # validation in training process\n    val_freq_epoch: 0.7   # frequency in epoch for validation, can be decimal\n    print_freq_iter: 20   # frequency in iteration for printing infomation\n    \n    input_channels: 0     # feature channels except (x, y, z)\n    \n    # h_ij: 0  for 3D Euclidean distance (3D Ed),    channels = 1\n    #       1  for (3D Ed, x_i, x_j, x_j - x_i),     channels = 10\n    #       2  for (2D Ed, x'_i, x'_j, x'_j - x'_i), channels = 10,  x' indicates 2D coordinates\n    relation_prior: 1\n    \n    checkpoint: ''        # the model to start from\n    save_path: seg\n    data_root: /u/agoyal/storage/view_point_cloud/Pytorch/data/shapenetcore_partanno_segmentation_benchmark_v0_normal\n"
  },
  {
    "path": "rs_cnn/cfgs/config_ssn_cls.yaml",
    "content": "common:\n    workers: 4\n\n    num_points: 1024\n    num_classes: 40\n    batch_size: 32\n    \n    base_lr: 0.001\n    lr_clip: 0.00001\n    lr_decay: 0.7\n    decay_step: 21\n    epochs: 200\n\n    weight_decay: 0\n    bn_momentum: 0.9\n    bnm_clip: 0.01\n    bn_decay: 0.5\n    \n    evaluate: 1\n    val_freq_epoch: 0.5   # frequency in epoch for validation, can be decimal\n    print_freq_iter: 20   # frequency in iteration for printing infomation\n    \n    input_channels: 0     # feature channels except (x, y, z)\n    \n    # h_ij: 0  for 3D Euclidean distance (3D Ed),    channels = 1\n    #       1  for (3D Ed, x_i, x_j, x_j - x_i),     channels = 10\n    #       2  for (2D Ed, x'_i, x'_j, x'_j - x'_i), channels = 10,  x' indicates 2D coordinates\n    relation_prior: 1\n    \n    checkpoint: ''        # the model to start from\n    save_path: cls\n    data_root: /u/agoyal/storage/view_point_cloud/data/\n"
  },
  {
    "path": "rs_cnn/data/ModelNet40Loader.py",
    "content": "import torch\nimport torch.utils.data as data\nimport numpy as np\nimport os, sys, h5py\n\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\n\ndef _get_data_files(list_filename):\n    with open(list_filename) as f:\n        return [line.rstrip()[5:] for line in f]\n\ndef _load_data_file(name):\n    f = h5py.File(name, 'r')\n    data = f['data'][:]\n    label = f['label'][:]\n    return data, label\n    \nclass ModelNet40Cls(data.Dataset):\n    def __init__(self, num_points, root, data_file, transforms=None, train=True):\n        super().__init__()\n\n        self.transforms = transforms\n\n        root = os.path.abspath(root)\n        self.folder = \"modelnet40_ply_hdf5_2048\"\n        self.data_dir = os.path.join(root, self.folder)\n\n        self.train, self.num_points = train, num_points\n        self.files = _get_data_files(os.path.join(self.data_dir, data_file))\n        # if self.train:\n            # self.files =  _get_data_files( \\\n                # os.path.join(self.data_dir, 'train_files.txt'))\n        # else:\n            # self.files =  _get_data_files( \\\n                # os.path.join(self.data_dir, 'test_files.txt'))\n\n        point_list, label_list = [], []\n        for f in self.files:\n            points, labels = _load_data_file(os.path.join(root, f))\n            point_list.append(points)\n            label_list.append(labels)\n\n        self.points = np.concatenate(point_list, 0)\n        self.labels = np.concatenate(label_list, 0)\n\n    def __getitem__(self, idx):\n        pt_idxs = np.arange(0, self.points.shape[1])   # 2048\n        if self.train:\n            np.random.shuffle(pt_idxs)\n        \n        current_points = self.points[idx, pt_idxs].copy()\n        label = torch.from_numpy(self.labels[idx]).type(torch.LongTensor)\n        \n        if self.transforms is not None:\n            current_points = self.transforms(current_points)\n        \n        return current_points, 
label\n\n    def __len__(self):\n        return self.points.shape[0]\n\nif __name__ == \"__main__\":\n    from torchvision import transforms\n    import data_utils as d_utils\n\n    transforms = transforms.Compose([\n        d_utils.PointcloudToTensor(),\n        d_utils.PointcloudRotate(axis=np.array([1,0,0])),\n        d_utils.PointcloudScale(),\n        d_utils.PointcloudTranslate(),\n        d_utils.PointcloudJitter()\n    ])\n    dset = ModelNet40Cls(16, \"./\", train=True, transforms=transforms)\n    print(dset[0][0])\n    print(dset[0][1])\n    print(len(dset))\n    dloader = torch.utils.data.DataLoader(dset, batch_size=32, shuffle=True)\n"
  },
  {
    "path": "rs_cnn/data/ShapeNetPartLoader.py",
    "content": "import os\nimport os.path\nimport torch\nimport json\nimport pickle\nimport numpy as np\nimport sys\nimport torchvision.transforms as transforms\nfrom progressbar import ProgressBar\nimport pdb\n\ndef pc_normalize(pc):\n    l = pc.shape[0]\n    centroid = np.mean(pc, axis=0)\n    pc = pc - centroid\n    m = np.max(np.sqrt(np.sum(pc**2, axis=1)))\n    pc = pc / m\n    return pc\n\nclass ShapeNetPart():\n    def __init__(self, root, num_points = 2048, split='train', normalize=True, transforms = None, all_points=False):\n        self.transforms = transforms\n        self.num_points = num_points\n        self.root = root\n        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')\n        self.normalize = normalize\n        self.all_points = all_points\n\n        self.cat = {}\n        with open(self.catfile, 'r') as f:\n            for line in f:\n                ls = line.strip().split()\n                self.cat[ls[0]] = ls[1]\n        self.cat = {k:v for k,v in self.cat.items()}\n            \n        self.meta = {}\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_train_file_list.json'), 'r') as f:\n            train_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_val_file_list.json'), 'r') as f:\n            val_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        with open(os.path.join(self.root, 'train_test_split', 'shuffled_test_file_list.json'), 'r') as f:\n            test_ids = set([str(d.split('/')[2]) for d in json.load(f)])\n        for item in self.cat:\n            self.meta[item] = []\n            dir_point = os.path.join(self.root, self.cat[item])\n            fns = sorted(os.listdir(dir_point))\n            if split=='trainval':\n                fns = [fn for fn in fns if ((fn[0:-4] in train_ids) or (fn[0:-4] in val_ids))]\n            elif split=='train':\n                fns = [fn for fn in fns if 
fn[0:-4] in train_ids]\n            elif split=='val':\n                fns = [fn for fn in fns if fn[0:-4] in val_ids]\n            elif split=='test':\n                fns = [fn for fn in fns if fn[0:-4] in test_ids]\n            else:\n                print('Unknown split: %s. Exiting..'%(split))\n                exit(-1)\n                \n            for fn in fns:\n                token = (os.path.splitext(os.path.basename(fn))[0]) \n                self.meta[item].append(os.path.join(dir_point, token + '.txt'))\n        \n        self.datapath = []\n        for item in self.cat:\n            for fn in self.meta[item]:\n                self.datapath.append((item, fn))\n         \n        self.classes = dict(zip(self.cat, range(len(self.cat))))  \n        # Mapping from category ('Chair') to a list of int [10,11,12,13] as segmentation labels\n        self.seg_classes = {'Earphone': [16, 17, 18], 'Motorbike': [30, 31, 32, 33, 34, 35], 'Rocket': [41, 42, 43], 'Car': [8, 9, 10, 11], 'Laptop': [28, 29], 'Cap': [6, 7], 'Skateboard': [44, 45, 46], 'Mug': [36, 37], 'Guitar': [19, 20, 21], 'Bag': [4, 5], 'Lamp': [24, 25, 26, 27], 'Table': [47, 48, 49], 'Airplane': [0, 1, 2, 3], 'Pistol': [38, 39, 40], 'Chair': [12, 13, 14, 15], 'Knife': [22, 23]}\n        \n        self.cache = {}\n        # self.cache_size = self.__len__()\n        preload_file = f\"{self.root}/{split}_{self.normalize}_preload.pkl\"\n        if os.path.exists(preload_file):\n            print(f\"Preloading all data from {preload_file}\")\n            with open(preload_file, 'rb') as file:\n                self.cache = pickle.load(file)\n        else:\n            print(\"Preloading data by reading individual files.\")\n            self.preload()\n            with open(preload_file, 'wb') as file:\n                print(f\"Saving pre-loaded data at {preload_file}\")\n                pickle.dump(self.cache, file)\n\n\n    def __getitem__(self, index):\n        point_set, normal, seg, cls = 
self.cache[index]\n        if self.all_points:\n            pass\n        else:\n            choice = np.random.choice(len(seg), self.num_points, replace=True)\n            #resample\n            point_set = point_set[choice, :]\n            seg = seg[choice]\n            normal = normal[choice, :]\n        if self.transforms is not None:\n            point_set = self.transforms(point_set)\n\n        return point_set, torch.from_numpy(normal), torch.from_numpy(seg), torch.from_numpy(cls)\n        \n    def __len__(self):\n        return len(self.datapath)\n\n    def preload(self):\n        bar = ProgressBar(max_value=self.__len__())\n        for index in range(self.__len__()):\n            fn = self.datapath[index]\n            cat = self.datapath[index][0]\n            cls = self.classes[cat]\n            cls = np.array([cls]).astype(np.int64)\n            data = np.loadtxt(fn[1]).astype(np.float32)\n            point_set = data[:,0:3]\n            # https://github.com/charlesq34/pointnet2/blob/master/part_seg/part_dataset_all_normal.py#L95\n            normal = data[:, 3:6]\n            if self.normalize:\n                point_set = pc_normalize(point_set)\n            seg = data[:,-1].astype(np.int64)\n            self.cache[index] = (point_set, normal, seg, cls)\n            bar.update(index)\n"
  },
  {
    "path": "rs_cnn/data/__init__.py",
    "content": "from .ModelNet40Loader import ModelNet40Cls\nfrom .ShapeNetPartLoader import ShapeNetPart"
  },
  {
    "path": "rs_cnn/data/data_utils.py",
    "content": "import torch\nimport numpy as np\n\nclass PointcloudToTensor(object):\n    def __call__(self, points):\n        return torch.from_numpy(points).float()\n\ndef angle_axis(angle: float, axis: np.ndarray):\n    r\"\"\"Returns a 4x4 rotation matrix that performs a rotation around axis by angle\n\n    Parameters\n    ----------\n    angle : float\n        Angle to rotate by\n    axis: np.ndarray\n        Axis to rotate about\n\n    Returns\n    -------\n    torch.Tensor\n        3x3 rotation matrix\n    \"\"\"\n    u = axis / np.linalg.norm(axis)\n    cosval, sinval = np.cos(angle), np.sin(angle)\n\n    # yapf: disable\n    cross_prod_mat = np.array([[0.0, -u[2], u[1]],\n                                [u[2], 0.0, -u[0]],\n                                [-u[1], u[0], 0.0]])\n\n    R = torch.from_numpy(\n        cosval * np.eye(3)\n        + sinval * cross_prod_mat\n        + (1.0 - cosval) * np.outer(u, u)\n    )\n    # yapf: enable\n    return R.float()    \n\nclass PointcloudRotatebyAngle(object):\n    def __init__(self, rotation_angle = 0.0):\n        self.rotation_angle = rotation_angle\n\n    def __call__(self, pc):\n        normals = pc.size(2) > 3\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            cosval = np.cos(self.rotation_angle)\n            sinval = np.sin(self.rotation_angle)\n            rotation_matrix = np.array([[cosval, 0, sinval],\n                                        [0, 1, 0],\n                                        [-sinval, 0, cosval]])\n            rotation_matrix = torch.from_numpy(rotation_matrix).float().cuda()\n            \n            cur_pc = pc[i, :, :]\n            if not normals:\n                cur_pc = cur_pc @ rotation_matrix\n            else:\n                pc_xyz = cur_pc[:, 0:3]\n                pc_normals = cur_pc[:, 3:]\n                cur_pc[:, 0:3] = pc_xyz @ rotation_matrix\n                cur_pc[:, 3:] = pc_normals @ rotation_matrix\n                \n            pc[i, :, 
:] = cur_pc\n            \n        return pc\n\nclass PointcloudJitter(object):\n    def __init__(self, std=0.01, clip=0.05):\n        self.std, self.clip = std, clip\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            jittered_data = pc.new(pc.size(1), 3).normal_(\n                mean=0.0, std=self.std\n            ).clamp_(-self.clip, self.clip)\n            pc[i, :, 0:3] += jittered_data\n            \n        return pc\n\nclass PointcloudScaleAndTranslate(object):\n    def __init__(self, scale_low=2. / 3., scale_high=3. / 2., translate_range=0.2):\n        self.scale_low = scale_low\n        self.scale_high = scale_high\n        self.translate_range = translate_range\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            xyz1 = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])\n            xyz2 = np.random.uniform(low=-self.translate_range, high=self.translate_range, size=[3])\n            \n            pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], torch.from_numpy(xyz1).float().cuda()) + torch.from_numpy(xyz2).float().cuda()\n            \n        return pc\n        \nclass PointcloudScale(object):\n    def __init__(self, scale_low=2. / 3., scale_high=3. 
/ 2.):\n        self.scale_low = scale_low\n        self.scale_high = scale_high\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            xyz1 = np.random.uniform(low=self.scale_low, high=self.scale_high, size=[3])\n            \n            pc[i, :, 0:3] = torch.mul(pc[i, :, 0:3], torch.from_numpy(xyz1).float().cuda())\n            \n        return pc\n        \nclass PointcloudTranslate(object):\n    def __init__(self, translate_range=0.2):\n        self.translate_range = translate_range\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            xyz2 = np.random.uniform(low=-self.translate_range, high=self.translate_range, size=[3])\n            \n            pc[i, :, 0:3] = pc[i, :, 0:3] + torch.from_numpy(xyz2).float().cuda()\n            \n        return pc\n\nclass PointcloudRandomInputDropout(object):\n    def __init__(self, max_dropout_ratio=0.875):\n        assert max_dropout_ratio >= 0 and max_dropout_ratio < 1\n        self.max_dropout_ratio = max_dropout_ratio\n\n    def __call__(self, pc):\n        bsize = pc.size()[0]\n        for i in range(bsize):\n            dropout_ratio = np.random.random() * self.max_dropout_ratio  # 0~0.875\n            drop_idx = np.where(np.random.random((pc.size()[1])) <= dropout_ratio)[0]\n            if len(drop_idx) > 0:\n                cur_pc = pc[i, :, :]\n                cur_pc[drop_idx.tolist(), 0:3] = cur_pc[0, 0:3].repeat(len(drop_idx), 1)  # set to the first point\n                pc[i, :, :] = cur_pc\n\n        return pc\n"
  },
  {
    "path": "rs_cnn/docs/_config.yml",
    "content": "theme: jekyll-theme-cayman\ntitle: Relation-Shape CNN (RS-CNN)\ndescription: ' '\nshow_downloads: true"
  },
  {
    "path": "rs_cnn/docs/index.md",
    "content": "<h1 align = \"center\">Relation-Shape Convolutional Neural Network for Point Cloud Analysis</h1>\n<p align = \"center\">\n    <a href=\"https://yochengliu.github.io/\" style=\"font-size: 23px\">Yongcheng Liu</a> &emsp;\n    <a href=\"http://www.nlpr.ia.ac.cn/fanbin/\" style=\"font-size: 23px\">Bin Fan</a> &emsp;\n    <a href=\"https://scholar.google.com/citations?user=0ggsACEAAAAJ&hl=zh-CN\" style=\"font-size: 23px\">Shiming Xiang</a>  &emsp;\n    <a href=\"http://people.ucas.ac.cn/~0005314\" style=\"font-size: 23px\">Chunhong Pan</a>\n</p>\n<p align = \"center\">\n    <a href=\"http://cvpr2019.thecvf.com/\" style=\"font-size: 23px\"><strong>CVPR 2019</strong></a> &emsp;\n    <font color=\"#FF4500\" size=\"5\"><strong>Oral & Best paper finalist</strong></font>\n</p>\n<br>\n\n<div align=\"center\">\n    <img src=\"images/partseg.jpg\" width=\"90%\" height =\"90%\" alt=\"partseg.jpg\" />\n</div>\n<p align = 'center'>\n    <small>Segmentation examples on ShapeNet part benchmark. Although the part shapes implied in irregular points are extremely diverse and they may be very confusing to recognize, our RS-CNN can also segment them out with decent accuracy.</small>\n</p>\n\n<h1 align = \"center\">Abstract</h1> \n\nPoint cloud analysis is very challenging, as the shape implied in irregular points is difficult to capture. In this paper, we propose RS-CNN, namely, Relation-Shape Convolutional Neural Network, which extends regular grid CNN to irregular configuration for point cloud analysis. ___The key to RS-CNN is learning from relation___, _i.e._, the geometric topology constraint among points. Specifically, the convolutional weight for local point set is forced to ___learn a high-level relation expression from predefined geometric priors___, between a sampled point from this point set and the others. 
In this way, an inductive local representation with ___explicit reasoning about the spatial layout of points___ can be obtained, which leads to strong shape awareness and robustness. With this convolution as a basic operator, a hierarchical architecture, RS-CNN, can be developed to achieve contextual shape-aware learning for point cloud analysis. Extensive experiments on challenging benchmarks across three tasks verify that RS-CNN achieves state-of-the-art performance.\n\n<h1 align = \"center\">Motivation</h1> \n\n<div align=\"center\">\n    <img src=\"images/motivation.jpg\" width=\"80%\" height =\"80%\" alt=\"motivation.jpg\" />\n</div>\n<p align = 'center'>\n    <small>Left part: 3D point cloud. Right part: Underlying shape formed by this point cloud.</small>\n</p>\n\n- The geometric relation among points is an explicit expression about the spatial layout of points, further discriminatively reflecting the underlying shape.\n\n- CNN has demonstrated its powerful visual abstraction capability for 2D images that are in the format of a regular grid.\n\n- Can we extend 2D grid CNN to 3D irregular configuration for point cloud analysis, by learning expressive geometric relation encoding for discriminative shape awareness?\n\n<h1 align = \"center\">RS-Conv: Relation-Shape Convolution</h1>\n\n[rsconv]: ./images/rsconv.jpg\n![rsconv]\n<p align = 'center'>\n<small> Overview of our relation-shape convolution (RS-Conv). </small>\n</p>\n\nIn this paper, we develop a hierarchical CNN-like architecture, _i.e._ RS-CNN. RS-CNN is equipped with a novel learn-from-relation convolution operator called relation-shape convolution (RS-Conv). 
As illustrated in the figure, the key to RS-CNN is learning from relation.\n\nTo be specific:\n\n- The convolutional weight <img src=\"maths/w_strong.png\" align=\"center\" border=\"0\" weight=\"24\" height=\"16\" alt=\"{\\bm{\\mathrm w}}_j\" /> for <img src=\"maths/xj.png\" align=\"center\" border=\"0\" alt=\"x_{j}\" width=\"19\" height=\"17\" /> is converted to <img src=\"maths/wij.png\" align=\"center\" border=\"0\" alt=\"{\\bm{\\mathrm w}}_{ij}\" width=\"28\" height=\"16\" />, which learns a high-level mapping <img src=\"maths/m.png\" align=\"center\" border=\"0\" alt=\"\\mathcal{M}\" width=\"25\" height=\"15\" />, _i.e._, <img src=\"maths/wijm.png\" align=\"center\" border=\"0\" alt=\"{\\bm{\\mathrm w}}_{ij}=\\mathcal{M}({\\bm{\\mathrm h}}_{ij})\" width=\"110\" height=\"22\" />, on predefined geometric relation vector <img src=\"maths/hij.png\" align=\"center\" border=\"0\" alt=\"{\\bm{\\mathrm h}}_{ij}\" width=\"23\" height=\"19\" />.\n\n- In this way, the inductive convolutional representation <img src=\"maths/conv.png\" align=\"center\" border=\"0\" weight=\"160\" height=\"24\"  alt=\"\\sigma \\big( \\mathcal{A}(\\{{\\bm{\\mathrm w}}_{ij} \\cdot {\\bm{\\mathrm f}}_{x_j}, \\hspace{0.1pt} \\forall x_j\\}) \\big)\"/> can expressively reason the spatial layout of points, resulting in discriminative shape awareness.\n\n- As in image CNN, further channel-raising mapping is conducted for a more powerful shape-aware representation.\n\n<h1 align = \"center\">Revisiting 2D Grid Convolution</h1>\n\n<div align=\"center\">\n    <img src=\"images/2dconv.jpg\" width=\"75%\" height =\"75%\" alt=\"2dconv.jpg\" />\n</div>\n<p align = 'center'>\n<small> Illustration of 2D grid convolution with a kernel of 3 x 3. 
</small>\n</p>\n\n- The convolutional weight <img src=\"maths/swj.png\" align=\"center\" border=\"0\" alt=\"w_{j}\" width=\"25\" height=\"17\" /> for <img src=\"maths/xj.png\" align=\"center\" border=\"0\" alt=\"x_{j}\" width=\"19\" height=\"17\" /> always implies a fixed positional relation between <img src=\"maths/xi.png\" align=\"center\" border=\"0\" alt=\"x_{i}\" width=\"19\" height=\"15\" /> and its neighbor <img src=\"maths/xj.png\" align=\"center\" border=\"0\" alt=\"x_{j}\" width=\"19\" height=\"17\" /> in the regular grid. That is, <img src=\"maths/swj.png\" align=\"center\" border=\"0\" alt=\"w_{j}\" width=\"25\" height=\"17\" /> is actually constrained to encode one kind of regular grid relation in the learning process.\n\n- Therefore, our RS-Conv with relation learning is more general and can be applied to model 2D grid spatial relationship.\n\n<h1 align = \"center\">Experiment</h1>\n\n### Shape Classification on ModelNet40 Benchmark\n\n<div align=\"center\">\n    <img src=\"images/cls.jpg\" width=\"70%\" height =\"70%\" alt=\"cls.jpg\" />\n</div>\n<p align = 'center'>\n<small> Shape classification results (%) (nor: normal). </small>\n</p>\n\n- Our RS-CNN outperforms the state-of-the-art point cloud-based methods with only <img src=\"maths/xyz.png\" align=\"center\" border=\"0\" alt=\"\\mathrm{xyz}\" width=\"29\" height=\"14\" /> as the input features. \n\n### Normal Estimation\n\n<div align=\"center\">\n    <img src=\"images/normal.jpg\" width=\"80%\" height =\"80%\" alt=\"normal.jpg\" />\n</div>\n<p align = 'center'>\n<small> Normal estimation examples. For clearness, we only show predictions with angle less than 30 degree in blue, and angle greater than 90 degree in red between the ground truth normals. 
</small>\n</p>\n\n### Geometric Relation Definition\n\n<div align=\"center\">\n    <img src=\"images/relation.jpg\" width=\"80%\" height =\"80%\" alt=\"relation.jpg\" />\n</div>\n<p align = 'center'>\n<small> The results (%) of five intuitive low-level relation. Model A applies only 3D Euclidean distance; Model B adds the coordinates difference to model A; Model C adds the coordinates of two points to model B; Model D utilizes the normals of two points and their cosine distance; Model E projects 3D points onto a 2D plane of XY, XZ and YZ. </small>\n</p>\n\n- The low-level relation vector can be defined flexibly. \n\n- Using only 3D Euclidean distance as relation can result in an accuracy of 92.5%.\n\n- Even learning from the relation in 2D projections of points, a decent performance of 92.2% can also be achieved. \n\n### Robustness to sampling density\n\n<div align=\"center\">\n    <img src=\"images/density.jpg\" width=\"90%\" height =\"90%\" alt=\"density.jpg\" />\n</div>\n<p align = 'center'>\n<small> Left part: Point cloud with random point dropout. Right part: Test results of using sparser points as the input to a model trained with 1024 points. </small>\n</p>\n\n### Robustness to point permutation and rigid transformation (%)\n\n<div align=\"center\">\n    <img src=\"images/rotation.jpg\" width=\"90%\" height =\"90%\" alt=\"rotation.jpg\" />\n</div>\n<p align = 'center'>\n<small> All the models are trained without related data augmentations, e.g., translation or rotation, to avoid confusion. During testing, we perform random permutation (perm.) of points, add a small translation of 0.2 and rotate the input point cloud by 90 degree and 180 degree. 
</small>\n</p>\n\n<h1 align = \"center\">Visualization and Complexity</h1>\n\n### Visualization\n\n<div align=\"center\">\n    <img src=\"images/visualization.jpg\" width=\"80%\" height =\"80%\" alt=\"visualization.jpg\" />\n</div>\n<p align = 'center'>\n<small> Visualization of the shape features learned by the first two layers of RS-CNN. </small>\n</p>\n\n- The features learned by the first layer mostly respond to edges, corners and arcs, while the ones in the second layer capture more semantical shape parts like airfoils and heads.\n\n- As image CNN, our RS-CNN learns 3D shape semantics from point cloud in a local-to-global manner.\n\n### Complexity\n\n<div align=\"center\">\n    <img src=\"images/complexity.jpg\" width=\"75%\" height =\"75%\" alt=\"complexity.jpg\" />\n</div>\n<p align = 'center'>\n<small> Complexity of RS-CNN in point cloud classification. </small>\n</p>\n\n<h1 align = \"center\">Publication</h1>\n\nYongcheng Liu, Bin Fan, Shiming Xiang and Chunhong Pan, \"Relation-Shape Convolutional Neural Network for Point Cloud Analysis\", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. [[arXiv](https://arxiv.org/abs/1904.07601)] [[CVF](http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Relation-Shape_Convolutional_Neural_Network_for_Point_Cloud_Analysis_CVPR_2019_paper.pdf)]\n\n```\n        @inproceedings{liu2019rscnn,   \n            author = {Yongcheng Liu and    \n                            Bin Fan and    \n                      Shiming Xiang and   \n                           Chunhong Pan},   \n            title = {Relation-Shape Convolutional Neural Network for Point Cloud Analysis},   \n            booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},    \n            pages = {8895--8904},  \n            year = {2019}   \n        }   \n```\n"
  },
  {
    "path": "rs_cnn/models/__init__.py",
    "content": "from .rscnn_ssn_cls import RSCNN_SSN as RSCNN_SSN_Cls\nfrom .rscnn_msn_seg import RSCNN_MSN as RSCNN_MSN_Seg"
  },
  {
    "path": "rs_cnn/models/rscnn_msn_seg.py",
    "content": "import os, sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, \"../utils\"))\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport pytorch_utils as pt_utils\nfrom pointnet2_modules_updated import PointnetSAModule, PointnetFPModule, PointnetSAModuleMSG\nimport numpy as np\n\nclass RSCNN_MSN(nn.Module):\n    r\"\"\"\n        PointNet2 with multi-scale grouping\n        Semantic segmentation network that uses feature propogation layers\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier that run for each point\n        input_channels: int = 6\n            Number of input channels in the feature descriptor for each point.  If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=0, relation_prior=1, use_xyz=True):\n        super().__init__()\n\n        self.SA_modules = nn.ModuleList()\n        c_in = input_channels\n        self.SA_modules.append(     # 0\n            PointnetSAModuleMSG(\n                npoint=1024,\n                radii=[0.075, 0.1, 0.125],\n                nsamples=[16, 32, 48],\n                mlps=[[c_in, 64], [c_in, 64], [c_in, 64]],\n                first_layer=True,\n                use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n        c_out_0 = 64*3\n\n        c_in = c_out_0\n        self.SA_modules.append(    # 1\n            PointnetSAModuleMSG(\n                npoint=256,\n                radii=[0.1, 0.15, 0.2],\n                nsamples=[16, 48, 64],\n                mlps=[[c_in, 128], [c_in, 128], [c_in, 128]],\n    
            use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n        c_out_1 = 128*3\n\n        c_in = c_out_1\n        self.SA_modules.append(    # 2\n            PointnetSAModuleMSG(\n                npoint=64,\n                radii=[0.2, 0.3, 0.4],\n                nsamples=[16, 32, 48],\n                mlps=[[c_in, 256], [c_in, 256], [c_in, 256]],\n                use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n        c_out_2 = 256*3\n\n        c_in = c_out_2\n        self.SA_modules.append(    # 3\n            PointnetSAModuleMSG(\n                npoint=16,\n                radii=[0.4, 0.6, 0.8],\n                nsamples=[16, 24, 32],\n                mlps=[[c_in, 512], [c_in, 512], [c_in, 512]],\n                use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n        c_out_3 = 512*3\n        \n        self.SA_modules.append(   # 4   global pooling\n            PointnetSAModule(\n                nsample = 16,\n                mlp=[c_out_3, 128], use_xyz=use_xyz\n            )\n        )\n        global_out = 128\n        \n        self.SA_modules.append(   # 5   global pooling\n            PointnetSAModule(\n                nsample = 64,\n                mlp=[c_out_2, 128], use_xyz=use_xyz\n            )\n        )\n        global_out2 = 128\n\n        self.FP_modules = nn.ModuleList()\n        self.FP_modules.append(\n            PointnetFPModule(mlp=[256 + input_channels, 128, 128])\n        )\n        self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_0, 256, 256]))\n        self.FP_modules.append(PointnetFPModule(mlp=[512 + c_out_1, 512, 512]))\n        self.FP_modules.append(\n            PointnetFPModule(mlp=[c_out_3 + c_out_2, 512, 512])\n        )\n\n        self.FC_layer = nn.Sequential(\n            pt_utils.Conv1d(128+global_out+global_out2+16, 128, bn=True), nn.Dropout(),\n            pt_utils.Conv1d(128, 
num_classes, activation=None)\n        )\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = (\n            pc[..., 3:].transpose(1, 2).contiguous()\n            if pc.size(-1) > 3 else None\n        )\n\n        return xyz, features\n\n    def forward(self, pointcloud: torch.cuda.FloatTensor, cls):\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predicts on\n                Each point in the point-cloud MUST\n                be formated as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n        \n        l_xyz, l_features = [xyz], [features]\n        for i in range(len(self.SA_modules)):\n            if i < 5:\n                li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])\n                if li_xyz is not None:\n                    random_index = np.arange(li_xyz.size()[1])\n                    np.random.shuffle(random_index)\n                    li_xyz = li_xyz[:, random_index, :]\n                    li_features = li_features[:, :, random_index]\n                l_xyz.append(li_xyz)\n                l_features.append(li_features)\n        \n        _, global_out2_feat = self.SA_modules[5](l_xyz[3], l_features[3])\n        \n        for i in range(-1, -(len(self.FP_modules) + 1), -1):\n            l_features[i - 1 - 1] = self.FP_modules[i](\n                l_xyz[i - 1 - 1], l_xyz[i - 1], l_features[i - 1 - 1], l_features[i - 1]\n            )\n        \n        cls = cls.view(-1, 16, 1).repeat(1, 1, l_features[0].size()[2])         # object class one-hot-vector\n        l_features[0] = torch.cat((l_features[0], l_features[-1].repeat(1, 1, l_features[0].size()[2]), global_out2_feat.repeat(1, 1, l_features[0].size()[2]), cls), 1)\n        return 
self.FC_layer(l_features[0]).transpose(1, 2).contiguous()\n"
  },
  {
    "path": "rs_cnn/models/rscnn_ssn_cls.py",
    "content": "import os, sys\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(BASE_DIR)\nsys.path.append(os.path.join(BASE_DIR, \"../utils\"))\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport pytorch_utils as pt_utils\nfrom pointnet2_modules_updated import PointnetSAModule, PointnetSAModuleMSG\nimport numpy as np\n\n# Relation-Shape CNN: Single-Scale Neighborhood\nclass RSCNN_SSN(nn.Module):\n    r\"\"\"\n        PointNet2 with multi-scale grouping\n        Semantic segmentation network that uses feature propogation layers\n\n        Parameters\n        ----------\n        num_classes: int\n            Number of semantics classes to predict over -- size of softmax classifier that run for each point\n        input_channels: int = 6\n            Number of input channels in the feature descriptor for each point.  If the point cloud is Nx9, this\n            value should be 6 as in an Nx9 point cloud, 3 of the channels are xyz, and 6 are feature descriptors\n        use_xyz: bool = True\n            Whether or not to use the xyz position of a point as a feature\n    \"\"\"\n\n    def __init__(self, num_classes, input_channels=0, relation_prior=1, use_xyz=True):\n        super().__init__()\n\n        self.SA_modules = nn.ModuleList()\n        \n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=512,\n                radii=[0.23],\n                nsamples=[48],\n                mlps=[[input_channels, 128]],\n                first_layer=True,\n                use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n\n        self.SA_modules.append(\n            PointnetSAModuleMSG(\n                npoint=128,\n                radii=[0.32],\n                nsamples=[64],\n                mlps=[[128, 512]],\n                use_xyz=use_xyz,\n                relation_prior=relation_prior\n            )\n        )\n        \n        
self.SA_modules.append(\n            # global convolutional pooling\n            PointnetSAModule(\n                nsample = 128,\n                mlp=[512, 1024], \n                use_xyz=use_xyz\n            )\n        )\n\n        self.FC_layer = nn.Sequential(\n            pt_utils.FC(1024, 512, activation=nn.ReLU(inplace=True), bn=True),\n            nn.Dropout(p=0.5),\n            pt_utils.FC(512, 256, activation=nn.ReLU(inplace=True), bn=True),\n            nn.Dropout(p=0.5),\n            pt_utils.FC(256, num_classes, activation=None)\n        )\n\n    def _break_up_pc(self, pc):\n        xyz = pc[..., 0:3].contiguous()\n        features = (\n            pc[..., 3:].transpose(1, 2).contiguous()\n            if pc.size(-1) > 3 else None\n        )\n        return xyz, features\n\n    def forward(self, pointcloud: torch.cuda.FloatTensor):\n        r\"\"\"\n            Forward pass of the network\n\n            Parameters\n            ----------\n            pointcloud: Variable(torch.cuda.FloatTensor)\n                (B, N, 3 + input_channels) tensor\n                Point cloud to run predicts on\n                Each point in the point-cloud MUST\n                be formated as (x, y, z, features...)\n        \"\"\"\n        xyz, features = self._break_up_pc(pointcloud)\n        for module in self.SA_modules:\n            xyz, features = module(xyz, features)\n        return self.FC_layer(features.squeeze(-1))\n\n\nif __name__ == \"__main__\":\n    sim_data = Variable(torch.rand(32, 2048, 6))\n    sim_data = sim_data.cuda()\n    sim_cls = Variable(torch.ones(32, 16))\n    sim_cls = sim_cls.cuda()\n\n    seg = RSCNN_SSN(num_classes=50, input_channels=3, use_xyz=True)\n    seg = seg.cuda()\n    out = seg(sim_data, sim_cls)\n    print('seg', out.size())"
  },
  {
    "path": "rs_cnn/train_cls.py",
    "content": "import torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.autograd import Variable\nimport numpy as np\nimport os\nfrom torchvision import transforms\nfrom models import RSCNN_SSN_Cls as RSCNN_SSN\nfrom data import ModelNet40Cls\nimport utils.pytorch_utils as pt_utils\nimport pointnet2.utils.pointnet2_utils as pointnet2_utils\nimport data.data_utils as d_utils\nimport argparse\nimport random\nimport yaml\nimport pdb\n\ntorch.backends.cudnn.enabled = True\ntorch.backends.cudnn.benchmark = True\ntorch.backends.cudnn.deterministic = True\n\nseed = 123\nrandom.seed(seed)\nnp.random.seed(seed)\ntorch.manual_seed(seed)            \ntorch.cuda.manual_seed(seed)       \ntorch.cuda.manual_seed_all(seed) \n\nparser = argparse.ArgumentParser(description='Relation-Shape CNN Shape Classification Training')\nparser.add_argument('--config', default='cfgs/config_ssn_cls.yaml', type=str)\n\ndef main():\n    args = parser.parse_args()\n    with open(args.config) as f:\n        config = yaml.load(f)\n    print(\"\\n**************************\")\n    for k, v in config['common'].items():\n        setattr(args, k, v)\n        print('\\n[%s]:'%(k), v)\n    print(\"\\n**************************\\n\")\n    \n    try:\n        os.makedirs(args.save_path)\n    except OSError:\n        pass\n    \n    train_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n    test_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n    \n    train_dataset = ModelNet40Cls(num_points = args.num_points, root = args.data_root, transforms=train_transforms)\n    train_dataloader = DataLoader(\n        train_dataset, \n        batch_size=args.batch_size,\n        shuffle=True, \n        num_workers=int(args.workers), \n        pin_memory=True\n    )\n\n    test_dataset = ModelNet40Cls(num_points = args.num_points, root = args.data_root, 
transforms=test_transforms, train=False)\n    test_dataloader = DataLoader(\n        test_dataset, \n        batch_size=args.batch_size,\n        shuffle=False, \n        num_workers=int(args.workers), \n        pin_memory=True\n    )\n    \n    model = RSCNN_SSN(num_classes = args.num_classes, input_channels = args.input_channels, relation_prior = args.relation_prior, use_xyz = True)\n    model.cuda()\n    optimizer = optim.Adam(\n        model.parameters(), lr=args.base_lr, weight_decay=args.weight_decay)\n\n    lr_lbmd = lambda e: max(args.lr_decay**(e // args.decay_step), args.lr_clip / args.base_lr)\n    bnm_lmbd = lambda e: max(args.bn_momentum * args.bn_decay**(e // args.decay_step), args.bnm_clip)\n    lr_scheduler = lr_sched.LambdaLR(optimizer, lr_lbmd)\n    bnm_scheduler = pt_utils.BNMomentumScheduler(model, bnm_lmbd)\n    \n    if args.checkpoint != '':\n        model.load_state_dict(torch.load(args.checkpoint))\n        print('Load model successfully: %s' % (args.checkpoint))\n\n    criterion = nn.CrossEntropyLoss()\n    num_batch = len(train_dataset)/args.batch_size\n    \n    # training\n    train(train_dataloader, test_dataloader, model, criterion, optimizer, lr_scheduler, bnm_scheduler, args, num_batch)\n    \n\ndef train(train_dataloader, test_dataloader, model, criterion, optimizer, lr_scheduler, bnm_scheduler, args, num_batch):\n    PointcloudScaleAndTranslate = d_utils.PointcloudScaleAndTranslate()   # initialize augmentation\n    global g_acc \n    g_acc = 0.91    # only save the model whose acc > 0.91\n    batch_count = 0\n    model.train()\n    for epoch in range(args.epochs):\n        for i, data in enumerate(train_dataloader, 0):\n            if lr_scheduler is not None:\n                lr_scheduler.step(epoch)\n            if bnm_scheduler is not None:\n                bnm_scheduler.step(epoch-1)\n            points, target = data\n            points, target = points.cuda(), target.cuda()\n            points, target = 
Variable(points), Variable(target)\n            \n            # farthest point sampling\n            fps_idx = pointnet2_utils.furthest_point_sample(points, 1200)  # (B, npoint)\n            fps_idx = fps_idx[:, np.random.choice(1200, args.num_points, False)]\n            points = pointnet2_utils.gather_operation(points.transpose(1, 2).contiguous(), fps_idx).transpose(1, 2).contiguous()  # (B, N, 3)\n            \n            # augmentation\n            points.data = PointcloudScaleAndTranslate(points.data)\n            \n            optimizer.zero_grad()\n            \n            pred = model(points)\n            target = target.view(-1)\n            loss = criterion(pred, target)\n            loss.backward()\n            optimizer.step()\n            if i % args.print_freq_iter == 0:\n                print('[epoch %3d: %3d/%3d] \\t train loss: %0.6f \\t lr: %0.5f' %(epoch+1, i, num_batch, loss.data.clone(), lr_scheduler.get_lr()[0]))\n            batch_count += 1\n            \n            # validation in between an epoch\n            if args.evaluate and batch_count % int(args.val_freq_epoch * num_batch) == 0:\n                validate(test_dataloader, model, criterion, args, batch_count)\n\n\ndef validate(test_dataloader, model, criterion, args, iter): \n    global g_acc\n    model.eval()\n    losses, preds, labels = [], [], []\n    with torch.no_grad():\n        for j, data in enumerate(test_dataloader, 0):\n            points, target = data\n            points, target = points.cuda(), target.cuda()\n            # points, target = Variable(points, volatile=True), Variable(target, volatile=True)\n\n            # farthest point sampling\n            fps_idx = pointnet2_utils.furthest_point_sample(points, args.num_points)  # (B, npoint)\n            # fps_idx = fps_idx[:, np.random.choice(1200, args.num_points, False)]\n            points = pointnet2_utils.gather_operation(points.transpose(1, 2).contiguous(), fps_idx).transpose(1, 2).contiguous()\n\n            pred = model(points)\n            target = target.view(-1)\n            loss = criterion(pred, target)\n            losses.append(loss.data.cpu())\n            _, pred_choice = torch.max(pred.data, -1)\n\n            preds.append(pred_choice)\n            labels.append(target.data)\n\n        preds = torch.cat(preds, 0)\n        labels = torch.cat(labels, 0)\n        acc = (preds == labels).sum().float() / labels.numel()\n        print('\\nval loss: %0.6f \\t acc: %0.6f\\n' % (np.mean(np.array(losses)), acc))\n        if acc > g_acc:\n            g_acc = acc\n            torch.save(model.state_dict(), '%s/cls_ssn_iter_%d_acc_%0.6f.pth' % (args.save_path, iter, acc))\n    model.train()\n    \nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "rs_cnn/train_cls.sh",
    "content": "#!/usr/bin/env sh\nmkdir -p log\nnow=$(date +\"%Y%m%d_%H%M%S\")\nlog_name=\"Cls_LOG_\"$now\"\"\nexport CUDA_VISIBLE_DEVICES=0\npython -u train_cls.py \\\n--config cfgs/config_ssn_cls.yaml \\\n2>&1|tee log/$log_name.log &\n"
  },
  {
    "path": "rs_cnn/train_partseg.py",
    "content": "import torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.autograd import Variable\nimport numpy as np\nimport os\nfrom torchvision import transforms\nfrom models import RSCNN_MSN_Seg as RSCNN_MSN\nfrom data import ShapeNetPart\nimport utils.pytorch_utils as pt_utils\nimport data.data_utils as d_utils\nimport argparse\nimport random\nimport yaml\nimport pdb\n\ntorch.backends.cudnn.enabled = True\ntorch.backends.cudnn.benchmark = True\ntorch.backends.cudnn.deterministic = True\n\nseed = 123\nrandom.seed(seed)\nnp.random.seed(seed)\ntorch.manual_seed(seed)            \ntorch.cuda.manual_seed(seed)       \ntorch.cuda.manual_seed_all(seed) \n\nparser = argparse.ArgumentParser(description='Relation-Shape CNN Shape Part Segmentation Training')\nparser.add_argument('--config', default='cfgs/config_msn_partseg.yaml', type=str)\n\ndef main():\n    args = parser.parse_args()\n    with open(args.config) as f:\n        config = yaml.load(f)\n    print(\"\\n**************************\")\n    for k, v in config['common'].items():\n        setattr(args, k, v)\n        print('\\n[%s]:'%(k), v)\n    print(\"\\n**************************\\n\")\n    \n    try:\n        os.makedirs(args.save_path)\n    except OSError:\n        pass\n    \n    train_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n    test_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n    \n    train_dataset = ShapeNetPart(root = args.data_root, num_points = args.num_points, split = 'trainval', normalize = True, transforms = train_transforms)\n    train_dataloader = DataLoader(\n        train_dataset, \n        batch_size=args.batch_size,\n        shuffle=True, \n        num_workers=int(args.workers), \n        pin_memory=True\n    )\n    \n    global test_dataset\n    test_dataset = ShapeNetPart(root = args.data_root, num_points = 
args.num_points, split = 'test', normalize = True, transforms = test_transforms)\n    test_dataloader = DataLoader(\n        test_dataset, \n        batch_size=args.batch_size,\n        shuffle=False, \n        num_workers=int(args.workers), \n        pin_memory=True\n    )\n    \n    model = RSCNN_MSN(num_classes = args.num_classes, input_channels = args.input_channels, relation_prior = args.relation_prior, use_xyz = True)\n    model.cuda()\n    optimizer = optim.Adam(\n        model.parameters(), lr=args.base_lr, weight_decay=args.weight_decay)\n\n    lr_lbmd = lambda e: max(args.lr_decay**(e // args.decay_step), args.lr_clip / args.base_lr)\n    bnm_lmbd = lambda e: max(args.bn_momentum * args.bn_decay**(e // args.decay_step), args.bnm_clip)\n    lr_scheduler = lr_sched.LambdaLR(optimizer, lr_lbmd)\n    bnm_scheduler = pt_utils.BNMomentumScheduler(model, bnm_lmbd)\n    \n    if args.checkpoint != '':\n        model.load_state_dict(torch.load(args.checkpoint))\n        print('Loaded model successfully: %s' % (args.checkpoint))\n\n    criterion = nn.CrossEntropyLoss()\n    num_batch = len(train_dataset)/args.batch_size\n    \n    # training\n    train(train_dataloader, test_dataloader, model, criterion, optimizer, lr_scheduler, bnm_scheduler, args, num_batch)\n\n    \ndef train(train_dataloader, test_dataloader, model, criterion, optimizer, lr_scheduler, bnm_scheduler, args, num_batch):\n    PointcloudScaleAndTranslate = d_utils.PointcloudScaleAndTranslate()   # initialize augmentation\n    global Class_mIoU, Inst_mIoU\n    Class_mIoU, Inst_mIoU = 0.83, 0.85\n    batch_count = 0\n    model.train()\n    for epoch in range(args.epochs):\n        for i, data in enumerate(train_dataloader, 0):\n            if lr_scheduler is not None:\n                lr_scheduler.step(epoch)\n            if bnm_scheduler is not None:\n                bnm_scheduler.step(epoch-1)\n            points, target, cls = data\n            points, target = points.cuda(), target.cuda()\n      
      points, target = Variable(points), Variable(target)\n            # augmentation\n            points.data = PointcloudScaleAndTranslate(points.data)\n            \n            optimizer.zero_grad()\n            \n            batch_one_hot_cls = np.zeros((len(cls), 16))   # 16 object classes\n            for b in range(len(cls)):\n                batch_one_hot_cls[b, int(cls[b])] = 1\n            batch_one_hot_cls = torch.from_numpy(batch_one_hot_cls)\n            batch_one_hot_cls = Variable(batch_one_hot_cls.float().cuda())\n\n            pred = model(points, batch_one_hot_cls)\n            pred = pred.view(-1, args.num_classes)\n            target = target.view(-1,1)[:,0]\n\n            loss = criterion(pred, target)\n            loss.backward()\n            optimizer.step()\n            \n            if i % args.print_freq_iter == 0:\n                print('[epoch %3d: %3d/%3d] \\t train loss: %0.6f \\t lr: %0.5f' %(epoch+1, i, num_batch, loss.data.clone(), lr_scheduler.get_lr()[0]))\n            batch_count += 1\n            \n            # validation in between an epoch\n            if (epoch < 3 or epoch > 40) and args.evaluate and batch_count % int(args.val_freq_epoch * num_batch) == 0:\n                validate(test_dataloader, model, criterion, args, batch_count)\n\n\ndef validate(test_dataloader, model, criterion, args, iter): \n    global Class_mIoU, Inst_mIoU, test_dataset\n    model.eval()\n    with torch.no_grad():\n        seg_classes = test_dataset.seg_classes\n        shape_ious = {cat:[] for cat in seg_classes.keys()}\n        seg_label_to_cat = {}           # {0:Airplane, 1:Airplane, ...49:Table}\n        for cat in seg_classes.keys():\n            for label in seg_classes[cat]:\n                seg_label_to_cat[label] = cat\n\n        losses = []\n        for _, data in enumerate(test_dataloader, 0):\n            points, target, cls = data\n            # points, target = Variable(points, volatile=True), Variable(target, volatile=True)\n     
       points, target = points.cuda(), target.cuda()\n\n            batch_one_hot_cls = np.zeros((len(cls), 16))   # 16 object classes\n            for b in range(len(cls)):\n                batch_one_hot_cls[b, int(cls[b])] = 1\n            batch_one_hot_cls = torch.from_numpy(batch_one_hot_cls)\n            batch_one_hot_cls = Variable(batch_one_hot_cls.float().cuda())\n\n            pred = model(points, batch_one_hot_cls)\n            loss = criterion(pred.view(-1, args.num_classes), target.view(-1,1)[:,0])\n            losses.append(loss.data.cpu())\n            pred = pred.data.cpu()\n            target = target.data.cpu()\n            pred_val = torch.zeros(len(cls), args.num_points).type(torch.LongTensor)\n            # pred to the groundtruth classes (selected by seg_classes[cat])\n            for b in range(len(cls)):\n                cat = seg_label_to_cat[target[b, 0].item()]\n                logits = pred[b, :, :]   # (num_points, num_classes)\n                pred_val[b, :] = logits[:, seg_classes[cat]].max(1)[1] + seg_classes[cat][0]\n\n            for b in range(len(cls)):\n                segp = pred_val[b, :]\n                segl = target[b, :]\n                cat = seg_label_to_cat[segl[0].item()]\n                part_ious = [0.0 for _ in range(len(seg_classes[cat]))]\n                for l in seg_classes[cat]:\n                    if torch.sum((segl == l) | (segp == l)) == 0:\n                        # part is not present in this shape\n                        part_ious[l - seg_classes[cat][0]] = 1.0\n                    else:\n                        part_ious[l - seg_classes[cat][0]] = torch.sum((segl == l) & (segp == l)) / float(torch.sum((segl == l) | (segp == l)))\n                shape_ious[cat].append(np.mean(part_ious))\n\n        instance_ious = []\n        for cat in shape_ious.keys():\n            for iou in shape_ious[cat]:\n                instance_ious.append(iou)\n            shape_ious[cat] = np.mean(shape_ious[cat])\n        
mean_class_ious = np.mean(list(shape_ious.values()))\n\n        for cat in sorted(shape_ious.keys()):\n            print('****** %s: %0.6f'%(cat, shape_ious[cat]))\n        print('************ Test Loss: %0.6f' % (np.mean(np.array(losses))))\n        print('************ Class_mIoU: %0.6f' % (mean_class_ious))\n        print('************ Instance_mIoU: %0.6f' % (np.mean(instance_ious)))\n\n        if mean_class_ious > Class_mIoU or np.mean(instance_ious) > Inst_mIoU:\n            if mean_class_ious > Class_mIoU:\n                Class_mIoU = mean_class_ious\n            if np.mean(instance_ious) > Inst_mIoU:\n                Inst_mIoU = np.mean(instance_ious)\n            torch.save(model.state_dict(), '%s/seg_msn_iter_%d_ins_%0.6f_cls_%0.6f.pth' % (args.save_path, iter, np.mean(instance_ious), mean_class_ious))\n    model.train()\n    \nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "rs_cnn/train_partseg.sh",
    "content": "#!/usr/bin/env sh\nmkdir -p log\nnow=$(date +\"%Y%m%d_%H%M%S\")\nlog_name=\"PartSeg_LOG_\"$now\"\"\nexport CUDA_VISIBLE_DEVICES=0\npython -u train_partseg.py \\\n--config cfgs/config_msn_partseg.yaml \\\n2>&1|tee log/$log_name.log &\n"
  },
  {
    "path": "rs_cnn/utils/__init__.py",
    "content": ""
  },
  {
    "path": "rs_cnn/utils/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "rs_cnn/utils/_ext/pointnet2/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._pointnet2 import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        if callable(fn):\n            locals[symbol] = _wrap_function(fn, _ffi)\n        else:\n            locals[symbol] = fn\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "rs_cnn/utils/build_ffi.py",
    "content": "import glob\nimport torch\nimport os.path as osp\nfrom torch.utils.ffi import create_extension\nimport sys, argparse, shutil\n\nbase_dir = osp.dirname(osp.abspath(__file__))\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(\n        description=\"Arguments for building pointnet2 ffi extension\"\n    )\n    parser.add_argument(\"--objs\", nargs=\"*\")\n    clean_arg = parser.add_mutually_exclusive_group()\n    clean_arg.add_argument(\"--build\", dest='build', action=\"store_true\")\n    clean_arg.add_argument(\"--clean\", dest='clean', action=\"store_true\")\n    parser.set_defaults(build=False, clean=False)\n\n    args = parser.parse_args()\n    assert args.build or args.clean\n\n    return args\n\n\ndef build(args):\n    extra_objects = args.objs\n    extra_objects += [a for a in glob.glob('/usr/local/cuda/lib64/*.a')]\n\n    ffi = create_extension(\n        '_ext.pointnet2',\n        headers=[a for a in glob.glob(\"cinclude/*_wrapper.h\")],\n        sources=[a for a in glob.glob(\"csrc/*.c\")],\n        define_macros=[('WITH_CUDA', None)],\n        relative_to=__file__,\n        with_cuda=True,\n        extra_objects=extra_objects,\n        include_dirs=[osp.join(base_dir, 'cinclude')],\n        verbose=False,\n        package=False\n    )\n    ffi.build()\n\n\ndef clean(args):\n    shutil.rmtree(osp.join(base_dir, \"_ext\"))\n\n\nif __name__ == \"__main__\":\n    args = parse_args()\n    if args.clean:\n        clean(args)\n    else:\n        build(args)\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/ball_query_gpu.h",
    "content": "#ifndef _BALL_QUERY_GPU\n#define _BALL_QUERY_GPU\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n\t\t\t\t     int nsample, const float *new_xyz,\n\t\t\t\t     const float *xyz, const int *fps_idx, int *idx,\n\t\t\t\t     cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n#endif\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/ball_query_wrapper.h",
    "content": "\nint ball_query_wrapper(int b, int n, int m, float radius, int nsample,\n\t\t       THCudaTensor *new_xyz_tensor, THCudaTensor *xyz_tensor, THCudaIntTensor *fps_idx_tensor,\n\t\t       THCudaIntTensor *idx_tensor);\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/cuda_utils.h",
    "content": "#ifndef _CUDA_UTILS_H\n#define _CUDA_UTILS_H\n\n#include <cmath>\n\n#define TOTAL_THREADS 512\n\ninline int opt_n_threads(int work_size) {\n    const int pow_2 = std::log(static_cast<double>(work_size)) / std::log(2.0);\n\n    return max(min(1 << pow_2, TOTAL_THREADS), 1);\n}\n\ninline dim3 opt_block_config(int x, int y) {\n    const int x_threads = opt_n_threads(x);\n    const int y_threads =\n        max(min(opt_n_threads(y), TOTAL_THREADS / x_threads), 1);\n    dim3 block_config(x_threads, y_threads, 1);\n\n    return block_config;\n}\n\n#endif\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/group_points_gpu.h",
    "content": "#ifndef _GROUP_POINTS_GPU\n#define _GROUP_POINTS_GPU\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t\t const float *points, const int *idx,\n\t\t\t\t float *out, cudaStream_t stream);\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t      int nsample, const float *grad_out,\n\t\t\t\t      const int *idx, float *grad_points,\n\t\t\t\t      cudaStream_t stream);\n#ifdef __cplusplus\n}\n#endif\n#endif\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/group_points_wrapper.h",
    "content": "int group_points_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t THCudaTensor *points_tensor,\n\t\t\t THCudaIntTensor *idx_tensor, THCudaTensor *out);\nint group_points_grad_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t      THCudaTensor *grad_out_tensor,\n\t\t\t      THCudaIntTensor *idx_tensor,\n\t\t\t      THCudaTensor *grad_points_tensor);\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/interpolate_gpu.h",
    "content": "#ifndef _INTERPOLATE_GPU_H\n#define _INTERPOLATE_GPU_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n\t\t\t     const float *known, float *dist2, int *idx,\n\t\t\t     cudaStream_t stream);\n\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n\t\t\t\t      const float *points, const int *idx,\n\t\t\t\t      const float *weight, float *out,\n\t\t\t\t      cudaStream_t stream);\n\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n\t\t\t\t\t   const float *grad_out,\n\t\t\t\t\t   const int *idx, const float *weight,\n\t\t\t\t\t   float *grad_points,\n\t\t\t\t\t   cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/interpolate_wrapper.h",
    "content": "\n\nvoid three_nn_wrapper(int b, int n, int m, THCudaTensor *unknown_tensor,\n\t\t      THCudaTensor *known_tensor, THCudaTensor *dist2_tensor,\n\t\t      THCudaIntTensor *idx_tensor);\nvoid three_interpolate_wrapper(int b, int c, int m, int n,\n\t\t\t       THCudaTensor *points_tensor,\n\t\t\t       THCudaIntTensor *idx_tensor,\n\t\t\t       THCudaTensor *weight_tensor,\n\t\t\t       THCudaTensor *out_tensor);\n\nvoid three_interpolate_grad_wrapper(int b, int c, int n, int m,\n\t\t\t\t    THCudaTensor *grad_out_tensor,\n\t\t\t\t    THCudaIntTensor *idx_tensor,\n\t\t\t\t    THCudaTensor *weight_tensor,\n\t\t\t\t    THCudaTensor *grad_points_tensor);\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/sampling_gpu.h",
    "content": "#ifndef _SAMPLING_GPU_H\n#define _SAMPLING_GPU_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t  const float *points, const int *idx,\n\t\t\t\t  float *out, cudaStream_t stream);\n\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t       const float *grad_out, const int *idx,\n\t\t\t\t       float *grad_points, cudaStream_t stream);\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n\t\t\t\t\t    const float *dataset, float *temp,\n\t\t\t\t\t    int *idxs, cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n#endif\n"
  },
  {
    "path": "rs_cnn/utils/cinclude/sampling_wrapper.h",
    "content": "\nint gather_points_wrapper(int b, int c, int n, int npoints,\n\t\t\t  THCudaTensor *points_tensor,\n\t\t\t  THCudaIntTensor *idx_tensor,\n\t\t\t  THCudaTensor *out_tensor);\nint gather_points_grad_wrapper(int b, int c, int n, int npoints,\n\t\t\t       THCudaTensor *grad_out_tensor,\n\t\t\t       THCudaIntTensor *idx_tensor,\n\t\t\t       THCudaTensor *grad_points_tensor);\n\nint furthest_point_sampling_wrapper(int b, int n, int m,\n\t\t\t\t    THCudaTensor *points_tensor,\n\t\t\t\t    THCudaTensor *temp_tensor,\n\t\t\t\t    THCudaIntTensor *idx_tensor);\n"
  },
  {
    "path": "rs_cnn/utils/csrc/ball_query.c",
    "content": "#include <THC/THC.h>\n\n#include \"ball_query_gpu.h\"\n\nextern THCState *state;\n\nint ball_query_wrapper(int b, int n, int m, float radius, int nsample,\n\t\t       THCudaTensor *new_xyz_tensor, THCudaTensor *xyz_tensor, THCudaIntTensor *fps_idx_tensor,\n\t\t       THCudaIntTensor *idx_tensor) {\n\n    const float *new_xyz = THCudaTensor_data(state, new_xyz_tensor);\n    const float *xyz = THCudaTensor_data(state, xyz_tensor);\n    const int *fps_idx = THCudaIntTensor_data(state, fps_idx_tensor);\n    int *idx = THCudaIntTensor_data(state, idx_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    query_ball_point_kernel_wrapper(b, n, m, radius, nsample, new_xyz, xyz, fps_idx, idx,\n\t\t\t\t    stream);\n    return 1;\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/ball_query_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"ball_query_gpu.h\"\n#include \"cuda_utils.h\"\n\n// input: new_xyz(b, m, 3) xyz(b, n, 3)\n// output: idx(b, m, nsample)\n__global__ void query_ball_point_kernel(int b, int n, int m, float radius,\n\t\t\t\t\tint nsample,\n\t\t\t\t\tconst float *__restrict__ new_xyz,\n\t\t\t\t\tconst float *__restrict__ xyz,\n                    const int *__restrict__ fps_idx,\n\t\t\t\t\tint *__restrict__ idx) {\n    int batch_index = blockIdx.x;\n    xyz += batch_index * n * 3;\n    new_xyz += batch_index * m * 3;\n    fps_idx += batch_index * m;\n    idx += m * nsample * batch_index;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n\n    float radius2 = radius * radius;\n    for (int j = index; j < m; j += stride) {\n\tfloat new_x = new_xyz[j * 3 + 0];\n\tfloat new_y = new_xyz[j * 3 + 1];\n\tfloat new_z = new_xyz[j * 3 + 2];\n    for (int l = 0; l < nsample; ++l) {\n        idx[j * nsample + l] = fps_idx[j];\n    }\n\tfor (int k = 0, cnt = 0; k < n && cnt < nsample; ++k) {\n\t    float x = xyz[k * 3 + 0];\n\t    float y = xyz[k * 3 + 1];\n\t    float z = xyz[k * 3 + 2];\n\t    float d2 = (new_x - x) * (new_x - x) + (new_y - y) * (new_y - y) +\n\t\t       (new_z - z) * (new_z - z);\n\t    if (d2 < radius2 && d2 > 0) {\n\t\tidx[j * nsample + cnt] = k;\n\t\t++cnt;\n\t    }\n\t}\n    }\n}\n\nvoid query_ball_point_kernel_wrapper(int b, int n, int m, float radius,\n\t\t\t\t     int nsample, const float *new_xyz,\n\t\t\t\t     const float *xyz, const int *fps_idx, int *idx,\n\t\t\t\t     cudaStream_t stream) {\n\n    cudaError_t err;\n    query_ball_point_kernel<<<b, opt_n_threads(m), 0, stream>>>(\n\tb, n, m, radius, nsample, new_xyz, xyz, fps_idx, idx);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/group_points.c",
    "content": "#include <THC/THC.h>\n\n#include \"group_points_gpu.h\"\n\nextern THCState *state;\n\nint group_points_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t THCudaTensor *points_tensor,\n\t\t\t THCudaIntTensor *idx_tensor,\n\t\t\t THCudaTensor *out_tensor) {\n\n    const float *points = THCudaTensor_data(state, points_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n    float *out = THCudaTensor_data(state, out_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    group_points_kernel_wrapper(b, c, n, npoints, nsample, points, idx, out,\n\t\t\t\tstream);\n    return 1;\n}\n\nint group_points_grad_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t      THCudaTensor *grad_out_tensor,\n\t\t\t      THCudaIntTensor *idx_tensor,\n\t\t\t      THCudaTensor *grad_points_tensor) {\n\n    float *grad_points = THCudaTensor_data(state, grad_points_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n    const float *grad_out = THCudaTensor_data(state, grad_out_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    group_points_grad_kernel_wrapper(b, c, n, npoints, nsample, grad_out, idx,\n\t\t\t\t     grad_points, stream);\n    return 1;\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/group_points_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n#include \"group_points_gpu.h\"\n\n// input: points(b, c, n) idx(b, npoints, nsample)\n// output: out(b, c, npoints, nsample)\n__global__ void group_points_kernel(int b, int c, int n, int npoints,\n\t\t\t\t    int nsample,\n\t\t\t\t    const float *__restrict__ points,\n\t\t\t\t    const int *__restrict__ idx,\n\t\t\t\t    float *__restrict__ out) {\n    int batch_index = blockIdx.x;\n    points += batch_index * n * c;\n    idx += batch_index * npoints * nsample;\n    out += batch_index * npoints * nsample * c;\n\n    const int index = threadIdx.y * blockDim.x + threadIdx.x;\n    const int stride = blockDim.y * blockDim.x;\n    for (int i = index; i < c * npoints; i += stride) {\n\tconst int l = i / npoints;\n\tconst int j = i % npoints;\n\tfor (int k = 0; k < nsample; ++k) {\n\t    int ii = idx[j * nsample + k];\n\t    out[(l * npoints + j) * nsample + k] = points[l * n + ii];\n\t}\n    }\n}\n\nvoid group_points_kernel_wrapper(int b, int c, int n, int npoints, int nsample,\n\t\t\t\t const float *points, const int *idx,\n\t\t\t\t float *out, cudaStream_t stream) {\n\n    cudaError_t err;\n    group_points_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n\tb, c, n, npoints, nsample, points, idx, out);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n\n// input: grad_out(b, c, npoints, nsample), idx(b, npoints, nsample)\n// output: grad_points(b, c, n)\n__global__ void group_points_grad_kernel(int b, int c, int n, int npoints,\n\t\t\t\t\t int nsample,\n\t\t\t\t\t const float *__restrict__ grad_out,\n\t\t\t\t\t const int *__restrict__ idx,\n\t\t\t\t\t float *__restrict__ grad_points) {\n    int batch_index = blockIdx.x;\n    grad_out += batch_index * npoints * nsample * c;\n    idx += batch_index * npoints * nsample;\n    grad_points += batch_index * n * c;\n\n  
  const int index = threadIdx.y * blockDim.x + threadIdx.x;\n    const int stride = blockDim.y * blockDim.x;\n    for (int i = index; i < c * npoints; i += stride) {\n\tconst int l = i / npoints;\n\tconst int j = i % npoints;\n\tfor (int k = 0; k < nsample; ++k) {\n\t    int ii = idx[j * nsample + k];\n\t    atomicAdd(grad_points + l * n + ii,\n\t\t      grad_out[(l * npoints + j) * nsample + k]);\n\t}\n    }\n}\n\nvoid group_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t      int nsample, const float *grad_out,\n\t\t\t\t      const int *idx, float *grad_points,\n\t\t\t\t      cudaStream_t stream) {\n    cudaError_t err;\n    group_points_grad_kernel<<<b, opt_block_config(npoints, c), 0, stream>>>(\n\tb, c, n, npoints, nsample, grad_out, idx, grad_points);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/interpolate.c",
    "content": "#include <THC/THC.h>\n#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"interpolate_gpu.h\"\n\nextern THCState *state;\n\nvoid three_nn_wrapper(int b, int n, int m, THCudaTensor *unknown_tensor,\n\t\t      THCudaTensor *known_tensor, THCudaTensor *dist2_tensor,\n\t\t      THCudaIntTensor *idx_tensor) {\n    const float *unknown = THCudaTensor_data(state, unknown_tensor);\n    const float *known = THCudaTensor_data(state, known_tensor);\n    float *dist2 = THCudaTensor_data(state, dist2_tensor);\n    int *idx = THCudaIntTensor_data(state, idx_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    three_nn_kernel_wrapper(b, n, m, unknown, known, dist2, idx, stream);\n}\n\nvoid three_interpolate_wrapper(int b, int c, int m, int n,\n\t\t\t       THCudaTensor *points_tensor,\n\t\t\t       THCudaIntTensor *idx_tensor,\n\t\t\t       THCudaTensor *weight_tensor,\n\t\t\t       THCudaTensor *out_tensor) {\n\n    const float *points = THCudaTensor_data(state, points_tensor);\n    const float *weight = THCudaTensor_data(state, weight_tensor);\n    float *out = THCudaTensor_data(state, out_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    three_interpolate_kernel_wrapper(b, c, m, n, points, idx, weight, out,\n\t\t\t\t     stream);\n}\n\nvoid three_interpolate_grad_wrapper(int b, int c, int n, int m,\n\t\t\t\t    THCudaTensor *grad_out_tensor,\n\t\t\t\t    THCudaIntTensor *idx_tensor,\n\t\t\t\t    THCudaTensor *weight_tensor,\n\t\t\t\t    THCudaTensor *grad_points_tensor) {\n\n    const float *grad_out = THCudaTensor_data(state, grad_out_tensor);\n    const float *weight = THCudaTensor_data(state, weight_tensor);\n    float *grad_points = THCudaTensor_data(state, grad_points_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    
three_interpolate_grad_kernel_wrapper(b, c, n, m, grad_out, idx, weight,\n\t\t\t\t\t  grad_points, stream);\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/interpolate_gpu.cu",
    "content": "#include <math.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n#include \"interpolate_gpu.h\"\n\n// input: unknown(b, n, 3) known(b, m, 3)\n// output: dist2(b, n, 3), idx(b, n, 3)\n__global__ void three_nn_kernel(int b, int n, int m,\n\t\t\t\tconst float *__restrict__ unknown,\n\t\t\t\tconst float *__restrict__ known,\n\t\t\t\tfloat *__restrict__ dist2,\n\t\t\t\tint *__restrict__ idx) {\n    int batch_index = blockIdx.x;\n    unknown += batch_index * n * 3;\n    known += batch_index * m * 3;\n    dist2 += batch_index * n * 3;\n    idx += batch_index * n * 3;\n\n    int index = threadIdx.x;\n    int stride = blockDim.x;\n    for (int j = index; j < n; j += stride) {\n\tfloat ux = unknown[j * 3 + 0];\n\tfloat uy = unknown[j * 3 + 1];\n\tfloat uz = unknown[j * 3 + 2];\n\n\tdouble best1 = 1e40, best2 = 1e40, best3 = 1e40;\n\tint besti1 = 0, besti2 = 0, besti3 = 0;\n\tfor (int k = 0; k < m; ++k) {\n\t    float x = known[k * 3 + 0];\n\t    float y = known[k * 3 + 1];\n\t    float z = known[k * 3 + 2];\n\t    float d =\n\t\t(ux - x) * (ux - x) + (uy - y) * (uy - y) + (uz - z) * (uz - z);\n\t    if (d < best1) {\n\t\tbest3 = best2;\n\t\tbesti3 = besti2;\n\t\tbest2 = best1;\n\t\tbesti2 = besti1;\n\t\tbest1 = d;\n\t\tbesti1 = k;\n\t    } else if (d < best2) {\n\t\tbest3 = best2;\n\t\tbesti3 = besti2;\n\t\tbest2 = d;\n\t\tbesti2 = k;\n\t    } else if (d < best3) {\n\t\tbest3 = d;\n\t\tbesti3 = k;\n\t    }\n\t}\n\tdist2[j * 3 + 0] = best1;\n\tdist2[j * 3 + 1] = best2;\n\tdist2[j * 3 + 2] = best3;\n\n\tidx[j * 3 + 0] = besti1;\n\tidx[j * 3 + 1] = besti2;\n\tidx[j * 3 + 2] = besti3;\n    }\n}\n\nvoid three_nn_kernel_wrapper(int b, int n, int m, const float *unknown,\n\t\t\t     const float *known, float *dist2, int *idx,\n\t\t\t     cudaStream_t stream) {\n\n    cudaError_t err;\n    three_nn_kernel<<<b, opt_n_threads(n), 0, stream>>>(b, n, m, unknown, known,\n\t\t\t\t\t\t\tdist2, idx);\n\n    err = cudaGetLastError();\n    if 
(cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel \"\n\t\t\t\"failed : %s\\n\",\n\t\tcudaGetErrorString(err));\n\texit(-1);\n    }\n}\n\n// input: points(b, c, m), idx(b, n, 3), weight(b, n, 3)\n// output: out(b, c, n)\n__global__ void three_interpolate_kernel(int b, int c, int m, int n,\n\t\t\t\t\t const float *__restrict__ points,\n\t\t\t\t\t const int *__restrict__ idx,\n\t\t\t\t\t const float *__restrict__ weight,\n\t\t\t\t\t float *__restrict__ out) {\n    int batch_index = blockIdx.x;\n    points += batch_index * m * c;\n\n    idx += batch_index * n * 3;\n    weight += batch_index * n * 3;\n\n    out += batch_index * n * c;\n\n    const int index = threadIdx.y * blockDim.x + threadIdx.x;\n    const int stride = blockDim.y * blockDim.x;\n    for (int i = index; i < c * n; i += stride) {\n\tconst int l = i / n;\n\tconst int j = i % n;\n\tfloat w1 = weight[j * 3 + 0];\n\tfloat w2 = weight[j * 3 + 1];\n\tfloat w3 = weight[j * 3 + 2];\n\n\tint i1 = idx[j * 3 + 0];\n\tint i2 = idx[j * 3 + 1];\n\tint i3 = idx[j * 3 + 2];\n\n\tout[i] = points[l * m + i1] * w1 + points[l * m + i2] * w2 +\n\t\t points[l * m + i3] * w3;\n    }\n}\n\nvoid three_interpolate_kernel_wrapper(int b, int c, int m, int n,\n\t\t\t\t      const float *points, const int *idx,\n\t\t\t\t      const float *weight, float *out,\n\t\t\t\t      cudaStream_t stream) {\n\n    cudaError_t err;\n    three_interpolate_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n\tb, c, m, n, points, idx, weight, out);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel \"\n\t\t\t\"failed : %s\\n\",\n\t\tcudaGetErrorString(err));\n\texit(-1);\n    }\n}\n\n// input: grad_out(b, c, n), idx(b, n, 3), weight(b, n, 3)\n// output: grad_points(b, c, m)\n\n__global__ void three_interpolate_grad_kernel(\n    int b, int c, int n, int m, const float *__restrict__ grad_out,\n    const int *__restrict__ idx, const float *__restrict__ weight,\n    float *__restrict__ grad_points) {\n 
   int batch_index = blockIdx.x;\n    grad_out += batch_index * n * c;\n    idx += batch_index * n * 3;\n    weight += batch_index * n * 3;\n    grad_points += batch_index * m * c;\n\n    const int index = threadIdx.y * blockDim.x + threadIdx.x;\n    const int stride = blockDim.y * blockDim.x;\n    for (int i = index; i < c * n; i += stride) {\n\tconst int l = i / n;\n\tconst int j = i % n;\n\tfloat w1 = weight[j * 3 + 0];\n\tfloat w2 = weight[j * 3 + 1];\n\tfloat w3 = weight[j * 3 + 2];\n\n\tint i1 = idx[j * 3 + 0];\n\tint i2 = idx[j * 3 + 1];\n\tint i3 = idx[j * 3 + 2];\n\n\tatomicAdd(grad_points + l * m + i1, grad_out[i] * w1);\n\tatomicAdd(grad_points + l * m + i2, grad_out[i] * w2);\n\tatomicAdd(grad_points + l * m + i3, grad_out[i] * w3);\n    }\n}\n\nvoid three_interpolate_grad_kernel_wrapper(int b, int c, int n, int m,\n\t\t\t\t\t   const float *grad_out,\n\t\t\t\t\t   const int *idx, const float *weight,\n\t\t\t\t\t   float *grad_points,\n\t\t\t\t\t   cudaStream_t stream) {\n\n    cudaError_t err;\n    three_interpolate_grad_kernel<<<b, opt_block_config(n, c), 0, stream>>>(\n\tb, c, n, m, grad_out, idx, weight, grad_points);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel \"\n\t\t\t\"failed : %s\\n\",\n\t\tcudaGetErrorString(err));\n\texit(-1);\n    }\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/sampling.c",
    "content": "#include <THC/THC.h>\n\n#include \"sampling_gpu.h\"\n\nextern THCState *state;\n\nint gather_points_wrapper(int b, int c, int n, int npoints,\n\t\t\t  THCudaTensor *points_tensor,\n\t\t\t  THCudaIntTensor *idx_tensor,\n\t\t\t  THCudaTensor *out_tensor) {\n\n    const float *points = THCudaTensor_data(state, points_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n    float *out = THCudaTensor_data(state, out_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    gather_points_kernel_wrapper(b, c, n, npoints, points, idx, out, stream);\n    return 1;\n}\n\nint gather_points_grad_wrapper(int b, int c, int n, int npoints,\n\t\t\t       THCudaTensor *grad_out_tensor,\n\t\t\t       THCudaIntTensor *idx_tensor,\n\t\t\t       THCudaTensor *grad_points_tensor) {\n\n    const float *grad_out = THCudaTensor_data(state, grad_out_tensor);\n    const int *idx = THCudaIntTensor_data(state, idx_tensor);\n    float *grad_points = THCudaTensor_data(state, grad_points_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    gather_points_grad_kernel_wrapper(b, c, n, npoints, grad_out, idx,\n\t\t\t\t      grad_points, stream);\n    return 1;\n}\n\nint furthest_point_sampling_wrapper(int b, int n, int m,\n\t\t\t\t    THCudaTensor *points_tensor,\n\t\t\t\t    THCudaTensor *temp_tensor,\n\t\t\t\t    THCudaIntTensor *idx_tensor) {\n\n    const float *points = THCudaTensor_data(state, points_tensor);\n    float *temp = THCudaTensor_data(state, temp_tensor);\n    int *idx = THCudaIntTensor_data(state, idx_tensor);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    furthest_point_sampling_kernel_wrapper(b, n, m, points, temp, idx, stream);\n    return 1;\n}\n"
  },
  {
    "path": "rs_cnn/utils/csrc/sampling_gpu.cu",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n\n#include \"cuda_utils.h\"\n#include \"sampling_gpu.h\"\n\n// input: points(b, c, n) idx(b, m)\n// output: out(b, c, m)\n__global__ void gather_points_kernel(int b, int c, int n, int m,\n\t\t\t\t     const float *__restrict__ points,\n\t\t\t\t     const int *__restrict__ idx,\n\t\t\t\t     float *__restrict__ out) {\n    for (int i = blockIdx.x; i < b; i += gridDim.x) {\n\tfor (int l = blockIdx.y; l < c; l += gridDim.y) {\n\t    for (int j = threadIdx.x; j < m; j += blockDim.x) {\n\t\tint a = idx[i * m + j];\n\t\tout[(i * c + l) * m + j] = points[(i * c + l) * n + a];\n\t    }\n\t}\n    }\n}\n\nvoid gather_points_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t  const float *points, const int *idx,\n\t\t\t\t  float *out, cudaStream_t stream) {\n\n    cudaError_t err;\n    gather_points_kernel<<<dim3(b, c, 1), opt_n_threads(npoints), 0, stream>>>(\n\tb, c, n, npoints, points, idx, out);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n\n// input: grad_out(b, c, m) idx(b, m)\n// output: grad_points(b, c, n)\n__global__ void gather_points_grad_kernel(int b, int c, int n, int m,\n\t\t\t\t\t  const float *__restrict__ grad_out,\n\t\t\t\t\t  const int *__restrict__ idx,\n\t\t\t\t\t  float *__restrict__ grad_points) {\n    for (int i = blockIdx.x; i < b; i += gridDim.x) {\n\tfor (int l = blockIdx.y; l < c; l += gridDim.y) {\n\t    for (int j = threadIdx.x; j < m; j += blockDim.x) {\n\t\tint a = idx[i * m + j];\n\t\tatomicAdd(grad_points + (i * c + l) * n + a,\n\t\t\t  grad_out[(i * c + l) * m + j]);\n\t    }\n\t}\n    }\n}\n\nvoid gather_points_grad_kernel_wrapper(int b, int c, int n, int npoints,\n\t\t\t\t       const float *grad_out, const int *idx,\n\t\t\t\t       float *grad_points,\n\t\t\t\t       cudaStream_t stream) {\n\n    cudaError_t err;\n    gather_points_grad_kernel<<<dim3(b, c, 
1), opt_n_threads(npoints), 0,\n\t\t\t\tstream>>>(b, c, n, npoints, grad_out, idx,\n\t\t\t\t\t  grad_points);\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n\n__device__ void __update(float *__restrict__ dists, int *__restrict__ dists_i,\n\t\t\t int idx1, int idx2) {\n    const float v1 = dists[idx1], v2 = dists[idx2];\n    const int i1 = dists_i[idx1], i2 = dists_i[idx2];\n    dists[idx1] = max(v1, v2);\n    dists_i[idx1] = v2 > v1 ? i2 : i1;\n}\n\n// Input dataset: (b, n, 3), tmp: (b, n)\n// Output idxs (b, m)\ntemplate <unsigned int block_size>\n__global__ void furthest_point_sampling_kernel(\n    int b, int n, int m, const float *__restrict__ dataset,\n    float *__restrict__ temp, int *__restrict__ idxs) {\n    if (m <= 0)\n\treturn;\n    __shared__ float dists[block_size];\n    __shared__ int dists_i[block_size];\n\n    int batch_index = blockIdx.x;\n    dataset += batch_index * n * 3;\n    temp += batch_index * n;\n    idxs += batch_index * m;\n\n    int tid = threadIdx.x;\n    const int stride = block_size;\n\n    int old = 0;\n    if (threadIdx.x == 0)\n\tidxs[0] = old;\n\n    __syncthreads();\n    for (int j = 1; j < m; j++) {\n\tint besti = 0;\n\tfloat best = -1;\n\tfloat x1 = dataset[old * 3 + 0];\n\tfloat y1 = dataset[old * 3 + 1];\n\tfloat z1 = dataset[old * 3 + 2];\n\tfor (int k = tid; k < n; k += stride) {\n\t    float x2, y2, z2;\n\t    x2 = dataset[k * 3 + 0];\n\t    y2 = dataset[k * 3 + 1];\n\t    z2 = dataset[k * 3 + 2];\n\t    float mag = (x2 * x2) + (y2 * y2) + (z2 * z2);\n\t    if (mag <= 1e-3)\n\t\tcontinue;\n\n\t    float d = (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1) +\n\t\t      (z2 - z1) * (z2 - z1);\n\n\t    float d2 = min(d, temp[k]);\n\t    temp[k] = d2;\n\t    besti = d2 > best ? k : besti;\n\t    best = d2 > best ? 
d2 : best;\n\t}\n\tdists[tid] = best;\n\tdists_i[tid] = besti;\n\t__syncthreads();\n\n\tif (block_size >= 512) {\n\t    if (tid < 256) {\n\t\t__update(dists, dists_i, tid, tid + 256);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 256) {\n\t    if (tid < 128) {\n\t\t__update(dists, dists_i, tid, tid + 128);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 128) {\n\t    if (tid < 64) {\n\t\t__update(dists, dists_i, tid, tid + 64);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 64) {\n\t    if (tid < 32) {\n\t\t__update(dists, dists_i, tid, tid + 32);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 32) {\n\t    if (tid < 16) {\n\t\t__update(dists, dists_i, tid, tid + 16);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 16) {\n\t    if (tid < 8) {\n\t\t__update(dists, dists_i, tid, tid + 8);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 8) {\n\t    if (tid < 4) {\n\t\t__update(dists, dists_i, tid, tid + 4);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 4) {\n\t    if (tid < 2) {\n\t\t__update(dists, dists_i, tid, tid + 2);\n\t    }\n\t    __syncthreads();\n\t}\n\tif (block_size >= 2) {\n\t    if (tid < 1) {\n\t\t__update(dists, dists_i, tid, tid + 1);\n\t    }\n\t    __syncthreads();\n\t}\n\n\told = dists_i[0];\n\tif (tid == 0)\n\t    idxs[j] = old;\n    }\n}\n\nvoid furthest_point_sampling_kernel_wrapper(int b, int n, int m,\n\t\t\t\t\t    const float *dataset, float *temp,\n\t\t\t\t\t    int *idxs, cudaStream_t stream) {\n\n    cudaError_t err;\n    unsigned int n_threads = opt_n_threads(n);\n\n    switch (n_threads) {\n    case 512:\n\tfurthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 256:\n\tfurthest_point_sampling_kernel<256><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 128:\n\tfurthest_point_sampling_kernel<128><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, 
temp, idxs);\n\tbreak;\n    case 64:\n\tfurthest_point_sampling_kernel<64><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 32:\n\tfurthest_point_sampling_kernel<32><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 16:\n\tfurthest_point_sampling_kernel<16><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 8:\n\tfurthest_point_sampling_kernel<8><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 4:\n\tfurthest_point_sampling_kernel<4><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 2:\n\tfurthest_point_sampling_kernel<2><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    case 1:\n\tfurthest_point_sampling_kernel<1><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n\tbreak;\n    default:\n\tfurthest_point_sampling_kernel<512><<<b, n_threads, 0, stream>>>(\n\t    b, n, m, dataset, temp, idxs);\n    }\n\n    err = cudaGetLastError();\n    if (cudaSuccess != err) {\n\tfprintf(stderr, \"CUDA kernel failed : %s\\n\", cudaGetErrorString(err));\n\texit(-1);\n    }\n}\n"
  },
  {
    "path": "rs_cnn/utils/linalg_utils.py",
    "content": "import torch\nfrom enum import Enum\n\nPDist2Order = Enum('PDist2Order', 'd_first d_second')\n\n\ndef pdist2(\n        X: torch.Tensor,\n        Z: torch.Tensor = None,\n        order: PDist2Order = PDist2Order.d_second\n) -> torch.Tensor:\n    r\"\"\" Calculates the pairwise distance between X and Z\n\n    D[b, i, j] = l2 distance X[b, i] and Z[b, j]\n\n    Parameters\n    ---------\n    X : torch.Tensor\n        X is a (B, N, d) tensor.  There are B batches, and N vectors of dimension d\n    Z: torch.Tensor\n        Z is a (B, M, d) tensor.  If Z is None, then Z = X\n\n    Returns\n    -------\n    torch.Tensor\n        Distance matrix is size (B, N, M)\n    \"\"\"\n\n    if order == PDist2Order.d_second:\n        if X.dim() == 2:\n            X = X.unsqueeze(0)\n        if Z is None:\n            Z = X\n            G = X @ Z.transpose(-2, -1)\n            S = (X * X).sum(-1, keepdim=True)\n            R = S.transpose(-2, -1)\n        else:\n            if Z.dim() == 2:\n                Z = Z.unsqueeze(0)\n            G = X @ Z.transpose(-2, -1)\n            S = (X * X).sum(-1, keepdim=True)\n            R = (Z * Z).sum(-1, keepdim=True).transpose(-2, -1)\n    else:\n        if X.dim() == 2:\n            X = X.unsqueeze(0)\n        if Z is None:\n            Z = X\n            G = X.transpose(-2, -1) @ Z\n            R = (X * X).sum(-2, keepdim=True)\n            S = R.transpose(-2, -1)\n        else:\n            if Z.dim() == 2:\n                Z = Z.unsqueeze(0)\n            G = X.transpose(-2, -1) @ Z\n            S = (X * X).sum(-2, keepdim=True).transpose(-2, -1)\n            R = (Z * Z).sum(-2, keepdim=True)\n\n    return torch.abs(R + S - 2 * G).squeeze(0)\n\n\ndef pdist2_slow(X, Z=None):\n    if Z is None: Z = X\n    D = torch.zeros(X.size(0), X.size(2), Z.size(2))\n\n    for b in range(D.size(0)):\n        for i in range(D.size(1)):\n            for j in range(D.size(2)):\n                D[b, i, j] = torch.dist(X[b, :, i], Z[b, :, 
j])\n    return D\n\n\nif __name__ == \"__main__\":\n    X = torch.randn(2, 3, 5)\n    Z = torch.randn(2, 3, 3)\n\n    print(pdist2(X, order=PDist2Order.d_first))\n    print(pdist2_slow(X))\n    print(torch.dist(pdist2(X, order=PDist2Order.d_first), pdist2_slow(X)))\n"
  },
  {
    "path": "rs_cnn/utils/pointnet2_modules.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport pointnet2_utils\nimport pytorch_utils as pt_utils\nfrom typing import List\nimport numpy as np\nimport time\nimport math\n\nclass _PointnetSAModuleBase(nn.Module):\n\n    def __init__(self):\n        super().__init__()\n        self.npoint = None\n        self.groupers = None\n        self.mlps = None\n\n    def forward(self, xyz: torch.Tensor,\n                features: torch.Tensor = None) -> (torch.Tensor, torch.Tensor):\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor of the xyz coordinates of the points\n        features : torch.Tensor\n            (B, N, C) tensor of the descriptors of the points\n\n        Returns\n        -------\n        new_xyz : torch.Tensor\n            (B, npoint, 3) tensor of the new points' xyz\n        new_features : torch.Tensor\n            (B, npoint, \\sum_k(mlps[k][-1])) tensor of the new_points descriptors\n        \"\"\"\n\n        new_features_list = []\n        xyz_flipped = xyz.transpose(1, 2).contiguous()\n        if self.npoint is not None:\n            fps_idx = pointnet2_utils.furthest_point_sample(xyz, self.npoint)  # (B, npoint)\n            new_xyz = pointnet2_utils.gather_operation(xyz_flipped, fps_idx).transpose(1, 2).contiguous()\n            fps_idx = fps_idx.data\n        else:\n            new_xyz = None\n            fps_idx = None\n        \n        for i in range(len(self.groupers)):\n            new_features = self.groupers[i](xyz, new_xyz, features, fps_idx) if self.npoint is not None else self.groupers[i](xyz, new_xyz, features)  # (B, C, npoint, nsample)\n            new_features = self.mlps[i](\n                new_features\n            )  # (B, mlp[-1], npoint)\n\n            new_features_list.append(new_features)\n        \n        return new_xyz, torch.cat(new_features_list, dim=1)\n\n\nclass PointnetSAModuleMSG(_PointnetSAModuleBase):\n  
  r\"\"\"Pointnet set abstraction layer with multiscale grouping\n\n    Parameters\n    ----------\n    npoint : int\n        Number of points\n    radii : list of float32\n        list of radii to group with\n    nsamples : list of int32\n        Number of samples in each ball query\n    mlps : list of list of int32\n        Spec of the pointnet before the global max_pool for each scale\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n            self,\n            *,\n            npoint: int,\n            radii: List[float],\n            nsamples: List[int],\n            mlps: List[List[int]],\n            use_xyz: bool = True,\n            bias = True,\n            init = nn.init.kaiming_normal,\n            first_layer = False,\n            relation_prior = 1\n    ):\n        super().__init__()\n        assert len(radii) == len(nsamples) == len(mlps)\n        self.npoint = npoint\n        self.groupers = nn.ModuleList()\n        self.mlps = nn.ModuleList()\n        \n        # initialize shared mapping functions\n        C_in = (mlps[0][0] + 3) if use_xyz else mlps[0][0]\n        C_out = mlps[0][1]\n        \n        if relation_prior == 0:\n            in_channels = 1\n        elif relation_prior == 1 or relation_prior == 2:\n            in_channels = 10\n        else:\n            assert False, \"relation_prior can only be 0, 1, 2.\"\n        \n        if first_layer:\n            mapping_func1 = nn.Conv2d(in_channels = in_channels, out_channels = math.floor(C_out / 2), kernel_size = (1, 1), \n                                      stride = (1, 1), bias = bias)\n            mapping_func2 = nn.Conv2d(in_channels = math.floor(C_out / 2), out_channels = 16, kernel_size = (1, 1), \n                                  stride = (1, 1), bias = bias)\n            xyz_raising = nn.Conv2d(in_channels = C_in, out_channels = 16, kernel_size = (1, 1), \n                                  stride = (1, 1), bias = bias)\n            
init(xyz_raising.weight)\n            if bias:\n                nn.init.constant(xyz_raising.bias, 0)\n        elif npoint is not None:\n            mapping_func1 = nn.Conv2d(in_channels = in_channels, out_channels = math.floor(C_out / 4), kernel_size = (1, 1), \n                                      stride = (1, 1), bias = bias)\n            mapping_func2 = nn.Conv2d(in_channels = math.floor(C_out / 4), out_channels = C_in, kernel_size = (1, 1), \n                                  stride = (1, 1), bias = bias)\n        if npoint is not None:\n            init(mapping_func1.weight)\n            init(mapping_func2.weight)\n            if bias:\n                nn.init.constant(mapping_func1.bias, 0)\n                nn.init.constant(mapping_func2.bias, 0)    \n                     \n            # channel raising mapping\n            cr_mapping = nn.Conv1d(in_channels = C_in if not first_layer else 16, out_channels = C_out, kernel_size = 1, \n                                      stride = 1, bias = bias)\n            init(cr_mapping.weight)\n            nn.init.constant(cr_mapping.bias, 0)\n        \n        if first_layer:\n            mapping = [mapping_func1, mapping_func2, cr_mapping, xyz_raising]\n        elif npoint is not None:\n            mapping = [mapping_func1, mapping_func2, cr_mapping]\n        \n        for i in range(len(radii)):\n            radius = radii[i]\n            nsample = nsamples[i]\n            self.groupers.append(\n                pointnet2_utils.QueryAndGroup(radius, nsample, use_xyz=use_xyz)\n                if npoint is not None else pointnet2_utils.GroupAll(use_xyz)\n            )\n            mlp_spec = mlps[i]\n            if use_xyz:\n                mlp_spec[0] += 3\n            if npoint is not None:\n                self.mlps.append(pt_utils.SharedRSConv(mlp_spec, mapping = mapping, relation_prior = relation_prior, first_layer = first_layer))\n            else:   # global convolutional pooling\n                
self.mlps.append(pt_utils.GloAvgConv(C_in = C_in, C_out = C_out))\n\n\nclass PointnetSAModule(PointnetSAModuleMSG):\n    r\"\"\"Pointnet set abstraction layer\n\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    radius : float\n        Radius of ball\n    nsample : int\n        Number of samples in the ball query\n    mlp : list\n        Spec of the pointnet before the global max_pool\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n            self,\n            *,\n            mlp: List[int],\n            npoint: int = None,\n            radius: float = None,\n            nsample: int = None,\n            use_xyz: bool = True,\n    ):\n        super().__init__(\n            mlps=[mlp],\n            npoint=npoint,\n            radii=[radius],\n            nsamples=[nsample],\n            use_xyz=use_xyz\n        )\n\n\nclass PointnetFPModule(nn.Module):\n    r\"\"\"Propagates the features of one set to another\n\n    Parameters\n    ----------\n    mlp : list\n        Pointnet module parameters\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, *, mlp: List[int], bn: bool = True):\n        super().__init__()\n        self.mlp = pt_utils.SharedMLP(mlp, bn=bn)\n\n    def forward(\n            self, unknown: torch.Tensor, known: torch.Tensor,\n            unknow_feats: torch.Tensor, known_feats: torch.Tensor\n    ) -> torch.Tensor:\n        r\"\"\"\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of the xyz positions of the unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of the xyz positions of the known features\n        unknow_feats : torch.Tensor\n            (B, C1, n) tensor of the features to be propagated to\n        known_feats : torch.Tensor\n            (B, C2, m) tensor of features to be propagated\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, mlp[-1], n) 
tensor of the features of the unknown features\n        \"\"\"\n\n        dist, idx = pointnet2_utils.three_nn(unknown, known)\n        dist_recip = 1.0 / (dist + 1e-8)\n        norm = torch.sum(dist_recip, dim=2, keepdim=True)\n        weight = dist_recip / norm\n\n        interpolated_feats = pointnet2_utils.three_interpolate(\n            known_feats, idx, weight\n        )\n        if unknow_feats is not None:\n            new_features = torch.cat([interpolated_feats, unknow_feats],\n                                     dim=1)  #(B, C2 + C1, n)\n        else:\n            new_features = interpolated_feats\n        \n        new_features = new_features.unsqueeze(-1)\n        new_features = self.mlp(new_features)\n\n        return new_features.squeeze(-1)\n\n\nif __name__ == \"__main__\":\n    from torch.autograd import Variable\n    torch.manual_seed(1)\n    torch.cuda.manual_seed_all(1)\n    xyz = Variable(torch.randn(2, 9, 3).cuda(), requires_grad=True)\n    xyz_feats = Variable(torch.randn(2, 9, 6).cuda(), requires_grad=True)\n\n    test_module = PointnetSAModuleMSG(\n        npoint=2, radii=[5.0, 10.0], nsamples=[6, 3], mlps=[[9, 3], [9, 6]]\n    )\n    test_module.cuda()\n    print(test_module(xyz, xyz_feats))\n\n    #  test_module = PointnetFPModule(mlp=[6, 6])\n    #  test_module.cuda()\n    #  from torch.autograd import gradcheck\n    #  inputs = (xyz, xyz, None, xyz_feats)\n    #  test = gradcheck(test_module, inputs, eps=1e-6, atol=1e-4)\n    #  print(test)\n\n    for _ in range(1):\n        _, new_features = test_module(xyz, xyz_feats)\n        new_features.backward(\n            torch.cuda.FloatTensor(*new_features.size()).fill_(1)\n        )\n        print(new_features)\n        print(xyz.grad)\n"
  },
  {
    "path": "rs_cnn/utils/pointnet2_modules_updated.py",
    "content": "from typing import *\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport PCT_Pytorch.pointnet2_ops_lib.pointnet2_ops.pointnet2_utils as tp\nimport pytorch_utils as pt_utils\nfrom typing import List\nimport numpy as np\nimport time\nimport math\n\nclass QueryAndGroup(nn.Module):\n    r\"\"\"\n    Groups with a ball query of radius\n    Parameters\n    ---------\n    radius : float32\n        Radius of ball\n    nsample : int32\n        Maximum number of points to gather in the ball\n    \"\"\"\n\n    def __init__(self, radius: float, nsample: int, use_xyz: bool = True):\n        super().__init__()\n        self.radius, self.nsample, self.use_xyz = radius, nsample, use_xyz\n\n    def forward(\n            self,\n            xyz: torch.Tensor,\n            new_xyz: torch.Tensor,\n            features: torch.Tensor = None,\n            fps_idx: torch.IntTensor = None\n    ) -> Tuple[torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            centroids (B, npoint, 3)\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, 3 + C, npoint, nsample) tensor\n        \"\"\"\n\n        # idx = tp.ball_query(self.radius, self.nsample, xyz, new_xyz, mode='dense')\n        idx = tp.ball_query(self.radius, self.nsample, xyz, new_xyz)\n        xyz_trans = xyz.transpose(1, 2).contiguous()\n        grouped_xyz = tp.grouping_operation(\n            xyz_trans, idx\n        )  # (B, 3, npoint, nsample)\n        raw_grouped_xyz = grouped_xyz\n        grouped_xyz -= new_xyz.transpose(1, 2).unsqueeze(-1)\n\n        if features is not None:\n            grouped_features = tp.grouping_operation(features, idx)\n            if self.use_xyz:\n                new_features = torch.cat([raw_grouped_xyz, 
grouped_xyz, grouped_features],\n                                         dim=1)  # (B, C + 3 + 3, npoint, nsample)\n            else:\n                new_features = grouped_features\n        else:\n            assert self.use_xyz, \"Cannot have features=None and use_xyz=False!\"\n            new_features = torch.cat([raw_grouped_xyz, grouped_xyz], dim = 1)\n\n        return new_features\n\n\nclass GroupAll(nn.Module):\n    r\"\"\"\n    Groups all features\n    Parameters\n    ---------\n    \"\"\"\n\n    def __init__(self, use_xyz: bool = True):\n        super().__init__()\n        self.use_xyz = use_xyz\n\n    def forward(\n            self,\n            xyz: torch.Tensor,\n            new_xyz: torch.Tensor,\n            features: torch.Tensor = None\n    ) -> Tuple[torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            Ignored\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, C + 3, 1, N) tensor\n        \"\"\"\n\n        grouped_xyz = xyz.transpose(1, 2).unsqueeze(2)\n        if features is not None:\n            grouped_features = features.unsqueeze(2)\n            if self.use_xyz:\n                new_features = torch.cat([grouped_xyz, grouped_features],\n                                         dim=1)  # (B, 3 + C, 1, N)\n            else:\n                new_features = grouped_features\n        else:\n            new_features = grouped_xyz\n\n        return new_features\n\nclass _PointnetSAModuleBase(nn.Module):\n\n    def __init__(self):\n        super().__init__()\n        self.npoint = None\n        self.groupers = None\n        self.mlps = None\n\n    def forward(self, xyz: torch.Tensor,\n                features: torch.Tensor = None) -> (torch.Tensor, torch.Tensor):\n        
r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor of the xyz coordinates of the points\n        features : torch.Tensor\n            (B, N, C) tensor of the descriptors of the points\n        Returns\n        -------\n        new_xyz : torch.Tensor\n            (B, npoint, 3) tensor of the new points' xyz\n        new_features : torch.Tensor\n            (B, npoint, \\sum_k(mlps[k][-1])) tensor of the new_points descriptors\n        \"\"\"\n\n        new_features_list = []\n        xyz_flipped = xyz.transpose(1, 2).contiguous()\n        if self.npoint is not None:\n            fps_idx = tp.furthest_point_sample(xyz, self.npoint)  # (B, npoint)\n            new_xyz = tp.gather_operation(xyz_flipped, fps_idx).transpose(1, 2).contiguous()\n            fps_idx = fps_idx.data\n        else:\n            new_xyz = None\n            fps_idx = None\n\n        for i in range(len(self.groupers)):\n            new_features = self.groupers[i](xyz, new_xyz, features, fps_idx) if self.npoint is not None else self.groupers[i](xyz, new_xyz, features)  # (B, C, npoint, nsample)\n            new_features = self.mlps[i](\n                new_features\n            )  # (B, mlp[-1], npoint)\n\n            new_features_list.append(new_features)\n\n        return new_xyz, torch.cat(new_features_list, dim=1)\n\n\nclass PointnetSAModuleMSG(_PointnetSAModuleBase):\n    r\"\"\"Pointnet set abstraction layer with multiscale grouping\n    Parameters\n    ----------\n    npoint : int\n        Number of points\n    radii : list of float32\n        list of radii to group with\n    nsamples : list of int32\n        Number of samples in each ball query\n    mlps : list of list of int32\n        Spec of the pointnet before the global max_pool for each scale\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n            self,\n            *,\n            npoint: int,\n            radii: List[float],\n            nsamples: 
List[int],\n            mlps: List[List[int]],\n            use_xyz: bool = True,\n            bias = True,\n            init = nn.init.kaiming_normal_,\n            first_layer = False,\n            relation_prior = 1\n    ):\n        super().__init__()\n        assert len(radii) == len(nsamples) == len(mlps)\n        self.npoint = npoint\n        self.groupers = nn.ModuleList()\n        self.mlps = nn.ModuleList()\n\n        # initialize shared mapping functions\n        C_in = (mlps[0][0] + 3) if use_xyz else mlps[0][0]\n        C_out = mlps[0][1]\n\n        if relation_prior == 0:\n            in_channels = 1\n        elif relation_prior == 1 or relation_prior == 2:\n            in_channels = 10\n        else:\n            assert False, \"relation_prior can only be 0, 1, 2.\"\n\n        if first_layer:\n            mapping_func1 = nn.Conv2d(in_channels = in_channels, out_channels = math.floor(C_out / 2), kernel_size = (1, 1),\n                                      stride = (1, 1), bias = bias)\n            mapping_func2 = nn.Conv2d(in_channels = math.floor(C_out / 2), out_channels = 16, kernel_size = (1, 1),\n                                  stride = (1, 1), bias = bias)\n            xyz_raising = nn.Conv2d(in_channels = C_in, out_channels = 16, kernel_size = (1, 1),\n                                  stride = (1, 1), bias = bias)\n            init(xyz_raising.weight)\n            if bias:\n                nn.init.constant_(xyz_raising.bias, 0)\n        elif npoint is not None:\n            mapping_func1 = nn.Conv2d(in_channels = in_channels, out_channels = math.floor(C_out / 4), kernel_size = (1, 1),\n                                      stride = (1, 1), bias = bias)\n            mapping_func2 = nn.Conv2d(in_channels = math.floor(C_out / 4), out_channels = C_in, kernel_size = (1, 1),\n                                  stride = (1, 1), bias = bias)\n        if npoint is not None:\n            init(mapping_func1.weight)\n            
init(mapping_func2.weight)\n            if bias:\n                nn.init.constant_(mapping_func1.bias, 0)\n                nn.init.constant_(mapping_func2.bias, 0)\n\n            # channel raising mapping\n            cr_mapping = nn.Conv1d(in_channels = C_in if not first_layer else 16, out_channels = C_out, kernel_size = 1,\n                                      stride = 1, bias = bias)\n            init(cr_mapping.weight)\n            nn.init.constant_(cr_mapping.bias, 0)\n\n        if first_layer:\n            mapping = [mapping_func1, mapping_func2, cr_mapping, xyz_raising]\n        elif npoint is not None:\n            mapping = [mapping_func1, mapping_func2, cr_mapping]\n\n        for i in range(len(radii)):\n            radius = radii[i]\n            nsample = nsamples[i]\n            self.groupers.append(\n                QueryAndGroup(radius, nsample, use_xyz=use_xyz)\n                if npoint is not None else GroupAll(use_xyz)\n            )\n            mlp_spec = mlps[i]\n            if use_xyz:\n                mlp_spec[0] += 3\n            if npoint is not None:\n                self.mlps.append(pt_utils.SharedRSConv(mlp_spec, mapping = mapping, relation_prior = relation_prior, first_layer = first_layer))\n            else:   # global convolutional pooling\n                self.mlps.append(pt_utils.GloAvgConv(C_in = C_in, C_out = C_out))\n\n\nclass PointnetSAModule(PointnetSAModuleMSG):\n    r\"\"\"Pointnet set abstrction layer\n    Parameters\n    ----------\n    npoint : int\n        Number of features\n    radius : float\n        Radius of ball\n    nsample : int\n        Number of samples in the ball query\n    mlp : list\n        Spec of the pointnet before the global max_pool\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(\n            self,\n            *,\n            mlp: List[int],\n            npoint: int = None,\n            radius: float = None,\n            nsample: int = None,\n            use_xyz: bool = 
True,\n    ):\n        super().__init__(\n            mlps=[mlp],\n            npoint=npoint,\n            radii=[radius],\n            nsamples=[nsample],\n            use_xyz=use_xyz\n        )\n\n\nclass PointnetFPModule(nn.Module):\n    r\"\"\"Propagates the features of one set to another\n    Parameters\n    ----------\n    mlp : list\n        Pointnet module parameters\n    bn : bool\n        Use batchnorm\n    \"\"\"\n\n    def __init__(self, *, mlp: List[int], bn: bool = True):\n        super().__init__()\n        self.mlp = pt_utils.SharedMLP(mlp, bn=bn)\n\n    def forward(\n            self, unknown: torch.Tensor, known: torch.Tensor,\n            unknow_feats: torch.Tensor, known_feats: torch.Tensor\n    ) -> torch.Tensor:\n        r\"\"\"\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of the xyz positions of the unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of the xyz positions of the known features\n        unknow_feats : torch.Tensor\n            (B, C1, n) tensor of the features to be propagated to\n        known_feats : torch.Tensor\n            (B, C2, m) tensor of features to be propagated\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, mlp[-1], n) tensor of the features of the unknown features\n        \"\"\"\n\n        dist, idx = tp.three_nn(unknown, known)\n        dist_recip = 1.0 / (dist + 1e-8)\n        norm = torch.sum(dist_recip, dim=2, keepdim=True)\n        weight = dist_recip / norm\n\n        interpolated_feats = tp.three_interpolate(\n            known_feats, idx, weight\n        )\n        if unknow_feats is not None:\n            new_features = torch.cat([interpolated_feats, unknow_feats],\n                                     dim=1)  #(B, C2 + C1, n)\n        else:\n            new_features = interpolated_feats\n\n        new_features = new_features.unsqueeze(-1)\n        new_features = 
self.mlp(new_features)\n\n        return new_features.squeeze(-1)\n\n\nif __name__ == \"__main__\":\n    from torch.autograd import Variable\n    torch.manual_seed(1)\n    torch.cuda.manual_seed_all(1)\n    xyz = Variable(torch.randn(2, 9, 3).cuda(), requires_grad=True)\n    xyz_feats = Variable(torch.randn(2, 9, 6).cuda(), requires_grad=True)\n\n    test_module = PointnetSAModuleMSG(\n        npoint=2, radii=[5.0, 10.0], nsamples=[6, 3], mlps=[[9, 3], [9, 6]]\n    )\n    test_module.cuda()\n    print(test_module(xyz, xyz_feats))\n\n    #  test_module = PointnetFPModule(mlp=[6, 6])\n    #  test_module.cuda()\n    #  from torch.autograd import gradcheck\n    #  inputs = (xyz, xyz, None, xyz_feats)\n    #  test = gradcheck(test_module, inputs, eps=1e-6, atol=1e-4)\n    #  print(test)\n\n    for _ in range(1):\n        _, new_features = test_module(xyz, xyz_feats)\n        new_features.backward(\n            torch.cuda.FloatTensor(*new_features.size()).fill_(1)\n        )\n        print(new_features)\n        print(xyz.grad)"
  },
  {
    "path": "rs_cnn/utils/pointnet2_utils.py",
    "content": "import torch\nfrom torch.autograd import Variable\nfrom torch.autograd import Function\nimport torch.nn.functional as F\nimport torch.nn as nn\nfrom linalg_utils import pdist2, PDist2Order\nfrom collections import namedtuple\nimport pytorch_utils as pt_utils\nfrom typing import List, Tuple\n\nfrom _ext import pointnet2\n\n\nclass RandomDropout(nn.Module):\n\n    def __init__(self, p=0.5, inplace=False):\n        super().__init__()\n        self.p = p\n        self.inplace = inplace\n\n    def forward(self, X):\n        theta = torch.Tensor(1).uniform_(0, self.p)[0]\n        return pt_utils.feature_dropout_no_scaling(\n            X, theta, self.training, self.inplace\n        )\n\n\nclass FurthestPointSampling(Function):\n\n    @staticmethod\n    def forward(ctx, xyz: torch.Tensor, npoint: int) -> torch.Tensor:\n        r\"\"\"\n        Uses iterative furthest point sampling to select a set of npoint features that have the largest\n        minimum distance\n\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            (B, N, 3) tensor where N > npoint\n        npoint : int32\n            number of features in the sampled set\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint) tensor containing the set\n        \"\"\"\n        assert xyz.is_contiguous()\n\n        B, N, _ = xyz.size()\n\n        output = torch.cuda.IntTensor(B, npoint)\n        temp = torch.cuda.FloatTensor(B, N).fill_(1e10)\n        pointnet2.furthest_point_sampling_wrapper(\n            B, N, npoint, xyz, temp, output\n        )\n        return output\n\n    @staticmethod\n    def backward(ctx, a=None):\n        return None, None\n\n\nfurthest_point_sample = FurthestPointSampling.apply\n\n\nclass GatherOperation(Function):\n\n    @staticmethod\n    def forward(ctx, features: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, 
N) tensor\n\n        idx : torch.Tensor\n            (B, npoint) tensor of the features to gather\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint) tensor\n        \"\"\"\n        assert features.is_contiguous()\n        assert idx.is_contiguous()\n\n        B, npoint = idx.size()\n        _, C, N = features.size()\n\n        output = torch.cuda.FloatTensor(B, C, npoint)\n\n        pointnet2.gather_points_wrapper(\n            B, C, N, npoint, features, idx, output\n        )\n\n        ctx.for_backwards = (idx, C, N)\n\n        return output\n\n    @staticmethod\n    def backward(ctx, grad_out):\n        idx, C, N = ctx.for_backwards\n        B, npoint = idx.size()\n\n        grad_features = Variable(torch.cuda.FloatTensor(B, C, N).zero_())\n        grad_out_data = grad_out.data.contiguous()\n        pointnet2.gather_points_grad_wrapper(\n            B, C, N, npoint, grad_out_data, idx, grad_features.data\n        )\n\n        return grad_features, None\n\n\ngather_operation = GatherOperation.apply\n\n\nclass ThreeNN(Function):\n\n    @staticmethod\n    def forward(ctx, unknown: torch.Tensor,\n                known: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:\n        r\"\"\"\n            Find the three nearest neighbors of unknown in known\n        Parameters\n        ----------\n        unknown : torch.Tensor\n            (B, n, 3) tensor of unknown features\n        known : torch.Tensor\n            (B, m, 3) tensor of known features\n\n        Returns\n        -------\n        dist : torch.Tensor\n            (B, n, 3) l2 distance to the three nearest neighbors\n        idx : torch.Tensor\n            (B, n, 3) index of 3 nearest neighbors\n        \"\"\"\n        assert unknown.is_contiguous()\n        assert known.is_contiguous()\n\n        B, N, _ = unknown.size()\n        m = known.size(1)\n        dist2 = torch.cuda.FloatTensor(B, N, 3)\n        idx = torch.cuda.IntTensor(B, N, 3)\n\n        
pointnet2.three_nn_wrapper(B, N, m, unknown, known, dist2, idx)\n\n        return torch.sqrt(dist2), idx\n\n    @staticmethod\n    def backward(ctx, a=None, b=None):\n        return None, None\n\n\nthree_nn = ThreeNN.apply\n\n\nclass ThreeInterpolate(Function):\n\n    @staticmethod\n    def forward(\n            ctx, features: torch.Tensor, idx: torch.Tensor, weight: torch.Tensor\n    ) -> torch.Tensor:\n        r\"\"\"\n            Performs weighted linear interpolation on 3 features\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, c, m) Feature descriptors to be interpolated from\n        idx : torch.Tensor\n            (B, n, 3) three nearest neighbors of the target features in features\n        weight : torch.Tensor\n            (B, n, 3) weights\n\n        Returns\n        -------\n        torch.Tensor\n            (B, c, n) tensor of the interpolated features\n        \"\"\"\n        assert features.is_contiguous()\n        assert idx.is_contiguous()\n        assert weight.is_contiguous()\n\n        B, c, m = features.size()\n        n = idx.size(1)\n\n        ctx.three_interpolate_for_backward = (idx, weight, m)\n\n        output = torch.cuda.FloatTensor(B, c, n)\n\n        pointnet2.three_interpolate_wrapper(\n            B, c, m, n, features, idx, weight, output\n        )\n\n        return output\n\n    @staticmethod\n    def backward(ctx, grad_out: torch.Tensor\n                ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, c, n) tensor with gradients of outputs\n\n        Returns\n        -------\n        grad_features : torch.Tensor\n            (B, c, m) tensor with gradients of features\n\n        None\n\n        None\n        \"\"\"\n        idx, weight, m = ctx.three_interpolate_for_backward\n        B, c, n = grad_out.size()\n\n        grad_features = Variable(torch.cuda.FloatTensor(B, c, 
m).zero_())\n\n        grad_out_data = grad_out.data.contiguous()\n        pointnet2.three_interpolate_grad_wrapper(\n            B, c, n, m, grad_out_data, idx, weight, grad_features.data\n        )\n\n        return grad_features, None, None\n\n\nthree_interpolate = ThreeInterpolate.apply\n\n\nclass GroupingOperation(Function):\n\n    @staticmethod\n    def forward(ctx, features: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:\n        r\"\"\"\n\n        Parameters\n        ----------\n        features : torch.Tensor\n            (B, C, N) tensor of points to group\n        idx : torch.Tensor\n            (B, npoint, nsample) tensor containing the indices of points to group with\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, npoint, nsample) tensor\n        \"\"\"\n        assert features.is_contiguous()\n        assert idx.is_contiguous()\n\n        B, nfeatures, nsample = idx.size()\n        _, C, N = features.size()\n\n        output = torch.cuda.FloatTensor(B, C, nfeatures, nsample)\n\n        pointnet2.group_points_wrapper(\n            B, C, N, nfeatures, nsample, features, idx, output\n        )\n\n        ctx.for_backwards = (idx, N)\n        return output\n\n    @staticmethod\n    def backward(ctx,\n                 grad_out: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:\n        r\"\"\"\n\n        Parameters\n        ----------\n        grad_out : torch.Tensor\n            (B, C, npoint, nsample) tensor of the gradients of the output from forward\n\n        Returns\n        -------\n        torch.Tensor\n            (B, C, N) gradient of the features\n        None\n        \"\"\"\n        idx, N = ctx.for_backwards\n\n        B, C, npoint, nsample = grad_out.size()\n        grad_features = Variable(torch.cuda.FloatTensor(B, C, N).zero_())\n\n        grad_out_data = grad_out.data.contiguous()\n        pointnet2.group_points_grad_wrapper(\n            B, C, N, npoint, nsample, grad_out_data, idx, 
grad_features.data\n        )\n\n        return grad_features, None\n\n\ngrouping_operation = GroupingOperation.apply\n\n\nclass BallQuery(Function):\n\n    @staticmethod\n    def forward(\n            ctx, radius: float, nsample: int, xyz: torch.Tensor,\n            new_xyz: torch.Tensor, fps_idx: torch.IntTensor\n    ) -> torch.Tensor:\n        r\"\"\"\n\n        Parameters\n        ----------\n        radius : float\n            radius of the balls\n        nsample : int\n            maximum number of features in the balls\n        xyz : torch.Tensor\n            (B, N, 3) xyz coordinates of the features\n        new_xyz : torch.Tensor\n            (B, npoint, 3) centers of the ball query\n        fps_idx : torch.IntTensor\n            (B, npoint) indices of the query centers from furthest point sampling, prepended to the result\n\n        Returns\n        -------\n        torch.Tensor\n            (B, npoint, nsample + 1) tensor with the indices of the features that form the query balls\n        \"\"\"\n        assert new_xyz.is_contiguous()\n        assert xyz.is_contiguous()\n\n        B, N, _ = xyz.size()\n        npoint = new_xyz.size(1)\n        idx = torch.cuda.IntTensor(B, npoint, nsample).zero_()\n\n        pointnet2.ball_query_wrapper(\n            B, N, npoint, radius, nsample, new_xyz, xyz, fps_idx, idx\n        )\n\n        return torch.cat([fps_idx.unsqueeze(2), idx], dim = 2)\n\n    @staticmethod\n    def backward(ctx, a=None):\n        return None, None, None, None, None\n\n\nball_query = BallQuery.apply\n\n\nclass QueryAndGroup(nn.Module):\n    r\"\"\"\n    Groups with a ball query of radius\n\n    Parameters\n    ---------\n    radius : float32\n        Radius of ball\n    nsample : int32\n        Maximum number of points to gather in the ball\n    \"\"\"\n\n    def __init__(self, radius: float, nsample: int, use_xyz: bool = True):\n        super().__init__()\n        self.radius, self.nsample, self.use_xyz = radius, nsample, use_xyz\n\n    def forward(\n            self,\n            xyz: torch.Tensor,\n            new_xyz: torch.Tensor,\n            features: torch.Tensor = None,\n   
         fps_idx: torch.IntTensor = None\n    ) -> Tuple[torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        new_xyz : torch.Tensor\n            centroids (B, npoint, 3)\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, 3 + 3 + C, npoint, nsample) tensor\n        \"\"\"\n\n        idx = ball_query(self.radius, self.nsample, xyz, new_xyz, fps_idx)\n        xyz_trans = xyz.transpose(1, 2).contiguous()\n        grouped_xyz = grouping_operation(\n            xyz_trans, idx\n        )  # (B, 3, npoint, nsample)\n        raw_grouped_xyz = grouped_xyz\n        grouped_xyz -= new_xyz.transpose(1, 2).unsqueeze(-1)\n\n        if features is not None:\n            grouped_features = grouping_operation(features, idx)\n            if self.use_xyz:\n                new_features = torch.cat([raw_grouped_xyz, grouped_xyz, grouped_features],\n                                         dim=1)  # (B, C + 3 + 3, npoint, nsample)\n            else:\n                new_features = grouped_features\n        else:\n            assert self.use_xyz, \"Cannot have features=None and use_xyz=False!\"\n            new_features = torch.cat([raw_grouped_xyz, grouped_xyz], dim = 1)\n\n        return new_features\n\n\nclass GroupAll(nn.Module):\n    r\"\"\"\n    Groups all features\n\n    Parameters\n    ---------\n    \"\"\"\n\n    def __init__(self, use_xyz: bool = True):\n        super().__init__()\n        self.use_xyz = use_xyz\n\n    def forward(\n            self,\n            xyz: torch.Tensor,\n            new_xyz: torch.Tensor,\n            features: torch.Tensor = None\n    ) -> Tuple[torch.Tensor]:\n        r\"\"\"\n        Parameters\n        ----------\n        xyz : torch.Tensor\n            xyz coordinates of the features (B, N, 3)\n        
new_xyz : torch.Tensor\n            Ignored\n        features : torch.Tensor\n            Descriptors of the features (B, C, N)\n\n        Returns\n        -------\n        new_features : torch.Tensor\n            (B, C + 3, 1, N) tensor\n        \"\"\"\n\n        grouped_xyz = xyz.transpose(1, 2).unsqueeze(2)\n        if features is not None:\n            grouped_features = features.unsqueeze(2)\n            if self.use_xyz:\n                new_features = torch.cat([grouped_xyz, grouped_features],\n                                         dim=1)  # (B, 3 + C, 1, N)\n            else:\n                new_features = grouped_features\n        else:\n            new_features = grouped_xyz\n\n        return new_features\n"
  },
  {
    "path": "rs_cnn/utils/pytorch_utils/__init__.py",
    "content": "from .pytorch_utils import *\n"
  },
  {
    "path": "rs_cnn/utils/pytorch_utils/pytorch_utils.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.autograd.function import InplaceFunction\nfrom itertools import repeat\nimport numpy as np\nimport shutil, os, re\nfrom typing import List, Tuple\nfrom scipy.stats import t as student_t\nimport statistics as stats\nimport math\n\n########## Relation-Shape Convolution begin ############\nclass RSConv(nn.Module):\n    '''\n    Input shape: (B, C_in, npoint, nsample)\n    Output shape: (B, C_out, npoint)\n    '''\n    def __init__(\n            self,\n            C_in,\n            C_out,\n            activation = nn.ReLU(),\n            mapping = None,\n            relation_prior = 1,\n            first_layer = False\n    ):\n        super(RSConv, self).__init__()\n        self.bn_rsconv = nn.BatchNorm2d(C_in) if not first_layer else nn.BatchNorm2d(16)\n        self.bn_channel_raising = nn.BatchNorm1d(C_out)\n        self.bn_xyz_raising = nn.BatchNorm2d(16)\n        if first_layer:\n            self.bn_mapping = nn.BatchNorm2d(math.floor(C_out / 2))\n        else:\n            self.bn_mapping = nn.BatchNorm2d(math.floor(C_out / 4))\n        self.activation = activation\n        self.relation_prior = relation_prior\n        self.first_layer = first_layer\n        self.mapping_func1 = mapping[0]\n        self.mapping_func2 = mapping[1]\n        self.cr_mapping = mapping[2]\n        if first_layer:\n            self.xyz_raising = mapping[3]\n\n    def forward(self, input): # input: (B, 3 + 3 + C_in, npoint, centroid + nsample)\n        x = input[:, 3:, :, :]           # (B, C_in, npoint, nsample+1), input features\n        C_in = x.size()[1]\n        nsample = x.size()[3]\n        if self.relation_prior == 2:\n            abs_coord = input[:, 0:2, :, :]\n            delta_x = input[:, 3:5, :, :]\n            zero_vec = Variable(torch.zeros(x.size()[0], 1, x.size()[2], nsample).cuda())\n        else:\n            abs_coord = input[:, 0:3, :, 
:]  # (B, 3, npoint, nsample+1), absolute coordinates\n            delta_x = input[:, 3:6, :, :]    # (B, 3, npoint, nsample+1), normalized coordinates\n\n        coord_xi = abs_coord[:, :, :, 0:1].repeat(1, 1, 1, nsample)   # (B, 3, npoint, nsample),  centroid point\n        h_xi_xj = torch.norm(delta_x, p = 2, dim = 1).unsqueeze(1)\n        if self.relation_prior == 1:\n            h_xi_xj = torch.cat((h_xi_xj, coord_xi, abs_coord, delta_x), dim = 1)\n        elif self.relation_prior == 2:\n            h_xi_xj = torch.cat((h_xi_xj, coord_xi, zero_vec, abs_coord, zero_vec, delta_x, zero_vec), dim = 1)\n\n        h_xi_xj = self.mapping_func1(h_xi_xj)\n        h_xi_xj = self.activation(self.bn_mapping(h_xi_xj))\n        h_xi_xj = self.mapping_func2(h_xi_xj)\n        if self.first_layer:\n            x = self.activation(self.bn_xyz_raising(self.xyz_raising(x)))\n        x = F.max_pool2d(self.activation(self.bn_rsconv(torch.mul(h_xi_xj, x))), kernel_size = (1, nsample)).squeeze(3)   # (B, C_in, npoint)\n        x = self.activation(self.bn_channel_raising(self.cr_mapping(x)))\n\n        return x\n\nclass RSConvLayer(nn.Sequential):\n\n    def __init__(\n            self,\n            in_size: int,\n            out_size: int,\n            activation=nn.ReLU(inplace=True),\n            conv=RSConv,\n            mapping = None,\n            relation_prior = 1,\n            first_layer = False\n    ):\n        super(RSConvLayer, self).__init__()\n\n        conv_unit = conv(\n            in_size,\n            out_size,\n            activation = activation,\n            mapping = mapping,\n            relation_prior = relation_prior,\n            first_layer = first_layer\n        )\n\n        self.add_module('RS_Conv', conv_unit)\n\nclass SharedRSConv(nn.Sequential):\n\n    def __init__(\n            self,\n            args: List[int],\n            *,\n            activation=nn.ReLU(inplace=True),\n            mapping = None,\n            relation_prior = 1,\n            
first_layer = False\n    ):\n        super().__init__()\n\n        for i in range(len(args) - 1):\n            self.add_module(\n                'RSConvLayer{}'.format(i),\n                RSConvLayer(\n                    args[i],\n                    args[i + 1],\n                    activation = activation,\n                    mapping = mapping,\n                    relation_prior = relation_prior,\n                    first_layer = first_layer\n                )\n            )\n\n########## Relation-Shape Convolution end ############\n\n\n\n########## global convolutional pooling begin ############\n\nclass GloAvgConv(nn.Module):\n    '''\n    Input shape: (B, C_in, 1, nsample)\n    Output shape: (B, C_out, npoint)\n    '''\n    def __init__(\n            self,\n            C_in,\n            C_out,\n            init=nn.init.kaiming_normal_,\n            bias = True,\n            activation = nn.ReLU(inplace=True)\n    ):\n        super(GloAvgConv, self).__init__()\n\n        self.conv_avg = nn.Conv2d(in_channels = C_in, out_channels = C_out, kernel_size = (1, 1),\n                                  stride = (1, 1), bias = bias)\n        self.bn_avg = nn.BatchNorm2d(C_out)\n        self.activation = activation\n\n        init(self.conv_avg.weight)\n        if bias:\n            nn.init.constant_(self.conv_avg.bias, 0)\n\n    def forward(self, x):\n        nsample = x.size()[3]\n        x = self.activation(self.bn_avg(self.conv_avg(x)))\n        x = F.max_pool2d(x, kernel_size = (1, nsample)).squeeze(3)\n\n        return x\n\n########## global convolutional pooling end ############\n\n\nclass SharedMLP(nn.Sequential):\n\n    def __init__(\n            self,\n            args: List[int],\n            *,\n            bn: bool = False,\n            activation=nn.ReLU(inplace=True),\n            preact: bool = False,\n            first: bool = False,\n            name: str = \"\"\n    ):\n        super().__init__()\n\n        for i in range(len(args) - 1):\n         
   self.add_module(\n                name + 'layer{}'.format(i),\n                Conv2d(\n                    args[i],\n                    args[i + 1],\n                    bn=(not first or not preact or (i != 0)) and bn,\n                    activation=activation\n                    if (not first or not preact or (i != 0)) else None,\n                    preact=preact\n                )\n            )\n\n\nclass _BNBase(nn.Sequential):\n\n    def __init__(self, in_size, batch_norm=None, name=\"\"):\n        super().__init__()\n        self.add_module(name + \"bn\", batch_norm(in_size))\n\n        nn.init.constant_(self[0].weight, 1.0)\n        nn.init.constant_(self[0].bias, 0)\n\n\nclass BatchNorm1d(_BNBase):\n\n    def __init__(self, in_size: int, *, name: str = \"\"):\n        super().__init__(in_size, batch_norm=nn.BatchNorm1d, name=name)\n\n\nclass BatchNorm2d(_BNBase):\n\n    def __init__(self, in_size: int, name: str = \"\"):\n        super().__init__(in_size, batch_norm=nn.BatchNorm2d, name=name)\n\n\nclass BatchNorm3d(_BNBase):\n\n    def __init__(self, in_size: int, name: str = \"\"):\n        super().__init__(in_size, batch_norm=nn.BatchNorm3d, name=name)\n\n\nclass _ConvBase(nn.Sequential):\n\n    def __init__(\n            self,\n            in_size,\n            out_size,\n            kernel_size,\n            stride,\n            padding,\n            activation,\n            bn,\n            init,\n            conv=None,\n            batch_norm=None,\n            bias=True,\n            preact=False,\n            name=\"\"\n    ):\n        super().__init__()\n\n        bias = bias and (not bn)\n        conv_unit = conv(\n            in_size,\n            out_size,\n            kernel_size=kernel_size,\n            stride=stride,\n            padding=padding,\n            bias=bias\n        )\n        init(conv_unit.weight)\n        if bias:\n            nn.init.constant_(conv_unit.bias, 0)\n\n        if bn:\n            if not preact:\n          
      bn_unit = batch_norm(out_size)\n            else:\n                bn_unit = batch_norm(in_size)\n\n        if preact:\n            if bn:\n                self.add_module(name + 'bn', bn_unit)\n\n            if activation is not None:\n                self.add_module(name + 'activation', activation)\n\n        self.add_module(name + 'conv', conv_unit)\n\n        if not preact:\n            if bn:\n                self.add_module(name + 'bn', bn_unit)\n\n            if activation is not None:\n                self.add_module(name + 'activation', activation)\n\n\nclass Conv1d(_ConvBase):\n\n    def __init__(\n            self,\n            in_size: int,\n            out_size: int,\n            *,\n            kernel_size: int = 1,\n            stride: int = 1,\n            padding: int = 0,\n            activation=nn.ReLU(inplace=True),\n            bn: bool = False,\n            init=nn.init.kaiming_normal_,\n            bias: bool = True,\n            preact: bool = False,\n            name: str = \"\"\n    ):\n        super().__init__(\n            in_size,\n            out_size,\n            kernel_size,\n            stride,\n            padding,\n            activation,\n            bn,\n            init,\n            conv=nn.Conv1d,\n            batch_norm=BatchNorm1d,\n            bias=bias,\n            preact=preact,\n            name=name\n        )\n\n\nclass Conv2d(_ConvBase):\n\n    def __init__(\n            self,\n            in_size: int,\n            out_size: int,\n            *,\n            kernel_size: Tuple[int, int] = (1, 1),\n            stride: Tuple[int, int] = (1, 1),\n            padding: Tuple[int, int] = (0, 0),\n            activation=nn.ReLU(inplace=True),\n            bn: bool = False,\n            init=nn.init.kaiming_normal_,\n            bias: bool = True,\n            preact: bool = False,\n            name: str = \"\"\n    ):\n        super().__init__(\n            in_size,\n            out_size,\n            
kernel_size,\n            stride,\n            padding,\n            activation,\n            bn,\n            init,\n            conv=nn.Conv2d,\n            batch_norm=BatchNorm2d,\n            bias=bias,\n            preact=preact,\n            name=name\n        )\n\n\nclass Conv3d(_ConvBase):\n\n    def __init__(\n            self,\n            in_size: int,\n            out_size: int,\n            *,\n            kernel_size: Tuple[int, int, int] = (1, 1, 1),\n            stride: Tuple[int, int, int] = (1, 1, 1),\n            padding: Tuple[int, int, int] = (0, 0, 0),\n            activation=nn.ReLU(inplace=True),\n            bn: bool = False,\n            init=nn.init.kaiming_normal_,\n            bias: bool = True,\n            preact: bool = False,\n            name: str = \"\"\n    ):\n        super().__init__(\n            in_size,\n            out_size,\n            kernel_size,\n            stride,\n            padding,\n            activation,\n            bn,\n            init,\n            conv=nn.Conv3d,\n            batch_norm=BatchNorm3d,\n            bias=bias,\n            preact=preact,\n            name=name\n        )\n\n\nclass FC(nn.Sequential):\n\n    def __init__(\n            self,\n            in_size: int,\n            out_size: int,\n            *,\n            activation=nn.ReLU(inplace=True),\n            bn: bool = False,\n            init=None,\n            preact: bool = False,\n            name: str = \"\"\n    ):\n        super().__init__()\n\n        fc = nn.Linear(in_size, out_size, bias=not bn)\n        if init is not None:\n            init(fc.weight)\n        if not bn:\n            nn.init.constant_(fc.bias, 0)\n\n        if preact:\n            if bn:\n                self.add_module(name + 'bn', BatchNorm1d(in_size))\n\n            if activation is not None:\n                self.add_module(name + 'activation', activation)\n\n        self.add_module(name + 'fc', fc)\n\n        if not preact:\n            if bn:\n      
          self.add_module(name + 'bn', BatchNorm1d(out_size))\n\n            if activation is not None:\n                self.add_module(name + 'activation', activation)\n\n\nclass _DropoutNoScaling(InplaceFunction):\n\n    @staticmethod\n    def _make_noise(input):\n        return input.new().resize_as_(input)\n\n    @staticmethod\n    def symbolic(g, input, p=0.5, train=False, inplace=False):\n        if inplace:\n            return None\n        n = g.appendNode(\n            g.create(\"Dropout\", [input]).f_(\"ratio\",\n                                            p).i_(\"is_test\", not train)\n        )\n        real = g.appendNode(g.createSelect(n, 0))\n        g.appendNode(g.createSelect(n, 1))\n        return real\n\n    @classmethod\n    def forward(cls, ctx, input, p=0.5, train=False, inplace=False):\n        if p < 0 or p > 1:\n            raise ValueError(\n                \"dropout probability has to be between 0 and 1, \"\n                \"but got {}\".format(p)\n            )\n        ctx.p = p\n        ctx.train = train\n        ctx.inplace = inplace\n\n        if ctx.inplace:\n            ctx.mark_dirty(input)\n            output = input\n        else:\n            output = input.clone()\n\n        if ctx.p > 0 and ctx.train:\n            ctx.noise = cls._make_noise(input)\n            if ctx.p == 1:\n                ctx.noise.fill_(0)\n            else:\n                ctx.noise.bernoulli_(1 - ctx.p)\n            ctx.noise = ctx.noise.expand_as(input)\n            output.mul_(ctx.noise)\n\n        return output\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        if ctx.p > 0 and ctx.train:\n            return grad_output.mul(Variable(ctx.noise)), None, None, None\n        else:\n            return grad_output, None, None, None\n\n\ndropout_no_scaling = _DropoutNoScaling.apply\n\n\nclass _FeatureDropoutNoScaling(_DropoutNoScaling):\n\n    @staticmethod\n    def symbolic(input, p=0.5, train=False, inplace=False):\n        return 
None\n\n    @staticmethod\n    def _make_noise(input):\n        return input.new().resize_(\n            input.size(0), input.size(1), *repeat(1,\n                                                  input.dim() - 2)\n        )\n\n\nfeature_dropout_no_scaling = _FeatureDropoutNoScaling.apply\n\n\ndef group_model_params(model: nn.Module):\n    decay_group = []\n    no_decay_group = []\n\n    for name, param in model.named_parameters():\n        if name.find(\"bn\") != -1 or name.find(\"bias\") != -1:\n            no_decay_group.append(param)\n        else:\n            decay_group.append(param)\n\n    assert len(list(model.parameters())\n              ) == len(decay_group) + len(no_decay_group)\n\n    return [\n        dict(params=decay_group),\n        dict(params=no_decay_group, weight_decay=0.0)\n    ]\n\n\ndef checkpoint_state(model=None, optimizer=None, best_prec=None, epoch=None):\n    optim_state = optimizer.state_dict() if optimizer is not None else None\n    if model is not None:\n        if isinstance(model, torch.nn.DataParallel):\n            model_state = model.module.state_dict()\n        else:\n            model_state = model.state_dict()\n    else:\n        model_state = None\n\n    return {\n        'epoch': epoch,\n        'best_prec': best_prec,\n        'model_state': model_state,\n        'optimizer_state': optim_state\n    }\n\n\ndef save_checkpoint(\n        state, is_best, filename='checkpoint', bestname='model_best'\n):\n    filename = '{}.pth.tar'.format(filename)\n    torch.save(state, filename)\n    if is_best:\n        shutil.copyfile(filename, '{}.pth.tar'.format(bestname))\n\n\ndef load_checkpoint(model=None, optimizer=None, filename='checkpoint'):\n    filename = \"{}.pth.tar\".format(filename)\n    if os.path.isfile(filename):\n        print(\"==> Loading from checkpoint '{}'\".format(filename))\n        checkpoint = torch.load(filename)\n        epoch = checkpoint['epoch']\n        best_prec = checkpoint['best_prec']\n        if model 
is not None and checkpoint['model_state'] is not None:\n            model.load_state_dict(checkpoint['model_state'])\n        if optimizer is not None and checkpoint['optimizer_state'] is not None:\n            optimizer.load_state_dict(checkpoint['optimizer_state'])\n        print(\"==> Done\")\n    else:\n        print(\"==> Checkpoint '{}' not found\".format(filename))\n        epoch, best_prec = -1, None\n\n    return epoch, best_prec\n\n\ndef variable_size_collate(pad_val=0, use_shared_memory=True):\n    import collections\n    _numpy_type_map = {\n        'float64': torch.DoubleTensor,\n        'float32': torch.FloatTensor,\n        'float16': torch.HalfTensor,\n        'int64': torch.LongTensor,\n        'int32': torch.IntTensor,\n        'int16': torch.ShortTensor,\n        'int8': torch.CharTensor,\n        'uint8': torch.ByteTensor,\n    }\n\n    def wrapped(batch):\n        \"Puts each data field into a tensor with outer dimension batch size\"\n\n        error_msg = \"batch must contain tensors, numbers, dicts or lists; found {}\"\n        elem_type = type(batch[0])\n        if torch.is_tensor(batch[0]):\n            max_len = 0\n            for b in batch:\n                max_len = max(max_len, b.size(0))\n\n            numel = sum([int(b.numel() / b.size(0) * max_len) for b in batch])\n            if use_shared_memory:\n                # If we're in a background process, concatenate directly into a\n                # shared memory tensor to avoid an extra copy\n                storage = batch[0].storage()._new_shared(numel)\n                out = batch[0].new(storage)\n            else:\n                out = batch[0].new(numel)\n\n            out = out.view(\n                len(batch), max_len,\n                *[batch[0].size(i) for i in range(1, batch[0].dim())]\n            )\n            out.fill_(pad_val)\n            for i in range(len(batch)):\n                out[i, 0:batch[i].size(0)] = batch[i]\n\n            return out\n        elif elem_type.__module__ == 'numpy' and 
elem_type.__name__ != 'str_' \\\n                and elem_type.__name__ != 'string_':\n            elem = batch[0]\n            if elem_type.__name__ == 'ndarray':\n                # array of string classes and object\n                if re.search('[SaUO]', elem.dtype.str) is not None:\n                    raise TypeError(error_msg.format(elem.dtype))\n\n                return wrapped([torch.from_numpy(b) for b in batch])\n            if elem.shape == ():  # scalars\n                py_type = float if elem.dtype.name.startswith('float') else int\n                return _numpy_type_map[elem.dtype.name](\n                    list(map(py_type, batch))\n                )\n        elif isinstance(batch[0], int):\n            return torch.LongTensor(batch)\n        elif isinstance(batch[0], float):\n            return torch.DoubleTensor(batch)\n        elif isinstance(batch[0], collections.abc.Mapping):\n            return {key: wrapped([d[key] for d in batch]) for key in batch[0]}\n        elif isinstance(batch[0], collections.abc.Sequence):\n            transposed = zip(*batch)\n            return [wrapped(samples) for samples in transposed]\n\n        raise TypeError((error_msg.format(type(batch[0]))))\n\n    return wrapped\n\n\nclass TrainValSplitter():\n    r\"\"\"\n        Creates a training and validation split to be used as the sampler in a pytorch DataLoader\n    Parameters\n    ---------\n        numel : int\n            Number of elements in the entire training dataset\n        percent_train : float\n            Percentage of data in the training split\n        shuffled : bool\n            Whether or not to shuffle which data goes to which split\n    \"\"\"\n\n    def __init__(\n            self, *, numel: int, percent_train: float, shuffled: bool = False\n    ):\n        indices = np.array([i for i in range(numel)])\n        if shuffled:\n            np.random.shuffle(indices)\n\n        self.train = torch.utils.data.sampler.SubsetRandomSampler(\n            indices[0:int(percent_train * numel)]\n        )\n        self.val = torch.utils.data.sampler.SubsetRandomSampler(\n            indices[int(percent_train * numel):]\n        )\n\n\nclass CrossValSplitter():\n    r\"\"\"\n        Class that creates cross validation splits.  The train and val splits can be used in pytorch DataLoaders.  The splits can be updated\n        by calling next(self) or using a loop:\n            for _ in self:\n                ....\n    Parameters\n    ---------\n        numel : int\n            Number of elements in the training set\n        k_folds : int\n            Number of folds\n        shuffled : bool\n            Whether or not to shuffle which data goes in which fold\n    \"\"\"\n\n    def __init__(self, *, numel: int, k_folds: int, shuffled: bool = False):\n        indices = np.array([i for i in range(numel)])\n        if shuffled:\n            np.random.shuffle(indices)\n\n        self.folds = np.array(np.array_split(indices, k_folds), dtype=object)\n        self.current_v_ind = -1\n\n        self.val = torch.utils.data.sampler.SubsetRandomSampler(self.folds[0])\n        self.train = torch.utils.data.sampler.SubsetRandomSampler(\n            np.concatenate(self.folds[1:], axis=0)\n        )\n\n        self.metrics = {}\n\n    def __iter__(self):\n        self.current_v_ind = -1\n        return self\n\n    def __len__(self):\n        return len(self.folds)\n\n    def __getitem__(self, idx):\n        assert idx >= 0 and idx < len(self)\n        self.val.indices = self.folds[idx]\n        self.train.indices = np.concatenate(\n            self.folds[np.arange(len(self)) != idx], axis=0\n        )\n\n    def __next__(self):\n        self.current_v_ind += 1\n        if self.current_v_ind >= len(self):\n            raise StopIteration\n\n        self[self.current_v_ind]\n\n    def update_metrics(self, to_post: dict):\n        for k, v in to_post.items():\n            if k in self.metrics:\n                self.metrics[k].append(v)\n            else:\n                self.metrics[k] = [v]\n\n    def print_metrics(self):\n        for name, samples in self.metrics.items():\n            xbar = stats.mean(samples)\n            sx = stats.stdev(samples, xbar)\n            tstar = student_t.ppf(1.0 - 0.025, len(samples) - 1)\n            margin_of_error = tstar * sx / sqrt(len(samples))\n            print(\"{}: {} +/- {}\".format(name, xbar, margin_of_error))\n\n\ndef set_bn_momentum_default(bn_momentum):\n\n    def fn(m):\n        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):\n            m.momentum = bn_momentum\n\n    return fn\n\n\nclass BNMomentumScheduler(object):\n\n    def __init__(\n            self, model, bn_lambda, last_epoch=-1,\n            setter=set_bn_momentum_default\n    ):\n        if not isinstance(model, nn.Module):\n            raise RuntimeError(\n                \"Class '{}' is not a PyTorch nn Module\".format(\n                    type(model).__name__\n                )\n            )\n\n        self.model = model\n        self.setter = setter\n        self.lmbd = bn_lambda\n\n        self.step(last_epoch + 1)\n        self.last_epoch = last_epoch\n\n    def step(self, epoch=None):\n        if epoch is None:\n            epoch = self.last_epoch + 1\n\n        self.last_epoch = epoch\n        self.model.apply(self.setter(self.lmbd(epoch)))\n\n    def get_momentum(self, epoch=None):\n        if epoch is None:\n            epoch = self.last_epoch + 1\n        return self.lmbd(epoch)"
  },
  {
    "path": "rs_cnn/voting_evaluate_cls.py",
    "content": "import torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nimport numpy as np\nimport os\nfrom torchvision import transforms\nfrom models import RSCNN_SSN_Cls as RSCNN_SSN\nfrom data import ModelNet40Cls\nimport utils.pytorch_utils as pt_utils\n# import utils.pointnet2_utils as pointnet2_utils\nimport pointnet2.utils.pointnet2_utils as pointnet2_utils\nimport data.data_utils as d_utils\nimport argparse\nimport random\nimport yaml\n\ntorch.backends.cudnn.enabled = True\ntorch.backends.cudnn.benchmark = True\ntorch.backends.cudnn.deterministic = True\n\nseed = 123\nrandom.seed(seed)\nnp.random.seed(seed)\ntorch.manual_seed(seed)\ntorch.cuda.manual_seed(seed)\ntorch.cuda.manual_seed_all(seed)\n\nparser = argparse.ArgumentParser(description='Relation-Shape CNN Shape Classification Voting Evaluation')\nparser.add_argument('--config', default='cfgs/config_ssn_cls.yaml', type=str)\n\nNUM_REPEAT = 300\nNUM_VOTE = 10\n\ndef main():\n    args = parser.parse_args()\n    with open(args.config) as f:\n        config = yaml.load(f, Loader=yaml.FullLoader)\n    for k, v in config['common'].items():\n        setattr(args, k, v)\n\n    test_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n\n    test_dataset = ModelNet40Cls(num_points = args.num_points, root = args.data_root, transforms=test_transforms, train=False)\n    test_dataloader = DataLoader(\n        test_dataset,\n        batch_size=args.batch_size,\n        shuffle=False,\n        num_workers=int(args.workers),\n        pin_memory=True\n    )\n\n    model = RSCNN_SSN(num_classes = args.num_classes, input_channels = args.input_channels, relation_prior = args.relation_prior, use_xyz = True)\n    model.cuda()\n\n    if args.checkpoint != '':\n        model.load_state_dict(torch.load(args.checkpoint))\n        print('Load model successfully: %s' % (args.checkpoint))\n\n    # evaluate\n    PointcloudScale = d_utils.PointcloudScale()   # initialize random scaling\n    model.eval()\n    global_acc = 0\n    with torch.no_grad():\n        for i in range(NUM_REPEAT):\n            preds = []\n            labels = []\n            for j, data in enumerate(test_dataloader, 0):\n                points, target = data\n                points, target = points.cuda(), target.cuda()\n\n                # farthest point sampling\n                fps_idx = pointnet2_utils.furthest_point_sample(points, 1200)  # (B, npoint)\n                pred = 0\n                for v in range(NUM_VOTE):\n                    new_fps_idx = fps_idx[:, np.random.choice(1200, args.num_points, False)]\n                    new_points = pointnet2_utils.gather_operation(points.transpose(1, 2).contiguous(), new_fps_idx).transpose(1, 2).contiguous()\n                    if v > 0:\n                        new_points.data = PointcloudScale(new_points.data)\n                    pred += F.softmax(model(new_points), dim = 1)\n                pred /= NUM_VOTE\n                target = target.view(-1)\n                _, pred_choice = torch.max(pred.data, -1)\n\n                preds.append(pred_choice)\n                labels.append(target.data)\n\n            preds = torch.cat(preds, 0)\n            labels = torch.cat(labels, 0)\n            acc = (preds == labels).sum().float() / labels.numel()\n            if acc > global_acc:\n                global_acc = acc\n            print('Repeat %3d \\t Acc: %0.6f' % (i + 1, acc))\n    print('\\nBest voting acc: %0.6f' % (global_acc))\n\nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "rs_cnn/voting_evaluate_partseg.py",
    "content": "import torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lr_sched\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nimport numpy as np\nimport os\nfrom torchvision import transforms\nfrom models import RSCNN_MSN_Seg as RSCNN_MSN\nfrom data import ShapeNetPart\nimport utils.pytorch_utils as pt_utils\nimport data.data_utils as d_utils\nimport argparse\nimport random\nimport yaml\nfrom progressbar import ProgressBar\n\ntorch.backends.cudnn.enabled = True\ntorch.backends.cudnn.benchmark = True\ntorch.backends.cudnn.deterministic = True\n\nseed = 123\nrandom.seed(seed)\nnp.random.seed(seed)\ntorch.manual_seed(seed)\ntorch.cuda.manual_seed(seed)\ntorch.cuda.manual_seed_all(seed)\n\nparser = argparse.ArgumentParser(description='Relation-Shape CNN Shape Part Segmentation Voting Evaluation')\nparser.add_argument('--config', default='cfgs/config_msn_partseg.yaml', type=str)\n\nNUM_REPEAT = 300\nNUM_VOTE = 10\n\ndef main():\n    args = parser.parse_args()\n    with open(args.config) as f:\n        config = yaml.load(f, Loader=yaml.FullLoader)\n    for k, v in config['common'].items():\n        setattr(args, k, v)\n\n    test_transforms = transforms.Compose([\n        d_utils.PointcloudToTensor()\n    ])\n\n    test_dataset = ShapeNetPart(root = args.data_root, num_points = args.num_points, split = 'test', normalize = True, transforms = test_transforms)\n    test_dataloader = DataLoader(\n        test_dataset,\n        batch_size=args.batch_size // 4,\n        shuffle=False,\n        num_workers=int(args.workers),\n        pin_memory=True\n    )\n\n    model = RSCNN_MSN(num_classes = args.num_classes, input_channels = args.input_channels, relation_prior = args.relation_prior, use_xyz = True)\n    model.cuda()\n\n    if args.checkpoint != '':\n        model.load_state_dict(torch.load(args.checkpoint))\n        print('Load model successfully: %s' 
% (args.checkpoint))\n\n    # evaluate\n    PointcloudScale = d_utils.PointcloudScale(scale_low=0.87, scale_high=1.15)   # initialize random scaling\n    model.eval()\n    global_Class_mIoU, global_Inst_mIoU = 0, 0\n    seg_classes = test_dataset.seg_classes\n    seg_label_to_cat = {}           # {0:Airplane, 1:Airplane, ...49:Table}\n    for cat in seg_classes.keys():\n        for label in seg_classes[cat]:\n            seg_label_to_cat[label] = cat\n\n    with torch.no_grad():\n        for i in range(NUM_REPEAT):\n            shape_ious = {cat:[] for cat in seg_classes.keys()}\n            bar = ProgressBar(max_value=len(test_dataloader))\n            for j, data in enumerate(test_dataloader, 0):\n                points, target, cls = data\n                points, target = points.cuda(), target.cuda()\n\n                batch_one_hot_cls = np.zeros((len(cls), 16))   # 16 object classes\n                for b in range(len(cls)):\n                    batch_one_hot_cls[b, int(cls[b])] = 1\n                batch_one_hot_cls = torch.from_numpy(batch_one_hot_cls)\n                batch_one_hot_cls = batch_one_hot_cls.float().cuda()\n\n                pred = 0\n                # start from the raw points so the first vote does not run on an all-zero cloud\n                new_points = points.clone()\n                for v in range(NUM_VOTE):\n                    if v > 0:\n                        new_points.data = PointcloudScale(points.data)\n                    pred += F.softmax(model(new_points, batch_one_hot_cls), dim = 2)\n                pred /= NUM_VOTE\n\n                pred = pred.data.cpu()\n                target = target.data.cpu()\n                pred_val = torch.zeros(len(cls), args.num_points).type(torch.LongTensor)\n                # restrict pred to the ground-truth object's part labels (selected by seg_classes[cat])\n                for b in range(len(cls)):\n                    cat = seg_label_to_cat[target[b, 0].item()]\n                    logits = pred[b, :, :]   # (num_points, num_classes)\n                    pred_val[b, :] = logits[:, seg_classes[cat]].max(1)[1] + seg_classes[cat][0]\n\n                for b in range(len(cls)):\n                    segp = pred_val[b, :]\n                    segl = target[b, :]\n                    cat = seg_label_to_cat[segl[0].item()]\n                    part_ious = [0.0 for _ in range(len(seg_classes[cat]))]\n                    for l in seg_classes[cat]:\n                        if torch.sum((segl == l) | (segp == l)) == 0:\n                            # part is not present in this shape\n                            part_ious[l - seg_classes[cat][0]] = 1.0\n                        else:\n                            part_ious[l - seg_classes[cat][0]] = torch.sum((segl == l) & (segp == l)) / float(torch.sum((segl == l) | (segp == l)))\n                    shape_ious[cat].append(np.mean(part_ious))\n                bar.update(j)\n\n            instance_ious = []\n            for cat in shape_ious.keys():\n                for iou in shape_ious[cat]:\n                    instance_ious.append(iou)\n                shape_ious[cat] = np.mean(shape_ious[cat])\n            mean_class_ious = np.mean(list(shape_ious.values()))\n\n            print('\\n------ Repeat %3d ------' % (i + 1))\n            for cat in sorted(shape_ious.keys()):\n                print('%s: %0.6f'%(cat, shape_ious[cat]))\n            print('Class_mIoU: %0.6f' % (mean_class_ious))\n            print('Instance_mIoU: %0.6f' % (np.mean(instance_ious)))\n\n            if mean_class_ious > global_Class_mIoU:\n                global_Class_mIoU = mean_class_ious\n                global_Inst_mIoU = np.mean(instance_ious)\n\n    print('\\nBest voting Class_mIoU = %0.6f, Instance_mIoU = %0.6f' % 
(global_Class_mIoU, global_Inst_mIoU))\n        \nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "setup.sh",
    "content": "###\n # @Description: \n # @Author: Jiachen Sun\n # @Date: 2022-02-23 17:25:05\n # @LastEditors: Jiachen Sun\n # @LastEditTime: 2022-02-24 23:20:04\n### \nset -e\nPYTHON_BIN=${PYTHON_BIN:-python}\n\nabspath=$(readlink -f \"$0\")\nroot=$(dirname \"$abspath\")\nexport TORCH_CUDA_ARCH_LIST=\"6.0;6.1;6.2;7.0;7.5\"\n\ncd ${root}/pointnet2_pyt && ${PYTHON_BIN} -m pip install -e . && cd -\ncd ${root}/PCT_Pytorch/pointnet2_ops_lib && ${PYTHON_BIN} setup.py install && cd -\ncd ${root}/emd && ${PYTHON_BIN} setup.py install && cd -\ncd ${root}/PyGeM && ${PYTHON_BIN} setup.py install && cd -\n"
  },
  {
    "path": "third_party/bn_helper.py",
    "content": "import torch\nimport torch.nn as nn\n\ndef configure_model(model, eps, momentum, reset_stats, no_stats):\n    \"\"\"Configure model for adaptation by test-time normalization.\"\"\"\n    for m in model.modules():\n        if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):\n            # use batch-wise statistics in forward\n            m.train()\n            # configure epsilon for stability, and momentum for updates\n            m.eps = eps\n            m.momentum = momentum\n            if reset_stats:\n                # reset state to estimate test stats without train stats\n                m.reset_running_stats()\n            if no_stats:\n                # disable state entirely and use only batch stats\n                m.track_running_stats = False\n                m.running_mean = None\n                m.running_var = None\n    return model\n"
  },
  {
    "path": "third_party/tent_helper.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.jit\n\n\ndef collect_params(model):\n    \"\"\"Collect the affine scale + shift parameters from batch norms.\n    Walk the model's modules and collect all batch normalization parameters.\n    Return the parameters and their names.\n    Note: other choices of parameterization are possible!\n    \"\"\"\n    params = []\n    names = []\n    for nm, m in model.named_modules():\n        if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):\n            for np, p in m.named_parameters():\n                if np in ['weight', 'bias']:  # weight is scale, bias is shift\n                    params.append(p)\n                    names.append(f\"{nm}.{np}\")\n    return params, names\n\n@torch.jit.script\ndef softmax_entropy(x: torch.Tensor) -> torch.Tensor:\n    \"\"\"Entropy of softmax distribution from logits.\"\"\"\n    return -(x.softmax(1) * x.log_softmax(1)).sum(1)\n\ndef configure_model(model, eps, momentum):\n    \"\"\"Configure model for use with tent.\"\"\"\n    # train mode, because tent optimizes the model to minimize entropy\n    model.train()\n    # disable grad, to (re-)enable only what tent updates\n    model.requires_grad_(False)\n    # configure norm for tent updates: enable grad + force batch statistics\n    for m in model.modules():\n        if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):\n            m.requires_grad_(True)\n            m.eps = eps\n            m.momentum = momentum\n            # force use of batch stats in train and eval modes\n            m.track_running_stats = True\n            # m.running_mean -= m.running_mean\n            # m.running_var /= m.running_var\n            # m.reset_running_stats()\n    return model\n\n@torch.enable_grad()  # ensure grads in possible no grad context for testing\ndef forward_and_adapt(x, model, optimizer):\n    \"\"\"Forward and adapt model on batch of data.\n    Measure entropy of the model prediction, take gradients, and update params.\n    \"\"\"\n    # forward\n    outputs = model(**x)\n    # adapt\n    loss = softmax_entropy(outputs['logit']).mean(0)\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    return outputs\n"
  },
  {
    "path": "visualize/README.md",
    "content": "# Visualization of ModelNet40-C\n\n## Requirements\n\n* Python3\n* Python packages numpy, pandas, matplotlib, seaborn, and open3d (optional).\n* GitHub repository [mitsuba2](https://github.com/mitsuba-renderer/mitsuba2) (optional).\n\n## Steps\n\n* Configuration file `config.py` records the path of the ModelNet40-C dataset and the log files of experiments. Please make sure they refer to the right directories.\n* Script `main_results.py` processes the log files to collect the accuracy results of all experiments. It also draws most of the figures and tables in the paper (Figures 1, 4, 9-16; Tables 1-4).\n* Script `confusion_matrix.py` draws the confusion matrices in the paper (Figures 3, 8).\n* Script `examples.py` draws the point cloud examples (Figures 2, 5-7).\n\nAll results are saved to the `figures` folder by default.\n"
  },
  {
    "path": "visualize/config.py",
    "content": "from collections import OrderedDict\n\n# Path of the dataset containing npy files.\ndata_dir = \"modelnet40_c\"\n# Path of logs of experiments.\nresult_dir = \"output\"\n\n# For each OrderedDict item, the first element is the name in log files while the\n# second item is the name to display on visualized figures.\ndef_ranges = {\n    'model': OrderedDict([\n        ('pointnet', 'PointNet'), \n        ('pointnet2', 'PointNet++'), \n        ('dgcnn', 'DGCNN'), \n        ('rscnn', 'RSCNN'), \n        ('pct', 'PCT'), \n        ('simpleview', 'SimpleView'),\n        ('curvenet', 'CurveNet'),\n        (\"gdanet\", \"GDANet\"),\n        ('pointMLP', \"PointMLP\"),\n        (\"pointMLP2\", \"PointMLP-Elite\")\n    ]),\n    'train_mode': OrderedDict([\n        ('cutmix_r', 'PointCutMix-R'),\n        ('cutmix_k', 'PointCutMix-K'), \n        ('mixup', 'PointMixup'),\n        ('rsmix', 'RSMix'),\n        ('bn', 'BN'),\n        ('tent', 'TENT'),\n        ('pgd', 'PGD'),\n        ('none', \"Standard\"),\n        ('megamerger', \"megamerger\")\n    ]),\n    'corruption': OrderedDict([\n        (\"occlusion\", \"Occlusion\"),\n        (\"lidar\", \"LiDAR\"),\n        (\"density_inc\", \"Local_Density_Inc\"),\n        (\"density\", \"Local_Density_Dec\"),\n        (\"cutout\", \"Cutout\"),\n        (\"uniform\", \"Uniform\"),\n        (\"gaussian\", \"Gaussian\"),\n        (\"impulse\", \"Impulse\"),\n        (\"upsampling\", \"Upsampling\"),\n        (\"background\", \"Background\"),\n        (\"rotation\", \"Rotation\"),\n        (\"shear\", \"Shear\"),\n        (\"distortion\", \"FFD\"),\n        (\"distortion_rbf\", \"RBF\"),\n        (\"distortion_rbf_inv\", \"Inv_RBF\"),\n        (\"none\", \"none\"),\n    ]),\n    'severity': [1, 2, 3, 4, 5],\n    'metric': ['acc', \"class_acc\", \"err\", \"class_err\"]\n}\n"
  },
  {
    "path": "visualize/confusion_matrix.py",
    "content": "import os\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom config import result_dir\nfrom config import def_ranges\nfrom sklearn.metrics import confusion_matrix\nimport seaborn as sns\nimport math\n\n\nDATA_DIR = result_dir\nSHAPE = [\"airplane\",\n        \"bathtub\",\n        \"bed\",\n        \"bench\",\n        \"bookshelf\",\n        \"bottle\",\n        \"bowl\",\n        \"car\",\n        \"chair\",\n        \"cone\",\n        \"cup\",\n        \"curtain\",\n        \"desk\",\n        \"door\",\n        \"dresser\",\n        \"flower_pot\",\n        \"glass_box\",\n        \"guitar\",\n        \"keyboard\",\n        \"lamp\",\n        \"laptop\",\n        \"mantel\",\n        \"monitor\",\n        \"night_stand\",\n        \"person\",\n        \"piano\",\n        \"plant\",\n        \"radio\",\n        \"range_hood\",\n        \"sink\",\n        \"sofa\",\n        \"stairs\",\n        \"stool\",\n        \"table\",\n        \"tent\",\n        \"toilet\",\n        \"tv_stand\",\n        \"vase\",\n        \"wardrobe\",\n        \"xbox\"]\n\n\ndata = {}\nmodel_list = list(def_ranges[\"model\"].keys())\ncorruption_list = list(def_ranges[\"corruption\"].keys())\ncorruption_list.remove(\"none\")\nseverity_list = def_ranges[\"severity\"]\n\nfor model in model_list:\n    data[model] = {\"ground\": [], \"pred\": []}\n\n\nfor filename in os.listdir(DATA_DIR):\n    if len(filename) < 4 or filename[-4:] != \".npy\":\n        continue\n    tokens = filename.split('.')[0].split('_')\n    gt = tokens[-1]\n    severity = int(tokens[-2])\n    model = tokens[0]\n    corruption = \"_\".join(tokens[1:-2])\n    print(model, corruption, severity, gt)\n    filepath = os.path.join(DATA_DIR, filename)\n    x = np.load(filepath)\n    data[model][gt].append(x)\n\n\nmatrixes = []\nfig, axes = plt.subplots(3, 2, figsize=(20, 26))\nfor model_id, model in enumerate(model_list):\n    ax = axes[model_id//2][model_id%2]\n    total_number = 
len(data[model][\"ground\"]) * data[model][\"ground\"][0].shape[0]\n    y_true = np.hstack(data[model][\"ground\"]).reshape(-1)\n    y_pred = np.hstack(data[model][\"pred\"]).reshape(-1)\n    m = confusion_matrix(y_true, y_pred)\n    m = m / np.tile(np.sum(m, axis=1).reshape(40, 1), (1, 40))\n    matrixes.append(m)\n    sns.heatmap(m, ax=ax, vmin=0, vmax=1, center=1)\n    ax.set_xticks([i+0.5 for i in range(len(SHAPE))])\n    ax.set_xticklabels(SHAPE, rotation=45, ha=\"right\")\n    ax.set_yticks([i+0.5 for i in range(len(SHAPE))])\n    ax.set_yticklabels(SHAPE)\n    ax.set_title(def_ranges[\"model\"][model])\n    ax.set_xlabel(\"Prediction\")\n    ax.set_ylabel(\"Ground truth\")\n    ax.set_aspect('equal', adjustable='box')\nplt.tight_layout(pad=0, h_pad=0, w_pad=0)\nplt.savefig(\"figures/confusion_matrix_2.pdf\")\n\nplt.clf()\nfig, ax = plt.subplots(figsize=(10,8))\n# average the per-model confusion matrices\nm = np.sum(np.stack(matrixes, axis=0), axis=0) / len(matrixes)\n\nsns.heatmap(m, ax=ax, vmin=0, vmax=1, center=1)\nax.set_xticks([i+0.5 for i in range(len(SHAPE))])\nax.set_xticklabels(SHAPE, rotation=45, ha=\"right\")\nax.set_yticks([i+0.5 for i in range(len(SHAPE))])\nax.set_yticklabels(SHAPE)\nax.set_xlabel(\"Prediction\")\nax.set_ylabel(\"Ground truth\")\nax.set_aspect('equal', adjustable='box')\nplt.tight_layout(pad=0, h_pad=0, w_pad=0)\nplt.savefig(\"figures/confusion_matrix_average_2.pdf\")\n"
  },
  {
    "path": "visualize/examples.py",
    "content": "import os\nimport numpy as np\nimport pickle\nfrom config import data_dir, def_ranges\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom PIL import Image\n\nexample_ids = [6,7,32,54,85]\nseverity = 4\nexamples_file = \"examples.pkl\"\n\n\ndef build_examples(example_ids, severity):\n    all_examples = {}\n    for example_id in example_ids:\n        examples = []\n        for corruption in list(def_ranges[\"corruption\"].keys()):\n            if corruption == \"none\":\n                continue\n            file_path = os.path.join(data_dir, \"data_{}_{:d}.npy\".format(corruption, severity))\n            examples.append(np.load(file_path)[example_id,:])\n        all_examples[example_id] = examples\n\n    with open(examples_file, \"wb\") as f:\n        pickle.dump(all_examples, f)\n    return examples\n\n\ndef load_examples():\n    with open(examples_file, 'rb') as f:\n        all_examples = pickle.load(f)\n    return all_examples\n\n\ndef rotation_matrix(pitch, yaw, roll):\n    R = np.array([[np.cos(yaw)*np.cos(pitch), \n                   np.cos(yaw)*np.sin(pitch)*np.sin(roll)-np.sin(yaw)*np.cos(roll), \n                   np.cos(yaw)*np.sin(pitch)*np.cos(roll)+np.sin(yaw)*np.sin(roll)],\n                  [np.sin(yaw)*np.cos(pitch), \n                   np.sin(yaw)*np.sin(pitch)*np.sin(roll)+np.cos(yaw)*np.cos(roll), \n                   np.sin(yaw)*np.sin(pitch)*np.cos(roll)-np.cos(yaw)*np.sin(roll)],\n                  [-np.sin(pitch), \n                   np.cos(pitch)*np.sin(roll), \n                   np.cos(pitch)*np.cos(roll)]])\n    return R\n\n\ndef draw_one_example(example, rotate=[0, 0], scale=1, window_width=1080, window_height=720, show=False, save=\"test.png\", flag=0):\n    import open3d as o3d\n    pcd = o3d.geometry.PointCloud()\n    pcd.points = o3d.utility.Vector3dVector(example[:,:3])\n\n    meshes = []\n    for i in range(example.shape[0]):\n        ball = o3d.geometry.TriangleMesh.create_sphere(radius=0.0125)\n        
ball.translate(example[i,:3])\n        ball.rotate(rotation_matrix(0, np.pi, np.pi), center=np.array([0,0,0]))\n        meshes.append(ball)\n\n    vis = o3d.visualization.Visualizer()\n    vis.create_window(width=window_width, height=window_height, visible=True)\n    for ball in meshes:\n        vis.add_geometry(ball)\n\n    opt = vis.get_render_option()\n    opt.background_color = np.array([0.90, 0.90, 0.90])\n    opt.mesh_color_option = o3d.visualization.MeshColorOption.ZCoordinate\n\n    control = vis.get_view_control()\n    # control.convert_from_pinhole_camera_parameters(camera_parameters)\n    control.rotate(400, 0)\n    control.rotate(0, 100)\n    if flag:\n        control.rotate(0, -50)\n    control.scale(6)\n    vis.update_geometry(pcd)\n    \n    if show:\n        vis.run()\n    elif save is not None:\n        vis.poll_events()\n        vis.update_renderer()\n        vis.capture_screen_image(save)\n        vis.destroy_window()\n\n\ndef draw_one_example_colorful(example, save=\"test.png\"):\n    from pointflow_fig_colorful import colorful_pcd\n    import mitsuba\n    mitsuba.set_variant('scalar_rgb')\n    from mitsuba.core import Thread\n    from mitsuba.core.xml import load_file\n    xml_filename = \"tmp.xml\"\n    colorful_pcd(example, xml_filename)\n    Thread.thread().file_resolver().append(os.path.dirname(xml_filename))\n    scene = load_file(xml_filename)\n    sensor = scene.sensors()[0]\n    scene.integrator().render(scene, sensor)\n    film = sensor.film()\n    from mitsuba.core import Bitmap, Struct\n    img = film.bitmap(raw=True).convert(Bitmap.PixelFormat.RGB, Struct.Type.UInt8, srgb_gamma=True)\n    img.write(save)\n\n\ndef draw_examples(tag, examples, colorful=False):\n    os.makedirs(\"figures/{}\".format(tag), exist_ok=True)\n    for i, example in enumerate(examples):\n        if not colorful:\n            draw_one_example(example, window_width=720, window_height=600, show=False, save=\"figures/{}/example_{}.png\".format(tag, i))\n        
else:\n            draw_one_example_colorful(example, save=\"figures/{}/example_{}.png\".format(tag, i))\n\n    matplotlib.rcParams.update({'font.size': 13, 'font.weight': 'bold'})\n    fig, axes = plt.subplots(3, 5, figsize=(15, 9))\n    for i in range(15):\n        ax = axes[i//5][i%5]\n        im = Image.open(\"figures/{}/example_{}.png\".format(tag, i))\n        w, h = im.size\n        im = im.crop((w * 0.15, h * 0.25, w * 0.85, h * 0.95))\n        ax.set_xlim([0,1])\n        ax.set_ylim([0,1])\n        ax.imshow(im, extent=[0, 1, 0, 1])\n        ax.set_title(list(def_ranges[\"corruption\"].values())[i], y=0)\n        ax.axis('off')\n    plt.tight_layout(pad=0, h_pad=0, w_pad=0)\n    plt.savefig(\"figures/{}/examples.pdf\".format(tag))\n\n\n# Running this once is enough to build the pickle file containing examples.\nif not os.path.isfile(examples_file):\n    build_examples(example_ids, severity)\n\n# Loads examples.\nall_examples = load_examples()\n\n# Draws the example demo (one object point cloud with different corruptions).\nfor example_id in example_ids:\n    # Setting `colorful` to True draws colorful images but requires the mitsuba module.\n    draw_examples(example_id, all_examples[example_id], colorful=False)\n"
  },
  {
    "path": "visualize/main_results.py",
    "content": "import os\nimport statistics\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport matplotlib\nimport matplotlib.pylab as pylab\nfrom config import result_dir\nfrom config import def_ranges\n\nDATA_DIR = result_dir\nDATA_FILE = \"data.csv\"\nmarkers = ['o', 'v']\ncolors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'dimgray', 'darkorange', 'seagreen', 'navy', 'purple', 'skyblue', 'olive', 'salmon', 'saddlebrown', 'slategray']\n\n\ndef format_data():\n    data = []\n\n    for filename in os.listdir(DATA_DIR):\n        if len(filename) < 4 or filename[-4:] != \".txt\":\n            continue\n        filepath = os.path.join(DATA_DIR, filename)\n        with open(filepath, 'r') as f:\n            log_str = f.readlines()\n            if len(log_str) == 0:\n                print(\"Error no content\", filename)\n                continue\n\n            log_str = log_str[-1]\n            if log_str[-1] == \"\\n\":\n                log_str = log_str[:-1]\n\n        one_data = []\n        tokens = log_str.split(' ')\n        next_value = False\n        for token in tokens:\n            if not next_value and token[-1] == \":\":\n                next_value = True\n            elif next_value:\n                if token.isnumeric():\n                    one_data.append(int(token))\n                elif token.replace('.','',1).isdigit():\n                    one_data.append(float(token))\n                else:\n                    if token == \"nan\":\n                        print(\"Error nan\", filename)\n                        token = 0\n                    one_data.append(token)\n                next_value = False\n\n        tokens = filename.split('.')[-2].split('_')\n        if tokens[-1] == \"clean\":\n            one_data[1] = \"none\"\n            train_mode = '_'.join(tokens[1:len(tokens)-1])\n        else:\n            for index in range(len(tokens)):\n                if one_data[1].split('_')[0] == tokens[index]:\n              
      break\n            train_mode = '_'.join(tokens[1:index])\n        one_data.insert(1, train_mode if train_mode != \"\" else \"none\")\n        one_data.insert(5, 100 - one_data[4] * 100)\n        one_data.insert(7, 100 - one_data[6] * 100)\n        \n        if one_data[0] not in def_ranges[\"model\"]:\n            continue\n        if one_data[1] not in def_ranges[\"train_mode\"]:\n            continue\n        if one_data[2] not in def_ranges[\"corruption\"]:\n            continue\n\n        data.append(one_data)\n\n    df_data = pd.DataFrame(data, columns=['model', 'train_mode', 'corruption', 'severity', 'acc', 'err', 'class_acc', 'class_err'])\n    df_data = df_data.sort_values(['model', 'train_mode', 'corruption', 'severity'])\n    df_data.to_csv(DATA_FILE, index=False)\n\n\ndef update_font(font):\n    params = {'legend.fontsize': font,\n             'axes.labelsize': font,\n             'axes.titlesize': font,\n             'xtick.labelsize': font,\n             'ytick.labelsize': font}\n    pylab.rcParams.update(params)\n\n\ndef load_data():\n    return pd.read_csv(DATA_FILE)\n\n\ndef draw_train_mode_comparison(df=None, figure_path=None):\n    update_font('x-small')\n    if df is None:\n        df = load_data()\n\n    corruption_list = list(def_ranges[\"corruption\"].keys())[:-1]\n    model_list = list(def_ranges[\"model\"].keys())\n    train_mode_list = [\"none\"] + list(def_ranges[\"train_mode\"].keys())[:4]\n    metric = \"err\"\n\n    dim0, dim1 = len(corruption_list), len(model_list)\n    fig, ax = plt.subplots(dim0, dim1, figsize=(dim1 * 3, dim0 * 3))\n\n    for corruption_id, corruption in enumerate(corruption_list):\n        for model_id, model in enumerate(model_list):\n            for train_mode_id, train_mode in enumerate(train_mode_list):\n                selected_data = df[(df[\"model\"] == model) & (df[\"train_mode\"] == train_mode) & (df[\"corruption\"] == corruption)].sort_values(\"severity\")\n                label = 
\"{}\".format(def_ranges[\"train_mode\"][train_mode])\n                if selected_data[metric].shape != 5:\n                    print(model, train_mode, corruption)\n                    continue\n                ax[corruption_id, model_id].plot(def_ranges[\"severity\"], selected_data[metric], \"{}{}-\".format(colors[train_mode_id], markers[0]), label=label)\n            \n            ax[corruption_id, model_id].set_title(\"Model {} and Corruption {}\".format(def_ranges[\"model\"][model], def_ranges[\"corruption\"][corruption]))\n            # ax[corruption_id, model_id].set_xlabel(\"Severity\")\n            # ax[corruption_id, model_id].set_ylabel(\"Accuracy\")\n            ax[corruption_id, model_id].legend()\n            ax[corruption_id, model_id].set_xticks([1,2,3,4,5])\n            # ax[corruption_id, model_id].set_ylim([0,100])\n\n    plt.tight_layout(pad=0, w_pad=-2)\n    if figure_path is None:\n        plt.savefig(\"figures/train_mode_comparison.pdf\")\n    else:\n        plt.savefig(figure_path)\n\n\ndef draw_model_comparison(df=None, figure_path=None, mode=\"train\", metric=\"err\", split=0):\n    update_font('x-small')\n    if df is None:\n        df = load_data()\n\n    corruption_list = list(def_ranges[\"corruption\"].keys())[:-1]\n    if split == 1:\n        corruption_list = corruption_list[:8]\n    elif split == 2:\n         corruption_list = corruption_list[8:]\n    model_list = list(def_ranges[\"model\"].keys())\n    if mode == \"train\":\n        train_mode_list = [\"none\"] + list(def_ranges[\"train_mode\"].keys())[:4]\n    else:\n        train_mode_list = [\"none\"] + list(def_ranges[\"train_mode\"].keys())[4:6]\n    dim0, dim1 = len(corruption_list), len(train_mode_list)\n    fig, ax = plt.subplots(dim0, dim1, figsize=(dim1 * 3, dim0 * 2.3))\n\n    for corruption_id, corruption in enumerate(corruption_list):\n        for train_mode_id, train_mode in enumerate(train_mode_list):\n            for model_id, model in enumerate(model_list):\n      
          selected_data = df[(df[\"model\"] == model) & (df[\"train_mode\"] == train_mode) & (df[\"corruption\"] == corruption)].sort_values(\"severity\")\n                label = \"{}\".format(def_ranges[\"model\"][model])\n                if selected_data[metric].shape[0] != 5:\n                    print(model, train_mode, corruption)\n                    continue\n                ax[corruption_id, train_mode_id].plot(def_ranges[\"severity\"], selected_data[metric], color=colors[model_id], marker=markers[0], linestyle='-', label=label)\n            \n            ax[corruption_id, train_mode_id].set_title(\"{} {} and {} corruption\".format(def_ranges[\"train_mode\"][train_mode], \"training\" if mode == \"train\" else \"testing\", def_ranges[\"corruption\"][corruption]))\n            # ax[corruption_id, train_mode_id].set_xlabel(\"Severity\")\n            # ax[corruption_id, train_mode_id].set_ylabel(\"Accuracy\")\n            ax[corruption_id, train_mode_id].legend()\n            ax[corruption_id, train_mode_id].set_xticks([1,2,3,4,5])\n    if mode == \"train\" and split == 1:\n        plt.tight_layout(pad=2, w_pad=-3)\n    elif mode == \"train\" and split == 2:\n        plt.tight_layout(pad=2, w_pad=-1)\n    elif mode == \"test\" and split == 1:\n        plt.tight_layout(pad=2, w_pad=0)\n    else:\n        plt.tight_layout(pad=2, w_pad=1)\n\n    if figure_path is None:\n        plt.savefig(\"figures/model_comparison_{}_{}_{}.pdf\".format(mode, metric, split))\n    else:\n        plt.savefig(figure_path)\n    plt.clf()\n\n\ndef draw_corruption_comparison(df=None, figure_path=None):\n    if df is None:\n        df = load_data()\n\n    dim0, dim1 = len(def_ranges[\"model\"]), len(def_ranges[\"train_mode\"])\n    fig, ax = plt.subplots(dim0, dim1, figsize=(dim1 * 4, dim0 * 4))\n\n    for model_id, model in enumerate(list(def_ranges[\"model\"].keys())):\n        for train_mode_id, train_mode in enumerate(list(def_ranges[\"train_mode\"].keys())):\n            for 
corruption_id, corruption in enumerate(list(def_ranges[\"corruption\"].keys())):\n                if corruption == \"none\":\n                    continue\n                selected_data = df[(df[\"corruption\"] == corruption) & (df[\"train_mode\"] == train_mode) & (df[\"model\"] == model)].sort_values(\"severity\")\n                for metric_id, metric in enumerate(def_ranges[\"metric\"]):\n                    label = \"{}-{}\".format(corruption, metric)\n                    ax[model_id, train_mode_id].plot(def_ranges[\"severity\"], selected_data[metric], marker=markers[metric_id], color=colors[corruption_id], label=label if metric_id == 0 else None)\n            \n            ax[model_id, train_mode_id].set_title(\"Corruptions with Model {} and TrainMode {}\".format(model, train_mode))\n            # ax[model_id, train_mode_id].set_xlabel(\"Severity\")\n            # ax[model_id, train_mode_id].set_ylabel(\"Accuracy\")\n            ax[model_id, train_mode_id].legend()\n            ax[model_id, train_mode_id].set_xticks([1,2,3,4,5])\n\n    if figure_path is None:\n        plt.savefig(\"figures/corruption_comparison.pdf\")\n    else:\n        plt.savefig(figure_path)\n\n\ndef get_best_model(df=None):\n    if df is None:\n        df = load_data()\n\n    best_model = {}\n\n    for metric_id, metric in enumerate([\"acc\", \"class_acc\"]):\n        best_model[metric] = {}\n        for corruption_id, corruption in enumerate(list(def_ranges[\"corruption\"].keys())):\n            best_model[metric][corruption] = []\n            for model_id, model in enumerate(list(def_ranges[\"model\"].keys())):\n                best_model[metric][corruption].append([])\n                for train_mode_id, train_mode in enumerate(list(def_ranges[\"train_mode\"].keys())):\n                    selected_data = df[(df[\"corruption\"] == corruption) & (df[\"train_mode\"] == train_mode) & (df[\"model\"] == model)][metric]\n                    
best_model[metric][corruption][model_id].append(selected_data.sum()/5)\n                best_model[metric][corruption][model_id] = float(np.mean(np.array(best_model[metric][corruption][model_id])))\n            best_model[metric][corruption] = list(def_ranges[\"model\"].keys())[np.argmax(np.array(best_model[metric][corruption]))]\n    \n    data = []\n    for metric in best_model:\n        for corruption in best_model[metric]:\n            data.append([metric, corruption, best_model[metric][corruption]])\n\n    df_data = pd.DataFrame(data, columns=['metric', 'corruption', 'best_model'])\n    df_data.to_csv(\"best_model.csv\", index=False)\n\n\ndef get_best_train_mode(df=None):\n    if df is None:\n        df = load_data()\n\n    best_train_mode = {}\n\n    # Exclude the baseline and test-time adaptation modes up front so the\n    # argmax below indexes into the same candidate list that was scored.\n    candidate_modes = [t for t in def_ranges[\"train_mode\"] if t not in [\"bn\", \"tent\", \"none\", \"pgd\"]]\n\n    for metric_id, metric in enumerate([\"acc\", \"class_acc\"]):\n        best_train_mode[metric] = {}\n        for corruption_id, corruption in enumerate(list(def_ranges[\"corruption\"].keys())):\n            best_train_mode[metric][corruption] = []\n            for train_mode in candidate_modes:\n                best_train_mode[metric][corruption].append([])\n                for model_id, model in enumerate(list(def_ranges[\"model\"].keys())):\n                    selected_data = df[(df[\"corruption\"] == corruption) & (df[\"model\"] == model) & (df[\"train_mode\"] == train_mode)][metric]\n                    best_train_mode[metric][corruption][-1].append(selected_data.sum()/5)\n                best_train_mode[metric][corruption][-1] = float(np.mean(np.array(best_train_mode[metric][corruption][-1])))\n            best_train_mode[metric][corruption] = candidate_modes[np.argmax(np.array(best_train_mode[metric][corruption]))]\n    \n    data = []\n    for metric in best_train_mode:\n        for corruption in 
best_train_mode[metric]:\n            data.append([metric, corruption, best_train_mode[metric][corruption]])\n\n    df_data = pd.DataFrame(data, columns=['metric', 'corruption', 'best_train_mode'])\n    print(df_data)\n    df_data.to_csv(\"best_train_mode.csv\", index=False)\n                \n\ndef get_corruption_tables(df=None, metric=\"acc\"):\n    if df is None:\n        df = load_data()\n\n    all_tables = \"\"\n    for corruption_id, corruption in enumerate(list(def_ranges[\"corruption\"].keys())):\n        all_tables += \"{}\\n\".format(corruption)\n        data = []\n        for model_id, model in enumerate(list(def_ranges[\"model\"].keys())):\n            data.append([])\n            for train_mode_id, train_mode in enumerate(list(def_ranges[\"train_mode\"].keys())):\n                selected_data = df[(df[\"corruption\"] == corruption) & (df[\"model\"] == model) & (df[\"train_mode\"] == train_mode)][metric].sum()/5\n                data[-1].append(selected_data)\n            data[-1].append(sum(data[-1])/len(data[-1]))\n            data[-1] = [model] + data[-1]\n\n        data.append([\"average\"])\n        data[-1] += [statistics.mean([data[model_id][train_mode_id+1] for model_id, model in enumerate(def_ranges[\"model\"])]) for train_mode_id, train_mode in enumerate(list(def_ranges[\"train_mode\"].keys()))]\n\n        df_data = pd.DataFrame(data, columns=[\"model\"]+list(def_ranges[\"train_mode\"].keys())+[\"average\"])\n        df_data.to_csv(\"tables/{}_table.csv\".format(corruption), index=False)\n\n        with open(\"tables/{}_table.csv\".format(corruption), 'r') as f:\n            all_tables += f.read()\n        all_tables += \"\\n\\n\"\n\n    with open(\"corruption_tables.csv\", 'w') as f:\n        f.write(all_tables)\n\n\ndef draw_teaser(df=None):\n    update_font(\"xx-small\")\n    if df is None:\n        df = load_data()\n\n    model_dim = len(def_ranges[\"model\"])\n    fig, ax = plt.subplots(figsize=(6, 1.2))\n\n    bar_width = 0.4\n    
bar_params = {\n        \"edgecolor\": None,\n    }\n    for bar_id, bar in enumerate([\"Clean Inputs\", \"Corrupted Inputs\"]):\n        bar_mean = []\n        if bar == \"Clean Inputs\":\n            bar_std = [0.1, 0.2, 0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.3]\n        else:\n            bar_std = [0.2, 0.2, 0.2, 0.3, 0.2, 0.2, 0.3, 0.2, 0.2, 0.3]\n        for model_id, model in enumerate(list(def_ranges[\"model\"].keys())):\n            if bar == \"Clean Inputs\":\n                one_bar_mean = (100 - df[(df[\"corruption\"] == \"none\") & (df[\"model\"] == model) & (df[\"train_mode\"] == \"none\")][\"acc\"].mean() * 100)\n                color = 'royalblue'\n            else:\n                one_bar_mean = (100 - df[(df[\"corruption\"] != \"none\") & (df[\"model\"] == model) & (df[\"train_mode\"] == \"none\")][\"acc\"].mean() * 100)\n                color = '#F15757'\n            bar_mean.append(one_bar_mean)\n        ax.bar(np.arange(model_dim) + (bar_id - 0.5) * bar_width, bar_mean, bar_width, color=color, label=bar, **bar_params)\n    \n    ax.set_ylabel(\"Error Rate (%)\")\n    ax.set_ylim([0, 40])\n    ax.set_xticks(np.arange(model_dim))\n    ax.set_xticklabels(list(def_ranges[\"model\"].values()))\n    ax.legend(ncol=2)\n    for axis in ['top','bottom','left','right']:\n        ax.spines[axis].set_linewidth(1)\n    plt.tight_layout(pad=0.2)\n\n    plt.savefig(\"figures/teaser.png\")\n\n\ndef get_table_1(df=None):\n    if df is None:\n        df = load_data()\n\n    train_mode_list = [\"cutmix_r\", \"cutmix_k\", \"mixup\", \"rsmix\", \"pgd\"]\n    format_string =  \" & \".join([\"{: <10}\"] + [\"{:4.1f}\" for i in range (len(train_mode_list))]) + \" \\\\\\\\\"\n    print(\" & \".join([\"\"] + train_mode_list))\n    for model, model_name in def_ranges[\"model\"].items():\n        row = [model_name]\n        for train_mode in train_mode_list:\n            data = (100 - df[(df[\"corruption\"] == \"none\") & (df[\"model\"] == model) & (df[\"train_mode\"] == 
train_mode)][\"acc\"].mean() * 100)\n            row.append(data)\n        print(format_string.format(*row))\n\n\ndef get_table_2():\n    df = load_data()\n\n    format_string =  \" & \".join([\"{: <18}\"] + [\"{:4.1f}\" for i in range (len(def_ranges[\"corruption\"]))]) + \" \\\\\\\\\"\n    print(\" & \".join([\"\", \"average\"] + list(def_ranges[\"corruption\"].values())[:-1]))\n    rows = []\n    for model, model_name in def_ranges[\"model\"].items():\n        row = [model_name]\n        for corruption, corruption_name in def_ranges[\"corruption\"].items():\n            if corruption == \"none\":\n                continue\n            data = df[(df[\"corruption\"] == corruption) & (df[\"model\"] == model) & (df[\"train_mode\"] == \"none\")][\"err\"].mean()\n            row.append(data)\n        average = sum(row[1:]) / len(row[1:])\n        row.insert(1, average)\n        print(format_string.format(*row))\n        rows.append(row)\n    row = [\"Average\"]\n    for col_id in range(1, len(def_ranges[\"corruption\"])+1):\n        row.append(np.mean([r[col_id] for r in rows]))\n    print(format_string.format(*row))\n    \n\ndef get_table_3():\n    df = load_data()\n\n    train_mode_list = [\"cutmix_r\", \"cutmix_k\", \"mixup\", \"rsmix\", \"pgd\"]\n    corruption_list = list(def_ranges[\"corruption\"].keys())\n    corruption_groups = [corruption_list[:5], corruption_list[5:10], corruption_list[10:15]]\n    format_string =  \" & \".join([\"{: <10}\"] + [\"{:4.1f}\" for i in range (20)]) + \" \\\\\\\\\"\n    print(\" & \".join([\"\", \"average\"] + [\"Average\", \"density\", \"noise\", \"trans\"] * 5))\n    rows = []\n    for model, model_name in def_ranges[\"model\"].items():\n        if model_name == \"PointMLP-Elite\":\n            continue\n        row = [model_name]\n        x = df[(df[\"model\"] == model) & (df[\"train_mode\"] == \"none\")][\"err\"].mean()\n        row.append(x)\n        for train_mode in train_mode_list:\n            group_data = []\n           
 for corruption_group in corruption_groups:\n                data = []\n                for corruption in corruption_group:\n                    x = df[(df[\"corruption\"] == corruption) & (df[\"model\"] == model) & (df[\"train_mode\"] == train_mode)][\"err\"].mean()\n                    data.append(x)\n                group_data.append(np.mean(data))\n            row.append(np.mean(group_data))\n            row += group_data\n        print(format_string.format(*row))\n        rows.append(row)\n    row = [\"Average\"]\n    for col_id in range(1, 22):\n        row.append(np.mean([r[col_id] for r in rows]))\n    print(format_string.format(*row))\n\n\ndef get_table_4():\n    df = load_data()\n\n    # train_mode_list = [\"bn\", \"tent\"]\n    train_mode_list = [\"megamerger\"]\n    corruption_list = list(def_ranges[\"corruption\"].keys())\n    corruption_groups = [corruption_list[:5], corruption_list[5:10], corruption_list[10:15]]\n    format_string =  \" & \".join([\"{: <10}\"] + [\"{:6.3f}\" for i in range (4 * len(train_mode_list))]) + \" \\\\\\\\\"\n    print(\" & \".join([\"\"] + [\"Average\", \"density\", \"noise\", \"trans\"] * len(train_mode_list)))\n    rows = []\n    for model, model_name in def_ranges[\"model\"].items():\n        if model_name == \"PointMLP-Elite\":\n            continue\n        row = [model_name]\n        for train_mode in train_mode_list:\n            group_data = []\n            for corruption_group in corruption_groups:\n                data = []\n                for corruption in corruption_group:\n                    x = df[(df[\"corruption\"] == corruption) & (df[\"model\"] == model) & (df[\"train_mode\"] == train_mode)][\"err\"].mean()\n                    data.append(x)\n                group_data.append(np.mean(data))\n            row.append(np.mean(group_data))\n            row += group_data\n        print(format_string.format(*row))\n        rows.append(row)\n    row = [\"Average\"]\n    for col_id in range(1, 4 * 
len(train_mode_list)+1):\n        row.append(np.mean([r[col_id] for r in rows]))\n    print(format_string.format(*row))\n\n\ndef draw_severity_comparison():\n    df = load_data()\n\n    model_list = list(def_ranges[\"model\"].keys())\n    model_list.remove(\"pointMLP2\")\n    corruption_list = list(def_ranges[\"corruption\"].keys())\n    corruption_list.remove(\"none\")\n    corruption_list.remove(\"lidar\")\n    corruption_list.remove(\"occlusion\")\n    train_mode_list = list(def_ranges[\"train_mode\"].keys())\n    train_mode_list.remove(\"none\")\n    train_mode_list.remove(\"bn\")\n    train_mode_list.remove(\"tent\")\n    train_mode_list.remove(\"pgd\")\n    train_mode_list.remove(\"megamerger\")\n    train_mode_list = [\"none\"] + train_mode_list\n\n    figure_dim = len(model_list)+1\n    bar_dim = len(train_mode_list)\n    # fig, axes = plt.subplots(figure_dim, 1, figsize=(8, figure_dim * 4))\n    fig, ax = plt.subplots(figsize=(6.0, 2.4))\n\n    bar_width = 0.18\n    bar_params = {\n        \"edgecolor\": None,\n    }\n\n    figure_data_all = []\n    for model_id, model in enumerate(model_list):\n        # ax = axes[model_id]\n        figure_data = []\n        for i, severity in enumerate(def_ranges[\"severity\"]):\n            bar_data = []\n            for train_mode_id, train_mode in enumerate(train_mode_list):\n                x = df[(df[\"corruption\"] != \"none\") & (df[\"model\"] == model) & (df[\"train_mode\"] == train_mode) & (df[\"severity\"] == severity)][\"err\"].mean()\n                bar_data.append(x)\n            figure_data.append(bar_data)\n        figure_data_all.append(figure_data)\n    figure_data_all = np.array(figure_data_all)\n    print(figure_data_all)\n\n    for i, severity in enumerate(def_ranges[\"severity\"]):\n        bar_mean = np.mean(figure_data_all[:,i,:], axis=0)\n        bar_std = np.std(figure_data_all[:,i,:], axis=0)\n        ax.bar(np.arange(bar_dim) + (i - 2) * bar_width, bar_mean, bar_width, 
label=\"Severity-\"+str(severity), **bar_params)\n        ax.errorbar(np.arange(bar_dim) + (i - 2) * bar_width, bar_mean, yerr=bar_std, color=\"k\", fmt='o', markersize=2, capsize=4)\n\n    ax.set_ylim([0, 44])\n    ax.set_ylabel(\"Average Error Rate (%)\")\n    ax.set_xticks(np.arange(bar_dim))\n    ax.set_xticklabels([def_ranges[\"train_mode\"][train_mode] for train_mode in train_mode_list])\n    # ax.set_title(\"All\")\n    ax.legend(ncol=3)\n    for axis in ['top','bottom','left','right']:\n        ax.spines[axis].set_linewidth(1)\n    \n    plt.tight_layout(pad=0.2)\n\n    plt.savefig(\"figures/draw_severity_comparison.pdf\")\n\n\ndef draw_test_adaptation():\n    import copy\n    from collections import OrderedDict\n    df = load_data()\n    fig, ax = plt.subplots(figsize=(6.0, 2.4))\n\n    best_train = OrderedDict([(\"occlusion\", \"cutmix_k\"), (\"lidar\", \"rsmix\"), (\"rotation\", \"mixup\")])\n    test = [\"tent\", \"bn\"]\n    data = {\"train\": {}, \"bn\": {}, \"tent\": {}}\n    for severity in [1, 2, 3, 4, 5]:\n        for key in data:\n            data[key][severity] = []\n        for corruption, train_mode in best_train.items():\n            data[\"train\"][severity].append(df[(df[\"corruption\"] == corruption) & (df[\"train_mode\"] == train_mode) & (df[\"severity\"] == severity)][\"err\"].mean())\n            for test_mode in test:\n                data[test_mode][severity].append(df[(df[\"corruption\"] == corruption) & (df[\"train_mode\"] == test_mode) & (df[\"severity\"] == severity)][\"err\"].mean())\n    for mode in [\"train\", \"tent\", \"bn\"]:\n        data[mode][0] = [0,0,0]\n\n    bar_width = 0.06\n    default_bar_params = {\n        \"edgecolor\": 'black',\n        'linewidth': 0.5,\n    }\n    for mode_id, mode in enumerate([\"train\", \"tent\", \"bn\"]):\n        for severity in [1, 2, 3, 4, 5]:\n            print(mode, severity, data[mode][severity])\n            color = [0.9 - 0.18 * severity] * 3\n            color[mode_id] = 1\n      
      bar_params = copy.deepcopy(default_bar_params)\n            if severity == 5:\n                bar_params[\"label\"] = \"Best augmentation\" if mode == \"train\" else def_ranges[\"train_mode\"][mode]\n            ax.bar([i+(severity-0.5)*bar_width+5*mode_id*bar_width for i in range(3)], \n                    data[mode][severity],\n                    bar_width,\n                    color=tuple(color),\n                    **bar_params\n                    )\n    ax.set_xticks([i + 7.5 * bar_width for i in range(3)])\n    ax.set_xticklabels([def_ranges[\"corruption\"][corruption] for corruption, train_mode in best_train.items()])\n    ax.legend()\n    plt.tight_layout(pad=0)\n    plt.savefig(\"figures/draw_test_adaptation.pdf\")\n    \n\n\n# Sets the font of matplotlib.\nupdate_font('xx-small')\n\n# Builds the csv data table once.\nif not os.path.isfile(DATA_FILE):\n    format_data()\n\n# Figure 9-16.\ndraw_model_comparison()\ndraw_train_mode_comparison()\ndraw_corruption_comparison()\n\nget_best_model()\nget_best_train_mode()\nget_corruption_tables()\n\n# Figure 1.\ndraw_teaser()\n\nget_table_1()\nget_table_2()\nget_table_3()\nget_table_4()\n\n# Figure 4.\ndraw_severity_comparison()\ndraw_test_adaptation()\n"
  },
  {
    "path": "visualize/pointflow_fig_colorful.py",
    "content": "import numpy as np\n\ndef standardize_bbox(pcl, points_per_object):\n    # pt_indices = np.random.choice(pcl.shape[0], points_per_object, replace=False)\n    # np.random.shuffle(pt_indices)\n    # pcl = pcl[pt_indices] # n by 3\n    mins = np.amin(pcl, axis=0)\n    maxs = np.amax(pcl, axis=0)\n    center = ( mins + maxs ) / 2.\n    scale = np.amax(maxs-mins)\n    print(\"Center: {}, Scale: {}\".format(center, scale))\n    result = ((pcl - center)/scale).astype(np.float32) # [-0.5, 0.5]\n    return result\n\nxml_head = \\\n\"\"\"\n<scene version=\"0.6.0\">\n    <integrator type=\"path\">\n        <integer name=\"maxDepth\" value=\"-1\"/>\n    </integrator>\n    <sensor type=\"perspective\">\n        <float name=\"farClip\" value=\"100\"/>\n        <float name=\"nearClip\" value=\"0.1\"/>\n        <transform name=\"toWorld\">\n            <lookat origin=\"3,3,3\" target=\"0,0,0\" up=\"0,0,1\"/>\n        </transform>\n        <float name=\"fov\" value=\"25\"/>\n        \n        <sampler type=\"ldsampler\">\n            <integer name=\"sampleCount\" value=\"256\"/>\n        </sampler>\n        <film type=\"hdrfilm\">\n            <integer name=\"width\" value=\"1600\"/>\n            <integer name=\"height\" value=\"1200\"/>\n            <rfilter type=\"gaussian\"/>\n            <boolean name=\"banner\" value=\"false\"/>\n        </film>\n    </sensor>\n    \n    <bsdf type=\"roughplastic\" id=\"surfaceMaterial\">\n        <string name=\"distribution\" value=\"ggx\"/>\n        <float name=\"alpha\" value=\"0.05\"/>\n        <float name=\"intIOR\" value=\"1.46\"/>\n        <rgb name=\"diffuseReflectance\" value=\"1,1,1\"/> <!-- default 0.5 -->\n    </bsdf>\n    \n\"\"\"\n\nxml_ball_segment = \\\n\"\"\"\n    <shape type=\"sphere\">\n        <float name=\"radius\" value=\"0.025\"/>\n        <transform name=\"toWorld\">\n            <translate x=\"{}\" y=\"{}\" z=\"{}\"/>\n        </transform>\n        <bsdf type=\"diffuse\">\n            <rgb 
name=\"reflectance\" value=\"{},{},{}\"/>\n        </bsdf>\n    </shape>\n\"\"\"\n\nxml_tail = \\\n\"\"\"\n    <shape type=\"rectangle\">\n        <ref name=\"bsdf\" id=\"surfaceMaterial\"/>\n        <transform name=\"toWorld\">\n            <scale x=\"10\" y=\"10\" z=\"1\"/>\n            <translate x=\"0\" y=\"0\" z=\"-0.5\"/>\n        </transform>\n    </shape>\n    \n    <shape type=\"rectangle\">\n        <transform name=\"toWorld\">\n            <scale x=\"10\" y=\"10\" z=\"1\"/>\n            <lookat origin=\"-4,4,20\" target=\"0,0,0\" up=\"0,0,1\"/>\n        </transform>\n        <emitter type=\"area\">\n            <rgb name=\"radiance\" value=\"6,6,6\"/>\n        </emitter>\n    </shape>\n</scene>\n\"\"\"\n\ndef colormap(x,y,z):\n    vec = np.array([x,y,z])\n    vec = np.clip(vec, 0.001,1.0)\n    norm = np.sqrt(np.sum(vec**2))\n    vec /= norm\n    return [vec[0], vec[1], vec[2]]\n\n\ndef colorful_pcd(pcd_data, output_file):\n    xml_segments = [xml_head]\n\n    pcl = standardize_bbox(pcd_data, 2048)\n    pcl = pcl[:,[2,0,1]]\n    pcl[:,0] *= -1\n    pcl[:,2] += 0.0125\n\n    for i in range(pcl.shape[0]):\n        color = colormap(pcl[i,0]+0.5,pcl[i,1]+0.5,pcl[i,2]+0.5-0.0125)\n        xml_segments.append(xml_ball_segment.format(pcl[i,0],pcl[i,1],pcl[i,2], *color))\n    xml_segments.append(xml_tail)\n\n    xml_content = str.join('', xml_segments)\n\n    with open(output_file, 'w') as f:\n        f.write(xml_content)\n\n\n"
  }
]