[
  {
    "path": ".gitignore",
    "content": ".DS_Store\n*/.DS_Store\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 An Tao\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Point Cloud Datasets\n\nThis repository provides ShapeNetCore.v2, ShapeNetPart, ModelNet40 and ModelNet10 datasets in HDF5 format. For each shape in these datasets, we use farthest point sampling algorithm to uniformly sample 2,048 points from shape surface. All points are then centered and scaled. We follow the train/val/test split in official documents.\n\nWe also provide code to load and visualize our datasets with PyTorch 1.2 and Python 3.7. See `dataset.py` and run it to have a try.\n\nTo visualize, run `visualize.py` to generate XML file and use [Mitsuba](https://www.mitsuba-renderer.org/index.html) to render it. Our code is from this [repo](https://github.com/zekunhao1995/PointFlowRenderer). \n\n&nbsp;\n## Download link:\n\n- ShapeNetCore.v2 (0.98G)&ensp;[[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/06a3c383dc474179b97d/)&ensp;[[BaiduDisk]](https://pan.baidu.com/s/154As2kzHZczMipuoZIc0kg)\n- ShapeNetPart (338M)&ensp;[[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/c25d94e163454196a26b/)&ensp;[[BaiduDisk]](https://pan.baidu.com/s/1yi4bMVBE2mV8NqVRtNLoqw)\n- ModelNet40 (194M)&ensp;[[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/b3d9fe3e2a514def8097/)&ensp;[[BaiduDisk]](https://pan.baidu.com/s/1NQZgN8tvHVqQntxefcdVAg)\n- ModelNet10 (72.5M)&ensp;[[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/5414376f6afd41ce9b6d/)&ensp;[[BaiduDisk]](https://pan.baidu.com/s/1tfnKQ_yg3SfIgyLSwQ2E0g)\n\n&nbsp;\n## ShapeNetCore.v2\nShapeNetCore.v2 datset contains 51,127 pre-aligned shapes from 55 categories, which are split into 35,708 (70%) for training, 5,158 (10%) shapes for validation and 10,261 (20%) shapes for testing. In official document there should be 51,190 shapes in total, but 63 shapes are missing in original downloaded ShapeNetCore.v2 dataset from [here](https://www.shapenet.org/download/shapenetcore). 
\n\nThe 55 categories include: `airplane`, `bag`, `basket`, `bathtub`, `bed`, `bench`, `birdhouse`, `bookshelf`, `bottle`, `bowl`, `bus`, `cabinet`, `camera`, `can`, `cap`, `car`, `cellphone`, `chair`, `clock`, `dishwasher`, `earphone`, `faucet`, `file`, `guitar`, `helmet`, `jar`, `keyboard`, `knife`, `lamp`, `laptop`, `mailbox`, `microphone`, `microwave`, `monitor`, `motorcycle`, `mug`, `piano`, `pillow`, `pistol`, `pot`, `printer`, `remote_control`, `rifle`, `rocket`, `skateboard`, `sofa`, `speaker`, `stove`, `table`, `telephone`, `tin_can`, `tower`, `train`, `vessel`, `washer`.\n\nSome visualized point clouds in our ShapeNetCore.v2 dataset:\n<p float=\"left\">\n    <img src=\"image/shapenetcorev2_test37_earphone.png\" height=\"170\"/>\n    <img src=\"image/shapenetcorev2_test59_lamp.png\" height=\"170\"/> \n    <img src=\"image/shapenetcorev2_train4_tower.png\" height=\"170\"/>\n</p>\n&emsp;&emsp;&emsp;&emsp;&emsp;earphone&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;lamp&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;tower\n\n&nbsp;\n## ShapeNetPart\nShapeNetPart dataset contains 16,881 pre-aligned shapes from 16 categories, annotated with 50 segmentation parts in total. Most object categories are labeled with two to five segmentation parts. There are 12,137 (70%) shapes for training, 1,870 (10%) shapes for validation, and 2,874 (20%) shapes for testing. We also pack the segementation label in our dataset. The link for official dataset is [here](https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip).\n\nThe 16 categories include: `airplane`, `bag`, `cap`, `car`, `chair`, `earphone`, `guitar`, `knife`, `lamp`, `laptop`, `motorbike`, `mug`, `pistol`, `rocket`, `skateboard`, `table`.\n\nAlthough ShapeNetPart is made from ShapeNetCore, the number of points per shape in official ShapeNetPart dataset is not very large and sometimes less than 2,048. 
Thus the uniform sampling quality of our ShapeNetPart dataset is lower than that of our ShapeNetCore.v2 dataset.\n\nIn this dataset, we remap the segmentation label of each point into the range 0~49 according to its category. You can find an index mapping list in `dataset.py`.\n\nSome visualized point clouds in our ShapeNetPart dataset:\n<p float=\"left\">\n    <img src=\"image/shapenetpart_train4_airplane.png\" height=\"170\"/>\n    <img src=\"image/shapenetpart_train2_table.png\" height=\"170\"/>\n    <img src=\"image/shapenetpart_train13_chair.png\" height=\"170\"/>\n</p>\n&emsp;&emsp;&emsp;&emsp;&emsp;airplane&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp;table&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp;chair\n\n&nbsp;\n## ModelNet40\nThe ModelNet40 dataset contains 12,311 pre-aligned shapes from 40 categories, split into 9,843 (80%) shapes for training and 2,468 (20%) shapes for testing. The official dataset is available [here](http://modelnet.cs.princeton.edu/ModelNet40.zip).\n\nThe 40 categories include: `airplane`, `bathtub`, `bed`, `bench`, `bookshelf`, `bottle`, `bowl`, `car`, `chair`, `cone`, `cup`, `curtain`, `desk`, `door`, `dresser`, `flower_pot`, `glass_box`, `guitar`, `keyboard`, `lamp`, `laptop`, `mantel`, `monitor`, `night_stand`, `person`, `piano`, `plant`, `radio`, `range_hood`, `sink`, `sofa`, `stairs`, `stool`, `table`, `tent`, `toilet`, `tv_stand`, `vase`, `wardrobe`, `xbox`.\n\n**Note**: The widely used ModelNet40 dataset sampled at 2,048 points ([link](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip)) only contains 9,840 shapes for training, not the official 9,843.
Our ModelNet40 dataset fixes this problem and can be used as a drop-in replacement for the dataset mentioned above.\n\nSome visualized point clouds in our ModelNet40 dataset:\n<p float=\"left\">\n    <img src=\"image/modelnet40_train7_vase.png\" height=\"170\"/>\n    <img src=\"image/modelnet40_train10_bookshelf.png\" height=\"170\"/>\n    <img src=\"image/modelnet40_train14_plant.png\" height=\"170\" hspace=\"10\"/>\n</p>\n&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;vase&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;bookshelf&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;plant\n\n&nbsp;\n## ModelNet10\nThe ModelNet10 dataset is a subset of ModelNet40, containing 4,899 pre-aligned shapes from 10 categories. There are 3,991 (80%) shapes for training and 908 (20%) shapes for testing. The official dataset is available [here](http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip).\n\nThe 10 categories include: `bathtub`, `bed`, `chair`, `desk`, `dresser`, `monitor`, `night_stand`, `sofa`, `table`, `toilet`.\n\n&nbsp;\n## Dataset performance\nThe repositories below use our datasets:\n\n- [antao97/UnsupervisedPointCloudReconstruction](https://github.com/antao97/UnsupervisedPointCloudReconstruction)\n- coming soon ...\n\n&nbsp;\n\n#### Reference repos:\n\n- [charlesq34/pointnet](https://github.com/charlesq34/pointnet)\n- [charlesq34/pointnet2](https://github.com/charlesq34/pointnet2)\n- [stevenygd/PointFlow](https://github.com/stevenygd/PointFlow)\n- [zekunhao1995/PointFlowRenderer](https://github.com/zekunhao1995/PointFlowRenderer)\n- [WangYueFt/dgcnn](https://github.com/WangYueFt/dgcnn)\n"
  },
  {
    "path": "dataset.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Author: An Tao\n@Contact: ta19@mails.tsinghua.edu.cn\n@File: dataset.py\n@Time: 2020/1/2 10:26 AM\n\"\"\"\n\nimport os\nimport torch\nimport json\nimport h5py\nfrom glob import glob\nimport numpy as np\nimport torch.utils.data as data\n\n\nshapenetpart_cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4, \n                       'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9, \n                       'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}\nshapenetpart_seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]\nshapenetpart_seg_start_index = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]\n\n\ndef translate_pointcloud(pointcloud):\n    xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])\n    xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])\n       \n    translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')\n    return translated_pointcloud\n\n\ndef jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):\n    N, C = pointcloud.shape\n    pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)\n    return pointcloud\n\n\ndef rotate_pointcloud(pointcloud):\n    theta = np.pi*2 * np.random.rand()\n    rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])\n    pointcloud[:,[0,2]] = pointcloud[:,[0,2]].dot(rotation_matrix) # random rotation (x,z)\n    return pointcloud\n\n\nclass Dataset(data.Dataset):\n    def __init__(self, root, dataset_name='modelnet40', class_choice=None,\n            num_points=2048, split='train', load_name=True, load_file=True,\n            segmentation=False, random_rotate=False, random_jitter=False, \n            random_translate=False):\n\n        assert dataset_name.lower() in ['shapenetcorev2', 'shapenetpart', \n            'modelnet10', 'modelnet40', 'shapenetpartpart']\n        assert 
num_points <= 2048        \n\n        if dataset_name in ['shapenetcorev2', 'shapenetpart', 'shapenetpartpart']:\n            assert split.lower() in ['train', 'test', 'val', 'trainval', 'all']\n        else:\n            assert split.lower() in ['train', 'test', 'all']\n\n        if dataset_name not in ['shapenetpart'] and segmentation == True:\n            raise AssertionError\n\n        self.root = os.path.join(root, dataset_name + '_hdf5_2048')\n        self.dataset_name = dataset_name\n        self.class_choice = class_choice\n        self.num_points = num_points\n        self.split = split\n        self.load_name = load_name\n        self.load_file = load_file\n        self.segmentation = segmentation\n        self.random_rotate = random_rotate\n        self.random_jitter = random_jitter\n        self.random_translate = random_translate\n        \n        self.path_h5py_all = []\n        self.path_name_all = []\n        self.path_file_all = []\n\n        if self.split in ['train', 'trainval', 'all']:   \n            self.get_path('train')\n        if self.dataset_name in ['shapenetcorev2', 'shapenetpart', 'shapenetpartpart']:\n            if self.split in ['val', 'trainval', 'all']: \n                self.get_path('val')\n        if self.split in ['test', 'all']:   \n            self.get_path('test')\n\n        data, label, seg = self.load_h5py(self.path_h5py_all)\n\n        if self.load_name or self.class_choice != None:\n            self.name = np.array(self.load_json(self.path_name_all))    # load label name\n\n        if self.load_file:\n            self.file = np.array(self.load_json(self.path_file_all))    # load file name\n        \n        self.data = np.concatenate(data, axis=0)\n        self.label = np.concatenate(label, axis=0) \n        if self.segmentation:\n            self.seg = np.concatenate(seg, axis=0) \n\n        if self.class_choice != None:\n            indices = (self.name == class_choice)\n            self.data = self.data[indices]\n   
         self.label = self.label[indices]\n            self.name = self.name[indices]\n            if self.segmentation:\n                self.seg = self.seg[indices]\n                id_choice = shapenetpart_cat2id[class_choice]\n                self.seg_num_all = shapenetpart_seg_num[id_choice]\n                self.seg_start_index = shapenetpart_seg_start_index[id_choice]\n            if self.load_file:\n                self.file = self.file[indices]\n        elif self.segmentation:\n            self.seg_num_all = 50\n            self.seg_start_index = 0\n\n    def get_path(self, type):\n        path_h5py = os.path.join(self.root, '%s*.h5'%type)\n        paths = glob(path_h5py)\n        paths_sort = [os.path.join(self.root, type + str(i) + '.h5') for i in range(len(paths))]\n        self.path_h5py_all += paths_sort\n        if self.load_name:\n            paths_json = [os.path.join(self.root, type + str(i) + '_id2name.json') for i in range(len(paths))]\n            self.path_name_all += paths_json\n        if self.load_file:\n            paths_json = [os.path.join(self.root, type + str(i) + '_id2file.json') for i in range(len(paths))]\n            self.path_file_all += paths_json\n        return \n\n    def load_h5py(self, path):\n        all_data = []\n        all_label = []\n        all_seg = []\n        for h5_name in path:\n            f = h5py.File(h5_name, 'r+')\n            data = f['data'][:].astype('float32')\n            label = f['label'][:].astype('int64')\n            if self.segmentation:\n                seg = f['seg'][:].astype('int64')\n            f.close()\n            all_data.append(data)\n            all_label.append(label)\n            if self.segmentation:\n                all_seg.append(seg)\n        return all_data, all_label, all_seg\n\n    def load_json(self, path):\n        all_data = []\n        for json_name in path:\n            j =  open(json_name, 'r+')\n            data = json.load(j)\n            all_data += data\n        
return all_data\n\n    def __getitem__(self, item):\n        point_set = self.data[item][:self.num_points]\n        label = self.label[item]\n        if self.load_name:\n            name = self.name[item]  # get label name\n        if self.load_file:\n            file = self.file[item]  # get file name\n\n        if self.random_rotate:\n            point_set = rotate_pointcloud(point_set)\n        if self.random_jitter:\n            point_set = jitter_pointcloud(point_set)\n        if self.random_translate:\n            point_set = translate_pointcloud(point_set)\n\n        # convert numpy array to pytorch Tensor\n        point_set = torch.from_numpy(point_set)\n        label = torch.from_numpy(np.array([label]).astype(np.int64))\n        label = label.squeeze(0)\n        \n        if self.segmentation:\n            seg = self.seg[item]\n            seg = torch.from_numpy(seg)\n            return point_set, label, seg, name, file\n        else:\n            return point_set, label, name, file\n\n    def __len__(self):\n        return self.data.shape[0]\n\n\nif __name__ == '__main__':\n    root = os.getcwd()\n\n    # choose dataset name from 'shapenetcorev2', 'shapenetpart', 'modelnet40' and 'modelnet10'\n    dataset_name = 'shapenetcorev2'\n\n    # choose split type from 'train', 'test', 'all', 'trainval' and 'val'\n    # only shapenetcorev2 and shapenetpart dataset support 'trainval' and 'val'\n    split = 'train'\n\n    d = Dataset(root=root, dataset_name=dataset_name, num_points=2048, split=split)\n    print(\"datasize:\", d.__len__())\n\n    item = 0\n    ps, lb, n, f = d[item]\n    print(ps.size(), ps.type(), lb.size(), lb.type(), n, f) "
  },
  {
    "path": "visualize.py",
    "content": "import os\nimport numpy as np\n\ndef standardize_bbox(pcl, points_per_object):\n    pt_indices = np.random.choice(pcl.shape[0], points_per_object, replace=False)\n    np.random.shuffle(pt_indices)\n    pcl = pcl[pt_indices] # n by 3\n    mins = np.amin(pcl, axis=0)\n    maxs = np.amax(pcl, axis=0)\n    center = ( mins + maxs ) / 2.\n    scale = np.amax(maxs-mins)\n    print(\"Center: {}, Scale: {}\".format(center, scale))\n    result = ((pcl - center)/scale).astype(np.float32) # [-0.5, 0.5]\n    return result\n\nxml_head = \\\n\"\"\"\n<scene version=\"0.5.0\">\n    <integrator type=\"path\">\n        <integer name=\"maxDepth\" value=\"-1\"/>\n    </integrator>\n    <sensor type=\"perspective\">\n        <float name=\"farClip\" value=\"100\"/>\n        <float name=\"nearClip\" value=\"0.1\"/>\n        <transform name=\"toWorld\">\n            <lookat origin=\"3,3,3\" target=\"0,0,0\" up=\"0,0,1\"/>\n        </transform>\n        <float name=\"fov\" value=\"25\"/>\n        \n        <sampler type=\"ldsampler\">\n            <integer name=\"sampleCount\" value=\"256\"/>\n        </sampler>\n        <film type=\"ldrfilm\">\n            <integer name=\"width\" value=\"1600\"/>\n            <integer name=\"height\" value=\"1200\"/>\n            <rfilter type=\"gaussian\"/>\n            <boolean name=\"banner\" value=\"false\"/>\n        </film>\n    </sensor>\n    \n    <bsdf type=\"roughplastic\" id=\"surfaceMaterial\">\n        <string name=\"distribution\" value=\"ggx\"/>\n        <float name=\"alpha\" value=\"0.05\"/>\n        <float name=\"intIOR\" value=\"1.46\"/>\n        <rgb name=\"diffuseReflectance\" value=\"1,1,1\"/> <!-- default 0.5 -->\n    </bsdf>\n    \n\"\"\"\n\nxml_ball_segment = \\\n\"\"\"\n    <shape type=\"sphere\">\n        <float name=\"radius\" value=\"0.02\"/>\n        <transform name=\"toWorld\">\n            <translate x=\"{}\" y=\"{}\" z=\"{}\"/>\n            <scale value=\"0.7\"/>\n        </transform>\n        <bsdf 
type=\"diffuse\">\n            <rgb name=\"reflectance\" value=\"{},{},{}\"/>\n        </bsdf>\n    </shape>\n\"\"\"\n\nxml_tail = \\\n\"\"\"\n    <shape type=\"rectangle\">\n        <ref name=\"bsdf\" id=\"surfaceMaterial\"/>\n        <transform name=\"toWorld\">\n            <scale x=\"10\" y=\"10\" z=\"10\"/>\n            <translate x=\"0\" y=\"0\" z=\"-0.5\"/>\n        </transform>\n    </shape>\n    \n    <shape type=\"rectangle\">\n        <transform name=\"toWorld\">\n            <scale x=\"10\" y=\"10\" z=\"1\"/>\n            <lookat origin=\"-4,4,20\" target=\"0,0,0\" up=\"0,0,1\"/>\n        </transform>\n        <emitter type=\"area\">\n            <rgb name=\"radiance\" value=\"6,6,6\"/>\n        </emitter>\n    </shape>\n</scene>\n\"\"\"\n\ndef colormap(x,y,z):\n    vec = np.array([x,y,z])\n    vec = np.clip(vec, 0.001,1.0)\n    norm = np.sqrt(np.sum(vec**2))\n    vec /= norm\n    return [vec[0], vec[1], vec[2]]\n\ndef mitsuba(pcl, path, clr=None):\n    xml_segments = [xml_head]\n\n    # pcl = standardize_bbox(pcl, 2048)\n    pcl = pcl[:,[2,0,1]]\n    pcl[:,0] *= -1\n    h = np.min(pcl[:,2])\n\n    for i in range(pcl.shape[0]):\n        if clr == None:\n            color = colormap(pcl[i,0]+0.5,pcl[i,1]+0.5,pcl[i,2]+0.5)\n        else:\n            color = clr\n        if h < -0.25:\n            xml_segments.append(xml_ball_segment.format(pcl[i,0],pcl[i,1],pcl[i,2]-h-0.6875, *color))\n        else:\n            xml_segments.append(xml_ball_segment.format(pcl[i,0],pcl[i,1],pcl[i,2], *color))\n    xml_segments.append(xml_tail)\n\n    xml_content = str.join('', xml_segments)\n\n    with open(path, 'w') as f:\n        f.write(xml_content)\n\nif __name__ == '__main__':   \n    item = 0\n    split = 'train'\n    dataset_name = 'shapenetcorev2'\n    root = os.getcwd()\n    save_root = os.path.join(\"image\", dataset_name)\n    if not os.path.exists(save_root):\n        os.makedirs(save_root)\n\n    from dataset import Dataset\n    d = Dataset(root=root, 
dataset_name=dataset_name, \n                        num_points=2048, split=split, random_rotation=False, load_name=True)\n    print(\"datasize:\", d.__len__())\n\n    pts, lb, n = d[item]\n    print(pts.size(), pts.type(), lb.size(), lb.type(), n) \n    path = os.path.join(save_root, dataset_name + '_' + split + str(item) + '_' + str(n) + '.xml')\n    mitsuba(pts.numpy(), path)\n"
  }
]