[
  {
    "path": ".gitignore",
    "content": "__pycache__/\ntb/\n*.egg-info/\n.idea/"
  },
  {
    "path": "License.txt",
    "content": "MIT License\n\nCopyright (c) 2024 Jumin Lee\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Readme.md",
    "content": "<h1 align=center>\nSemCity: Semantic Scene Generation \n\nwith Triplane Diffusion\n</h1>\n\n![fig0](./figs/semcity.gif)\n\n> SemCity : Semantic Scene Generation with Triplane Diffusion\n> \n> Jumin Lee*, Sebin Lee*, Changho Jo, Woobin Im, Juhyeong Seon and Sung-Eui Yoon* \n\n[Paper](https://arxiv.org/abs/2403.07773) | [Project Page](https://sglab.kaist.ac.kr/SemCity)\n\n## 📌 Setup\nWe test our code on Ubuntu 20.04 with a single RTX 3090 or 4090 GPU.\n\n### Environment \n\n    git clone https://github.com/zoomin-lee/SemCity.git\n    conda create -n semcity \n    conda activate semcity\n    conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia\n    pip install blobfile matplotlib prettytable tensorboard tensorboardX scikit-learn tqdm\n    pip install --user -e .\n\n### Datasets\nWe use the SemanticKITTI and CarlaSC datasets. See [dataset.md](./dataset/dataset.md) for detailed data structure.\n\nPlease adjust the `sequences` folder path in `dataset/path_manager.py`.\n\n## 📌 Training\nTrain the Triplane Autoencoder and then the Triplane Diffusion.\nYou can set dataset using `--dataset kitti` or `--dataset carla`.\nIn/outpainting and semantic scene completion refinement are only possible with SemanticKITTI datasets.\n\n### Triplane Autoencoder\n\n    python scripts/train_ae_main.py --save_path exp/ae\n\nWhen you are finished training the triplane autoencoder, save the triplane. \nThe triplane is a proxy representation of the scene for triplane diffusion training.\n\n    python scripts/save_triplane.py --data_name voxels --save_tail .npy --resume {ae.pt path}\n\nIf you want to train semantic scene completion refinement, also save the triplane of the result of the ssc method (e.g. monoscene).\n\n    python scripts/save_triplane.py --data_name monoscene --save_tail _monoscene.npy --resume {ae.pt path}\n\n### Triplane Diffusion\n\nFor training for semantic scene generation or in/outpainting,\n\n    python scripts/train_diffusion_main.py --triplane_loss_type l2 --save_path exp/diff\n\nFor training semantic scene completion refinement,\n\n    python scripts/train_diffusion_main.py --ssc_refine --refine_dataset monoscene --triplane_loss_type l1 --save_path exp/diff\n\n## 📌 Sampling\nIn `dataset/path_manager.py`, adjust the triplane autoencoder and triplane diffusion `.pt` paths to `AE_PATH` and `DIFF_PATH`.\n\n![fig1](./figs/semcity.png)\n\nTo generate 3D semantic scene like `fig(a)`,\n\n    python sampling/generation.py --num_samples 10 --save_path exp/gen\n\nFor semantic scene completion refinement like `fig(b)`,\n\n    python sampling/ssc_refine.py --refine_dataset monoscene --save_path exp/ssc_refine\n\nCurrently, we're only releasing the code to outpaint twice the original scene.\n\n    python sampling/outpainting.py --load_path figs/000840.label --save_path exp/out\n\nFor inpainting, as in `fig(d)`, you can define the region (top right, top left, bottom right, bottom left) where you want to regenerate.\n\n    python sampling/inpainting.py --load_path figs/000840.label --save_path exp/in\n\n## 📌 Evaluation\n\nWe render our scene with [pyrender](https://pyrender.readthedocs.io/en/latest/index.html) and then evaluate it using [torch-fidelity](https://github.com/toshas/torch-fidelity). \n\n## Acknowledgement\nThe code is partly based on [guided-diffusion](https://github.com/openai/guided-diffusion), [Sin3DM](https://github.com/Sin3DM/Sin3DM) and [scene-scale-diffusion](https://github.com/zoomin-lee/scene-scale-diffusion). 
\n\n## Bibtex\nIf you find this code useful for your research, please consider citing our paper:\n\n    @inproceedings{lee2024semcity,\n        title={SemCity: Semantic Scene Generation with Triplane Diffusion},\n        author={Lee, Jumin and Lee, Sebin and Jo, Changho and Im, Woobin and Seon, Juhyeong and Yoon, Sung-Eui},\n        booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},\n        year={2024}\n    }\n\n## 📌 License\n\nThis project is released under the MIT License.\n"
  },
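The in/outpainting commands above consume a raw SemanticKITTI voxel grid (`figs/000840.label`). As a minimal sketch, this is how such a file is laid out, mirroring the `SemKITTI` loader in `dataset/kitti_dataset.py` (the `uint16` dtype and the 256x256x32 grid come from that loader; nothing else is assumed):

```python
# Parse a SemanticKITTI voxel .label file the way dataset/kitti_dataset.py does:
# a flat array of uint16 label ids filling a 256 x 256 x 32 grid.
import numpy as np

GRID = (256, 256, 32)

def load_voxel_labels(path: str) -> np.ndarray:
    labels = np.fromfile(path, dtype=np.uint16)
    return labels.reshape(GRID)

vox = load_voxel_labels("figs/000840.label")  # sample scene used by the demos above
print(vox.shape, np.unique(vox)[:10])         # raw ids, before learning_map remapping
```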
  {
    "path": "dataset/carla.yaml",
    "content": "color_map :\n  0 : [255, 255, 255]  # None\n  1 : [70, 70, 70]     # Building\n  2 : [100, 40, 40]    # Fences\n  3 : [55, 90, 80]     # Other\n  4 : [255, 255, 0 ]   # Pedestrian\n  5 : [153, 153, 153]  # Pole\n  6 : [157, 234, 50]   # RoadLines\n  7 : [0, 0, 255]      # Road\n  8 : [255, 255, 255]  # Sidewalk\n  9 : [0, 155, 0]      # Vegetation\n  10 : [255, 0, 0]     # Vehicle\n  11 : [102, 102, 156] # Wall\n  12 : [220, 220, 0]   # TrafficSign\n  13 : [70, 130, 180]  # Sky\n  14 : [255, 255, 255] # Ground\n  15 : [150, 100, 100] # Bridge\n  16 : [230, 150, 140] # RailTrack\n  17 : [180, 165, 180] # GuardRail\n  18 : [250, 170, 30]  # TrafficLight\n  19 : [110, 190, 160] # Static\n  20 : [170, 120, 50]  # Dynamic\n  21 : [45, 60, 150]   # Water\n  22 : [145, 170, 100] # Terrain\n\nlearning_map :\n  0 : 0\n  1 : 1\n  2 : 2\n  3 : 3\n  4 : 4\n  5 : 5\n  6 : 6\n  7 : 6\n  8 : 8\n  9 : 9\n  10: 10\n  11 : 2\n  12 : 5\n  13 : 3\n  14 : 7\n  15 : 3\n  16 : 3\n  17 : 2\n  18 : 5\n  19 : 3\n  20 : 3\n  21 : 3\n  22 : 7\n\nremap_color_map:\n  0 : [255, 255, 255]  # None\n  1 : [70, 70, 70]     # Building\n  2 : [100, 40, 40]    # Fences\n  3 : [55, 90, 80]     # Other\n  4 : [255, 255, 0]   # Pedestrian\n  5 : [153, 153, 153]  # Pole\n  6 : [0, 0, 255]      # Road\n  7 : [145, 170, 100] # Ground\n  8 : [240, 240, 240]  # Sidewalk\n  9 : [0, 155, 0]      # Vegetation\n  10 : [255, 0, 0]     # Vehicle\n\nlabel_to_names:\n  0 : Free\n  1 : Building\n  2 : Barrier\n  3 : Other\n  4 : Pedestrian\n  5 : Pole\n  6 : Road\n  7 : Ground\n  8 : Sidewalk\n  9 : Vegetation\n  10 : Vehicle\n\ncontent :\n  0 : 4166593275\n  1 : 42309744\n  2 : 8550180\n  3 : 478193\n  4 : 905663\n  5 : 2801091\n  6 : 6452733\n  7 : 229316930\n  8 : 112863867\n  9 : 29816894\n  10: 13839655\n  11 : 15581458\n  12 : 221821\n  13 : 0\n  14 : 7931550\n  15 : 467989\n  16 : 3354\n  17 : 9201043\n  18 : 61011\n  19 : 3796746\n  20 : 3217865\n  21 : 215372\n  22 : 79669695\n\nremap_content : \n  0 : 4.16659328e+09\n  1 : 4.23097440e+07\n  2 : 3.33326810e+07\n  3 : 8.17951900e+06\n  4 : 9.05663000e+05\n  5 : 3.08392300e+06\n  6 : 2.35769663e+08\n  7 : 8.76012450e+07\n  8 : 1.12863867e+08\n  9 : 2.98168940e+07\n  10 : 1.38396550e+07\n\nsplit: # sequence numbers\n  train:\n    - Town01_Heavy   \n    - Town02_Heavy   \n    - Town03_Heavy   \n    - Town04_Heavy   \n    - Town05_Heavy   \n    - Town06_Heavy\n    - Town01_Medium\n    - Town02_Medium\n    - Town03_Medium\n    - Town04_Medium\n    - Town05_Medium\n    - Town06_Medium\n    - Town01_Light\n    - Town02_Light\n    - Town03_Light\n    - Town04_Light\n    - Town05_Light\n    - Town06_Light\n\n  valid:\n    - Town10_Heavy\n    - Town10_Medium\n    - Town10_Light"
  },
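`CarlaDataset` applies `learning_map` as a flat numpy lookup table, which relies on the keys above being the contiguous raw ids 0..22. A small sketch of that remapping (values checked against the map above):

```python
# Remap raw CarlaSC ids (0..22) to the 11 training classes via a LUT,
# as dataset/carla_dataset.py does with np.asarray(list(learning_map.values())).
import numpy as np
import yaml

with open("dataset/carla.yaml") as f:
    cfg = yaml.safe_load(f)

lut = np.asarray(list(cfg["learning_map"].values()), dtype=np.uint8)
raw = np.array([0, 7, 14, 22])  # None, Road, Ground, Terrain
print(lut[raw])                 # -> [0 6 7 7]: Road stays road; Ground/Terrain merge
```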
  {
    "path": "dataset/carla_dataset.py",
    "content": "import os\nimport numpy as np\nimport json\nimport yaml\nimport torch\nimport pathlib\nfrom torch.utils.data import Dataset\nfrom dataset.kitti_dataset import flip, get_query\n\nclass CarlaDataset(Dataset):\n    def __init__(self, args, imageset='train', get_query=True):\n        self.get_query = get_query\n        carla_config = yaml.safe_load(open(args.yaml_path, 'r'))\n        label_remap = carla_config[\"learning_map\"]  \n        self.learning_map = np.asarray(list(label_remap.values()))\n        self.learning_map_inv = None\n        \n        if imageset == 'train':\n            split = carla_config['split']['train']\n        elif imageset == 'val':\n            split = carla_config['split']['valid']\n            \n        complt_num_per_class= np.asarray([4.16659328e+09, 4.23097440e+07,  3.33326810e+07, 8.17951900e+06, 9.05663000e+05, 3.08392300e+06, 2.35769663e+08, 8.76012450e+07, 1.12863867e+08, 2.98168940e+07, 1.38396550e+07])\n        compl_labelweights = complt_num_per_class / np.sum(complt_num_per_class)\n        self.weights = torch.Tensor(np.power(np.amax(compl_labelweights) / compl_labelweights, 1 / 3.0)).cuda()\n        \n        self.imageset = imageset\n\n        param_file = os.path.join(args.data_path, split[0], 'voxels', 'params.json')\n        with open(param_file) as f:\n            self._eval_param = json.load(f)\n        \n        self._grid_size = self._eval_param['grid_size']\n        self._eval_size = list(np.uint32(self._grid_size))\n        self.im_idx = []\n        \n        for i_folder in split:\n            complete_path = os.path.join(args.data_path, str(i_folder), 'voxels')\n            files = list(pathlib.Path(complete_path).glob('*.label'))\n            for filename in files:\n                #if int(str(filename).split('/')[-1].split('.')[0]) % 5 == 0 :\n                self.im_idx.append(str(filename))\n        \n\n    # Use all frames, if there is no data then zero pad\n    def __len__(self):\n        return len(self.im_idx)\n    \n    def __getitem__(self, index):\n\n        voxel_label = np.fromfile(self.im_idx[index],dtype=np.uint32).reshape(self._eval_size).astype(np.uint8)\n        valid = np.fromfile(self.im_idx[index].replace(\"label\", 'bin'),dtype=np.float32).reshape(self._eval_size)\n        voxel_label = self.learning_map[voxel_label].astype(np.uint8)            \n\n        \n        if self.imageset == 'train' :\n            p = torch.randint(0, 6, (1,)).item()\n            if p == 0:\n                voxel_label, valid = flip(voxel_label, valid, flip_dim=0)\n            elif p == 1:\n                voxel_label, valid = flip(voxel_label, valid, flip_dim=1)\n            elif p == 2:\n                voxel_label, valid = flip(voxel_label, valid, flip_dim=0)\n                voxel_label, valid = flip(voxel_label, valid, flip_dim=1)\n        \n        invalid = torch.zeros_like(torch.from_numpy(valid))\n        invalid[torch.from_numpy(valid)==0]=1\n        invalid = invalid.numpy()\n        if self.get_query:\n            query, xyz_label, xyz_center = get_query(voxel_label, 11, (128,128,8), 80000)\n        else : query, xyz_label, xyz_center = torch.zeros(1), torch.zeros(1), torch.zeros(1)\n        return voxel_label, query, xyz_label, xyz_center, self.im_idx[index], invalid\n    "
  },
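Both dataset classes turn per-class voxel counts into loss weights by flattening inverse class frequency with a cube root, so rare classes are up-weighted without dominating. A standalone sketch of that computation (counts copied from `remap_content` in `carla.yaml`; the `.cuda()` call is dropped so it runs anywhere):

```python
# Smoothed inverse-frequency class weights: w_c = (max_f / f_c)^(1/3).
import numpy as np

counts = np.array([4.16659328e+09, 4.23097440e+07, 3.33326810e+07, 8.17951900e+06,
                   9.05663000e+05, 3.08392300e+06, 2.35769663e+08, 8.76012450e+07,
                   1.12863867e+08, 2.98168940e+07, 1.38396550e+07])
freq = counts / counts.sum()
weights = np.power(freq.max() / freq, 1.0 / 3.0)
print(weights.round(2))  # free space gets weight 1.0; the rarest class (pedestrian) the most
```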
  {
    "path": "dataset/dataset.md",
    "content": "## Datasets\nDatasets should have the following structure.\n\nThe triplane folder is created by `scripts/save_triplane.py` after `scripts/train_ae_main.py`.\n\n### SemanticKITTI\nYou can download SemanticKITTI datasets from [here](http://www.semantic-kitti.org/assets/data_odometry_voxels_all.zip).\n\nIf you want to do semantic scene completion refinement, place the `.label` file from ssc method(e.g. [monoscene](https://github.com/astra-vision/MonoScene), [occdepth](https://github.com/megvii-research/OccDepth), [scpnet](https://github.com/SCPNet/Codes-for-SCPNet), [ssasc](https://github.com/jokester-zzz/ssa-sc)) in the following structure. \n\n    /dataset/\n        └── sequences/\n            ├── 00/\n            |   ├── voxels/\n            │   |     ├ 000000.label\n            │   |     ├ 000000.invalid\n            │   ├── monoscene/\n            │   |     ├ 000000.label\n            │   ├── occdepth/\n            │   |     ├ 000000.label\n            │   ├── scpnet/\n            │   |     ├ 000000.label\n            │   ├── ssasc/\n            │   |     ├ 000000.label\n            │   └── triplane/\n            │         ├ 000000.npy\n            │         ├ 000000_monoscene.npy\n            │         ├ 000000_occdepth.npy\n            │         ├ 000000_scpnet.npy\n            │         ├ 000000_ssasc.npy\n            ├── 01/\n            .\n            .\n            └── 10/\n        \n### CarlaSC\nYou can download CarlaSC Cartesian datasets from [here](https://umich-curly.github.io/CarlaSC.github.io/download/).\n\nThe structure differs slightly from the original CarlaSC dataset to align with the SemanticKITTI dataset.\nThe `voxels` folder was originally the `evaluation` folder, which contains the GT for semantic scene completion.\n    \n    /carla/\n        └── sequences/\n            ├── Town01_Heavy/\n            |   ├── voxels/\n            │   |     ├ 000000.label\n            │   |     ├ 000000.bin\n            │   └── triplane/\n            │         ├ 000000.npy\n            ├── Town01_Medium/\n            .\n            .\n            └── Town10_Light/"
  },
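A quick way to verify a sequence follows this layout before training; a hedged sketch (the root below is the placeholder path from the tree above, so adjust it to your install):

```python
# Check that every GT voxel frame in a sequence has a saved triplane.
from pathlib import Path

root = Path("/dataset/sequences/00")   # placeholder from the tree above
labels = sorted((root / "voxels").glob("*.label"))
missing = [p.stem for p in labels
           if not (root / "triplane" / f"{p.stem}.npy").exists()]
print(f"{len(labels)} frames, {len(missing)} without a triplane")
```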
  {
    "path": "dataset/dataset_builder.py",
    "content": "from dataset.kitti_dataset import SemKITTI\nfrom dataset.carla_dataset import CarlaDataset\n\ndef dataset_builder(args):\n    print(\"build dataset\")\n    if args.dataset == 'kitti':\n        dataset = SemKITTI(args, 'train')\n        val_dataset = SemKITTI(args, 'val')\n        args.num_class = 20\n        args.grid_size = [256, 256, 32]\n        class_names = [\n                'car', 'bicycle', 'motorcycle', 'truck', 'other-vehicle', 'person', 'bicyclist',\n                'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground', 'building', 'fence',\n                'vegetation', 'trunk', 'terrain', 'pole', 'traffic-sign'\n            ]\n    elif args.dataset == 'carla':\n        dataset = CarlaDataset(args, 'train')\n        val_dataset = CarlaDataset(args, 'val')\n        args.num_class = 11 \n        args.grid_size = [128, 128, 8]\n        class_names = ['building', 'barrier', 'other', 'pedestrian', 'pole', 'road', 'ground', 'sidewalk', 'vegetation', 'vehicle']\n        \n    return dataset, val_dataset, args.num_class, class_names"
  },
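For reference, a sketch of the `args` fields `dataset_builder` actually reads, wired to the constants in `dataset/path_manager.py`. Building the datasets requires the data on disk and a CUDA device, since both loaders move their class weights to GPU:

```python
# Minimal args wiring for dataset_builder; note that it also writes num_class
# and grid_size back onto args as a side effect.
from argparse import Namespace
from dataset.path_manager import SEMKITTI_DATA_PATH, SEMKITTI_YAML_PATH
from dataset.dataset_builder import dataset_builder

args = Namespace(dataset='kitti',
                 data_path=SEMKITTI_DATA_PATH,
                 yaml_path=SEMKITTI_YAML_PATH)
train_set, val_set, num_class, class_names = dataset_builder(args)
print(num_class, len(class_names))  # 20 classes, 19 names (class 0 omitted)
```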
  {
    "path": "dataset/kitti_dataset.py",
    "content": "import os\nimport numpy as np\nfrom torch.utils import data\nimport yaml\nimport pathlib\nimport torch\nfrom scipy.ndimage import distance_transform_edt\n\n\nclass SemKITTI(data.Dataset):\n    def __init__(self, args, imageset='train', get_query=True, folder = 'voxels'):\n        with open(args.yaml_path, 'r') as stream:\n            semkittiyaml = yaml.safe_load(stream)\n            \n        self.args = args\n        self.get_query = get_query\n        remapdict = semkittiyaml['learning_map']\n        self.learning_map_inv = semkittiyaml[\"learning_map_inv\"]\n\n        maxkey = max(remapdict.keys())\n        remap_lut = np.zeros((maxkey + 100), dtype=np.int32)\n        remap_lut[list(remapdict.keys())] = list(remapdict.values())\n\n        remap_lut[remap_lut == 0] = 255  # map 0 to 'invalid'\n        remap_lut[0] = 0  # only 'empty' stays 'empty'.\n        self.learning_map = remap_lut\n\n        self.imageset = imageset\n        self.data_path = args.data_path\n        self.folder = folder\n        \n        if imageset == 'train':\n            split = semkittiyaml['split']['train']\n            complt_num_per_class= np.asarray([7632350044, 15783539,  125136, 118809, 646799, 821951, 262978, 283696, 204750, 61688703, 4502961, 44883650, 2269923, 56840218, 15719652, 158442623, 2061623, 36970522, 1151988, 334146])\n            compl_labelweights = complt_num_per_class / np.sum(complt_num_per_class)\n            self.weights = torch.Tensor(np.power(np.amax(compl_labelweights) / compl_labelweights, 1 / 3.0)).cuda()\n            \n        elif imageset == 'val':\n            split = semkittiyaml['split']['valid']\n            self.weights = torch.Tensor(np.ones(20) * 3).cuda()\n            self.weights[0] = 1\n            \n        elif imageset == 'test':\n            split = semkittiyaml['split']['test']\n            self.weights = torch.Tensor(np.ones(20) * 3).cuda()\n            self.weights[0] = 1\n        else:\n            raise Exception('Split must be train/val/test')\n        \n        self.im_idx=[]\n        for i_folder in split:\n            # velodyne path corresponding to voxel path\n            complete_path = os.path.join(args.data_path, str(i_folder).zfill(2), folder)\n            files = list(pathlib.Path(complete_path).glob('*.label'))\n            for filename in files:\n                if (imageset == 'val') :\n                    if (int(str(filename).split('/')[-1].split('.')[0]) % 5 == 0) :\n                        self.im_idx.append(str(filename))\n                else : \n                    self.im_idx.append(str(filename))\n                \n    def unpack(self, compressed):\n        ''' given a bit encoded voxel grid, make a normal voxel grid out of it.  
'''\n        uncompressed = np.zeros(compressed.shape[0] * 8, dtype=np.uint8)\n        uncompressed[::8] = compressed[:] >> 7 & 1\n        uncompressed[1::8] = compressed[:] >> 6 & 1\n        uncompressed[2::8] = compressed[:] >> 5 & 1\n        uncompressed[3::8] = compressed[:] >> 4 & 1\n        uncompressed[4::8] = compressed[:] >> 3 & 1\n        uncompressed[5::8] = compressed[:] >> 2 & 1\n        uncompressed[6::8] = compressed[:] >> 1 & 1\n        uncompressed[7::8] = compressed[:] & 1\n        return uncompressed\n\n    def __len__(self):\n        'Denotes the total number of samples'\n        return len(self.im_idx)\n\n    def __getitem__(self, index):\n        path = self.im_idx[index]\n        \n        if self.imageset == 'test':\n            voxel_label = np.zeros([256, 256, 32], dtype=int).reshape((-1, 1))\n        else:\n            voxel_label = np.fromfile(path, dtype=np.uint16).reshape((-1, 1))  # voxel labels\n            invalid = self.unpack(np.fromfile(path.replace('label', 'invalid').replace(self.folder, 'voxels'), dtype=np.uint8)).astype(np.float32)\n            \n        voxel_label = self.learning_map[voxel_label]\n        voxel_label = voxel_label.reshape((256, 256, 32))\n        invalid = invalid.reshape((256,256,32))\n        voxel_label[invalid == 1]=255\n\n        if self.get_query :\n            if self.imageset == 'train' :\n                p = torch.randint(0, 6, (1,)).item()\n                if p == 0:\n                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=0)\n                elif p == 1:\n                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=1)\n                elif p == 2:\n                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=0)\n                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=1)\n            query, xyz_label, xyz_center = get_query(voxel_label)\n\n        else : \n            query, xyz_label, xyz_center = torch.zeros(1), torch.zeros(1), torch.zeros(1)\n        return voxel_label, query, xyz_label, xyz_center, self.im_idx[index], invalid\n    \ndef get_query(voxel_label, num_class=20, grid_size = (256,256,32), max_points = 400000):\n    xyzl = []\n    for i in range(1, num_class):\n        xyz = torch.nonzero(torch.Tensor(voxel_label) == i, as_tuple=False)\n        xyzlabel = torch.nn.functional.pad(xyz, (1,0),'constant', value=i)\n        xyzl.append(xyzlabel)\n    tdf = compute_tdf(voxel_label, trunc_distance=2)\n    xyz = torch.nonzero(torch.tensor(np.logical_and(tdf > 0, tdf <= 2)), as_tuple=False)\n    xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=0)\n    xyzl.append(xyzlabel)\n    \n    num_far_free = int(max_points - len(torch.cat(xyzl, dim=0)))\n    if num_far_free <= 0 :\n        xyzl = torch.cat(xyzl, dim=0)\n        xyzl = xyzl[:max_points]\n    else : \n        xyz = torch.nonzero(torch.tensor(np.logical_and(voxel_label == 0, tdf == -1)), as_tuple=False)\n        xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=0)\n        idx = torch.randperm(xyzlabel.shape[0])\n        xyzlabel = xyzlabel[idx][:min(xyzlabel.shape[0], num_far_free)]\n        xyzl.append(xyzlabel)\n        while len(torch.cat(xyzl, dim=0)) < max_points:\n            for i in range(1, num_class):\n                xyz = torch.nonzero(torch.Tensor(voxel_label) == i, as_tuple=False)\n                xyzlabel = torch.nn.functional.pad(xyz, (1,0),'constant', value=i)\n                xyzl.append(xyzlabel)\n        xyzl = torch.cat(xyzl, 
dim=0)\n        xyzl = xyzl[:max_points]\n        \n    xyz_label = xyzl[:, 0]\n    xyz_center = xyzl[:, 1:]\n    xyz = xyz_center.float()\n\n    query = torch.zeros(xyz.shape, dtype=torch.float32, device=xyz.device)\n    query[:,0] = 2*xyz[:,0].clamp(0,grid_size[0]-1)/float(grid_size[0]-1) -1\n    query[:,1] = 2*xyz[:,1].clamp(0,grid_size[1]-1)/float(grid_size[1]-1) -1\n    query[:,2] = 2*xyz[:,2].clamp(0,grid_size[2]-1)/float(grid_size[2]-1) -1\n    \n    return query, xyz_label, xyz_center\n\ndef compute_tdf(voxel_label: np.ndarray, trunc_distance: float = 3, trunc_value: float = -1) -> np.ndarray:\n    \"\"\" Compute Truncated Distance Field (TDF). voxel_label -- [X, Y, Z] \"\"\"\n    # make TDF at free voxels.\n    # distance is defined as Euclidean distance to nearest unfree voxel (occupied or unknown).\n    free = voxel_label == 0\n    tdf = distance_transform_edt(free)\n\n    # Set -1 if distance is greater than truncation_distance\n    tdf[tdf > trunc_distance] = trunc_value\n    return tdf  # [X, Y, Z]\n\ndef flip(voxel, invalid, flip_dim=0):\n    voxel = np.flip(voxel, axis=flip_dim).copy()\n    invalid = np.flip(invalid, axis=flip_dim).copy()\n    return voxel, invalid\n"
  },
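What `compute_tdf` produces on a toy grid: free voxels within the truncation distance keep their Euclidean distance to the nearest occupied voxel, and farther free space is marked `-1` (the pool `get_query` samples its far free points from):

```python
# Toy TDF: one occupied voxel in an 8 x 8 x 1 grid, truncation distance 2.
import numpy as np
from dataset.kitti_dataset import compute_tdf

vox = np.zeros((8, 8, 1), dtype=np.uint8)
vox[4, 4, 0] = 13  # a single 'building' voxel
tdf = compute_tdf(vox, trunc_distance=2)
print(tdf[4, 4, 0], tdf[4, 6, 0], tdf[0, 0, 0])  # 0.0 2.0 -1.0
```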
  {
    "path": "dataset/path_manager.py",
    "content": "import os\n\n# manual definition\nPROJECT_NAMES = 'SemCity' \nSEMKITTI_DATA_PATH = '' # the path to the sequences folder\nCARLA_DATA_PATH = '' # the path to the sequences folder\n\n# auto definition\nCARLA_YAML_PATH = os.getcwd() + '/dataset/carla.yaml'\nSEMKITTI_YAML_PATH = os.getcwd() + '/dataset/semantic-kitti.yaml'\n\n# manual definition after training\nAE_PATH = os.getcwd() + ''  # the path to the pt file \nGEN_DIFF_PATH = os.getcwd() + '' \nSSC_DIFF_PATH = os.getcwd()  + ''"
  },
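The training and sampling scripts import these constants directly, and an empty data path typically just yields an empty dataset rather than an error. A hypothetical guard (not part of the repo) that surfaces the problem early:

```python
# Hypothetical startup check: complain immediately if the manual paths are unset.
from dataset import path_manager as pm

for name in ('SEMKITTI_DATA_PATH', 'CARLA_DATA_PATH'):
    assert getattr(pm, name), f'set {name} in dataset/path_manager.py'
```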
  {
    "path": "dataset/semantic-kitti.yaml",
    "content": "labels:\n  0 : \"unlabeled\"\n  1 : \"outlier\"\n  10: \"car\"\n  11: \"bicycle\"\n  13: \"bus\"\n  15: \"motorcycle\"\n  16: \"on-rails\"\n  18: \"truck\"\n  20: \"other-vehicle\"\n  30: \"person\"\n  31: \"bicyclist\"\n  32: \"motorcyclist\"\n  40: \"road\"\n  44: \"parking\"\n  48: \"sidewalk\"\n  49: \"other-ground\"\n  50: \"building\"\n  51: \"fence\"\n  52: \"other-structure\"\n  60: \"lane-marking\"\n  70: \"vegetation\"\n  71: \"trunk\"\n  72: \"terrain\"\n  80: \"pole\"\n  81: \"traffic-sign\"\n  99: \"other-object\"\n  252: \"moving-car\"\n  253: \"moving-bicyclist\"\n  254: \"moving-person\"\n  255: \"moving-motorcyclist\"\n  256: \"moving-on-rails\"\n  257: \"moving-bus\"\n  258: \"moving-truck\"\n  259: \"moving-other-vehicle\"\ncolor_map: # bgr\n  0 : [0, 0, 0]\n  1 : [0, 0, 255]\n  10: [245, 150, 100]\n  11: [245, 230, 100]\n  13: [250, 80, 100]\n  15: [150, 60, 30]\n  16: [255, 0, 0]\n  18: [180, 30, 80]\n  20: [255, 0, 0]\n  30: [30, 30, 255]\n  31: [200, 40, 255]\n  32: [90, 30, 150]\n  40: [255, 0, 255]\n  44: [255, 150, 255]\n  48: [75, 0, 75]\n  49: [75, 0, 175]\n  50: [0, 200, 255]\n  51: [50, 120, 255]\n  52: [0, 150, 255]\n  60: [170, 255, 150]\n  70: [0, 175, 0]\n  71: [0, 60, 135]\n  72: [80, 240, 150]\n  80: [150, 240, 255]\n  81: [0, 0, 255]\n  99: [255, 255, 50]\n  252: [245, 150, 100]\n  256: [255, 0, 0]\n  253: [200, 40, 255]\n  254: [30, 30, 255]\n  255: [90, 30, 150]\n  257: [250, 80, 100]\n  258: [180, 30, 80]\n  259: [255, 0, 0]\ncontent: # as a ratio with the total number of points\n  0: 0.018889854628292943\n  1: 0.0002937197336781505\n  10: 0.040818519255974316\n  11: 0.00016609538710764618\n  13: 2.7879693665067774e-05\n  15: 0.00039838616015114444\n  16: 0.0\n  18: 0.0020633612104619787\n  20: 0.0016218197275284021\n  30: 0.00017698551338515307\n  31: 1.1065903904919655e-08\n  32: 5.532951952459828e-09\n  40: 0.1987493871255525\n  44: 0.014717169549888214\n  48: 0.14392298360372\n  49: 0.0039048553037472045\n  50: 0.1326861944777486\n  51: 0.0723592229456223\n  52: 0.002395131480328884\n  60: 4.7084144280367186e-05\n  70: 0.26681502148037506\n  71: 0.006035012012626033\n  72: 0.07814222006271769\n  80: 0.002855498193863172\n  81: 0.0006155958086189918\n  99: 0.009923127583046915\n  252: 0.001789309418528068\n  253: 0.00012709999297008662\n  254: 0.00016059776092534436\n  255: 3.745553104802113e-05\n  256: 0.0\n  257: 0.00011351574470342043\n  258: 0.00010157861367183268\n  259: 4.3840131989471124e-05\n# classes that are indistinguishable from single scan or inconsistent in\n# ground truth are mapped to their closest equivalent\nlearning_map:\n  0 : 0     # \"unlabeled\"\n  1 : 0     # \"outlier\" mapped to \"unlabeled\" --------------------------mapped\n  10: 1     # \"car\"\n  11: 2     # \"bicycle\"\n  13: 5     # \"bus\" mapped to \"other-vehicle\" --------------------------mapped\n  15: 3     # \"motorcycle\"\n  16: 5     # \"on-rails\" mapped to \"other-vehicle\" ---------------------mapped\n  18: 4     # \"truck\"\n  20: 5     # \"other-vehicle\"\n  30: 6     # \"person\"\n  31: 7     # \"bicyclist\"\n  32: 8     # \"motorcyclist\"\n  40: 9     # \"road\"\n  44: 10    # \"parking\"\n  48: 11    # \"sidewalk\"\n  49: 12    # \"other-ground\"\n  50: 13    # \"building\"\n  51: 14    # \"fence\"\n  52: 0     # \"other-structure\" mapped to \"unlabeled\" ------------------mapped\n  60: 9     # \"lane-marking\" to \"road\" ---------------------------------mapped\n  70: 15    # \"vegetation\"\n  71: 16    # \"trunk\"\n  72: 17    # 
\"terrain\"\n  80: 18    # \"pole\"\n  81: 19    # \"traffic-sign\"\n  99: 0     # \"other-object\" to \"unlabeled\" ----------------------------mapped\n  252: 1    # \"moving-car\" to \"car\" ------------------------------------mapped\n  253: 7    # \"moving-bicyclist\" to \"bicyclist\" ------------------------mapped\n  254: 6    # \"moving-person\" to \"person\" ------------------------------mapped\n  255: 8    # \"moving-motorcyclist\" to \"motorcyclist\" ------------------mapped\n  256: 5    # \"moving-on-rails\" mapped to \"other-vehicle\" --------------mapped\n  257: 5    # \"moving-bus\" mapped to \"other-vehicle\" -------------------mapped\n  258: 4    # \"moving-truck\" to \"truck\" --------------------------------mapped\n  259: 5    # \"moving-other\"-vehicle to \"other-vehicle\" ----------------mapped\nlearning_map_inv: # inverse of previous map\n  0: 0      # \"unlabeled\", and others ignored\n  1: 10     # \"car\"\n  2: 11     # \"bicycle\"\n  3: 15     # \"motorcycle\"\n  4: 18     # \"truck\"\n  5: 20     # \"other-vehicle\"\n  6: 30     # \"person\"\n  7: 31     # \"bicyclist\"\n  8: 32     # \"motorcyclist\"\n  9: 40     # \"road\"\n  10: 44    # \"parking\"\n  11: 48    # \"sidewalk\"\n  12: 49    # \"other-ground\"\n  13: 50    # \"building\"\n  14: 51    # \"fence\"\n  15: 70    # \"vegetation\"\n  16: 71    # \"trunk\"\n  17: 72    # \"terrain\"\n  18: 80    # \"pole\"\n  19: 81    # \"traffic-sign\"\nlearning_ignore: # Ignore classes\n  0: True      # \"unlabeled\", and others ignored\n  1: False     # \"car\"\n  2: False     # \"bicycle\"\n  3: False     # \"motorcycle\"\n  4: False     # \"truck\"\n  5: False     # \"other-vehicle\"\n  6: False     # \"person\"\n  7: False     # \"bicyclist\"\n  8: False     # \"motorcyclist\"\n  9: False     # \"road\"\n  10: False    # \"parking\"\n  11: False    # \"sidewalk\"\n  12: False    # \"other-ground\"\n  13: False    # \"building\"\n  14: False    # \"fence\"\n  15: False    # \"vegetation\"\n  16: False    # \"trunk\"\n  17: False    # \"terrain\"\n  18: False    # \"pole\"\n  19: False    # \"traffic-sign\"\nsplit: # sequence numbers\n  train:\n    - 0\n    - 1\n    - 2\n    - 3\n    - 4\n    - 5\n    - 6\n    - 7\n    - 9\n    - 10\n  valid:\n    - 8\n  test:\n    - 11\n    - 12\n    - 13\n    - 14\n    - 15\n    - 16\n    - 17\n    - 18\n    - 19\n    - 20\n    - 21\n"
  },
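`SemKITTI` compiles `learning_map` into a numpy LUT where everything that maps to 0 becomes the ignore label 255, except raw id 0 itself (empty). A sketch reproducing that construction from this yaml:

```python
# Build the remap LUT exactly as SemKITTI.__init__ does.
import numpy as np
import yaml

with open("dataset/semantic-kitti.yaml") as f:
    remap = yaml.safe_load(f)["learning_map"]

lut = np.zeros(max(remap) + 100, dtype=np.int32)
lut[list(remap.keys())] = list(remap.values())
lut[lut == 0] = 255  # everything mapped to 0 becomes 'invalid' (ignored)
lut[0] = 0           # except raw 0, which stays 'empty'
print(lut[252], lut[40], lut[1])  # 1 (car), 9 (road), 255 (outlier -> ignored)
```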
  {
    "path": "dataset/tri_dataset_builder.py",
    "content": "import torch\nimport yaml\nimport os\nimport numpy as np\nimport pathlib\nfrom diffusion.triplane_util import augment\nfrom utils.parser_util import get_gen_args\n\nclass TriplaneDataset(torch.utils.data.Dataset):\n    def __init__(self, args, imageset):\n        self.args = args\n        self.imageset = imageset\n        with open(args.yaml_path, 'r') as stream:\n            data_yaml = yaml.safe_load(stream)\n        if imageset == 'train': split = data_yaml['split']['train']\n        elif imageset == 'val': split = data_yaml['split']['valid']    \n        \n        H, W, D, self.learning_map, self.learning_map_inv, class_name, grid_size, self.tri_size, self.num_class, self.max_points = get_gen_args(args)\n        self.grid_size = grid_size[1:]\n\n        self.im_idx = []\n        for i_folder in split:\n            if args.dataset == 'kitti': folder = str(i_folder).zfill(2)\n            elif args.dataset == 'carla' : folder = str(i_folder)\n            \n            if args.diff_net_type == 'unet_voxel':\n                tri_path = os.path.join(args.data_path, folder, 'voxel')\n            elif args.diff_net_type == 'unet_bev':\n                tri_path = os.path.join(args.data_path, folder, 'bev')\n            else : \n                tri_path = os.path.join(args.data_path, folder, 'triplane')    \n                    \n            files = list(pathlib.Path(tri_path).glob('??????.npy'))\n           \n            for filename in files:\n                if imageset == 'val':\n                    if (int(str(filename).split('/')[-1].split('.')[0].split(\"_\")[0]) % 5 == 0) :\n                        self.im_idx.append(str(filename))\n                else : self.im_idx.append(str(filename))\n\n        if imageset == 'val':\n            self.im_idx = sorted(self.im_idx)\n   \n    def __len__(self):\n        return len(self.im_idx)  \n    \n    def __getitem__(self, index):\n        triplane = np.load(self.im_idx[index]).squeeze()    \n        if self.args.ssc_refine :\n            condition = np.load(self.im_idx[index])\n            path = self.im_idx[index].replace('.npy', f'_{self.args.ssc_refine_dataset}.npy') \n        else: \n            condition = np.zeros_like(triplane)\n            path = self.im_idx[index]\n            \n        if (not self.args.diff_net_type == 'unet_voxel') and (self.imageset == 'train') :\n            # rotation\n            q = torch.randint(0, 3, (1,)).item()    \n            if q==0:\n                triplane = torch.from_numpy(triplane).permute(0, 2, 1).numpy()\n                condition = torch.from_numpy(condition).permute(0, 2, 1).numpy()\n                        \n            # other augmentations (flip, crop, noise.)\n            p = torch.randint(0, 6, (1,)).item()\n            triplane = augment(triplane, p, self.tri_size)\n            condition = augment(condition, p, self.tri_size)\n                    \n        return triplane, {'y':condition, 'H':self.tri_size[0], 'W':self.tri_size[1], 'D':self.tri_size[2], 'path':(path)}\n    "
  },
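The "rotation" augmentation above is a transpose of the two spatial axes of the channels-first triplane array, applied to both the triplane and its condition so they stay aligned. A toy illustration of the operation (dummy shapes only, not the repo's real feature maps):

```python
# permute(0, 2, 1) swaps the two plane axes of a (C, H, W) array.
import torch

tri = torch.arange(2 * 3 * 4, dtype=torch.float32).reshape(2, 3, 4)
rot = tri.permute(0, 2, 1)
print(tri.shape, rot.shape)           # (2, 3, 4) -> (2, 4, 3)
print(torch.equal(rot[0], tri[0].T))  # True: a per-channel transpose
```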
  {
    "path": "diffusion/fp16_util.py",
    "content": "\"\"\"\nHelpers to train with 16-bit precision.\n\"\"\"\n\nimport numpy as np\nimport torch as th\nimport torch.nn as nn\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom . import logger\n\nINITIAL_LOG_LOSS_SCALE = 20.0\n\n\ndef convert_module_to_f16(l):\n    \"\"\"\n    Convert primitive modules to float16.\n    \"\"\"\n    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):\n        l.weight.data = l.weight.data.half()\n        if l.bias is not None:\n            l.bias.data = l.bias.data.half()\n\n\ndef convert_module_to_f32(l):\n    \"\"\"\n    Convert primitive modules to float32, undoing convert_module_to_f16().\n    \"\"\"\n    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):\n        l.weight.data = l.weight.data.float()\n        if l.bias is not None:\n            l.bias.data = l.bias.data.float()\n\n\ndef make_master_params(param_groups_and_shapes):\n    \"\"\"\n    Copy model parameters into a (differently-shaped) list of full-precision\n    parameters.\n    \"\"\"\n    master_params = []\n    for param_group, shape in param_groups_and_shapes:\n        master_param = nn.Parameter(\n            _flatten_dense_tensors(\n                [param.detach().float() for (_, param) in param_group]\n            ).view(shape)\n        )\n        master_param.requires_grad = True\n        master_params.append(master_param)\n    return master_params\n\n\ndef model_grads_to_master_grads(param_groups_and_shapes, master_params):\n    \"\"\"\n    Copy the gradients from the model parameters into the master parameters\n    from make_master_params().\n    \"\"\"\n    for master_param, (param_group, shape) in zip(\n        master_params, param_groups_and_shapes\n    ):\n        master_param.grad = _flatten_dense_tensors(\n            [param_grad_or_zeros(param) for (_, param) in param_group]\n        ).view(shape)\n\n\ndef master_params_to_model_params(param_groups_and_shapes, master_params):\n    \"\"\"\n    Copy the master parameter data back into the model parameters.\n    \"\"\"\n    # Without copying to a list, if a generator is passed, this will\n    # silently not copy any parameters.\n    for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes):\n        for (_, param), unflat_master_param in zip(\n            param_group, unflatten_master_params(param_group, master_param.view(-1))\n        ):\n            param.detach().copy_(unflat_master_param)\n\n\ndef unflatten_master_params(param_group, master_param):\n    return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group])\n\n\ndef get_param_groups_and_shapes(named_model_params):\n    named_model_params = list(named_model_params)\n    scalar_vector_named_params = (\n        [(n, p) for (n, p) in named_model_params if p.ndim <= 1],\n        (-1),\n    )\n    matrix_named_params = (\n        [(n, p) for (n, p) in named_model_params if p.ndim > 1],\n        (1, -1),\n    )\n    return [scalar_vector_named_params, matrix_named_params]\n\n\ndef master_params_to_state_dict(\n    model, param_groups_and_shapes, master_params, use_fp16\n):\n    if use_fp16:\n        state_dict = model.state_dict()\n        for master_param, (param_group, _) in zip(\n            master_params, param_groups_and_shapes\n        ):\n            for (name, _), unflat_master_param in zip(\n                param_group, unflatten_master_params(param_group, master_param.view(-1))\n            ):\n                assert name in state_dict\n                state_dict[name] = 
unflat_master_param\n    else:\n        state_dict = model.state_dict()\n        for i, (name, _value) in enumerate(model.named_parameters()):\n            assert name in state_dict\n            state_dict[name] = master_params[i]\n    return state_dict\n\n\ndef state_dict_to_master_params(model, state_dict, use_fp16):\n    if use_fp16:\n        named_model_params = [\n            (name, state_dict[name]) for name, _ in model.named_parameters()\n        ]\n        param_groups_and_shapes = get_param_groups_and_shapes(named_model_params)\n        master_params = make_master_params(param_groups_and_shapes)\n    else:\n        master_params = [state_dict[name] for name, _ in model.named_parameters()]\n    return master_params\n\n\ndef zero_master_grads(master_params):\n    for param in master_params:\n        param.grad = None\n\n\ndef zero_grad(model_params):\n    for param in model_params:\n        # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group\n        if param.grad is not None:\n            param.grad.detach_()\n            param.grad.zero_()\n\n\ndef param_grad_or_zeros(param):\n    if param.grad is not None:\n        return param.grad.data.detach()\n    else:\n        return th.zeros_like(param)\n\n\nclass MixedPrecisionTrainer:\n    def __init__(\n        self,\n        *,\n        model,\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n        initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE,\n    ):\n        self.model = model\n        self.use_fp16 = use_fp16\n        self.fp16_scale_growth = fp16_scale_growth\n\n        self.model_params = list(self.model.parameters())\n        self.master_params = self.model_params\n        self.param_groups_and_shapes = None\n        self.lg_loss_scale = initial_lg_loss_scale\n\n        if self.use_fp16:\n            self.param_groups_and_shapes = get_param_groups_and_shapes(\n                self.model.named_parameters()\n            )\n            self.master_params = make_master_params(self.param_groups_and_shapes)\n            self.model.convert_to_fp16()\n\n    def zero_grad(self):\n        zero_grad(self.model_params)\n\n    def backward(self, loss: th.Tensor):\n        if self.use_fp16:\n            loss_scale = 2 ** self.lg_loss_scale\n            (loss * loss_scale).backward()\n        else:\n            loss.backward()\n\n    def optimize(self, opt: th.optim.Optimizer):\n        if self.use_fp16:\n            return self._optimize_fp16(opt)\n        else:\n            return self._optimize_normal(opt)\n\n    def _optimize_fp16(self, opt: th.optim.Optimizer):\n        logger.logkv_mean(\"lg_loss_scale\", self.lg_loss_scale)\n        model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params)\n        grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale)\n        if check_overflow(grad_norm):\n            self.lg_loss_scale -= 1\n            logger.log(f\"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}\")\n            zero_master_grads(self.master_params)\n            return False\n\n        logger.logkv_mean(\"grad_norm\", grad_norm)\n        logger.logkv_mean(\"param_norm\", param_norm)\n\n        for p in self.master_params:\n            p.grad.mul_(1.0 / (2 ** self.lg_loss_scale))\n        opt.step()\n        zero_master_grads(self.master_params)\n        master_params_to_model_params(self.param_groups_and_shapes, self.master_params)\n        self.lg_loss_scale += self.fp16_scale_growth\n        return True\n\n    
def _optimize_normal(self, opt: th.optim.Optimizer):\n        grad_norm, param_norm = self._compute_norms()\n        logger.logkv_mean(\"grad_norm\", grad_norm)\n        logger.logkv_mean(\"param_norm\", param_norm)\n        opt.step()\n        return True\n\n    def _compute_norms(self, grad_scale=1.0):\n        grad_norm = 0.0\n        param_norm = 0.0\n        for p in self.master_params:\n            with th.no_grad():\n                param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2\n                if p.grad is not None:\n                    grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2\n        return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm)\n\n    def master_params_to_state_dict(self, master_params):\n        return master_params_to_state_dict(\n            self.model, self.param_groups_and_shapes, master_params, self.use_fp16\n        )\n\n    def state_dict_to_master_params(self, state_dict):\n        return state_dict_to_master_params(self.model, state_dict, self.use_fp16)\n\n\ndef check_overflow(value):\n    return (value == float(\"inf\")) or (value == -float(\"inf\")) or (value != value)\n"
  },
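The training step this helper implements, shown on a toy model. Enabling `use_fp16=True` additionally requires a model that exposes `convert_to_fp16()` (as the repo's UNets do) and a CUDA device, so this sketch keeps `use_fp16=False` to stay runnable anywhere:

```python
# One optimization step through MixedPrecisionTrainer. With fp16 enabled,
# backward() scales the loss by 2**lg_loss_scale and optimize() unscales the
# gradients, skipping the step and shrinking the scale on overflow.
import torch as th
from diffusion.fp16_util import MixedPrecisionTrainer

model = th.nn.Conv2d(3, 3, kernel_size=1)
trainer = MixedPrecisionTrainer(model=model, use_fp16=False)
opt = th.optim.Adam(trainer.master_params, lr=1e-4)

x = th.randn(2, 3, 8, 8)
loss = (model(x) ** 2).mean()
trainer.zero_grad()
trainer.backward(loss)
trainer.optimize(opt)
```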
  {
    "path": "diffusion/gaussian_diffusion.py",
    "content": "\"\"\"\nThis code started out as a PyTorch port of Ho et al's diffusion models:\nhttps://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py\n\nDocstrings have been added, as well as DDIM sampling and a new collection of beta schedules.\n\"\"\"\nimport enum\nimport math\nimport numpy as np\nimport torch as th\nfrom dataset.path_manager import *\nfrom diffusion.nn import mean_flat, mask_img, decompose_featmaps\nfrom diffusion.losses import normal_kl, discretized_gaussian_log_likelihood\nfrom diffusion.scheduler import get_schedule_jump\n\ndef get_named_beta_schedule(schedule_name, num_diffusion_timesteps):\n    \"\"\"\n    Get a pre-defined beta schedule for the given name.\n\n    The beta schedule library consists of beta schedules which remain similar\n    in the limit of num_diffusion_timesteps.\n    Beta schedules may be added, but should not be removed or changed once\n    they are committed to maintain backwards compatibility.\n    \"\"\"\n    if schedule_name == \"linear\":\n        # Linear schedule from Ho et al, extended to work for any number of\n        # diffusion steps.\n        scale = 1000 / num_diffusion_timesteps\n        beta_start = scale * 0.0001\n        beta_end = scale * 0.02\n        return np.linspace(\n            beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64\n        )\n    elif schedule_name == \"cosine\":\n        return betas_for_alpha_bar(\n            num_diffusion_timesteps,\n            lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2,\n        )\n    else:\n        raise NotImplementedError(f\"unknown beta schedule: {schedule_name}\")\n\n\ndef betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):\n    \"\"\"\n    Create a beta schedule that discretizes the given alpha_t_bar function,\n    which defines the cumulative product of (1-beta) over time from t = [0,1].\n\n    :param num_diffusion_timesteps: the number of betas to produce.\n    :param alpha_bar: a lambda that takes an argument t from 0 to 1 and\n                      produces the cumulative product of (1-beta) up to that\n                      part of the diffusion process.\n    :param max_beta: the maximum beta to use; use values lower than 1 to\n                     prevent singularities.\n    \"\"\"\n    betas = []\n    for i in range(num_diffusion_timesteps):\n        t1 = i / num_diffusion_timesteps\n        t2 = (i + 1) / num_diffusion_timesteps\n        betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))\n    return np.array(betas)\n\n\nclass ModelMeanType(enum.Enum):\n    \"\"\"\n    Which type of output the model predicts.\n    \"\"\"\n\n    PREVIOUS_X = enum.auto()  # the model predicts x_{t-1}\n    START_X = enum.auto()  # the model predicts x_0\n    EPSILON = enum.auto()  # the model predicts epsilon\n\n\nclass ModelVarType(enum.Enum):\n    \"\"\"\n    What is used as the model's output variance.\n\n    The LEARNED_RANGE option has been added to allow the model to predict\n    values between FIXED_SMALL and FIXED_LARGE, making its job easier.\n    \"\"\"\n\n    LEARNED = enum.auto()\n    FIXED_SMALL = enum.auto()\n    FIXED_LARGE = enum.auto()\n    LEARNED_RANGE = enum.auto()\n\n\nclass LossType(enum.Enum):\n    MSE = enum.auto()  # use raw MSE loss (and KL when learning variances)\n    RESCALED_MSE = (\n        enum.auto()\n    )  # use raw MSE loss (with RESCALED_KL when learning variances)\n    KL = enum.auto()  # use the variational lower-bound\n  
  RESCALED_KL = enum.auto()  # like KL, but rescale to estimate the full VLB\n\n    def is_vb(self):\n        return self == LossType.KL or self == LossType.RESCALED_KL\n\n\nclass GaussianDiffusion:\n    \"\"\"\n    Utilities for training and sampling diffusion models.\n\n    Ported directly from here, and then adapted over time to further experimentation.\n    https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42\n\n    :param betas: a 1-D numpy array of betas for each diffusion timestep,\n                  starting at T and going to 1.\n    :param model_mean_type: a ModelMeanType determining what the model outputs.\n    :param model_var_type: a ModelVarType determining how variance is output.\n    :param loss_type: a LossType determining the loss function to use.\n    :param rescale_timesteps: if True, pass floating point timesteps into the\n                              model so that they are always scaled like in the\n                              original paper (0 to 1000).\n    \"\"\"\n\n    def __init__(\n        self,\n        *,\n        args,\n        betas,\n        model_mean_type,\n        model_var_type,\n        loss_type,\n        rescale_timesteps,\n    ):\n        self.model_mean_type = model_mean_type\n        self.model_var_type = model_var_type\n        self.loss_type = loss_type\n        self.rescale_timesteps = rescale_timesteps\n        self.ssc_refine = args.ssc_refine\n        self.triplane_loss_type = args.triplane_loss_type\n        self.args = args\n\n       \n        # Use float64 for accuracy.\n        betas = np.array(betas, dtype=np.float64)\n        self.betas = betas\n        assert len(betas.shape) == 1, \"betas must be 1-D\"\n        assert (betas > 0).all() and (betas <= 1).all()\n\n        self.num_timesteps = int(betas.shape[0])\n\n        alphas = 1.0 - betas\n        self.alphas_cumprod = np.cumprod(alphas, axis=0)\n        self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1])\n        self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0)\n        assert self.alphas_cumprod_prev.shape == (self.num_timesteps,)\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod)\n        self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod)\n        self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod)\n        self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod)\n        self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1)\n\n        # calculations for posterior q(x_{t-1} | x_t, x_0)\n        self.posterior_variance = (\n            betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)\n        )\n        # log calculation clipped because the posterior variance is 0 at the\n        # beginning of the diffusion chain.\n        self.posterior_log_variance_clipped = np.log(\n            np.append(self.posterior_variance[1], self.posterior_variance[1:])\n        )\n        self.posterior_mean_coef1 = (\n            betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)\n        )\n        self.posterior_mean_coef2 = (\n            (1.0 - self.alphas_cumprod_prev)\n            * np.sqrt(alphas)\n            / (1.0 - self.alphas_cumprod)\n        )\n    \n    def undo(self, img_out, t, debug=False):\n        '''p(x_t|x_{t-1})'''\n        \n        beta = _extract_into_tensor(self.betas, t, 
img_out.shape)\n\n        img_in_est = th.sqrt(1 - beta) * img_out + th.sqrt(beta) * th.randn_like(img_out)\n\n        return img_in_est\n    \n    def q_mean_variance(self, x_start, t):\n        \"\"\"\n        Get the distribution q(x_t | x_0).\n\n        :param x_start: the [N x C x ...] tensor of noiseless inputs.\n        :param t: the number of diffusion steps (minus 1). Here, 0 means one step.\n        :return: A tuple (mean, variance, log_variance), all of x_start's shape.\n        \"\"\"\n        mean = (\n            _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start\n        )\n        variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)\n        log_variance = _extract_into_tensor(\n            self.log_one_minus_alphas_cumprod, t, x_start.shape\n        )\n        return mean, variance, log_variance\n\n    def q_sample(self, x_start, t, noise=None):\n        \"\"\"\n        Diffuse the data for a given number of diffusion steps.\n\n        In other words, sample from q(x_t | x_0).\n\n        :param x_start: the initial data batch.\n        :param t: the number of diffusion steps (minus 1). Here, 0 means one step.\n        :param noise: if specified, the split-out normal noise.\n        :return: A noisy version of x_start.\n        \"\"\"\n        if noise is None:\n            noise = th.randn_like(x_start)\n        assert noise.shape == x_start.shape\n        return (\n            _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start\n            + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape)\n            * noise\n        )\n\n    def q_posterior_mean_variance(self, x_start, x_t, t):\n        \"\"\"\n        Compute the mean and variance of the diffusion posterior:\n\n            q(x_{t-1} | x_t, x_0)\n\n        \"\"\"\n        assert x_start.shape == x_t.shape\n        posterior_mean = (\n            _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start\n            + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t\n        )\n        posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape)\n        posterior_log_variance_clipped = _extract_into_tensor(\n            self.posterior_log_variance_clipped, t, x_t.shape\n        )\n        assert (\n            posterior_mean.shape[0]\n            == posterior_variance.shape[0]\n            == posterior_log_variance_clipped.shape[0]\n            == x_start.shape[0]\n        )\n        return posterior_mean, posterior_variance, posterior_log_variance_clipped\n\n    def p_mean_variance(\n        self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None\n    ):\n        \"\"\"\n        Apply the model to get p(x_{t-1} | x_t), as well as a prediction of\n        the initial x, x_0.\n\n        :param model: the model, which takes a signal and a batch of timesteps\n                      as input.\n        :param x: the [N x C x ...] tensor at time t.\n        :param t: a 1-D Tensor of timesteps.\n        :param clip_denoised: if True, clip the denoised signal into [-1, 1].\n        :param denoised_fn: if not None, a function which applies to the\n            x_start prediction before it is used to sample. Applies before\n            clip_denoised.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :return: a dict with the following keys:\n                 - 'mean': the model mean output.\n                 - 'variance': the model variance output.\n                 - 'log_variance': the log of 'variance'.\n                 - 'pred_xstart': the prediction for x_0.\n        \"\"\"\n        if model_kwargs is None:\n            model_kwargs = {}\n\n        B, C = x.shape[:2]\n        assert t.shape == (B,)\n        model_output = model(x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y'])\n\n        if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]:\n            assert model_output.shape == (B, C * 2, *x.shape[2:])\n            model_output, model_var_values = th.split(model_output, C, dim=1)\n            if self.model_var_type == ModelVarType.LEARNED:\n                model_log_variance = model_var_values\n                model_variance = th.exp(model_log_variance)\n            else:\n                min_log = _extract_into_tensor(\n                    self.posterior_log_variance_clipped, t, x.shape\n                )\n                max_log = _extract_into_tensor(np.log(self.betas), t, x.shape)\n                # The model_var_values is [-1, 1] for [min_var, max_var].\n                frac = (model_var_values + 1) / 2\n                model_log_variance = frac * max_log + (1 - frac) * min_log\n                model_variance = th.exp(model_log_variance)\n        else:\n            model_variance, model_log_variance = {\n                # for fixedlarge, we set the initial (log-)variance like so\n                # to get a better decoder log likelihood.\n                ModelVarType.FIXED_LARGE: (\n                    np.append(self.posterior_variance[1], self.betas[1:]),\n                    np.log(np.append(self.posterior_variance[1], self.betas[1:])),\n                ),\n                ModelVarType.FIXED_SMALL: (\n                    self.posterior_variance,\n                    self.posterior_log_variance_clipped,\n                ),\n            }[self.model_var_type]\n            model_variance = _extract_into_tensor(model_variance, t, x.shape)\n            model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape)\n\n        def process_xstart(x):\n            if denoised_fn is not None:\n                x = denoised_fn(x)\n            if clip_denoised:\n                return x.clamp(-1, 1)\n            return x\n\n        if self.model_mean_type == ModelMeanType.PREVIOUS_X:\n            pred_xstart = process_xstart(\n                self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output)\n            )\n            model_mean = model_output\n        elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]:\n            if self.model_mean_type == ModelMeanType.START_X:\n                pred_xstart = process_xstart(model_output)\n            else:\n                pred_xstart = process_xstart(\n                    self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)\n                )\n            model_mean, _, _ = self.q_posterior_mean_variance(\n                x_start=pred_xstart, x_t=x, t=t\n            )\n        else:\n            raise NotImplementedError(self.model_mean_type)\n\n        assert (\n            model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape\n        )\n        return {\n            \"mean\": model_mean,\n            \"variance\": model_variance,\n            
\"log_variance\": model_log_variance,\n            \"pred_xstart\": pred_xstart,\n        }\n\n    def _predict_xstart_from_eps(self, x_t, t, eps):\n        assert x_t.shape == eps.shape\n        return (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t\n            - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps\n        )\n\n    def _predict_xstart_from_xprev(self, x_t, t, xprev):\n        assert x_t.shape == xprev.shape\n        return (  # (xprev - coef2*x_t) / coef1\n            _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev\n            - _extract_into_tensor(\n                self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape\n            )\n            * x_t\n        )\n\n    def _predict_eps_from_xstart(self, x_t, t, pred_xstart):\n        return (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t\n            - pred_xstart\n        ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)\n\n    def _scale_timesteps(self, t):\n        if self.rescale_timesteps:\n            return t.float() * (1000.0 / self.num_timesteps)\n        return t\n\n    def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None):\n        \"\"\"\n        Compute the mean for the previous step, given a function cond_fn that\n        computes the gradient of a conditional log probability with respect to\n        x. In particular, cond_fn computes grad(log(p(y|x))), and we want to\n        condition on y.\n\n        This uses the conditioning strategy from Sohl-Dickstein et al. (2015).\n        \"\"\"\n        gradient = cond_fn(x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y'])\n        new_mean = (\n            p_mean_var[\"mean\"].float() + p_mean_var[\"variance\"] * gradient.float()\n        )\n        return new_mean\n\n    def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None):\n        \"\"\"\n        Compute what the p_mean_variance output would have been, should the\n        model's score function be conditioned by cond_fn.\n\n        See condition_mean() for details on cond_fn.\n\n        Unlike condition_mean(), this instead uses the conditioning strategy\n        from Song et al (2020).\n        \"\"\"\n        alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape)\n\n        eps = self._predict_eps_from_xstart(x, t, p_mean_var[\"pred_xstart\"])\n        eps = eps - (1 - alpha_bar).sqrt() * cond_fn(\n            x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y'])\n        \n\n        out = p_mean_var.copy()\n        out[\"pred_xstart\"] = self._predict_xstart_from_eps(x, t, eps)\n        out[\"mean\"], _, _ = self.q_posterior_mean_variance(\n            x_start=out[\"pred_xstart\"], x_t=x, t=t\n        )\n        return out\n\n    def p_sample(\n        self,\n        model,\n        x,\n        t,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n    ):\n        \"\"\"\n        Sample x_{t-1} from the model at the given timestep.\n\n        :param model: the model to sample from.\n        :param x: the current tensor at x_{t-1}.\n        :param t: the value of t, starting at 0 for the first diffusion step.\n        :param clip_denoised: if True, clip the x_start prediction to [-1, 1].\n        :param denoised_fn: if not None, a 
function which applies to the\n            x_start prediction before it is used to sample.\n        :param cond_fn: if not None, this is a gradient function that acts\n                        similarly to the model.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. This can be used for conditioning.\n        :return: a dict containing the following keys:\n                 - 'sample': a random sample from the model.\n                 - 'pred_xstart': a prediction of x_0.\n        \"\"\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        noise = th.randn_like(x)\n        nonzero_mask = (\n            (t != 0).float().view(-1, *([1] * (len(x.shape) - 1)))\n        )  # no noise when t == 0\n        if cond_fn is not None:\n            out[\"mean\"] = self.condition_mean(\n                cond_fn, out, x, t, model_kwargs=model_kwargs\n            )\n        sample = out[\"mean\"] + nonzero_mask * th.exp(0.5 * out[\"log_variance\"]) * noise\n        if (self.triplane_loss_type == 'residual_plus_decoder') or (self.triplane_loss_type == 'residual'):\n            sample = sample + model_kwargs['y'].to(sample.device)\n        return {\"sample\": sample, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def p_sample_loop(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n        save_timestep_interval=None,\n    ):\n        \"\"\"\n        Generate samples from the model.\n\n        :param model: the model module.\n        :param shape: the shape of the samples, (N, C, H, W).\n        :param noise: if specified, the noise from the encoder to sample.\n                      Should be of the same shape as `shape`.\n        :param clip_denoised: if True, clip x_start predictions to [-1, 1].\n        :param denoised_fn: if not None, a function which applies to the\n            x_start prediction before it is used to sample.\n        :param cond_fn: if not None, this is a gradient function that acts\n                        similarly to the model.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :param device: if specified, the device to create the samples on.\n                       If not specified, use a model parameter's device.\n        :param progress: if True, show a tqdm progress bar.\n        :param save_timestep_interval: if not None, return a dict mapping step\n            indices to intermediate samples (saved at this interval) instead of\n            only the final sample.\n        :return: a non-differentiable batch of samples.\n        \"\"\"\n        final = None\n        if save_timestep_interval is not None:\n            prev_steps = dict()\n\n        for idx, sample in enumerate(self.p_sample_loop_progressive(\n            model,\n            shape,\n            noise=noise,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            cond_fn=cond_fn,\n            model_kwargs=model_kwargs,\n            device=device,\n            progress=progress,\n        )):\n            final = sample\n            if (save_timestep_interval is not None) and (idx % save_timestep_interval == 0):  # save every save_timestep_interval steps\n                prev_steps[str(idx)] = final[\"sample\"]\n            if (save_timestep_interval is not None) and (idx > 960):  # also save every step after step 960\n                prev_steps[str(idx)] = final[\"sample\"]\n\n        if save_timestep_interval is not None:\n            prev_steps[str(self.num_timesteps)] = final[\"sample\"]\n            return prev_steps\n        else:\n            return final[\"sample\"]\n\n    def p_sample_loop_progressive(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n    ):\n        \"\"\"\n        Generate samples from the model and yield intermediate samples from\n        each timestep of diffusion.\n\n        Arguments are the same as p_sample_loop().\n        Returns a generator over dicts, where each dict is the return value of\n        p_sample().\n        \"\"\"\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n        if noise is not None:\n            img = noise\n        else:\n            img = th.randn(*shape, device=device)\n        indices = list(range(self.num_timesteps))[::-1]\n\n        if progress:\n            # Lazy import so that we don't depend on tqdm.\n            from tqdm.auto import tqdm\n\n            indices = tqdm(indices)\n\n        for i in indices:\n            t = th.tensor([i] * shape[0], device=device)\n            with th.no_grad():\n                out = self.p_sample(\n                    model,\n                    img,\n                    t,\n                    clip_denoised=clip_denoised,\n                    denoised_fn=denoised_fn,\n                    cond_fn=cond_fn,\n                    model_kwargs=model_kwargs,\n                )\n                yield out\n                img = out[\"sample\"]\n\n    def p_sample_loop_scene_repaint(\n        self,\n        model,\n        shape,\n        cond,\n        mode = 'down',\n        overlap = 64,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n    ):\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n\n        image_after_step = th.randn(*shape, device=device)\n        mask_cond = cond.detach().clone()\n        times = get_schedule_jump(t_T=self.num_timesteps, jump_length=20, jump_n_sample=5)\n        time_pairs = 
list(zip(times[:-1], times[1:]))\n        with th.no_grad():\n            for t_last, t_cur in time_pairs:\n                t_last_t = th.tensor([t_last] * shape[0], device=device)\n                if t_cur < t_last:  # reverse\n                    t_cond = self.q_sample(mask_cond, t_last_t)\n                    image_after_step = mask_img(image_after_step, t_cond, mode, overlap, H=model_kwargs['H'])\n                    out = self.p_sample(\n                        model,\n                        image_after_step,\n                        t_last_t,\n                        clip_denoised=clip_denoised,\n                        denoised_fn=denoised_fn,\n                        cond_fn=cond_fn,\n                        model_kwargs=model_kwargs,\n                    )\n                    image_after_step = out[\"sample\"]\n                else:\n                    t_shift = 1\n                    image_after_step = self.undo(image_after_step, t=t_last_t+t_shift, debug=False)\n                \n        return image_after_step\n                    \n    def p_sample_loop_scene(\n        self,\n        model,\n        shape,\n        cond,\n        mode = 'down',\n        overlap = 64,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n    ):\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n        img = th.randn(*shape, device=device)\n        indices = list(range(self.num_timesteps))[::-1]\n        mask_cond = cond.detach().clone()\n        \n        for i in indices:\n            t = th.tensor([i] * shape[0], device=device)\n            with th.no_grad():\n                m_cond = self.q_sample(mask_cond, t)\n                img = mask_img(img, m_cond, mode, overlap, H=model_kwargs['H'])\n                \n                out = self.p_sample(\n                    model,\n                    img,\n                    t,\n                    clip_denoised=clip_denoised,\n                    denoised_fn=denoised_fn,\n                    cond_fn=cond_fn,\n                    model_kwargs=model_kwargs,\n                )     \n                img = out[\"sample\"]\n        return img\n                \n    def ddim_sample(\n        self,\n        model,\n        x,\n        t,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        eta=0.0,\n        y0=None,\n        mask=None,\n        is_mask_t0=False,\n    ):\n        \"\"\"\n        Sample x_{t-1} from the model using DDIM.\n\n        Same usage as p_sample().\n        \"\"\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        if cond_fn is not None:\n            out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs)\n        # masked generation\n        if y0 is not None and mask is not None:\n            assert y0.shape == x.shape\n            assert mask.shape == x.shape\n            if is_mask_t0:\n                out[\"pred_xstart\"] = mask * y0 + (1 - mask) * out[\"pred_xstart\"]\n            else:\n                nonzero_mask = (\n                    (t != 0).float().view(-1, *([1] * (len(x.shape) - 1)))\n                )  # no noise when t == 0\n                out[\"pred_xstart\"] = (mask * y0 + (1 - mask) * 
out[\"pred_xstart\"]) * nonzero_mask + out[\"pred_xstart\"] * (1 - nonzero_mask)\n\n        # Usually our model outputs epsilon, but we re-derive it\n        # in case we used x_start or x_prev prediction.\n        eps = self._predict_eps_from_xstart(x, t, out[\"pred_xstart\"])\n\n        alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape)\n        alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape)\n        sigma = (\n            eta\n            * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar))\n            * th.sqrt(1 - alpha_bar / alpha_bar_prev)\n        )\n        # Equation 12.\n        noise = th.randn_like(x)\n        mean_pred = (\n            out[\"pred_xstart\"] * th.sqrt(alpha_bar_prev)\n            + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps\n        )\n        nonzero_mask = (\n            (t != 0).float().view(-1, *([1] * (len(x.shape) - 1)))\n        )  # no noise when t == 0\n        sample = mean_pred + nonzero_mask * sigma * noise\n        return {\"sample\": sample, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def ddim_reverse_sample(\n        self,\n        model,\n        x,\n        t,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        eta=0.0,\n    ):\n        \"\"\"\n        Sample x_{t+1} from the model using DDIM reverse ODE.\n        \"\"\"\n        assert eta == 0.0, \"Reverse ODE only for deterministic path\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        # Usually our model outputs epsilon, but we re-derive it\n        # in case we used x_start or x_prev prediction.\n        eps = (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x\n            - out[\"pred_xstart\"]\n        ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape)\n        alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape)\n\n        # Equation 12. 
reversed\n        mean_pred = (\n            out[\"pred_xstart\"] * th.sqrt(alpha_bar_next)\n            + th.sqrt(1 - alpha_bar_next) * eps\n        )\n\n        return {\"sample\": mean_pred, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def ddim_sample_loop(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n        eta=0.0,\n        y0=None,\n        mask=None,\n        is_mask_t0=False,\n    ):\n        \"\"\"\n        Generate samples from the model using DDIM.\n\n        Same usage as p_sample_loop().\n        \"\"\"\n        final = None\n        for sample in self.ddim_sample_loop_progressive(\n            model,\n            shape,\n            noise=noise,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            cond_fn=cond_fn,\n            model_kwargs=model_kwargs,\n            device=device,\n            progress=progress,\n            eta=eta,\n            y0=y0,\n            mask=mask,\n            is_mask_t0=is_mask_t0,\n        ):\n            final = sample\n        return final[\"sample\"]\n\n    def ddim_sample_loop_progressive(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        cond_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n        eta=0.0,\n        y0=None,\n        mask=None,\n        is_mask_t0=False,\n    ):\n        \"\"\"\n        Use DDIM to sample from the model and yield intermediate samples from\n        each timestep of DDIM.\n\n        Same usage as p_sample_loop_progressive().\n        \"\"\"\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n        if noise is not None:\n            img = noise\n        else:\n            img = th.randn(*shape, device=device)\n        indices = list(range(self.num_timesteps))[::-1]\n\n        if progress:\n            # Lazy import so that we don't depend on tqdm.\n            from tqdm.auto import tqdm\n\n            indices = tqdm(indices)\n\n        for i in indices:\n            t = th.tensor([i] * shape[0], device=device)\n            with th.no_grad():\n                out = self.ddim_sample(\n                    model,\n                    img,\n                    t,\n                    clip_denoised=clip_denoised,\n                    denoised_fn=denoised_fn,\n                    cond_fn=cond_fn,\n                    model_kwargs=model_kwargs,\n                    eta=eta,\n                    y0=y0,\n                    mask=mask,\n                    is_mask_t0=is_mask_t0,\n                )\n                yield out\n                img = out[\"sample\"]\n\n    def _vb_terms_bpd(\n        self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None\n    ):\n        \"\"\"\n        Get a term for the variational lower-bound.\n\n        The resulting units are bits (rather than nats, as one might expect).\n        This allows for comparison to other papers.\n\n        :return: a dict with the following keys:\n                 - 'output': a shape [N] tensor of NLLs or KLs.\n                 - 'pred_xstart': the x_0 predictions.\n        \"\"\"\n        true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance(\n            x_start=x_start, x_t=x_t, t=t\n        )\n        out = 
self.p_mean_variance(\n            model, x_t, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs\n        )\n        kl = normal_kl(\n            true_mean, true_log_variance_clipped, out[\"mean\"], out[\"log_variance\"]\n        )\n        kl = mean_flat(kl) / np.log(2.0)\n\n        decoder_nll = -discretized_gaussian_log_likelihood(\n            x_start, means=out[\"mean\"], log_scales=0.5 * out[\"log_variance\"]\n        )\n        assert decoder_nll.shape == x_start.shape\n        decoder_nll = mean_flat(decoder_nll) / np.log(2.0)\n\n        # At the first timestep return the decoder NLL,\n        # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t))\n        output = th.where((t == 0), decoder_nll, kl)\n        return {\"output\": output, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def merge_features(self, xy_feat, xz_feat, yz_feat):\n        # Expand dimensions\n        xy_feat_exp = xy_feat.unsqueeze(4)  # Add z dimension\n        xz_feat_exp = xz_feat.unsqueeze(3)   # Add y dimension\n        yz_feat_exp = yz_feat.unsqueeze(2)   # Add x dimension\n\n        # Calculate the size of the new 3D tensor\n        B, C, H, W, D = xy_feat_exp.size(0), xy_feat_exp.size(1), xy_feat_exp.size(2), xy_feat_exp.size(3), yz_feat_exp.size(4)\n\n        # Initialize a 3D tensor with zeros\n        merged_tensor = th.zeros((B, C, H, W, D), device=xy_feat.device)\n\n        # Fill the tensor with the expanded feature maps\n        merged_tensor += xy_feat_exp.expand_as(merged_tensor)\n        merged_tensor += xz_feat_exp.expand_as(merged_tensor)\n        merged_tensor += yz_feat_exp.expand_as(merged_tensor)\n        return merged_tensor\n    \n    def training_losses(self, model, x_start, t, model_kwargs=None, noise=None):\n        \"\"\"\n        Compute training losses for a single timestep.\n\n        :param model: the model to evaluate loss on.\n        :param x_start: the [N x C x ...] tensor of inputs.\n        :param t: a batch of timestep indices.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :param noise: if specified, the specific Gaussian noise to try to remove.\n        :return: a dict with the key \"loss\" containing a tensor of shape [N].\n                 Some mean or variance settings may also have other keys.\n        \"\"\"\n        if model_kwargs is None:\n            model_kwargs = {}\n        if noise is None:\n            noise = th.randn_like(x_start)\n\n        terms = {}\n        \n        if self.ssc_refine :\n            with th.no_grad():\n                large_T = th.tensor([self.num_timesteps-1] * x_start.shape[0], device=x_start.device)\n                m_t = self.q_sample(x_start, large_T)\n                m_1 = model(m_t, large_T, model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y'])\n            x_t = self.q_sample(m_1, t, noise=noise)\n        else : \n            x_t = self.q_sample(x_start, t, noise=noise)\n        \n        model_output = model(x_t, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y'])\n         \n        if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]:\n            B, C = x_t.shape[:2]\n            assert model_output.shape == (B, C * 2, *x_t.shape[2:])\n            model_output, model_var_values = th.split(model_output, C, dim=1)\n            # Learn the variance using the variational bound, but don't let\n            # it affect our mean prediction.\n            frozen_out = th.cat([model_output.detach(), model_var_values], dim=1)\n            terms[\"vb\"] = self._vb_terms_bpd(\n                model=lambda *args, r=frozen_out: r,\n                x_start=x_start,\n                x_t=x_t,\n                t=t,\n                clip_denoised=False,\n            )[\"output\"]\n            if self.loss_type == LossType.RESCALED_MSE:\n                # Divide by 1000 for equivalence with initial implementation.\n                # Without a factor of 1/1000, the VB term hurts the MSE term.\n                terms[\"vb\"] *= self.num_timesteps / 1000.0\n\n        target = {\n            ModelMeanType.PREVIOUS_X: self.q_posterior_mean_variance(\n                x_start=x_start, x_t=x_t, t=t\n            )[0],\n            ModelMeanType.START_X: x_start,\n            ModelMeanType.EPSILON: noise,\n        }[self.model_mean_type]\n        assert model_output.shape == target.shape == x_start.shape\n\n        if self.args.voxel_fea :\n            if self.triplane_loss_type == 'l1':\n                terms[\"loss\"] = mean_flat(th.abs(target - model_output))\n            elif self.triplane_loss_type == 'l2':\n                terms[\"loss\"] = mean_flat((target - model_output)**2)\n        else : \n            H, W, D = model_kwargs[\"H\"], model_kwargs[\"W\"], model_kwargs[\"D\"]\n            trisize = (H[0], W[0], D[0])\n            \n            target_xy, target_xz, target_yz = decompose_featmaps(target, trisize)\n            model_output_xy, model_output_xz, model_output_yz = decompose_featmaps(model_output, trisize)\n\n            if self.triplane_loss_type == 'l1':\n                terms[\"l1_xy\"] = mean_flat(th.abs(target_xy - model_output_xy))\n                terms[\"l1_xz\"] = mean_flat(th.abs(target_xz - model_output_xz))\n                terms[\"l1_yz\"] = mean_flat(th.abs(target_yz - model_output_yz))\n                if \"vb\" in terms:\n                    terms[\"loss\"] = terms[\"l1_xy\"] + terms[\"l1_xz\"] + terms[\"l1_yz\"] + terms[\"vb\"]\n                else:\n                 
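   # fixed-variance case: no VB term, so the loss is just the per-plane L1 sum\n                 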
   terms[\"loss\"] = terms[\"l1_xy\"] + terms[\"l1_xz\"] + terms[\"l1_yz\"]\n\n            elif self.triplane_loss_type == 'l2':\n                terms[\"l2_xy\"] = mean_flat((target_xy - model_output_xy)**2)\n                terms[\"l2_xz\"] = mean_flat((target_xz - model_output_xz)**2)\n                terms[\"l2_yz\"] = mean_flat((target_yz - model_output_yz)**2)\n                if \"vb\" in terms:\n                    terms[\"loss\"] = terms[\"l2_xy\"] + terms[\"l2_xz\"] + terms[\"l2_yz\"] + terms[\"vb\"]\n                else:\n                    terms[\"loss\"] = terms[\"l2_xy\"] + terms[\"l2_xz\"] + terms[\"l2_yz\"]\n                    \n            else:\n                raise ValueError(\"Unknown loss type: {}\".format(self.triplane_loss_type))   \n        \n        return terms\n\n    def _prior_bpd(self, x_start):\n        \"\"\"\n        Get the prior KL term for the variational lower-bound, measured in\n        bits-per-dim.\n\n        This term can't be optimized, as it only depends on the encoder.\n\n        :param x_start: the [N x C x ...] tensor of inputs.\n        :return: a batch of [N] KL values (in bits), one per batch element.\n        \"\"\"\n        batch_size = x_start.shape[0]\n        t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)\n        qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)\n        kl_prior = normal_kl(\n            mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0\n        )\n        return mean_flat(kl_prior) / np.log(2.0)\n\n    def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None):\n        \"\"\"\n        Compute the entire variational lower-bound, measured in bits-per-dim,\n        as well as other related quantities.\n\n        :param model: the model to evaluate loss on.\n        :param x_start: the [N x C x ...] tensor of inputs.\n        :param clip_denoised: if True, clip denoised samples.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n\n        :return: a dict containing the following keys:\n                 - total_bpd: the total variational lower-bound, per batch element.\n                 - prior_bpd: the prior term in the lower-bound.\n                 - vb: an [N x T] tensor of terms in the lower-bound.\n                 - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep.\n                 - mse: an [N x T] tensor of epsilon MSEs for each timestep.\n        \"\"\"\n        device = x_start.device\n        batch_size = x_start.shape[0]\n\n        vb = []\n        xstart_mse = []\n        mse = []\n        for t in list(range(self.num_timesteps))[::-1]:\n            t_batch = th.tensor([t] * batch_size, device=device)\n            noise = th.randn_like(x_start)\n            x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise)\n            # Calculate VLB term at the current timestep\n            with th.no_grad():\n                out = self._vb_terms_bpd(\n                    model,\n                    x_start=x_start,\n                    x_t=x_t,\n                    t=t_batch,\n                    clip_denoised=clip_denoised,\n                    model_kwargs=model_kwargs,\n                )\n            vb.append(out[\"output\"])\n            xstart_mse.append(mean_flat((out[\"pred_xstart\"] - x_start) ** 2))\n            eps = self._predict_eps_from_xstart(x_t, t_batch, out[\"pred_xstart\"])\n            mse.append(mean_flat((eps - noise) ** 2))\n\n        vb = th.stack(vb, dim=1)\n        xstart_mse = th.stack(xstart_mse, dim=1)\n        mse = th.stack(mse, dim=1)\n\n        prior_bpd = self._prior_bpd(x_start)\n        total_bpd = vb.sum(dim=1) + prior_bpd\n        return {\n            \"total_bpd\": total_bpd,\n            \"prior_bpd\": prior_bpd,\n            \"vb\": vb,\n            \"xstart_mse\": xstart_mse,\n            \"mse\": mse,\n        }\n\n\ndef _extract_into_tensor(arr, timesteps, broadcast_shape):\n    \"\"\"\n    Extract values from a 1-D numpy array for a batch of indices.\n\n    :param arr: the 1-D numpy array.\n    :param timesteps: a tensor of indices into the array to extract.\n    :param broadcast_shape: a larger shape of K dimensions with the batch\n                            dimension equal to the length of timesteps.\n    :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims.\n    \"\"\"\n    res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float()\n    while len(res.shape) < len(broadcast_shape):\n        res = res[..., None]\n    return res.expand(broadcast_shape)"
  },
  {
    "path": "diffusion/logger.py",
    "content": "\"\"\"\nLogger copied from OpenAI baselines to avoid extra RL-based dependencies:\nhttps://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/logger.py\n\"\"\"\n\nimport os\nimport sys\nimport os.path as osp\nimport json\nimport time\nimport datetime\nimport tempfile\nimport warnings\nfrom collections import defaultdict\nfrom contextlib import contextmanager\n\nDEBUG = 10\nINFO = 20\nWARN = 30\nERROR = 40\n\nDISABLED = 50\n\n\nclass KVWriter(object):\n    def writekvs(self, kvs):\n        raise NotImplementedError\n\n\nclass SeqWriter(object):\n    def writeseq(self, seq):\n        raise NotImplementedError\n\n\nclass HumanOutputFormat(KVWriter, SeqWriter):\n    def __init__(self, filename_or_file):\n        if isinstance(filename_or_file, str):\n            self.file = open(filename_or_file, \"wt\")\n            self.own_file = True\n        else:\n            assert hasattr(filename_or_file, \"read\"), (\n                \"expected file or str, got %s\" % filename_or_file\n            )\n            self.file = filename_or_file\n            self.own_file = False\n\n    def writekvs(self, kvs):\n        # Create strings for printing\n        key2str = {}\n        for (key, val) in sorted(kvs.items()):\n            if hasattr(val, \"__float__\"):\n                valstr = \"%-8.3g\" % val\n            else:\n                valstr = str(val)\n            key2str[self._truncate(key)] = self._truncate(valstr)\n\n        # Find max widths\n        if len(key2str) == 0:\n            print(\"WARNING: tried to write empty key-value dict\")\n            return\n        else:\n            keywidth = max(map(len, key2str.keys()))\n            valwidth = max(map(len, key2str.values()))\n\n        # Write out the data\n        dashes = \"-\" * (keywidth + valwidth + 7)\n        lines = [dashes]\n        for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()):\n            lines.append(\n                \"| %s%s | %s%s |\"\n                % (key, \" \" * (keywidth - len(key)), val, \" \" * (valwidth - len(val)))\n            )\n        lines.append(dashes)\n        self.file.write(\"\\n\".join(lines) + \"\\n\")\n\n        # Flush the output to the file\n        self.file.flush()\n\n    def _truncate(self, s):\n        maxlen = 30\n        return s[: maxlen - 3] + \"...\" if len(s) > maxlen else s\n\n    def writeseq(self, seq):\n        seq = list(seq)\n        for (i, elem) in enumerate(seq):\n            self.file.write(elem)\n            if i < len(seq) - 1:  # add space unless this is the last one\n                self.file.write(\" \")\n        self.file.write(\"\\n\")\n        self.file.flush()\n\n    def close(self):\n        if self.own_file:\n            self.file.close()\n\n\nclass JSONOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, \"wt\")\n\n    def writekvs(self, kvs):\n        for k, v in sorted(kvs.items()):\n            if hasattr(v, \"dtype\"):\n                kvs[k] = float(v)\n        self.file.write(json.dumps(kvs) + \"\\n\")\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\n\nclass CSVOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, \"w+t\")\n        self.keys = []\n        self.sep = \",\"\n\n    def writekvs(self, kvs):\n        # Add our current row to the history\n        extra_keys = list(kvs.keys() - self.keys)\n        extra_keys.sort()\n        if extra_keys:\n            
self.keys.extend(extra_keys)\n            self.file.seek(0)\n            lines = self.file.readlines()\n            self.file.seek(0)\n            for (i, k) in enumerate(self.keys):\n                if i > 0:\n                    self.file.write(\",\")\n                self.file.write(k)\n            self.file.write(\"\\n\")\n            for line in lines[1:]:\n                self.file.write(line[:-1])\n                self.file.write(self.sep * len(extra_keys))\n                self.file.write(\"\\n\")\n        for (i, k) in enumerate(self.keys):\n            if i > 0:\n                self.file.write(\",\")\n            v = kvs.get(k)\n            if v is not None:\n                self.file.write(str(v))\n        self.file.write(\"\\n\")\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\n\nclass TensorBoardOutputFormat(KVWriter):\n    \"\"\"\n    Dumps key/value pairs into TensorBoard's numeric format.\n    \"\"\"\n\n    def __init__(self, dir):\n        os.makedirs(dir, exist_ok=True)\n        self.dir = dir\n        self.step = 1\n        prefix = \"events\"\n        path = osp.join(osp.abspath(dir), prefix)\n        import tensorflow as tf\n        from tensorflow.python import pywrap_tensorflow\n        from tensorflow.core.util import event_pb2\n        from tensorflow.python.util import compat\n\n        self.tf = tf\n        self.event_pb2 = event_pb2\n        self.pywrap_tensorflow = pywrap_tensorflow\n        self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path))\n\n    def writekvs(self, kvs):\n        def summary_val(k, v):\n            kwargs = {\"tag\": k, \"simple_value\": float(v)}\n            return self.tf.Summary.Value(**kwargs)\n\n        summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()])\n        event = self.event_pb2.Event(wall_time=time.time(), summary=summary)\n        event.step = (\n            self.step\n        )  # is there any reason why you'd want to specify the step?\n        self.writer.WriteEvent(event)\n        self.writer.Flush()\n        self.step += 1\n\n    def close(self):\n        if self.writer:\n            self.writer.Close()\n            self.writer = None\n\n\ndef make_output_format(format, ev_dir, log_suffix=\"\"):\n    os.makedirs(ev_dir, exist_ok=True)\n    if format == \"stdout\":\n        return HumanOutputFormat(sys.stdout)\n    elif format == \"log\":\n        return HumanOutputFormat(osp.join(ev_dir, \"log%s.txt\" % log_suffix))\n    elif format == \"json\":\n        return JSONOutputFormat(osp.join(ev_dir, \"progress%s.json\" % log_suffix))\n    elif format == \"csv\":\n        return CSVOutputFormat(osp.join(ev_dir, \"progress%s.csv\" % log_suffix))\n    elif format == \"tensorboard\":\n        return TensorBoardOutputFormat(osp.join(ev_dir, \"tb%s\" % log_suffix))\n    else:\n        raise ValueError(\"Unknown format specified: %s\" % (format,))\n\n\n# ================================================================\n# API\n# ================================================================\n\n\ndef logkv(key, val):\n    \"\"\"\n    Log a value of some diagnostic\n    Call this once for each diagnostic quantity, each iteration\n    If called many times, last value will be used.\n    \"\"\"\n    get_current().logkv(key, val)\n\n\ndef logkv_mean(key, val):\n    \"\"\"\n    The same as logkv(), but if called many times, values averaged.\n    \"\"\"\n    get_current().logkv_mean(key, val)\n\n\ndef logkvs(d):\n    \"\"\"\n    Log a dictionary of key-value pairs\n    
\"\"\"\n    for (k, v) in d.items():\n        logkv(k, v)\n\n\ndef dumpkvs():\n    \"\"\"\n    Write all of the diagnostics from the current iteration\n    \"\"\"\n    return get_current().dumpkvs()\n\n\ndef getkvs():\n    return get_current().name2val\n\n\ndef log(*args, level=INFO):\n    \"\"\"\n    Write the sequence of args, with no separators, to the console and output files (if you've configured an output file).\n    \"\"\"\n    get_current().log(*args, level=level)\n\n\ndef debug(*args):\n    log(*args, level=DEBUG)\n\n\ndef info(*args):\n    log(*args, level=INFO)\n\n\ndef warn(*args):\n    log(*args, level=WARN)\n\n\ndef error(*args):\n    log(*args, level=ERROR)\n\n\ndef set_level(level):\n    \"\"\"\n    Set logging threshold on current logger.\n    \"\"\"\n    get_current().set_level(level)\n\n\ndef set_comm(comm):\n    get_current().set_comm(comm)\n\n\ndef get_dir():\n    \"\"\"\n    Get directory that log files are being written to.\n    will be None if there is no output directory (i.e., if you didn't call start)\n    \"\"\"\n    return get_current().get_dir()\n\n\nrecord_tabular = logkv\ndump_tabular = dumpkvs\n\n\n@contextmanager\ndef profile_kv(scopename):\n    logkey = \"wait_\" + scopename\n    tstart = time.time()\n    try:\n        yield\n    finally:\n        get_current().name2val[logkey] += time.time() - tstart\n\n\ndef profile(n):\n    \"\"\"\n    Usage:\n    @profile(\"my_func\")\n    def my_func(): code\n    \"\"\"\n\n    def decorator_with_name(func):\n        def func_wrapper(*args, **kwargs):\n            with profile_kv(n):\n                return func(*args, **kwargs)\n\n        return func_wrapper\n\n    return decorator_with_name\n\n\n# ================================================================\n# Backend\n# ================================================================\n\n\ndef get_current():\n    if Logger.CURRENT is None:\n        _configure_default_logger()\n\n    return Logger.CURRENT\n\n\nclass Logger(object):\n    DEFAULT = None  # A logger with no output files. 
(See right below class definition)\n    # So that you can still log to the terminal without setting up any output files\n    CURRENT = None  # Current logger being used by the free functions above\n\n    def __init__(self, dir, output_formats, comm=None):\n        self.name2val = defaultdict(float)  # values this iteration\n        self.name2cnt = defaultdict(int)\n        self.level = INFO\n        self.dir = dir\n        self.output_formats = output_formats\n        self.comm = comm\n\n    # Logging API, forwarded\n    # ----------------------------------------\n    def logkv(self, key, val):\n        self.name2val[key] = val\n\n    def logkv_mean(self, key, val):\n        oldval, cnt = self.name2val[key], self.name2cnt[key]\n        self.name2val[key] = oldval * cnt / (cnt + 1) + val / (cnt + 1)\n        self.name2cnt[key] = cnt + 1\n\n    def dumpkvs(self):\n        if self.comm is None:\n            d = self.name2val\n        else:\n            d = mpi_weighted_mean(\n                self.comm,\n                {\n                    name: (val, self.name2cnt.get(name, 1))\n                    for (name, val) in self.name2val.items()\n                },\n            )\n            if self.comm.rank != 0:\n                d[\"dummy\"] = 1  # so we don't get a warning about empty dict\n        out = d.copy()  # Return the dict for unit testing purposes\n        for fmt in self.output_formats:\n            if isinstance(fmt, KVWriter):\n                fmt.writekvs(d)\n        self.name2val.clear()\n        self.name2cnt.clear()\n        return out\n\n    def log(self, *args, level=INFO):\n        if self.level <= level:\n            self._do_log(args)\n\n    # Configuration\n    # ----------------------------------------\n    def set_level(self, level):\n        self.level = level\n\n    def set_comm(self, comm):\n        self.comm = comm\n\n    def get_dir(self):\n        return self.dir\n\n    def close(self):\n        for fmt in self.output_formats:\n            fmt.close()\n\n    # Misc\n    # ----------------------------------------\n    def _do_log(self, args):\n        for fmt in self.output_formats:\n            if isinstance(fmt, SeqWriter):\n                fmt.writeseq(map(str, args))\n\n\ndef get_rank_without_mpi_import():\n    # check environment variables here instead of importing mpi4py\n    # to avoid calling MPI_Init() when this module is imported\n    for varname in [\"PMI_RANK\", \"OMPI_COMM_WORLD_RANK\"]:\n        if varname in os.environ:\n            return int(os.environ[varname])\n    return 0\n\n\ndef mpi_weighted_mean(comm, local_name2valcount):\n    \"\"\"\n    Copied from: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/common/mpi_util.py#L110\n    Perform a weighted average over dicts that are each on a different node\n    Input: local_name2valcount: dict mapping key -> (value, count)\n    Returns: key -> mean\n    \"\"\"\n    all_name2valcount = comm.gather(local_name2valcount)\n    if comm.rank == 0:\n        name2sum = defaultdict(float)\n        name2count = defaultdict(float)\n        for n2vc in all_name2valcount:\n            for (name, (val, count)) in n2vc.items():\n                try:\n                    val = float(val)\n                except ValueError:\n                    if comm.rank == 0:\n                        warnings.warn(\n                            \"WARNING: tried to compute mean on non-float {}={}\".format(\n                                name, val\n                            )\n      
                  )\n                else:\n                    name2sum[name] += val * count\n                    name2count[name] += count\n        return {name: name2sum[name] / name2count[name] for name in name2sum}\n    else:\n        return {}\n\n\ndef configure(dir=None, format_strs=None, comm=None, log_suffix=\"\"):\n    \"\"\"\n    If comm is provided, average all numerical stats across that comm\n    \"\"\"\n    if dir is None:\n        dir = os.getenv(\"OPENAI_LOGDIR\")\n    if dir is None:\n        dir = osp.join(\n            tempfile.gettempdir(),\n            datetime.datetime.now().strftime(\"openai-%Y-%m-%d-%H-%M-%S-%f\"),\n        )\n    assert isinstance(dir, str)\n    dir = os.path.expanduser(dir)\n    os.makedirs(os.path.expanduser(dir), exist_ok=True)\n\n    rank = get_rank_without_mpi_import()\n    if rank > 0:\n        log_suffix = log_suffix + \"-rank%03i\" % rank\n\n    if format_strs is None:\n        if rank == 0:\n            format_strs = os.getenv(\"OPENAI_LOG_FORMAT\", \"stdout,log,csv\").split(\",\")\n        else:\n            format_strs = os.getenv(\"OPENAI_LOG_FORMAT_MPI\", \"log\").split(\",\")\n    format_strs = filter(None, format_strs)\n    output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs]\n\n    Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm)\n    if output_formats:\n        log(\"Logging to %s\" % dir)\n\n\ndef _configure_default_logger():\n    configure()\n    Logger.DEFAULT = Logger.CURRENT\n\n\ndef reset():\n    if Logger.CURRENT is not Logger.DEFAULT:\n        Logger.CURRENT.close()\n        Logger.CURRENT = Logger.DEFAULT\n        log(\"Reset logger\")\n\n\n@contextmanager\ndef scoped_configure(dir=None, format_strs=None, comm=None):\n    prevlogger = Logger.CURRENT\n    configure(dir=dir, format_strs=format_strs, comm=comm)\n    try:\n        yield\n    finally:\n        Logger.CURRENT.close()\n        Logger.CURRENT = prevlogger\n\n"
  },
  {
    "path": "diffusion/losses.py",
    "content": "\"\"\"\nHelpers for various likelihood-based losses. These are ported from the original\nHo et al. diffusion models codebase:\nhttps://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/utils.py\n\"\"\"\n\nimport numpy as np\n\nimport torch as th\n\n\ndef normal_kl(mean1, logvar1, mean2, logvar2):\n    \"\"\"\n    Compute the KL divergence between two gaussians.\n\n    Shapes are automatically broadcasted, so batches can be compared to\n    scalars, among other use cases.\n    \"\"\"\n    tensor = None\n    for obj in (mean1, logvar1, mean2, logvar2):\n        if isinstance(obj, th.Tensor):\n            tensor = obj\n            break\n    assert tensor is not None, \"at least one argument must be a Tensor\"\n\n    # Force variances to be Tensors. Broadcasting helps convert scalars to\n    # Tensors, but it does not work for th.exp().\n    logvar1, logvar2 = [\n        x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor)\n        for x in (logvar1, logvar2)\n    ]\n\n    return 0.5 * (\n        -1.0\n        + logvar2\n        - logvar1\n        + th.exp(logvar1 - logvar2)\n        + ((mean1 - mean2) ** 2) * th.exp(-logvar2)\n    )\n\n\ndef approx_standard_normal_cdf(x):\n    \"\"\"\n    A fast approximation of the cumulative distribution function of the\n    standard normal.\n    \"\"\"\n    return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3))))\n\n\ndef discretized_gaussian_log_likelihood(x, *, means, log_scales):\n    \"\"\"\n    Compute the log-likelihood of a Gaussian distribution discretizing to a\n    given image.\n\n    :param x: the target images. It is assumed that this was uint8 values,\n              rescaled to the range [-1, 1].\n    :param means: the Gaussian mean Tensor.\n    :param log_scales: the Gaussian log stddev Tensor.\n    :return: a tensor like x of log probabilities (in nats).\n    \"\"\"\n    assert x.shape == means.shape == log_scales.shape\n    centered_x = x - means\n    inv_stdv = th.exp(-log_scales)\n    plus_in = inv_stdv * (centered_x + 1.0 / 255.0)\n    cdf_plus = approx_standard_normal_cdf(plus_in)\n    min_in = inv_stdv * (centered_x - 1.0 / 255.0)\n    cdf_min = approx_standard_normal_cdf(min_in)\n    log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12))\n    log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12))\n    cdf_delta = cdf_plus - cdf_min\n    log_probs = th.where(\n        x < -0.999,\n        log_cdf_plus,\n        th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))),\n    )\n    assert log_probs.shape == x.shape\n    return log_probs\n"
  },
  {
    "path": "diffusion/nn.py",
    "content": "\"\"\"\nVarious utilities for neural networks.\n\"\"\"\n\nimport math\nimport torch as th\nimport torch.nn as nn\n\n\ndef mask_img(img, cond, mode, overlap, H=[128]):\n    H = H[0]\n    if type(mode) == tuple:\n        cond[:, :, int((mode[2])/2):int((mode[3])/2), int((mode[0])/2):int((mode[1])/2)] =\\\n            img[:, :, int((mode[2])/2):int((mode[3])/2), int((mode[0])/2):int((mode[1])/2)]\n        if overlap == 'inpainting':\n            cond[:, :, int((mode[2])/2):int((mode[3])/2), H:] = img[:, :, int((mode[2])/2):int((mode[3])/2), H:]\n            cond[:, :, H:, int((mode[0])/2):int((mode[1])/2)] = img[:, :, H:, int((mode[0])/2):int((mode[1])/2)]\n        return cond\n    else :\n        tri_overlap = int(overlap/2) \n        if mode == 'downright':\n            img[:, :, H-tri_overlap:H, :H] = cond[:, :, H-tri_overlap:H, :H]\n            img[:, :, :H, H-tri_overlap:H] = cond[:, :, :H, H-tri_overlap:H]\n        elif mode == 'downleft':\n            img[:, :, H-tri_overlap:H, :H] = cond[:, :, H-tri_overlap:H, :H]\n            img[:, :, :H, :tri_overlap] = cond[:, :, :H, :tri_overlap]\n        elif mode == 'upright':\n            img[:, :, :tri_overlap, :H] = cond[:, :, :tri_overlap, :H]\n            img[:, :, :H, H-tri_overlap:H] = cond[:, :, :H, H-tri_overlap:H]\n        elif mode == 'upleft':\n            img[:, :, :tri_overlap, :H] = cond[:, :, :tri_overlap, :H]\n            img[:, :, :H, :tri_overlap] = cond[:, :, :H, :tri_overlap]\n        elif mode == 'down':\n            img[:, :, H-tri_overlap:H, :] = cond[:, :, :tri_overlap, :]\n        elif mode == 'up':\n            img[:, :, :tri_overlap, :] = cond[:, :, H-tri_overlap:H, :]\n        elif mode == 'right':\n            img[:, :, :, H-tri_overlap:H] = cond[:, :, :, :tri_overlap]\n        elif mode == 'left':\n            img[:, :, :, :tri_overlap] = cond[:, :, :, H-tri_overlap:H]\n        return img\n    \ndef compose_featmaps(feat_xy, feat_xz, feat_yz, tri_size=(128,128,16) , transpose=True):\n    H, W, D = tri_size\n\n    empty_block = th.zeros(list(feat_xy.shape[:-2]) + [D, D], dtype=feat_xy.dtype, device=feat_xy.device)\n    if transpose:\n        feat_yz = feat_yz.transpose(-1, -2)\n    composed_map = th.cat(\n        [th.cat([feat_xy, feat_xz], dim=-1),\n         th.cat([feat_yz, empty_block], dim=-1)], \n        dim=-2\n    )\n    return composed_map, (H, W, D)\n\n\ndef decompose_featmaps(composed_map, tri_size=(128,128,16) , transpose=True):\n    H, W, D = tri_size\n    feat_xy = composed_map[..., :H, :W] # (C, H, W)\n    feat_xz = composed_map[..., :H, W:] # (C, H, D)\n    feat_yz = composed_map[..., H:, :W] # (C, W, D)\n    if transpose:\n        return feat_xy, feat_xz, feat_yz.transpose(-1, -2)\n    else:\n        return feat_xy, feat_xz, feat_yz\n    \n# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.\nclass SiLU(nn.Module):\n    def forward(self, x):\n        return x * th.sigmoid(x)\n\n\nclass GroupNorm32(nn.GroupNorm):\n    def forward(self, x):\n        return super().forward(x.float()).type(x.dtype)\n\n\ndef conv_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D convolution module.\n    \"\"\"\n    if dims == 1:\n        return nn.Conv1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.Conv2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.Conv3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef linear(*args, **kwargs):\n    \"\"\"\n    Create a linear module.\n    \"\"\"\n    return nn.Linear(*args, **kwargs)\n\n\ndef 
avg_pool_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D average pooling module.\n    \"\"\"\n    if dims == 1:\n        return nn.AvgPool1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.AvgPool2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.AvgPool3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef update_ema(target_params, source_params, rate=0.99):\n    \"\"\"\n    Update target parameters to be closer to those of source parameters using\n    an exponential moving average.\n\n    :param target_params: the target parameter sequence.\n    :param source_params: the source parameter sequence.\n    :param rate: the EMA rate (closer to 1 means slower).\n    \"\"\"\n    for targ, src in zip(target_params, source_params):\n        targ.detach().mul_(rate).add_(src, alpha=1 - rate)\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef scale_module(module, scale):\n    \"\"\"\n    Scale the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().mul_(scale)\n    return module\n\n\ndef mean_flat(tensor):\n    \"\"\"\n    Take the mean over all non-batch dimensions.\n    \"\"\"\n\n    return tensor.mean(dim=list(range(1, len(tensor.shape))))\n\n\ndef normalization(channels):\n    \"\"\"\n    Make a standard normalization layer.\n\n    :param channels: number of input channels.\n    :return: an nn.Module for normalization.\n    \"\"\"\n    return GroupNorm32(32, channels)\n\n\ndef timestep_embedding(timesteps, dim, max_period=10000):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n\n    :param timesteps: a 1-D Tensor of N indices, one per batch element.\n                      These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an [N x dim] Tensor of positional embeddings.\n    \"\"\"\n    half = dim // 2\n    freqs = th.exp(\n        -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half\n    ).to(device=timesteps.device)\n    args = timesteps[:, None].float() * freqs[None]\n    embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)\n    if dim % 2:\n        embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)\n    return embedding\n\n\ndef checkpoint(func, inputs, params, flag):\n    \"\"\"\n    Evaluate a function without caching intermediate activations, allowing for\n    reduced memory at the expense of extra compute in the backward pass.\n\n    :param func: the function to evaluate.\n    :param inputs: the argument sequence to pass to `func`.\n    :param params: a sequence of parameters `func` depends on but does not\n                   explicitly take as arguments.\n    :param flag: if False, disable gradient checkpointing.\n    \"\"\"\n    if flag:\n        args = tuple(inputs) + tuple(params)\n        return CheckpointFunction.apply(func, len(inputs), *args)\n    else:\n        return func(*inputs)\n\n\nclass CheckpointFunction(th.autograd.Function):\n    @staticmethod\n    def forward(ctx, run_function, length, *args):\n        ctx.run_function = run_function\n        ctx.input_tensors = list(args[:length])\n        ctx.input_params = list(args[length:])\n        with th.no_grad():\n            output_tensors = ctx.run_function(*ctx.input_tensors)\n        return 
output_tensors\n\n    @staticmethod\n    def backward(ctx, *output_grads):\n        ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]\n        with th.enable_grad():\n            # Fixes a bug where the first op in run_function modifies the\n            # Tensor storage in place, which is not allowed for detach()'d\n            # Tensors.\n            shallow_copies = [x.view_as(x) for x in ctx.input_tensors]\n            output_tensors = ctx.run_function(*shallow_copies)\n        input_grads = th.autograd.grad(\n            output_tensors,\n            ctx.input_tensors + ctx.input_params,\n            output_grads,\n            allow_unused=True,\n        )\n        del ctx.input_tensors\n        del ctx.input_params\n        del output_tensors\n        return (None, None) + input_grads\n"
  },
  {
    "path": "diffusion/resample.py",
    "content": "from abc import ABC, abstractmethod\n\nimport numpy as np\nimport torch as th\nimport torch.distributed as dist\n\n\ndef create_named_schedule_sampler(name, diffusion):\n    \"\"\"\n    Create a ScheduleSampler from a library of pre-defined samplers.\n\n    :param name: the name of the sampler.\n    :param diffusion: the diffusion object to sample for.\n    \"\"\"\n    if name == \"uniform\":\n        return UniformSampler(diffusion)\n    elif name == \"loss-second-moment\":\n        return LossSecondMomentResampler(diffusion)\n    else:\n        raise NotImplementedError(f\"unknown schedule sampler: {name}\")\n\n\nclass ScheduleSampler(ABC):\n    \"\"\"\n    A distribution over timesteps in the diffusion process, intended to reduce\n    variance of the objective.\n\n    By default, samplers perform unbiased importance sampling, in which the\n    objective's mean is unchanged.\n    However, subclasses may override sample() to change how the resampled\n    terms are reweighted, allowing for actual changes in the objective.\n    \"\"\"\n\n    @abstractmethod\n    def weights(self):\n        \"\"\"\n        Get a numpy array of weights, one per diffusion step.\n\n        The weights needn't be normalized, but must be positive.\n        \"\"\"\n\n    def sample(self, batch_size, device):\n        \"\"\"\n        Importance-sample timesteps for a batch.\n\n        :param batch_size: the number of timesteps.\n        :param device: the torch device to save to.\n        :return: a tuple (timesteps, weights):\n                 - timesteps: a tensor of timestep indices.\n                 - weights: a tensor of weights to scale the resulting losses.\n        \"\"\"\n        w = self.weights()\n        p = w / np.sum(w)\n        indices_np = np.random.choice(len(p), size=(batch_size,), p=p)\n        indices = th.from_numpy(indices_np).long().to(device)\n        weights_np = 1 / (len(p) * p[indices_np])\n        weights = th.from_numpy(weights_np).float().to(device)\n        return indices, weights\n\n\nclass UniformSampler(ScheduleSampler):\n    def __init__(self, diffusion):\n        self.diffusion = diffusion\n        self._weights = np.ones([diffusion.num_timesteps])\n\n    def weights(self):\n        return self._weights\n\n\nclass LossAwareSampler(ScheduleSampler):\n    def update_with_local_losses(self, local_ts, local_losses):\n        \"\"\"\n        Update the reweighting using losses from a model.\n\n        Call this method from each rank with a batch of timesteps and the\n        corresponding losses for each of those timesteps.\n        This method will perform synchronization to make sure all of the ranks\n        maintain the exact same reweighting.\n\n        :param local_ts: an integer Tensor of timesteps.\n        :param local_losses: a 1D Tensor of losses.\n        \"\"\"\n        batch_sizes = [\n            th.tensor([0], dtype=th.int32, device=local_ts.device)\n            for _ in range(dist.get_world_size())\n        ]\n        dist.all_gather(\n            batch_sizes,\n            th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device),\n        )\n\n        # Pad all_gather batches to be the maximum batch size.\n        batch_sizes = [x.item() for x in batch_sizes]\n        max_bs = max(batch_sizes)\n\n        timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes]\n        loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes]\n        dist.all_gather(timestep_batches, local_ts)\n        
dist.all_gather(loss_batches, local_losses)\n        timesteps = [\n            x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]\n        ]\n        losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]\n        self.update_with_all_losses(timesteps, losses)\n\n    @abstractmethod\n    def update_with_all_losses(self, ts, losses):\n        \"\"\"\n        Update the reweighting using losses from a model.\n\n        Sub-classes should override this method to update the reweighting\n        using losses from the model.\n\n        This method directly updates the reweighting without synchronizing\n        between workers. It is called by update_with_local_losses from all\n        ranks with identical arguments. Thus, it should have deterministic\n        behavior to maintain state across workers.\n\n        :param ts: a list of int timesteps.\n        :param losses: a list of float losses, one per timestep.\n        \"\"\"\n\n\nclass LossSecondMomentResampler(LossAwareSampler):\n    def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):\n        self.diffusion = diffusion\n        self.history_per_term = history_per_term\n        self.uniform_prob = uniform_prob\n        self._loss_history = np.zeros(\n            [diffusion.num_timesteps, history_per_term], dtype=np.float64\n        )\n        # np.int was removed in NumPy 1.24; use an explicit integer dtype.\n        self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int64)\n\n    def weights(self):\n        if not self._warmed_up():\n            return np.ones([self.diffusion.num_timesteps], dtype=np.float64)\n        weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))\n        weights /= np.sum(weights)\n        weights *= 1 - self.uniform_prob\n        weights += self.uniform_prob / len(weights)\n        return weights\n\n    def update_with_all_losses(self, ts, losses):\n        for t, loss in zip(ts, losses):\n            if self._loss_counts[t] == self.history_per_term:\n                # Shift out the oldest loss term.\n                self._loss_history[t, :-1] = self._loss_history[t, 1:]\n                self._loss_history[t, -1] = loss\n            else:\n                self._loss_history[t, self._loss_counts[t]] = loss\n                self._loss_counts[t] += 1\n\n    def _warmed_up(self):\n        return (self._loss_counts == self.history_per_term).all()\n"
  },
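  {
    "path": "examples/resample_demo.py",
    "content": "\"\"\"Illustrative usage sketch for diffusion/resample.py, not part of the\noriginal SemCity release. It shows how a schedule sampler draws timesteps and\nimportance weights for a training batch. `DummyDiffusion` is a stand-in\ndefined here only because the samplers read nothing but `num_timesteps`.\"\"\"\nfrom diffusion.resample import create_named_schedule_sampler\n\n\nclass DummyDiffusion:\n    # The samplers only read this attribute.\n    num_timesteps = 1000\n\n\nsampler = create_named_schedule_sampler(\"uniform\", DummyDiffusion())\n# Draw 4 timesteps; under the uniform sampler every importance weight is 1.\nt, w = sampler.sample(4, device=\"cpu\")\nprint(t.shape, w)  # torch.Size([4]) tensor([1., 1., 1., 1.])\n\n# The loss-aware sampler falls back to uniform weights until it has seen\n# `history_per_term` losses for every timestep.\nlsm = create_named_schedule_sampler(\"loss-second-moment\", DummyDiffusion())\nprint(lsm.weights()[:3])  # [1. 1. 1.]\n"
  },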
  {
    "path": "diffusion/respace.py",
    "content": "import numpy as np\nimport torch as th\n\nfrom diffusion.gaussian_diffusion import GaussianDiffusion\n\n\ndef space_timesteps(num_timesteps, section_counts):\n    \"\"\"\n    Create a list of timesteps to use from an original diffusion process,\n    given the number of timesteps we want to take from equally-sized portions\n    of the original process.\n\n    For example, if there's 300 timesteps and the section counts are [10,15,20]\n    then the first 100 timesteps are strided to be 10 timesteps, the second 100\n    are strided to be 15 timesteps, and the final 100 are strided to be 20.\n\n    If the stride is a string starting with \"ddim\", then the fixed striding\n    from the DDIM paper is used, and only one section is allowed.\n\n    :param num_timesteps: the number of diffusion steps in the original\n                          process to divide up.\n    :param section_counts: either a list of numbers, or a string containing\n                           comma-separated numbers, indicating the step count\n                           per section. As a special case, use \"ddimN\" where N\n                           is a number of steps to use the striding from the\n                           DDIM paper.\n    :return: a set of diffusion steps from the original process to use.\n    \"\"\"\n    if isinstance(section_counts, str):\n        if section_counts.startswith(\"ddim\"):\n            desired_count = int(section_counts[len(\"ddim\") :])\n            for i in range(1, num_timesteps):\n                if len(range(0, num_timesteps, i)) == desired_count:\n                    return set(range(0, num_timesteps, i))\n            raise ValueError(\n                f\"cannot create exactly {num_timesteps} steps with an integer stride\"\n            )\n        section_counts = [int(x) for x in section_counts.split(\",\")]\n    size_per = num_timesteps // len(section_counts)\n    extra = num_timesteps % len(section_counts)\n    start_idx = 0\n    all_steps = []\n    for i, section_count in enumerate(section_counts):\n        size = size_per + (1 if i < extra else 0)\n        if size < section_count:\n            raise ValueError(\n                f\"cannot divide section of {size} steps into {section_count}\"\n            )\n        if section_count <= 1:\n            frac_stride = 1\n        else:\n            frac_stride = (size - 1) / (section_count - 1)\n        cur_idx = 0.0\n        taken_steps = []\n        for _ in range(section_count):\n            taken_steps.append(start_idx + round(cur_idx))\n            cur_idx += frac_stride\n        all_steps += taken_steps\n        start_idx += size\n    return set(all_steps)\n\n\nclass SpacedDiffusion(GaussianDiffusion):\n    \"\"\"\n    A diffusion process which can skip steps in a base diffusion process.\n\n    :param use_timesteps: a collection (sequence or set) of timesteps from the\n                          original diffusion process to retain.\n    :param kwargs: the kwargs to create the base diffusion process.\n    \"\"\"\n\n    def __init__(self, use_timesteps, **kwargs):\n        self.use_timesteps = set(use_timesteps)\n        self.timestep_map = []\n        self.original_num_steps = len(kwargs[\"betas\"])\n\n        base_diffusion = GaussianDiffusion(**kwargs)  # pylint: disable=missing-kwoa\n        last_alpha_cumprod = 1.0\n        new_betas = []\n        for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):\n            if i in self.use_timesteps:\n                new_betas.append(1 - alpha_cumprod / 
last_alpha_cumprod)\n                last_alpha_cumprod = alpha_cumprod\n                self.timestep_map.append(i)\n        kwargs[\"betas\"] = np.array(new_betas)\n        super().__init__(**kwargs)\n\n    def p_mean_variance(\n        self, model, *args, **kwargs\n    ):  # pylint: disable=signature-differs\n        return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)\n\n    def training_losses(\n        self, model, *args, **kwargs\n    ):  # pylint: disable=signature-differs\n        return super().training_losses(self._wrap_model(model), *args, **kwargs)\n\n    def condition_mean(self, cond_fn, *args, **kwargs):\n        return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)\n\n    def condition_score(self, cond_fn, *args, **kwargs):\n        return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)\n\n    def _wrap_model(self, model):\n        if isinstance(model, _WrappedModel):\n            return model\n        return _WrappedModel(\n            model, self.timestep_map, self.rescale_timesteps, self.original_num_steps\n        )\n\n    def _scale_timesteps(self, t):\n        # Scaling is done by the wrapped model.\n        return t\n\n\nclass _WrappedModel:\n    def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps):\n        self.model = model\n        self.timestep_map = timestep_map\n        self.rescale_timesteps = rescale_timesteps\n        self.original_num_steps = original_num_steps\n\n    def __call__(self, x, ts, H, W, D, y):\n        map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)\n        new_ts = map_tensor[ts]\n        if self.rescale_timesteps:\n            new_ts = new_ts.float() * (1000.0 / self.original_num_steps)\n        return self.model(x, new_ts, H, W, D, y)\n"
  },
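  {
    "path": "examples/respace_demo.py",
    "content": "\"\"\"Illustrative usage sketch for diffusion/respace.py, not part of the\noriginal SemCity release. It reproduces the striding behaviour described in\nthe space_timesteps docstring.\"\"\"\nfrom diffusion.respace import space_timesteps\n\n# 300 original steps split into three 100-step sections, keeping 10/15/20\n# steps per section respectively.\nsteps = space_timesteps(300, [10, 15, 20])\nprint(len(steps))  # 45\n\n# DDIM-style fixed striding: keep 50 evenly spaced steps out of 1000.\nddim_steps = space_timesteps(1000, \"ddim50\")\nprint(len(ddim_steps), min(ddim_steps), max(ddim_steps))  # 50 0 980\n"
  },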
  {
    "path": "diffusion/scheduler.py",
    "content": "\ndef get_schedule_jump(t_T, jump_length, jump_n_sample):\n    jumps = {}\n    for j in range(0, t_T - jump_length, jump_length):\n        jumps[j] = jump_n_sample - 1\n\n    t = t_T\n    ts = []\n\n    while t >= 1:\n        t = t-1\n        ts.append(t)\n\n        if jumps.get(t, 0) > 0:\n            jumps[t] = jumps[t] - 1\n            for _ in range(jump_length):\n                t = t + 1\n                ts.append(t)\n\n    ts.append(-1)\n    _check_times(ts, -1, t_T)\n\n    return ts\n\ndef _check_times(times, t_0, t_T):\n    # Check end\n    assert times[0] > times[1], (times[0], times[1])\n\n    # Check beginning\n    assert times[-1] == -1, times[-1]\n\n    # Steplength = 1\n    for t_last, t_cur in zip(times[:-1], times[1:]):\n        assert abs(t_last - t_cur) == 1, (t_last, t_cur)\n\n    # Value range\n    for t in times:\n        assert t >= t_0, (t, t_0)\n        assert t <= t_T, (t, t_T)"
  },
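  {
    "path": "examples/scheduler_demo.py",
    "content": "\"\"\"Illustrative usage sketch for diffusion/scheduler.py, not part of the\noriginal SemCity release. It prints the head of a RePaint-style schedule,\nwhich descends from t_T and periodically jumps back up to resample.\"\"\"\nfrom diffusion.scheduler import get_schedule_jump\n\nts = get_schedule_jump(t_T=250, jump_length=10, jump_n_sample=10)\nprint(ts[:25])  # 249, 248, ..., 230, then a jump back up toward 240\nprint(ts[-1])   # -1 terminates the schedule\nprint(len(ts))  # total number of denoising steps actually run\n"
  },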
  {
    "path": "diffusion/script_util.py",
    "content": "from diffusion.unet_triplane import TriplaneUNetModel, BEVUNetModel\nfrom diffusion.respace import SpacedDiffusion, space_timesteps\nfrom diffusion import gaussian_diffusion as gd\n\ndef create_model_and_diffusion_from_args(args):\n    diffusion = create_gaussian_diffusion(args)\n    \n    if (args.diff_net_type == \"unet_bev\") or (args.diff_net_type == \"unet_voxel\"):\n        model = BEVUNetModel(args)\n    elif args.diff_net_type == \"unet_tri\":\n        model = TriplaneUNetModel(args)\n    return model, diffusion\n\ndef create_gaussian_diffusion(args):\n    steps = args.steps\n    predict_xstart = args.predict_xstart\n    learn_sigma = args.learn_sigma\n    timestep_respacing= args.timestep_respacing\n    \n    sigma_small=False\n    noise_schedule=\"linear\"\n    use_kl=False\n    rescale_timesteps=False\n    rescale_learned_sigmas=False\n    \n    betas = gd.get_named_beta_schedule(noise_schedule, steps)\n    if use_kl:\n        loss_type = gd.LossType.RESCALED_KL\n    elif rescale_learned_sigmas:\n        loss_type = gd.LossType.RESCALED_MSE\n    else:\n        loss_type = gd.LossType.MSE\n    if not timestep_respacing:\n        timestep_respacing = [steps]\n        \n    return SpacedDiffusion(\n        use_timesteps=space_timesteps(steps, timestep_respacing),\n        args=args,\n        betas=betas,\n        model_mean_type=(\n            gd.ModelMeanType.EPSILON if not predict_xstart else gd.ModelMeanType.START_X\n        ),\n        model_var_type=(\n            (\n                gd.ModelVarType.FIXED_LARGE\n                if not sigma_small\n                else gd.ModelVarType.FIXED_SMALL\n            )\n            if not learn_sigma\n            else gd.ModelVarType.LEARNED_RANGE\n        ),\n        loss_type=loss_type,\n        rescale_timesteps=rescale_timesteps,\n    )\n"
  },
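  {
    "path": "examples/script_util_demo.py",
    "content": "\"\"\"Illustrative sketch for diffusion/script_util.py, not part of the original\nSemCity release. It shows the two ingredients create_gaussian_diffusion\ncombines: a named beta schedule and a respaced subset of timesteps.\"\"\"\nfrom diffusion import gaussian_diffusion as gd\nfrom diffusion.respace import space_timesteps\n\nbetas = gd.get_named_beta_schedule(\"linear\", 1000)\nprint(betas.shape)  # (1000,)\n\n# With --timestep_respacing \"ddim100\", SpacedDiffusion keeps every 10th step.\nprint(space_timesteps(1000, \"ddim100\") == set(range(0, 1000, 10)))  # True\n"
  },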
  {
    "path": "diffusion/train_util.py",
    "content": "import copy\nimport functools\nimport os\nimport blobfile as bf\nimport torch as th\nfrom torch.optim import AdamW\nfrom tensorboardX import SummaryWriter\n\nfrom diffusion import logger\nfrom diffusion.fp16_util import MixedPrecisionTrainer\nfrom diffusion.nn import update_ema\nfrom diffusion.resample import LossAwareSampler, UniformSampler\nfrom utils.common_util import draw_scalar_field2D\nfrom utils import dist_util\n\n# For ImageNet experiments, this was a good default value.\n# We found that the lg_loss_scale quickly climbed to\n# 20-21 within the first ~1K steps of training.\nINITIAL_LOG_LOSS_SCALE = 20.0\n\n\nclass TrainLoop:\n    def __init__(\n        self,\n        *,\n        diffusion_net,\n        triplane_loss_type,\n        timestep_respacing,\n        training_step,\n        model,\n        diffusion,\n        data,\n        val_data,\n        ssc_refine,\n        batch_size,\n        microbatch,\n        lr,\n        ema_rate,\n        log_interval,\n        save_interval,\n        resume_checkpoint,\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n        schedule_sampler=None,\n        weight_decay=0.0,\n        lr_anneal_steps=0,\n    ):\n        self.triplane_loss_type = triplane_loss_type\n        self.model = model\n        self.diffusion = diffusion\n        self.data = data\n        self.val_data = val_data\n        self.ssc_refine = ssc_refine\n        self.training_step = training_step\n        self.timestep_respacing = timestep_respacing\n        self.diffusion_net = diffusion_net\n                      \n        self.batch_size = batch_size\n        self.microbatch = microbatch if microbatch > 0 else batch_size\n        self.lr = lr\n        self.ema_rate = (\n            [ema_rate]\n            if isinstance(ema_rate, float)\n            else [float(x) for x in ema_rate.split(\",\")]\n        )\n        self.log_interval = log_interval\n        self.save_interval = save_interval\n        self.resume_checkpoint = resume_checkpoint\n        self.use_fp16 = use_fp16\n        self.fp16_scale_growth = fp16_scale_growth\n        self.schedule_sampler = schedule_sampler or UniformSampler(diffusion)\n        self.weight_decay = weight_decay\n        self.lr_anneal_steps = lr_anneal_steps\n\n        tblog_dir = os.path.join(logger.get_current().get_dir(), \"tblog\")\n        self.tb = SummaryWriter(tblog_dir)\n\n        self.step = 0\n        self.resume_step = 0\n        self.global_batch = self.batch_size # * dist.get_world_size()\n\n        self.sync_cuda = th.cuda.is_available()\n\n        self._load_and_sync_parameters()\n        self.mp_trainer = MixedPrecisionTrainer(\n            model=self.model,\n            use_fp16=self.use_fp16,\n            fp16_scale_growth=fp16_scale_growth,\n        )\n\n        self.opt = AdamW(\n            self.mp_trainer.master_params, lr=self.lr, weight_decay=self.weight_decay\n        )\n        if self.resume_step:\n            self._load_optimizer_state()\n            # Model was resumed, either due to a restart or a checkpoint\n            # being specified at the command line.\n            self.ema_params = [\n                self._load_ema_parameters(rate) for rate in self.ema_rate\n            ]\n        else:\n            self.ema_params = [\n                copy.deepcopy(self.mp_trainer.master_params)\n                for _ in range(len(self.ema_rate))\n            ]\n\n        self.use_ddp = False\n        self.ddp_model = self.model\n\n    def _load_and_sync_parameters(self):\n        
resume_checkpoint = find_resume_checkpoint() or self.resume_checkpoint\n\n        if resume_checkpoint:\n            self.resume_step = parse_resume_step_from_filename(resume_checkpoint)\n            # if dist.get_rank() == 0:\n            logger.log(f\"loading model from checkpoint: {resume_checkpoint}...\")\n            self.model.load_state_dict(\n                dist_util.load_state_dict(\n                    resume_checkpoint, map_location=dist_util.dev()\n                )\n            )\n\n        # dist_util.sync_params(self.model.parameters())\n\n    def _load_ema_parameters(self, rate):\n        ema_params = copy.deepcopy(self.mp_trainer.master_params)\n\n        main_checkpoint = find_resume_checkpoint() or self.resume_checkpoint\n        ema_checkpoint = find_ema_checkpoint(main_checkpoint, self.resume_step, rate)\n        if ema_checkpoint:\n            # if dist.get_rank() == 0:\n            logger.log(f\"loading EMA from checkpoint: {ema_checkpoint}...\")\n            state_dict = dist_util.load_state_dict(\n                ema_checkpoint, map_location=dist_util.dev()\n            )\n            ema_params = self.mp_trainer.state_dict_to_master_params(state_dict)\n\n        # dist_util.sync_params(ema_params)\n        return ema_params\n\n    def _load_optimizer_state(self):\n        main_checkpoint = find_resume_checkpoint() or self.resume_checkpoint\n        opt_checkpoint = bf.join(\n            bf.dirname(main_checkpoint), f\"opt{self.resume_step:06}.pt\"\n        )\n        if bf.exists(opt_checkpoint):\n            logger.log(f\"loading optimizer state from checkpoint: {opt_checkpoint}\")\n            state_dict = dist_util.load_state_dict(\n                opt_checkpoint, map_location=dist_util.dev()\n            )\n            self.opt.load_state_dict(state_dict)\n\n    def run_loop(self):\n        while (\n            not self.lr_anneal_steps\n            or self.step + self.resume_step < self.lr_anneal_steps\n        ):\n            batch, cond = next(self.data)\n            self.run_step(batch, cond)\n            if self.step % self.log_interval == 0 :\n                logger.dumpkvs()\n            if self.step % self.save_interval == 0 and self.step > 0:\n                self.save()\n                # Run for a finite amount of time in integration tests.\n                if os.environ.get(\"DIFFUSION_TRAINING_TEST\", \"\") and self.step > 0:\n                    return\n            self.step += 1\n            \n        if self.diffusion_net != 'unet_voxel':\n            # Save the last checkpoint if it wasn't already saved.\n            if (self.step - 1) % self.save_interval != 0:\n                self.save()\n\n    def run_step(self, batch, cond):\n        self.forward_backward(batch, cond)\n        took_step = self.mp_trainer.optimize(self.opt)\n        if took_step:\n            self._update_ema()\n        self._anneal_lr()\n        self.log_step()\n        \n        if self.diffusion_net != 'unet_voxel':\n            if self.step % self.log_interval == 0:\n                self._sample_and_visualize()\n\n    def _sample_and_visualize(self):\n        print(\"Sampling and visualizing...\")\n        self.ddp_model.eval()\n\n        batch, cond = next(self.val_data)\n\n        _shape = [len(cond['path'])] + list(batch.shape[1:])\n        with th.no_grad():\n            if self.ssc_refine:\n                large_T = th.tensor([self.training_step-1] * _shape[0], device=dist_util.dev())\n                batch = batch.to(dist_util.dev())\n                m_t = 
self.diffusion.q_sample(batch, large_T)\n                noise = self.ddp_model(m_t, large_T, cond['H'], cond['W'], cond['D'], cond['y']).to(dist_util.dev())\n            else : noise = None\n            sample = self.diffusion.p_sample_loop(self.ddp_model, _shape, noise = noise, progress=True, model_kwargs=cond, clip_denoised=True)\n        sample = sample.detach().cpu().numpy()\n        feat_dim = sample.shape[1]\n        \n        for i in range(sample.shape[0]):\n            for c in range(feat_dim//4):\n                fig = draw_scalar_field2D(sample[i, c*4])\n                self.tb.add_figure(f\"sample{i}/channel{c*4}\", fig, global_step=self.step)\n            if self.ssc_refine :\n                for c in range(feat_dim//4):\n                    fig = draw_scalar_field2D(cond['y'][i, c*4].detach().cpu().numpy())\n                    self.tb.add_figure(f\"sample{i}/condition{c*4}\", fig, global_step=self.step)\n            for c in range(feat_dim//4):\n                fig = draw_scalar_field2D(batch[i, c*4].detach().cpu().numpy())\n                self.tb.add_figure(f\"sample{i}/gt{c*4}\", fig, global_step=self.step)\n                       \n        self.ddp_model.train()\n\n\n    def forward_backward(self, batch, cond):\n        self.mp_trainer.zero_grad()\n        for i in range(0, batch.shape[0], self.microbatch):\n            # Eliminates the microbatch feature\n            assert i == 0\n            assert self.microbatch == self.batch_size\n            micro = batch.to(dist_util.dev())\n            micro_cond = {}\n\n            for k, v in cond.items():\n                if (k != 'path'):\n                    micro_cond[k] = v.to(dist_util.dev())\n                else :\n                    micro_cond[k] = [i for i in v]\n                                \n            last_batch = (i + self.microbatch) >= batch.shape[0]\n            t, weights = self.schedule_sampler.sample(micro.shape[0], dist_util.dev())\n\n            compute_losses = functools.partial(\n                self.diffusion.training_losses,\n                self.ddp_model,\n                micro,\n                t,\n                model_kwargs=micro_cond,)\n\n            if last_batch or not self.use_ddp:\n                losses = compute_losses()\n            else:\n                with self.ddp_model.no_sync():\n                    losses = compute_losses()\n\n            if isinstance(self.schedule_sampler, LossAwareSampler):\n                self.schedule_sampler.update_with_local_losses(\n                    t, losses[\"loss\"].detach()\n                )\n\n            loss = (losses[\"loss\"] * weights).mean()\n            self.mp_trainer.backward(loss)\n\n            if self.step % 10 == 0:\n                self.log_loss_dict(\n                    self.diffusion, t, {k: v * weights for k, v in losses.items()}\n                )\n\n    def _update_ema(self):\n        for rate, params in zip(self.ema_rate, self.ema_params):\n            update_ema(params, self.mp_trainer.master_params, rate=rate)\n\n    def _anneal_lr(self):\n        if not self.lr_anneal_steps:\n            return\n        frac_done = (self.step + self.resume_step) / self.lr_anneal_steps\n        lr = self.lr * (1 - frac_done)\n        for param_group in self.opt.param_groups:\n            param_group[\"lr\"] = lr\n\n    def log_step(self):\n        logger.logkv(\"step\", self.step + self.resume_step)\n        logger.logkv(\"samples\", (self.step + self.resume_step + 1) * self.global_batch)\n        logger.logkv(\"lr\", 
self.opt.param_groups[0][\"lr\"])\n        if self.step % 10 == 0:\n            self.tb.add_scalar(\"step\", self.step + self.resume_step, global_step=self.step)\n            self.tb.add_scalar(\"samples\", (self.step + self.resume_step + 1) * self.global_batch, global_step=self.step)\n            self.tb.add_scalar(\"lr\", self.opt.param_groups[0][\"lr\"], global_step=self.step)\n\n    def save(self):\n        def save_checkpoint(rate, params):\n            state_dict = self.mp_trainer.master_params_to_state_dict(params)\n            # if dist.get_rank() == 0:\n            logger.log(f\"saving model {rate}...\")\n            if not rate:\n                filename = f\"model{(self.step+self.resume_step):06d}.pt\"\n            else:\n                filename = f\"ema_{rate}_{(self.step+self.resume_step):06d}.pt\"\n            with bf.BlobFile(bf.join(get_blob_logdir(), filename), \"wb\") as f:\n                th.save(state_dict, f)\n\n        # save_checkpoint(0, self.mp_trainer.master_params)\n        for rate, params in zip(self.ema_rate, self.ema_params):\n            save_checkpoint(rate, params)\n\n        # if dist.get_rank() == 0:\n        with bf.BlobFile(\n            bf.join(get_blob_logdir(), f\"opt{(self.step+self.resume_step):06d}.pt\"),\n            \"wb\",\n        ) as f:\n            th.save(self.opt.state_dict(), f)\n\n        # dist.barrier()\n\n    def log_loss_dict(self, diffusion, ts, losses):\n        for key, values in losses.items():\n            loss_dict = {}\n            logger.logkv_mean(key, values.mean().item())\n            loss_dict[f\"{key}_mean\"] = values.mean().item()\n            # Log the quantiles (four quartiles, in particular).\n            for sub_t, sub_loss in zip(ts.cpu().numpy(), values.detach().cpu().numpy()):\n                quartile = int(4 * sub_t / diffusion.num_timesteps)\n                logger.logkv_mean(f\"{key}_q{quartile}\", sub_loss)\n                loss_dict[f\"{key}_q{quartile}\"] = sub_loss\n            self.tb.add_scalars(f\"{key}\", loss_dict, global_step=self.step)\n\n\ndef parse_resume_step_from_filename(filename):\n    \"\"\"\n    Parse filenames of the form path/to/modelNNNNNN.pt, where NNNNNN is the\n    checkpoint's number of steps.\n    \"\"\"\n    split = filename.split(\"_\")[-1].split(\".\")[0]\n    return int(split)\n\n\ndef get_blob_logdir():\n    # You can change this to be a separate path to save checkpoints to\n    # a blobstore or some external drive.\n    return logger.get_dir()\n\n\ndef find_resume_checkpoint():\n    # On your infrastructure, you may want to override this to automatically\n    # discover the latest checkpoint on your blob storage, etc.\n    return None\n\n\ndef find_ema_checkpoint(main_checkpoint, step, rate):\n    if main_checkpoint is None:\n        return None\n    filename = f\"ema_{rate}_{(step):06d}.pt\"\n    path = bf.join(bf.dirname(main_checkpoint), filename)\n    if bf.exists(path):\n        return path\n    return None\n\n"
  },
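  {
    "path": "examples/train_util_demo.py",
    "content": "\"\"\"Illustrative usage sketch for diffusion/train_util.py, not part of the\noriginal SemCity release. TrainLoop.save() names checkpoints\nema_{rate}_{step:06d}.pt and opt{step:06d}.pt; this shows how the resume step\nis recovered from such a filename (the path itself is made up).\"\"\"\nfrom diffusion.train_util import parse_resume_step_from_filename\n\n# The step count is the last \"_\"-separated field, minus the extension.\nprint(parse_resume_step_from_filename(\"exp/diff/ema_0.9999_050000.pt\"))  # 50000\n"
  },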
  {
    "path": "diffusion/triplane_util.py",
    "content": "import torch\nimport torch.nn.functional as F\nimport numpy as np\nfrom utils.parser_util import get_gen_args\nfrom utils.utils import make_query\nfrom diffusion.script_util import create_model_and_diffusion_from_args\nfrom encoding.networks import AutoEncoderGroupSkip\nfrom dataset.path_manager import *\nfrom diffusion.nn import decompose_featmaps, compose_featmaps\n\ndef augment(triplane, p, tri_size=(128,128,32)):\n    H, W, D = tri_size\n    triplane = torch.from_numpy(triplane).float()\n    feat_xy, feat_xz, feat_zy = decompose_featmaps(triplane,tri_size, False)\n    if p == 0: # 좌우 뒤집기\n        feat_xy = torch.flip(feat_xy, [2])\n        feat_zy = torch.flip(feat_zy, [2])\n    elif p == 1: # 상하 뒤집기\n        feat_xy = torch.flip(feat_xy, [1])\n        feat_xz = torch.flip(feat_xz, [1])\n    elif p == 2: # 상하좌우 뒤집기\n        feat_xy = torch.flip(feat_xy, [2])\n        feat_zy = torch.flip(feat_zy, [2])\n        feat_xy = torch.flip(feat_xy, [1])\n        feat_xz = torch.flip(feat_xz, [1])\n    elif p == 3: \n        feat_xy += torch.randn_like(feat_xy) * 0.05\n        feat_xz += torch.randn_like(feat_xz) * 0.05\n        feat_zy += torch.randn_like(feat_zy) * 0.05\n    elif p == 4 :# crop&resize\n        size = torch.randint(0, 3, (1,)).item()\n        s = 80 + size*16\n        region = 128-s\n        x, y = torch.randint(0, region, (2,)).tolist()\n        feat_xy = feat_xy[:, y:y+s, x:x+s]\n        feat_xz = feat_xz[:, y:y+s, :]\n        feat_zy = feat_zy[:, :, x:x+s]\n        feat_xy = F.interpolate(feat_xy.unsqueeze(0).float(), size=(H, W), mode='bilinear').squeeze(0)\n        feat_xz = F.interpolate(feat_xz.unsqueeze(0).float(), size=(H, D), mode='bilinear').squeeze(0)\n        feat_zy = F.interpolate(feat_zy.unsqueeze(0).float(), size=(D, W), mode='bilinear').squeeze(0)\n        \n    triplane, _ = compose_featmaps(feat_xy, feat_xz, feat_zy, tri_size, False)\n    return np.array(triplane)\n\ndef build_sampling_model(args):\n    H, W, D, learning_map, learning_map_inv, class_name, grid_size, tri_size, num_class, max_points= get_gen_args(args)\n    if args.dataset == 'kitti' :\n        args.data_path=SEMKITTI_DATA_PATH\n        args.yaml_path=SEMKITTI_YAML_PATH\n    elif args.dataset == 'carla' :\n        args.data_path=CARLA_DATA_PATH\n        args.yaml_path=CARLA_YAML_PATH\n    args.num_class = num_class\n\n    DIFF_PATH = SSC_DIFF_PATH if args.ssc_refine else GEN_DIFF_PATH\n    model, diffusion = create_model_and_diffusion_from_args(args)\n    model.load_state_dict(torch.load(DIFF_PATH, map_location=\"cpu\"))\n    model = model.cuda().eval()\n    \n    ae = AutoEncoderGroupSkip(args)\n    ae.load_state_dict(torch.load(AE_PATH, map_location='cpu')['model'])\n    ae = ae.cuda().eval()\n\n    sample_fn = (diffusion.p_sample_loop if not args.repaint else diffusion.p_sample_loop_scene_repaint)    \n    C = args.geo_feat_channels\n    coords, query = make_query(grid_size)\n    coords, query = coords.cuda(), query.cuda()    \n    out_shape = [args.batch_size, C, H + D, W + D]\n\n    return model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, class_name, args"
  },
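  {
    "path": "examples/triplane_augment_demo.py",
    "content": "\"\"\"Illustrative usage sketch for diffusion/triplane_util.py, not part of the\noriginal SemCity release. It applies the flip and noise augmentations to a\nrandom composed triplane. The (C, H + D, W + D) = (8, 160, 160) input shape\nand channel count are assumptions based on the plane-layout comments in\ndiffusion/unet_triplane.py.\"\"\"\nimport numpy as np\nfrom diffusion.triplane_util import augment\n\ntri = np.random.randn(8, 160, 160).astype(np.float32)\nflipped = augment(tri, p=0)  # p=0: horizontal (left-right) flip\nnoised = augment(tri, p=3)   # p=3: additive Gaussian noise (std 0.05)\nprint(flipped.shape, noised.shape)  # both keep the composed shape\n"
  },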
  {
    "path": "diffusion/unet_triplane.py",
    "content": "from abc import abstractmethod\nimport torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom diffusion.fp16_util import convert_module_to_f16, convert_module_to_f32\nfrom diffusion.nn import (\n    checkpoint,\n    linear,\n    SiLU,\n    zero_module,\n    normalization,\n    timestep_embedding,\n    compose_featmaps, decompose_featmaps\n)\n\n\nclass TriplaneConv(nn.Module):\n    def __init__(self, channels, out_channels, kernel_size, padding, is_rollout=True) -> None:\n        super().__init__()\n        in_channels = channels * 3 if is_rollout else channels\n        self.is_rollout = is_rollout\n\n        self.conv_xy = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)\n        self.conv_xz = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)\n        self.conv_yz = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)\n\n    def forward(self, featmaps):\n        # tpl: [B, C, H + D, W + D]\n        tpl_xy, tpl_xz, tpl_yz = featmaps\n        H, W = tpl_xy.shape[-2:]\n        D = tpl_xz.shape[-1]\n\n        if self.is_rollout:\n            tpl_xy_h = th.cat([tpl_xy,\n                            th.mean(tpl_yz, dim=-1, keepdim=True).transpose(-1, -2).expand_as(tpl_xy),\n                            th.mean(tpl_xz, dim=-1, keepdim=True).expand_as(tpl_xy)], dim=1) # [B, C * 3, H, W]\n            tpl_xz_h = th.cat([tpl_xz,\n                                th.mean(tpl_xy, dim=-1, keepdim=True).expand_as(tpl_xz),\n                                th.mean(tpl_yz, dim=-2, keepdim=True).expand_as(tpl_xz)], dim=1) # [B, C * 3, H, D]\n            tpl_yz_h = th.cat([tpl_yz,\n                            th.mean(tpl_xy, dim=-2, keepdim=True).transpose(-1, -2).expand_as(tpl_yz),\n                            th.mean(tpl_xz, dim=-2, keepdim=True).expand_as(tpl_yz)], dim=1) # [B, C * 3, W, D]\n        else:\n            tpl_xy_h = tpl_xy\n            tpl_xz_h = tpl_xz\n            tpl_yz_h = tpl_yz\n        \n        assert tpl_xy_h.shape[-2] == H and tpl_xy_h.shape[-1] == W\n        assert tpl_xz_h.shape[-2] == H and tpl_xz_h.shape[-1] == D\n        assert tpl_yz_h.shape[-2] == W and tpl_yz_h.shape[-1] == D\n\n        if tpl_xy_h.dtype != [param.dtype for param in self.conv_xy.parameters()][0]:\n            if tpl_xy_h.dtype == th.float16:\n                tpl_xy_h = self.conv_xy(tpl_xy_h.float())\n                tpl_xz_h = self.conv_xz(tpl_xz_h.float())\n                tpl_yz_h = self.conv_yz(tpl_yz_h.float())\n            else:\n                tpl_xy_h = self.conv_xy(tpl_xy_h.half())\n                tpl_xz_h = self.conv_xz(tpl_xz_h.half())\n                tpl_yz_h = self.conv_yz(tpl_yz_h.half())\n        else:\n            tpl_xy_h = self.conv_xy(tpl_xy_h)\n            tpl_xz_h = self.conv_xz(tpl_xz_h)\n            tpl_yz_h = self.conv_yz(tpl_yz_h)\n\n        return (tpl_xy_h, tpl_xz_h, tpl_yz_h)\n\n\nclass TriplaneNorm(nn.Module):\n    def __init__(self, channels) -> None:\n        super().__init__()\n        self.norm_xy = normalization(channels)\n        self.norm_xz = normalization(channels)\n        self.norm_yz = normalization(channels)\n\n    def forward(self, featmaps):\n        # tpl: [B, C, H + D, W + D]\n        tpl_xy, tpl_xz, tpl_yz = featmaps\n        H, W = tpl_xy.shape[-2:]\n        D = tpl_xz.shape[-1]\n\n        tpl_xy_h = self.norm_xy(tpl_xy) # [B, C, H, W]\n        tpl_xz_h = self.norm_xz(tpl_xz) # [B, C, H, D]\n        tpl_yz_h = self.norm_yz(tpl_yz) # [B, C, W, D]\n\n        assert 
tpl_xy_h.shape[-2] == H and tpl_xy_h.shape[-1] == W\n        assert tpl_xz_h.shape[-2] == H and tpl_xz_h.shape[-1] == D\n        assert tpl_yz_h.shape[-2] == W and tpl_yz_h.shape[-1] == D\n\n        return (tpl_xy_h, tpl_xz_h, tpl_yz_h)\n    \n\nclass TriplaneSiLU(nn.Module):\n    def __init__(self) -> None:\n        super().__init__()\n        self.silu = SiLU()\n\n    def forward(self, featmaps):\n        # tpl: [B, C, H + D, W + D]\n        tpl_xy, tpl_xz, tpl_yz = featmaps\n        return (self.silu(tpl_xy), self.silu(tpl_xz), self.silu(tpl_yz))\n\nclass TriplaneUpsample2x(nn.Module):\n    def __init__(self, tri_z_down, conv_up, channels=None) -> None:\n        super().__init__()\n        self.tri_z_down = tri_z_down\n        self.conv_up = conv_up\n        if conv_up :\n            if self.tri_z_down:\n                self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n                self.conv_xz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n                self.conv_yz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n            else :\n                self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n                self.conv_xz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=(1,0), stride=(2, 1))\n                self.conv_yz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=(1,0), stride=(2, 1))\n\n    def forward(self, featmaps):\n        # tpl: [B, C, H + D, W + D]\n        tpl_xy, tpl_xz, tpl_yz = featmaps\n        H, W = tpl_xy.shape[-2:]\n        D = tpl_xz.shape[-1]\n        if self.conv_up:\n            tpl_xy = self.conv_xy(tpl_xy)\n            tpl_xz = self.conv_xz(tpl_xz)\n            tpl_yz = self.conv_yz(tpl_yz)\n        else : \n            tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='bilinear', align_corners=False)\n            if self.tri_z_down:\n                tpl_xz = F.interpolate(tpl_xz, scale_factor=2, mode='bilinear', align_corners=False)\n                tpl_yz = F.interpolate(tpl_yz, scale_factor=2, mode='bilinear', align_corners=False)\n            else :    \n                tpl_xz = F.interpolate(tpl_xz, scale_factor=(2, 1), mode='bilinear', align_corners=False)\n                tpl_yz = F.interpolate(tpl_yz, scale_factor=(2, 1), mode='bilinear', align_corners=False)\n                \n        return (tpl_xy, tpl_xz, tpl_yz)\n\n\nclass TriplaneDownsample2x(nn.Module):\n    def __init__(self, tri_z_down, conv_down, channels=None) -> None:\n        super().__init__()\n        self.tri_z_down = tri_z_down\n        self.conv_down = conv_down\n\n        if conv_down :\n            if self.tri_z_down:\n                self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n                self.conv_xz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n                self.conv_yz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n            else : \n                self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n                self.conv_xz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=(2, 1), padding_mode='replicate')\n                self.conv_yz = 
nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=(2, 1), padding_mode='replicate')\n                \n    def forward(self, featmaps):\n        # tpl: [B, C, H + D, W + D]\n        tpl_xy, tpl_xz, tpl_yz = featmaps\n        H, W = tpl_xy.shape[-2:]\n        D = tpl_xz.shape[-1]\n        if self.conv_down:\n            tpl_xy = self.conv_xy(tpl_xy)\n            tpl_xz = self.conv_xz(tpl_xz)\n            tpl_yz = self.conv_yz(tpl_yz)\n        else : \n            tpl_xy = F.avg_pool2d(tpl_xy, kernel_size=2, stride=2)\n            if self.tri_z_down:\n                tpl_xz = F.avg_pool2d(tpl_xz, kernel_size=2, stride=2)\n                tpl_yz = F.avg_pool2d(tpl_yz, kernel_size=2, stride=2)\n            else : \n                tpl_xz = F.avg_pool2d(tpl_xz, kernel_size=(2, 1), stride=(2, 1))\n                tpl_yz = F.avg_pool2d(tpl_yz, kernel_size=(2, 1), stride=(2, 1))\n        return (tpl_xy, tpl_xz, tpl_yz)\n\n\nclass BeVplaneNorm(nn.Module):\n    def __init__(self, channels) -> None:\n        super().__init__()\n        self.norm_xy = normalization(channels)\n\n    def forward(self, tpl_xy):\n        tpl_xy_h = self.norm_xy(tpl_xy) # [B, C, H, W]\n        return tpl_xy_h\n    \nclass BeVplaneSiLU(nn.Module):\n    def __init__(self) -> None:\n        super().__init__()\n        self.silu = SiLU()\n\n    def forward(self, tpl_xy):\n        # tpl: [B, C, H + D, W + D]\n        return self.silu(tpl_xy)\n    \nclass BeVplaneUpsample2x(nn.Module):\n    def __init__(self, tri_z_down, conv_up, channels=None, voxelfea=False) -> None:\n        super().__init__()\n        self.tri_z_down = tri_z_down\n        self.conv_up = conv_up\n        self.voxelfea = voxelfea\n        if conv_up :\n            if voxelfea:\n                self.conv_xy = nn.ConvTranspose3d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n            else : \n                self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2)\n\n    def forward(self, tpl_xy):\n        # tpl: [B, C, H + D, W + D]\n        if self.conv_up:\n            tpl_xy = self.conv_xy(tpl_xy)\n        else : \n            if self.voxelfea:\n                tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='trilinear', align_corners=False)\n            else :\n                tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='bilinear', align_corners=False)\n             \n        return tpl_xy\n\nclass BeVplaneDownsample2x(nn.Module):\n    def __init__(self, tri_z_down, conv_down, channels=None, voxelfea=False) -> None:\n        super().__init__()\n        self.tri_z_down = tri_z_down\n        self.conv_down = conv_down\n        self.voxelfea = voxelfea\n        if conv_down :\n            if voxelfea:\n                self.conv_xy = nn.Conv3d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n            else :\n                self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate')\n                \n    def forward(self, tpl_xy):\n        # tpl: [B, C, H + D, W + D]\n        if self.conv_down:\n            tpl_xy = self.conv_xy(tpl_xy)\n        else : \n            if self.voxelfea :\n                tpl_xy = F.avg_pool3d(tpl_xy, kernel_size=2, stride=2)\n            else :\n                tpl_xy = F.avg_pool2d(tpl_xy, kernel_size=2, stride=2)\n        return tpl_xy\n    \nclass BeVplaneConv(nn.Module):\n    def __init__(self, channels, out_channels, kernel_size, 
padding, voxelfea=False) -> None:\n        super().__init__()\n        in_channels = channels \n        if voxelfea : \n            self.conv_xy = nn.Conv3d(in_channels, out_channels, kernel_size, padding=padding)\n        else:\n            self.conv_xy = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)\n \n    def forward(self, tpl_xy):\n        # tpl: [B, C, H + D, W + D]  \n        tpl_xy_h = self.conv_xy(tpl_xy)\n    \n        return tpl_xy_h\n\nclass TimestepBlock(nn.Module):\n    \"\"\"\n    Any module where forward() takes timestep embeddings as a second argument.\n    \"\"\"\n\n    @abstractmethod\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the module to `x` given `emb` timestep embeddings.\n        \"\"\"\n\n\nclass TimestepEmbedSequential(nn.Sequential, TimestepBlock):\n    \"\"\"\n    A sequential module that passes timestep embeddings to the children that\n    support it as an extra input.\n    \"\"\"\n\n    def forward(self, x, emb):\n        for layer in self:\n            if isinstance(layer, TimestepBlock):\n                x = layer(x, emb)\n            else:\n                x = layer(x)\n        return x\n\nclass TriplaneResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n\n    :param channels: the number of input channels.\n    :param emb_channels: the number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param use_checkpoint: if True, use gradient checkpointing on this module.\n    :param up: if True, use this block for upsampling.\n    :param down: if True, use this block for downsampling.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        out_channels=None,\n        level=(128,128,16),\n        use_conv=False,\n        use_scale_shift_norm=True,\n        use_checkpoint=False,\n        up=False,\n        down=False,\n        is_rollout=True,\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n        self.level=level\n        \n        self.in_layers = nn.Sequential(\n            TriplaneNorm(channels),\n            TriplaneSiLU(),\n            TriplaneConv(channels, self.out_channels, 3, padding=1, is_rollout=is_rollout),\n        )\n\n        self.updown = up or down\n\n        if up:\n            self.h_upd = TriplaneUpsample2x()\n            self.x_upd = TriplaneUpsample2x()\n        elif down:\n            self.h_upd = TriplaneDownsample2x()\n            self.x_upd = TriplaneDownsample2x()\n        else:\n            self.h_upd = self.x_upd = nn.Identity()\n\n        self.emb_layers = nn.Sequential(\n            SiLU(),\n            linear(\n                emb_channels,\n                2 * self.out_channels if use_scale_shift_norm else self.out_channels,\n            ),\n        )\n        self.out_layers = nn.Sequential(\n            TriplaneNorm(self.out_channels),\n            TriplaneSiLU(),\n            # 
nn.Dropout(p=dropout),\n            zero_module(\n                TriplaneConv(self.out_channels, self.out_channels, 3, padding=1, is_rollout=is_rollout)\n            ),\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = TriplaneConv(\n                channels, self.out_channels, 3, padding=1, is_rollout=False\n            )\n        else:\n            self.skip_connection = TriplaneConv(channels, self.out_channels, 1, padding=0, is_rollout=False)\n\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        return checkpoint(\n            self._forward, (x, emb), self.parameters(), self.use_checkpoint\n        )\n\n    def _forward(self, x, emb):\n        # x: (h_xy, h_xz, h_yz)\n        h = self.in_layers(x)\n\n        emb_out = self.emb_layers(emb).type(h[0].dtype)\n        while len(emb_out.shape) < len(h[0].shape):\n            emb_out = emb_out[..., None]\n\n        if self.use_scale_shift_norm:\n            out_norm, out_silu, out_conv = self.out_layers[0], self.out_layers[1], self.out_layers[2]\n            scale, shift = th.chunk(emb_out, 2, dim=1)\n\n            h = out_norm(h)\n            h_xy, h_xz, h_yz = h\n            h_xy = h_xy * (1 + scale) + shift\n            h_xz = h_xz * (1 + scale) + shift\n            h_yz = h_yz * (1 + scale) + shift\n            h = (h_xy, h_xz, h_yz)\n            # h = out_norm(h) * (1 + scale) + shift\n\n            h = out_silu(h)\n            h = out_conv(h)\n        else:\n            h_xy, h_xz, h_yz = h\n            h_xy = h_xy + emb_out\n            h_xz = h_xz + emb_out\n            h_yz = h_yz + emb_out\n            h = (h_xy, h_xz, h_yz)\n            # h = h + emb_out\n\n            h = self.out_layers(h)\n        \n        x_skip = self.skip_connection(x)\n        x_skip_xy, x_skip_xz, x_skip_yz = x_skip\n        h_xy, h_xz, h_yz = h\n        return (h_xy + x_skip_xy, h_xz + x_skip_xz, h_yz + x_skip_yz)\n        # return self.skip_connection(x) + h\n\n\nclass BeVplaneResBlock(TimestepBlock):\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        out_channels=None,\n        level=(128,128,16),\n        use_conv=False,\n        use_scale_shift_norm=True,\n        use_checkpoint=False,\n        up=False,\n        down=False,\n        voxelfea=False,\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n        \n        self.in_layers = nn.Sequential(\n            BeVplaneNorm(channels),\n            BeVplaneSiLU(),\n            BeVplaneConv(channels, self.out_channels, 3, padding=1, voxelfea=voxelfea),\n        )\n\n        self.updown = up or down\n\n        self.h_upd = self.x_upd = nn.Identity()\n\n        self.emb_layers = nn.Sequential(\n            SiLU(),\n            linear(\n                emb_channels,\n                2 * self.out_channels if use_scale_shift_norm else self.out_channels,\n            ),\n        )\n        self.out_layers = nn.Sequential(\n           
 BeVplaneNorm(self.out_channels),\n            BeVplaneSiLU(),\n            # nn.Dropout(p=dropout),\n            zero_module(\n                BeVplaneConv(self.out_channels, self.out_channels, 3, padding=1, voxelfea=voxelfea)\n            ),\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = BeVplaneConv(\n                channels, self.out_channels, 3, padding=1, voxelfea=voxelfea\n            )\n        else:\n            self.skip_connection = BeVplaneConv(channels, self.out_channels, 1, padding=0, voxelfea=voxelfea)\n\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        return checkpoint(\n            self._forward, (x, emb), self.parameters(), self.use_checkpoint\n        )\n\n    def _forward(self, x, emb):\n        # x: (h_xy, h_xz, h_yz)\n\n        h = self.in_layers(x)\n\n        emb_out = self.emb_layers(emb).type(h[0].dtype)\n        while len(emb_out.shape) < len(h.shape):\n            emb_out = emb_out[..., None]\n\n        if self.use_scale_shift_norm:\n            out_norm, out_silu, out_conv = self.out_layers[0], self.out_layers[1], self.out_layers[2]\n            scale, shift = th.chunk(emb_out, 2, dim=1)\n\n            h = out_norm(h)\n            h = h * (1 + scale) + shift\n            h = out_silu(h)\n            h = out_conv(h)\n        else:\n            h = h + emb_out\n            h = self.out_layers(h)\n        \n        x_skip = self.skip_connection(x)\n        return x_skip+h\n\n\nclass BEVUNetModel(nn.Module):\n    def __init__(\n        self,\n        args,\n        num_res_blocks=1,\n        dropout=0,\n        use_checkpoint=False,\n        use_fp16=False,\n    ):\n        \n        super().__init__()\n        learn_sigma = args.learn_sigma\n        ssc_refine = args.ssc_refine\n        model_channels = args.model_channels\n        channel_mult = args.mult_channels\n        tri_unet_updown = args.tri_unet_updown\n        tri_z_down = args.tri_z_down\n        conv_down = args.conv_down\n        dataset = args.dataset\n        in_channels = args.geo_feat_channels\n        out_channels = args.geo_feat_channels\n        voxelfea=args.voxel_fea\n        self.voxelfea = voxelfea\n\n        self.ssc_refine = ssc_refine\n        self.in_channels = 2*in_channels if self.ssc_refine else in_channels\n            \n        self.model_channels = model_channels\n        self.out_channels = out_channels*2 if learn_sigma else out_channels\n        self.num_res_blocks = num_res_blocks\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.use_checkpoint = use_checkpoint\n        self.dtype = th.float16 if use_fp16 else th.float32\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            SiLU(),\n            linear(time_embed_dim, time_embed_dim),\n        )\n\n        ch = input_ch = int(channel_mult[0] * model_channels)\n        level_shape = ((128, 128, 16), (64, 64, 8), (32, 32, 4))\n        self.in_conv = TimestepEmbedSequential(BeVplaneConv(self.in_channels, ch, 1, padding=0, voxelfea=voxelfea))\n        print(\"\\nIn conv: BeVplaneConv\")\n        
n_down, n_up = 0, 0\n        \n        input_block_chans = [ch]\n        self.input_blocks = nn.ModuleList([])\n        for level, mult in enumerate(channel_mult):\n            layers = []\n            if tri_unet_updown and (level != 0):\n                if (dataset == 'carla') and (n_down == 0) :\n                    layers.append(BeVplaneDownsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea))\n                    n_down+=1\n                    print(f\"Down level {level}: BeVplaneDownsample2x, ch {ch}\")\n                elif (dataset == 'kitti') : \n                    layers.append(BeVplaneDownsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea))\n                    print(f\"Down level {level}: BeVplaneDownsample2x, ch {ch}\")\n                \n            for _ in range(num_res_blocks):\n                layers.append(\n                    BeVplaneResBlock(\n                        ch,\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        voxelfea=voxelfea\n                    )\n                )\n                print(f\"Down level {level} block 1: BeVplaneResBlock, ch {int(model_channels * mult)}\")\n                \n              \n                layers.append(\n                    BeVplaneResBlock(\n                        int(mult * model_channels),\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        voxelfea=voxelfea\n                    )\n                )\n                print(f\"Down level {level} block 2: BeVplaneResBlock, ch {int(model_channels * mult)}\")  \n            ch = int(mult * model_channels)\n            input_block_chans.append(ch)\n            self.input_blocks.append(TimestepEmbedSequential(*layers)) \n            \n\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            layers = []\n            for i in range(num_res_blocks):\n                ich = input_block_chans.pop()\n                if level == len(channel_mult) - 1 and i == 0:\n                    ich = 0\n                layers.append(\n                    BeVplaneResBlock(\n                        ch + ich,\n                        time_embed_dim,\n                        out_channels=int(model_channels * mult),\n                        level=level_shape[level],\n                        voxelfea=voxelfea\n                    )\n                )\n                print(f\"Up level {level} block 1 : BeVplaneResBlock, ch {int(model_channels * mult)}\")\n            \n                layers.append(\n                    BeVplaneResBlock(\n                        int(mult * model_channels),\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        voxelfea=voxelfea\n                    )\n                )\n                print(f\"Up level {level} block 2: BeVplaneResBlock, ch {int(model_channels * mult)}\")  \n                ch = int(model_channels * mult)\n            \n\n            if tri_unet_updown and (level > 0):\n                if (dataset == 'carla') and (n_up == 0) :\n                    layers.append(BeVplaneUpsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea))\n                    n_up+=1\n           
         print(f\"Up level {level}: BeVplaneUpsample2x, ch {int(model_channels * mult)}\")\n                elif (dataset == 'kitti') : \n                    layers.append(BeVplaneUpsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea))\n                    print(f\"Up level {level}: BeVplaneUpsample2x, ch {int(model_channels * mult)}\")\n\n            self.output_blocks.append(TimestepEmbedSequential(*layers))\n\n        self.out = nn.Sequential(\n            BeVplaneNorm(ch),\n            BeVplaneSiLU(),\n            BeVplaneConv(input_ch, self.out_channels, 1, padding=0, voxelfea=voxelfea)\n        )\n\n        print(\"Out conv: TriplaneConv\\n\")\n\n    def convert_to_fp16(self):\n        \"\"\"\n        Convert the torso of the model to float16.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f16)\n        self.output_blocks.apply(convert_module_to_f16)\n\n    def convert_to_fp32(self):\n        \"\"\"\n        Convert the torso of the model to float32.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f32)\n        self.output_blocks.apply(convert_module_to_f32)\n\n    def forward(self, x, timesteps, H=128, W=128, D=16, y=None):\n        \"\"\"\n        Apply the model to an input batch.\n\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        assert H is not None and W is not None and D is not None\n\n        hs = []\n        tri_size = (H[0], W[0], D[0])\n        emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))\n\n        if self.ssc_refine : \n            y=y.to(x.device).type(self.dtype)\n            h=th.cat([x, y], dim=1).type(self.dtype)\n        else : \n            h = x.type(self.dtype)\n\n        if not self.voxelfea:\n            triplane = decompose_featmaps(h, tri_size)\n            h_triplane, xz, yz = triplane\n        else :\n            h_triplane = h\n        h_triplane = self.in_conv(h_triplane, emb)\n\n        for level, module in enumerate(self.input_blocks):\n            h_triplane = module(h_triplane, emb)\n            hs.append(h_triplane)\n\n        for level, module in enumerate(self.output_blocks):\n            if level == 0:\n                h_triplane = hs.pop()\n            else:\n                h_triplane_pop = hs.pop()\n                h_triplane = th.cat([h_triplane, h_triplane_pop], dim=1)\n            \n            h_triplane = module(h_triplane, emb)\n        \n        h_triplane = self.out(h_triplane)\n        if not self.voxelfea:\n            h = compose_featmaps(h_triplane, xz, yz, tri_size)[0]\n        #assert h.shape == x.shape\n        return h\n    \n\nclass TriplaneUNetModel(nn.Module):\n    def __init__(\n        self,\n        args,\n        num_res_blocks=1,\n        dropout=0,\n        use_checkpoint=False,\n        use_fp16=False,\n    ):\n        \n        super().__init__()\n        learn_sigma = args.learn_sigma\n        ssc_refine = args.ssc_refine\n        model_channels = args.model_channels\n        is_rollout = args.is_rollout\n        channel_mult = args.mult_channels\n        tri_unet_updown = args.tri_unet_updown\n        tri_z_down = args.tri_z_down\n        conv_down = args.conv_down\n        dataset = args.dataset\n        in_channels = args.geo_feat_channels\n        out_channels = args.geo_feat_channels\n        \n        if tri_unet_updown:\n            
n_level = len(channel_mult)\n            level_shape=((128, 128, 16),)\n            for n in range(1, n_level):\n                level_shape += ((int(128//2**n), int(128//2**n), int(16//2**n)),)\n        else : \n            level_shape=()\n            n_level = len(channel_mult)\n            for n in range(n_level):\n                level_shape += ((128, 128, 16),)\n                \n        self.ssc_refine = ssc_refine\n        self.in_channels = 2*in_channels if ssc_refine else in_channels\n            \n        self.model_channels = model_channels\n        self.out_channels = out_channels*2 if learn_sigma else out_channels\n        self.num_res_blocks = num_res_blocks\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.use_checkpoint = use_checkpoint\n        self.dtype = th.float16 if use_fp16 else th.float32\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            SiLU(),\n            linear(time_embed_dim, time_embed_dim),\n        )\n\n        ch = input_ch = int(channel_mult[0] * model_channels)\n        level_shape = ((128, 128, 16), (64, 64, 8), (32, 32, 4))\n        self.in_conv = TimestepEmbedSequential(TriplaneConv(self.in_channels, ch, 1, padding=0, is_rollout=False))\n        print(\"\\nIn conv: TriplaneConv\")\n        n_down, n_up = 0, 0\n        \n        input_block_chans = [ch]\n        self.input_blocks = nn.ModuleList([])\n        for level, mult in enumerate(channel_mult):\n            layers = []\n            if tri_unet_updown and (level != 0):\n                if (dataset == 'carla') and (n_down == 0) :\n                    layers.append(TriplaneDownsample2x(tri_z_down, conv_down, channels=ch))\n                    n_down+=1\n                    print(f\"Down level {level}: TriplaneDownsample2x, ch {ch}\")\n                elif (dataset == 'kitti') : \n                    layers.append(TriplaneDownsample2x(tri_z_down, conv_down, channels=ch))\n                    print(f\"Down level {level}: TriplaneDownsample2x, ch {ch}\")\n                \n            for _ in range(num_res_blocks):\n                layers.append(\n                    TriplaneResBlock(\n                        ch,\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        is_rollout=is_rollout\n                    )\n                )\n                print(f\"Down level {level} block 1: TriplaneResBlock, ch {int(model_channels * mult)}\")\n                \n               \n                layers.append(\n                    TriplaneResBlock(\n                        int(mult * model_channels),\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        is_rollout=is_rollout\n                    )\n                )\n                print(f\"Down level {level} block 2: TriplaneResBlock, ch {int(model_channels * mult)}\")  \n            ch = int(mult * model_channels)\n            input_block_chans.append(ch)\n            self.input_blocks.append(TimestepEmbedSequential(*layers)) \n            \n\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            layers = []\n            for i in range(num_res_blocks):\n                ich = 
input_block_chans.pop()\n                if level == len(channel_mult) - 1 and i == 0:\n                    ich = 0\n                layers.append(\n                    TriplaneResBlock(\n                        ch + ich,\n                        time_embed_dim,\n                        out_channels=int(model_channels * mult),\n                        level=level_shape[level],\n                        is_rollout=is_rollout\n                    )\n                )\n                print(f\"Up level {level} block 1 : TriplaneResBlock, ch {int(model_channels * mult)}\")\n                \n                layers.append(\n                    TriplaneResBlock(\n                        int(mult * model_channels),\n                        time_embed_dim,\n                        out_channels=int(mult * model_channels),\n                        level=level_shape[level],\n                        is_rollout=is_rollout\n                    )\n                )\n                print(f\"Up level {level} block 2: TriplaneResBlock, ch {int(model_channels * mult)}\")  \n                ch = int(model_channels * mult)\n            \n\n            if tri_unet_updown and (level > 0):\n                if (dataset == 'carla') and (n_up == 0) :\n                    layers.append(TriplaneUpsample2x(tri_z_down, conv_down, channels=ch))\n                    n_up+=1\n                    print(f\"Up level {level}: TriplaneUpsample2x, ch {int(model_channels * mult)}\")\n                elif (dataset == 'kitti') : \n                    layers.append(TriplaneUpsample2x(tri_z_down, conv_down, channels=ch))\n                    print(f\"Up level {level}: TriplaneUpsample2x, ch {int(model_channels * mult)}\")\n\n            self.output_blocks.append(TimestepEmbedSequential(*layers))\n\n        self.out = nn.Sequential(\n            TriplaneNorm(ch),\n            TriplaneSiLU(),\n            TriplaneConv(input_ch, self.out_channels, 1, padding=0, is_rollout=False)\n        )\n\n        print(\"Out conv: TriplaneConv\\n\")\n\n    def convert_to_fp16(self):\n        \"\"\"\n        Convert the torso of the model to float16.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f16)\n        self.output_blocks.apply(convert_module_to_f16)\n\n    def convert_to_fp32(self):\n        \"\"\"\n        Convert the torso of the model to float32.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f32)\n        self.output_blocks.apply(convert_module_to_f32)\n\n    def forward(self, x, timesteps, H=128, W=128, D=16, y=None):\n        \"\"\"\n        Apply the model to an input batch.\n\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: an [N x C x ...] 
Tensor of outputs.\n        \"\"\"\n        assert H is not None and W is not None and D is not None\n\n        hs = []\n        if type(H) == int:\n            tri_size = (H, W, D)\n        else : \n            tri_size = (H[0], W[0], D[0])\n        emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))\n\n        if self.ssc_refine:\n            y=y.to(x.device).type(self.dtype)\n            h=th.cat([x, y], dim=1).type(self.dtype)\n        else : \n            h = x.type(self.dtype)\n       \n            \n        h_triplane = decompose_featmaps(h, tri_size)\n        h_triplane = self.in_conv(h_triplane, emb)\n\n        for level, module in enumerate(self.input_blocks):\n            h_triplane = module(h_triplane, emb)\n            hs.append(h_triplane)\n\n        for level, module in enumerate(self.output_blocks):\n            if level == 0:\n                h_triplane = hs.pop()\n            else:\n                h_triplane_pop = hs.pop()\n                h_triplane = list(h_triplane)\n                if h_triplane[0].shape[2:] != h_triplane_pop[0].shape[2:]:\n                    h_triplane[0] = F.interpolate(h_triplane[0], size=h_triplane_pop[0].shape[2:], mode='bilinear', align_corners=False)\n                if h_triplane[1].shape[2:] != h_triplane_pop[1].shape[2:]:\n                    h_triplane[1] = F.interpolate(h_triplane[1], size=h_triplane_pop[1].shape[2:], mode='bilinear', align_corners=False)\n                if h_triplane[2].shape[2:] != h_triplane_pop[2].shape[2:]:\n                    h_triplane[2] = F.interpolate(h_triplane[2], size=h_triplane_pop[2].shape[2:], mode='bilinear', align_corners=False)\n\n                h_triplane = (th.cat([h_triplane[0], h_triplane_pop[0]], dim=1),\n                              th.cat([h_triplane[1], h_triplane_pop[1]], dim=1),\n                              th.cat([h_triplane[2], h_triplane_pop[2]], dim=1))\n            \n            h_triplane = module(h_triplane, emb)\n        \n        h_triplane = self.out(h_triplane)\n        h = compose_featmaps(*h_triplane, tri_size)[0]\n        #assert h.shape == x.shape\n        return h"
  },
  {
    "path": "encoding/blocks.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport math\n\n\nclass SinusoidalEncoder(nn.Module):\n    \"\"\"Sinusoidal Positional Encoder used in Nerf.\"\"\"\n\n    def __init__(self, x_dim, min_deg, max_deg, use_identity: bool = True):\n        super().__init__()\n        self.x_dim = x_dim\n        self.min_deg = min_deg\n        self.max_deg = max_deg\n        self.use_identity = use_identity\n        self.register_buffer(\n            \"scales\", torch.tensor([2**i for i in range(min_deg, max_deg)])\n        )\n\n    @property\n    def latent_dim(self) -> int:\n        return (\n            int(self.use_identity) + (self.max_deg - self.min_deg) * 2\n        ) * self.x_dim\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        \"\"\"\n        Args:\n            x: [..., x_dim]\n        Returns:\n            latent: [..., latent_dim]\n        \"\"\"\n        if self.max_deg == self.min_deg:\n            return x\n        xb = torch.reshape(\n            (x[Ellipsis, None, :] * self.scales[:, None]),\n            list(x.shape[:-1]) + [(self.max_deg - self.min_deg) * self.x_dim],\n        )\n        latent = torch.sin(torch.cat([xb, xb + 0.5 * math.pi], dim=-1))\n        if self.use_identity:\n            latent = torch.cat([x] + [latent], dim=-1)\n        return latent\n\nclass DecoderMLPSkipConcat(nn.Module):\n    def __init__(self, in_channels, out_channels, hidden_channels, num_hidden_layers, posenc=0) -> None:\n        super().__init__()\n        self.posenc = posenc\n        if posenc > 0:\n            self.PE = SinusoidalEncoder(in_channels, 0, posenc, use_identity=True)\n            in_channels = self.PE.latent_dim\n        first_layer_list = [nn.Linear(in_channels, hidden_channels), nn.ReLU()]\n        for _ in range(num_hidden_layers // 2):\n            first_layer_list.append(nn.Linear(hidden_channels, hidden_channels))\n            first_layer_list.append(nn.ReLU())\n        self.first_layers = nn.Sequential(*first_layer_list)\n        \n        second_layer_list = [nn.Linear(in_channels + hidden_channels, hidden_channels), nn.ReLU()]\n        for _ in range(num_hidden_layers // 2 - 1):\n            second_layer_list.append(nn.Linear(hidden_channels, hidden_channels))\n            second_layer_list.append(nn.ReLU())\n        second_layer_list.append(nn.Linear(hidden_channels, out_channels))\n        self.second_layers = nn.Sequential(*second_layer_list)\n    \n    def forward(self, x):\n        if self.posenc > 0:\n            x = self.PE(x)\n        h = self.first_layers(x)\n        h = torch.cat([x, h], dim=-1)\n        h = self.second_layers(h)\n        return h\n\n\nclass SiLU(nn.Module):\n    def forward(self, x):\n        return x * torch.sigmoid(x)\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef compose_triplane_channelwise(feat_maps):\n    h_xy, h_xz, h_yz = feat_maps # (H, W), (H, D), (W, D)\n    assert h_xy.shape[1] == h_xz.shape[1] == h_yz.shape[1]\n    C, H, W = h_xy.shape[-3:]\n    D = h_xz.shape[-1]\n\n    newH = max(H, W)\n    newW = max(W, D)\n    h_xy = F.pad(h_xy, (0, newW - W, 0, newH - H))\n    h_xz = F.pad(h_xz, (0, newW - D, 0, newH - H))\n    h_yz = F.pad(h_yz, (0, newW - D, 0, newH - W))\n    h = torch.cat([h_xy, h_xz, h_yz], dim=1) # (B, 3C, H, W)\n\n    return h, (H, W, D)\n\n\ndef decompose_triplane_channelwise(composed_map, sizes):\n    H, W, D = 
sizes\n    C = composed_map.shape[1] // 3\n    h_xy = composed_map[:, :C, :H, :W]\n    h_xz = composed_map[:, C:2*C, :H, :D]\n    h_yz = composed_map[:, 2*C:, :W, :D]\n    return h_xy, h_xz, h_yz\n\n\nclass TriplaneGroupResnetBlock(nn.Module):\n    def __init__(self, in_channels, out_channels, up=False, ks=3, input_norm=True, input_act=True):\n        super().__init__()\n        in_channels *= 3\n        out_channels *= 3\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.up = up\n        \n        self.input_norm = input_norm\n        if input_norm and input_act:\n            self.in_layers = nn.Sequential(\n                # nn.GroupNorm(num_groups=3, num_channels=in_channels, eps=1e-6, affine=True),\n                SiLU(),\n                nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2)\n            )\n        elif not input_norm:\n            if input_act:\n                self.in_layers = nn.Sequential(\n                    SiLU(),\n                    nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2)\n                )\n            else:\n                self.in_layers = nn.Sequential(\n                    nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2)\n                )\n        else:\n            raise NotImplementedError\n\n        self.norm_xy = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True)\n        self.norm_xz = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True)\n        self.norm_yz = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True)\n\n        self.out_layers = nn.Sequential(\n            # nn.GroupNorm(num_groups=3, num_channels=out_channels, eps=1e-6, affine=True),\n            SiLU(),\n            # nn.Dropout(p=dropout),\n            zero_module(\n                nn.Conv2d(out_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2)\n            ),\n        )\n\n        if self.in_channels != self.out_channels:\n            self.shortcut = nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=1, stride=1, padding=0)\n        else:\n            self.shortcut = nn.Identity()\n\n    def forward(self, feat_maps):\n        if self.input_norm:\n            feat_maps = [self.norm_xy(feat_maps[0]), self.norm_xz(feat_maps[1]), self.norm_yz(feat_maps[2])]\n        x, (H, W, D) = compose_triplane_channelwise(feat_maps)\n\n        if self.up:\n            raise NotImplementedError\n        else:\n            h = self.in_layers(x)\n        \n        h_xy, h_xz, h_yz = decompose_triplane_channelwise(h, (H, W, D))\n        h_xy = self.norm_xy(h_xy)\n        h_xz = self.norm_xz(h_xz)\n        h_yz = self.norm_yz(h_yz)\n        h, _ = compose_triplane_channelwise([h_xy, h_xz, h_yz])\n\n        h = self.out_layers(h)\n        h = h + self.shortcut(x)\n        h_maps = decompose_triplane_channelwise(h, (H, W, D))\n        return h_maps\n\nclass BeVplaneGroupResnetBlock(nn.Module):\n    def __init__(self, in_channels, out_channels, up=False, ks=3, input_norm=True, input_act=True):\n        super().__init__()\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.up = up\n        \n        self.input_norm = input_norm\n        if input_norm and input_act:\n            self.in_layers = nn.Sequential(\n                # nn.GroupNorm(num_groups=3, num_channels=in_channels, 
eps=1e-6, affine=True),\n                SiLU(),\n                nn.Conv2d(in_channels, out_channels,  kernel_size=ks, stride=1, padding=(ks - 1)//2)\n            )\n        elif not input_norm:\n            if input_act:\n                self.in_layers = nn.Sequential(\n                    SiLU(),\n                    nn.Conv2d(in_channels, out_channels,  kernel_size=ks, stride=1, padding=(ks - 1)//2)\n                )\n            else:\n                self.in_layers = nn.Sequential(\n                    nn.Conv2d(in_channels, out_channels, kernel_size=ks, stride=1, padding=(ks - 1)//2)\n                )\n        else:\n            raise NotImplementedError\n\n        self.norm_xy = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True)\n        self.norm_xz = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True)\n        self.norm_yz = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True)\n\n        self.out_layers = nn.Sequential(\n            # nn.GroupNorm(num_groups=3, num_channels=out_channels, eps=1e-6, affine=True),\n            SiLU(),\n            # nn.Dropout(p=dropout),\n            zero_module(\n                nn.Conv2d(out_channels, out_channels,  kernel_size=ks, stride=1, padding=(ks - 1)//2)\n            ),\n        )\n\n        if self.in_channels != self.out_channels:\n            self.shortcut = nn.Conv2d(in_channels, out_channels,  kernel_size=1, stride=1, padding=0)\n        else:\n            self.shortcut = nn.Identity()\n\n    def forward(self, feat_maps):\n        if self.input_norm:\n            feat_maps = [self.norm_xy(feat_maps[0]), self.norm_xz(feat_maps[1]), self.norm_yz(feat_maps[2])]\n        \n        x = feat_maps[0]\n        if self.up:\n            raise NotImplementedError\n        else:\n            h = self.in_layers(x)\n        \n        h = self.norm_xy(h)\n        h = self.out_layers(h)\n        h = h + self.shortcut(x)\n        h_maps = [h, feat_maps[1], feat_maps[2]]\n        return h_maps\n\n"
  },
  {
    "path": "encoding/lovasz.py",
    "content": "import torch\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\ntry:\n    from itertools import  ifilterfalse\nexcept ImportError: # py3k\n    from itertools import  filterfalse as ifilterfalse\n\n\n\n# -*- coding:utf-8 -*-\n# author: Xinge\n\ndef dice_coef(y_true, y_pred, smooth=1e-6):\n    y_true_f = y_true.view(-1)\n    y_pred_f = y_pred.view(-1)\n    intersection = (y_true_f * y_pred_f).sum()\n    return (2. * intersection + smooth) / (y_true_f.sum() + y_pred_f.sum() + smooth)\n\ndef dice_coef_multilabel(y_true, y_pred, numLabels=11):\n    dice=0\n    for index in range(1, numLabels):\n        dice += dice_coef(y_true[:,index,:,:,:], y_pred[:,index,:,:,:])\n    return (numLabels-1) - dice\n\n\"\"\"\nLovasz-Softmax and Jaccard hinge loss in PyTorch\nMaxim Berman 2018 ESAT-PSI KU Leuven (MIT License)\n\"\"\"\n\ndef lovasz_grad(gt_sorted):\n    \"\"\"\n    Computes gradient of the Lovasz extension w.r.t sorted errors\n    See Alg. 1 in paper\n    \"\"\"\n    p = len(gt_sorted)\n    gts = gt_sorted.sum()\n    intersection = gts - gt_sorted.float().cumsum(0)\n    union = gts + (1 - gt_sorted).float().cumsum(0)\n    jaccard = 1. - intersection / union\n    if p > 1: # cover 1-pixel case\n        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]\n    return jaccard\n\n# --------------------------- MULTICLASS LOSSES ---------------------------\n\n\ndef lovasz_softmax(probas, labels, classes='present', per_image=False, ignore=None):\n    \"\"\"\n    Multi-class Lovasz-Softmax loss\n      probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1).\n              Interpreted as binary (sigmoid) output with outputs of size [B, H, W].\n      labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)\n      classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.\n      per_image: compute the loss per image instead of per batch\n      ignore: void class labels\n    \"\"\"\n    if per_image:\n        loss = mean(lovasz_softmax_flat(*flatten_probas(prob.unsqueeze(0), lab.unsqueeze(0), ignore), classes=classes)\n                          for prob, lab in zip(probas, labels))\n    else:\n        loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), classes=classes)\n    return loss\n\n\ndef lovasz_softmax_flat(probas, labels, classes='present'):\n    \"\"\"\n    Multi-class Lovasz-Softmax loss\n      probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)\n      labels: [P] Tensor, ground truth labels (between 0 and C - 1)\n      classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.\n    \"\"\"\n    if probas.numel() == 0:\n        # only void pixels, the gradients should be 0\n        return probas * 0.\n    C = probas.size(1)\n    losses = []\n    class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes\n    for c in class_to_sum:\n        fg = (labels == c).float() # foreground for class c\n        if (classes is 'present' and fg.sum() == 0):\n            continue\n        if C == 1:\n            if len(classes) > 1:\n                raise ValueError('Sigmoid output possible only with 1 class')\n            class_pred = probas[:, 0]\n        else:\n            class_pred = probas[:, c]\n        errors = (Variable(fg) - class_pred).abs()\n        errors_sorted, perm = torch.sort(errors, 0, descending=True)\n        perm = perm.data\n        fg_sorted = fg[perm]\n        
losses.append(torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted))))\n    return mean(losses)\n\n\ndef flatten_probas(probas, labels, ignore=None):\n    \"\"\"\n    Flattens predictions in the batch\n    \"\"\"\n    if probas.dim() == 3:\n        # assumes output of a sigmoid layer\n        B, H, W = probas.size()\n        probas = probas.view(B, 1, H, W)\n    elif probas.dim() == 5:\n        #3D segmentation\n        B, C, L, H, W = probas.size()\n        probas = probas.contiguous().view(B, C, L, H*W)\n    B, C, H, W = probas.size()\n    probas = probas.permute(0, 2, 3, 1).contiguous().view(-1, C)  # B * H * W, C = P, C\n    labels = labels.view(-1)\n    if ignore is None:\n        return probas, labels\n    valid = (labels != ignore)\n    vprobas = probas[valid.nonzero().squeeze()]\n    vlabels = labels[valid]\n    return vprobas, vlabels\n\n\n# --------------------------- HELPER FUNCTIONS ---------------------------\ndef isnan(x):\n    return x != x\n    \n    \ndef mean(l, ignore_nan=False, empty=0):\n    \"\"\"\n    nanmean compatible with generators.\n    \"\"\"\n    l = iter(l)\n    if ignore_nan:\n        l = ifilterfalse(isnan, l)\n    try:\n        n = 1\n        acc = next(l)\n    except StopIteration:\n        if empty == 'raise':\n            raise ValueError('Empty mean')\n        return empty\n    for n, v in enumerate(l, 2):\n        acc += v\n    if n == 1:\n        return acc\n    return acc / n"
  },
  {
    "path": "encoding/networks.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom encoding.blocks import TriplaneGroupResnetBlock, BeVplaneGroupResnetBlock, DecoderMLPSkipConcat\n\nclass Encoder(nn.Module):\n    def __init__(self, geo_feat_channels, z_down, padding_mode, kernel_size = (5, 5, 3), padding = (2, 2, 1)):\n        super().__init__()\n        self.z_down = z_down\n        self.conv0 = nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode)\n        self.convblock1 = nn.Sequential(\n            nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode),\n            nn.InstanceNorm3d(geo_feat_channels),\n            nn.LeakyReLU(1e-1, True),\n            nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode),\n            nn.InstanceNorm3d(geo_feat_channels)\n        )\n        if self.z_down :\n            self.downsample = nn.Sequential(\n                nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=(0, 0, 0), bias=True, padding_mode=padding_mode),\n                nn.InstanceNorm3d(geo_feat_channels)\n            )\n        else :\n            self.downsample = nn.Sequential(\n                nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=(2, 2, 1), stride=(2, 2, 1), padding=(0, 0, 0), bias=True, padding_mode=padding_mode),\n                nn.InstanceNorm3d(geo_feat_channels)\n            )\n        self.convblock2 = nn.Sequential(\n            nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode),\n            nn.InstanceNorm3d(geo_feat_channels),\n            nn.LeakyReLU(1e-1, True),\n            nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode),\n            nn.InstanceNorm3d(geo_feat_channels)\n        )\n\n    def forward(self, x):  # [b, geo_feat_channels, X, Y, Z]\n        x = self.conv0(x)  # [b, geo_feat_channels, X, Y, Z]\n\n        residual_feat = x\n        x = self.convblock1(x)  # [b, geo_feat_channels, X, Y, Z]\n        x = x + residual_feat   # [b, geo_feat_channels, X, Y, Z]\n        x = self.downsample(x)  # [b, geo_feat_channels, X//2, Y//2, Z//2]\n\n        residual_feat = x\n        x = self.convblock2(x)\n        x = x + residual_feat\n\n        return x  # [b, geo_feat_channels, X//2, Y//2, Z//2]\n\nclass AutoEncoderGroupSkip(nn.Module):\n    def __init__(self, args) -> None:\n        super().__init__()\n        class_num = args.num_class \n        self.embedding = nn.Embedding(class_num, args.geo_feat_channels)\n\n        print('build encoder...')\n        if args.dataset == 'kitti':\n            self.geo_encoder = Encoder(args.geo_feat_channels, args.z_down, args.padding_mode)\n        else:\n            self.geo_encoder = Encoder(args.geo_feat_channels, args.z_down, args.padding_mode, kernel_size = 3, padding = 1)\n\n        if args.voxel_fea :\n            self.norm = nn.InstanceNorm3d(args.geo_feat_channels) \n        else:\n            self.norm = nn.InstanceNorm2d(args.geo_feat_channels)\n        self.geo_feat_dim = args.geo_feat_channels\n        self.pos = args.pos\n        self.pos_num_freq = 6  # the defualt value 6 like 
NeRF\n        self.args = args\n        \n        print('triplane features are summed for decoding...')\n        if args.dataset == 'kitti':\n            if args.voxel_fea:\n                self.geo_convs = nn.Sequential(\n                    nn.Conv3d(args.geo_feat_channels, args.feat_channel_up, kernel_size=3, stride=1, padding=1, bias=True, padding_mode=args.padding_mode),\n                    nn.InstanceNorm3d(args.geo_feat_channels)\n                )\n            else : \n                if args.triplane:\n                    self.geo_convs = TriplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=5, input_norm=False, input_act=False)\n                else : \n                    self.geo_convs = BeVplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=5, input_norm=False, input_act=False)\n        else:\n            self.geo_convs = TriplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=3, input_norm=False, input_act=False)\n\n        print(f'build shared decoder... (PE: {self.pos})\\n')\n        if self.pos:\n            self.geo_decoder = DecoderMLPSkipConcat(args.feat_channel_up+6*self.pos_num_freq, args.num_class, args.mlp_hidden_channels, args.mlp_hidden_layers)\n        else:\n            self.geo_decoder = DecoderMLPSkipConcat(args.feat_channel_up, args.num_class, args.mlp_hidden_channels, args.mlp_hidden_layers)\n\n    def geo_parameters(self):\n        return list(self.geo_encoder.parameters()) + list(self.geo_convs.parameters()) + list(self.geo_decoder.parameters())\n    \n    def tex_parameters(self):\n        return list(self.tex_encoder.parameters()) + list(self.tex_convs.parameters()) + list(self.tex_decoder.parameters())\n\n    def encode(self, vol):\n        x = vol.detach().clone()\n        x[x == 255] = 0\n            \n        x = self.embedding(x)\n        x = x.permute(0, 4, 1, 2, 3)\n        vol_feat = self.geo_encoder(x)\n\n        if self.args.voxel_fea:\n            vol_feat = self.norm(vol_feat).tanh()\n            return vol_feat\n        else :\n            xy_feat = vol_feat.mean(dim=4)\n            xz_feat = vol_feat.mean(dim=3)\n            yz_feat = vol_feat.mean(dim=2)\n            \n            xy_feat = (self.norm(xy_feat) * 0.5).tanh()\n            xz_feat = (self.norm(xz_feat) * 0.5).tanh()\n            yz_feat = (self.norm(yz_feat) * 0.5).tanh()\n            return [xy_feat, xz_feat, yz_feat]\n    \n    def sample_feature_plane2D(self, feat_map, x):\n        \"\"\"Sample feature map at given coordinates\"\"\"\n        # feat_map: [bs, C, H, W]\n        # x: [bs, N, 2]\n        sample_coords = x.view(x.shape[0], 1, -1, 2) # sample_coords: [bs, 1, N, 2]\n        feat = F.grid_sample(feat_map, sample_coords.flip(-1), align_corners=False, padding_mode='border') # feat : [bs, C, 1, N]\n        feat = feat[:, :, 0, :] # feat : [bs, C, N]\n        feat = feat.transpose(1, 2) # feat : [bs, N, C]\n        return feat\n\n    def sample_feature_plane3D(self, vol_feat, x):\n        \"\"\"Sample feature map at given coordinates\"\"\"\n        # feat_map: [bs, C, H, W, D]\n        # x: [bs, N, 3]\n        sample_coords = x.view(x.shape[0], 1, 1, -1, 3)\n        feat = F.grid_sample(vol_feat, sample_coords.flip(-1), align_corners=False, padding_mode='border') # feat : [bs, C, 1, 1, N]\n        feat = feat[:, :, 0, 0, :] # feat : [bs, C, N]\n        feat = feat.transpose(1, 2) # feat : [bs, N, C]\n        return feat \n\n    def decode(self, feat_maps, query):        \n        if self.args.voxel_fea:\n       
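     # voxel-feature path: refine the 3D feature volume with geo_convs, then\n            # sample it at the query points via grid_sample (sample_feature_plane3D)\n       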
     h_geo = self.geo_convs(feat_maps)\n            h_geo = self.sample_feature_plane3D(h_geo, query)\n            \n        else : \n            # coords [N, 3]\n            coords_list = [[0, 1], [0, 2], [1, 2]]\n            geo_feat_maps = [fm[:, :self.geo_feat_dim] for fm in feat_maps]\n            geo_feat_maps = self.geo_convs(geo_feat_maps)\n\n            if self.args.triplane:\n                h_geo = 0\n                for i in range(3):\n                    h_geo += self.sample_feature_plane2D(geo_feat_maps[i], query[..., coords_list[i]]) # feat : [bs, N, C]\n            else :\n                h_geo = self.sample_feature_plane2D(geo_feat_maps[0], query[..., coords_list[0]]) # feat : [bs, N, C]\n            \n        if self.pos :\n            PE = []\n            for freq in range(self.pos_num_freq):\n                PE.append(torch.sin((2.**freq) * query))\n                PE.append(torch.cos((2.**freq) * query))\n\n            PE = torch.cat(PE, dim=-1)  # [bs, N, 6*self.pos_num_freq]\n            h_geo = torch.cat([h_geo, PE], dim=-1)\n\n        h = self.geo_decoder(h_geo) # h : [bs, N, num_class]\n        return h\n    \n    def forward(self, vol, query):\n        feat_map = self.encode(vol)\n        return self.decode(feat_map, query)\n"
  },
  {
    "path": "encoding/ssc_metrics.py",
    "content": "import torch\nimport numpy as np\nimport os\n\ndef compose_featmaps(feat_xy, feat_xz, feat_yz):\n    H, W = feat_xy.shape[-2:]\n    D = feat_xz.shape[-1]\n    empty_block = torch.zeros(list(feat_xy.shape[:-2]) + [D, D], dtype=feat_xy.dtype, device=feat_xy.device)\n    composed_map = torch.cat(\n        [torch.cat([feat_xy, feat_xz], dim=-1),\n         torch.cat([feat_yz.transpose(-1, -2), empty_block], dim=-1)], \n        dim=-2\n    )\n    return composed_map\n\ndef decompose_featmaps(composed_map):\n    H, W, D = 256, 256, 32\n    feat_xy = composed_map[..., :H, :W] # (C, H, W)\n    feat_xz = composed_map[..., :H, W:] # (C, H, D)\n    feat_yz = np.asarray(torch.tensor(composed_map[..., H:, :W]).transpose(-1, -2)) # (C, W, D)\n    return feat_xy, feat_xz, feat_yz\n\ndef visualization(args, coords, preds, folder, idx, learning_map_inv, training=True):\n    output = torch.zeros((256, 256, 32), device=preds.device)\n    coords = coords.squeeze(0)\n    output[coords[:,0], coords[:,1], coords[:,2]] = preds.squeeze(0)\n    \n    pred = output.cpu().long().data.numpy()\n    maxkey = max(learning_map_inv.keys())\n\n    # +100 hack making lut bigger just in case there are unknown labels\n    remap_lut_First = np.zeros((maxkey + 100), dtype=np.int32)\n    remap_lut_First[list(learning_map_inv.keys())] = list(learning_map_inv.values())\n\n    pred = pred.astype(np.uint32)\n    pred = pred.reshape((-1))\n    upper_half = pred >> 16  # get upper half for instances\n    lower_half = pred & 0xFFFF  # get lower half for semantics\n    lower_half = remap_lut_First[lower_half]  # do the remapping of semantics\n    pred = (upper_half << 16) + lower_half  # reconstruct full label\n    pred = pred.astype(np.uint32)\n\n    # Save\n    final_preds = pred.astype(np.uint16)\n    if training:\n        os.makedirs(args.save_path+'/Prediction/', exist_ok=True)\n        for i in range(11):\n            os.makedirs(args.save_path+'/Prediction/'+str(i).zfill(2), exist_ok=True)\n\n        if torch.is_tensor(idx):\n            save_path = args.save_path+'/Prediction/'+str(folder)+'/'+str(idx.item()).zfill(3)+'.label'\n        else : \n            save_path = args.save_path+'/Prediction/'+str(folder)+'/'+str(idx).zfill(3)+'.label'\n    else : save_path = args.save_path+'/'+str(folder)+'/'+str(idx).zfill(3)+'.label'\n    \n    final_preds.tofile(save_path)\n    \n    \n\"\"\"\nPart of the code is taken from https://github.com/waterljwant/SSC/blob/master/sscMetrics.py\n\"\"\"\nimport numpy as np\nfrom sklearn.metrics import accuracy_score, precision_recall_fscore_support\n\n\n#!/usr/bin/env python3\n# This file is covered by the LICENSE file in the root of this project.\n\nimport sys\nimport numpy as np\n\n\nclass SSCMetrics:\n    def __init__(self, n_classes, ignore=None):\n        # classes\n        self.n_classes = n_classes\n\n        # What to include and ignore from the means\n        self.ignore = np.array(ignore, dtype=np.int64)\n        self.include = np.array([n for n in range(self.n_classes) if n not in self.ignore], dtype=np.int64)\n        #print(\"[IOU EVAL] IGNORE: \", self.ignore)\n        #print(\"[IOU EVAL] INCLUDE: \", self.include)\n\n        # reset the class counters\n        self.reset()\n\n    def num_classes(self):\n        return self.n_classes\n\n    def get_eval_mask(self, labels, invalid_voxels):  # from samantickitti api\n        \"\"\"\n        Ignore labels set to 255 and invalid voxels (the ones never hit by a laser ray, probed using ray tracing)\n        :param labels: input 
ground truth voxels\n        :param invalid_voxels: voxels ignored during evaluation since they lie beyond the scene that was captured by the laser\n        :return: boolean mask to subsample the voxels to evaluate\n        \"\"\"\n        masks = np.ones_like(labels, dtype=np.bool_)\n        masks[labels == 255] = False\n        masks[invalid_voxels == 1] = False\n        return masks\n\n    def reset(self):\n        self.conf_matrix = np.zeros((self.n_classes,\n                                    self.n_classes),\n                                    dtype=np.int64)\n        \n    def one_stats(self, x, y):\n        # sizes should be matching\n        x_row = x.reshape(-1)  # de-batchify\n        y_row = y.reshape(-1)  # de-batchify\n        idxs = tuple(np.stack((x_row, y_row), axis=0))\n        conf_matrix = np.zeros((self.n_classes, self.n_classes), dtype=np.int64)\n        np.add.at(conf_matrix, idxs, 1)\n        conf_matrix[:, self.ignore] = 0\n        tp = np.diag(conf_matrix)\n        fp = conf_matrix.sum(axis=1) - tp\n        fn = conf_matrix.sum(axis=0) - tp\n        intersection = tp\n        union = tp + fp + fn + 1e-15\n        n = len(np.unique(y)) - 1\n        miou = (intersection[1:] / union[1:]).sum()/n *100\n        #miou = (intersection / union).sum()/n *100\n        all_miou = (intersection / union).sum()/(n+1) *100\n        iou = (np.sum(conf_matrix[1:, 1:])) / (np.sum(conf_matrix) - conf_matrix[0, 0] + 1e-8) * 100\n        return iou, miou, all_miou\n    \n    def addBatch(self, x, y):  # x=preds, y=targets\n        # sizes should be matching\n        x_row = x.reshape(-1)  # de-batchify\n        y_row = y.reshape(-1)  # de-batchify\n\n        # check\n        assert(x_row.shape == y_row.shape)\n\n        # create indexes\n        idxs = tuple(np.stack((x_row, y_row), axis=0))\n\n        # make confusion matrix (cols = gt, rows = pred)\n        np.add.at(self.conf_matrix, idxs, 1)\n        iou, miou, all_miou = self.one_stats(x, y)\n        return iou, miou\n        \n\n    def getStats(self):\n        # remove fp from confusion on the ignore classes cols\n        conf = self.conf_matrix.copy()\n        conf[:, self.ignore] = 0\n\n        # get the clean stats\n        tp = np.diag(conf)\n        fp = conf.sum(axis=1) - tp\n        fn = conf.sum(axis=0) - tp\n        return tp, fp, fn\n\n    def getIoU(self):\n        tp, fp, fn = self.getStats()\n        intersection = tp\n        union = tp + fp + fn + 1e-15\n        iou = intersection / union\n        iou_mean = (intersection[self.include] / union[self.include]).mean()\n        return iou_mean, iou  # returns \"iou mean\", \"iou per class\" ALL CLASSES\n\n    def getacc(self):\n        tp, fp, fn = self.getStats()\n        total_tp = tp.sum()\n        total = tp[self.include].sum() + fp[self.include].sum() + 1e-15\n        acc_mean = total_tp / total\n        return acc_mean  # returns \"acc mean\"\n        \n    def get_confusion(self):\n        return self.conf_matrix.copy()"
  },
  {
    "path": "encoding/train_ae.py",
    "content": "from torch.utils.tensorboard import SummaryWriter\nfrom dataset.dataset_builder import dataset_builder\nfrom encoding.networks import AutoEncoderGroupSkip\nfrom encoding.lovasz import lovasz_softmax\nfrom utils.utils import save_remap_lut, point2voxel\nimport os\nimport torch\nfrom tqdm.auto import tqdm\nfrom torch.cuda.amp import autocast, GradScaler\nimport numpy as np\nfrom encoding.ssc_metrics import SSCMetrics\n\nclass Trainer:\n    def __init__(self, args):\n        # etc\n        self.args = args\n        self.writer = SummaryWriter(os.path.join(args.save_path, 'tb'))\n        self.epoch, self.start_epoch = 0, 0\n        self.global_step = 0\n        self.best_miou = 0\n\n        # dataset\n        self.train_dataset, self.val_dataset, self.num_class, class_names = dataset_builder(args)\n        self.train_dataloader = torch.utils.data.DataLoader(self.train_dataset, batch_size=args.bs, shuffle=True, num_workers=8, pin_memory=True)\n        self.val_dataloader = torch.utils.data.DataLoader(self.val_dataset, batch_size=1, shuffle=False, num_workers=8, pin_memory=True)\n        self.iou_class_names = class_names\n\n        # model & optimizer\n        self.model = AutoEncoderGroupSkip(args).cuda()\n        self.optimizer = torch.optim.Adam(self.model.parameters(), lr=args.lr)\n        self.scheduler = torch.optim.lr_scheduler.MultiStepLR(self.optimizer, args.lr_scheduler_steps, args.lr_scheduler_decay) if args.lr_scheduler else None\n        self.grad_scaler = GradScaler()\n        \n        if args.resume:\n            checkpoint = torch.load(args.resume)\n            self.model.load_state_dict(checkpoint['model'])\n            self.optimizer.load_state_dict(checkpoint['optimizer'])\n            self.start_epoch = checkpoint['epoch']\n            # TODO: load scheduler\n\n        # loss functions\n        self.loss_fns = {}\n        self.loss_fns['ce'] = torch.nn.CrossEntropyLoss(weight=self.train_dataset.weights, ignore_index=255)\n        self.loss_fns['lovasz'] = None\n\n    def train(self):\n        for epoch in range(30000):\n            self.epoch = self.start_epoch + epoch + 1\n                \n            print('Training...')\n            self._train_model()\n            \n            if epoch % self.args.eval_epoch == 0:\n                print('Evaluation...')\n                self._eval_and_save_model()\n\n            # learning rate scheduling\n            self.scheduler.step()\n            self.writer.add_scalar('lr_epochwise', self.optimizer.param_groups[0]['lr'], global_step=self.epoch)\n\n    def _loss(self, vox, query, label, losses, coord):\n        empty_label = 0.\n        preds = self.model(vox, query) # [bs, N, 20]\n        losses['ce'] = self.loss_fns['ce'](preds.view(-1, self.num_class), label.view(-1,))\n        losses['loss'] = losses['ce']\n        \n        pred_output = torch.full((preds.shape[0], vox.shape[1], vox.shape[2], vox.shape[3], self.num_class), fill_value=empty_label, device=preds.device)\n        gt_output = torch.full((preds.shape[0], vox.shape[1], vox.shape[2], vox.shape[3]), fill_value=empty_label, device=preds.device)\n        softmax_preds = torch.nn.functional.softmax(preds, dim=2)\n        for i in range(softmax_preds.shape[0]):\n            pred_output[i, coord[i, :, 0], coord[i, :, 1], coord[i, :, 2], :] = softmax_preds[i]\n            gt_output[i, coord[i, :, 0], coord[i, :, 1], coord[i, :, 2]] = label[i].float()\n        losses['lovasz'] = lovasz_softmax(pred_output.permute(0,4,1,2,3), gt_output)\n        
losses['loss'] += losses['lovasz']\n\n        adaptive_weight = None\n        return losses, preds, adaptive_weight\n    \n    def _train_model(self):\n        self.model.train()\n\n        total_losses = {loss_name: 0. for loss_name in self.loss_fns.keys()}\n        total_losses['loss'] = 0.\n        evaluator = SSCMetrics(self.num_class, [])\n        dataloader_tqdm = tqdm(self.train_dataloader)\n\n        for vox, query, label, coord, path, invalid in dataloader_tqdm:\n            vox = vox.type(torch.LongTensor).cuda()\n            query = query.type(torch.FloatTensor).cuda()\n            label = label.type(torch.LongTensor).cuda()\n            coord = coord.type(torch.LongTensor).cuda()\n            invalid = invalid.type(torch.LongTensor).cuda()\n            b_size = vox.size(0)  # TODO: bsize is correct?\n\n            # forward\n            losses = {}\n            with autocast():\n                losses, model_output, adaptive_weight = self._loss(vox, query, label, losses, coord)\n\n            # optimize\n            self.optimizer.zero_grad()\n            self.grad_scaler.scale(losses['loss']).backward()\n            self.grad_scaler.unscale_(self.optimizer)\n            grad_norm = torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0)  # gradient clipping\n            self.grad_scaler.step(self.optimizer)\n            self.grad_scaler.update()\n\n            # eval and log each iteration\n            if self.global_step % self.args.display_period == 0:\n                pred_mask = get_pred_mask(model_output)\n\n                masks = torch.from_numpy(evaluator.get_eval_mask(vox.cpu().numpy(), invalid.cpu().numpy()))\n                output = point2voxel(self.args, pred_mask, coord)\n                eval_output = output[masks]\n                eval_label = vox[masks]\n                this_iou, this_miou = evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))\n\n                # on display\n                dataloader_tqdm.set_postfix({\"loss\": losses['loss'].detach().item(), \"iou\": this_iou, \"miou\": this_miou})\n\n                # on tensorboard\n                self.writer.add_scalar('Grad_Norm', grad_norm, global_step=self.global_step)\n                self.writer.add_scalar('Train_Performance_stepwise/IoU', this_iou, global_step=self.global_step)\n                self.writer.add_scalar('Train_Performance_stepwise/mIoU', this_miou, global_step=self.global_step)\n                for loss_name in losses.keys():\n                    self.writer.add_scalar(f'Train_Loss_stepwise/loss_{loss_name}', losses[loss_name], self.global_step)\n          \n            # loss accumulation for logging\n            for loss_name in losses.keys():\n                total_losses[loss_name] += (losses[loss_name] * b_size)\n\n            self.global_step += 1\n\n        # eval for 1 epoch\n        _, class_jaccard = evaluator.getIoU()\n        m_jaccard = class_jaccard[1:].mean()\n        miou = m_jaccard * 100\n        conf = evaluator.get_confusion()\n        iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8)\n        evaluator.reset()\n\n        # log for 1 epoch\n        self.writer.add_scalar('Train_Performance_epochwise/IoU', iou, global_step=self.epoch)\n        self.writer.add_scalar('Train_Performance_epochwise/mIoU', miou, global_step=self.epoch)\n        for class_idx, class_name in enumerate(self.iou_class_names):\n            self.writer.add_scalar(f'Train_ClassPerformance_epochwise/class{class_idx + 
1}_IoU_{class_name}', class_jaccard[class_idx + 1], global_step=self.epoch)\n        for loss_name in losses.keys():\n            self.writer.add_scalar(f'Train_Loss_epochwise/loss_{loss_name}', total_losses[loss_name] / len(self.train_dataset), global_step=self.epoch)\n\n        print(f\"Epoch: {self.epoch} \\t IOU: \\t {iou:01f} \\t mIoU: \\t {miou:01f}\")\n\n\n    @torch.no_grad()\n    def _eval_and_save_model(self):\n        self.model.eval()\n\n        total_losses = {loss_name: 0. for loss_name in self.loss_fns.keys()}\n        total_losses['loss'] = 0.\n        evaluator = SSCMetrics(self.num_class, [])\n        dataloader_tqdm = tqdm(self.val_dataloader)\n\n        for sample_idx, (vox, query, label, coord, path, invalid) in enumerate(dataloader_tqdm):\n            vox = vox.type(torch.LongTensor).cuda()\n            query = query.type(torch.FloatTensor).cuda()\n            label = label.type(torch.LongTensor).cuda()\n            coord = coord.type(torch.LongTensor).cuda()\n            invalid = invalid.type(torch.LongTensor).cuda()\n            b_size = vox.size(0)  # TODO: check correctness\n            assert b_size == 1, 'For accurate logging, please set batch size of validation dataloader to 1.'\n\n            losses = {}\n            losses, model_output, adaptive_weight = self._loss(vox, query, label, losses, coord)\n            pred_mask =  get_pred_mask(model_output)\n\n            masks = torch.from_numpy(evaluator.get_eval_mask(vox.cpu().numpy(), invalid.cpu().numpy()))\n            output = point2voxel(self.args, pred_mask, coord)\n            eval_output = output[masks]\n            eval_label = vox[masks]\n            this_iou, this_miou = evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))\n\n            # log on display for each sample\n            dataloader_tqdm.set_postfix({\"loss\": losses['loss'].detach().item(), \"iou\": this_iou, \"miou\": this_miou})\n\n            for loss_name in losses.keys():\n                total_losses[loss_name] += (losses[loss_name] * b_size)\n\n            idx = path[0].split('/')[-1].split('.')[0]\n            folder = path[0].split('/')[-3]\n            save_remap_lut(self.args, output, folder, idx, self.train_dataset.learning_map_inv, True)\n\n        # eval for all validation samples\n        _, class_jaccard = evaluator.getIoU()\n        m_jaccard = class_jaccard[1:].mean()\n        miou = m_jaccard * 100\n        conf = evaluator.get_confusion()\n        iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8)\n        evaluator.reset()\n\n        self.writer.add_scalar('Val_Performance_epochwise/IoU', iou, global_step=self.epoch)\n        self.writer.add_scalar('Val_Performance_epochwise/mIoU', miou, global_step=self.epoch)\n        for class_idx, class_name in enumerate(self.iou_class_names):\n            self.writer.add_scalar(f'Val_ClassPerformance_epochwise/class{class_idx + 1}_IoU_{class_name}', class_jaccard[class_idx + 1], global_step=self.epoch)\n        for loss_name in losses.keys():\n            self.writer.add_scalar(f'Val_Loss_epochwise/loss_{loss_name}', total_losses[loss_name] / len(self.val_dataset), global_step=self.epoch)\n        print(f\"Epoch: {self.epoch} \\t IOU: \\t {iou:01f} \\t mIoU: \\t {miou:01f}\")\n\n        if self.best_miou < miou:\n            self.best_miou = miou\n            checkpoint = {'optimizer': self.optimizer.state_dict(), 'model': self.model.state_dict(), 'epoch': self.epoch}  # TODO: save scheduler\n            
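# save only when the validation mIoU improves; the filename records epoch and mIoU\n            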
torch.save(checkpoint, self.args.save_path + \"/\" + str(self.epoch) + \"_miou=\" + str(f\"{miou:.3f}\") + '.pt')\n\ndef get_pred_mask(model_output, separate_decoder=False):\n    preds = model_output\n    pred_prob = torch.softmax(preds, dim=2)\n    pred_mask = pred_prob.argmax(dim=2).float()\n    return pred_mask\n"
  },
  {
    "path": "sampling/generation.py",
    "content": "from utils.parser_util import add_encoding_training_options, add_diffusion_training_options, add_generation_options\nfrom utils.utils import save_remap_lut, point2voxel\nfrom encoding.train_ae import get_pred_mask\nfrom diffusion.triplane_util import build_sampling_model, decompose_featmaps\nfrom utils import dist_util\nimport torch\nimport os\nimport argparse\nimport numpy as np\n\ndef sample(args):\n    model, ae, sample_fn, coords, query, out_shape, _, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args)\n    args.grid_size = grid_size\n    with torch.no_grad():\n        condition = np.zeros(out_shape)\n        cond = {'y':condition, 'H':H, 'W':W, 'D':D, 'path':args.save_path}\n\n        for r in range(args.num_samples):\n            samples = sample_fn(model, out_shape, progress=False, model_kwargs=cond)         \n            xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, (H, W, D))\n            model_output = ae.decode([xy_feat, xz_feat, yz_feat], query)\n            sample = get_pred_mask(model_output)\n            output = point2voxel(args, sample, coords)\n            sample = save_remap_lut(args, output, \"sample\", r, learning_map_inv, training=False)\n            \n            os.umask(0)\n            save_path = os.path.join(args.save_path, f\"sample/{r}.label\")\n            os.makedirs(args.save_path+'/sample', mode=0o777, exist_ok=True)\n            sample.tofile(save_path)\n            \ndef sample_parser():\n    parser = argparse.ArgumentParser()\n    add_encoding_training_options(parser)\n    add_diffusion_training_options(parser)\n    add_generation_options(parser)\n    parser.add_argument(\"--gpu_id\", default=0, type=int)\n    parser.add_argument(\"--save_path\", type=str, default = '')\n\n    parser.add_argument(\"--dataset\",  default='kitti', choices=['kitti', 'carla'])\n    parser.add_argument(\"--num_samples\", type=int, default=10)\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = sample_parser()\n    dist_util.setup_dist(args.gpu_id)\n    sample(args)"
  },
  {
    "path": "sampling/inpainting.py",
    "content": "from utils.parser_util import add_diffusion_training_options, add_encoding_training_options, add_in_out_sampling\nfrom sampling.outpainting import edit_scene\nfrom utils.utils import load_label, save_remap_lut\nfrom diffusion.triplane_util import build_sampling_model\nfrom utils import dist_util\nimport torch\nimport argparse\n\ndef inpainting(scene, cond_1, cond_2, cond_3, cond_4, Generate_Scene):\n    cond = scene.clone().detach()\n    edit_scene = scene.clone().detach()\n    output = Generate_Scene(cond, m=(cond_1, cond_2, cond_3, cond_4))\n    edit_scene[:, cond_3 : cond_4,  cond_1 : cond_2, :] = output[:, cond_3 : cond_4,  cond_1 : cond_2, :] \n    return edit_scene\n        \ndef edit(args):   \n    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args)  \n    args.grid_size = grid_size  \n    scene = load_label(args.load_path, learning_map, grid_size)\n    \n    Generate_Scene = edit_scene(args, ae, model, sample_fn, coords, query, out_shape, (H, W, D), args.overlap)\n    \n    more_edit_answer = 'y'\n    while more_edit_answer != 'n' :\n        cond_1, cond_2, cond_3, cond_4 = input('points of re-generation region tl, tr, dl, dr:').split()\n        answer = 'y'\n        while answer == 'y' :\n            new_scene = inpainting(scene, int(cond_1), int(cond_2), int(cond_3), int(cond_4), Generate_Scene)\n            save_scene = save_remap_lut(args, new_scene, None, None, learning_map_inv, training=False)\n            save_scene.tofile(args.save_path+'/inpainting.label')\n            answer = input('Again? (y/n/q) :')\n            if answer == 'n' : scene = new_scene\n            if answer == 'q' : break\n        more_edit_answer = input('More edit? (y/n) :')    \n    scene = new_scene\n\ndef sample_parser():\n    parser = argparse.ArgumentParser()\n    add_encoding_training_options(parser)\n    add_diffusion_training_options(parser)\n    add_in_out_sampling(parser)\n    parser.add_argument(\"--save_path\", type=str, default = '')\n    parser.add_argument(\"--gpu_id\", default=0, type=int)\n    \n    parser.add_argument(\"--load_path\", default='./dataset/001335.label')\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = sample_parser()\n    args.overlap = 'inpainting'\n\n    dist_util.setup_dist(args.gpu_id)\n    edit(args)"
  },
  {
    "path": "sampling/outpainting.py",
    "content": "from diffusion.triplane_util import build_sampling_model,  compose_featmaps, decompose_featmaps\nfrom utils.parser_util import add_in_out_sampling, add_diffusion_training_options, add_encoding_training_options\nfrom utils.utils import point2voxel, load_label, save_remap_lut\nfrom encoding.train_ae import get_pred_mask\nfrom utils import dist_util\nimport torch\nimport argparse\nimport numpy as np\n \ndef city_generate(m, scene, Generate_Scene, overlap, out_shape, H=128):\n    new_scene = scene.clone().detach()\n    if m == 'upleft':\n        left_cond = new_scene[:, overlap*2 : overlap*4 , overlap: overlap*3].detach().clone()\n        up_cond = new_scene[:, overlap : overlap*3, overlap*2 : overlap*4 ].detach().clone()\n        condition = torch.zeros(out_shape, device=dist_util.dev())    \n        \n        left_tri= Generate_Scene(left_cond, m, decode=False)\n        up_tri  = Generate_Scene(up_cond, m, decode=False)\n        condition[:, :, :, :int(overlap/2)] = left_tri[:, :, :, H-int(overlap/2):H].detach().clone()\n        condition[:, :, :int(overlap/2), :] = up_tri[:, :, H-int(overlap/2):H, :].detach().clone()\n        output = Generate_Scene(condition, m, encode=False)\n        new_scene[:, overlap*2 : overlap*4 , overlap*2 : overlap*4 , :] = output\n\n    elif m == 'upright' :\n        right_cond = new_scene[:, overlap*2 : overlap*4 , overlap : overlap*3, :].detach().clone()\n        up_cond = new_scene[:, overlap : overlap*3, :overlap*2 , :].detach().clone()\n        condition = torch.zeros(out_shape, device=dist_util.dev())    \n        \n        right_tri= Generate_Scene(right_cond, m, decode=False)\n        up_tri  = Generate_Scene(up_cond, m, decode=False)\n        condition[:, :, :, H-int(overlap/2):H] = right_tri[:, :, :, :int(overlap/2)].detach().clone()\n        condition[:, :, :int(overlap/2), :] = up_tri[:, :, H-int(overlap/2):H, :].detach().clone()\n        output = Generate_Scene(condition, m, encode=False)\n        new_scene[:, overlap*2  : overlap*4 , :overlap*2 , :] = output\n        \n    elif m == 'downright':\n        right_cond = new_scene[:, :overlap*2 , overlap : overlap*3].detach().clone()\n        down_cond = new_scene[:, overlap : overlap*3, :overlap*2 ].detach().clone()\n        condition = torch.zeros(out_shape, device=dist_util.dev())    \n        \n        right_tri= Generate_Scene(right_cond, m, decode=False)\n        down_tri  = Generate_Scene(down_cond, m, decode=False)\n        condition[:, :, :, H-int(overlap/2):H] = right_tri[:, :, :, :int(overlap/2)].detach().clone()\n        condition[:, :, H-int(overlap/2):H, :] = down_tri[:, :, :int(overlap/2), :].detach().clone()\n        output = Generate_Scene(condition, m, encode=False)\n        new_scene[:, : overlap*2 , :overlap*2 , :] = output\n    \n    elif m == 'downleft':\n        left_cond = new_scene[:, : overlap*2 , overlap: overlap*3].detach().clone()\n        down_cond = new_scene[:, overlap : overlap*3, overlap*2  : overlap*4 ].detach().clone()\n        condition = torch.zeros(out_shape, device=dist_util.dev())    \n        \n        left_tri= Generate_Scene(left_cond, m, decode=False)\n        down_tri  = Generate_Scene(down_cond, m, decode=False)\n        condition[:, :, :, :int(overlap/2)] = left_tri[:, :, :, H-int(overlap/2):H].detach().clone()\n        condition[:, :, H-int(overlap/2):H, :] = down_tri[:, :, :int(overlap/2), :].detach().clone()\n        output = Generate_Scene(condition, m, encode=False)\n        new_scene[:, :overlap*2 , overlap*2:overlap*4, :] = 
output\n\n    else :\n        condition = new_scene[:, overlap:3*overlap, overlap:3*overlap, :]\n        output = Generate_Scene(condition, m)\n        if m == 'down': new_scene[:, :2*overlap, overlap:3*overlap, :] = output\n        elif m == 'up': new_scene[:, 2*overlap:, overlap:3*overlap, :] = output\n        elif m == 'left': new_scene[:, overlap:3*overlap, 2*overlap:, :] = output\n        elif m == 'right': new_scene[:, overlap:3*overlap, :2*overlap, :] = output\n    return new_scene\n\nclass edit_scene(torch.nn.Module):\n    def __init__(self, args, ae, model, sample_fn, coords, query, out_shape, tri_size, overlap):\n        super().__init__()\n        self.args = args\n        self.overlap = overlap\n        self.model, self.ae = model, ae\n        self.sample_fn = sample_fn\n        self.coords, self.query = coords, query\n        self.out_shape = out_shape\n        self.tri_size = tri_size\n        H, W, D = tri_size\n        self.cond = {'y':np.zeros((1, H + D, H + D)), 'H':[H], 'W':[W], 'D':[D], 'path':0}\n\n    def encode(self, condition):\n        xy_feat, xz_feat, yz_feat = self.ae.encode(condition)\n        before_scene, _ = compose_featmaps(xy_feat, xz_feat, yz_feat, self.tri_size)\n        return before_scene\n    \n    def decode(self, samples):\n        xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, self.tri_size)\n        model_output = self.ae.decode([xy_feat, xz_feat, yz_feat], self.query)\n        sample = get_pred_mask(model_output)\n        output = point2voxel(self.args, sample, self.coords)\n        return output\n    \n    def forward(self, condition, m, encode=True, decode=True):\n        condition = condition.detach().clone()\n        with torch.no_grad():\n            if encode and decode :\n                before_scene = self.encode(condition)\n                samples = self.sample_fn(self.model, self.out_shape, model_kwargs=self.cond, cond=before_scene, mode = m, overlap=self.overlap)\n                output = self.decode(samples)\n            elif encode :\n                output = self.encode(condition)\n            elif decode:\n                samples = self.sample_fn(self.model, self.out_shape, model_kwargs=self.cond, cond=condition, mode = m, overlap=self.overlap)\n                output = self.decode(samples)\n        return output\n        \ndef outpaint(args):   \n    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args)    \n    args.grid_size = grid_size\n    voxel_label = load_label(args.load_path, learning_map, grid_size)\n        \n    scene = torch.zeros(1, 2*grid_size[1], 2*grid_size[1], grid_size[-1]).type(torch.LongTensor).to(dist_util.dev())\n    overlap = int(grid_size[1]/2)\n    scene[:, overlap : overlap*3, overlap : overlap*3, :] = voxel_label\n    \n    Generate_Scene = edit_scene(args, ae, model, sample_fn, coords, query, out_shape, (H, W, D), overlap)\n    \n    for m in ['down', 'left', 'right', 'up', 'downleft', 'downright', 'upleft', 'upright']:\n        print(\"Generating :\", m)\n        new_scene= city_generate(m, scene,  Generate_Scene, overlap, out_shape)\n        scene = new_scene\n    save_scene = save_remap_lut(args, scene, None, None, learning_map_inv, training=False)\n    save_scene.tofile(args.save_path+'/outpainting.label')\n\ndef sample_parser():\n    parser = argparse.ArgumentParser()\n    add_in_out_sampling(parser)\n    add_encoding_training_options(parser)\n    add_diffusion_training_options(parser)\n    
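# script-specific flags; encoder and diffusion options come from the shared option groups above\n    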
parser.add_argument(\"--save_path\", type=str, default = '')\n    parser.add_argument(\"--gpu_id\", default=0, type=int)\n\n    parser.add_argument(\"--load_path\", default='./dataset/001335.label')\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = sample_parser()\n    dist_util.setup_dist(args.gpu_id)\n    outpaint(args)\n    "
  },
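  {
    "path": "examples/outpainting_layout_demo.py",
    "content": "\"\"\"Standalone sketch (not part of the released pipeline) of how\nsampling/outpainting.py grows a scene to twice its footprint: the input scene\nis placed in the centre of a 2x canvas, and the eight border regions are then\ngenerated one at a time, conditioned on content that already exists next to\nthem.  This toy only copies voxels around on a 2-D grid; the real code\nconditions the triplane diffusion model instead.\"\"\"\nimport numpy as np\n\noverlap = 4  # toy value; outpaint() uses grid_size[1] // 2\ncanvas = np.zeros((4 * overlap, 4 * overlap), dtype=int)\nscene = np.ones((2 * overlap, 2 * overlap), dtype=int)\n\n# 1. place the known scene in the centre, as outpaint() does\ncanvas[overlap:3 * overlap, overlap:3 * overlap] = scene\n\n# 2. an edge region sees the centre block as its condition; copying it here is\n#    a stand-in for Generate_Scene(condition, m)\ncondition = canvas[overlap:3 * overlap, overlap:3 * overlap]\ncanvas[:2 * overlap, overlap:3 * overlap] = condition  # the 'down' write in city_generate\n\nprint(canvas)  # rows 0..2*overlap of the middle columns are now populated\n"
  },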
  {
    "path": "sampling/ssc_refine.py",
    "content": "from diffusion.triplane_util import build_sampling_model\nfrom utils.parser_util import add_encoding_training_options, add_diffusion_training_options, add_refine_options\nfrom utils.common_util import get_result\nfrom utils.utils import  save_remap_lut, point2voxel, unpack, load_label\nfrom dataset.tri_dataset_builder import TriplaneDataset\nfrom encoding.ssc_metrics import SSCMetrics\nfrom encoding.train_ae import get_pred_mask\nfrom diffusion.nn import decompose_featmaps\nfrom utils import dist_util\nfrom torch.utils.data import DataLoader\nfrom torch.utils.tensorboard import SummaryWriter\nimport torch\nimport os\nimport argparse\nimport numpy as np\nfrom tqdm.auto import tqdm\n    \ndef sample(args, tb):\n    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, class_name, args = build_sampling_model(args)\n    args.grid_size = grid_size\n    ds = TriplaneDataset(args, 'val')\n    dl = DataLoader(ds, batch_size = args.batch_size, shuffle = False, pin_memory = True)\n    tqdm_ = tqdm(dl)  \n    refine_evaluator, ssc_evaluator = SSCMetrics(args.num_class, []), SSCMetrics(args.num_class, [])\n    \n    with torch.no_grad():\n        for _, cond in tqdm_:\n            # load dataset\n            idx = cond['path'][0].split(\"/\")[-1].split(\".\")[0].split(\"_\")[0]\n            folder = cond['path'][0].split(\"/\")[-3]\n            os.umask(0)\n            os.makedirs(args.save_path+'/'+folder, mode=0o777, exist_ok=True)\n            save_path = os.path.join(args.save_path, f\"{folder}/{idx}.label\")\n            gt_path = os.path.join(args.data_path, f\"{folder}/voxels/{idx}.label\")\n            cond_path = os.path.join(args.data_path, f\"{folder}/{args.refine_dataset}/{idx}.label\")\n\n            vox_label = load_label(gt_path, learning_map, grid_size)\n            cond_label = load_label(cond_path, learning_map, grid_size)\n            invalid = torch.from_numpy(unpack(np.fromfile(gt_path.replace('label', 'invalid'), dtype=np.uint8)))\n            invalid = invalid.squeeze().type(torch.FloatTensor).cuda().reshape(grid_size)\n            masks = torch.from_numpy(refine_evaluator.get_eval_mask(vox_label.cpu().numpy(), invalid.cpu().numpy()))\n            \n            eval_label = vox_label[masks]\n            cond_eval_label = cond_label[masks]\n\n            # ssc refine\n            samples = sample_fn(model, out_shape, progress=False, model_kwargs=cond)            \n            xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, (H, W, D))\n            model_output = ae.decode([xy_feat, xz_feat, yz_feat], query)\n            sample = get_pred_mask(model_output)                \n            output = point2voxel(args, sample, coords)\n            eval_output = output[masks]\n            \n            this_iou, this_miou, _ = refine_evaluator.one_stats(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))\n            tqdm_.set_postfix({\"iou\": this_iou, \"miou\": this_miou})      \n            \n            sample = save_remap_lut(args, output, folder, idx, learning_map_inv, training=False)\n            sample.tofile(save_path)\n\n            ssc_evaluator.addBatch(cond_eval_label.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))\n            refine_evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))     \n             \n        get_result(ssc_evaluator, class_name, tb, args.save_path) \n        get_result(refine_evaluator, class_name, 
tb, args.save_path)        \n\ndef sample_parser():\n    parser = argparse.ArgumentParser()\n    add_encoding_training_options(parser)\n    add_diffusion_training_options(parser)\n    add_refine_options(parser)\n    parser.add_argument(\"--gpu_id\", default=0, type=int)\n    parser.add_argument(\"--refine_dataset\", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc'])\n    parser.add_argument(\"--save_path\", type=str, default = '')\n    args = parser.parse_args()\n    return args\n\nif __name__ == '__main__':\n    args = sample_parser()\n    dist_util.setup_dist(args.gpu_id)\n    tb = SummaryWriter(os.path.join(args.save_path, 'tb'))\n    sample(args, tb)"
  },
  {
    "path": "scripts/save_triplane.py",
    "content": "import torch\nimport numpy as np\nimport argparse\nfrom encoding.networks import AutoEncoderGroupSkip\nfrom diffusion.triplane_util import compose_featmaps\nfrom tqdm.auto import tqdm\nimport os\nfrom dataset.kitti_dataset import SemKITTI\nfrom dataset.carla_dataset import CarlaDataset\nfrom dataset.path_manager import *\nfrom pathlib import Path\n\ndef get_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--geo_feat_channels\", type=int, default=16, help=\"geometry feature dimension\")\n    parser.add_argument(\"--feat_channel_up\", type=int, default=64, help=\"conv feature dimension\")\n    parser.add_argument(\"--mlp_hidden_channels\", type=int, default=256, help=\"mlp hidden dimension\")\n    parser.add_argument(\"--mlp_hidden_layers\", type=int, default=4, help=\"mlp hidden layers\")\n    parser.add_argument(\"--z_down\", default=False)\n    parser.add_argument(\"--padding_mode\", default='replicate')\n    parser.add_argument('--lovasz', type=bool, default=True)\n\n    parser.add_argument(\"--dataset\", default='kitti', choices=['kitti', 'carla'])\n    parser.add_argument('--data_name', default='voxels')\n    parser.add_argument('--data_tail', default='.label')\n    parser.add_argument('--save_name', default='triplane')\n    parser.add_argument('--save_tail', default='_scpnet.npy')\n    parser.add_argument('--resume', default = '/home/jumin/Documents/Projects/SemCity/results/4_miou=81.715.pt')\n    \n    ### Ablation ###\n    parser.add_argument(\"--triplane\", type=bool, default=True)\n    parser.add_argument(\"--pos\", default=True, type=bool)\n    parser.add_argument(\"--voxel_fea\", default=False, type=bool)\n    args = parser.parse_args()\n    return args\n\n@torch.no_grad()\ndef save(args):    \n    if args.dataset == 'kitti':\n        dataset = SemKITTI(args, 'train', get_query=False, folder=args.data_name)\n        val_dataset = SemKITTI(args, 'val', get_query=False, folder=args.data_name)\n        tri_size = (128, 128, 16) if args.z_down else (128, 128, 32)\n\n    elif args.dataset == 'carla':\n        dataset = CarlaDataset(args, 'train', get_query=False)\n        val_dataset = CarlaDataset(args, 'val', get_query=False)\n        tri_size = (64, 64, 4) if args.z_down else (64, 64, 8)\n        \n    dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, num_workers=4)  #collate_fn=dataset.collate_fn, num_workers=4)\n    val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=1, shuffle=False, num_workers=4)  #collate_fn=dataset.collate_fn, num_workers=4)\n    \n    print(args.data_name)\n    print(f'The number of voxel labels is {len(dataset)}.')\n    print(f'Load autoencoder model from \"{args.resume}\"')\n    model = AutoEncoderGroupSkip(args)\n    model = model.cuda()\n    checkpoint = torch.load(args.resume)\n    model.load_state_dict(checkpoint['model'])\n    model.eval()\n\n    print(\"\\nSave Triplane...\")\n    for loader in [dataloader, val_dataloader]:\n        for vox, _, _, _, path, invalid in tqdm(loader):\n            # to gpu\n            vox = vox.type(torch.LongTensor).cuda()\n            invalid = invalid.type(torch.LongTensor).cuda()\n            vox[invalid == 1] = 0\n            triplane = model.encode(vox)\n            \n            if not args.voxel_fea :\n                triplane, _ = compose_featmaps(triplane[0].squeeze(), triplane[1].squeeze(), triplane[2].squeeze(), tri_size)\n\n            file_idx = str(Path(path[0]).stem.split('_')[0])  # e.g., 002165\n            
folder_idx = str(Path(path[0]).parent.parent.stem)  # e.g., 00\n            save_folder_path = os.path.join(args.save_path, folder_idx, args.save_name)  # e.g., /home/sebin/dataset/sequence/00/tri_1enc_1dec_0pad\n            os.makedirs(save_folder_path, exist_ok=True)\n            np.save(os.path.join(save_folder_path, file_idx +args.save_tail), triplane.cpu().numpy())   \n        \ndef main():\n    args = get_args()\n    if args.dataset == 'kitti':\n        args.num_class = 20\n        args.data_path=SEMKITTI_DATA_PATH\n        args.save_path=SEMKITTI_DATA_PATH\n        args.yaml_path=SEMKITTI_YAML_PATH\n    elif args.dataset == 'carla':\n        args.num_class = 11\n        args.data_path=CARLA_DATA_PATH\n        args.save_path=CARLA_DATA_PATH\n        args.yaml_path=CARLA_YAML_PATH\n    save(args)\n    \nif __name__ == '__main__':\n    main()\n"
  },
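  {
    "path": "examples/triplane_pack_demo.py",
    "content": "\"\"\"Standalone sketch of the triplane pack/unpack round trip.\n\ncompose_featmaps / decompose_featmaps live in diffusion/triplane_util.py; this\ntoy assumes one plausible layout (xy plane top-left, xz to its right, yz below)\nand only illustrates why a single 2-D map is a convenient diffusion input.  The\n(H+D, W+D) footprint matches the condition map built in sampling/outpainting.py\n(edit_scene.cond); the actual plane arrangement in the repo may differ.\"\"\"\nimport torch\n\ndef pack_planes(xy, xz, yz):\n    # xy: (C, H, W), xz: (C, H, D), yz: (C, D, W)  ->  (C, H+D, W+D)\n    C, H, W = xy.shape\n    D = xz.shape[-1]\n    out = torch.zeros(C, H + D, W + D)\n    out[:, :H, :W] = xy\n    out[:, :H, W:] = xz\n    out[:, H:, :W] = yz\n    return out\n\ndef unpack_planes(feat, H, W, D):\n    return feat[:, :H, :W], feat[:, :H, W:], feat[:, H:, :W]\n\nxy = torch.rand(8, 128, 128)  # SemanticKITTI triplane sizes without z_down\nxz = torch.rand(8, 128, 32)\nyz = torch.rand(8, 32, 128)\npacked = pack_planes(xy, xz, yz)\nassert packed.shape == (8, 160, 160)\na, b, c = unpack_planes(packed, 128, 128, 32)\nassert torch.equal(a, xy) and torch.equal(b, xz) and torch.equal(c, yz)\nprint('round trip ok:', packed.shape)\n"
  },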
  {
    "path": "scripts/train_ae_main.py",
    "content": "import argparse\nfrom encoding.train_ae import Trainer\nfrom dataset.path_manager import *\n\ndef get_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--geo_feat_channels\", type=int, default=16, help=\"geometry feature dimension\")\n    parser.add_argument(\"--feat_channel_up\", type=int, default=64, help=\"conv feature dimension\")\n    parser.add_argument(\"--mlp_hidden_channels\", type=int, default=256, help=\"mlp hidden dimension\")\n    parser.add_argument(\"--mlp_hidden_layers\", type=int, default=4, help=\"mlp hidden layers\")\n    parser.add_argument(\"--padding_mode\", default='replicate')\n    parser.add_argument(\"--bs\", type=int, default=4, help=\"batch size for autoencoding training\")\n    parser.add_argument(\"--dataset\", default='kitti', choices=['kitti', 'carla'])\n    parser.add_argument(\"--z_down\", default=False)\n\n    parser.add_argument(\"--lr\", type=float, default=0.001)\n    parser.add_argument(\"--lr_scheduler\", default=True)\n    parser.add_argument(\"--lr_scheduler_steps\", nargs='+', type=int, default=[30, 40])\n    parser.add_argument(\"--lr_scheduler_decay\", type=float, default=0.5)\n\n    parser.add_argument('--save_path', type=str, default='')\n    parser.add_argument('--resume', default = None)\n    parser.add_argument('--display_period', type=int, default=50)\n    parser.add_argument('--eval_epoch', type=int, default=1)\n    \n    ### Ablation ###\n    parser.add_argument(\"--triplane\", type=bool, default=True, help=\"use triplane feature, if False, use bev feature\")\n    parser.add_argument(\"--pos\", default=True, type=bool)\n    parser.add_argument(\"--voxel_fea\", default=False, type=bool, help=\"use 3d voxel feature\")\n    args = parser.parse_args()\n    return args\n\ndef main():\n    args = get_args()\n    if args.dataset == 'carla':\n        args.data_path=CARLA_DATA_PATH\n        args.yaml_path=CARLA_YAML_PATH\n\n    elif args.dataset == 'kitti':\n        args.data_path=SEMKITTI_DATA_PATH\n        args.yaml_path=SEMKITTI_YAML_PATH\n \n    trainer = Trainer(args)\n    trainer.train()\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "scripts/train_diffusion_main.py",
    "content": "from utils.parser_util import add_diffusion_training_options, add_encoding_training_options\nfrom dataset.tri_dataset_builder import TriplaneDataset\nfrom diffusion.script_util import create_model_and_diffusion_from_args\nfrom diffusion.resample import create_named_schedule_sampler\nfrom diffusion.train_util import TrainLoop\nfrom diffusion import logger\nfrom utils import dist_util\nfrom dataset.path_manager import *\nfrom utils.utils import cycle\nfrom torch.utils.data import DataLoader\nimport argparse\n\ndef train_diffusion(args) :\n    log_dir = args.save_path\n    logger.configure(dir=log_dir)\n    \n    ds = TriplaneDataset(args, 'train')\n    val_ds = TriplaneDataset(args, 'val')\n    collate_fn = None\n        \n    dl = DataLoader(ds, batch_size = args.batch_size, shuffle = True, pin_memory = True, collate_fn=collate_fn)\n    dl = cycle(dl)\n    val_dl = DataLoader(val_ds, batch_size = args.batch_size, shuffle = False, pin_memory = True, collate_fn=collate_fn)\n    val_dl = cycle(val_dl)\n\n    model, diffusion = create_model_and_diffusion_from_args(args)\n    model.to(dist_util.dev())\n    schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)\n\n    TrainLoop(\n        diffusion_net = args.diff_net_type, \n        triplane_loss_type = args.triplane_loss_type,\n        timestep_respacing = args.timestep_respacing,\n        training_step = args.steps, \n        model=model,\n        diffusion=diffusion,\n        data=dl,\n        val_data=val_dl,\n        ssc_refine = args.ssc_refine,\n        batch_size=args.batch_size,\n        microbatch=-1,\n        lr=args.diff_lr,\n        ema_rate=args.ema_rate,\n        log_interval=args.log_interval,\n        save_interval=args.save_interval,\n        resume_checkpoint=args.resume_checkpoint,\n        use_fp16=args.use_fp16,\n        schedule_sampler=schedule_sampler,\n        weight_decay=args.weight_decay,\n        lr_anneal_steps=args.diff_n_iters,\n    ).run_loop()\n    \nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    add_diffusion_training_options(parser)\n    parser.add_argument(\"--gpu_id\", default=0, type=int)\n    parser.add_argument(\"--save_path\", type=str, default='')\n    parser.add_argument('--ssc_refine', action='store_true')\n    parser.add_argument(\"--ssc_refine_dataset\", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc'])\n    \n    parser.add_argument(\"--dataset\", default='kitti', choices=['kitti', 'carla'])\n    parser.add_argument(\"--batch_size\", type=int, default=16, help=\"batch size for diffusion training\")\n    parser.add_argument(\"--resume_checkpoint\", type=str, default = None)\n    parser.add_argument(\"--triplane_loss_type\", type=str, default='l2', choices=['l1', 'l2'])\n    \n    add_encoding_training_options(parser)\n    parser.add_argument(\"--triplane\", default=True)\n    parser.add_argument(\"--pos\", default=True, type=bool)\n    parser.add_argument(\"--voxel_fea\", default=False, type=bool)\n    args = parser.parse_args()\n    \n    if args.dataset == 'carla':\n        args.data_path=CARLA_DATA_PATH\n        args.yaml_path=CARLA_YAML_PATH\n        \n    elif args.dataset == 'kitti':\n        args.data_path=SEMKITTI_DATA_PATH\n        args.yaml_path=SEMKITTI_YAML_PATH\n    \n    if args.voxel_fea :\n        args.diff_net_type = \"unet_voxel\"\n    else :\n        args.diff_net_type = \"unet_tri\" if args.triplane else \"unet_bev\"\n\n    #CUDA_VISIBLE_DEVICES=1\n    dist_util.setup_dist(args.gpu_id)\n 
   train_diffusion(args)\n"
  },
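  {
    "path": "examples/cycle_demo.py",
    "content": "\"\"\"Standalone sketch: the cycle() helper from utils/utils.py turns an\nepoch-based DataLoader into an endless stream, which lets TrainLoop in\nscripts/train_diffusion_main.py count optimiser steps instead of epochs.\nToy data only.\"\"\"\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\ndef cycle(dl):  # same generator as utils.utils.cycle\n    while True:\n        for data in dl:\n            yield data\n\nds = TensorDataset(torch.arange(6))\nstream = cycle(DataLoader(ds, batch_size=4, shuffle=False))\nfor step in range(4):  # draw more batches than one epoch holds\n    (batch,) = next(stream)\n    print(step, batch.tolist())\n"
  },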
  {
    "path": "setup.py",
    "content": "from setuptools import setup\n\nsetup(\n    name=\"SemCity\",\n    version = \"0.1\",\n    py_modules=[\"scripts\", \"dataset\", \"encoding\", \"diffusion\", \"sample\", \"utils\"]\n)"
  },
  {
    "path": "utils/common_util.py",
    "content": "import random\nimport numpy as np\nimport torch\nimport matplotlib.pyplot as plt\n\n\ndef seed_all(seed):\n    random.seed(seed)     # python random generator\n    np.random.seed(seed)  # numpy random generator\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    torch.backends.cudnn.deterministic = True\n    torch.backends.cudnn.benchmark = False\n\n\ndef draw_scalar_field2D(arr, vmin=None, vmax=None, cmap=None, title=None):\n    multi = max(arr.shape[0] // 512, 1)\n    fig, ax = plt.subplots(figsize=(5 * multi, 5 * multi))\n    cax1 = ax.matshow(arr, vmin=vmin, vmax=vmax, cmap=cmap)\n    fig.colorbar(cax1, ax=ax, fraction=0.046, pad=0.04)\n    fig.tight_layout()\n    if title is not None:\n        ax.set_title('08/'+str(title).zfill(6))\n    return fig\n\ndef get_result(evaluator, class_name):\n    _, class_jaccard = evaluator.getIoU()\n    m_jaccard = class_jaccard[1:].mean()\n    miou = m_jaccard * 100\n    conf = evaluator.get_confusion()\n    iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8) * 100\n    evaluator.reset()\n    \n    print(f\"mIoU: {miou:.2f}\")\n    print(f\"iou: {iou:.2f}\")\n\n    for i, c in enumerate(class_name) :\n        print(f\"{c}: {class_jaccard[i]*100:.2f}\")"
  },
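  {
    "path": "examples/completion_iou_demo.py",
    "content": "\"\"\"Worked toy for the scene-completion IoU computed in\nutils/common_util.py::get_result.  With class 0 = empty, every non-empty class\ncounts as 'occupied': conf[1:, 1:] sums all voxels that are occupied in both\nprediction and ground truth (class confusions included, matching the code),\nand the true empties conf[0, 0] are excluded from the union.  The 3-class\nconfusion matrix below is made up for illustration.\"\"\"\nimport numpy as np\n\nconf = np.array([[50,  2, 3],   # toy confusion matrix\n                 [ 4, 30, 1],\n                 [ 5,  0, 5]])\niou = np.sum(conf[1:, 1:]) / (np.sum(conf) - conf[0, 0] + 1e-8) * 100\nprint(f'completion IoU: {iou:.2f}')  # 36 / (100 - 50) -> 72.00\n"
  },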
  {
    "path": "utils/dist_util.py",
    "content": "\"\"\"\nHelpers for distributed training.\n\"\"\"\n\nimport socket\n\nimport os\nimport torch as th\nimport torch.distributed as dist\n\n# Change this to reflect your cluster layout.\n# The GPU for a given rank is (rank % GPUS_PER_NODE).\nGPUS_PER_NODE = 8\n\nSETUP_RETRY_COUNT = 3\n\nused_device = 0\n\ndef setup_dist(device=0):\n    \"\"\"\n    Setup a distributed process group.\n    \"\"\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(device) # f\"{MPI.COMM_WORLD.Get_rank() % GPUS_PER_NODE}\"\n\ndef dev():\n    \"\"\"\n    Get the device to use for torch.distributed.\n    \"\"\"\n    global used_device\n    if th.cuda.is_available() and used_device>=0:\n        return th.device(f\"cuda:{used_device}\")\n    return th.device(\"cpu\")\n\n\ndef load_state_dict(path, **kwargs):\n    \"\"\"\n    Load a PyTorch file without redundant fetches across MPI ranks.\n    \"\"\"\n    return th.load(path, **kwargs)\n\n\ndef sync_params(params):\n    \"\"\"\n    Synchronize a sequence of Tensors across ranks from rank 0.\n    \"\"\"\n    for p in params:\n        with th.no_grad():\n            dist.broadcast(p, 0)\n"
  },
  {
    "path": "utils/parser_util.py",
    "content": "import argparse\nimport json\nfrom dataset.path_manager import *\nimport numpy as np\nfrom utils.utils import read_semantickitti_yaml\nimport yaml\n\n\ndef add_encoding_training_options(parser):\n    group = parser.add_argument_group(\"encoding\")\n    group.add_argument(\"--feat_channel_up\", type=int, default=64, help=\"conv feature dimension\")\n    group.add_argument(\"--mlp_hidden_channels\", type=int, default=256, help=\"mlp hidden dimension\")\n    group.add_argument(\"--mlp_hidden_layers\", type=int, default=4, help=\"mlp hidden layers\")\n    group.add_argument(\"--invalid_class\", type=bool, default=False)\n    group.add_argument(\"--padding_mode\", default='replicate')\n    group.add_argument(\"--lovasz\", default=True)\n    group.add_argument(\"--geo_feat_channels\", type=int, default=16, help=\"geometry feature dimension\")\n    group.add_argument(\"--z_down\", default=False)\n\ndef add_diffusion_training_options(parser):\n    group = parser.add_argument_group(\"diffusion\")\n    group.add_argument(\"--steps\", type=int, default=100, help=\"diffusion step\")\n    group.add_argument(\"--is_rollout\", type=bool, default=True)\n    group.add_argument('--mult_channels', default=(1, 2, 4))\n    group.add_argument(\"--diff_lr\", type=float, default=5e-4, help=\"initial learning rate for diffusion training\")\n    group.add_argument(\"--schedule_sampler\", type=str, default=\"uniform\", help=\"schedule sampler\")\n    group.add_argument(\"--ema_rate\", type=float, default=0.9999, help=\"ema rate\")\n    group.add_argument(\"--weight_decay\", type=float, default=0.0, help=\"weight decay\")\n    group.add_argument(\"--log_interval\", type=int, default=500, help=\"log interval\")\n    group.add_argument(\"--save_interval\", type=int, default=1000, help=\"save interval\")\n    group.add_argument(\"--use_fp16\", type=bool, default=False)\n    group.add_argument(\"--predict_xstart\", type=bool, default=True)\n    group.add_argument(\"--learn_sigma\", type=bool, default=False)\n    group.add_argument(\"--timestep_respacing\", default='')\n    group.add_argument(\"--use_ddim\", type=str2bool, default=False, help=\"use ddim\")\n    group.add_argument(\"--conv_down\", default=True)\n    group.add_argument(\"--diff_n_iters\", type=int, default=50000, help=\"lr ann eal steps for diffusion training\")\n    group.add_argument(\"--tri_z_down\", default=False)\n    group.add_argument('--tri_unet_updown', type=bool, default=True)\n    group.add_argument(\"--model_channels\", default=64, help=\"model channels\")\n    \ndef add_generation_options(parser):\n    group = parser.add_argument_group(\"sampling\")\n    group.add_argument(\"--triplane\", default=True)\n    group.add_argument(\"--pos\", default=True, type=bool)\n    group.add_argument(\"--voxel_fea\", default=False)\n    group.add_argument('--ssc_refine', default=False, type=bool)\n    group.add_argument(\"--refine_dataset\", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc', 'lmsc', 'motionsc', 'sscfull'])\n    group.add_argument(\"--triplane_loss_type\", type=str, default='l2', choices=['l1',  'l2',])\n    group.add_argument(\"--batch_size\", type=int, default=1)\n    group.add_argument(\"--diff_net_type\", type=str, default='unet_tri')\n    group.add_argument(\"--repaint\", default=False, type=bool)\n\ndef add_refine_options(parser):\n    group = parser.add_argument_group(\"sampling\")\n    group.add_argument(\"--triplane\", default=True)\n    group.add_argument(\"--pos\", default=True, type=bool)\n    
group.add_argument(\"--voxel_fea\", default=False)\n    group.add_argument('--ssc_refine', default=True, type=bool)\n    group.add_argument(\"--dataset\",  default='kitti')\n    group.add_argument(\"--triplane_loss_type\", type=str, default='l2', choices=['l1',  'l2',])\n    group.add_argument(\"--diff_net_type\", type=str, default='unet_tri')\n    group.add_argument(\"--repaint\", default=False, type=bool)\n    group.add_argument(\"--batch_size\", type=int, default=1)\n\ndef add_in_out_sampling(parser):\n    group = parser.add_argument_group(\"sampling\")    \n    group.add_argument(\"--triplane\", default=True)\n    group.add_argument(\"--pos\", default=True, type=bool)\n    group.add_argument(\"--voxel_fea\", default=False)\n    group.add_argument('--ssc_refine', default=False, type=bool)\n    group.add_argument(\"--refine_dataset\", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc', 'lmsc', 'motionsc', 'sscfull'])\n    group.add_argument(\"--triplane_loss_type\", type=str, default='l2', choices=['l1',  'l2',])\n    group.add_argument(\"--batch_size\", type=int, default=1)\n    group.add_argument(\"--diff_net_type\", type=str, default='unet_tri')\n    group.add_argument(\"--repaint\", default=True, type=bool)\n    group.add_argument(\"--dataset\",  default='kitti')\n\n\ndef get_gen_args(args):\n    if args.dataset == 'kitti' :\n        if args.z_down : H, W, D = 128 ,128, 16 \n        else : H, W, D = 128, 128, 32\n        learning_map, learning_map_inv = read_semantickitti_yaml()\n        grid_size = (1, 256, 256, 32)\n        class_name = [\n                'car', 'bicycle', 'motorcycle', 'truck', 'other-vehicle', 'person', 'bicyclist',\n                'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground', 'building', 'fence',\n                'vegetation', 'trunk', 'terrain', 'pole', 'traffic-sign'\n            ]\n        tri_size = (128, 128, 16) if args.z_down else (128, 128, 32)\n        num_class = 20\n        max_points = 400000\n        \n    elif args.dataset == 'carla' : \n        if args.z_down : H, W, D = 64 ,64, 4\n        else : H, W, D = 64, 64, 8\n        with open(args.yaml_path, 'r') as stream:\n            data_yaml = yaml.safe_load(stream)\n        label_remap = data_yaml[\"learning_map\"]  \n        learning_map = np.asarray(list(label_remap.values()))\n        learning_map_inv = None\n        class_name = ['building', 'barrier', 'other', 'pedestrian', 'pole', 'road', 'ground', 'sidewalk', 'vegetation', 'vehicle']\n        grid_size = (1, 128, 128, 8)\n        tri_size = (64, 64, 4) if args.z_down else (64, 64, 8)\n        num_class = 11\n        max_points = 70000\n        \n    return H, W, D, learning_map, learning_map_inv, class_name, grid_size, tri_size, num_class, max_points\n        \n\ndef diffusion_defaults():\n    return dict(\n        learn_sigma=False,\n        noise_schedule=\"linear\",\n        timestep_respacing=\"\",\n        use_kl=False,\n        rescale_timesteps=False,\n        rescale_learned_sigmas=False,\n    )\n\n\ndef diffusion_model_defaults():\n    return dict(\n        in_channels=8,\n        out_channels=8,\n        num_res_blocks=1,\n        dropout=0,\n        use_checkpoint=False,\n        use_fp16=False,\n        use_scale_shift_norm=True,\n    )\n\n\ndef get_args_by_group(parser, args, group_name):\n    for group in parser._action_groups:\n        if group.title == group_name:\n            group_dict = {a.dest: getattr(args, a.dest, None) for a in group._group_actions}\n            return 
group_dict\n    return ValueError('group_name was not found.')\n\n\ndef load_and_overwrite_args(args, path, ignore_keys=[]):\n    with open(path, \"r\") as f:\n        overwrite_args = json.load(f)\n    for k, v in overwrite_args.items():\n        if k not in ignore_keys:\n            setattr(args, k, v)\n    return args\n\n\ndef add_dict_to_argparser(parser, default_dict):\n    for k, v in default_dict.items():\n        v_type = type(v)\n        if v is None:\n            v_type = str\n        elif isinstance(v, bool):\n            v_type = str2bool\n        parser.add_argument(f\"--{k}\", default=v, type=v_type)\n\n\ndef args_to_dict(args, keys):\n    return {k: getattr(args, k) for k in keys}\n\n\ndef str2bool(v):\n    \"\"\"\n    https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse\n    \"\"\"\n    if isinstance(v, bool):\n        return v\n    if v.lower() in (\"yes\", \"true\", \"t\", \"y\", \"1\"):\n        return True\n    elif v.lower() in (\"no\", \"false\", \"f\", \"n\", \"0\"):\n        return False\n    else:\n        raise argparse.ArgumentTypeError(\"boolean value expected\")\n"
  },
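  {
    "path": "examples/str2bool_demo.py",
    "content": "\"\"\"Standalone sketch of why utils/parser_util.py defines str2bool: argparse's\ntype=bool calls bool() on the raw string, and bool() of any non-empty string\nis True, so '--flag False' would still yield True.  str2bool below is copied\nfrom utils/parser_util.py so this file runs on its own.\"\"\"\nimport argparse\n\ndef str2bool(v):\n    if isinstance(v, bool):\n        return v\n    if v.lower() in ('yes', 'true', 't', 'y', '1'):\n        return True\n    if v.lower() in ('no', 'false', 'f', 'n', '0'):\n        return False\n    raise argparse.ArgumentTypeError('boolean value expected')\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--naive', type=bool, default=True)\nparser.add_argument('--safe', type=str2bool, default=True)\nargs = parser.parse_args(['--naive', 'False', '--safe', 'False'])\nassert args.naive is True   # surprising: bool('False') == True\nassert args.safe is False   # str2bool parses the spelling\nprint(args)\n"
  },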
  {
    "path": "utils/utils.py",
    "content": "from prettytable import PrettyTable\nimport os\nimport torch\nimport yaml\nimport numpy as np\nfrom functools import lru_cache\nfrom dataset.path_manager import *\n\ndef read_semantickitti_yaml():\n    with open(SEMKITTI_YAML_PATH, 'r') as stream:\n        semkittiyaml = yaml.safe_load(stream)\n    learning_map_inv = semkittiyaml[\"learning_map_inv\"]\n    learning_map = semkittiyaml['learning_map']\n\n    maxkey = max(learning_map.keys())\n    remap_lut = np.zeros((maxkey + 100), dtype=np.int32)\n    remap_lut[list(learning_map.keys())] = list(learning_map.values())\n    remap_lut[remap_lut == 0] = 255  # map 0 to 'invalid'\n    remap_lut[0] = 0\n    return remap_lut, learning_map_inv\n\ndef unpack(compressed):\n    ''' given a bit encoded voxel grid, make a normal voxel grid out of it.  '''\n    uncompressed = np.zeros(compressed.shape[0] * 8, dtype=np.uint8)\n    uncompressed[::8] = compressed[:] >> 7 & 1\n    uncompressed[1::8] = compressed[:] >> 6 & 1\n    uncompressed[2::8] = compressed[:] >> 5 & 1\n    uncompressed[3::8] = compressed[:] >> 4 & 1\n    uncompressed[4::8] = compressed[:] >> 3 & 1\n    uncompressed[5::8] = compressed[:] >> 2 & 1\n    uncompressed[6::8] = compressed[:] >> 1 & 1\n    uncompressed[7::8] = compressed[:] & 1\n    return uncompressed\n\ndef load_label(path, learning_map, grid_size):\n    label = np.fromfile(path, dtype=np.uint16).reshape((-1, 1))\n    label = learning_map[label]\n    label = torch.from_numpy(label).squeeze().type(torch.LongTensor).cuda().reshape(grid_size)\n    label[label==255]=0\n    return label\n\ndef write_result(args):\n    os.umask(0)\n    os.makedirs(args.save_path, mode=0o777, exist_ok=True)\n    args_table = PrettyTable(['Arg', 'Value'])\n    for arg, val in vars(args).items():\n        args_table.add_row([arg, val])\n    with open(os.path.join(args.save_path, 'results.txt'), \"w\") as f:\n        f.write(str(args_table))\n\ndef point2voxel(args, preds, coords):\n    if len(args.grid_size)==4:\n        output = torch.zeros((preds.shape[0], args.grid_size[1], args.grid_size[2], args.grid_size[3]), device=preds.device)\n    else :\n        output = torch.zeros((preds.shape[0], args.grid_size[0], args.grid_size[1], args.grid_size[2]), device=preds.device)\n    for i in range(preds.shape[0]):\n        output[i, coords[i, :, 0], coords[i, :, 1], coords[i, :, 2]] = preds[i]\n    return output\n\ndef visualization(args, coords, preds, folder, idx, learning_map_inv, training):\n    output = point2voxel(args, preds, coords)\n    return save_remap_lut(args, output, folder, idx, learning_map_inv, training)\n\ndef save_remap_lut(args, pred, folder, idx, learning_map_inv, training, make_numpy=True):\n    if make_numpy:\n        pred = pred.cpu().long().data.numpy()\n\n    if learning_map_inv is not None:\n        maxkey = max(learning_map_inv.keys())\n        # +100 hack making lut bigger just in case there are unknown labels\n        remap_lut_First = np.zeros((maxkey + 100), dtype=np.int32)\n        remap_lut_First[list(learning_map_inv.keys())] = list(learning_map_inv.values())\n\n        pred = pred.astype(np.uint32)\n        pred = pred.reshape((-1))\n        upper_half = pred >> 16  # get upper half for instances\n        lower_half = pred & 0xFFFF  # get lower half for semantics\n        lower_half = remap_lut_First[lower_half]  # do the remapping of semantics\n        pred = (upper_half << 16) + lower_half  # reconstruct full label\n        pred = pred.astype(np.uint32)\n\n    if training:\n        final_preds = 
pred.astype(np.uint16)        \n        os.umask(0)\n        os.makedirs(args.save_path+'/sample/'+str(folder), mode=0o777, exist_ok=True)\n        if torch.is_tensor(idx):\n            save_path = args.save_path+'/sample/'+str(folder)+'/'+str(idx.item()).zfill(3)+'.label'\n        else : \n            save_path = args.save_path+'/sample/'+str(folder)+'/'+str(idx).zfill(3)+'.label'\n        final_preds.tofile(save_path)\n    else:\n        return pred.astype(np.uint16)  \n    \n\ndef cycle(dl):\n    while True:\n        for data in dl:\n            yield data\n\n@lru_cache(4)\ndef voxel_coord(voxel_shape):\n    x = np.arange(voxel_shape[0])\n    y = np.arange(voxel_shape[1])\n    z = np.arange(voxel_shape[2])\n    Y, X, Z = np.meshgrid(x, y, z)\n    voxel_coord = np.concatenate((X[..., None], Y[..., None], Z[..., None]), axis=-1)\n    return voxel_coord\n\n\ndef make_query(grid_size):\n    gs = grid_size[1:]\n    coords = torch.from_numpy(voxel_coord(gs))\n    coords = coords.reshape(-1, 3)\n    query = torch.zeros(coords.shape, dtype=torch.float32)\n    query[:,0] = 2*coords[:,0]/float(gs[0]-1) -1\n    query[:,1] = 2*coords[:,1]/float(gs[1]-1) -1\n    query[:,2] = 2*coords[:,2]/float(gs[2]-1) -1\n    \n    query = query.reshape(-1, 3)\n    return coords.unsqueeze(0), query.unsqueeze(0)\n\n   "
  }
]