Repository: zoomin-lee/SemCity Branch: main Commit: 5d317202a662 Files: 41 Total size: 243.3 KB Directory structure: gitextract_3tcnvc5c/ ├── .gitignore ├── License.txt ├── Readme.md ├── dataset/ │ ├── 001335.label │ ├── carla.yaml │ ├── carla_dataset.py │ ├── dataset.md │ ├── dataset_builder.py │ ├── kitti_dataset.py │ ├── path_manager.py │ ├── semantic-kitti.yaml │ └── tri_dataset_builder.py ├── diffusion/ │ ├── fp16_util.py │ ├── gaussian_diffusion.py │ ├── logger.py │ ├── losses.py │ ├── nn.py │ ├── resample.py │ ├── respace.py │ ├── scheduler.py │ ├── script_util.py │ ├── train_util.py │ ├── triplane_util.py │ └── unet_triplane.py ├── encoding/ │ ├── blocks.py │ ├── lovasz.py │ ├── networks.py │ ├── ssc_metrics.py │ └── train_ae.py ├── sampling/ │ ├── generation.py │ ├── inpainting.py │ ├── outpainting.py │ └── ssc_refine.py ├── scripts/ │ ├── save_triplane.py │ ├── train_ae_main.py │ └── train_diffusion_main.py ├── setup.py └── utils/ ├── common_util.py ├── dist_util.py ├── parser_util.py └── utils.py ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ __pycache__/ tb/ *.egg-info/ .idea/ ================================================ FILE: License.txt ================================================ MIT License Copyright (c) 2024 Jumin Lee Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ================================================ FILE: Readme.md ================================================

# SemCity: Semantic Scene Generation with Triplane Diffusion

![fig0](./figs/semcity.gif)

> SemCity: Semantic Scene Generation with Triplane Diffusion
>
> Jumin Lee*, Sebin Lee*, Changho Jo, Woobin Im, Juhyeong Seon and Sung-Eui Yoon*

[Paper](https://arxiv.org/abs/2403.07773) | [Project Page](https://sglab.kaist.ac.kr/SemCity)

## 📌 Setup

We tested our code on Ubuntu 20.04 with a single RTX 3090 or 4090 GPU.

### Environment

```
git clone https://github.com/zoomin-lee/SemCity.git
conda create -n semcity
conda activate semcity
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install blobfile matplotlib prettytable tensorboard tensorboardX scikit-learn tqdm
pip install --user -e .
```

### Datasets

We use the SemanticKITTI and CarlaSC datasets. See [dataset.md](./dataset/dataset.md) for the detailed data structure, and set the path of your `sequences` folder in `dataset/path_manager.py`.

## 📌 Training

Train the triplane autoencoder first, then the triplane diffusion model. Select the dataset with `--dataset kitti` or `--dataset carla`. In/outpainting and semantic scene completion (SSC) refinement are only supported on the SemanticKITTI dataset.

### Triplane Autoencoder

```
python scripts/train_ae_main.py --save_path exp/ae
```

Once the triplane autoencoder is trained, save the triplanes; a triplane is a proxy representation of a scene used to train the triplane diffusion model.

```
python scripts/save_triplane.py --data_name voxels --save_tail .npy --resume {ae.pt path}
```

If you want to train SSC refinement, also save triplanes for the results of an SSC method (e.g., MonoScene).

```
python scripts/save_triplane.py --data_name monoscene --save_tail _monoscene.npy --resume {ae.pt path}
```

### Triplane Diffusion

To train for semantic scene generation or in/outpainting:

```
python scripts/train_diffusion_main.py --triplane_loss_type l2 --save_path exp/diff
```

To train for SSC refinement:

```
python scripts/train_diffusion_main.py --ssc_refine --refine_dataset monoscene --triplane_loss_type l1 --save_path exp/diff
```

## 📌 Sampling

In `dataset/path_manager.py`, point `AE_PATH` at the trained triplane autoencoder `.pt` file, and `GEN_DIFF_PATH`/`SSC_DIFF_PATH` at the corresponding triplane diffusion `.pt` files.

![fig1](./figs/semcity.png)

To generate a 3D semantic scene, as in `fig(a)`:

```
python sampling/generation.py --num_samples 10 --save_path exp/gen
```

For semantic scene completion refinement, as in `fig(b)`:

```
python sampling/ssc_refine.py --refine_dataset monoscene --save_path exp/ssc_refine
```

For outpainting, we currently release only the code that outpaints to twice the size of the original scene:

```
python sampling/outpainting.py --load_path figs/000840.label --save_path exp/out
```

For inpainting, as in `fig(d)`, you can choose the region (top right, top left, bottom right, or bottom left) to regenerate:

```
python sampling/inpainting.py --load_path figs/000840.label --save_path exp/in
```

## 📌 Evaluation

We render our scenes with [pyrender](https://pyrender.readthedocs.io/en/latest/index.html) and then evaluate the renderings with [torch-fidelity](https://github.com/toshas/torch-fidelity).
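For reference, a minimal sketch of this evaluation step, assuming the generated and real scenes have already been rendered to two image folders (the paths below are placeholders, not outputs of this repo):

```python
# Sketch only: compute FID between two folders of rendered views
# with torch-fidelity. Folder paths are hypothetical.
import torch_fidelity

metrics = torch_fidelity.calculate_metrics(
    input1='exp/gen/renders',    # hypothetical: renderings of generated scenes
    input2='data/gt_renders',    # hypothetical: renderings of real scenes
    cuda=True,
    fid=True,
)
print(metrics['frechet_inception_distance'])
```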
## Acknowledgement

The code is partly based on [guided-diffusion](https://github.com/openai/guided-diffusion), [Sin3DM](https://github.com/Sin3DM/Sin3DM) and [scene-scale-diffusion](https://github.com/zoomin-lee/scene-scale-diffusion).

## Bibtex

If you find this code useful for your research, please consider citing our paper:

```bibtex
@inproceedings{lee2024semcity,
  title={SemCity: Semantic Scene Generation with Triplane Diffusion},
  author={Lee, Jumin and Lee, Sebin and Jo, Changho and Im, Woobin and Seon, Juhyeong and Yoon, Sung-Eui},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  year={2024}
}
```

## 📌 License

This project is released under the MIT License.

================================================
FILE: dataset/carla.yaml
================================================
color_map :
  0 : [255, 255, 255]   # None
  1 : [70, 70, 70]      # Building
  2 : [100, 40, 40]     # Fences
  3 : [55, 90, 80]      # Other
  4 : [255, 255, 0]     # Pedestrian
  5 : [153, 153, 153]   # Pole
  6 : [157, 234, 50]    # RoadLines
  7 : [0, 0, 255]       # Road
  8 : [255, 255, 255]   # Sidewalk
  9 : [0, 155, 0]       # Vegetation
  10 : [255, 0, 0]      # Vehicle
  11 : [102, 102, 156]  # Wall
  12 : [220, 220, 0]    # TrafficSign
  13 : [70, 130, 180]   # Sky
  14 : [255, 255, 255]  # Ground
  15 : [150, 100, 100]  # Bridge
  16 : [230, 150, 140]  # RailTrack
  17 : [180, 165, 180]  # GuardRail
  18 : [250, 170, 30]   # TrafficLight
  19 : [110, 190, 160]  # Static
  20 : [170, 120, 50]   # Dynamic
  21 : [45, 60, 150]    # Water
  22 : [145, 170, 100]  # Terrain

learning_map :
  0 : 0
  1 : 1
  2 : 2
  3 : 3
  4 : 4
  5 : 5
  6 : 6
  7 : 6
  8 : 8
  9 : 9
  10 : 10
  11 : 2
  12 : 5
  13 : 3
  14 : 7
  15 : 3
  16 : 3
  17 : 2
  18 : 5
  19 : 3
  20 : 3
  21 : 3
  22 : 7

remap_color_map:
  0 : [255, 255, 255]   # None
  1 : [70, 70, 70]      # Building
  2 : [100, 40, 40]     # Fences
  3 : [55, 90, 80]      # Other
  4 : [255, 255, 0]     # Pedestrian
  5 : [153, 153, 153]   # Pole
  6 : [0, 0, 255]       # Road
  7 : [145, 170, 100]   # Ground
  8 : [240, 240, 240]   # Sidewalk
  9 : [0, 155, 0]       # Vegetation
  10 : [255, 0, 0]      # Vehicle

label_to_names:
  0 : Free
  1 : Building
  2 : Barrier
  3 : Other
  4 : Pedestrian
  5 : Pole
  6 : Road
  7 : Ground
  8 : Sidewalk
  9 : Vegetation
  10 : Vehicle

content :
  0 : 4166593275
  1 : 42309744
  2 : 8550180
  3 : 478193
  4 : 905663
  5 : 2801091
  6 : 6452733
  7 : 229316930
  8 : 112863867
  9 : 29816894
  10 : 13839655
  11 : 15581458
  12 : 221821
  13 : 0
  14 : 7931550
  15 : 467989
  16 : 3354
  17 : 9201043
  18 : 61011
  19 : 3796746
  20 : 3217865
  21 : 215372
  22 : 79669695

remap_content :
  0 : 4.16659328e+09
  1 : 4.23097440e+07
  2 : 3.33326810e+07
  3 : 8.17951900e+06
  4 : 9.05663000e+05
  5 : 3.08392300e+06
  6 : 2.35769663e+08
  7 : 8.76012450e+07
  8 : 1.12863867e+08
  9 : 2.98168940e+07
  10 : 1.38396550e+07

split: # sequence names
  train:
    - Town01_Heavy
    - Town02_Heavy
    - Town03_Heavy
    - Town04_Heavy
    - Town05_Heavy
    - Town06_Heavy
    - Town01_Medium
    - Town02_Medium
    - Town03_Medium
    - Town04_Medium
    - Town05_Medium
    - Town06_Medium
    - Town01_Light
    - Town02_Light
    - Town03_Light
    - Town04_Light
    - Town05_Light
    - Town06_Light
  valid:
    - Town10_Heavy
    - Town10_Medium
    - Town10_Light
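The `remap_content` counts above are the per-class voxel totals that `dataset/carla_dataset.py` (the next file) hard-codes as `complt_num_per_class` to build cube-root inverse-frequency loss weights. A minimal sketch of that computation, assuming the counts are read from this YAML instead:

```python
# Sketch: reproduce the class-weight formula used in dataset/carla_dataset.py,
# weights = (max(freq) / freq) ** (1/3), from the remap_content counts above.
import numpy as np
import yaml

cfg = yaml.safe_load(open('dataset/carla.yaml', 'r'))
counts = np.asarray(list(cfg['remap_content'].values()), dtype=np.float64)
freq = counts / counts.sum()                      # per-class frequency
weights = np.power(freq.max() / freq, 1.0 / 3.0)  # rarer classes -> larger weight
print(weights.round(2))                           # class 0 (Free) gets weight 1.0
```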
================================================
FILE: dataset/carla_dataset.py
================================================
import os
import numpy as np
import json
import yaml
import torch
import pathlib
from torch.utils.data import Dataset
from dataset.kitti_dataset import flip, get_query


class CarlaDataset(Dataset):
    def __init__(self, args, imageset='train', get_query=True):
        self.get_query = get_query
        carla_config = yaml.safe_load(open(args.yaml_path, 'r'))
        label_remap = carla_config["learning_map"]
        self.learning_map = np.asarray(list(label_remap.values()))
        self.learning_map_inv = None

        if imageset == 'train':
            split = carla_config['split']['train']
        elif imageset == 'val':
            split = carla_config['split']['valid']

        complt_num_per_class = np.asarray([4.16659328e+09, 4.23097440e+07, 3.33326810e+07, 8.17951900e+06,
                                           9.05663000e+05, 3.08392300e+06, 2.35769663e+08, 8.76012450e+07,
                                           1.12863867e+08, 2.98168940e+07, 1.38396550e+07])
        compl_labelweights = complt_num_per_class / np.sum(complt_num_per_class)
        self.weights = torch.Tensor(np.power(np.amax(compl_labelweights) / compl_labelweights, 1 / 3.0)).cuda()

        self.imageset = imageset
        param_file = os.path.join(args.data_path, split[0], 'voxels', 'params.json')
        with open(param_file) as f:
            self._eval_param = json.load(f)
        self._grid_size = self._eval_param['grid_size']
        self._eval_size = list(np.uint32(self._grid_size))

        self.im_idx = []
        for i_folder in split:
            complete_path = os.path.join(args.data_path, str(i_folder), 'voxels')
            files = list(pathlib.Path(complete_path).glob('*.label'))
            for filename in files:
                # if int(str(filename).split('/')[-1].split('.')[0]) % 5 == 0:
                self.im_idx.append(str(filename))  # use all frames; if there is no data then zero pad

    def __len__(self):
        return len(self.im_idx)

    def __getitem__(self, index):
        voxel_label = np.fromfile(self.im_idx[index], dtype=np.uint32).reshape(self._eval_size).astype(np.uint8)
        valid = np.fromfile(self.im_idx[index].replace("label", 'bin'), dtype=np.float32).reshape(self._eval_size)
        voxel_label = self.learning_map[voxel_label].astype(np.uint8)

        if self.imageset == 'train':
            p = torch.randint(0, 6, (1,)).item()
            if p == 0:
                voxel_label, valid = flip(voxel_label, valid, flip_dim=0)
            elif p == 1:
                voxel_label, valid = flip(voxel_label, valid, flip_dim=1)
            elif p == 2:
                voxel_label, valid = flip(voxel_label, valid, flip_dim=0)
                voxel_label, valid = flip(voxel_label, valid, flip_dim=1)

        invalid = torch.zeros_like(torch.from_numpy(valid))
        invalid[torch.from_numpy(valid) == 0] = 1
        invalid = invalid.numpy()

        if self.get_query:
            query, xyz_label, xyz_center = get_query(voxel_label, 11, (128, 128, 8), 80000)
        else:
            query, xyz_label, xyz_center = torch.zeros(1), torch.zeros(1), torch.zeros(1)
        return voxel_label, query, xyz_label, xyz_center, self.im_idx[index], invalid

================================================
FILE: dataset/dataset.md
================================================
## Datasets

Datasets should have the following structure. The `triplane` folder is created by `scripts/save_triplane.py` after running `scripts/train_ae_main.py`.

### SemanticKITTI

You can download the SemanticKITTI dataset from [here](http://www.semantic-kitti.org/assets/data_odometry_voxels_all.zip). If you want to do semantic scene completion refinement, place the `.label` files from an SSC method (e.g., [monoscene](https://github.com/astra-vision/MonoScene), [occdepth](https://github.com/megvii-research/OccDepth), [scpnet](https://github.com/SCPNet/Codes-for-SCPNet), [ssasc](https://github.com/jokester-zzz/ssa-sc)) in the following structure.

```
/dataset/
└── sequences/
    ├── 00/
    │   ├── voxels/
    │   │   ├ 000000.label
    │   │   ├ 000000.invalid
    │   ├── monoscene/
    │   │   ├ 000000.label
    │   ├── occdepth/
    │   │   ├ 000000.label
    │   ├── scpnet/
    │   │   ├ 000000.label
    │   ├── ssasc/
    │   │   ├ 000000.label
    │   └── triplane/
    │       ├ 000000.npy
    │       ├ 000000_monoscene.npy
    │       ├ 000000_occdepth.npy
    │       ├ 000000_scpnet.npy
    │       ├ 000000_ssasc.npy
    ├── 01/
    .
    .
    └── 10/
```
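For reference, a minimal sketch of reading one frame in this layout, mirroring the loading code in `dataset/kitti_dataset.py` (the sequence/frame path is illustrative):

```python
# Sketch: load one SemanticKITTI voxel frame (256 x 256 x 32 grid).
import numpy as np

# Raw semantic labels, before remapping with learning_map.
label = np.fromfile('sequences/00/voxels/000000.label', dtype=np.uint16).reshape(256, 256, 32)

# The .invalid file packs one bit per voxel, most significant bit first;
# np.unpackbits matches the manual bit-unpacking in SemKITTI.unpack().
packed = np.fromfile('sequences/00/voxels/000000.invalid', dtype=np.uint8)
invalid = np.unpackbits(packed).reshape(256, 256, 32)  # 1 = unknown voxel
```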
### CarlaSC

You can download the CarlaSC Cartesian dataset from [here](https://umich-curly.github.io/CarlaSC.github.io/download/). The structure differs slightly from the original CarlaSC release to align with the SemanticKITTI layout: the `voxels` folder was originally the `evaluation` folder, which contains the ground truth for semantic scene completion.

```
/carla/
└── sequences/
    ├── Town01_Heavy/
    │   ├── voxels/
    │   │   ├ 000000.label
    │   │   ├ 000000.bin
    │   └── triplane/
    │       ├ 000000.npy
    ├── Town01_Medium/
    .
    .
    └── Town10_Light/
```

================================================
FILE: dataset/dataset_builder.py
================================================
from dataset.kitti_dataset import SemKITTI
from dataset.carla_dataset import CarlaDataset


def dataset_builder(args):
    print("build dataset")
    if args.dataset == 'kitti':
        dataset = SemKITTI(args, 'train')
        val_dataset = SemKITTI(args, 'val')
        args.num_class = 20
        args.grid_size = [256, 256, 32]
        class_names = [
            'car', 'bicycle', 'motorcycle', 'truck', 'other-vehicle', 'person', 'bicyclist', 'motorcyclist',
            'road', 'parking', 'sidewalk', 'other-ground', 'building', 'fence', 'vegetation', 'trunk',
            'terrain', 'pole', 'traffic-sign'
        ]
    elif args.dataset == 'carla':
        dataset = CarlaDataset(args, 'train')
        val_dataset = CarlaDataset(args, 'val')
        args.num_class = 11
        args.grid_size = [128, 128, 8]
        class_names = ['building', 'barrier', 'other', 'pedestrian', 'pole', 'road', 'ground', 'sidewalk',
                       'vegetation', 'vehicle']
    return dataset, val_dataset, args.num_class, class_names
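For reference, a minimal sketch of how this builder is typically consumed; the `argparse.Namespace` here is hand-built for illustration, and the paths are placeholders:

```python
# Sketch: build the train/val datasets and wrap the train split in a DataLoader.
import argparse
from torch.utils.data import DataLoader
from dataset.dataset_builder import dataset_builder

args = argparse.Namespace(
    dataset='carla',                 # or 'kitti'
    yaml_path='dataset/carla.yaml',  # see dataset/path_manager.py
    data_path='/carla/sequences',    # placeholder path
)
train_set, val_set, num_class, class_names = dataset_builder(args)
train_loader = DataLoader(train_set, batch_size=4, shuffle=True, num_workers=4)
```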
================================================
FILE: dataset/kitti_dataset.py
================================================
import os
import numpy as np
from torch.utils import data
import yaml
import pathlib
import torch
from scipy.ndimage import distance_transform_edt


class SemKITTI(data.Dataset):
    def __init__(self, args, imageset='train', get_query=True, folder='voxels'):
        with open(args.yaml_path, 'r') as stream:
            semkittiyaml = yaml.safe_load(stream)
        self.args = args
        self.get_query = get_query
        remapdict = semkittiyaml['learning_map']
        self.learning_map_inv = semkittiyaml["learning_map_inv"]
        maxkey = max(remapdict.keys())
        remap_lut = np.zeros((maxkey + 100), dtype=np.int32)
        remap_lut[list(remapdict.keys())] = list(remapdict.values())
        remap_lut[remap_lut == 0] = 255  # map 0 to 'invalid'
        remap_lut[0] = 0  # only 'empty' stays 'empty'.
        self.learning_map = remap_lut

        self.imageset = imageset
        self.data_path = args.data_path
        self.folder = folder
        if imageset == 'train':
            split = semkittiyaml['split']['train']
            complt_num_per_class = np.asarray([7632350044, 15783539, 125136, 118809, 646799, 821951, 262978,
                                               283696, 204750, 61688703, 4502961, 44883650, 2269923, 56840218,
                                               15719652, 158442623, 2061623, 36970522, 1151988, 334146])
            compl_labelweights = complt_num_per_class / np.sum(complt_num_per_class)
            self.weights = torch.Tensor(np.power(np.amax(compl_labelweights) / compl_labelweights, 1 / 3.0)).cuda()
        elif imageset == 'val':
            split = semkittiyaml['split']['valid']
            self.weights = torch.Tensor(np.ones(20) * 3).cuda()
            self.weights[0] = 1
        elif imageset == 'test':
            split = semkittiyaml['split']['test']
            self.weights = torch.Tensor(np.ones(20) * 3).cuda()
            self.weights[0] = 1
        else:
            raise Exception('Split must be train/val/test')

        self.im_idx = []
        for i_folder in split:
            # velodyne path corresponding to voxel path
            complete_path = os.path.join(args.data_path, str(i_folder).zfill(2), folder)
            files = list(pathlib.Path(complete_path).glob('*.label'))
            for filename in files:
                if imageset == 'val':
                    if int(str(filename).split('/')[-1].split('.')[0]) % 5 == 0:
                        self.im_idx.append(str(filename))
                else:
                    self.im_idx.append(str(filename))

    def unpack(self, compressed):
        ''' given a bit encoded voxel grid, make a normal voxel grid out of it. '''
        uncompressed = np.zeros(compressed.shape[0] * 8, dtype=np.uint8)
        uncompressed[::8] = compressed[:] >> 7 & 1
        uncompressed[1::8] = compressed[:] >> 6 & 1
        uncompressed[2::8] = compressed[:] >> 5 & 1
        uncompressed[3::8] = compressed[:] >> 4 & 1
        uncompressed[4::8] = compressed[:] >> 3 & 1
        uncompressed[5::8] = compressed[:] >> 2 & 1
        uncompressed[6::8] = compressed[:] >> 1 & 1
        uncompressed[7::8] = compressed[:] & 1
        return uncompressed

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.im_idx)

    def __getitem__(self, index):
        path = self.im_idx[index]
        if self.imageset == 'test':
            voxel_label = np.zeros([256, 256, 32], dtype=int).reshape((-1, 1))
        else:
            voxel_label = np.fromfile(path, dtype=np.uint16).reshape((-1, 1))  # voxel labels
        invalid = self.unpack(np.fromfile(path.replace('label', 'invalid').replace(self.folder, 'voxels'), dtype=np.uint8)).astype(np.float32)
        voxel_label = self.learning_map[voxel_label]
        voxel_label = voxel_label.reshape((256, 256, 32))
        invalid = invalid.reshape((256, 256, 32))
        voxel_label[invalid == 1] = 255

        if self.get_query:
            if self.imageset == 'train':
                p = torch.randint(0, 6, (1,)).item()
                if p == 0:
                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=0)
                elif p == 1:
                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=1)
                elif p == 2:
                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=0)
                    voxel_label, invalid = flip(voxel_label, invalid, flip_dim=1)
            query, xyz_label, xyz_center = get_query(voxel_label)
        else:
            query, xyz_label, xyz_center = torch.zeros(1), torch.zeros(1), torch.zeros(1)
        return voxel_label, query, xyz_label, xyz_center, self.im_idx[index], invalid


def get_query(voxel_label, num_class=20, grid_size=(256, 256, 32), max_points=400000):
    xyzl = []
    for i in range(1, num_class):
        xyz = torch.nonzero(torch.Tensor(voxel_label) == i, as_tuple=False)
        xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=i)
        xyzl.append(xyzlabel)

    tdf = compute_tdf(voxel_label, trunc_distance=2)
    xyz = torch.nonzero(torch.tensor(np.logical_and(tdf > 0, tdf <= 2)), as_tuple=False)
    xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=0)
    xyzl.append(xyzlabel)

    num_far_free = int(max_points - len(torch.cat(xyzl, dim=0)))
    if num_far_free <= 0:
        xyzl = torch.cat(xyzl, dim=0)
        xyzl = xyzl[:max_points]
    else:
        xyz = torch.nonzero(torch.tensor(np.logical_and(voxel_label == 0, tdf == -1)), as_tuple=False)
        xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=0)
        idx = torch.randperm(xyzlabel.shape[0])
        xyzlabel = xyzlabel[idx][:min(xyzlabel.shape[0], num_far_free)]
        xyzl.append(xyzlabel)
        while len(torch.cat(xyzl, dim=0)) < max_points:
            for i in range(1, num_class):
                xyz = torch.nonzero(torch.Tensor(voxel_label) == i, as_tuple=False)
                xyzlabel = torch.nn.functional.pad(xyz, (1, 0), 'constant', value=i)
                xyzl.append(xyzlabel)
        xyzl = torch.cat(xyzl, dim=0)
        xyzl = xyzl[:max_points]

    xyz_label = xyzl[:, 0]
    xyz_center = xyzl[:, 1:]
    xyz = xyz_center.float()
    query = torch.zeros(xyz.shape, dtype=torch.float32, device=xyz.device)
    query[:, 0] = 2 * xyz[:, 0].clamp(0, grid_size[0] - 1) / float(grid_size[0] - 1) - 1
    query[:, 1] = 2 * xyz[:, 1].clamp(0, grid_size[1] - 1) / float(grid_size[1] - 1) - 1
    query[:, 2] = 2 * xyz[:, 2].clamp(0, grid_size[2] - 1) / float(grid_size[2] - 1) - 1
    return query, xyz_label, xyz_center


def compute_tdf(voxel_label: np.ndarray, trunc_distance: float = 3, trunc_value: float = -1) -> np.ndarray:
    """ Compute Truncated Distance Field (TDF).
        voxel_label -- [X, Y, Z]
    """
    # make TDF at free voxels.
# distance is defined as Euclidean distance to nearest unfree voxel (occupied or unknown). free = voxel_label == 0 tdf = distance_transform_edt(free) # Set -1 if distance is greater than truncation_distance tdf[tdf > trunc_distance] = trunc_value return tdf # [X, Y, Z] def flip(voxel, invalid, flip_dim=0): voxel = np.flip(voxel, axis=flip_dim).copy() invalid = np.flip(invalid, axis=flip_dim).copy() return voxel, invalid ================================================ FILE: dataset/path_manager.py ================================================ import os # manual definition PROJECT_NAMES = 'SemCity' SEMKITTI_DATA_PATH = '' # the path to the sequences folder CARLA_DATA_PATH = '' # the path to the sequences folder # auto definition CARLA_YAML_PATH = os.getcwd() + '/dataset/carla.yaml' SEMKITTI_YAML_PATH = os.getcwd() + '/dataset/semantic-kitti.yaml' # manual definition after training AE_PATH = os.getcwd() + '' # the path to the pt file GEN_DIFF_PATH = os.getcwd() + '' SSC_DIFF_PATH = os.getcwd() + '' ================================================ FILE: dataset/semantic-kitti.yaml ================================================ labels: 0 : "unlabeled" 1 : "outlier" 10: "car" 11: "bicycle" 13: "bus" 15: "motorcycle" 16: "on-rails" 18: "truck" 20: "other-vehicle" 30: "person" 31: "bicyclist" 32: "motorcyclist" 40: "road" 44: "parking" 48: "sidewalk" 49: "other-ground" 50: "building" 51: "fence" 52: "other-structure" 60: "lane-marking" 70: "vegetation" 71: "trunk" 72: "terrain" 80: "pole" 81: "traffic-sign" 99: "other-object" 252: "moving-car" 253: "moving-bicyclist" 254: "moving-person" 255: "moving-motorcyclist" 256: "moving-on-rails" 257: "moving-bus" 258: "moving-truck" 259: "moving-other-vehicle" color_map: # bgr 0 : [0, 0, 0] 1 : [0, 0, 255] 10: [245, 150, 100] 11: [245, 230, 100] 13: [250, 80, 100] 15: [150, 60, 30] 16: [255, 0, 0] 18: [180, 30, 80] 20: [255, 0, 0] 30: [30, 30, 255] 31: [200, 40, 255] 32: [90, 30, 150] 40: [255, 0, 255] 44: [255, 150, 255] 48: [75, 0, 75] 49: [75, 0, 175] 50: [0, 200, 255] 51: [50, 120, 255] 52: [0, 150, 255] 60: [170, 255, 150] 70: [0, 175, 0] 71: [0, 60, 135] 72: [80, 240, 150] 80: [150, 240, 255] 81: [0, 0, 255] 99: [255, 255, 50] 252: [245, 150, 100] 256: [255, 0, 0] 253: [200, 40, 255] 254: [30, 30, 255] 255: [90, 30, 150] 257: [250, 80, 100] 258: [180, 30, 80] 259: [255, 0, 0] content: # as a ratio with the total number of points 0: 0.018889854628292943 1: 0.0002937197336781505 10: 0.040818519255974316 11: 0.00016609538710764618 13: 2.7879693665067774e-05 15: 0.00039838616015114444 16: 0.0 18: 0.0020633612104619787 20: 0.0016218197275284021 30: 0.00017698551338515307 31: 1.1065903904919655e-08 32: 5.532951952459828e-09 40: 0.1987493871255525 44: 0.014717169549888214 48: 0.14392298360372 49: 0.0039048553037472045 50: 0.1326861944777486 51: 0.0723592229456223 52: 0.002395131480328884 60: 4.7084144280367186e-05 70: 0.26681502148037506 71: 0.006035012012626033 72: 0.07814222006271769 80: 0.002855498193863172 81: 0.0006155958086189918 99: 0.009923127583046915 252: 0.001789309418528068 253: 0.00012709999297008662 254: 0.00016059776092534436 255: 3.745553104802113e-05 256: 0.0 257: 0.00011351574470342043 258: 0.00010157861367183268 259: 4.3840131989471124e-05 # classes that are indistinguishable from single scan or inconsistent in # ground truth are mapped to their closest equivalent learning_map: 0 : 0 # "unlabeled" 1 : 0 # "outlier" mapped to "unlabeled" --------------------------mapped 10: 1 # "car" 11: 2 # "bicycle" 13: 5 # "bus" mapped to 
"other-vehicle" --------------------------mapped 15: 3 # "motorcycle" 16: 5 # "on-rails" mapped to "other-vehicle" ---------------------mapped 18: 4 # "truck" 20: 5 # "other-vehicle" 30: 6 # "person" 31: 7 # "bicyclist" 32: 8 # "motorcyclist" 40: 9 # "road" 44: 10 # "parking" 48: 11 # "sidewalk" 49: 12 # "other-ground" 50: 13 # "building" 51: 14 # "fence" 52: 0 # "other-structure" mapped to "unlabeled" ------------------mapped 60: 9 # "lane-marking" to "road" ---------------------------------mapped 70: 15 # "vegetation" 71: 16 # "trunk" 72: 17 # "terrain" 80: 18 # "pole" 81: 19 # "traffic-sign" 99: 0 # "other-object" to "unlabeled" ----------------------------mapped 252: 1 # "moving-car" to "car" ------------------------------------mapped 253: 7 # "moving-bicyclist" to "bicyclist" ------------------------mapped 254: 6 # "moving-person" to "person" ------------------------------mapped 255: 8 # "moving-motorcyclist" to "motorcyclist" ------------------mapped 256: 5 # "moving-on-rails" mapped to "other-vehicle" --------------mapped 257: 5 # "moving-bus" mapped to "other-vehicle" -------------------mapped 258: 4 # "moving-truck" to "truck" --------------------------------mapped 259: 5 # "moving-other"-vehicle to "other-vehicle" ----------------mapped learning_map_inv: # inverse of previous map 0: 0 # "unlabeled", and others ignored 1: 10 # "car" 2: 11 # "bicycle" 3: 15 # "motorcycle" 4: 18 # "truck" 5: 20 # "other-vehicle" 6: 30 # "person" 7: 31 # "bicyclist" 8: 32 # "motorcyclist" 9: 40 # "road" 10: 44 # "parking" 11: 48 # "sidewalk" 12: 49 # "other-ground" 13: 50 # "building" 14: 51 # "fence" 15: 70 # "vegetation" 16: 71 # "trunk" 17: 72 # "terrain" 18: 80 # "pole" 19: 81 # "traffic-sign" learning_ignore: # Ignore classes 0: True # "unlabeled", and others ignored 1: False # "car" 2: False # "bicycle" 3: False # "motorcycle" 4: False # "truck" 5: False # "other-vehicle" 6: False # "person" 7: False # "bicyclist" 8: False # "motorcyclist" 9: False # "road" 10: False # "parking" 11: False # "sidewalk" 12: False # "other-ground" 13: False # "building" 14: False # "fence" 15: False # "vegetation" 16: False # "trunk" 17: False # "terrain" 18: False # "pole" 19: False # "traffic-sign" split: # sequence numbers train: - 0 - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 9 - 10 valid: - 8 test: - 11 - 12 - 13 - 14 - 15 - 16 - 17 - 18 - 19 - 20 - 21 ================================================ FILE: dataset/tri_dataset_builder.py ================================================ import torch import yaml import os import numpy as np import pathlib from diffusion.triplane_util import augment from utils.parser_util import get_gen_args class TriplaneDataset(torch.utils.data.Dataset): def __init__(self, args, imageset): self.args = args self.imageset = imageset with open(args.yaml_path, 'r') as stream: data_yaml = yaml.safe_load(stream) if imageset == 'train': split = data_yaml['split']['train'] elif imageset == 'val': split = data_yaml['split']['valid'] H, W, D, self.learning_map, self.learning_map_inv, class_name, grid_size, self.tri_size, self.num_class, self.max_points = get_gen_args(args) self.grid_size = grid_size[1:] self.im_idx = [] for i_folder in split: if args.dataset == 'kitti': folder = str(i_folder).zfill(2) elif args.dataset == 'carla' : folder = str(i_folder) if args.diff_net_type == 'unet_voxel': tri_path = os.path.join(args.data_path, folder, 'voxel') elif args.diff_net_type == 'unet_bev': tri_path = os.path.join(args.data_path, folder, 'bev') else : tri_path = os.path.join(args.data_path, folder, 
'triplane') files = list(pathlib.Path(tri_path).glob('??????.npy')) for filename in files: if imageset == 'val': if (int(str(filename).split('/')[-1].split('.')[0].split("_")[0]) % 5 == 0) : self.im_idx.append(str(filename)) else : self.im_idx.append(str(filename)) if imageset == 'val': self.im_idx = sorted(self.im_idx) def __len__(self): return len(self.im_idx) def __getitem__(self, index): triplane = np.load(self.im_idx[index]).squeeze() if self.args.ssc_refine : condition = np.load(self.im_idx[index]) path = self.im_idx[index].replace('.npy', f'_{self.args.ssc_refine_dataset}.npy') else: condition = np.zeros_like(triplane) path = self.im_idx[index] if (not self.args.diff_net_type == 'unet_voxel') and (self.imageset == 'train') : # rotation q = torch.randint(0, 3, (1,)).item() if q==0: triplane = torch.from_numpy(triplane).permute(0, 2, 1).numpy() condition = torch.from_numpy(condition).permute(0, 2, 1).numpy() # other augmentations (flip, crop, noise.) p = torch.randint(0, 6, (1,)).item() triplane = augment(triplane, p, self.tri_size) condition = augment(condition, p, self.tri_size) return triplane, {'y':condition, 'H':self.tri_size[0], 'W':self.tri_size[1], 'D':self.tri_size[2], 'path':(path)} ================================================ FILE: diffusion/fp16_util.py ================================================ """ Helpers to train with 16-bit precision. """ import numpy as np import torch as th import torch.nn as nn from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors from . import logger INITIAL_LOG_LOSS_SCALE = 20.0 def convert_module_to_f16(l): """ Convert primitive modules to float16. """ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): l.weight.data = l.weight.data.half() if l.bias is not None: l.bias.data = l.bias.data.half() def convert_module_to_f32(l): """ Convert primitive modules to float32, undoing convert_module_to_f16(). """ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): l.weight.data = l.weight.data.float() if l.bias is not None: l.bias.data = l.bias.data.float() def make_master_params(param_groups_and_shapes): """ Copy model parameters into a (differently-shaped) list of full-precision parameters. """ master_params = [] for param_group, shape in param_groups_and_shapes: master_param = nn.Parameter( _flatten_dense_tensors( [param.detach().float() for (_, param) in param_group] ).view(shape) ) master_param.requires_grad = True master_params.append(master_param) return master_params def model_grads_to_master_grads(param_groups_and_shapes, master_params): """ Copy the gradients from the model parameters into the master parameters from make_master_params(). """ for master_param, (param_group, shape) in zip( master_params, param_groups_and_shapes ): master_param.grad = _flatten_dense_tensors( [param_grad_or_zeros(param) for (_, param) in param_group] ).view(shape) def master_params_to_model_params(param_groups_and_shapes, master_params): """ Copy the master parameter data back into the model parameters. """ # Without copying to a list, if a generator is passed, this will # silently not copy any parameters. 
for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes): for (_, param), unflat_master_param in zip( param_group, unflatten_master_params(param_group, master_param.view(-1)) ): param.detach().copy_(unflat_master_param) def unflatten_master_params(param_group, master_param): return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group]) def get_param_groups_and_shapes(named_model_params): named_model_params = list(named_model_params) scalar_vector_named_params = ( [(n, p) for (n, p) in named_model_params if p.ndim <= 1], (-1), ) matrix_named_params = ( [(n, p) for (n, p) in named_model_params if p.ndim > 1], (1, -1), ) return [scalar_vector_named_params, matrix_named_params] def master_params_to_state_dict( model, param_groups_and_shapes, master_params, use_fp16 ): if use_fp16: state_dict = model.state_dict() for master_param, (param_group, _) in zip( master_params, param_groups_and_shapes ): for (name, _), unflat_master_param in zip( param_group, unflatten_master_params(param_group, master_param.view(-1)) ): assert name in state_dict state_dict[name] = unflat_master_param else: state_dict = model.state_dict() for i, (name, _value) in enumerate(model.named_parameters()): assert name in state_dict state_dict[name] = master_params[i] return state_dict def state_dict_to_master_params(model, state_dict, use_fp16): if use_fp16: named_model_params = [ (name, state_dict[name]) for name, _ in model.named_parameters() ] param_groups_and_shapes = get_param_groups_and_shapes(named_model_params) master_params = make_master_params(param_groups_and_shapes) else: master_params = [state_dict[name] for name, _ in model.named_parameters()] return master_params def zero_master_grads(master_params): for param in master_params: param.grad = None def zero_grad(model_params): for param in model_params: # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group if param.grad is not None: param.grad.detach_() param.grad.zero_() def param_grad_or_zeros(param): if param.grad is not None: return param.grad.data.detach() else: return th.zeros_like(param) class MixedPrecisionTrainer: def __init__( self, *, model, use_fp16=False, fp16_scale_growth=1e-3, initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE, ): self.model = model self.use_fp16 = use_fp16 self.fp16_scale_growth = fp16_scale_growth self.model_params = list(self.model.parameters()) self.master_params = self.model_params self.param_groups_and_shapes = None self.lg_loss_scale = initial_lg_loss_scale if self.use_fp16: self.param_groups_and_shapes = get_param_groups_and_shapes( self.model.named_parameters() ) self.master_params = make_master_params(self.param_groups_and_shapes) self.model.convert_to_fp16() def zero_grad(self): zero_grad(self.model_params) def backward(self, loss: th.Tensor): if self.use_fp16: loss_scale = 2 ** self.lg_loss_scale (loss * loss_scale).backward() else: loss.backward() def optimize(self, opt: th.optim.Optimizer): if self.use_fp16: return self._optimize_fp16(opt) else: return self._optimize_normal(opt) def _optimize_fp16(self, opt: th.optim.Optimizer): logger.logkv_mean("lg_loss_scale", self.lg_loss_scale) model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params) grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale) if check_overflow(grad_norm): self.lg_loss_scale -= 1 logger.log(f"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}") zero_master_grads(self.master_params) return False 
logger.logkv_mean("grad_norm", grad_norm) logger.logkv_mean("param_norm", param_norm) for p in self.master_params: p.grad.mul_(1.0 / (2 ** self.lg_loss_scale)) opt.step() zero_master_grads(self.master_params) master_params_to_model_params(self.param_groups_and_shapes, self.master_params) self.lg_loss_scale += self.fp16_scale_growth return True def _optimize_normal(self, opt: th.optim.Optimizer): grad_norm, param_norm = self._compute_norms() logger.logkv_mean("grad_norm", grad_norm) logger.logkv_mean("param_norm", param_norm) opt.step() return True def _compute_norms(self, grad_scale=1.0): grad_norm = 0.0 param_norm = 0.0 for p in self.master_params: with th.no_grad(): param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2 if p.grad is not None: grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2 return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm) def master_params_to_state_dict(self, master_params): return master_params_to_state_dict( self.model, self.param_groups_and_shapes, master_params, self.use_fp16 ) def state_dict_to_master_params(self, state_dict): return state_dict_to_master_params(self.model, state_dict, self.use_fp16) def check_overflow(value): return (value == float("inf")) or (value == -float("inf")) or (value != value) ================================================ FILE: diffusion/gaussian_diffusion.py ================================================ """ This code started out as a PyTorch port of Ho et al's diffusion models: https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py Docstrings have been added, as well as DDIM sampling and a new collection of beta schedules. """ import enum import math import numpy as np import torch as th from dataset.path_manager import * from diffusion.nn import mean_flat, mask_img, decompose_featmaps from diffusion.losses import normal_kl, discretized_gaussian_log_likelihood from diffusion.scheduler import get_schedule_jump def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): """ Get a pre-defined beta schedule for the given name. The beta schedule library consists of beta schedules which remain similar in the limit of num_diffusion_timesteps. Beta schedules may be added, but should not be removed or changed once they are committed to maintain backwards compatibility. """ if schedule_name == "linear": # Linear schedule from Ho et al, extended to work for any number of # diffusion steps. scale = 1000 / num_diffusion_timesteps beta_start = scale * 0.0001 beta_end = scale * 0.02 return np.linspace( beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64 ) elif schedule_name == "cosine": return betas_for_alpha_bar( num_diffusion_timesteps, lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, ) else: raise NotImplementedError(f"unknown beta schedule: {schedule_name}") def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): """ Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of (1-beta) over time from t = [0,1]. :param num_diffusion_timesteps: the number of betas to produce. :param alpha_bar: a lambda that takes an argument t from 0 to 1 and produces the cumulative product of (1-beta) up to that part of the diffusion process. :param max_beta: the maximum beta to use; use values lower than 1 to prevent singularities. 
""" betas = [] for i in range(num_diffusion_timesteps): t1 = i / num_diffusion_timesteps t2 = (i + 1) / num_diffusion_timesteps betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) return np.array(betas) class ModelMeanType(enum.Enum): """ Which type of output the model predicts. """ PREVIOUS_X = enum.auto() # the model predicts x_{t-1} START_X = enum.auto() # the model predicts x_0 EPSILON = enum.auto() # the model predicts epsilon class ModelVarType(enum.Enum): """ What is used as the model's output variance. The LEARNED_RANGE option has been added to allow the model to predict values between FIXED_SMALL and FIXED_LARGE, making its job easier. """ LEARNED = enum.auto() FIXED_SMALL = enum.auto() FIXED_LARGE = enum.auto() LEARNED_RANGE = enum.auto() class LossType(enum.Enum): MSE = enum.auto() # use raw MSE loss (and KL when learning variances) RESCALED_MSE = ( enum.auto() ) # use raw MSE loss (with RESCALED_KL when learning variances) KL = enum.auto() # use the variational lower-bound RESCALED_KL = enum.auto() # like KL, but rescale to estimate the full VLB def is_vb(self): return self == LossType.KL or self == LossType.RESCALED_KL class GaussianDiffusion: """ Utilities for training and sampling diffusion models. Ported directly from here, and then adapted over time to further experimentation. https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 :param betas: a 1-D numpy array of betas for each diffusion timestep, starting at T and going to 1. :param model_mean_type: a ModelMeanType determining what the model outputs. :param model_var_type: a ModelVarType determining how variance is output. :param loss_type: a LossType determining the loss function to use. :param rescale_timesteps: if True, pass floating point timesteps into the model so that they are always scaled like in the original paper (0 to 1000). """ def __init__( self, *, args, betas, model_mean_type, model_var_type, loss_type, rescale_timesteps, ): self.model_mean_type = model_mean_type self.model_var_type = model_var_type self.loss_type = loss_type self.rescale_timesteps = rescale_timesteps self.ssc_refine = args.ssc_refine self.triplane_loss_type = args.triplane_loss_type self.args = args # Use float64 for accuracy. betas = np.array(betas, dtype=np.float64) self.betas = betas assert len(betas.shape) == 1, "betas must be 1-D" assert (betas > 0).all() and (betas <= 1).all() self.num_timesteps = int(betas.shape[0]) alphas = 1.0 - betas self.alphas_cumprod = np.cumprod(alphas, axis=0) self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) # calculations for diffusion q(x_t | x_{t-1}) and others self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) # calculations for posterior q(x_{t-1} | x_t, x_0) self.posterior_variance = ( betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) ) # log calculation clipped because the posterior variance is 0 at the # beginning of the diffusion chain. 
self.posterior_log_variance_clipped = np.log( np.append(self.posterior_variance[1], self.posterior_variance[1:]) ) self.posterior_mean_coef1 = ( betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) ) self.posterior_mean_coef2 = ( (1.0 - self.alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - self.alphas_cumprod) ) def undo(self, img_out, t, debug=False): '''p(x_t|x_{t-1})''' beta = _extract_into_tensor(self.betas, t, img_out.shape) img_in_est = th.sqrt(1 - beta) * img_out + th.sqrt(beta) * th.randn_like(img_out) return img_in_est def q_mean_variance(self, x_start, t): """ Get the distribution q(x_t | x_0). :param x_start: the [N x C x ...] tensor of noiseless inputs. :param t: the number of diffusion steps (minus 1). Here, 0 means one step. :return: A tuple (mean, variance, log_variance), all of x_start's shape. """ mean = ( _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start ) variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) log_variance = _extract_into_tensor( self.log_one_minus_alphas_cumprod, t, x_start.shape ) return mean, variance, log_variance def q_sample(self, x_start, t, noise=None): """ Diffuse the data for a given number of diffusion steps. In other words, sample from q(x_t | x_0). :param x_start: the initial data batch. :param t: the number of diffusion steps (minus 1). Here, 0 means one step. :param noise: if specified, the split-out normal noise. :return: A noisy version of x_start. """ if noise is None: noise = th.randn_like(x_start) assert noise.shape == x_start.shape return ( _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise ) def q_posterior_mean_variance(self, x_start, x_t, t): """ Compute the mean and variance of the diffusion posterior: q(x_{t-1} | x_t, x_0) """ assert x_start.shape == x_t.shape posterior_mean = ( _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t ) posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) posterior_log_variance_clipped = _extract_into_tensor( self.posterior_log_variance_clipped, t, x_t.shape ) assert ( posterior_mean.shape[0] == posterior_variance.shape[0] == posterior_log_variance_clipped.shape[0] == x_start.shape[0] ) return posterior_mean, posterior_variance, posterior_log_variance_clipped def p_mean_variance( self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None ): """ Apply the model to get p(x_{t-1} | x_t), as well as a prediction of the initial x, x_0. :param model: the model, which takes a signal and a batch of timesteps as input. :param x: the [N x C x ...] tensor at time t. :param t: a 1-D Tensor of timesteps. :param clip_denoised: if True, clip the denoised signal into [-1, 1]. :param denoised_fn: if not None, a function which applies to the x_start prediction before it is used to sample. Applies before clip_denoised. :param model_kwargs: if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning. :return: a dict with the following keys: - 'mean': the model mean output. - 'variance': the model variance output. - 'log_variance': the log of 'variance'. - 'pred_xstart': the prediction for x_0. 
""" if model_kwargs is None: model_kwargs = {} B, C = x.shape[:2] assert t.shape == (B,) model_output = model(x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y']) if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]: assert model_output.shape == (B, C * 2, *x.shape[2:]) model_output, model_var_values = th.split(model_output, C, dim=1) if self.model_var_type == ModelVarType.LEARNED: model_log_variance = model_var_values model_variance = th.exp(model_log_variance) else: min_log = _extract_into_tensor( self.posterior_log_variance_clipped, t, x.shape ) max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) # The model_var_values is [-1, 1] for [min_var, max_var]. frac = (model_var_values + 1) / 2 model_log_variance = frac * max_log + (1 - frac) * min_log model_variance = th.exp(model_log_variance) else: model_variance, model_log_variance = { # for fixedlarge, we set the initial (log-)variance like so # to get a better decoder log likelihood. ModelVarType.FIXED_LARGE: ( np.append(self.posterior_variance[1], self.betas[1:]), np.log(np.append(self.posterior_variance[1], self.betas[1:])), ), ModelVarType.FIXED_SMALL: ( self.posterior_variance, self.posterior_log_variance_clipped, ), }[self.model_var_type] model_variance = _extract_into_tensor(model_variance, t, x.shape) model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape) def process_xstart(x): if denoised_fn is not None: x = denoised_fn(x) if clip_denoised: return x.clamp(-1, 1) return x if self.model_mean_type == ModelMeanType.PREVIOUS_X: pred_xstart = process_xstart( self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output) ) model_mean = model_output elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]: if self.model_mean_type == ModelMeanType.START_X: pred_xstart = process_xstart(model_output) else: pred_xstart = process_xstart( self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output) ) model_mean, _, _ = self.q_posterior_mean_variance( x_start=pred_xstart, x_t=x, t=t ) else: raise NotImplementedError(self.model_mean_type) assert ( model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape ) return { "mean": model_mean, "variance": model_variance, "log_variance": model_log_variance, "pred_xstart": pred_xstart, } def _predict_xstart_from_eps(self, x_t, t, eps): assert x_t.shape == eps.shape return ( _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps ) def _predict_xstart_from_xprev(self, x_t, t, xprev): assert x_t.shape == xprev.shape return ( # (xprev - coef2*x_t) / coef1 _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev - _extract_into_tensor( self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape ) * x_t ) def _predict_eps_from_xstart(self, x_t, t, pred_xstart): return ( _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) def _scale_timesteps(self, t): if self.rescale_timesteps: return t.float() * (1000.0 / self.num_timesteps) return t def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None): """ Compute the mean for the previous step, given a function cond_fn that computes the gradient of a conditional log probability with respect to x. In particular, cond_fn computes grad(log(p(y|x))), and we want to condition on y. 
This uses the conditioning strategy from Sohl-Dickstein et al. (2015). """ gradient = cond_fn(x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y']) new_mean = ( p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float() ) return new_mean def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None): """ Compute what the p_mean_variance output would have been, should the model's score function be conditioned by cond_fn. See condition_mean() for details on cond_fn. Unlike condition_mean(), this instead uses the conditioning strategy from Song et al (2020). """ alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) eps = self._predict_eps_from_xstart(x, t, p_mean_var["pred_xstart"]) eps = eps - (1 - alpha_bar).sqrt() * cond_fn( x, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y']) out = p_mean_var.copy() out["pred_xstart"] = self._predict_xstart_from_eps(x, t, eps) out["mean"], _, _ = self.q_posterior_mean_variance( x_start=out["pred_xstart"], x_t=x, t=t ) return out def p_sample( self, model, x, t, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, ): """ Sample x_{t-1} from the model at the given timestep. :param model: the model to sample from. :param x: the current tensor at x_{t-1}. :param t: the value of t, starting at 0 for the first diffusion step. :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. :param denoised_fn: if not None, a function which applies to the x_start prediction before it is used to sample. :param cond_fn: if not None, this is a gradient function that acts similarly to the model. :param model_kwargs: if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning. :return: a dict containing the following keys: - 'sample': a random sample from the model. - 'pred_xstart': a prediction of x_0. """ out = self.p_mean_variance( model, x, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, model_kwargs=model_kwargs, ) noise = th.randn_like(x) nonzero_mask = ( (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) ) # no noise when t == 0 if cond_fn is not None: out["mean"] = self.condition_mean( cond_fn, out, x, t, model_kwargs=model_kwargs ) sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise if (self.triplane_loss_type == 'residual_plus_decoder') or (self.triplane_loss_type == 'residual'): sample = sample + model_kwargs['y'].to(sample.device) return {"sample": sample, "pred_xstart": out["pred_xstart"]} def p_sample_loop( self, model, shape, noise=None, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, progress=False, save_timestep_interval=None, ): """ Generate samples from the model. :param model: the model module. :param shape: the shape of the samples, (N, C, H, W). :param noise: if specified, the noise from the encoder to sample. Should be of the same shape as `shape`. :param clip_denoised: if True, clip x_start predictions to [-1, 1]. :param denoised_fn: if not None, a function which applies to the x_start prediction before it is used to sample. :param cond_fn: if not None, this is a gradient function that acts similarly to the model. :param model_kwargs: if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning. :param device: if specified, the device to create the samples on. If not specified, use a model parameter's device. 
:param progress: if True, show a tqdm progress bar. :return: a non-differentiable batch of samples. """ final = None if save_timestep_interval is not None: prev_steps = dict() for idx, sample in enumerate(self.p_sample_loop_progressive( model, shape, noise=noise, clip_denoised=clip_denoised, denoised_fn=denoised_fn, cond_fn=cond_fn, model_kwargs=model_kwargs, device=device, progress=progress, )): final = sample if (save_timestep_interval is not None) and (idx % save_timestep_interval == 0): # save every save_timestep_interval steps prev_steps[str(idx)] = final["sample"] if (save_timestep_interval is not None) and (idx > 960): # # save every steps after 900 steps prev_steps[str(idx)] = final["sample"] if save_timestep_interval is not None: prev_steps[str(1000)] = final["sample"] return prev_steps else : return final["sample"] def p_sample_loop_progressive( self, model, shape, noise=None, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, progress=False, ): """ Generate samples from the model and yield intermediate samples from each timestep of diffusion. Arguments are the same as p_sample_loop(). Returns a generator over dicts, where each dict is the return value of p_sample(). """ if device is None: device = next(model.parameters()).device assert isinstance(shape, (tuple, list)) if noise is not None: img = noise else: img = th.randn(*shape, device=device) indices = list(range(self.num_timesteps))[::-1] if progress: # Lazy import so that we don't depend on tqdm. from tqdm.auto import tqdm indices = tqdm(indices) for i in indices: t = th.tensor([i] * shape[0], device=device) with th.no_grad(): out = self.p_sample( model, img, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, cond_fn=cond_fn, model_kwargs=model_kwargs, ) yield out img = out["sample"] def p_sample_loop_scene_repaint( self, model, shape, cond, mode = 'down', overlap = 64, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, ): if device is None: device = next(model.parameters()).device assert isinstance(shape, (tuple, list)) image_after_step = th.randn(*shape, device=device) mask_cond = cond.detach().clone() times = get_schedule_jump(t_T=self.num_timesteps, jump_length=20, jump_n_sample=5) time_pairs = list(zip(times[:-1], times[1:])) with th.no_grad(): for t_last, t_cur in time_pairs: t_last_t = th.tensor([t_last] * shape[0], device=device) if t_cur < t_last: # reverse t_cond = self.q_sample(mask_cond, t_last_t) image_after_step = mask_img(image_after_step, t_cond, mode, overlap, H=model_kwargs['H']) out = self.p_sample( model, image_after_step, t_last_t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, cond_fn=cond_fn, model_kwargs=model_kwargs, ) image_after_step = out["sample"] else: t_shift = 1 image_after_step = self.undo(image_after_step, t=t_last_t+t_shift, debug=False) return image_after_step def p_sample_loop_scene( self, model, shape, cond, mode = 'down', overlap = 64, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, ): if device is None: device = next(model.parameters()).device assert isinstance(shape, (tuple, list)) img = th.randn(*shape, device=device) indices = list(range(self.num_timesteps))[::-1] mask_cond = cond.detach().clone() for i in indices: t = th.tensor([i] * shape[0], device=device) with th.no_grad(): m_cond = self.q_sample(mask_cond, t) img = mask_img(img, m_cond, mode, overlap, H=model_kwargs['H']) out = self.p_sample( model, img, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, 
cond_fn=cond_fn, model_kwargs=model_kwargs, ) img = out["sample"] return img def ddim_sample( self, model, x, t, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, eta=0.0, y0=None, mask=None, is_mask_t0=False, ): """ Sample x_{t-1} from the model using DDIM. Same usage as p_sample(). """ out = self.p_mean_variance( model, x, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, model_kwargs=model_kwargs, ) if cond_fn is not None: out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) # masked generation if y0 is not None and mask is not None: assert y0.shape == x.shape assert mask.shape == x.shape if is_mask_t0: out["pred_xstart"] = mask * y0 + (1 - mask) * out["pred_xstart"] else: nonzero_mask = ( (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) ) # no noise when t == 0 out["pred_xstart"] = (mask * y0 + (1 - mask) * out["pred_xstart"]) * nonzero_mask + out["pred_xstart"] * (1 - nonzero_mask) # Usually our model outputs epsilon, but we re-derive it # in case we used x_start or x_prev prediction. eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) sigma = ( eta * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) * th.sqrt(1 - alpha_bar / alpha_bar_prev) ) # Equation 12. noise = th.randn_like(x) mean_pred = ( out["pred_xstart"] * th.sqrt(alpha_bar_prev) + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps ) nonzero_mask = ( (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) ) # no noise when t == 0 sample = mean_pred + nonzero_mask * sigma * noise return {"sample": sample, "pred_xstart": out["pred_xstart"]} def ddim_reverse_sample( self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None, eta=0.0, ): """ Sample x_{t+1} from the model using DDIM reverse ODE. """ assert eta == 0.0, "Reverse ODE only for deterministic path" out = self.p_mean_variance( model, x, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, model_kwargs=model_kwargs, ) # Usually our model outputs epsilon, but we re-derive it # in case we used x_start or x_prev prediction. eps = ( _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - out["pred_xstart"] ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) # Equation 12. reversed mean_pred = ( out["pred_xstart"] * th.sqrt(alpha_bar_next) + th.sqrt(1 - alpha_bar_next) * eps ) return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} def ddim_sample_loop( self, model, shape, noise=None, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, progress=False, eta=0.0, y0=None, mask=None, is_mask_t0=False, ): """ Generate samples from the model using DDIM. Same usage as p_sample_loop(). """ final = None for sample in self.ddim_sample_loop_progressive( model, shape, noise=noise, clip_denoised=clip_denoised, denoised_fn=denoised_fn, cond_fn=cond_fn, model_kwargs=model_kwargs, device=device, progress=progress, eta=eta, y0=y0, mask=mask, is_mask_t0=is_mask_t0, ): final = sample return final["sample"] def ddim_sample_loop_progressive( self, model, shape, noise=None, clip_denoised=True, denoised_fn=None, cond_fn=None, model_kwargs=None, device=None, progress=False, eta=0.0, y0=None, mask=None, is_mask_t0=False, ): """ Use DDIM to sample from the model and yield intermediate samples from each timestep of DDIM. 
Same usage as p_sample_loop_progressive(). """ if device is None: device = next(model.parameters()).device assert isinstance(shape, (tuple, list)) if noise is not None: img = noise else: img = th.randn(*shape, device=device) indices = list(range(self.num_timesteps))[::-1] if progress: # Lazy import so that we don't depend on tqdm. from tqdm.auto import tqdm indices = tqdm(indices) for i in indices: t = th.tensor([i] * shape[0], device=device) with th.no_grad(): out = self.ddim_sample( model, img, t, clip_denoised=clip_denoised, denoised_fn=denoised_fn, cond_fn=cond_fn, model_kwargs=model_kwargs, eta=eta, y0=y0, mask=mask, is_mask_t0=is_mask_t0, ) yield out img = out["sample"] def _vb_terms_bpd( self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None ): """ Get a term for the variational lower-bound. The resulting units are bits (rather than nats, as one might expect). This allows for comparison to other papers. :return: a dict with the following keys: - 'output': a shape [N] tensor of NLLs or KLs. - 'pred_xstart': the x_0 predictions. """ true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance( x_start=x_start, x_t=x_t, t=t ) out = self.p_mean_variance( model, x_t, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs ) kl = normal_kl( true_mean, true_log_variance_clipped, out["mean"], out["log_variance"] ) kl = mean_flat(kl) / np.log(2.0) decoder_nll = -discretized_gaussian_log_likelihood( x_start, means=out["mean"], log_scales=0.5 * out["log_variance"] ) assert decoder_nll.shape == x_start.shape decoder_nll = mean_flat(decoder_nll) / np.log(2.0) # At the first timestep return the decoder NLL, # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t)) output = th.where((t == 0), decoder_nll, kl) return {"output": output, "pred_xstart": out["pred_xstart"]} def merge_features(self, xy_feat, xz_feat, yz_feat): # Expand dimensions xy_feat_exp = xy_feat.unsqueeze(4) # Add z dimension xz_feat_exp = xz_feat.unsqueeze(3) # Add y dimension yz_feat_exp = yz_feat.unsqueeze(2) # Add x dimension # Calculate the size of the new 3D tensor B, C, H, W, D = xy_feat_exp.size(0), xy_feat_exp.size(1), xy_feat_exp.size(2), xy_feat_exp.size(3), yz_feat_exp.size(4) # Initialize a 3D tensor with zeros merged_tensor = th.zeros((B, C, H, W, D), device=xy_feat.device) # Fill the tensor with the expanded feature maps merged_tensor += xy_feat_exp.expand_as(merged_tensor) merged_tensor += xz_feat_exp.expand_as(merged_tensor) merged_tensor += yz_feat_exp.expand_as(merged_tensor) return merged_tensor def training_losses(self, model, x_start, t, model_kwargs=None, noise=None): """ Compute training losses for a single timestep. :param model: the model to evaluate loss on. :param x_start: the [N x C x ...] tensor of inputs. :param t: a batch of timestep indices. :param model_kwargs: if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning. :param noise: if specified, the specific Gaussian noise to try to remove. :return: a dict with the key "loss" containing a tensor of shape [N]. Some mean or variance settings may also have other keys. 
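        A minimal usage sketch (mirroring forward_backward in
        diffusion/train_util.py; model, x_start, cond, and schedule_sampler
        are assumed to exist already):

            t, weights = schedule_sampler.sample(x_start.shape[0], x_start.device)
            losses = diffusion.training_losses(model, x_start, t, model_kwargs=cond)
            loss = (losses["loss"] * weights).mean()

        With triplane features and triplane_loss_type 'l2', the dict also
        carries the per-plane terms "l2_xy", "l2_xz" and "l2_yz" (plus "vb"
        when the variance is learned).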
""" if model_kwargs is None: model_kwargs = {} if noise is None: noise = th.randn_like(x_start) terms = {} if self.ssc_refine : with th.no_grad(): large_T = th.tensor([self.num_timesteps-1] * x_start.shape[0], device=x_start.device) m_t = self.q_sample(x_start, large_T) m_1 = model(m_t, large_T, model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y']) x_t = self.q_sample(m_1, t, noise=noise) else : x_t = self.q_sample(x_start, t, noise=noise) model_output = model(x_t, self._scale_timesteps(t), model_kwargs['H'], model_kwargs['W'], model_kwargs['D'], model_kwargs['y']) if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]: B, C = x_t.shape[:2] assert model_output.shape == (B, C * 2, *x_t.shape[2:]) model_output, model_var_values = th.split(model_output, C, dim=1) # Learn the variance using the variational bound, but don't let # it affect our mean prediction. frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) terms["vb"] = self._vb_terms_bpd( model=lambda *args, r=frozen_out: r, x_start=x_start, x_t=x_t, t=t, clip_denoised=False, )["output"] if self.loss_type == LossType.RESCALED_MSE: # Divide by 1000 for equivalence with initial implementation. # Without a factor of 1/1000, the VB term hurts the MSE term. terms["vb"] *= self.num_timesteps / 1000.0 target = { ModelMeanType.PREVIOUS_X: self.q_posterior_mean_variance( x_start=x_start, x_t=x_t, t=t )[0], ModelMeanType.START_X: x_start, ModelMeanType.EPSILON: noise, }[self.model_mean_type] assert model_output.shape == target.shape == x_start.shape if self.args.voxel_fea : if self.triplane_loss_type == 'l1': terms["loss"] = mean_flat(th.abs(target - model_output)) elif self.triplane_loss_type == 'l2': terms["loss"] = mean_flat((target - model_output)**2) else : H, W, D = model_kwargs["H"], model_kwargs["W"], model_kwargs["D"] trisize = (H[0], W[0], D[0]) target_xy, target_xz, target_yz = decompose_featmaps(target, trisize) model_output_xy, model_output_xz, model_output_yz = decompose_featmaps(model_output, trisize) if self.triplane_loss_type == 'l1': terms["l1_xy"] = mean_flat(th.abs(target_xy - model_output_xy)) terms["l1_xz"] = mean_flat(th.abs(target_xz - model_output_xz)) terms["l1_yz"] = mean_flat(th.abs(target_yz - model_output_yz)) if "vb" in terms: terms["loss"] = terms["l1_xy"] + terms["l1_xz"] + terms["l1_yz"] + terms["vb"] else: terms["loss"] = terms["l1_xy"] + terms["l1_xz"] + terms["l1_yz"] elif self.triplane_loss_type == 'l2': terms["l2_xy"] = mean_flat((target_xy - model_output_xy)**2) terms["l2_xz"] = mean_flat((target_xz - model_output_xz)**2) terms["l2_yz"] = mean_flat((target_yz - model_output_yz)**2) if "vb" in terms: terms["loss"] = terms["l2_xy"] + terms["l2_xz"] + terms["l2_yz"] + terms["vb"] else: terms["loss"] = terms["l2_xy"] + terms["l2_xz"] + terms["l2_yz"] else: raise ValueError("Unknown loss type: {}".format(self.triplane_loss_type)) return terms def _prior_bpd(self, x_start): """ Get the prior KL term for the variational lower-bound, measured in bits-per-dim. This term can't be optimized, as it only depends on the encoder. :param x_start: the [N x C x ...] tensor of inputs. :return: a batch of [N] KL values (in bits), one per batch element. 
""" batch_size = x_start.shape[0] t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) kl_prior = normal_kl( mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0 ) return mean_flat(kl_prior) / np.log(2.0) def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None): """ Compute the entire variational lower-bound, measured in bits-per-dim, as well as other related quantities. :param model: the model to evaluate loss on. :param x_start: the [N x C x ...] tensor of inputs. :param clip_denoised: if True, clip denoised samples. :param model_kwargs: if not None, a dict of extra keyword arguments to pass to the model. This can be used for conditioning. :return: a dict containing the following keys: - total_bpd: the total variational lower-bound, per batch element. - prior_bpd: the prior term in the lower-bound. - vb: an [N x T] tensor of terms in the lower-bound. - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep. - mse: an [N x T] tensor of epsilon MSEs for each timestep. """ device = x_start.device batch_size = x_start.shape[0] vb = [] xstart_mse = [] mse = [] for t in list(range(self.num_timesteps))[::-1]: t_batch = th.tensor([t] * batch_size, device=device) noise = th.randn_like(x_start) x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise) # Calculate VLB term at the current timestep with th.no_grad(): out = self._vb_terms_bpd( model, x_start=x_start, x_t=x_t, t=t_batch, clip_denoised=clip_denoised, model_kwargs=model_kwargs, ) vb.append(out["output"]) xstart_mse.append(mean_flat((out["pred_xstart"] - x_start) ** 2)) eps = self._predict_eps_from_xstart(x_t, t_batch, out["pred_xstart"]) mse.append(mean_flat((eps - noise) ** 2)) vb = th.stack(vb, dim=1) xstart_mse = th.stack(xstart_mse, dim=1) mse = th.stack(mse, dim=1) prior_bpd = self._prior_bpd(x_start) total_bpd = vb.sum(dim=1) + prior_bpd return { "total_bpd": total_bpd, "prior_bpd": prior_bpd, "vb": vb, "xstart_mse": xstart_mse, "mse": mse, } def _extract_into_tensor(arr, timesteps, broadcast_shape): """ Extract values from a 1-D numpy array for a batch of indices. :param arr: the 1-D numpy array. :param timesteps: a tensor of indices into the array to extract. :param broadcast_shape: a larger shape of K dimensions with the batch dimension equal to the length of timesteps. :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. 
""" res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() while len(res.shape) < len(broadcast_shape): res = res[..., None] return res.expand(broadcast_shape) ================================================ FILE: diffusion/logger.py ================================================ """ Logger copied from OpenAI baselines to avoid extra RL-based dependencies: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/logger.py """ import os import sys import os.path as osp import json import time import datetime import tempfile import warnings from collections import defaultdict from contextlib import contextmanager DEBUG = 10 INFO = 20 WARN = 30 ERROR = 40 DISABLED = 50 class KVWriter(object): def writekvs(self, kvs): raise NotImplementedError class SeqWriter(object): def writeseq(self, seq): raise NotImplementedError class HumanOutputFormat(KVWriter, SeqWriter): def __init__(self, filename_or_file): if isinstance(filename_or_file, str): self.file = open(filename_or_file, "wt") self.own_file = True else: assert hasattr(filename_or_file, "read"), ( "expected file or str, got %s" % filename_or_file ) self.file = filename_or_file self.own_file = False def writekvs(self, kvs): # Create strings for printing key2str = {} for (key, val) in sorted(kvs.items()): if hasattr(val, "__float__"): valstr = "%-8.3g" % val else: valstr = str(val) key2str[self._truncate(key)] = self._truncate(valstr) # Find max widths if len(key2str) == 0: print("WARNING: tried to write empty key-value dict") return else: keywidth = max(map(len, key2str.keys())) valwidth = max(map(len, key2str.values())) # Write out the data dashes = "-" * (keywidth + valwidth + 7) lines = [dashes] for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()): lines.append( "| %s%s | %s%s |" % (key, " " * (keywidth - len(key)), val, " " * (valwidth - len(val))) ) lines.append(dashes) self.file.write("\n".join(lines) + "\n") # Flush the output to the file self.file.flush() def _truncate(self, s): maxlen = 30 return s[: maxlen - 3] + "..." if len(s) > maxlen else s def writeseq(self, seq): seq = list(seq) for (i, elem) in enumerate(seq): self.file.write(elem) if i < len(seq) - 1: # add space unless this is the last one self.file.write(" ") self.file.write("\n") self.file.flush() def close(self): if self.own_file: self.file.close() class JSONOutputFormat(KVWriter): def __init__(self, filename): self.file = open(filename, "wt") def writekvs(self, kvs): for k, v in sorted(kvs.items()): if hasattr(v, "dtype"): kvs[k] = float(v) self.file.write(json.dumps(kvs) + "\n") self.file.flush() def close(self): self.file.close() class CSVOutputFormat(KVWriter): def __init__(self, filename): self.file = open(filename, "w+t") self.keys = [] self.sep = "," def writekvs(self, kvs): # Add our current row to the history extra_keys = list(kvs.keys() - self.keys) extra_keys.sort() if extra_keys: self.keys.extend(extra_keys) self.file.seek(0) lines = self.file.readlines() self.file.seek(0) for (i, k) in enumerate(self.keys): if i > 0: self.file.write(",") self.file.write(k) self.file.write("\n") for line in lines[1:]: self.file.write(line[:-1]) self.file.write(self.sep * len(extra_keys)) self.file.write("\n") for (i, k) in enumerate(self.keys): if i > 0: self.file.write(",") v = kvs.get(k) if v is not None: self.file.write(str(v)) self.file.write("\n") self.file.flush() def close(self): self.file.close() class TensorBoardOutputFormat(KVWriter): """ Dumps key/value pairs into TensorBoard's numeric format. 
""" def __init__(self, dir): os.makedirs(dir, exist_ok=True) self.dir = dir self.step = 1 prefix = "events" path = osp.join(osp.abspath(dir), prefix) import tensorflow as tf from tensorflow.python import pywrap_tensorflow from tensorflow.core.util import event_pb2 from tensorflow.python.util import compat self.tf = tf self.event_pb2 = event_pb2 self.pywrap_tensorflow = pywrap_tensorflow self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path)) def writekvs(self, kvs): def summary_val(k, v): kwargs = {"tag": k, "simple_value": float(v)} return self.tf.Summary.Value(**kwargs) summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()]) event = self.event_pb2.Event(wall_time=time.time(), summary=summary) event.step = ( self.step ) # is there any reason why you'd want to specify the step? self.writer.WriteEvent(event) self.writer.Flush() self.step += 1 def close(self): if self.writer: self.writer.Close() self.writer = None def make_output_format(format, ev_dir, log_suffix=""): os.makedirs(ev_dir, exist_ok=True) if format == "stdout": return HumanOutputFormat(sys.stdout) elif format == "log": return HumanOutputFormat(osp.join(ev_dir, "log%s.txt" % log_suffix)) elif format == "json": return JSONOutputFormat(osp.join(ev_dir, "progress%s.json" % log_suffix)) elif format == "csv": return CSVOutputFormat(osp.join(ev_dir, "progress%s.csv" % log_suffix)) elif format == "tensorboard": return TensorBoardOutputFormat(osp.join(ev_dir, "tb%s" % log_suffix)) else: raise ValueError("Unknown format specified: %s" % (format,)) # ================================================================ # API # ================================================================ def logkv(key, val): """ Log a value of some diagnostic Call this once for each diagnostic quantity, each iteration If called many times, last value will be used. """ get_current().logkv(key, val) def logkv_mean(key, val): """ The same as logkv(), but if called many times, values averaged. """ get_current().logkv_mean(key, val) def logkvs(d): """ Log a dictionary of key-value pairs """ for (k, v) in d.items(): logkv(k, v) def dumpkvs(): """ Write all of the diagnostics from the current iteration """ return get_current().dumpkvs() def getkvs(): return get_current().name2val def log(*args, level=INFO): """ Write the sequence of args, with no separators, to the console and output files (if you've configured an output file). """ get_current().log(*args, level=level) def debug(*args): log(*args, level=DEBUG) def info(*args): log(*args, level=INFO) def warn(*args): log(*args, level=WARN) def error(*args): log(*args, level=ERROR) def set_level(level): """ Set logging threshold on current logger. """ get_current().set_level(level) def set_comm(comm): get_current().set_comm(comm) def get_dir(): """ Get directory that log files are being written to. 
will be None if there is no output directory (i.e., if you didn't call start) """ return get_current().get_dir() record_tabular = logkv dump_tabular = dumpkvs @contextmanager def profile_kv(scopename): logkey = "wait_" + scopename tstart = time.time() try: yield finally: get_current().name2val[logkey] += time.time() - tstart def profile(n): """ Usage: @profile("my_func") def my_func(): code """ def decorator_with_name(func): def func_wrapper(*args, **kwargs): with profile_kv(n): return func(*args, **kwargs) return func_wrapper return decorator_with_name # ================================================================ # Backend # ================================================================ def get_current(): if Logger.CURRENT is None: _configure_default_logger() return Logger.CURRENT class Logger(object): DEFAULT = None # A logger with no output files. (See right below class definition) # So that you can still log to the terminal without setting up any output files CURRENT = None # Current logger being used by the free functions above def __init__(self, dir, output_formats, comm=None): self.name2val = defaultdict(float) # values this iteration self.name2cnt = defaultdict(int) self.level = INFO self.dir = dir self.output_formats = output_formats self.comm = comm # Logging API, forwarded # ---------------------------------------- def logkv(self, key, val): self.name2val[key] = val def logkv_mean(self, key, val): oldval, cnt = self.name2val[key], self.name2cnt[key] self.name2val[key] = oldval * cnt / (cnt + 1) + val / (cnt + 1) self.name2cnt[key] = cnt + 1 def dumpkvs(self): if self.comm is None: d = self.name2val else: d = mpi_weighted_mean( self.comm, { name: (val, self.name2cnt.get(name, 1)) for (name, val) in self.name2val.items() }, ) if self.comm.rank != 0: d["dummy"] = 1 # so we don't get a warning about empty dict out = d.copy() # Return the dict for unit testing purposes for fmt in self.output_formats: if isinstance(fmt, KVWriter): fmt.writekvs(d) self.name2val.clear() self.name2cnt.clear() return out def log(self, *args, level=INFO): if self.level <= level: self._do_log(args) # Configuration # ---------------------------------------- def set_level(self, level): self.level = level def set_comm(self, comm): self.comm = comm def get_dir(self): return self.dir def close(self): for fmt in self.output_formats: fmt.close() # Misc # ---------------------------------------- def _do_log(self, args): for fmt in self.output_formats: if isinstance(fmt, SeqWriter): fmt.writeseq(map(str, args)) def get_rank_without_mpi_import(): # check environment variables here instead of importing mpi4py # to avoid calling MPI_Init() when this module is imported for varname in ["PMI_RANK", "OMPI_COMM_WORLD_RANK"]: if varname in os.environ: return int(os.environ[varname]) return 0 def mpi_weighted_mean(comm, local_name2valcount): """ Copied from: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/common/mpi_util.py#L110 Perform a weighted average over dicts that are each on a different node Input: local_name2valcount: dict mapping key -> (value, count) Returns: key -> mean """ all_name2valcount = comm.gather(local_name2valcount) if comm.rank == 0: name2sum = defaultdict(float) name2count = defaultdict(float) for n2vc in all_name2valcount: for (name, (val, count)) in n2vc.items(): try: val = float(val) except ValueError: if comm.rank == 0: warnings.warn( "WARNING: tried to compute mean on non-float {}={}".format( name, val ) ) else: name2sum[name] += val * count 
name2count[name] += count return {name: name2sum[name] / name2count[name] for name in name2sum} else: return {} def configure(dir=None, format_strs=None, comm=None, log_suffix=""): """ If comm is provided, average all numerical stats across that comm """ if dir is None: dir = os.getenv("OPENAI_LOGDIR") if dir is None: dir = osp.join( tempfile.gettempdir(), datetime.datetime.now().strftime("openai-%Y-%m-%d-%H-%M-%S-%f"), ) assert isinstance(dir, str) dir = os.path.expanduser(dir) os.makedirs(os.path.expanduser(dir), exist_ok=True) rank = get_rank_without_mpi_import() if rank > 0: log_suffix = log_suffix + "-rank%03i" % rank if format_strs is None: if rank == 0: format_strs = os.getenv("OPENAI_LOG_FORMAT", "stdout,log,csv").split(",") else: format_strs = os.getenv("OPENAI_LOG_FORMAT_MPI", "log").split(",") format_strs = filter(None, format_strs) output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs] Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm) if output_formats: log("Logging to %s" % dir) def _configure_default_logger(): configure() Logger.DEFAULT = Logger.CURRENT def reset(): if Logger.CURRENT is not Logger.DEFAULT: Logger.CURRENT.close() Logger.CURRENT = Logger.DEFAULT log("Reset logger") @contextmanager def scoped_configure(dir=None, format_strs=None, comm=None): prevlogger = Logger.CURRENT configure(dir=dir, format_strs=format_strs, comm=comm) try: yield finally: Logger.CURRENT.close() Logger.CURRENT = prevlogger ================================================ FILE: diffusion/losses.py ================================================ """ Helpers for various likelihood-based losses. These are ported from the original Ho et al. diffusion models codebase: https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/utils.py """ import numpy as np import torch as th def normal_kl(mean1, logvar1, mean2, logvar2): """ Compute the KL divergence between two gaussians. Shapes are automatically broadcasted, so batches can be compared to scalars, among other use cases. """ tensor = None for obj in (mean1, logvar1, mean2, logvar2): if isinstance(obj, th.Tensor): tensor = obj break assert tensor is not None, "at least one argument must be a Tensor" # Force variances to be Tensors. Broadcasting helps convert scalars to # Tensors, but it does not work for th.exp(). logvar1, logvar2 = [ x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor) for x in (logvar1, logvar2) ] return 0.5 * ( -1.0 + logvar2 - logvar1 + th.exp(logvar1 - logvar2) + ((mean1 - mean2) ** 2) * th.exp(-logvar2) ) def approx_standard_normal_cdf(x): """ A fast approximation of the cumulative distribution function of the standard normal. """ return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3)))) def discretized_gaussian_log_likelihood(x, *, means, log_scales): """ Compute the log-likelihood of a Gaussian distribution discretizing to a given image. :param x: the target images. It is assumed that this was uint8 values, rescaled to the range [-1, 1]. :param means: the Gaussian mean Tensor. :param log_scales: the Gaussian log stddev Tensor. :return: a tensor like x of log probabilities (in nats). 
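    Each value is scored by the Gaussian probability mass falling in its
    discretization bin of width 2/255 (one uint8 level rescaled to [-1, 1]),
    with open-ended bins below -0.999 and above 0.999.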
""" assert x.shape == means.shape == log_scales.shape centered_x = x - means inv_stdv = th.exp(-log_scales) plus_in = inv_stdv * (centered_x + 1.0 / 255.0) cdf_plus = approx_standard_normal_cdf(plus_in) min_in = inv_stdv * (centered_x - 1.0 / 255.0) cdf_min = approx_standard_normal_cdf(min_in) log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12)) log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12)) cdf_delta = cdf_plus - cdf_min log_probs = th.where( x < -0.999, log_cdf_plus, th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))), ) assert log_probs.shape == x.shape return log_probs ================================================ FILE: diffusion/nn.py ================================================ """ Various utilities for neural networks. """ import math import torch as th import torch.nn as nn def mask_img(img, cond, mode, overlap, H=[128]): H = H[0] if type(mode) == tuple: cond[:, :, int((mode[2])/2):int((mode[3])/2), int((mode[0])/2):int((mode[1])/2)] =\ img[:, :, int((mode[2])/2):int((mode[3])/2), int((mode[0])/2):int((mode[1])/2)] if overlap == 'inpainting': cond[:, :, int((mode[2])/2):int((mode[3])/2), H:] = img[:, :, int((mode[2])/2):int((mode[3])/2), H:] cond[:, :, H:, int((mode[0])/2):int((mode[1])/2)] = img[:, :, H:, int((mode[0])/2):int((mode[1])/2)] return cond else : tri_overlap = int(overlap/2) if mode == 'downright': img[:, :, H-tri_overlap:H, :H] = cond[:, :, H-tri_overlap:H, :H] img[:, :, :H, H-tri_overlap:H] = cond[:, :, :H, H-tri_overlap:H] elif mode == 'downleft': img[:, :, H-tri_overlap:H, :H] = cond[:, :, H-tri_overlap:H, :H] img[:, :, :H, :tri_overlap] = cond[:, :, :H, :tri_overlap] elif mode == 'upright': img[:, :, :tri_overlap, :H] = cond[:, :, :tri_overlap, :H] img[:, :, :H, H-tri_overlap:H] = cond[:, :, :H, H-tri_overlap:H] elif mode == 'upleft': img[:, :, :tri_overlap, :H] = cond[:, :, :tri_overlap, :H] img[:, :, :H, :tri_overlap] = cond[:, :, :H, :tri_overlap] elif mode == 'down': img[:, :, H-tri_overlap:H, :] = cond[:, :, :tri_overlap, :] elif mode == 'up': img[:, :, :tri_overlap, :] = cond[:, :, H-tri_overlap:H, :] elif mode == 'right': img[:, :, :, H-tri_overlap:H] = cond[:, :, :, :tri_overlap] elif mode == 'left': img[:, :, :, :tri_overlap] = cond[:, :, :, H-tri_overlap:H] return img def compose_featmaps(feat_xy, feat_xz, feat_yz, tri_size=(128,128,16) , transpose=True): H, W, D = tri_size empty_block = th.zeros(list(feat_xy.shape[:-2]) + [D, D], dtype=feat_xy.dtype, device=feat_xy.device) if transpose: feat_yz = feat_yz.transpose(-1, -2) composed_map = th.cat( [th.cat([feat_xy, feat_xz], dim=-1), th.cat([feat_yz, empty_block], dim=-1)], dim=-2 ) return composed_map, (H, W, D) def decompose_featmaps(composed_map, tri_size=(128,128,16) , transpose=True): H, W, D = tri_size feat_xy = composed_map[..., :H, :W] # (C, H, W) feat_xz = composed_map[..., :H, W:] # (C, H, D) feat_yz = composed_map[..., H:, :W] # (C, W, D) if transpose: return feat_xy, feat_xz, feat_yz.transpose(-1, -2) else: return feat_xy, feat_xz, feat_yz # PyTorch 1.7 has SiLU, but we support PyTorch 1.5. class SiLU(nn.Module): def forward(self, x): return x * th.sigmoid(x) class GroupNorm32(nn.GroupNorm): def forward(self, x): return super().forward(x.float()).type(x.dtype) def conv_nd(dims, *args, **kwargs): """ Create a 1D, 2D, or 3D convolution module. 
""" if dims == 1: return nn.Conv1d(*args, **kwargs) elif dims == 2: return nn.Conv2d(*args, **kwargs) elif dims == 3: return nn.Conv3d(*args, **kwargs) raise ValueError(f"unsupported dimensions: {dims}") def linear(*args, **kwargs): """ Create a linear module. """ return nn.Linear(*args, **kwargs) def avg_pool_nd(dims, *args, **kwargs): """ Create a 1D, 2D, or 3D average pooling module. """ if dims == 1: return nn.AvgPool1d(*args, **kwargs) elif dims == 2: return nn.AvgPool2d(*args, **kwargs) elif dims == 3: return nn.AvgPool3d(*args, **kwargs) raise ValueError(f"unsupported dimensions: {dims}") def update_ema(target_params, source_params, rate=0.99): """ Update target parameters to be closer to those of source parameters using an exponential moving average. :param target_params: the target parameter sequence. :param source_params: the source parameter sequence. :param rate: the EMA rate (closer to 1 means slower). """ for targ, src in zip(target_params, source_params): targ.detach().mul_(rate).add_(src, alpha=1 - rate) def zero_module(module): """ Zero out the parameters of a module and return it. """ for p in module.parameters(): p.detach().zero_() return module def scale_module(module, scale): """ Scale the parameters of a module and return it. """ for p in module.parameters(): p.detach().mul_(scale) return module def mean_flat(tensor): """ Take the mean over all non-batch dimensions. """ return tensor.mean(dim=list(range(1, len(tensor.shape)))) def normalization(channels): """ Make a standard normalization layer. :param channels: number of input channels. :return: an nn.Module for normalization. """ return GroupNorm32(32, channels) def timestep_embedding(timesteps, dim, max_period=10000): """ Create sinusoidal timestep embeddings. :param timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional. :param dim: the dimension of the output. :param max_period: controls the minimum frequency of the embeddings. :return: an [N x dim] Tensor of positional embeddings. """ half = dim // 2 freqs = th.exp( -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half ).to(device=timesteps.device) args = timesteps[:, None].float() * freqs[None] embedding = th.cat([th.cos(args), th.sin(args)], dim=-1) if dim % 2: embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1) return embedding def checkpoint(func, inputs, params, flag): """ Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass. :param func: the function to evaluate. :param inputs: the argument sequence to pass to `func`. :param params: a sequence of parameters `func` depends on but does not explicitly take as arguments. :param flag: if False, disable gradient checkpointing. """ if flag: args = tuple(inputs) + tuple(params) return CheckpointFunction.apply(func, len(inputs), *args) else: return func(*inputs) class CheckpointFunction(th.autograd.Function): @staticmethod def forward(ctx, run_function, length, *args): ctx.run_function = run_function ctx.input_tensors = list(args[:length]) ctx.input_params = list(args[length:]) with th.no_grad(): output_tensors = ctx.run_function(*ctx.input_tensors) return output_tensors @staticmethod def backward(ctx, *output_grads): ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] with th.enable_grad(): # Fixes a bug where the first op in run_function modifies the # Tensor storage in place, which is not allowed for detach()'d # Tensors. 
shallow_copies = [x.view_as(x) for x in ctx.input_tensors] output_tensors = ctx.run_function(*shallow_copies) input_grads = th.autograd.grad( output_tensors, ctx.input_tensors + ctx.input_params, output_grads, allow_unused=True, ) del ctx.input_tensors del ctx.input_params del output_tensors return (None, None) + input_grads ================================================ FILE: diffusion/resample.py ================================================ from abc import ABC, abstractmethod import numpy as np import torch as th import torch.distributed as dist def create_named_schedule_sampler(name, diffusion): """ Create a ScheduleSampler from a library of pre-defined samplers. :param name: the name of the sampler. :param diffusion: the diffusion object to sample for. """ if name == "uniform": return UniformSampler(diffusion) elif name == "loss-second-moment": return LossSecondMomentResampler(diffusion) else: raise NotImplementedError(f"unknown schedule sampler: {name}") class ScheduleSampler(ABC): """ A distribution over timesteps in the diffusion process, intended to reduce variance of the objective. By default, samplers perform unbiased importance sampling, in which the objective's mean is unchanged. However, subclasses may override sample() to change how the resampled terms are reweighted, allowing for actual changes in the objective. """ @abstractmethod def weights(self): """ Get a numpy array of weights, one per diffusion step. The weights needn't be normalized, but must be positive. """ def sample(self, batch_size, device): """ Importance-sample timesteps for a batch. :param batch_size: the number of timesteps. :param device: the torch device to save to. :return: a tuple (timesteps, weights): - timesteps: a tensor of timestep indices. - weights: a tensor of weights to scale the resulting losses. """ w = self.weights() p = w / np.sum(w) indices_np = np.random.choice(len(p), size=(batch_size,), p=p) indices = th.from_numpy(indices_np).long().to(device) weights_np = 1 / (len(p) * p[indices_np]) weights = th.from_numpy(weights_np).float().to(device) return indices, weights class UniformSampler(ScheduleSampler): def __init__(self, diffusion): self.diffusion = diffusion self._weights = np.ones([diffusion.num_timesteps]) def weights(self): return self._weights class LossAwareSampler(ScheduleSampler): def update_with_local_losses(self, local_ts, local_losses): """ Update the reweighting using losses from a model. Call this method from each rank with a batch of timesteps and the corresponding losses for each of those timesteps. This method will perform synchronization to make sure all of the ranks maintain the exact same reweighting. :param local_ts: an integer Tensor of timesteps. :param local_losses: a 1D Tensor of losses. """ batch_sizes = [ th.tensor([0], dtype=th.int32, device=local_ts.device) for _ in range(dist.get_world_size()) ] dist.all_gather( batch_sizes, th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device), ) # Pad all_gather batches to be the maximum batch size. 
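        # all_gather requires equal-shaped tensors on every rank, so smaller
        # batches are zero-padded up to max_bs and the padding is sliced off
        # again after the gather.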
        batch_sizes = [x.item() for x in batch_sizes]
        max_bs = max(batch_sizes)
        timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes]
        loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes]
        dist.all_gather(timestep_batches, local_ts)
        dist.all_gather(loss_batches, local_losses)
        timesteps = [
            x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]
        ]
        losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]
        self.update_with_all_losses(timesteps, losses)

    @abstractmethod
    def update_with_all_losses(self, ts, losses):
        """
        Update the reweighting using losses from a model.

        Sub-classes should override this method to update the reweighting
        using losses from the model.

        This method directly updates the reweighting without synchronizing
        between workers. It is called by update_with_local_losses from all
        ranks with identical arguments. Thus, it should have deterministic
        behavior to maintain state across workers.

        :param ts: a list of int timesteps.
        :param losses: a list of float losses, one per timestep.
        """

class LossSecondMomentResampler(LossAwareSampler):
    def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):
        self.diffusion = diffusion
        self.history_per_term = history_per_term
        self.uniform_prob = uniform_prob
        self._loss_history = np.zeros(
            [diffusion.num_timesteps, history_per_term], dtype=np.float64
        )
        # np.int was removed in NumPy 1.24; use np.int64 instead.
        self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int64)

    def weights(self):
        if not self._warmed_up():
            return np.ones([self.diffusion.num_timesteps], dtype=np.float64)
        weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))
        weights /= np.sum(weights)
        weights *= 1 - self.uniform_prob
        weights += self.uniform_prob / len(weights)
        return weights

    def update_with_all_losses(self, ts, losses):
        for t, loss in zip(ts, losses):
            if self._loss_counts[t] == self.history_per_term:
                # Shift out the oldest loss term.
                self._loss_history[t, :-1] = self._loss_history[t, 1:]
                self._loss_history[t, -1] = loss
            else:
                self._loss_history[t, self._loss_counts[t]] = loss
                self._loss_counts[t] += 1

    def _warmed_up(self):
        return (self._loss_counts == self.history_per_term).all()

================================================
FILE: diffusion/respace.py
================================================
import numpy as np
import torch as th

from diffusion.gaussian_diffusion import GaussianDiffusion

def space_timesteps(num_timesteps, section_counts):
    """
    Create a list of timesteps to use from an original diffusion process,
    given the number of timesteps we want to take from equally-sized portions
    of the original process.

    For example, if there's 300 timesteps and the section counts are [10,15,20]
    then the first 100 timesteps are strided to be 10 timesteps, the second 100
    are strided to be 15 timesteps, and the final 100 are strided to be 20.

    If the stride is a string starting with "ddim", then the fixed striding
    from the DDIM paper is used, and only one section is allowed.

    :param num_timesteps: the number of diffusion steps in the original
                          process to divide up.
    :param section_counts: either a list of numbers, or a string containing
                           comma-separated numbers, indicating the step count
                           per section. As a special case, use "ddimN" where N
                           is a number of steps to use the striding from the
                           DDIM paper.
    :return: a set of diffusion steps from the original process to use.
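    For example, space_timesteps(1000, "ddim50") returns the 50 steps
    {0, 20, 40, ..., 980}, and space_timesteps(1000, [250]) keeps 250 evenly
    spaced steps.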
""" if isinstance(section_counts, str): if section_counts.startswith("ddim"): desired_count = int(section_counts[len("ddim") :]) for i in range(1, num_timesteps): if len(range(0, num_timesteps, i)) == desired_count: return set(range(0, num_timesteps, i)) raise ValueError( f"cannot create exactly {num_timesteps} steps with an integer stride" ) section_counts = [int(x) for x in section_counts.split(",")] size_per = num_timesteps // len(section_counts) extra = num_timesteps % len(section_counts) start_idx = 0 all_steps = [] for i, section_count in enumerate(section_counts): size = size_per + (1 if i < extra else 0) if size < section_count: raise ValueError( f"cannot divide section of {size} steps into {section_count}" ) if section_count <= 1: frac_stride = 1 else: frac_stride = (size - 1) / (section_count - 1) cur_idx = 0.0 taken_steps = [] for _ in range(section_count): taken_steps.append(start_idx + round(cur_idx)) cur_idx += frac_stride all_steps += taken_steps start_idx += size return set(all_steps) class SpacedDiffusion(GaussianDiffusion): """ A diffusion process which can skip steps in a base diffusion process. :param use_timesteps: a collection (sequence or set) of timesteps from the original diffusion process to retain. :param kwargs: the kwargs to create the base diffusion process. """ def __init__(self, use_timesteps, **kwargs): self.use_timesteps = set(use_timesteps) self.timestep_map = [] self.original_num_steps = len(kwargs["betas"]) base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa last_alpha_cumprod = 1.0 new_betas = [] for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod): if i in self.use_timesteps: new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) last_alpha_cumprod = alpha_cumprod self.timestep_map.append(i) kwargs["betas"] = np.array(new_betas) super().__init__(**kwargs) def p_mean_variance( self, model, *args, **kwargs ): # pylint: disable=signature-differs return super().p_mean_variance(self._wrap_model(model), *args, **kwargs) def training_losses( self, model, *args, **kwargs ): # pylint: disable=signature-differs return super().training_losses(self._wrap_model(model), *args, **kwargs) def condition_mean(self, cond_fn, *args, **kwargs): return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs) def condition_score(self, cond_fn, *args, **kwargs): return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs) def _wrap_model(self, model): if isinstance(model, _WrappedModel): return model return _WrappedModel( model, self.timestep_map, self.rescale_timesteps, self.original_num_steps ) def _scale_timesteps(self, t): # Scaling is done by the wrapped model. 
return t class _WrappedModel: def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps): self.model = model self.timestep_map = timestep_map self.rescale_timesteps = rescale_timesteps self.original_num_steps = original_num_steps def __call__(self, x, ts, H, W, D, y): map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) new_ts = map_tensor[ts] if self.rescale_timesteps: new_ts = new_ts.float() * (1000.0 / self.original_num_steps) return self.model(x, new_ts, H, W, D, y) ================================================ FILE: diffusion/scheduler.py ================================================ def get_schedule_jump(t_T, jump_length, jump_n_sample): jumps = {} for j in range(0, t_T - jump_length, jump_length): jumps[j] = jump_n_sample - 1 t = t_T ts = [] while t >= 1: t = t-1 ts.append(t) if jumps.get(t, 0) > 0: jumps[t] = jumps[t] - 1 for _ in range(jump_length): t = t + 1 ts.append(t) ts.append(-1) _check_times(ts, -1, t_T) return ts def _check_times(times, t_0, t_T): # Check end assert times[0] > times[1], (times[0], times[1]) # Check beginning assert times[-1] == -1, times[-1] # Steplength = 1 for t_last, t_cur in zip(times[:-1], times[1:]): assert abs(t_last - t_cur) == 1, (t_last, t_cur) # Value range for t in times: assert t >= t_0, (t, t_0) assert t <= t_T, (t, t_T) ================================================ FILE: diffusion/script_util.py ================================================ from diffusion.unet_triplane import TriplaneUNetModel, BEVUNetModel from diffusion.respace import SpacedDiffusion, space_timesteps from diffusion import gaussian_diffusion as gd def create_model_and_diffusion_from_args(args): diffusion = create_gaussian_diffusion(args) if (args.diff_net_type == "unet_bev") or (args.diff_net_type == "unet_voxel"): model = BEVUNetModel(args) elif args.diff_net_type == "unet_tri": model = TriplaneUNetModel(args) return model, diffusion def create_gaussian_diffusion(args): steps = args.steps predict_xstart = args.predict_xstart learn_sigma = args.learn_sigma timestep_respacing= args.timestep_respacing sigma_small=False noise_schedule="linear" use_kl=False rescale_timesteps=False rescale_learned_sigmas=False betas = gd.get_named_beta_schedule(noise_schedule, steps) if use_kl: loss_type = gd.LossType.RESCALED_KL elif rescale_learned_sigmas: loss_type = gd.LossType.RESCALED_MSE else: loss_type = gd.LossType.MSE if not timestep_respacing: timestep_respacing = [steps] return SpacedDiffusion( use_timesteps=space_timesteps(steps, timestep_respacing), args=args, betas=betas, model_mean_type=( gd.ModelMeanType.EPSILON if not predict_xstart else gd.ModelMeanType.START_X ), model_var_type=( ( gd.ModelVarType.FIXED_LARGE if not sigma_small else gd.ModelVarType.FIXED_SMALL ) if not learn_sigma else gd.ModelVarType.LEARNED_RANGE ), loss_type=loss_type, rescale_timesteps=rescale_timesteps, ) ================================================ FILE: diffusion/train_util.py ================================================ import copy import functools import os import blobfile as bf import torch as th from torch.optim import AdamW from tensorboardX import SummaryWriter from diffusion import logger from diffusion.fp16_util import MixedPrecisionTrainer from diffusion.nn import update_ema from diffusion.resample import LossAwareSampler, UniformSampler from utils.common_util import draw_scalar_field2D from utils import dist_util # For ImageNet experiments, this was a good default value. 
# We found that the lg_loss_scale quickly climbed to # 20-21 within the first ~1K steps of training. INITIAL_LOG_LOSS_SCALE = 20.0 class TrainLoop: def __init__( self, *, diffusion_net, triplane_loss_type, timestep_respacing, training_step, model, diffusion, data, val_data, ssc_refine, batch_size, microbatch, lr, ema_rate, log_interval, save_interval, resume_checkpoint, use_fp16=False, fp16_scale_growth=1e-3, schedule_sampler=None, weight_decay=0.0, lr_anneal_steps=0, ): self.triplane_loss_type = triplane_loss_type self.model = model self.diffusion = diffusion self.data = data self.val_data = val_data self.ssc_refine = ssc_refine self.training_step = training_step self.timestep_respacing = timestep_respacing self.diffusion_net = diffusion_net self.batch_size = batch_size self.microbatch = microbatch if microbatch > 0 else batch_size self.lr = lr self.ema_rate = ( [ema_rate] if isinstance(ema_rate, float) else [float(x) for x in ema_rate.split(",")] ) self.log_interval = log_interval self.save_interval = save_interval self.resume_checkpoint = resume_checkpoint self.use_fp16 = use_fp16 self.fp16_scale_growth = fp16_scale_growth self.schedule_sampler = schedule_sampler or UniformSampler(diffusion) self.weight_decay = weight_decay self.lr_anneal_steps = lr_anneal_steps tblog_dir = os.path.join(logger.get_current().get_dir(), "tblog") self.tb = SummaryWriter(tblog_dir) self.step = 0 self.resume_step = 0 self.global_batch = self.batch_size # * dist.get_world_size() self.sync_cuda = th.cuda.is_available() self._load_and_sync_parameters() self.mp_trainer = MixedPrecisionTrainer( model=self.model, use_fp16=self.use_fp16, fp16_scale_growth=fp16_scale_growth, ) self.opt = AdamW( self.mp_trainer.master_params, lr=self.lr, weight_decay=self.weight_decay ) if self.resume_step: self._load_optimizer_state() # Model was resumed, either due to a restart or a checkpoint # being specified at the command line. 
self.ema_params = [ self._load_ema_parameters(rate) for rate in self.ema_rate ] else: self.ema_params = [ copy.deepcopy(self.mp_trainer.master_params) for _ in range(len(self.ema_rate)) ] self.use_ddp = False self.ddp_model = self.model def _load_and_sync_parameters(self): resume_checkpoint = find_resume_checkpoint() or self.resume_checkpoint if resume_checkpoint: self.resume_step = parse_resume_step_from_filename(resume_checkpoint) # if dist.get_rank() == 0: logger.log(f"loading model from checkpoint: {resume_checkpoint}...") self.model.load_state_dict( dist_util.load_state_dict( resume_checkpoint, map_location=dist_util.dev() ) ) # dist_util.sync_params(self.model.parameters()) def _load_ema_parameters(self, rate): ema_params = copy.deepcopy(self.mp_trainer.master_params) main_checkpoint = find_resume_checkpoint() or self.resume_checkpoint ema_checkpoint = find_ema_checkpoint(main_checkpoint, self.resume_step, rate) if ema_checkpoint: # if dist.get_rank() == 0: logger.log(f"loading EMA from checkpoint: {ema_checkpoint}...") state_dict = dist_util.load_state_dict( ema_checkpoint, map_location=dist_util.dev() ) ema_params = self.mp_trainer.state_dict_to_master_params(state_dict) # dist_util.sync_params(ema_params) return ema_params def _load_optimizer_state(self): main_checkpoint = find_resume_checkpoint() or self.resume_checkpoint opt_checkpoint = bf.join( bf.dirname(main_checkpoint), f"opt{self.resume_step:06}.pt" ) if bf.exists(opt_checkpoint): logger.log(f"loading optimizer state from checkpoint: {opt_checkpoint}") state_dict = dist_util.load_state_dict( opt_checkpoint, map_location=dist_util.dev() ) self.opt.load_state_dict(state_dict) def run_loop(self): while ( not self.lr_anneal_steps or self.step + self.resume_step < self.lr_anneal_steps ): batch, cond = next(self.data) self.run_step(batch, cond) if self.step % self.log_interval == 0 : logger.dumpkvs() if self.step % self.save_interval == 0 and self.step > 0: self.save() # Run for a finite amount of time in integration tests. if os.environ.get("DIFFUSION_TRAINING_TEST", "") and self.step > 0: return self.step += 1 if self.diffusion_net != 'unet_voxel': # Save the last checkpoint if it wasn't already saved. 
if (self.step - 1) % self.save_interval != 0: self.save() def run_step(self, batch, cond): self.forward_backward(batch, cond) took_step = self.mp_trainer.optimize(self.opt) if took_step: self._update_ema() self._anneal_lr() self.log_step() if self.diffusion_net != 'unet_voxel': if self.step % self.log_interval == 0: self._sample_and_visualize() def _sample_and_visualize(self): print("Sampling and visualizing...") self.ddp_model.eval() batch, cond = next(self.val_data) _shape = [len(cond['path'])] + list(batch.shape[1:]) with th.no_grad(): if self.ssc_refine: large_T = th.tensor([self.training_step-1] * _shape[0], device=dist_util.dev()) batch = batch.to(dist_util.dev()) m_t = self.diffusion.q_sample(batch, large_T) noise = self.ddp_model(m_t, large_T, cond['H'], cond['W'], cond['D'], cond['y']).to(dist_util.dev()) else : noise = None sample = self.diffusion.p_sample_loop(self.ddp_model, _shape, noise = noise, progress=True, model_kwargs=cond, clip_denoised=True) sample = sample.detach().cpu().numpy() feat_dim = sample.shape[1] for i in range(sample.shape[0]): for c in range(feat_dim//4): fig = draw_scalar_field2D(sample[i, c*4]) self.tb.add_figure(f"sample{i}/channel{c*4}", fig, global_step=self.step) if self.ssc_refine : for c in range(feat_dim//4): fig = draw_scalar_field2D(cond['y'][i, c*4].detach().cpu().numpy()) self.tb.add_figure(f"sample{i}/condition{c*4}", fig, global_step=self.step) for c in range(feat_dim//4): fig = draw_scalar_field2D(batch[i, c*4].detach().cpu().numpy()) self.tb.add_figure(f"sample{i}/gt{c*4}", fig, global_step=self.step) self.ddp_model.train() def forward_backward(self, batch, cond): self.mp_trainer.zero_grad() for i in range(0, batch.shape[0], self.microbatch): # Eliminates the microbatch feature assert i == 0 assert self.microbatch == self.batch_size micro = batch.to(dist_util.dev()) micro_cond = {} for k, v in cond.items(): if (k != 'path'): micro_cond[k] = v.to(dist_util.dev()) else : micro_cond[k] = [i for i in v] last_batch = (i + self.microbatch) >= batch.shape[0] t, weights = self.schedule_sampler.sample(micro.shape[0], dist_util.dev()) compute_losses = functools.partial( self.diffusion.training_losses, self.ddp_model, micro, t, model_kwargs=micro_cond,) if last_batch or not self.use_ddp: losses = compute_losses() else: with self.ddp_model.no_sync(): losses = compute_losses() if isinstance(self.schedule_sampler, LossAwareSampler): self.schedule_sampler.update_with_local_losses( t, losses["loss"].detach() ) loss = (losses["loss"] * weights).mean() self.mp_trainer.backward(loss) if self.step % 10 == 0: self.log_loss_dict( self.diffusion, t, {k: v * weights for k, v in losses.items()} ) def _update_ema(self): for rate, params in zip(self.ema_rate, self.ema_params): update_ema(params, self.mp_trainer.master_params, rate=rate) def _anneal_lr(self): if not self.lr_anneal_steps: return frac_done = (self.step + self.resume_step) / self.lr_anneal_steps lr = self.lr * (1 - frac_done) for param_group in self.opt.param_groups: param_group["lr"] = lr def log_step(self): logger.logkv("step", self.step + self.resume_step) logger.logkv("samples", (self.step + self.resume_step + 1) * self.global_batch) logger.logkv("lr", self.opt.param_groups[0]["lr"]) if self.step % 10 == 0: self.tb.add_scalar("step", self.step + self.resume_step, global_step=self.step) self.tb.add_scalar("samples", (self.step + self.resume_step + 1) * self.global_batch, global_step=self.step) self.tb.add_scalar("lr", self.opt.param_groups[0]["lr"], global_step=self.step) def save(self): def 
save_checkpoint(rate, params):
            state_dict = self.mp_trainer.master_params_to_state_dict(params)
            # if dist.get_rank() == 0:
            logger.log(f"saving model {rate}...")
            if not rate:
                filename = f"model{(self.step+self.resume_step):06d}.pt"
            else:
                filename = f"ema_{rate}_{(self.step+self.resume_step):06d}.pt"
            with bf.BlobFile(bf.join(get_blob_logdir(), filename), "wb") as f:
                th.save(state_dict, f)

        # save_checkpoint(0, self.mp_trainer.master_params)
        for rate, params in zip(self.ema_rate, self.ema_params):
            save_checkpoint(rate, params)

        # if dist.get_rank() == 0:
        with bf.BlobFile(
            bf.join(get_blob_logdir(), f"opt{(self.step+self.resume_step):06d}.pt"),
            "wb",
        ) as f:
            th.save(self.opt.state_dict(), f)
        # dist.barrier()

    def log_loss_dict(self, diffusion, ts, losses):
        for key, values in losses.items():
            loss_dict = {}
            logger.logkv_mean(key, values.mean().item())
            loss_dict[f"{key}_mean"] = values.mean().item()
            # Log the quantiles (four quartiles, in particular).
            for sub_t, sub_loss in zip(ts.cpu().numpy(), values.detach().cpu().numpy()):
                quartile = int(4 * sub_t / diffusion.num_timesteps)
                logger.logkv_mean(f"{key}_q{quartile}", sub_loss)
                loss_dict[f"{key}_q{quartile}"] = sub_loss
            self.tb.add_scalars(f"{key}", loss_dict, global_step=self.step)

def parse_resume_step_from_filename(filename):
    """
    Parse the checkpoint's step count NNNNNN from filenames such as
    path/to/ema_{rate}_NNNNNN.pt; the step is taken from the last
    underscore-separated field.
    """
    split = filename.split("_")[-1].split(".")[0]
    return int(split)

def get_blob_logdir():
    # You can change this to be a separate path to save checkpoints to
    # a blobstore or some external drive.
    return logger.get_dir()

def find_resume_checkpoint():
    # On your infrastructure, you may want to override this to automatically
    # discover the latest checkpoint on your blob storage, etc.
    return None

def find_ema_checkpoint(main_checkpoint, step, rate):
    if main_checkpoint is None:
        return None
    filename = f"ema_{rate}_{(step):06d}.pt"
    path = bf.join(bf.dirname(main_checkpoint), filename)
    if bf.exists(path):
        return path
    return None

================================================
FILE: diffusion/triplane_util.py
================================================
import torch
import torch.nn.functional as F
import numpy as np

from utils.parser_util import get_gen_args
from utils.utils import make_query
from diffusion.script_util import create_model_and_diffusion_from_args
from encoding.networks import AutoEncoderGroupSkip
from dataset.path_manager import *
from diffusion.nn import decompose_featmaps, compose_featmaps

def augment(triplane, p, tri_size=(128,128,32)):
    H, W, D = tri_size
    triplane = torch.from_numpy(triplane).float()
    feat_xy, feat_xz, feat_zy = decompose_featmaps(triplane, tri_size, False)
    if p == 0:  # horizontal flip
        feat_xy = torch.flip(feat_xy, [2])
        feat_zy = torch.flip(feat_zy, [2])
    elif p == 1:  # vertical flip
        feat_xy = torch.flip(feat_xy, [1])
        feat_xz = torch.flip(feat_xz, [1])
    elif p == 2:  # horizontal and vertical flip
        feat_xy = torch.flip(feat_xy, [2])
        feat_zy = torch.flip(feat_zy, [2])
        feat_xy = torch.flip(feat_xy, [1])
        feat_xz = torch.flip(feat_xz, [1])
    elif p == 3:  # additive Gaussian noise
        feat_xy += torch.randn_like(feat_xy) * 0.05
        feat_xz += torch.randn_like(feat_xz) * 0.05
        feat_zy += torch.randn_like(feat_zy) * 0.05
    elif p == 4:  # crop & resize
        size = torch.randint(0, 3, (1,)).item()
        s = 80 + size*16
        region = 128 - s
        x, y = torch.randint(0, region, (2,)).tolist()
        feat_xy = feat_xy[:, y:y+s, x:x+s]
        feat_xz = feat_xz[:, y:y+s, :]
        feat_zy = feat_zy[:, :, x:x+s]
        feat_xy = F.interpolate(feat_xy.unsqueeze(0).float(), size=(H, W), mode='bilinear').squeeze(0)
        feat_xz =
F.interpolate(feat_xz.unsqueeze(0).float(), size=(H, D), mode='bilinear').squeeze(0) feat_zy = F.interpolate(feat_zy.unsqueeze(0).float(), size=(D, W), mode='bilinear').squeeze(0) triplane, _ = compose_featmaps(feat_xy, feat_xz, feat_zy, tri_size, False) return np.array(triplane) def build_sampling_model(args): H, W, D, learning_map, learning_map_inv, class_name, grid_size, tri_size, num_class, max_points= get_gen_args(args) if args.dataset == 'kitti' : args.data_path=SEMKITTI_DATA_PATH args.yaml_path=SEMKITTI_YAML_PATH elif args.dataset == 'carla' : args.data_path=CARLA_DATA_PATH args.yaml_path=CARLA_YAML_PATH args.num_class = num_class DIFF_PATH = SSC_DIFF_PATH if args.ssc_refine else GEN_DIFF_PATH model, diffusion = create_model_and_diffusion_from_args(args) model.load_state_dict(torch.load(DIFF_PATH, map_location="cpu")) model = model.cuda().eval() ae = AutoEncoderGroupSkip(args) ae.load_state_dict(torch.load(AE_PATH, map_location='cpu')['model']) ae = ae.cuda().eval() sample_fn = (diffusion.p_sample_loop if not args.repaint else diffusion.p_sample_loop_scene_repaint) C = args.geo_feat_channels coords, query = make_query(grid_size) coords, query = coords.cuda(), query.cuda() out_shape = [args.batch_size, C, H + D, W + D] return model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, class_name, args ================================================ FILE: diffusion/unet_triplane.py ================================================ from abc import abstractmethod import torch as th import torch.nn as nn import torch.nn.functional as F from diffusion.fp16_util import convert_module_to_f16, convert_module_to_f32 from diffusion.nn import ( checkpoint, linear, SiLU, zero_module, normalization, timestep_embedding, compose_featmaps, decompose_featmaps ) class TriplaneConv(nn.Module): def __init__(self, channels, out_channels, kernel_size, padding, is_rollout=True) -> None: super().__init__() in_channels = channels * 3 if is_rollout else channels self.is_rollout = is_rollout self.conv_xy = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding) self.conv_xz = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding) self.conv_yz = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding) def forward(self, featmaps): # tpl: [B, C, H + D, W + D] tpl_xy, tpl_xz, tpl_yz = featmaps H, W = tpl_xy.shape[-2:] D = tpl_xz.shape[-1] if self.is_rollout: tpl_xy_h = th.cat([tpl_xy, th.mean(tpl_yz, dim=-1, keepdim=True).transpose(-1, -2).expand_as(tpl_xy), th.mean(tpl_xz, dim=-1, keepdim=True).expand_as(tpl_xy)], dim=1) # [B, C * 3, H, W] tpl_xz_h = th.cat([tpl_xz, th.mean(tpl_xy, dim=-1, keepdim=True).expand_as(tpl_xz), th.mean(tpl_yz, dim=-2, keepdim=True).expand_as(tpl_xz)], dim=1) # [B, C * 3, H, D] tpl_yz_h = th.cat([tpl_yz, th.mean(tpl_xy, dim=-2, keepdim=True).transpose(-1, -2).expand_as(tpl_yz), th.mean(tpl_xz, dim=-2, keepdim=True).expand_as(tpl_yz)], dim=1) # [B, C * 3, W, D] else: tpl_xy_h = tpl_xy tpl_xz_h = tpl_xz tpl_yz_h = tpl_yz assert tpl_xy_h.shape[-2] == H and tpl_xy_h.shape[-1] == W assert tpl_xz_h.shape[-2] == H and tpl_xz_h.shape[-1] == D assert tpl_yz_h.shape[-2] == W and tpl_yz_h.shape[-1] == D if tpl_xy_h.dtype != [param.dtype for param in self.conv_xy.parameters()][0]: if tpl_xy_h.dtype == th.float16: tpl_xy_h = self.conv_xy(tpl_xy_h.float()) tpl_xz_h = self.conv_xz(tpl_xz_h.float()) tpl_yz_h = self.conv_yz(tpl_yz_h.float()) else: tpl_xy_h = self.conv_xy(tpl_xy_h.half()) tpl_xz_h = self.conv_xz(tpl_xz_h.half()) 
tpl_yz_h = self.conv_yz(tpl_yz_h.half()) else: tpl_xy_h = self.conv_xy(tpl_xy_h) tpl_xz_h = self.conv_xz(tpl_xz_h) tpl_yz_h = self.conv_yz(tpl_yz_h) return (tpl_xy_h, tpl_xz_h, tpl_yz_h) class TriplaneNorm(nn.Module): def __init__(self, channels) -> None: super().__init__() self.norm_xy = normalization(channels) self.norm_xz = normalization(channels) self.norm_yz = normalization(channels) def forward(self, featmaps): # tpl: [B, C, H + D, W + D] tpl_xy, tpl_xz, tpl_yz = featmaps H, W = tpl_xy.shape[-2:] D = tpl_xz.shape[-1] tpl_xy_h = self.norm_xy(tpl_xy) # [B, C, H, W] tpl_xz_h = self.norm_xz(tpl_xz) # [B, C, H, D] tpl_yz_h = self.norm_yz(tpl_yz) # [B, C, W, D] assert tpl_xy_h.shape[-2] == H and tpl_xy_h.shape[-1] == W assert tpl_xz_h.shape[-2] == H and tpl_xz_h.shape[-1] == D assert tpl_yz_h.shape[-2] == W and tpl_yz_h.shape[-1] == D return (tpl_xy_h, tpl_xz_h, tpl_yz_h) class TriplaneSiLU(nn.Module): def __init__(self) -> None: super().__init__() self.silu = SiLU() def forward(self, featmaps): # tpl: [B, C, H + D, W + D] tpl_xy, tpl_xz, tpl_yz = featmaps return (self.silu(tpl_xy), self.silu(tpl_xz), self.silu(tpl_yz)) class TriplaneUpsample2x(nn.Module): def __init__(self, tri_z_down, conv_up, channels=None) -> None: super().__init__() self.tri_z_down = tri_z_down self.conv_up = conv_up if conv_up : if self.tri_z_down: self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) self.conv_xz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) self.conv_yz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) else : self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) self.conv_xz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=(1,0), stride=(2, 1)) self.conv_yz = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=(1,0), stride=(2, 1)) def forward(self, featmaps): # tpl: [B, C, H + D, W + D] tpl_xy, tpl_xz, tpl_yz = featmaps H, W = tpl_xy.shape[-2:] D = tpl_xz.shape[-1] if self.conv_up: tpl_xy = self.conv_xy(tpl_xy) tpl_xz = self.conv_xz(tpl_xz) tpl_yz = self.conv_yz(tpl_yz) else : tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='bilinear', align_corners=False) if self.tri_z_down: tpl_xz = F.interpolate(tpl_xz, scale_factor=2, mode='bilinear', align_corners=False) tpl_yz = F.interpolate(tpl_yz, scale_factor=2, mode='bilinear', align_corners=False) else : tpl_xz = F.interpolate(tpl_xz, scale_factor=(2, 1), mode='bilinear', align_corners=False) tpl_yz = F.interpolate(tpl_yz, scale_factor=(2, 1), mode='bilinear', align_corners=False) return (tpl_xy, tpl_xz, tpl_yz) class TriplaneDownsample2x(nn.Module): def __init__(self, tri_z_down, conv_down, channels=None) -> None: super().__init__() self.tri_z_down = tri_z_down self.conv_down = conv_down if conv_down : if self.tri_z_down: self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') self.conv_xz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') self.conv_yz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') else : self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') self.conv_xz = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=(2, 1), padding_mode='replicate') self.conv_yz = 
nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=(2, 1), padding_mode='replicate') def forward(self, featmaps): # tpl: [B, C, H + D, W + D] tpl_xy, tpl_xz, tpl_yz = featmaps H, W = tpl_xy.shape[-2:] D = tpl_xz.shape[-1] if self.conv_down: tpl_xy = self.conv_xy(tpl_xy) tpl_xz = self.conv_xz(tpl_xz) tpl_yz = self.conv_yz(tpl_yz) else : tpl_xy = F.avg_pool2d(tpl_xy, kernel_size=2, stride=2) if self.tri_z_down: tpl_xz = F.avg_pool2d(tpl_xz, kernel_size=2, stride=2) tpl_yz = F.avg_pool2d(tpl_yz, kernel_size=2, stride=2) else : tpl_xz = F.avg_pool2d(tpl_xz, kernel_size=(2, 1), stride=(2, 1)) tpl_yz = F.avg_pool2d(tpl_yz, kernel_size=(2, 1), stride=(2, 1)) return (tpl_xy, tpl_xz, tpl_yz) class BeVplaneNorm(nn.Module): def __init__(self, channels) -> None: super().__init__() self.norm_xy = normalization(channels) def forward(self, tpl_xy): tpl_xy_h = self.norm_xy(tpl_xy) # [B, C, H, W] return tpl_xy_h class BeVplaneSiLU(nn.Module): def __init__(self) -> None: super().__init__() self.silu = SiLU() def forward(self, tpl_xy): # tpl: [B, C, H + D, W + D] return self.silu(tpl_xy) class BeVplaneUpsample2x(nn.Module): def __init__(self, tri_z_down, conv_up, channels=None, voxelfea=False) -> None: super().__init__() self.tri_z_down = tri_z_down self.conv_up = conv_up self.voxelfea = voxelfea if conv_up : if voxelfea: self.conv_xy = nn.ConvTranspose3d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) else : self.conv_xy = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1, output_padding=1, stride=2) def forward(self, tpl_xy): # tpl: [B, C, H + D, W + D] if self.conv_up: tpl_xy = self.conv_xy(tpl_xy) else : if self.voxelfea: tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='trilinear', align_corners=False) else : tpl_xy = F.interpolate(tpl_xy, scale_factor=2, mode='bilinear', align_corners=False) return tpl_xy class BeVplaneDownsample2x(nn.Module): def __init__(self, tri_z_down, conv_down, channels=None, voxelfea=False) -> None: super().__init__() self.tri_z_down = tri_z_down self.conv_down = conv_down self.voxelfea = voxelfea if conv_down : if voxelfea: self.conv_xy = nn.Conv3d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') else : self.conv_xy = nn.Conv2d(channels, channels, kernel_size=3, padding=1, stride=2, padding_mode='replicate') def forward(self, tpl_xy): # tpl: [B, C, H + D, W + D] if self.conv_down: tpl_xy = self.conv_xy(tpl_xy) else : if self.voxelfea : tpl_xy = F.avg_pool3d(tpl_xy, kernel_size=2, stride=2) else : tpl_xy = F.avg_pool2d(tpl_xy, kernel_size=2, stride=2) return tpl_xy class BeVplaneConv(nn.Module): def __init__(self, channels, out_channels, kernel_size, padding, voxelfea=False) -> None: super().__init__() in_channels = channels if voxelfea : self.conv_xy = nn.Conv3d(in_channels, out_channels, kernel_size, padding=padding) else: self.conv_xy = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding) def forward(self, tpl_xy): # tpl: [B, C, H + D, W + D] tpl_xy_h = self.conv_xy(tpl_xy) return tpl_xy_h class TimestepBlock(nn.Module): """ Any module where forward() takes timestep embeddings as a second argument. """ @abstractmethod def forward(self, x, emb): """ Apply the module to `x` given `emb` timestep embeddings. """ class TimestepEmbedSequential(nn.Sequential, TimestepBlock): """ A sequential module that passes timestep embeddings to the children that support it as an extra input. 
""" def forward(self, x, emb): for layer in self: if isinstance(layer, TimestepBlock): x = layer(x, emb) else: x = layer(x) return x class TriplaneResBlock(TimestepBlock): """ A residual block that can optionally change the number of channels. :param channels: the number of input channels. :param emb_channels: the number of timestep embedding channels. :param dropout: the rate of dropout. :param out_channels: if specified, the number of out channels. :param use_conv: if True and out_channels is specified, use a spatial convolution instead of a smaller 1x1 convolution to change the channels in the skip connection. :param dims: determines if the signal is 1D, 2D, or 3D. :param use_checkpoint: if True, use gradient checkpointing on this module. :param up: if True, use this block for upsampling. :param down: if True, use this block for downsampling. """ def __init__( self, channels, emb_channels, out_channels=None, level=(128,128,16), use_conv=False, use_scale_shift_norm=True, use_checkpoint=False, up=False, down=False, is_rollout=True, ): super().__init__() self.channels = channels self.emb_channels = emb_channels self.out_channels = out_channels or channels self.use_conv = use_conv self.use_checkpoint = use_checkpoint self.use_scale_shift_norm = use_scale_shift_norm self.level=level self.in_layers = nn.Sequential( TriplaneNorm(channels), TriplaneSiLU(), TriplaneConv(channels, self.out_channels, 3, padding=1, is_rollout=is_rollout), ) self.updown = up or down if up: self.h_upd = TriplaneUpsample2x() self.x_upd = TriplaneUpsample2x() elif down: self.h_upd = TriplaneDownsample2x() self.x_upd = TriplaneDownsample2x() else: self.h_upd = self.x_upd = nn.Identity() self.emb_layers = nn.Sequential( SiLU(), linear( emb_channels, 2 * self.out_channels if use_scale_shift_norm else self.out_channels, ), ) self.out_layers = nn.Sequential( TriplaneNorm(self.out_channels), TriplaneSiLU(), # nn.Dropout(p=dropout), zero_module( TriplaneConv(self.out_channels, self.out_channels, 3, padding=1, is_rollout=is_rollout) ), ) if self.out_channels == channels: self.skip_connection = nn.Identity() elif use_conv: self.skip_connection = TriplaneConv( channels, self.out_channels, 3, padding=1, is_rollout=False ) else: self.skip_connection = TriplaneConv(channels, self.out_channels, 1, padding=0, is_rollout=False) def forward(self, x, emb): """ Apply the block to a Tensor, conditioned on a timestep embedding. :param x: an [N x C x ...] Tensor of features. :param emb: an [N x emb_channels] Tensor of timestep embeddings. :return: an [N x C x ...] Tensor of outputs. 
""" return checkpoint( self._forward, (x, emb), self.parameters(), self.use_checkpoint ) def _forward(self, x, emb): # x: (h_xy, h_xz, h_yz) h = self.in_layers(x) emb_out = self.emb_layers(emb).type(h[0].dtype) while len(emb_out.shape) < len(h[0].shape): emb_out = emb_out[..., None] if self.use_scale_shift_norm: out_norm, out_silu, out_conv = self.out_layers[0], self.out_layers[1], self.out_layers[2] scale, shift = th.chunk(emb_out, 2, dim=1) h = out_norm(h) h_xy, h_xz, h_yz = h h_xy = h_xy * (1 + scale) + shift h_xz = h_xz * (1 + scale) + shift h_yz = h_yz * (1 + scale) + shift h = (h_xy, h_xz, h_yz) # h = out_norm(h) * (1 + scale) + shift h = out_silu(h) h = out_conv(h) else: h_xy, h_xz, h_yz = h h_xy = h_xy + emb_out h_xz = h_xz + emb_out h_yz = h_yz + emb_out h = (h_xy, h_xz, h_yz) # h = h + emb_out h = self.out_layers(h) x_skip = self.skip_connection(x) x_skip_xy, x_skip_xz, x_skip_yz = x_skip h_xy, h_xz, h_yz = h return (h_xy + x_skip_xy, h_xz + x_skip_xz, h_yz + x_skip_yz) # return self.skip_connection(x) + h class BeVplaneResBlock(TimestepBlock): def __init__( self, channels, emb_channels, out_channels=None, level=(128,128,16), use_conv=False, use_scale_shift_norm=True, use_checkpoint=False, up=False, down=False, voxelfea=False, ): super().__init__() self.channels = channels self.emb_channels = emb_channels self.out_channels = out_channels or channels self.use_conv = use_conv self.use_checkpoint = use_checkpoint self.use_scale_shift_norm = use_scale_shift_norm self.in_layers = nn.Sequential( BeVplaneNorm(channels), BeVplaneSiLU(), BeVplaneConv(channels, self.out_channels, 3, padding=1, voxelfea=voxelfea), ) self.updown = up or down self.h_upd = self.x_upd = nn.Identity() self.emb_layers = nn.Sequential( SiLU(), linear( emb_channels, 2 * self.out_channels if use_scale_shift_norm else self.out_channels, ), ) self.out_layers = nn.Sequential( BeVplaneNorm(self.out_channels), BeVplaneSiLU(), # nn.Dropout(p=dropout), zero_module( BeVplaneConv(self.out_channels, self.out_channels, 3, padding=1, voxelfea=voxelfea) ), ) if self.out_channels == channels: self.skip_connection = nn.Identity() elif use_conv: self.skip_connection = BeVplaneConv( channels, self.out_channels, 3, padding=1, voxelfea=voxelfea ) else: self.skip_connection = BeVplaneConv(channels, self.out_channels, 1, padding=0, voxelfea=voxelfea) def forward(self, x, emb): """ Apply the block to a Tensor, conditioned on a timestep embedding. :param x: an [N x C x ...] Tensor of features. :param emb: an [N x emb_channels] Tensor of timestep embeddings. :return: an [N x C x ...] Tensor of outputs. 
""" return checkpoint( self._forward, (x, emb), self.parameters(), self.use_checkpoint ) def _forward(self, x, emb): # x: (h_xy, h_xz, h_yz) h = self.in_layers(x) emb_out = self.emb_layers(emb).type(h[0].dtype) while len(emb_out.shape) < len(h.shape): emb_out = emb_out[..., None] if self.use_scale_shift_norm: out_norm, out_silu, out_conv = self.out_layers[0], self.out_layers[1], self.out_layers[2] scale, shift = th.chunk(emb_out, 2, dim=1) h = out_norm(h) h = h * (1 + scale) + shift h = out_silu(h) h = out_conv(h) else: h = h + emb_out h = self.out_layers(h) x_skip = self.skip_connection(x) return x_skip+h class BEVUNetModel(nn.Module): def __init__( self, args, num_res_blocks=1, dropout=0, use_checkpoint=False, use_fp16=False, ): super().__init__() learn_sigma = args.learn_sigma ssc_refine = args.ssc_refine model_channels = args.model_channels channel_mult = args.mult_channels tri_unet_updown = args.tri_unet_updown tri_z_down = args.tri_z_down conv_down = args.conv_down dataset = args.dataset in_channels = args.geo_feat_channels out_channels = args.geo_feat_channels voxelfea=args.voxel_fea self.voxelfea = voxelfea self.ssc_refine = ssc_refine self.in_channels = 2*in_channels if self.ssc_refine else in_channels self.model_channels = model_channels self.out_channels = out_channels*2 if learn_sigma else out_channels self.num_res_blocks = num_res_blocks self.dropout = dropout self.channel_mult = channel_mult self.use_checkpoint = use_checkpoint self.dtype = th.float16 if use_fp16 else th.float32 time_embed_dim = model_channels * 4 self.time_embed = nn.Sequential( linear(model_channels, time_embed_dim), SiLU(), linear(time_embed_dim, time_embed_dim), ) ch = input_ch = int(channel_mult[0] * model_channels) level_shape = ((128, 128, 16), (64, 64, 8), (32, 32, 4)) self.in_conv = TimestepEmbedSequential(BeVplaneConv(self.in_channels, ch, 1, padding=0, voxelfea=voxelfea)) print("\nIn conv: BeVplaneConv") n_down, n_up = 0, 0 input_block_chans = [ch] self.input_blocks = nn.ModuleList([]) for level, mult in enumerate(channel_mult): layers = [] if tri_unet_updown and (level != 0): if (dataset == 'carla') and (n_down == 0) : layers.append(BeVplaneDownsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea)) n_down+=1 print(f"Down level {level}: BeVplaneDownsample2x, ch {ch}") elif (dataset == 'kitti') : layers.append(BeVplaneDownsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea)) print(f"Down level {level}: BeVplaneDownsample2x, ch {ch}") for _ in range(num_res_blocks): layers.append( BeVplaneResBlock( ch, time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], voxelfea=voxelfea ) ) print(f"Down level {level} block 1: BeVplaneResBlock, ch {int(model_channels * mult)}") layers.append( BeVplaneResBlock( int(mult * model_channels), time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], voxelfea=voxelfea ) ) print(f"Down level {level} block 2: BeVplaneResBlock, ch {int(model_channels * mult)}") ch = int(mult * model_channels) input_block_chans.append(ch) self.input_blocks.append(TimestepEmbedSequential(*layers)) self.output_blocks = nn.ModuleList([]) for level, mult in list(enumerate(channel_mult))[::-1]: layers = [] for i in range(num_res_blocks): ich = input_block_chans.pop() if level == len(channel_mult) - 1 and i == 0: ich = 0 layers.append( BeVplaneResBlock( ch + ich, time_embed_dim, out_channels=int(model_channels * mult), level=level_shape[level], voxelfea=voxelfea ) ) print(f"Up level {level} block 1 : BeVplaneResBlock, ch 
{int(model_channels * mult)}") layers.append( BeVplaneResBlock( int(mult * model_channels), time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], voxelfea=voxelfea ) ) print(f"Up level {level} block 2: BeVplaneResBlock, ch {int(model_channels * mult)}") ch = int(model_channels * mult) if tri_unet_updown and (level > 0): if (dataset == 'carla') and (n_up == 0) : layers.append(BeVplaneUpsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea)) n_up+=1 print(f"Up level {level}: BeVplaneUpsample2x, ch {int(model_channels * mult)}") elif (dataset == 'kitti') : layers.append(BeVplaneUpsample2x(tri_z_down, conv_down, channels=ch, voxelfea=voxelfea)) print(f"Up level {level}: BeVplaneUpsample2x, ch {int(model_channels * mult)}") self.output_blocks.append(TimestepEmbedSequential(*layers)) self.out = nn.Sequential( BeVplaneNorm(ch), BeVplaneSiLU(), BeVplaneConv(input_ch, self.out_channels, 1, padding=0, voxelfea=voxelfea) ) print("Out conv: TriplaneConv\n") def convert_to_fp16(self): """ Convert the torso of the model to float16. """ self.input_blocks.apply(convert_module_to_f16) self.output_blocks.apply(convert_module_to_f16) def convert_to_fp32(self): """ Convert the torso of the model to float32. """ self.input_blocks.apply(convert_module_to_f32) self.output_blocks.apply(convert_module_to_f32) def forward(self, x, timesteps, H=128, W=128, D=16, y=None): """ Apply the model to an input batch. :param x: an [N x C x ...] Tensor of inputs. :param timesteps: a 1-D batch of timesteps. :param y: an [N] Tensor of labels, if class-conditional. :return: an [N x C x ...] Tensor of outputs. """ assert H is not None and W is not None and D is not None hs = [] tri_size = (H[0], W[0], D[0]) emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) if self.ssc_refine : y=y.to(x.device).type(self.dtype) h=th.cat([x, y], dim=1).type(self.dtype) else : h = x.type(self.dtype) if not self.voxelfea: triplane = decompose_featmaps(h, tri_size) h_triplane, xz, yz = triplane else : h_triplane = h h_triplane = self.in_conv(h_triplane, emb) for level, module in enumerate(self.input_blocks): h_triplane = module(h_triplane, emb) hs.append(h_triplane) for level, module in enumerate(self.output_blocks): if level == 0: h_triplane = hs.pop() else: h_triplane_pop = hs.pop() h_triplane = th.cat([h_triplane, h_triplane_pop], dim=1) h_triplane = module(h_triplane, emb) h_triplane = self.out(h_triplane) if not self.voxelfea: h = compose_featmaps(h_triplane, xz, yz, tri_size)[0] #assert h.shape == x.shape return h class TriplaneUNetModel(nn.Module): def __init__( self, args, num_res_blocks=1, dropout=0, use_checkpoint=False, use_fp16=False, ): super().__init__() learn_sigma = args.learn_sigma ssc_refine = args.ssc_refine model_channels = args.model_channels is_rollout = args.is_rollout channel_mult = args.mult_channels tri_unet_updown = args.tri_unet_updown tri_z_down = args.tri_z_down conv_down = args.conv_down dataset = args.dataset in_channels = args.geo_feat_channels out_channels = args.geo_feat_channels if tri_unet_updown: n_level = len(channel_mult) level_shape=((128, 128, 16),) for n in range(1, n_level): level_shape += ((int(128//2**n), int(128//2**n), int(16//2**n)),) else : level_shape=() n_level = len(channel_mult) for n in range(n_level): level_shape += ((128, 128, 16),) self.ssc_refine = ssc_refine self.in_channels = 2*in_channels if ssc_refine else in_channels self.model_channels = model_channels self.out_channels = out_channels*2 if learn_sigma else out_channels 
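# The "[B, C, H + D, W + D]" shape comments throughout this file refer to the
# packing of the three planes into a single 2D map via compose_featmaps /
# decompose_featmaps (imported from diffusion.nn), so one tensor can flow
# through the standard 2D diffusion plumbing. A minimal sketch of that layout,
# mirroring the torch variant in encoding/ssc_metrics.py and assuming plane
# shapes [B, C, H, W], [B, C, H, D] and [B, C, W, D]:
#
#   pad = feat_xy.new_zeros(*feat_xy.shape[:-2], D, D)
#   top = th.cat([feat_xy, feat_xz], dim=-1)                   # [B, C, H, W+D]
#   bottom = th.cat([feat_yz.transpose(-1, -2), pad], dim=-1)  # [B, C, D, W+D]
#   composed = th.cat([top, bottom], dim=-2)                   # [B, C, H+D, W+D]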
self.num_res_blocks = num_res_blocks self.dropout = dropout self.channel_mult = channel_mult self.use_checkpoint = use_checkpoint self.dtype = th.float16 if use_fp16 else th.float32 time_embed_dim = model_channels * 4 self.time_embed = nn.Sequential( linear(model_channels, time_embed_dim), SiLU(), linear(time_embed_dim, time_embed_dim), ) ch = input_ch = int(channel_mult[0] * model_channels) level_shape = ((128, 128, 16), (64, 64, 8), (32, 32, 4)) self.in_conv = TimestepEmbedSequential(TriplaneConv(self.in_channels, ch, 1, padding=0, is_rollout=False)) print("\nIn conv: TriplaneConv") n_down, n_up = 0, 0 input_block_chans = [ch] self.input_blocks = nn.ModuleList([]) for level, mult in enumerate(channel_mult): layers = [] if tri_unet_updown and (level != 0): if (dataset == 'carla') and (n_down == 0) : layers.append(TriplaneDownsample2x(tri_z_down, conv_down, channels=ch)) n_down+=1 print(f"Down level {level}: TriplaneDownsample2x, ch {ch}") elif (dataset == 'kitti') : layers.append(TriplaneDownsample2x(tri_z_down, conv_down, channels=ch)) print(f"Down level {level}: TriplaneDownsample2x, ch {ch}") for _ in range(num_res_blocks): layers.append( TriplaneResBlock( ch, time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], is_rollout=is_rollout ) ) print(f"Down level {level} block 1: TriplaneResBlock, ch {int(model_channels * mult)}") layers.append( TriplaneResBlock( int(mult * model_channels), time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], is_rollout=is_rollout ) ) print(f"Down level {level} block 2: TriplaneResBlock, ch {int(model_channels * mult)}") ch = int(mult * model_channels) input_block_chans.append(ch) self.input_blocks.append(TimestepEmbedSequential(*layers)) self.output_blocks = nn.ModuleList([]) for level, mult in list(enumerate(channel_mult))[::-1]: layers = [] for i in range(num_res_blocks): ich = input_block_chans.pop() if level == len(channel_mult) - 1 and i == 0: ich = 0 layers.append( TriplaneResBlock( ch + ich, time_embed_dim, out_channels=int(model_channels * mult), level=level_shape[level], is_rollout=is_rollout ) ) print(f"Up level {level} block 1 : TriplaneResBlock, ch {int(model_channels * mult)}") layers.append( TriplaneResBlock( int(mult * model_channels), time_embed_dim, out_channels=int(mult * model_channels), level=level_shape[level], is_rollout=is_rollout ) ) print(f"Up level {level} block 2: TriplaneResBlock, ch {int(model_channels * mult)}") ch = int(model_channels * mult) if tri_unet_updown and (level > 0): if (dataset == 'carla') and (n_up == 0) : layers.append(TriplaneUpsample2x(tri_z_down, conv_down, channels=ch)) n_up+=1 print(f"Up level {level}: TriplaneUpsample2x, ch {int(model_channels * mult)}") elif (dataset == 'kitti') : layers.append(TriplaneUpsample2x(tri_z_down, conv_down, channels=ch)) print(f"Up level {level}: TriplaneUpsample2x, ch {int(model_channels * mult)}") self.output_blocks.append(TimestepEmbedSequential(*layers)) self.out = nn.Sequential( TriplaneNorm(ch), TriplaneSiLU(), TriplaneConv(input_ch, self.out_channels, 1, padding=0, is_rollout=False) ) print("Out conv: TriplaneConv\n") def convert_to_fp16(self): """ Convert the torso of the model to float16. """ self.input_blocks.apply(convert_module_to_f16) self.output_blocks.apply(convert_module_to_f16) def convert_to_fp32(self): """ Convert the torso of the model to float32. 
""" self.input_blocks.apply(convert_module_to_f32) self.output_blocks.apply(convert_module_to_f32) def forward(self, x, timesteps, H=128, W=128, D=16, y=None): """ Apply the model to an input batch. :param x: an [N x C x ...] Tensor of inputs. :param timesteps: a 1-D batch of timesteps. :param y: an [N] Tensor of labels, if class-conditional. :return: an [N x C x ...] Tensor of outputs. """ assert H is not None and W is not None and D is not None hs = [] if type(H) == int: tri_size = (H, W, D) else : tri_size = (H[0], W[0], D[0]) emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) if self.ssc_refine: y=y.to(x.device).type(self.dtype) h=th.cat([x, y], dim=1).type(self.dtype) else : h = x.type(self.dtype) h_triplane = decompose_featmaps(h, tri_size) h_triplane = self.in_conv(h_triplane, emb) for level, module in enumerate(self.input_blocks): h_triplane = module(h_triplane, emb) hs.append(h_triplane) for level, module in enumerate(self.output_blocks): if level == 0: h_triplane = hs.pop() else: h_triplane_pop = hs.pop() h_triplane = list(h_triplane) if h_triplane[0].shape[2:] != h_triplane_pop[0].shape[2:]: h_triplane[0] = F.interpolate(h_triplane[0], size=h_triplane_pop[0].shape[2:], mode='bilinear', align_corners=False) if h_triplane[1].shape[2:] != h_triplane_pop[1].shape[2:]: h_triplane[1] = F.interpolate(h_triplane[1], size=h_triplane_pop[1].shape[2:], mode='bilinear', align_corners=False) if h_triplane[2].shape[2:] != h_triplane_pop[2].shape[2:]: h_triplane[2] = F.interpolate(h_triplane[2], size=h_triplane_pop[2].shape[2:], mode='bilinear', align_corners=False) h_triplane = (th.cat([h_triplane[0], h_triplane_pop[0]], dim=1), th.cat([h_triplane[1], h_triplane_pop[1]], dim=1), th.cat([h_triplane[2], h_triplane_pop[2]], dim=1)) h_triplane = module(h_triplane, emb) h_triplane = self.out(h_triplane) h = compose_featmaps(*h_triplane, tri_size)[0] #assert h.shape == x.shape return h ================================================ FILE: encoding/blocks.py ================================================ import torch import torch.nn as nn import torch.nn.functional as F import math class SinusoidalEncoder(nn.Module): """Sinusoidal Positional Encoder used in Nerf.""" def __init__(self, x_dim, min_deg, max_deg, use_identity: bool = True): super().__init__() self.x_dim = x_dim self.min_deg = min_deg self.max_deg = max_deg self.use_identity = use_identity self.register_buffer( "scales", torch.tensor([2**i for i in range(min_deg, max_deg)]) ) @property def latent_dim(self) -> int: return ( int(self.use_identity) + (self.max_deg - self.min_deg) * 2 ) * self.x_dim def forward(self, x: torch.Tensor) -> torch.Tensor: """ Args: x: [..., x_dim] Returns: latent: [..., latent_dim] """ if self.max_deg == self.min_deg: return x xb = torch.reshape( (x[Ellipsis, None, :] * self.scales[:, None]), list(x.shape[:-1]) + [(self.max_deg - self.min_deg) * self.x_dim], ) latent = torch.sin(torch.cat([xb, xb + 0.5 * math.pi], dim=-1)) if self.use_identity: latent = torch.cat([x] + [latent], dim=-1) return latent class DecoderMLPSkipConcat(nn.Module): def __init__(self, in_channels, out_channels, hidden_channels, num_hidden_layers, posenc=0) -> None: super().__init__() self.posenc = posenc if posenc > 0: self.PE = SinusoidalEncoder(in_channels, 0, posenc, use_identity=True) in_channels = self.PE.latent_dim first_layer_list = [nn.Linear(in_channels, hidden_channels), nn.ReLU()] for _ in range(num_hidden_layers // 2): first_layer_list.append(nn.Linear(hidden_channels, hidden_channels)) 
first_layer_list.append(nn.ReLU()) self.first_layers = nn.Sequential(*first_layer_list) second_layer_list = [nn.Linear(in_channels + hidden_channels, hidden_channels), nn.ReLU()] for _ in range(num_hidden_layers // 2 - 1): second_layer_list.append(nn.Linear(hidden_channels, hidden_channels)) second_layer_list.append(nn.ReLU()) second_layer_list.append(nn.Linear(hidden_channels, out_channels)) self.second_layers = nn.Sequential(*second_layer_list) def forward(self, x): if self.posenc > 0: x = self.PE(x) h = self.first_layers(x) h = torch.cat([x, h], dim=-1) h = self.second_layers(h) return h class SiLU(nn.Module): def forward(self, x): return x * torch.sigmoid(x) def zero_module(module): """ Zero out the parameters of a module and return it. """ for p in module.parameters(): p.detach().zero_() return module def compose_triplane_channelwise(feat_maps): h_xy, h_xz, h_yz = feat_maps # (H, W), (H, D), (W, D) assert h_xy.shape[1] == h_xz.shape[1] == h_yz.shape[1] C, H, W = h_xy.shape[-3:] D = h_xz.shape[-1] newH = max(H, W) newW = max(W, D) h_xy = F.pad(h_xy, (0, newW - W, 0, newH - H)) h_xz = F.pad(h_xz, (0, newW - D, 0, newH - H)) h_yz = F.pad(h_yz, (0, newW - D, 0, newH - W)) h = torch.cat([h_xy, h_xz, h_yz], dim=1) # (B, 3C, H, W) return h, (H, W, D) def decompose_triplane_channelwise(composed_map, sizes): H, W, D = sizes C = composed_map.shape[1] // 3 h_xy = composed_map[:, :C, :H, :W] h_xz = composed_map[:, C:2*C, :H, :D] h_yz = composed_map[:, 2*C:, :W, :D] return h_xy, h_xz, h_yz class TriplaneGroupResnetBlock(nn.Module): def __init__(self, in_channels, out_channels, up=False, ks=3, input_norm=True, input_act=True): super().__init__() in_channels *= 3 out_channels *= 3 self.in_channels = in_channels self.out_channels = out_channels self.up = up self.input_norm = input_norm if input_norm and input_act: self.in_layers = nn.Sequential( # nn.GroupNorm(num_groups=3, num_channels=in_channels, eps=1e-6, affine=True), SiLU(), nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) elif not input_norm: if input_act: self.in_layers = nn.Sequential( SiLU(), nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) else: self.in_layers = nn.Sequential( nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) else: raise NotImplementedError self.norm_xy = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True) self.norm_xz = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True) self.norm_yz = nn.InstanceNorm2d(out_channels//3, eps=1e-6, affine=True) self.out_layers = nn.Sequential( # nn.GroupNorm(num_groups=3, num_channels=out_channels, eps=1e-6, affine=True), SiLU(), # nn.Dropout(p=dropout), zero_module( nn.Conv2d(out_channels, out_channels, groups=3, kernel_size=ks, stride=1, padding=(ks - 1)//2) ), ) if self.in_channels != self.out_channels: self.shortcut = nn.Conv2d(in_channels, out_channels, groups=3, kernel_size=1, stride=1, padding=0) else: self.shortcut = nn.Identity() def forward(self, feat_maps): if self.input_norm: feat_maps = [self.norm_xy(feat_maps[0]), self.norm_xz(feat_maps[1]), self.norm_yz(feat_maps[2])] x, (H, W, D) = compose_triplane_channelwise(feat_maps) if self.up: raise NotImplementedError else: h = self.in_layers(x) h_xy, h_xz, h_yz = decompose_triplane_channelwise(h, (H, W, D)) h_xy = self.norm_xy(h_xy) h_xz = self.norm_xz(h_xz) h_yz = self.norm_yz(h_yz) h, _ = compose_triplane_channelwise([h_xy, h_xz, h_yz]) h = self.out_layers(h) h = h + 
self.shortcut(x) h_maps = decompose_triplane_channelwise(h, (H, W, D)) return h_maps class BeVplaneGroupResnetBlock(nn.Module): def __init__(self, in_channels, out_channels, up=False, ks=3, input_norm=True, input_act=True): super().__init__() self.in_channels = in_channels self.out_channels = out_channels self.up = up self.input_norm = input_norm if input_norm and input_act: self.in_layers = nn.Sequential( # nn.GroupNorm(num_groups=3, num_channels=in_channels, eps=1e-6, affine=True), SiLU(), nn.Conv2d(in_channels, out_channels, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) elif not input_norm: if input_act: self.in_layers = nn.Sequential( SiLU(), nn.Conv2d(in_channels, out_channels, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) else: self.in_layers = nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size=ks, stride=1, padding=(ks - 1)//2) ) else: raise NotImplementedError self.norm_xy = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True) self.norm_xz = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True) self.norm_yz = nn.InstanceNorm2d(out_channels, eps=1e-6, affine=True) self.out_layers = nn.Sequential( # nn.GroupNorm(num_groups=3, num_channels=out_channels, eps=1e-6, affine=True), SiLU(), # nn.Dropout(p=dropout), zero_module( nn.Conv2d(out_channels, out_channels, kernel_size=ks, stride=1, padding=(ks - 1)//2) ), ) if self.in_channels != self.out_channels: self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) else: self.shortcut = nn.Identity() def forward(self, feat_maps): if self.input_norm: feat_maps = [self.norm_xy(feat_maps[0]), self.norm_xz(feat_maps[1]), self.norm_yz(feat_maps[2])] x = feat_maps[0] if self.up: raise NotImplementedError else: h = self.in_layers(x) h = self.norm_xy(h) h = self.out_layers(h) h = h + self.shortcut(x) h_maps = [h, feat_maps[1], feat_maps[2]] return h_maps ================================================ FILE: encoding/lovasz.py ================================================ import torch from torch.autograd import Variable import torch.nn.functional as F try: from itertools import ifilterfalse except ImportError: # py3k from itertools import filterfalse as ifilterfalse # -*- coding:utf-8 -*- # author: Xinge def dice_coef(y_true, y_pred, smooth=1e-6): y_true_f = y_true.view(-1) y_pred_f = y_pred.view(-1) intersection = (y_true_f * y_pred_f).sum() return (2. * intersection + smooth) / (y_true_f.sum() + y_pred_f.sum() + smooth) def dice_coef_multilabel(y_true, y_pred, numLabels=11): dice=0 for index in range(1, numLabels): dice += dice_coef(y_true[:,index,:,:,:], y_pred[:,index,:,:,:]) return (numLabels-1) - dice """ Lovasz-Softmax and Jaccard hinge loss in PyTorch Maxim Berman 2018 ESAT-PSI KU Leuven (MIT License) """ def lovasz_grad(gt_sorted): """ Computes gradient of the Lovasz extension w.r.t sorted errors See Alg. 1 in paper """ p = len(gt_sorted) gts = gt_sorted.sum() intersection = gts - gt_sorted.float().cumsum(0) union = gts + (1 - gt_sorted).float().cumsum(0) jaccard = 1. - intersection / union if p > 1: # cover 1-pixel case jaccard[1:p] = jaccard[1:p] - jaccard[0:-1] return jaccard # --------------------------- MULTICLASS LOSSES --------------------------- def lovasz_softmax(probas, labels, classes='present', per_image=False, ignore=None): """ Multi-class Lovasz-Softmax loss probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1). Interpreted as binary (sigmoid) output with outputs of size [B, H, W].
labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1) classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average. per_image: compute the loss per image instead of per batch ignore: void class labels """ if per_image: loss = mean(lovasz_softmax_flat(*flatten_probas(prob.unsqueeze(0), lab.unsqueeze(0), ignore), classes=classes) for prob, lab in zip(probas, labels)) else: loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), classes=classes) return loss def lovasz_softmax_flat(probas, labels, classes='present'): """ Multi-class Lovasz-Softmax loss probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1) labels: [P] Tensor, ground truth labels (between 0 and C - 1) classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average. """ if probas.numel() == 0: # only void pixels, the gradients should be 0 return probas * 0. C = probas.size(1) losses = [] class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes for c in class_to_sum: fg = (labels == c).float() # foreground for class c if (classes == 'present' and fg.sum() == 0): continue if C == 1: if len(classes) > 1: raise ValueError('Sigmoid output possible only with 1 class') class_pred = probas[:, 0] else: class_pred = probas[:, c] errors = (Variable(fg) - class_pred).abs() errors_sorted, perm = torch.sort(errors, 0, descending=True) perm = perm.data fg_sorted = fg[perm] losses.append(torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted)))) return mean(losses) def flatten_probas(probas, labels, ignore=None): """ Flattens predictions in the batch """ if probas.dim() == 3: # assumes output of a sigmoid layer B, H, W = probas.size() probas = probas.view(B, 1, H, W) elif probas.dim() == 5: #3D segmentation B, C, L, H, W = probas.size() probas = probas.contiguous().view(B, C, L, H*W) B, C, H, W = probas.size() probas = probas.permute(0, 2, 3, 1).contiguous().view(-1, C) # B * H * W, C = P, C labels = labels.view(-1) if ignore is None: return probas, labels valid = (labels != ignore) vprobas = probas[valid.nonzero().squeeze()] vlabels = labels[valid] return vprobas, vlabels # --------------------------- HELPER FUNCTIONS --------------------------- def isnan(x): return x != x def mean(l, ignore_nan=False, empty=0): """ nanmean compatible with generators.
""" l = iter(l) if ignore_nan: l = ifilterfalse(isnan, l) try: n = 1 acc = next(l) except StopIteration: if empty == 'raise': raise ValueError('Empty mean') return empty for n, v in enumerate(l, 2): acc += v if n == 1: return acc return acc / n ================================================ FILE: encoding/networks.py ================================================ import torch import torch.nn as nn import torch.nn.functional as F from encoding.blocks import TriplaneGroupResnetBlock, BeVplaneGroupResnetBlock, DecoderMLPSkipConcat class Encoder(nn.Module): def __init__(self, geo_feat_channels, z_down, padding_mode, kernel_size = (5, 5, 3), padding = (2, 2, 1)): super().__init__() self.z_down = z_down self.conv0 = nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode) self.convblock1 = nn.Sequential( nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels), nn.LeakyReLU(1e-1, True), nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels) ) if self.z_down : self.downsample = nn.Sequential( nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=(0, 0, 0), bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels) ) else : self.downsample = nn.Sequential( nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=(2, 2, 1), stride=(2, 2, 1), padding=(0, 0, 0), bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels) ) self.convblock2 = nn.Sequential( nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels), nn.LeakyReLU(1e-1, True), nn.Conv3d(geo_feat_channels, geo_feat_channels, kernel_size=kernel_size, stride=(1, 1, 1), padding=padding, bias=True, padding_mode=padding_mode), nn.InstanceNorm3d(geo_feat_channels) ) def forward(self, x): # [b, geo_feat_channels, X, Y, Z] x = self.conv0(x) # [b, geo_feat_channels, X, Y, Z] residual_feat = x x = self.convblock1(x) # [b, geo_feat_channels, X, Y, Z] x = x + residual_feat # [b, geo_feat_channels, X, Y, Z] x = self.downsample(x) # [b, geo_feat_channels, X//2, Y//2, Z//2] residual_feat = x x = self.convblock2(x) x = x + residual_feat return x # [b, geo_feat_channels, X//2, Y//2, Z//2] class AutoEncoderGroupSkip(nn.Module): def __init__(self, args) -> None: super().__init__() class_num = args.num_class self.embedding = nn.Embedding(class_num, args.geo_feat_channels) print('build encoder...') if args.dataset == 'kitti': self.geo_encoder = Encoder(args.geo_feat_channels, args.z_down, args.padding_mode) else: self.geo_encoder = Encoder(args.geo_feat_channels, args.z_down, args.padding_mode, kernel_size = 3, padding = 1) if args.voxel_fea : self.norm = nn.InstanceNorm3d(args.geo_feat_channels) else: self.norm = nn.InstanceNorm2d(args.geo_feat_channels) self.geo_feat_dim = args.geo_feat_channels self.pos = args.pos self.pos_num_freq = 6 # the defualt value 6 like NeRF self.args = args print('triplane features are summed for decoding...') if args.dataset == 'kitti': if args.voxel_fea: self.geo_convs = nn.Sequential( nn.Conv3d(args.geo_feat_channels, args.feat_channel_up, kernel_size=3, stride=1, padding=1, 
bias=True, padding_mode=args.padding_mode), nn.InstanceNorm3d(args.geo_feat_channels) ) else : if args.triplane: self.geo_convs = TriplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=5, input_norm=False, input_act=False) else : self.geo_convs = BeVplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=5, input_norm=False, input_act=False) else: self.geo_convs = TriplaneGroupResnetBlock(args.geo_feat_channels, args.feat_channel_up, ks=3, input_norm=False, input_act=False) print(f'build shared decoder... (PE: {self.pos})\n') if self.pos: self.geo_decoder = DecoderMLPSkipConcat(args.feat_channel_up+6*self.pos_num_freq, args.num_class, args.mlp_hidden_channels, args.mlp_hidden_layers) else: self.geo_decoder = DecoderMLPSkipConcat(args.feat_channel_up, args.num_class, args.mlp_hidden_channels, args.mlp_hidden_layers) def geo_parameters(self): return list(self.geo_encoder.parameters()) + list(self.geo_convs.parameters()) + list(self.geo_decoder.parameters()) def tex_parameters(self): return list(self.tex_encoder.parameters()) + list(self.tex_convs.parameters()) + list(self.tex_decoder.parameters()) def encode(self, vol): x = vol.detach().clone() x[x == 255] = 0 x = self.embedding(x) x = x.permute(0, 4, 1, 2, 3) vol_feat = self.geo_encoder(x) if self.args.voxel_fea: vol_feat = self.norm(vol_feat).tanh() return vol_feat else : xy_feat = vol_feat.mean(dim=4) xz_feat = vol_feat.mean(dim=3) yz_feat = vol_feat.mean(dim=2) xy_feat = (self.norm(xy_feat) * 0.5).tanh() xz_feat = (self.norm(xz_feat) * 0.5).tanh() yz_feat = (self.norm(yz_feat) * 0.5).tanh() return [xy_feat, xz_feat, yz_feat] def sample_feature_plane2D(self, feat_map, x): """Sample feature map at given coordinates""" # feat_map: [bs, C, H, W] # x: [bs, N, 2] sample_coords = x.view(x.shape[0], 1, -1, 2) # sample_coords: [bs, 1, N, 2] feat = F.grid_sample(feat_map, sample_coords.flip(-1), align_corners=False, padding_mode='border') # feat : [bs, C, 1, N] feat = feat[:, :, 0, :] # feat : [bs, C, N] feat = feat.transpose(1, 2) # feat : [bs, N, C] return feat def sample_feature_plane3D(self, vol_feat, x): """Sample feature map at given coordinates""" # feat_map: [bs, C, H, W, D] # x: [bs, N, 3] sample_coords = x.view(x.shape[0], 1, 1, -1, 3) feat = F.grid_sample(vol_feat, sample_coords.flip(-1), align_corners=False, padding_mode='border') # feat : [bs, C, 1, 1, N] feat = feat[:, :, 0, 0, :] # feat : [bs, C, N] feat = feat.transpose(1, 2) # feat : [bs, N, C] return feat def decode(self, feat_maps, query): if self.args.voxel_fea: h_geo = self.geo_convs(feat_maps) h_geo = self.sample_feature_plane3D(h_geo, query) else : # coords [N, 3] coords_list = [[0, 1], [0, 2], [1, 2]] geo_feat_maps = [fm[:, :self.geo_feat_dim] for fm in feat_maps] geo_feat_maps = self.geo_convs(geo_feat_maps) if self.args.triplane: h_geo = 0 for i in range(3): h_geo += self.sample_feature_plane2D(geo_feat_maps[i], query[..., coords_list[i]]) # feat : [bs, N, C] else : h_geo = self.sample_feature_plane2D(geo_feat_maps[0], query[..., coords_list[0]]) # feat : [bs, N, C] if self.pos : # multiply_PE_res = 1 # embed_fn, input_ch = get_embedder(multires=multiply_PE_res) # sample_PE = embed_fn(query) PE = [] for freq in range(self.pos_num_freq): PE.append(torch.sin((2.**freq) * query)) PE.append(torch.cos((2.**freq) * query)) PE = torch.cat(PE, dim=-1) # [bs, N, 6*self.pos_num_freq] h_geo = torch.cat([h_geo, PE], dim=-1) h = self.geo_decoder(h_geo) # h : [bs, N, 1] return h def forward(self, vol, query): feat_map = self.encode(vol) return 
self.decode(feat_map, query) ================================================ FILE: encoding/ssc_metrics.py ================================================ import torch import numpy as np import os def compose_featmaps(feat_xy, feat_xz, feat_yz): H, W = feat_xy.shape[-2:] D = feat_xz.shape[-1] empty_block = torch.zeros(list(feat_xy.shape[:-2]) + [D, D], dtype=feat_xy.dtype, device=feat_xy.device) composed_map = torch.cat( [torch.cat([feat_xy, feat_xz], dim=-1), torch.cat([feat_yz.transpose(-1, -2), empty_block], dim=-1)], dim=-2 ) return composed_map def decompose_featmaps(composed_map): H, W, D = 256, 256, 32 feat_xy = composed_map[..., :H, :W] # (C, H, W) feat_xz = composed_map[..., :H, W:] # (C, H, D) feat_yz = np.asarray(torch.tensor(composed_map[..., H:, :W]).transpose(-1, -2)) # (C, W, D) return feat_xy, feat_xz, feat_yz def visualization(args, coords, preds, folder, idx, learning_map_inv, training=True): output = torch.zeros((256, 256, 32), device=preds.device) coords = coords.squeeze(0) output[coords[:,0], coords[:,1], coords[:,2]] = preds.squeeze(0) pred = output.cpu().long().data.numpy() maxkey = max(learning_map_inv.keys()) # +100 hack making lut bigger just in case there are unknown labels remap_lut_First = np.zeros((maxkey + 100), dtype=np.int32) remap_lut_First[list(learning_map_inv.keys())] = list(learning_map_inv.values()) pred = pred.astype(np.uint32) pred = pred.reshape((-1)) upper_half = pred >> 16 # get upper half for instances lower_half = pred & 0xFFFF # get lower half for semantics lower_half = remap_lut_First[lower_half] # do the remapping of semantics pred = (upper_half << 16) + lower_half # reconstruct full label pred = pred.astype(np.uint32) # Save final_preds = pred.astype(np.uint16) if training: os.makedirs(args.save_path+'/Prediction/', exist_ok=True) for i in range(11): os.makedirs(args.save_path+'/Prediction/'+str(i).zfill(2), exist_ok=True) if torch.is_tensor(idx): save_path = args.save_path+'/Prediction/'+str(folder)+'/'+str(idx.item()).zfill(3)+'.label' else : save_path = args.save_path+'/Prediction/'+str(folder)+'/'+str(idx).zfill(3)+'.label' else : save_path = args.save_path+'/'+str(folder)+'/'+str(idx).zfill(3)+'.label' final_preds.tofile(save_path) """ Part of the code is taken from https://github.com/waterljwant/SSC/blob/master/sscMetrics.py """ import numpy as np from sklearn.metrics import accuracy_score, precision_recall_fscore_support #!/usr/bin/env python3 # This file is covered by the LICENSE file in the root of this project. 
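# A minimal standalone sketch (numpy only; hypothetical helper name) of the
# confusion-matrix bookkeeping SSCMetrics implements below: rows index
# predictions, columns index ground truth, and per-class IoU is
# tp / (tp + fp + fn).
import numpy as np

def iou_from_confusion(conf, include):
    # conf: [C, C] integer confusion matrix; include: class indices to average
    tp = np.diag(conf)
    fp = conf.sum(axis=1) - tp  # predicted as the class but actually another
    fn = conf.sum(axis=0) - tp  # the class in ground truth but missed
    iou = tp / (tp + fp + fn + 1e-15)
    return iou[include].mean(), iou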
import sys import numpy as np class SSCMetrics: def __init__(self, n_classes, ignore=None): # classes self.n_classes = n_classes # What to include and ignore from the means self.ignore = np.array(ignore, dtype=np.int64) self.include = np.array([n for n in range(self.n_classes) if n not in self.ignore], dtype=np.int64) #print("[IOU EVAL] IGNORE: ", self.ignore) #print("[IOU EVAL] INCLUDE: ", self.include) # reset the class counters self.reset() def num_classes(self): return self.n_classes def get_eval_mask(self, labels, invalid_voxels): # from the SemanticKITTI API """ Ignore labels set to 255 and invalid voxels (the ones never hit by a laser ray, probed using ray tracing) :param labels: input ground truth voxels :param invalid_voxels: voxels ignored during evaluation since they lie beyond the scene that was captured by the laser :return: boolean mask to subsample the voxels to evaluate """ masks = np.ones_like(labels, dtype=np.bool_) masks[labels == 255] = False masks[invalid_voxels == 1] = False return masks def reset(self): self.conf_matrix = np.zeros((self.n_classes, self.n_classes), dtype=np.int64) def one_stats(self, x, y): # sizes should be matching x_row = x.reshape(-1) # de-batchify y_row = y.reshape(-1) # de-batchify idxs = tuple(np.stack((x_row, y_row), axis=0)) conf_matrix = np.zeros((self.n_classes, self.n_classes), dtype=np.int64) np.add.at(conf_matrix, idxs, 1) conf_matrix[:, self.ignore] = 0 tp = np.diag(conf_matrix) fp = conf_matrix.sum(axis=1) - tp fn = conf_matrix.sum(axis=0) - tp intersection = tp union = tp + fp + fn + 1e-15 n = len(np.unique(y)) - 1 miou = (intersection[1:] / union[1:]).sum()/n *100 #miou = (intersection / union).sum()/n *100 all_miou = (intersection / union).sum()/(n+1) *100 iou = (np.sum(conf_matrix[1:, 1:])) / (np.sum(conf_matrix) - conf_matrix[0, 0] + 1e-8) * 100 return iou, miou, all_miou def addBatch(self, x, y): # x=preds, y=targets # sizes should be matching x_row = x.reshape(-1) # de-batchify y_row = y.reshape(-1) # de-batchify # check assert(x_row.shape == y_row.shape) # create indexes idxs = tuple(np.stack((x_row, y_row), axis=0)) # make confusion matrix (cols = gt, rows = pred) np.add.at(self.conf_matrix, idxs, 1) iou, miou, all_miou = self.one_stats(x, y) return iou, miou def getStats(self): # remove fp from confusion on the ignore classes cols conf = self.conf_matrix.copy() conf[:, self.ignore] = 0 # get the clean stats tp = np.diag(conf) fp = conf.sum(axis=1) - tp fn = conf.sum(axis=0) - tp return tp, fp, fn def getIoU(self): tp, fp, fn = self.getStats() intersection = tp union = tp + fp + fn + 1e-15 iou = intersection / union iou_mean = (intersection[self.include] / union[self.include]).mean() return iou_mean, iou # returns "iou mean", "iou per class" ALL CLASSES def getacc(self): tp, fp, fn = self.getStats() total_tp = tp.sum() total = tp[self.include].sum() + fp[self.include].sum() + 1e-15 acc_mean = total_tp / total return acc_mean # returns "acc mean" def get_confusion(self): return self.conf_matrix.copy() ================================================ FILE: encoding/train_ae.py ================================================ from torch.utils.tensorboard import SummaryWriter from dataset.dataset_builder import dataset_builder from encoding.networks import AutoEncoderGroupSkip from encoding.lovasz import lovasz_softmax from utils.utils import save_remap_lut, point2voxel import os import torch from tqdm.auto import tqdm from torch.cuda.amp import autocast, GradScaler import numpy as np from encoding.ssc_metrics import SSCMetrics class
Trainer: def __init__(self, args): # etc self.args = args self.writer = SummaryWriter(os.path.join(args.save_path, 'tb')) self.epoch, self.start_epoch = 0, 0 self.global_step = 0 self.best_miou = 0 # dataset self.train_dataset, self.val_dataset, self.num_class, class_names = dataset_builder(args) self.train_dataloader = torch.utils.data.DataLoader(self.train_dataset, batch_size=args.bs, shuffle=True, num_workers=8, pin_memory=True) self.val_dataloader = torch.utils.data.DataLoader(self.val_dataset, batch_size=1, shuffle=False, num_workers=8, pin_memory=True) self.iou_class_names = class_names # model & optimizer self.model = AutoEncoderGroupSkip(args).cuda() self.optimizer = torch.optim.Adam(self.model.parameters(), lr=args.lr) self.scheduler = torch.optim.lr_scheduler.MultiStepLR(self.optimizer, args.lr_scheduler_steps, args.lr_scheduler_decay) if args.lr_scheduler else None self.grad_scaler = GradScaler() if args.resume: checkpoint = torch.load(args.resume) self.model.load_state_dict(checkpoint['model']) self.optimizer.load_state_dict(checkpoint['optimizer']) self.start_epoch = checkpoint['epoch'] # TODO: load scheduler # loss functions self.loss_fns = {} self.loss_fns['ce'] = torch.nn.CrossEntropyLoss(weight=self.train_dataset.weights, ignore_index=255) self.loss_fns['lovasz'] = None def train(self): for epoch in range(30000): self.epoch = self.start_epoch + epoch + 1 print('Training...') self._train_model() if epoch % self.args.eval_epoch == 0: print('Evaluation...') self._eval_and_save_model() # learning rate scheduling (skipped when no scheduler is configured) if self.scheduler is not None: self.scheduler.step() self.writer.add_scalar('lr_epochwise', self.optimizer.param_groups[0]['lr'], global_step=self.epoch) def _loss(self, vox, query, label, losses, coord): empty_label = 0. preds = self.model(vox, query) # [bs, N, 20] losses['ce'] = self.loss_fns['ce'](preds.view(-1, self.num_class), label.view(-1,)) losses['loss'] = losses['ce'] pred_output = torch.full((preds.shape[0], vox.shape[1], vox.shape[2], vox.shape[3], self.num_class), fill_value=empty_label, device=preds.device) gt_output = torch.full((preds.shape[0], vox.shape[1], vox.shape[2], vox.shape[3]), fill_value=empty_label, device=preds.device) softmax_preds = torch.nn.functional.softmax(preds, dim=2) for i in range(softmax_preds.shape[0]): pred_output[i, coord[i, :, 0], coord[i, :, 1], coord[i, :, 2], :] = softmax_preds[i] gt_output[i, coord[i, :, 0], coord[i, :, 1], coord[i, :, 2]] = label[i].float() losses['lovasz'] = lovasz_softmax(pred_output.permute(0,4,1,2,3), gt_output) losses['loss'] += losses['lovasz'] adaptive_weight = None return losses, preds, adaptive_weight def _train_model(self): self.model.train() total_losses = {loss_name: 0. for loss_name in self.loss_fns.keys()} total_losses['loss'] = 0. evaluator = SSCMetrics(self.num_class, []) dataloader_tqdm = tqdm(self.train_dataloader) for vox, query, label, coord, path, invalid in dataloader_tqdm: vox = vox.type(torch.LongTensor).cuda() query = query.type(torch.FloatTensor).cuda() label = label.type(torch.LongTensor).cuda() coord = coord.type(torch.LongTensor).cuda() invalid = invalid.type(torch.LongTensor).cuda() b_size = vox.size(0) # TODO: bsize is correct?
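# The training step below follows the standard torch.cuda.amp recipe: forward
# under autocast, scale the loss for backward, unscale before gradient
# clipping, then step and update the scaler. Schematically (hypothetical
# `batch` and `loss_fn` names):
#
#   with autocast():
#       loss = loss_fn(model(batch))
#   optimizer.zero_grad()
#   scaler.scale(loss).backward()
#   scaler.unscale_(optimizer)
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
#   scaler.step(optimizer)
#   scaler.update()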
# forward losses = {} with autocast(): losses, model_output, adaptive_weight = self._loss(vox, query, label, losses, coord) # optimize self.optimizer.zero_grad() self.grad_scaler.scale(losses['loss']).backward() self.grad_scaler.unscale_(self.optimizer) grad_norm = torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0) # gradient clipping self.grad_scaler.step(self.optimizer) self.grad_scaler.update() # eval and log each iteration if self.global_step % self.args.display_period == 0: pred_mask = get_pred_mask(model_output) masks = torch.from_numpy(evaluator.get_eval_mask(vox.cpu().numpy(), invalid.cpu().numpy())) output = point2voxel(self.args, pred_mask, coord) eval_output = output[masks] eval_label = vox[masks] this_iou, this_miou = evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int)) # on display dataloader_tqdm.set_postfix({"loss": losses['loss'].detach().item(), "iou": this_iou, "miou": this_miou}) # on tensorboard self.writer.add_scalar('Grad_Norm', grad_norm, global_step=self.global_step) self.writer.add_scalar('Train_Performance_stepwise/IoU', this_iou, global_step=self.global_step) self.writer.add_scalar('Train_Performance_stepwise/mIoU', this_miou, global_step=self.global_step) for loss_name in losses.keys(): self.writer.add_scalar(f'Train_Loss_stepwise/loss_{loss_name}', losses[loss_name], self.global_step) # loss accumulation for logging for loss_name in losses.keys(): total_losses[loss_name] += (losses[loss_name] * b_size) self.global_step += 1 # eval for 1 epoch _, class_jaccard = evaluator.getIoU() m_jaccard = class_jaccard[1:].mean() miou = m_jaccard * 100 conf = evaluator.get_confusion() iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8) evaluator.reset() # log for 1 epoch self.writer.add_scalar('Train_Performance_epochwise/IoU', iou, global_step=self.epoch) self.writer.add_scalar('Train_Performance_epochwise/mIoU', miou, global_step=self.epoch) for class_idx, class_name in enumerate(self.iou_class_names): self.writer.add_scalar(f'Train_ClassPerformance_epochwise/class{class_idx + 1}_IoU_{class_name}', class_jaccard[class_idx + 1], global_step=self.epoch) for loss_name in losses.keys(): self.writer.add_scalar(f'Train_Loss_epochwise/loss_{loss_name}', total_losses[loss_name] / len(self.train_dataset), global_step=self.epoch) print(f"Epoch: {self.epoch} \t IOU: \t {iou:01f} \t mIoU: \t {miou:01f}") @torch.no_grad() def _eval_and_save_model(self): self.model.eval() total_losses = {loss_name: 0. for loss_name in self.loss_fns.keys()} total_losses['loss'] = 0. evaluator = SSCMetrics(self.num_class, []) dataloader_tqdm = tqdm(self.val_dataloader) for sample_idx, (vox, query, label, coord, path, invalid) in enumerate(dataloader_tqdm): vox = vox.type(torch.LongTensor).cuda() query = query.type(torch.FloatTensor).cuda() label = label.type(torch.LongTensor).cuda() coord = coord.type(torch.LongTensor).cuda() invalid = invalid.type(torch.LongTensor).cuda() b_size = vox.size(0) # TODO: check correctness assert b_size == 1, 'For accurate logging, please set batch size of validation dataloader to 1.' 
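# (Evaluation below follows the SemanticKITTI convention enforced by
# evaluator.get_eval_mask: voxels labeled 255 and voxels flagged invalid,
# i.e. never hit by a laser ray, are masked out before the confusion matrix
# is updated.)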
losses = {} losses, model_output, adaptive_weight = self._loss(vox, query, label, losses, coord) pred_mask = get_pred_mask(model_output) masks = torch.from_numpy(evaluator.get_eval_mask(vox.cpu().numpy(), invalid.cpu().numpy())) output = point2voxel(self.args, pred_mask, coord) eval_output = output[masks] eval_label = vox[masks] this_iou, this_miou = evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int)) # log on display for each sample dataloader_tqdm.set_postfix({"loss": losses['loss'].detach().item(), "iou": this_iou, "miou": this_miou}) for loss_name in losses.keys(): total_losses[loss_name] += (losses[loss_name] * b_size) idx = path[0].split('/')[-1].split('.')[0] folder = path[0].split('/')[-3] save_remap_lut(self.args, output, folder, idx, self.train_dataset.learning_map_inv, True) # eval for all validation samples _, class_jaccard = evaluator.getIoU() m_jaccard = class_jaccard[1:].mean() miou = m_jaccard * 100 conf = evaluator.get_confusion() iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8) evaluator.reset() self.writer.add_scalar('Val_Performance_epochwise/IoU', iou, global_step=self.epoch) self.writer.add_scalar('Val_Performance_epochwise/mIoU', miou, global_step=self.epoch) for class_idx, class_name in enumerate(self.iou_class_names): self.writer.add_scalar(f'Val_ClassPerformance_epochwise/class{class_idx + 1}_IoU_{class_name}', class_jaccard[class_idx + 1], global_step=self.epoch) for loss_name in losses.keys(): self.writer.add_scalar(f'Val_Loss_epochwise/loss_{loss_name}', total_losses[loss_name] / len(self.val_dataset), global_step=self.epoch) print(f"Epoch: {self.epoch} \t IOU: \t {iou:01f} \t mIoU: \t {miou:01f}") if self.best_miou < miou: self.best_miou = miou checkpoint = {'optimizer': self.optimizer.state_dict(), 'model': self.model.state_dict(), 'epoch': self.epoch} # TODO: save scheduler torch.save(checkpoint, self.args.save_path + "/" + str(self.epoch) + "_miou=" + str(f"{miou:.3f}") + '.pt') def get_pred_mask(model_output, separate_decoder=False): preds = model_output pred_prob = torch.softmax(preds, dim=2) pred_mask = pred_prob.argmax(dim=2).float() return pred_mask ================================================ FILE: sampling/generation.py ================================================ from utils.parser_util import add_encoding_training_options, add_diffusion_training_options, add_generation_options from utils.utils import save_remap_lut, point2voxel from encoding.train_ae import get_pred_mask from diffusion.triplane_util import build_sampling_model, decompose_featmaps from utils import dist_util import torch import os import argparse import numpy as np def sample(args): model, ae, sample_fn, coords, query, out_shape, _, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args) args.grid_size = grid_size with torch.no_grad(): condition = np.zeros(out_shape) cond = {'y':condition, 'H':H, 'W':W, 'D':D, 'path':args.save_path} for r in range(args.num_samples): samples = sample_fn(model, out_shape, progress=False, model_kwargs=cond) xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, (H, W, D)) model_output = ae.decode([xy_feat, xz_feat, yz_feat], query) sample = get_pred_mask(model_output) output = point2voxel(args, sample, coords) sample = save_remap_lut(args, output, "sample", r, learning_map_inv, training=False) os.umask(0) save_path = os.path.join(args.save_path, f"sample/{r}.label") os.makedirs(args.save_path+'/sample', mode=0o777, exist_ok=True) sample.tofile(save_path) def 
sample_parser(): parser = argparse.ArgumentParser() add_encoding_training_options(parser) add_diffusion_training_options(parser) add_generation_options(parser) parser.add_argument("--gpu_id", default=0, type=int) parser.add_argument("--save_path", type=str, default = '') parser.add_argument("--dataset", default='kitti', choices=['kitti', 'carla']) parser.add_argument("--num_samples", type=int, default=10) args = parser.parse_args() return args if __name__ == '__main__': args = sample_parser() dist_util.setup_dist(args.gpu_id) sample(args) ================================================ FILE: sampling/inpainting.py ================================================ from utils.parser_util import add_diffusion_training_options, add_encoding_training_options, add_in_out_sampling from sampling.outpainting import edit_scene from utils.utils import load_label, save_remap_lut from diffusion.triplane_util import build_sampling_model from utils import dist_util import torch import argparse def inpainting(scene, cond_1, cond_2, cond_3, cond_4, Generate_Scene): cond = scene.clone().detach() edit_scene = scene.clone().detach() output = Generate_Scene(cond, m=(cond_1, cond_2, cond_3, cond_4)) edit_scene[:, cond_3 : cond_4, cond_1 : cond_2, :] = output[:, cond_3 : cond_4, cond_1 : cond_2, :] return edit_scene def edit(args): model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args) args.grid_size = grid_size scene = load_label(args.load_path, learning_map, grid_size) Generate_Scene = edit_scene(args, ae, model, sample_fn, coords, query, out_shape, (H, W, D), args.overlap) more_edit_answer = 'y' while more_edit_answer != 'n' : cond_1, cond_2, cond_3, cond_4 = input('points of re-generation region tl, tr, dl, dr:').split() answer = 'y' while answer == 'y' : new_scene = inpainting(scene, int(cond_1), int(cond_2), int(cond_3), int(cond_4), Generate_Scene) save_scene = save_remap_lut(args, new_scene, None, None, learning_map_inv, training=False) save_scene.tofile(args.save_path+'/inpainting.label') answer = input('Again? (y/n/q) :') if answer == 'n' : scene = new_scene if answer == 'q' : break more_edit_answer = input('More edit? 
================================================
FILE: sampling/inpainting.py
================================================
from utils.parser_util import add_diffusion_training_options, add_encoding_training_options, add_in_out_sampling
from sampling.outpainting import edit_scene
from utils.utils import load_label, save_remap_lut
from diffusion.triplane_util import build_sampling_model
from utils import dist_util
import torch
import argparse


def inpainting(scene, cond_1, cond_2, cond_3, cond_4, Generate_Scene):
    cond = scene.clone().detach()
    edit_scene = scene.clone().detach()
    output = Generate_Scene(cond, m=(cond_1, cond_2, cond_3, cond_4))
    edit_scene[:, cond_3:cond_4, cond_1:cond_2, :] = output[:, cond_3:cond_4, cond_1:cond_2, :]
    return edit_scene


def edit(args):
    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args)
    args.grid_size = grid_size
    scene = load_label(args.load_path, learning_map, grid_size)
    Generate_Scene = edit_scene(args, ae, model, sample_fn, coords, query, out_shape, (H, W, D), args.overlap)

    more_edit_answer = 'y'
    while more_edit_answer != 'n':
        cond_1, cond_2, cond_3, cond_4 = input('points of re-generation region tl, tr, dl, dr:').split()
        answer = 'y'
        while answer == 'y':
            new_scene = inpainting(scene, int(cond_1), int(cond_2), int(cond_3), int(cond_4), Generate_Scene)
            save_scene = save_remap_lut(args, new_scene, None, None, learning_map_inv, training=False)
            save_scene.tofile(args.save_path + '/inpainting.label')
            answer = input('Again? (y/n/q) :')
        if answer == 'n':
            scene = new_scene
        if answer == 'q':
            break
        more_edit_answer = input('More edit? (y/n) :')
        scene = new_scene


def sample_parser():
    parser = argparse.ArgumentParser()
    add_encoding_training_options(parser)
    add_diffusion_training_options(parser)
    add_in_out_sampling(parser)
    parser.add_argument("--save_path", type=str, default='')
    parser.add_argument("--gpu_id", default=0, type=int)
    parser.add_argument("--load_path", default='./dataset/001335.label')
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = sample_parser()
    args.overlap = 'inpainting'
    dist_util.setup_dist(args.gpu_id)
    edit(args)
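# --- Editor's sketch (not part of the repository) --------------------------
# The four integers typed at the prompt bound the regenerated region as
# scene[:, cond_3:cond_4, cond_1:cond_2, :]; everything outside the slice is
# kept from the input scene. A dry run on dummy data:
import torch

scene = torch.zeros(1, 256, 256, 32, dtype=torch.long)
cond_1, cond_2, cond_3, cond_4 = 64, 192, 64, 192      # a centered 128x128 patch
region = scene[:, cond_3:cond_4, cond_1:cond_2, :]
assert region.shape == (1, 128, 128, 32)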
================================================
FILE: sampling/outpainting.py
================================================
from diffusion.triplane_util import build_sampling_model, compose_featmaps, decompose_featmaps
from utils.parser_util import add_in_out_sampling, add_diffusion_training_options, add_encoding_training_options
from utils.utils import point2voxel, load_label, save_remap_lut
from encoding.train_ae import get_pred_mask
from utils import dist_util
import torch
import argparse
import numpy as np


def city_generate(m, scene, Generate_Scene, overlap, out_shape, H=128):
    new_scene = scene.clone().detach()
    if m == 'upleft':
        left_cond = new_scene[:, overlap*2:overlap*4, overlap:overlap*3].detach().clone()
        up_cond = new_scene[:, overlap:overlap*3, overlap*2:overlap*4].detach().clone()
        condition = torch.zeros(out_shape, device=dist_util.dev())
        left_tri = Generate_Scene(left_cond, m, decode=False)
        up_tri = Generate_Scene(up_cond, m, decode=False)
        condition[:, :, :, :int(overlap/2)] = left_tri[:, :, :, H-int(overlap/2):H].detach().clone()
        condition[:, :, :int(overlap/2), :] = up_tri[:, :, H-int(overlap/2):H, :].detach().clone()
        output = Generate_Scene(condition, m, encode=False)
        new_scene[:, overlap*2:overlap*4, overlap*2:overlap*4, :] = output
    elif m == 'upright':
        right_cond = new_scene[:, overlap*2:overlap*4, overlap:overlap*3, :].detach().clone()
        up_cond = new_scene[:, overlap:overlap*3, :overlap*2, :].detach().clone()
        condition = torch.zeros(out_shape, device=dist_util.dev())
        right_tri = Generate_Scene(right_cond, m, decode=False)
        up_tri = Generate_Scene(up_cond, m, decode=False)
        condition[:, :, :, H-int(overlap/2):H] = right_tri[:, :, :, :int(overlap/2)].detach().clone()
        condition[:, :, :int(overlap/2), :] = up_tri[:, :, H-int(overlap/2):H, :].detach().clone()
        output = Generate_Scene(condition, m, encode=False)
        new_scene[:, overlap*2:overlap*4, :overlap*2, :] = output
    elif m == 'downright':
        right_cond = new_scene[:, :overlap*2, overlap:overlap*3].detach().clone()
        down_cond = new_scene[:, overlap:overlap*3, :overlap*2].detach().clone()
        condition = torch.zeros(out_shape, device=dist_util.dev())
        right_tri = Generate_Scene(right_cond, m, decode=False)
        down_tri = Generate_Scene(down_cond, m, decode=False)
        condition[:, :, :, H-int(overlap/2):H] = right_tri[:, :, :, :int(overlap/2)].detach().clone()
        condition[:, :, H-int(overlap/2):H, :] = down_tri[:, :, :int(overlap/2), :].detach().clone()
        output = Generate_Scene(condition, m, encode=False)
        new_scene[:, :overlap*2, :overlap*2, :] = output
    elif m == 'downleft':
        left_cond = new_scene[:, :overlap*2, overlap:overlap*3].detach().clone()
        down_cond = new_scene[:, overlap:overlap*3, overlap*2:overlap*4].detach().clone()
        condition = torch.zeros(out_shape, device=dist_util.dev())
        left_tri = Generate_Scene(left_cond, m, decode=False)
        down_tri = Generate_Scene(down_cond, m, decode=False)
        condition[:, :, :, :int(overlap/2)] = left_tri[:, :, :, H-int(overlap/2):H].detach().clone()
        condition[:, :, H-int(overlap/2):H, :] = down_tri[:, :, :int(overlap/2), :].detach().clone()
        output = Generate_Scene(condition, m, encode=False)
        new_scene[:, :overlap*2, overlap*2:overlap*4, :] = output
    else:
        condition = new_scene[:, overlap:3*overlap, overlap:3*overlap, :]
        output = Generate_Scene(condition, m)
        if m == 'down':
            new_scene[:, :2*overlap, overlap:3*overlap, :] = output
        elif m == 'up':
            new_scene[:, 2*overlap:, overlap:3*overlap, :] = output
        elif m == 'left':
            new_scene[:, overlap:3*overlap, 2*overlap:, :] = output
        elif m == 'right':
            new_scene[:, overlap:3*overlap, :2*overlap, :] = output
    return new_scene


class edit_scene(torch.nn.Module):
    def __init__(self, args, ae, model, sample_fn, coords, query, out_shape, tri_size, overlap):
        super().__init__()
        self.args = args
        self.overlap = overlap
        self.model, self.ae = model, ae
        self.sample_fn = sample_fn
        self.coords, self.query = coords, query
        self.out_shape = out_shape
        self.tri_size = tri_size
        H, W, D = tri_size
        self.cond = {'y': np.zeros((1, H + D, H + D)), 'H': [H], 'W': [W], 'D': [D], 'path': 0}

    def encode(self, condition):
        xy_feat, xz_feat, yz_feat = self.ae.encode(condition)
        before_scene, _ = compose_featmaps(xy_feat, xz_feat, yz_feat, self.tri_size)
        return before_scene

    def decode(self, samples):
        xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, self.tri_size)
        model_output = self.ae.decode([xy_feat, xz_feat, yz_feat], self.query)
        sample = get_pred_mask(model_output)
        output = point2voxel(self.args, sample, self.coords)
        return output

    def forward(self, condition, m, encode=True, decode=True):
        condition = condition.detach().clone()
        with torch.no_grad():
            if encode and decode:
                before_scene = self.encode(condition)
                samples = self.sample_fn(self.model, self.out_shape, model_kwargs=self.cond,
                                         cond=before_scene, mode=m, overlap=self.overlap)
                output = self.decode(samples)
            elif encode:
                output = self.encode(condition)
            elif decode:
                samples = self.sample_fn(self.model, self.out_shape, model_kwargs=self.cond,
                                         cond=condition, mode=m, overlap=self.overlap)
                output = self.decode(samples)
        return output


def outpaint(args):
    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, _, args = build_sampling_model(args)
    args.grid_size = grid_size
    voxel_label = load_label(args.load_path, learning_map, grid_size)
    scene = torch.zeros(1, 2 * grid_size[1], 2 * grid_size[1], grid_size[-1]).type(torch.LongTensor).to(dist_util.dev())
    overlap = int(grid_size[1] / 2)
    scene[:, overlap:overlap * 3, overlap:overlap * 3, :] = voxel_label
    Generate_Scene = edit_scene(args, ae, model, sample_fn, coords, query, out_shape, (H, W, D), overlap)
    for m in ['down', 'left', 'right', 'up', 'downleft', 'downright', 'upleft', 'upright']:
        print("Generating :", m)
        new_scene = city_generate(m, scene, Generate_Scene, overlap, out_shape)
        scene = new_scene
    save_scene = save_remap_lut(args, scene, None, None, learning_map_inv, training=False)
    save_scene.tofile(args.save_path + '/outpainting.label')


def sample_parser():
    parser = argparse.ArgumentParser()
    add_in_out_sampling(parser)
    add_encoding_training_options(parser)
    add_diffusion_training_options(parser)
    parser.add_argument("--save_path", type=str, default='')
    parser.add_argument("--gpu_id", default=0, type=int)
    parser.add_argument("--load_path", default='./dataset/001335.label')
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = sample_parser()
    dist_util.setup_dist(args.gpu_id)
    outpaint(args)
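# --- Editor's sketch (not part of the repository) --------------------------
# outpaint() doubles the scene: the 256x256x32 input is pasted into the centre
# of a 512x512x32 canvas (overlap = 256 // 2 = 128), then city_generate fills
# the four edges and four corners around it:
import torch

grid = (1, 256, 256, 32)
overlap = grid[1] // 2                                  # 128
canvas = torch.zeros(1, 2 * grid[1], 2 * grid[1], grid[-1], dtype=torch.long)
canvas[:, overlap:overlap * 3, overlap:overlap * 3, :] = torch.ones(grid, dtype=torch.long)
assert canvas.shape == (1, 512, 512, 32)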
================================================
FILE: sampling/ssc_refine.py
================================================
from diffusion.triplane_util import build_sampling_model
from utils.parser_util import add_encoding_training_options, add_diffusion_training_options, add_refine_options
from utils.common_util import get_result
from utils.utils import save_remap_lut, point2voxel, unpack, load_label
from dataset.tri_dataset_builder import TriplaneDataset
from encoding.ssc_metrics import SSCMetrics
from encoding.train_ae import get_pred_mask
from diffusion.nn import decompose_featmaps
from utils import dist_util
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import torch
import os
import argparse
import numpy as np
from tqdm.auto import tqdm


def sample(args, tb):
    model, ae, sample_fn, coords, query, out_shape, learning_map, learning_map_inv, H, W, D, grid_size, class_name, args = build_sampling_model(args)
    args.grid_size = grid_size
    ds = TriplaneDataset(args, 'val')
    dl = DataLoader(ds, batch_size=args.batch_size, shuffle=False, pin_memory=True)
    tqdm_ = tqdm(dl)
    refine_evaluator, ssc_evaluator = SSCMetrics(args.num_class, []), SSCMetrics(args.num_class, [])
    with torch.no_grad():
        for _, cond in tqdm_:
            # load dataset
            idx = cond['path'][0].split("/")[-1].split(".")[0].split("_")[0]
            folder = cond['path'][0].split("/")[-3]
            os.umask(0)
            os.makedirs(args.save_path + '/' + folder, mode=0o777, exist_ok=True)
            save_path = os.path.join(args.save_path, f"{folder}/{idx}.label")
            gt_path = os.path.join(args.data_path, f"{folder}/voxels/{idx}.label")
            cond_path = os.path.join(args.data_path, f"{folder}/{args.refine_dataset}/{idx}.label")

            vox_label = load_label(gt_path, learning_map, grid_size)
            cond_label = load_label(cond_path, learning_map, grid_size)
            invalid = torch.from_numpy(unpack(np.fromfile(gt_path.replace('label', 'invalid'), dtype=np.uint8)))
            invalid = invalid.squeeze().type(torch.FloatTensor).cuda().reshape(grid_size)
            masks = torch.from_numpy(refine_evaluator.get_eval_mask(vox_label.cpu().numpy(), invalid.cpu().numpy()))
            eval_label = vox_label[masks]
            cond_eval_label = cond_label[masks]

            # ssc refine
            samples = sample_fn(model, out_shape, progress=False, model_kwargs=cond)
            xy_feat, xz_feat, yz_feat = decompose_featmaps(samples, (H, W, D))
            model_output = ae.decode([xy_feat, xz_feat, yz_feat], query)
            sample = get_pred_mask(model_output)
            output = point2voxel(args, sample, coords)
            eval_output = output[masks]

            this_iou, this_miou, _ = refine_evaluator.one_stats(eval_output.cpu().numpy().astype(int),
                                                                eval_label.cpu().numpy().astype(int))
            tqdm_.set_postfix({"iou": this_iou, "miou": this_miou})
            sample = save_remap_lut(args, output, folder, idx, learning_map_inv, training=False)
            sample.tofile(save_path)

            ssc_evaluator.addBatch(cond_eval_label.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))
            refine_evaluator.addBatch(eval_output.cpu().numpy().astype(int), eval_label.cpu().numpy().astype(int))

    # get_result in utils/common_util.py takes (evaluator, class_name); the
    # original call also passed tb and args.save_path, which it does not accept.
    get_result(ssc_evaluator, class_name)
    get_result(refine_evaluator, class_name)


def sample_parser():
    parser = argparse.ArgumentParser()
    add_encoding_training_options(parser)
    add_diffusion_training_options(parser)
    add_refine_options(parser)
    parser.add_argument("--gpu_id", default=0, type=int)
    parser.add_argument("--refine_dataset", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc'])
    parser.add_argument("--save_path", type=str, default='')
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = sample_parser()
    dist_util.setup_dist(args.gpu_id)
    tb = SummaryWriter(os.path.join(args.save_path, 'tb'))
    sample(args, tb)
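# --- Editor's sketch (not part of the repository) --------------------------
# SemanticKITTI stores the per-voxel invalid mask bit-packed, one byte per
# eight voxels; utils.unpack expands it before it is reshaped to the grid.
# The path below is hypothetical:
import numpy as np
from utils.utils import unpack

packed = np.fromfile("sequences/08/voxels/000000.invalid", dtype=np.uint8)  # 262144 bytes
invalid = unpack(packed).reshape(256, 256, 32)                              # 1 = invalid voxel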
================================================
FILE: scripts/save_triplane.py
================================================
import torch
import numpy as np
import argparse
from encoding.networks import AutoEncoderGroupSkip
from diffusion.triplane_util import compose_featmaps
from tqdm.auto import tqdm
import os
from dataset.kitti_dataset import SemKITTI
from dataset.carla_dataset import CarlaDataset
from dataset.path_manager import *
from pathlib import Path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--geo_feat_channels", type=int, default=16, help="geometry feature dimension")
    parser.add_argument("--feat_channel_up", type=int, default=64, help="conv feature dimension")
    parser.add_argument("--mlp_hidden_channels", type=int, default=256, help="mlp hidden dimension")
    parser.add_argument("--mlp_hidden_layers", type=int, default=4, help="mlp hidden layers")
    parser.add_argument("--z_down", default=False)
    parser.add_argument("--padding_mode", default='replicate')
    parser.add_argument('--lovasz', type=bool, default=True)
    parser.add_argument("--dataset", default='kitti', choices=['kitti', 'carla'])
    parser.add_argument('--data_name', default='voxels')
    parser.add_argument('--data_tail', default='.label')
    parser.add_argument('--save_name', default='triplane')
    parser.add_argument('--save_tail', default='_scpnet.npy')
    parser.add_argument('--resume', default='/home/jumin/Documents/Projects/SemCity/results/4_miou=81.715.pt')
    ### Ablation ###
    parser.add_argument("--triplane", type=bool, default=True)
    parser.add_argument("--pos", default=True, type=bool)
    parser.add_argument("--voxel_fea", default=False, type=bool)
    args = parser.parse_args()
    return args


@torch.no_grad()
def save(args):
    if args.dataset == 'kitti':
        dataset = SemKITTI(args, 'train', get_query=False, folder=args.data_name)
        val_dataset = SemKITTI(args, 'val', get_query=False, folder=args.data_name)
        tri_size = (128, 128, 16) if args.z_down else (128, 128, 32)
    elif args.dataset == 'carla':
        dataset = CarlaDataset(args, 'train', get_query=False)
        val_dataset = CarlaDataset(args, 'val', get_query=False)
        tri_size = (64, 64, 4) if args.z_down else (64, 64, 8)

    dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, num_workers=4)
    val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=1, shuffle=False, num_workers=4)

    print(args.data_name)
    print(f'The number of voxel labels is {len(dataset)}.')
    print(f'Load autoencoder model from "{args.resume}"')
    model = AutoEncoderGroupSkip(args)
    model = model.cuda()
    checkpoint = torch.load(args.resume)
    model.load_state_dict(checkpoint['model'])
    model.eval()

    print("\nSave Triplane...")
    for loader in [dataloader, val_dataloader]:
        for vox, _, _, _, path, invalid in tqdm(loader):
            # to gpu
            vox = vox.type(torch.LongTensor).cuda()
            invalid = invalid.type(torch.LongTensor).cuda()
            vox[invalid == 1] = 0
            triplane = model.encode(vox)
            if not args.voxel_fea:
                triplane, _ = compose_featmaps(triplane[0].squeeze(), triplane[1].squeeze(),
                                               triplane[2].squeeze(), tri_size)
            file_idx = str(Path(path[0]).stem.split('_')[0])    # e.g., 002165
            folder_idx = str(Path(path[0]).parent.parent.stem)  # e.g., 00
            save_folder_path = os.path.join(args.save_path, folder_idx, args.save_name)  # e.g., /home/sebin/dataset/sequence/00/tri_1enc_1dec_0pad
            os.makedirs(save_folder_path, exist_ok=True)
            np.save(os.path.join(save_folder_path, file_idx + args.save_tail), triplane.cpu().numpy())


def main():
    args = get_args()
    if args.dataset == 'kitti':
        args.num_class = 20
        args.data_path = SEMKITTI_DATA_PATH
        args.save_path = SEMKITTI_DATA_PATH
        args.yaml_path = SEMKITTI_YAML_PATH
    elif args.dataset == 'carla':
        args.num_class = 11
        args.data_path = CARLA_DATA_PATH
        args.save_path = CARLA_DATA_PATH
        args.yaml_path = CARLA_YAML_PATH
    save(args)


if __name__ == '__main__':
    main()
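# --- Editor's sketch (not part of the repository) --------------------------
# A quick shape check on a saved triplane. With geo_feat_channels=16 and
# tri_size (128, 128, 32), compose_featmaps appears to pack the xy/xz/yz
# planes into a single (16, H+D, W+D) = (16, 160, 160) map; treat that layout
# as an assumption and verify against diffusion/triplane_util.py. The path is
# hypothetical:
import numpy as np

tri = np.load("sequences/00/triplane/000000.npy")
print(tri.shape)   # expected (16, 160, 160) for KITTI without --z_down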
================================================
FILE: scripts/train_ae_main.py
================================================
import argparse
from encoding.train_ae import Trainer
from dataset.path_manager import *


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--geo_feat_channels", type=int, default=16, help="geometry feature dimension")
    parser.add_argument("--feat_channel_up", type=int, default=64, help="conv feature dimension")
    parser.add_argument("--mlp_hidden_channels", type=int, default=256, help="mlp hidden dimension")
    parser.add_argument("--mlp_hidden_layers", type=int, default=4, help="mlp hidden layers")
    parser.add_argument("--padding_mode", default='replicate')
    parser.add_argument("--bs", type=int, default=4, help="batch size for autoencoding training")
    parser.add_argument("--dataset", default='kitti', choices=['kitti', 'carla'])
    parser.add_argument("--z_down", default=False)
    parser.add_argument("--lr", type=float, default=0.001)
    parser.add_argument("--lr_scheduler", default=True)
    parser.add_argument("--lr_scheduler_steps", nargs='+', type=int, default=[30, 40])
    parser.add_argument("--lr_scheduler_decay", type=float, default=0.5)
    parser.add_argument('--save_path', type=str, default='')
    parser.add_argument('--resume', default=None)
    parser.add_argument('--display_period', type=int, default=50)
    parser.add_argument('--eval_epoch', type=int, default=1)
    ### Ablation ###
    parser.add_argument("--triplane", type=bool, default=True, help="use triplane feature, if False, use bev feature")
    parser.add_argument("--pos", default=True, type=bool)
    parser.add_argument("--voxel_fea", default=False, type=bool, help="use 3d voxel feature")
    args = parser.parse_args()
    return args


def main():
    args = get_args()
    if args.dataset == 'carla':
        args.data_path = CARLA_DATA_PATH
        args.yaml_path = CARLA_YAML_PATH
    elif args.dataset == 'kitti':
        args.data_path = SEMKITTI_DATA_PATH
        args.yaml_path = SEMKITTI_YAML_PATH
    trainer = Trainer(args)
    trainer.train()


if __name__ == '__main__':
    main()
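# --- Editor's note (not part of the repository) -----------------------------
# Caveat when overriding the boolean flags above: argparse's type=bool calls
# bool() on the raw string, so any non-empty value (including "False") parses
# as True, and flags declared without a type stay strings, which are also
# truthy. utils/parser_util.py ships str2bool for exactly this reason:
import argparse

p = argparse.ArgumentParser()
p.add_argument("--triplane", type=bool, default=True)
print(p.parse_args(["--triplane", "False"]).triplane)   # True, not False!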
================================================
FILE: scripts/train_diffusion_main.py
================================================
from utils.parser_util import add_diffusion_training_options, add_encoding_training_options
from dataset.tri_dataset_builder import TriplaneDataset
from diffusion.script_util import create_model_and_diffusion_from_args
from diffusion.resample import create_named_schedule_sampler
from diffusion.train_util import TrainLoop
from diffusion import logger
from utils import dist_util
from dataset.path_manager import *
from utils.utils import cycle
from torch.utils.data import DataLoader
import argparse


def train_diffusion(args):
    log_dir = args.save_path
    logger.configure(dir=log_dir)

    ds = TriplaneDataset(args, 'train')
    val_ds = TriplaneDataset(args, 'val')
    collate_fn = None
    dl = DataLoader(ds, batch_size=args.batch_size, shuffle=True, pin_memory=True, collate_fn=collate_fn)
    dl = cycle(dl)
    val_dl = DataLoader(val_ds, batch_size=args.batch_size, shuffle=False, pin_memory=True, collate_fn=collate_fn)
    val_dl = cycle(val_dl)

    model, diffusion = create_model_and_diffusion_from_args(args)
    model.to(dist_util.dev())
    schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)
    TrainLoop(
        diffusion_net=args.diff_net_type,
        triplane_loss_type=args.triplane_loss_type,
        timestep_respacing=args.timestep_respacing,
        training_step=args.steps,
        model=model,
        diffusion=diffusion,
        data=dl,
        val_data=val_dl,
        ssc_refine=args.ssc_refine,
        batch_size=args.batch_size,
        microbatch=-1,
        lr=args.diff_lr,
        ema_rate=args.ema_rate,
        log_interval=args.log_interval,
        save_interval=args.save_interval,
        resume_checkpoint=args.resume_checkpoint,
        use_fp16=args.use_fp16,
        schedule_sampler=schedule_sampler,
        weight_decay=args.weight_decay,
        lr_anneal_steps=args.diff_n_iters,
    ).run_loop()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    add_diffusion_training_options(parser)
    parser.add_argument("--gpu_id", default=0, type=int)
    parser.add_argument("--save_path", type=str, default='')
    parser.add_argument('--ssc_refine', action='store_true')
    parser.add_argument("--ssc_refine_dataset", default='monoscene', choices=['monoscene', 'occdepth', 'scpnet', 'ssasc'])
    parser.add_argument("--dataset", default='kitti', choices=['kitti', 'carla'])
    parser.add_argument("--batch_size", type=int, default=16, help="batch size for diffusion training")
    parser.add_argument("--resume_checkpoint", type=str, default=None)
    parser.add_argument("--triplane_loss_type", type=str, default='l2', choices=['l1', 'l2'])
    add_encoding_training_options(parser)
    parser.add_argument("--triplane", default=True)
    parser.add_argument("--pos", default=True, type=bool)
    parser.add_argument("--voxel_fea", default=False, type=bool)
    args = parser.parse_args()

    if args.dataset == 'carla':
        args.data_path = CARLA_DATA_PATH
        args.yaml_path = CARLA_YAML_PATH
    elif args.dataset == 'kitti':
        args.data_path = SEMKITTI_DATA_PATH
        args.yaml_path = SEMKITTI_YAML_PATH

    if args.voxel_fea:
        args.diff_net_type = "unet_voxel"
    else:
        args.diff_net_type = "unet_tri" if args.triplane else "unet_bev"

    dist_util.setup_dist(args.gpu_id)
    train_diffusion(args)


================================================
FILE: setup.py
================================================
from setuptools import setup

setup(
    name="SemCity",
    version="0.1",
    # note: the directory is named "sampling" (the original listed "sample")
    py_modules=["scripts", "dataset", "encoding", "diffusion", "sampling", "utils"],
)


================================================
FILE: utils/common_util.py
================================================
import random
import numpy as np
import torch
import matplotlib.pyplot as plt


def seed_all(seed):
    random.seed(seed)       # python random generator
    np.random.seed(seed)    # numpy random generator
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def draw_scalar_field2D(arr, vmin=None, vmax=None, cmap=None, title=None):
    multi = max(arr.shape[0] // 512, 1)
    fig, ax = plt.subplots(figsize=(5 * multi, 5 * multi))
    cax1 = ax.matshow(arr, vmin=vmin, vmax=vmax, cmap=cmap)
    fig.colorbar(cax1, ax=ax, fraction=0.046, pad=0.04)
    fig.tight_layout()
    if title is not None:
        ax.set_title('08/' + str(title).zfill(6))
    return fig


def get_result(evaluator, class_name):
    _, class_jaccard = evaluator.getIoU()
    m_jaccard = class_jaccard[1:].mean()
    miou = m_jaccard * 100
    conf = evaluator.get_confusion()
    iou = (np.sum(conf[1:, 1:])) / (np.sum(conf) - conf[0, 0] + 1e-8) * 100
    evaluator.reset()
    print(f"mIoU: {miou:.2f}")
    print(f"iou: {iou:.2f}")
    # class 0 is the unlabeled/empty class, so class_name[0] maps to
    # class_jaccard[1] (the original indexed class_jaccard[i], which is off by
    # one relative to the per-class logging in encoding/train_ae.py)
    for i, c in enumerate(class_name):
        print(f"{c}: {class_jaccard[i + 1] * 100:.2f}")
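# --- Editor's sketch (not part of the repository) --------------------------
# seed_all pins the python, numpy and torch RNGs and switches cuDNN to
# deterministic mode, so repeated runs draw identical noise:
import torch
from utils.common_util import seed_all

seed_all(42)
a = torch.randn(3)
seed_all(42)
b = torch.randn(3)
assert torch.equal(a, b)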
================================================
FILE: utils/dist_util.py
================================================
"""
Helpers for distributed training.
"""

import socket
import os

import torch as th
import torch.distributed as dist

# Change this to reflect your cluster layout.
# The GPU for a given rank is (rank % GPUS_PER_NODE).
GPUS_PER_NODE = 8
SETUP_RETRY_COUNT = 3
used_device = 0


def setup_dist(device=0):
    """
    Setup a distributed process group.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device)  # f"{MPI.COMM_WORLD.Get_rank() % GPUS_PER_NODE}"


def dev():
    """
    Get the device to use for torch.distributed.
    """
    global used_device
    if th.cuda.is_available() and used_device >= 0:
        return th.device(f"cuda:{used_device}")
    return th.device("cpu")


def load_state_dict(path, **kwargs):
    """
    Load a PyTorch file without redundant fetches across MPI ranks.
    """
    return th.load(path, **kwargs)


def sync_params(params):
    """
    Synchronize a sequence of Tensors across ranks from rank 0.
    """
    for p in params:
        with th.no_grad():
            dist.broadcast(p, 0)
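# --- Editor's note (not part of the repository) -----------------------------
# setup_dist pins the process to one physical GPU via CUDA_VISIBLE_DEVICES
# (call it before anything initialises CUDA). The surviving GPU is then always
# addressed as cuda:0, which is why dev() can hard-code used_device = 0:
from utils import dist_util

dist_util.setup_dist(3)     # physical GPU 3 ...
print(dist_util.dev())      # ... shows up as cuda:0 (or cpu without CUDA)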
================================================
FILE: utils/parser_util.py
================================================
import argparse
import json
from dataset.path_manager import *
import numpy as np
from utils.utils import read_semantickitti_yaml
import yaml


def add_encoding_training_options(parser):
    group = parser.add_argument_group("encoding")
    group.add_argument("--feat_channel_up", type=int, default=64, help="conv feature dimension")
    group.add_argument("--mlp_hidden_channels", type=int, default=256, help="mlp hidden dimension")
    group.add_argument("--mlp_hidden_layers", type=int, default=4, help="mlp hidden layers")
    group.add_argument("--invalid_class", type=bool, default=False)
    group.add_argument("--padding_mode", default='replicate')
    group.add_argument("--lovasz", default=True)
    group.add_argument("--geo_feat_channels", type=int, default=16, help="geometry feature dimension")
    group.add_argument("--z_down", default=False)


def add_diffusion_training_options(parser):
    group = parser.add_argument_group("diffusion")
    group.add_argument("--steps", type=int, default=100, help="diffusion steps")
    group.add_argument("--is_rollout", type=bool, default=True)
    group.add_argument('--mult_channels', default=(1, 2, 4))
    group.add_argument("--diff_lr", type=float, default=5e-4, help="initial learning rate for diffusion training")
    group.add_argument("--schedule_sampler", type=str, default="uniform", help="schedule sampler")
    group.add_argument("--ema_rate", type=float, default=0.9999, help="ema rate")
    group.add_argument("--weight_decay", type=float, default=0.0, help="weight decay")
    group.add_argument("--log_interval", type=int, default=500, help="log interval")
    group.add_argument("--save_interval", type=int, default=1000, help="save interval")
    group.add_argument("--use_fp16", type=bool, default=False)
    group.add_argument("--predict_xstart", type=bool, default=True)
    group.add_argument("--learn_sigma", type=bool, default=False)
    group.add_argument("--timestep_respacing", default='')
    group.add_argument("--use_ddim", type=str2bool, default=False, help="use ddim")
    group.add_argument("--conv_down", default=True)
    group.add_argument("--diff_n_iters", type=int, default=50000, help="lr anneal steps for diffusion training")
    group.add_argument("--tri_z_down", default=False)
    group.add_argument('--tri_unet_updown', type=bool, default=True)
    group.add_argument("--model_channels", default=64, help="model channels")


def add_generation_options(parser):
    group = parser.add_argument_group("sampling")
    group.add_argument("--triplane", default=True)
    group.add_argument("--pos", default=True, type=bool)
    group.add_argument("--voxel_fea", default=False)
    group.add_argument('--ssc_refine', default=False, type=bool)
    group.add_argument("--refine_dataset", default='monoscene',
                       choices=['monoscene', 'occdepth', 'scpnet', 'ssasc', 'lmsc', 'motionsc', 'sscfull'])
    group.add_argument("--triplane_loss_type", type=str, default='l2', choices=['l1', 'l2'])
    group.add_argument("--batch_size", type=int, default=1)
    group.add_argument("--diff_net_type", type=str, default='unet_tri')
    group.add_argument("--repaint", default=False, type=bool)


def add_refine_options(parser):
    group = parser.add_argument_group("sampling")
    group.add_argument("--triplane", default=True)
    group.add_argument("--pos", default=True, type=bool)
    group.add_argument("--voxel_fea", default=False)
    group.add_argument('--ssc_refine', default=True, type=bool)
    group.add_argument("--dataset", default='kitti')
    group.add_argument("--triplane_loss_type", type=str, default='l2', choices=['l1', 'l2'])
    group.add_argument("--diff_net_type", type=str, default='unet_tri')
    group.add_argument("--repaint", default=False, type=bool)
    group.add_argument("--batch_size", type=int, default=1)


def add_in_out_sampling(parser):
    group = parser.add_argument_group("sampling")
    group.add_argument("--triplane", default=True)
    group.add_argument("--pos", default=True, type=bool)
    group.add_argument("--voxel_fea", default=False)
    group.add_argument('--ssc_refine', default=False, type=bool)
    group.add_argument("--refine_dataset", default='monoscene',
                       choices=['monoscene', 'occdepth', 'scpnet', 'ssasc', 'lmsc', 'motionsc', 'sscfull'])
    group.add_argument("--triplane_loss_type", type=str, default='l2', choices=['l1', 'l2'])
    group.add_argument("--batch_size", type=int, default=1)
    group.add_argument("--diff_net_type", type=str, default='unet_tri')
    group.add_argument("--repaint", default=True, type=bool)
    group.add_argument("--dataset", default='kitti')
def get_gen_args(args):
    if args.dataset == 'kitti':
        if args.z_down:
            H, W, D = 128, 128, 16
        else:
            H, W, D = 128, 128, 32
        learning_map, learning_map_inv = read_semantickitti_yaml()
        grid_size = (1, 256, 256, 32)
        class_name = ['car', 'bicycle', 'motorcycle', 'truck', 'other-vehicle', 'person',
                      'bicyclist', 'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground',
                      'building', 'fence', 'vegetation', 'trunk', 'terrain', 'pole', 'traffic-sign']
        tri_size = (128, 128, 16) if args.z_down else (128, 128, 32)
        num_class = 20
        max_points = 400000
    elif args.dataset == 'carla':
        if args.z_down:
            H, W, D = 64, 64, 4
        else:
            H, W, D = 64, 64, 8
        with open(args.yaml_path, 'r') as stream:
            data_yaml = yaml.safe_load(stream)
        label_remap = data_yaml["learning_map"]
        learning_map = np.asarray(list(label_remap.values()))
        learning_map_inv = None
        class_name = ['building', 'barrier', 'other', 'pedestrian', 'pole', 'road',
                      'ground', 'sidewalk', 'vegetation', 'vehicle']
        grid_size = (1, 128, 128, 8)
        tri_size = (64, 64, 4) if args.z_down else (64, 64, 8)
        num_class = 11
        max_points = 70000
    return H, W, D, learning_map, learning_map_inv, class_name, grid_size, tri_size, num_class, max_points


def diffusion_defaults():
    return dict(
        learn_sigma=False,
        noise_schedule="linear",
        timestep_respacing="",
        use_kl=False,
        rescale_timesteps=False,
        rescale_learned_sigmas=False,
    )


def diffusion_model_defaults():
    return dict(
        in_channels=8,
        out_channels=8,
        num_res_blocks=1,
        dropout=0,
        use_checkpoint=False,
        use_fp16=False,
        use_scale_shift_norm=True,
    )


def get_args_by_group(parser, args, group_name):
    for group in parser._action_groups:
        if group.title == group_name:
            group_dict = {a.dest: getattr(args, a.dest, None) for a in group._group_actions}
            return group_dict
    # the original returned the exception instead of raising it
    raise ValueError('group_name was not found.')


def load_and_overwrite_args(args, path, ignore_keys=[]):
    with open(path, "r") as f:
        overwrite_args = json.load(f)
    for k, v in overwrite_args.items():
        if k not in ignore_keys:
            setattr(args, k, v)
    return args


def add_dict_to_argparser(parser, default_dict):
    for k, v in default_dict.items():
        v_type = type(v)
        if v is None:
            v_type = str
        elif isinstance(v, bool):
            v_type = str2bool
        parser.add_argument(f"--{k}", default=v, type=v_type)


def args_to_dict(args, keys):
    return {k: getattr(args, k) for k in keys}


def str2bool(v):
    """
    https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse
    """
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    elif v.lower() in ("no", "false", "f", "n", "0"):
        return False
    else:
        raise argparse.ArgumentTypeError("boolean value expected")
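# --- Editor's sketch (not part of the repository) --------------------------
# diffusion_defaults() and add_dict_to_argparser are designed to round-trip:
# each key becomes a --flag typed from its default (bools via str2bool).
import argparse
from utils.parser_util import add_dict_to_argparser, args_to_dict, diffusion_defaults

parser = argparse.ArgumentParser()
add_dict_to_argparser(parser, diffusion_defaults())
args = parser.parse_args(["--noise_schedule", "cosine", "--use_kl", "true"])
print(args_to_dict(args, diffusion_defaults().keys()))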
================================================
FILE: utils/utils.py
================================================
from prettytable import PrettyTable
import os
import torch
import yaml
import numpy as np
from functools import lru_cache
from dataset.path_manager import *


def read_semantickitti_yaml():
    with open(SEMKITTI_YAML_PATH, 'r') as stream:
        semkittiyaml = yaml.safe_load(stream)
    learning_map_inv = semkittiyaml["learning_map_inv"]
    learning_map = semkittiyaml['learning_map']
    maxkey = max(learning_map.keys())
    remap_lut = np.zeros((maxkey + 100), dtype=np.int32)
    remap_lut[list(learning_map.keys())] = list(learning_map.values())
    remap_lut[remap_lut == 0] = 255  # map 0 to 'invalid'
    remap_lut[0] = 0
    return remap_lut, learning_map_inv


def unpack(compressed):
    '''given a bit encoded voxel grid, make a normal voxel grid out of it.'''
    uncompressed = np.zeros(compressed.shape[0] * 8, dtype=np.uint8)
    uncompressed[::8] = compressed[:] >> 7 & 1
    uncompressed[1::8] = compressed[:] >> 6 & 1
    uncompressed[2::8] = compressed[:] >> 5 & 1
    uncompressed[3::8] = compressed[:] >> 4 & 1
    uncompressed[4::8] = compressed[:] >> 3 & 1
    uncompressed[5::8] = compressed[:] >> 2 & 1
    uncompressed[6::8] = compressed[:] >> 1 & 1
    uncompressed[7::8] = compressed[:] & 1
    return uncompressed


def load_label(path, learning_map, grid_size):
    label = np.fromfile(path, dtype=np.uint16).reshape((-1, 1))
    label = learning_map[label]
    label = torch.from_numpy(label).squeeze().type(torch.LongTensor).cuda().reshape(grid_size)
    label[label == 255] = 0
    return label


def write_result(args):
    os.umask(0)
    os.makedirs(args.save_path, mode=0o777, exist_ok=True)
    args_table = PrettyTable(['Arg', 'Value'])
    for arg, val in vars(args).items():
        args_table.add_row([arg, val])
    with open(os.path.join(args.save_path, 'results.txt'), "w") as f:
        f.write(str(args_table))


def point2voxel(args, preds, coords):
    if len(args.grid_size) == 4:
        output = torch.zeros((preds.shape[0], args.grid_size[1], args.grid_size[2], args.grid_size[3]),
                             device=preds.device)
    else:
        output = torch.zeros((preds.shape[0], args.grid_size[0], args.grid_size[1], args.grid_size[2]),
                             device=preds.device)
    for i in range(preds.shape[0]):
        output[i, coords[i, :, 0], coords[i, :, 1], coords[i, :, 2]] = preds[i]
    return output


def visualization(args, coords, preds, folder, idx, learning_map_inv, training):
    output = point2voxel(args, preds, coords)
    return save_remap_lut(args, output, folder, idx, learning_map_inv, training)


def save_remap_lut(args, pred, folder, idx, learning_map_inv, training, make_numpy=True):
    if make_numpy:
        pred = pred.cpu().long().data.numpy()
    if learning_map_inv is not None:
        maxkey = max(learning_map_inv.keys())
        # +100 hack making lut bigger just in case there are unknown labels
        remap_lut_First = np.zeros((maxkey + 100), dtype=np.int32)
        remap_lut_First[list(learning_map_inv.keys())] = list(learning_map_inv.values())
        pred = pred.astype(np.uint32)
        pred = pred.reshape((-1))
        upper_half = pred >> 16                   # get upper half for instances
        lower_half = pred & 0xFFFF                # get lower half for semantics
        lower_half = remap_lut_First[lower_half]  # do the remapping of semantics
        pred = (upper_half << 16) + lower_half    # reconstruct full label
        pred = pred.astype(np.uint32)
    if training:
        final_preds = pred.astype(np.uint16)
        os.umask(0)
        os.makedirs(args.save_path + '/sample/' + str(folder), mode=0o777, exist_ok=True)
        if torch.is_tensor(idx):
            save_path = args.save_path + '/sample/' + str(folder) + '/' + str(idx.item()).zfill(3) + '.label'
        else:
            save_path = args.save_path + '/sample/' + str(folder) + '/' + str(idx).zfill(3) + '.label'
        final_preds.tofile(save_path)
    else:
        return pred.astype(np.uint16)


def cycle(dl):
    while True:
        for data in dl:
            yield data


@lru_cache(4)
def voxel_coord(voxel_shape):
    x = np.arange(voxel_shape[0])
    y = np.arange(voxel_shape[1])
    z = np.arange(voxel_shape[2])
    Y, X, Z = np.meshgrid(x, y, z)
    voxel_coord = np.concatenate((X[..., None], Y[..., None], Z[..., None]), axis=-1)
    return voxel_coord


def make_query(grid_size):
    gs = grid_size[1:]
    coords = torch.from_numpy(voxel_coord(gs))
    coords = coords.reshape(-1, 3)
    query = torch.zeros(coords.shape, dtype=torch.float32)
    query[:, 0] = 2 * coords[:, 0] / float(gs[0] - 1) - 1
    query[:, 1] = 2 * coords[:, 1] / float(gs[1] - 1) - 1
    query[:, 2] = 2 * coords[:, 2] / float(gs[2] - 1) - 1
    query = query.reshape(-1, 3)
    return coords.unsqueeze(0), query.unsqueeze(0)
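# --- Editor's sketch (not part of the repository) --------------------------
# make_query flattens the voxel grid into integer coordinates plus decoder
# queries normalised to [-1, 1] per axis (what the triplane decoder samples):
import torch
from utils.utils import make_query

coords, query = make_query((1, 256, 256, 32))
print(coords.shape, query.shape)                 # (1, 2097152, 3) each
print(query.min().item(), query.max().item())    # -1.0 1.0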