[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\ndata\n.vscode\n.idea\n\n# custom\n*.pkl\n*.pkl.json\n*.log.json\nwork_dirs/\npretrained/\n\n# PyTorch\n*.pth\ntrash/\n"
  },
  {
    "path": "README.md",
    "content": "This is the official repository of the paper [SegDiff: Image Segmentation with Diffusion Probabilistic Models](https://arxiv.org/abs/2112.00390).\n\nThe code is based on [Improved Denoising Diffusion Probabilistic Models](https://github.com/openai/improved-diffusion).\n\n## Installation\n### Conda environment\nCreate the environment with:\n```\nconda env create -f environment.yml\n```\n\n## Project structure and data preparation\nThe project expects the following layout:\n\n```\nsegdiff/ # git clone the source code here\n\ndata/ # the root of the data folders\n    Vaihingen/\n    Medical/MoNuSeg/\n    cityscapes_instances/\n```\n\n### Vaihingen\n\nDownload the dataset from [this link](https://drive.google.com/file/d/1nenpWH4BdplSiHdfXs0oYfiA5qL42plB/view),\nunzip its contents (a folder named `buildings`), and run the preprocessing script:\n```\npython datasets/preprocess_vaihingen.py --path building-folder-path\n```\n\nAfter preprocessing, the Vaihingen folder should have the following structure:\n```\nVaihingen/\n    full_test_vaih.hdf5\n    full_training_vaih.hdf5\n```\n\n### MoNuSeg\nSee the challenge's general [website](https://monuseg.grand-challenge.org/).\nDownload the\n[train](https://drive.google.com/file/d/1ZgqFJomqQGNnsx7w7QBzQQMVA16lbVCA/view?usp=sharing)\nand [test](https://drive.google.com/file/d/1NKkSQ5T0ZNQ8aUhh0a8Dt2YKYCQXIViw/view?usp=sharing) sets,\nthen run the MATLAB preprocessing [code](https://drive.google.com/file/d/1YDtIiLZX0lQzZp_JbqneHXHvRo45ZWGX/view).\n\nThe MoNuSeg folder should have the following structure:\n```\nMoNuSeg/\n    Test/\n        img/\n            XX.tif\n        mask/\n            XX.png\n    Training/\n        img/\n            XX.tif\n        mask/\n            XX.png\n```\n\n### Cityscapes\n\nDownload the [Cityscapes](https://www.cityscapes-dataset.com) dataset with the splits from\n[PolyRNN++](https://github.com/fidler-lab/polyrnn-pp), and follow the instructions [here](https://github.com/shirgur/ACDRNet) for the preparations.\n\nTo obtain the cityscapes_final_v5 annotations, sign up for the PolygonRNN++ code at http://www.cs.toronto.edu/polyrnn/code_signup/; the cityscapes_final_v5 folder is inside its data folder.\n\nThe Cityscapes folder should have the following structure:\n```\ncityscapes_instances/\n    full/\n        all_classes_instances.json\n    train/\n        all_classes_instances.json\n    train_val/\n        all_classes_instances.json\n    val/\n        all_classes_instances.json\n    all_images.hdf5\n```\n\n\n## Train and Evaluate\nExecute the commands below. Multi-GPU training is supported: select the GPUs with CUDA_VISIBLE_DEVICES and pass their count to mpiexec via -n.\n\nTraining options:\n```\n# Training\n--batch_size    Batch size\n--lr            Learning rate\n\n# Architecture\n--rrdb_blocks       Number of RRDB blocks\n--dropout           Dropout\n--diffusion_steps   Number of diffusion steps\n\n# Cityscapes\n--class_name        Name of the Cityscapes class; options are [\"bike\", \"bus\", \"person\", \"train\", \"motorcycle\", \"car\", \"rider\"]\n--expansion         Boolean flag enabling the expanded-bounding-box setting\n\n# Misc\n--save_interval     Interval for saving model weights\n```\n\n### MoNuSeg\nTraining script example:\n```\nCUDA_VISIBLE_DEVICES=0,1,2,3 mpiexec -n 4 python image_train_diff_medical.py --rrdb_blocks 12 --batch_size 2 --lr 0.0001 --diffusion_steps 100\n```\n\nEvaluation script example:\n```\nCUDA_VISIBLE_DEVICES=0 mpiexec -n 1 python image_sample_diff_medical.py --model_path path-for-model-weights\n```\n\n### Cityscapes\nTraining script example:\n```\nCUDA_VISIBLE_DEVICES=0,1 mpiexec -n 2 python image_train_diff_city.py --class_name \"train\" --expansion True --rrdb_blocks 15 --lr 0.0001 --batch_size 15 --diffusion_steps 100\n```\n\nEvaluation script example:\n```\nCUDA_VISIBLE_DEVICES=0 mpiexec -n 1 python image_sample_diff_city.py --model_path path-for-model-weights\n```\n\n### Vaihingen\nTraining script example:\n```\nCUDA_VISIBLE_DEVICES=0,1 mpiexec -n 2 python image_train_diff_vaih.py --lr 0.0001 --batch_size 4 --dropout 0.1 --rrdb_blocks 6 --diffusion_steps 100\n```\n\nEvaluation script example:\n```\nCUDA_VISIBLE_DEVICES=0 mpiexec -n 1 python image_sample_diff_vaih.py --model_path path-for-model-weights\n```\n\n## Citation\n```\n@article{amit2021segdiff,\n  title={SegDiff: Image segmentation with diffusion probabilistic models},\n  author={Amit, Tomer and Nachmani, Eliya and Shaharbany, Tal and Wolf, Lior},\n  journal={arXiv preprint arXiv:2112.00390},\n  year={2021}\n}\n```\n\n"
  },
  {
    "path": "datasets/city.py",
    "content": "import json\nimport os\nimport random\nfrom pathlib import Path\n\nimport h5py\nimport numpy as np\nimport pycocotools.mask as maskUtils\nimport torch\nfrom PIL import Image\nfrom matplotlib import pyplot as plt\nfrom mpi4py import MPI\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision.transforms.functional import resize\nfrom tqdm import tqdm\n\nfrom datasets.transforms import \\\n    Compose, ToPILImage, RandomHorizontalFlip, ToTensor, Normalize, RandomAffine\n\n\ndef create_dataset(mode=\"train\", class_name=\"train\", expansion=False):\n    shard=MPI.COMM_WORLD.Get_rank()\n    num_shards = MPI.COMM_WORLD.Get_size()\n    data_inst_path = str(Path(__file__).absolute().parent.parent.parent / \"data/cityscapes_instances/\")\n\n    print('loading \\\"{}\\\" annotations into memory...'.format(mode))\n    data = json.load(open(os.path.join(data_inst_path, mode, 'all_classes_instances.json'), 'r'))\n\n    annotations = data['data'][class_name][shard::num_shards]\n\n    hdf5_obj = h5py.File(os.path.join(data_inst_path, 'all_images.hdf5'), 'r')\n    images = [hdf5_obj[ann['img']['file_name']] for ann in annotations]\n\n    return CityscapesInstances(\n        images,\n        annotations,\n        mode=mode,\n        expansion=expansion\n    )\n\n\ndef load_data(\n    *, data_dir, batch_size, image_size, class_name, class_cond=False, expansion, deterministic=False\n):\n    \"\"\"\n    For a dataset, create a generator over (images, kwargs) pairs.\n\n    Each images is an NCHW float tensor, and the kwargs dict contains zero or\n    more keys, each of which map to a batched Tensor of their own.\n    The kwargs dict can be used for class labels, in which case the key is \"y\"\n    and the values are integer tensors of class labels.\n\n    :param data_dir: a dataset directory.\n    :param batch_size: the batch size of each returned pair.\n    :param image_size: the size to which images are resized.\n    :param class_cond: if True, include a 
\"y\" key in returned dicts for class\n                       label. If classes are not available and this is true, an\n                       exception will be raised.\n    :param deterministic: if True, yield results in a deterministic order.\n    \"\"\"\n\n    dataset = create_dataset(mode=\"train\", class_name=class_name, expansion=expansion)\n\n    if deterministic:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=False, num_workers=0, drop_last=True\n        )\n    else:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=True, num_workers=0, drop_last=True\n        )\n    while True:\n        yield from loader\n\n\nclass CityscapesInstances(Dataset):\n    CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',\n               'bicycle')\n\n    def __init__(self,\n                 images,\n                 annotations,\n                 no_aug=False,\n                 mode='train',\n                 loops=100,\n                 expansion=False,\n                 std=np.array([58.395, 57.12, 57.375]),\n                 mean=np.array([123.675, 116.28, 103.53]),\n                 ):\n        super(CityscapesInstances, self).__init__()\n\n        self.loops = loops\n        self.mode = mode\n        self.mean = torch.from_numpy(mean)\n        self.std = torch.from_numpy(std)\n        self.expansion = expansion\n        image_size = 128\n\n        if mode == 'train' and not no_aug:\n            self.transformations = Compose([\n                ToPILImage(),\n                # Resize((image_size, image_size)),\n                RandomHorizontalFlip(),\n                RandomAffine(22, scale=(0.75, 1.25)),\n                ToTensor(),\n                Normalize(self.mean, self.std)\n                # transforms.NormalizeInstance()\n            ])\n        else:\n            self.transformations = Compose([\n                ToPILImage(),\n                # Resize((image_size, 
image_size), do_mask=False),\n                ToTensor(),\n                Normalize(self.mean, self.std),\n                # transforms.NormalizeInstance()\n            ])\n\n        self.instance_images = []\n        self.instance_masks = []\n\n        self.annotations = annotations\n\n        for item in tqdm(range(len(images))):\n            ann = self.annotations[item]\n            mask = self._poly2mask(ann['segmentation'], ann['img']['height'], ann['img']['width'])\n            bbox = np.maximum(0, np.array(ann['bbox']).astype(np.int32))\n\n            if self.expansion:\n                if self.mode == 'train':\n                    bounding_box_expansion = random.randint(10, 20)\n                else:\n                    bounding_box_expansion = 15\n\n                increase_axis_by = bbox[3] * (bounding_box_expansion / 100)\n                increase_each_coordinate = increase_axis_by / 2\n\n                x_1 = bbox[1] - increase_each_coordinate\n                x_2 = bbox[1] + bbox[3] + increase_each_coordinate\n\n                increase_axis_by = bbox[2] * (bounding_box_expansion / 100)\n                increase_each_coordinate = increase_axis_by / 2\n\n                y_1 = bbox[0] - increase_each_coordinate\n                y_2 = bbox[0] + bbox[2] + increase_each_coordinate\n\n                # check the axis order\n                x_2 = round(min(x_2, images[item].shape[0]))\n                y_2 = round(min(y_2, images[item].shape[1]))\n\n                x_1 = round(max(x_1, 0))\n                y_1 = round(max(y_1, 0))\n\n                instance_image = images[item][x_1:x_2, y_1:y_2]\n                instance_mask = mask[x_1:x_2, y_1:y_2]\n            else:\n                instance_image = images[item][bbox[1]:bbox[1] + bbox[3], bbox[0]:bbox[0] + bbox[2]]\n                instance_mask = mask[bbox[1]:bbox[1] + bbox[3], bbox[0]:bbox[0] + bbox[2]]\n\n            size = [image_size, image_size]\n            
self.instance_images.append(resize(torch.from_numpy(instance_image).permute(2, 0, 1), size, Image.BILINEAR).permute(1, 2, 0).numpy())\n\n            if mode == 'train' and not no_aug:\n                self.instance_masks.append(resize(torch.from_numpy(instance_mask).unsqueeze(0), size, Image.NEAREST).squeeze(0).numpy())\n            else:\n                self.instance_masks.append(instance_mask)\n\n    @staticmethod\n    def _poly2mask(mask_ann, img_h, img_w):\n        if isinstance(mask_ann, list):\n            # polygon -- a single object might consist of multiple parts\n            # we merge all parts into one mask rle code\n            rles = maskUtils.frPyObjects(mask_ann, img_h, img_w)\n            rle = maskUtils.merge(rles)\n        elif isinstance(mask_ann['counts'], list):\n            # uncompressed RLE\n            rle = maskUtils.frPyObjects(mask_ann, img_h, img_w)\n        else:\n            # rle\n            rle = mask_ann\n        mask = maskUtils.decode(rle)\n        return mask\n\n    def __len__(self):\n        return len(self.annotations)\n\n    def __getitem__(self, item):\n        ann = self.annotations[item]\n\n        instance_image, instance_mask = self.transformations(self.instance_images[item], self.instance_masks[item])\n\n        out_dict = {\"conditioned_image\": instance_image}\n        instance_mask = 2 * instance_mask - 1.0\n        return instance_mask.unsqueeze(0), out_dict, Path(ann[\"img\"]['file_name']).stem\n\n\ndef main():\n    # Visualize a few samples; each dataset item is (mask, out_dict, file_stem),\n    # where mask is 1xHxW and out_dict[\"conditioned_image\"] is CxHxW.\n    dataset = create_dataset(class_name=\"train\", mode='train')\n    for i in range(10):\n        mask, out_dict, _ = dataset[i]\n        img = out_dict[\"conditioned_image\"]\n        plt.imshow(img.permute(1, 2, 0).numpy().astype(np.uint8))\n        plt.show()\n\n        plt.imshow(mask.squeeze(0).numpy(), cmap='gray')\n        plt.show()\n\n\nif __name__ == '__main__':\n    main()\n\n"
  },
  {
    "path": "datasets/monu.py",
    "content": "import os\nfrom pathlib import Path\n\nimport imageio\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tifffile\nimport torch\nfrom mpi4py import MPI\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom datasets.transforms import \\\n    Compose, ToPILImage, ColorJitter, RandomHorizontalFlip, ToTensor, Normalize, RandomVerticalFlip, RandomAffine, \\\n    Resize, RandomCrop\n\n\ndef cv2_loader(path, is_mask):\n    if is_mask:\n        # img = cv2.imread(path, 0)\n        img = imageio.imread(path)\n        img[img > 0] = 1\n    else:\n        # img = cv2.cvtColor(cv2.imread(path, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)\n        # img = imageio.imread(path)\n        img = tifffile.imread(path)\n    return img\n\n\ndef get_monu_transform(image_size):\n\n    transform_train = Compose([\n        ToPILImage(),\n        Resize((512, 512)),\n        RandomCrop((image_size, image_size)),\n        RandomHorizontalFlip(),\n        RandomVerticalFlip(),\n        RandomAffine(int(22), scale=(float(0.75), float(1.25))),\n        ColorJitter(brightness=0.4,\n                    contrast=0.4,\n                    saturation=0.4,\n                    hue=0.1),\n        ToTensor(),\n        Normalize(mean=[142.07, 98.48, 132.96], std=[65.78, 57.05, 57.78])\n    ])\n    transform_test = Compose([\n        ToPILImage(),\n        Resize((512, 512)),\n        ToTensor(),\n        Normalize(mean=[142.07, 98.48, 132.96], std=[65.78, 57.05, 57.78])\n    ])\n    return transform_train, transform_test\n\n\ndef create_dataset(mode=\"train\", image_size=256):\n    datadir = str(Path(__file__).absolute().parent.parent.parent / \"data/Medical/MoNuSeg\")\n\n    transform_train, transform_test = get_monu_transform(image_size)\n    if mode == \"train\":\n        return MonuDataset(datadir, train=True, transform=transform_train, image_size=image_size)\n    else:\n        return MonuDataset(datadir, train=False, transform=transform_test)\n\n\ndef 
load_data(\n    *, data_dir, batch_size, image_size, class_name, class_cond=False, expansion, deterministic=False\n):\n    \"\"\"\n    For a dataset, create a generator over (images, kwargs) pairs.\n\n    Each images is an NCHW float tensor, and the kwargs dict contains zero or\n    more keys, each of which map to a batched Tensor of their own.\n    The kwargs dict can be used for class labels, in which case the key is \"y\"\n    and the values are integer tensors of class labels.\n\n    :param data_dir: a dataset directory.\n    :param batch_size: the batch size of each returned pair.\n    :param image_size: the size to which images are resized.\n    :param class_cond: if True, include a \"y\" key in returned dicts for class\n                       label. If classes are not available and this is true, an\n                       exception will be raised.\n    :param deterministic: if True, yield results in a deterministic order.\n    \"\"\"\n\n    dataset = create_dataset(mode=\"train\")\n\n    if deterministic:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=False, num_workers=0, drop_last=True\n        )\n    else:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=True, num_workers=0, drop_last=True\n        )\n    while True:\n        yield from loader\n\n\nclass MonuDataset(torch.utils.data.Dataset):\n    def __init__(self, root, transform=None, target_transform=None, train=False, loader=cv2_loader, pSize=8, image_size=256):\n        self.root = root\n        if train:\n            self.imgs_root = os.path.join(self.root, 'Training', 'img')\n            self.masks_root = os.path.join(self.root, 'Training', 'mask')\n        else:\n            self.imgs_root = os.path.join(self.root, 'Test', 'img')\n            self.masks_root = os.path.join(self.root, 'Test', 'mask')\n        self.image_size = image_size\n        self.paths = sorted(os.listdir(self.imgs_root))\n        self.transform = 
transform\n        self.target_transform = target_transform\n        self.loader = loader\n        self.train = train\n        self.pSize = pSize\n        self.masks = []\n        self.imgs = []\n        self.mean = torch.from_numpy(np.array([142.07, 98.48, 132.96]))\n        self.std = torch.from_numpy(np.array([65.78, 57.05, 57.78]))\n\n        shard = MPI.COMM_WORLD.Get_rank()\n        num_shards = MPI.COMM_WORLD.Get_size()\n\n        for file_path in tqdm(self.paths):\n            mask_path = file_path.split('.')[0] + '.png'\n            self.imgs.append(self.loader(os.path.join(self.imgs_root, file_path), is_mask=False))\n            self.masks.append(self.loader(os.path.join(self.masks_root, mask_path), is_mask=True))\n\n        self.imgs = self.imgs[shard::num_shards]\n        self.masks = self.masks[shard::num_shards]\n        self.paths = self.paths[shard::num_shards]\n\n        print('num of data:{}'.format(len(self.paths)))\n\n    def __getitem__(self, index):\n        img = self.imgs[index]\n        mask = self.masks[index]\n\n        img, mask = self.transform(img, mask)\n        out_dict = {\"conditioned_image\": img}\n        mask = 2 * mask - 1.0\n        return mask.unsqueeze(0), out_dict, f\"{Path(self.paths[index]).stem}_{index}\"\n\n    def __len__(self):\n        return len(self.paths)\n\n\nif __name__ == \"__main__\":\n    val_dataset = create_dataset(\n        mode='val',\n        image_size=256,\n    )\n\n    ds = torch.utils.data.DataLoader(val_dataset,\n                                     batch_size=1,\n                                     num_workers=0,\n                                     shuffle=False,\n                                     drop_last=True)\n    pbar = tqdm(ds)\n    mean0_list = []\n    mean1_list = []\n    mean2_list = []\n    std0_list = []\n    std1_list = []\n    std2_list = []\n    for i, (mask, out_dict, _) in enumerate(pbar):\n        img = out_dict[\"conditioned_image\"]\n        
plt.imshow(img.squeeze().permute(1, 2, 0).numpy().astype(np.uint8))\n        plt.show()\n\n        plt.imshow(mask.squeeze().numpy(), cmap='gray')\n        plt.show()\n        a = img.mean(dim=(0, 2, 3))\n        b = img.std(dim=(0, 2, 3))\n        mean0_list.append(a[0].item())\n        mean1_list.append(a[1].item())\n        mean2_list.append(a[2].item())\n        std0_list.append(b[0].item())\n        std1_list.append(b[1].item())\n        std2_list.append(b[2].item())\n    print(np.mean(mean0_list))\n    print(np.mean(mean1_list))\n    print(np.mean(mean2_list))\n\n    print(np.mean(std0_list))\n    print(np.mean(std1_list))\n    print(np.mean(std2_list))\n"
  },
  {
    "path": "datasets/preprocess_vaihingen.py",
    "content": "from pathlib import Path\n\nimport h5py\nimport os\nimport cv2\nimport numpy as np\nfrom cv2 import resize\n\n\ndef get_img(cfile):\n    img = cv2.cvtColor(cv2.imread(cfile, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)\n    img = resize(img, (256,256), interpolation=cv2.INTER_NEAREST)\n    return img\n\n\ndef get_mask(cfile):\n    GT = cv2.imread(cfile, 0)\n    GT = resize(GT, (256, 256), interpolation=cv2.INTER_LINEAR)\n    GT[GT >= 0.5] = 1\n    GT[GT < 0.5] = 0\n    return GT\n\n\ndef main(args, out_path):\n    data_folder_path = Path(args['path'])\n    imgs_list = sorted(list(data_folder_path.glob(\"building_[0-9]*.tif\")))\n    masks_list = sorted(list(data_folder_path.glob(\"building_mask_[0-9]*.tif\")))\n\n    hf_tri = h5py.File(str(out_path / \"full_training_vaih.hdf5\"), 'w')\n    hf_test = h5py.File(str(out_path / \"full_test_vaih.hdf5\"), 'w')\n\n    imgs_tri = hf_tri.create_group('imgs')\n    mask_single_tri = hf_tri.create_group('mask_single')\n\n    imgs_test = hf_test.create_group('imgs')\n    mask_single_test = hf_test.create_group('mask_single')\n\n    for image_path in imgs_list[:100]:\n        print('training: ' + str(image_path))\n        img = get_img(str(image_path))\n        imgs_tri.create_dataset(image_path.stem, data=img, dtype=np.uint8)\n\n    for image_path in imgs_list[100:]:\n        print('validation: ' + str(image_path))\n        img = get_img(str(image_path))\n        imgs_test.create_dataset(image_path.stem, data=img, dtype=np.uint8)\n\n    for mask_path in masks_list[:100]:\n        print('training: ' + str(mask_path))\n        mask = get_mask(str(mask_path))\n        mask_single_tri.create_dataset(mask_path.stem, data=mask, dtype=np.uint8)\n\n    for mask_path in masks_list[100:]:\n        print('validation: ' + str(mask_path))\n        mask = get_mask(str(mask_path))\n        mask_single_test.create_dataset(mask_path.stem, data=mask, dtype=np.uint8)\n\n    hf_tri.close()\n    hf_test.close()\n\n\nif __name__ == 
'__main__':\n    import argparse\n    folder_path = Path(__file__).absolute().parent.parent.parent / \"data\" / \"Vaihingen\"\n    folder_path.mkdir(parents=True, exist_ok=True)\n    parser = argparse.ArgumentParser(description='Preprocess the Vaihingen buildings dataset into HDF5 files')\n    parser.add_argument('-path',\n                        '--path',\n                        default='',\n                        help='Data path; should point to the \"buildings\" folder',\n                        required=True)\n    args = vars(parser.parse_args())\n    main(args, out_path=folder_path)\n"
  },
  {
    "path": "datasets/transforms.py",
    "content": "from __future__ import division\nimport torch\nimport math\nimport sys\nimport random\nfrom PIL import Image\n\ntry:\n    import accimage\nexcept ImportError:\n    accimage = None\nimport numpy as np\nimport numbers\nimport types\nimport collections\nimport warnings\n\nfrom torchvision.transforms import functional as F\n\nif sys.version_info < (3, 3):\n    Sequence = collections.Sequence\n    Iterable = collections.Iterable\nelse:\n    Sequence = collections.abc.Sequence\n    Iterable = collections.abc.Iterable\n\n__all__ = [\"Compose\", \"ToTensor\", \"ToPILImage\", \"Normalize\", \"Resize\", \"CenterCrop\", \"Pad\",\n           \"Lambda\", \"RandomApply\", \"RandomChoice\", \"RandomOrder\", \"RandomCrop\", \"RandomHorizontalFlip\",\n           \"RandomVerticalFlip\", \"RandomResizedCrop\", \"FiveCrop\", \"TenCrop\",\n           \"ColorJitter\", \"RandomRotation\", \"RandomAffine\",\n           \"RandomPerspective\"]\n\n_pil_interpolation_to_str = {\n    Image.NEAREST: 'PIL.Image.NEAREST',\n    Image.BILINEAR: 'PIL.Image.BILINEAR',\n    Image.BICUBIC: 'PIL.Image.BICUBIC',\n    Image.LANCZOS: 'PIL.Image.LANCZOS',\n    Image.HAMMING: 'PIL.Image.HAMMING',\n    Image.BOX: 'PIL.Image.BOX',\n}\n\n\nclass Compose(object):\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n    def __call__(self, img, mask):\n        for t in self.transforms:\n            img, mask = t(img, mask)\n        return img, mask\n\n\nclass ToTensor(object):\n    def __call__(self, img, mask):\n        # return F.to_tensor(img), F.to_tensor(mask)\n        img = torch.from_numpy(np.array(img)).permute(2, 0, 1).float()\n        mask = torch.from_numpy(np.array(mask)).float()\n        return img, mask\n\n\nclass ToPILImage(object):\n    def __init__(self, mode=None):\n        self.mode = mode\n\n    def __call__(self, img, mask):\n        return F.to_pil_image(img, self.mode), F.to_pil_image(mask, self.mode)\n\n\nclass Normalize(object):\n    def 
__init__(self, mean, std, inplace=False):\n        self.mean = mean\n        self.std = std\n        self.inplace = inplace\n\n    def __call__(self, img, mask):\n        return F.normalize(img, self.mean, self.std, self.inplace), mask\n\n\nclass Resize(object):\n    def __init__(self, size, interpolation=Image.BILINEAR, do_mask=True):\n        assert isinstance(size, int) or (isinstance(size, Iterable) and len(size) == 2)\n        self.size = size\n        self.interpolation = interpolation\n        self.do_mask = do_mask\n\n    def __call__(self, img, mask):\n        if self.do_mask:\n            return F.resize(img, self.size, self.interpolation), F.resize(mask, self.size, Image.NEAREST)\n        else:\n            return F.resize(img, self.size, self.interpolation), mask\n\n\nclass CenterCrop(object):\n    def __init__(self, size):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n\n    def __call__(self, img, mask):\n        return F.center_crop(img, self.size), F.center_crop(mask, self.size)\n\n\nclass Pad(object):\n    def __init__(self, padding, fill=0, padding_mode='constant'):\n        assert isinstance(padding, (numbers.Number, tuple))\n        assert isinstance(fill, (numbers.Number, str, tuple))\n        assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric']\n        if isinstance(padding, Sequence) and len(padding) not in [2, 4]:\n            raise ValueError(\"Padding must be an int or a 2, or 4 element tuple, not a \" +\n                             \"{} element tuple\".format(len(padding)))\n\n        self.padding = padding\n        self.fill = fill\n        self.padding_mode = padding_mode\n\n    def __call__(self, img, mask):\n        return F.pad(img, self.padding, self.fill, self.padding_mode), \\\n               F.pad(mask, self.padding, self.fill, self.padding_mode)\n\n\nclass Lambda(object):\n    def __init__(self, lambd):\n        assert 
callable(lambd), repr(type(lambd).__name__) + \" object is not callable\"\n        self.lambd = lambd\n\n    def __call__(self, img, mask):\n        return self.lambd(img), self.lambd(mask)\n\n\nclass Lambda_image(object):\n    def __init__(self, lambd):\n        assert callable(lambd), repr(type(lambd).__name__) + \" object is not callable\"\n        self.lambd = lambd\n\n    def __call__(self, img, mask):\n        return self.lambd(img), mask\n\n\nclass RandomTransforms(object):\n    def __init__(self, transforms):\n        assert isinstance(transforms, (list, tuple))\n        self.transforms = transforms\n\n    def __call__(self, *args, **kwargs):\n        raise NotImplementedError()\n\n\nclass RandomApply(RandomTransforms):\n    def __init__(self, transforms, p=0.5):\n        super(RandomApply, self).__init__(transforms)\n        self.p = p\n\n    def __call__(self, img, mask):\n        if self.p < random.random():\n            return img, mask\n        for t in self.transforms:\n            img, mask = t(img, mask)\n        return img, mask\n\n\nclass RandomOrder(RandomTransforms):\n    def __call__(self, img, mask):\n        order = list(range(len(self.transforms)))\n        random.shuffle(order)\n        for i in order:\n            img, mask = self.transforms[i](img, mask)\n        return img, mask\n\n\nclass RandomChoice(RandomTransforms):\n    def __call__(self, img, mask):\n        t = random.choice(self.transforms)\n        return t(img, mask)\n\n\nclass RandomCrop(object):\n    def __init__(self, size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant'):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n        self.padding = padding\n        self.pad_if_needed = pad_if_needed\n        self.fill = fill\n        self.padding_mode = padding_mode\n\n    @staticmethod\n    def get_params(img, output_size):\n        w, h = img.size\n        th, tw = 
output_size\n        if w == tw and h == th:\n            return 0, 0, h, w\n\n        i = random.randint(0, h - th)\n        j = random.randint(0, w - tw)\n        return i, j, th, tw\n\n    def __call__(self, img, mask):\n        if self.padding is not None:\n            img = F.pad(img, self.padding, self.fill, self.padding_mode)\n\n        # pad the width if needed\n        if self.pad_if_needed and img.size[0] < self.size[1]:\n            img = F.pad(img, (self.size[1] - img.size[0], 0), self.fill, self.padding_mode)\n        # pad the height if needed\n        if self.pad_if_needed and img.size[1] < self.size[0]:\n            img = F.pad(img, (0, self.size[0] - img.size[1]), self.fill, self.padding_mode)\n\n        i, j, h, w = self.get_params(img, self.size)\n\n        return F.crop(img, i, j, h, w), F.crop(mask, i, j, h, w)\n\n\nclass RandomHorizontalFlip(object):\n    def __init__(self, p=0.5):\n        self.p = p\n\n    def __call__(self, img, mask):\n        if random.random() < self.p:\n            return F.hflip(img), F.hflip(mask)\n        return img, mask\n\n\nclass RandomVerticalFlip(object):\n    def __init__(self, p=0.5):\n        self.p = p\n\n    def __call__(self, img, mask):\n        if random.random() < self.p:\n            return F.vflip(img), F.vflip(mask)\n        return img, mask\n\n\nclass RandomPerspective(object):\n    def __init__(self, distortion_scale=0.5, p=0.5, interpolation=Image.BICUBIC):\n        self.p = p\n        self.interpolation = interpolation\n        self.distortion_scale = distortion_scale\n\n    def __call__(self, img, mask):\n        if not F._is_pil_image(img):\n            raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n        if random.random() < self.p:\n            width, height = img.size\n            startpoints, endpoints = self.get_params(width, height, self.distortion_scale)\n            return F.perspective(img, startpoints, endpoints, self.interpolation), \\\n                   F.perspective(mask, startpoints, endpoints, Image.NEAREST)\n        return img, mask\n\n    @staticmethod\n    def get_params(width, height, distortion_scale):\n        half_height = int(height / 2)\n        half_width = int(width / 2)\n        topleft = (random.randint(0, int(distortion_scale * half_width)),\n                   random.randint(0, int(distortion_scale * half_height)))\n        topright = (random.randint(width - int(distortion_scale * half_width) - 1, width - 1),\n                    random.randint(0, int(distortion_scale * half_height)))\n        botright = (random.randint(width - int(distortion_scale * half_width) - 1, width - 1),\n                    random.randint(height - int(distortion_scale * half_height) - 1, height - 1))\n        botleft = (random.randint(0, int(distortion_scale * half_width)),\n                   random.randint(height - int(distortion_scale * half_height) - 1, height - 1))\n        startpoints = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)]\n        endpoints = [topleft, topright, botright, botleft]\n        return startpoints, endpoints\n\n\nclass RandomResizedCrop(object):\n    def __init__(self, size, mask_size, scale=(0.08, 1.0), ratio=(3. / 4., 4. 
/ 3.), interpolation=Image.BILINEAR):\n        if isinstance(size, tuple):\n            self.size = size\n            self.mask_size = mask_size\n        else:\n            self.size = (size, size)\n            self.mask_size = (mask_size, mask_size)\n        if (scale[0] > scale[1]) or (ratio[0] > ratio[1]):\n            warnings.warn(\"range should be of kind (min, max)\")\n\n        self.interpolation = interpolation\n        self.scale = scale\n        self.ratio = ratio\n\n    @staticmethod\n    def get_params(img, scale, ratio):\n        area = img.size[0] * img.size[1]\n\n        for attempt in range(10):\n            target_area = random.uniform(*scale) * area\n            log_ratio = (math.log(ratio[0]), math.log(ratio[1]))\n            aspect_ratio = math.exp(random.uniform(*log_ratio))\n\n            w = int(round(math.sqrt(target_area * aspect_ratio)))\n            h = int(round(math.sqrt(target_area / aspect_ratio)))\n\n            if w <= img.size[0] and h <= img.size[1]:\n                i = random.randint(0, img.size[1] - h)\n                j = random.randint(0, img.size[0] - w)\n                return i, j, h, w\n\n        # Fallback to central crop\n        in_ratio = img.size[0] / img.size[1]\n        if (in_ratio < min(ratio)):\n            w = img.size[0]\n            h = w / min(ratio)\n        elif (in_ratio > max(ratio)):\n            h = img.size[1]\n            w = h * max(ratio)\n        else:  # whole image\n            w = img.size[0]\n            h = img.size[1]\n        i = (img.size[1] - h) // 2\n        j = (img.size[0] - w) // 2\n        return i, j, h, w\n\n    def __call__(self, img, mask):\n        i, j, h, w = self.get_params(img, self.scale, self.ratio)\n        return F.resized_crop(img, i, j, h, w, self.size, self.interpolation), \\\n               F.resized_crop(mask, i, j, h, w, self.mask_size, Image.NEAREST)\n\n\nclass FiveCrop(object):\n    def __init__(self, size):\n        self.size = size\n        if isinstance(size, 
numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n            self.size = size\n\n    def __call__(self, img, mask):\n        return F.five_crop(img, self.size), F.five_crop(mask, self.size)\n\n\nclass TenCrop(object):\n    def __init__(self, size, vertical_flip=False):\n        self.size = size\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n            self.size = size\n        self.vertical_flip = vertical_flip\n\n    def __call__(self, img, mask):\n        return F.ten_crop(img, self.size, self.vertical_flip), F.ten_crop(mask, self.size, self.vertical_flip)\n\n\nclass ColorJitter(object):\n    def __init__(self, brightness=0, contrast=0, saturation=0, hue=0):\n        self.brightness = self._check_input(brightness, 'brightness')\n        self.contrast = self._check_input(contrast, 'contrast')\n        self.saturation = self._check_input(saturation, 'saturation')\n        self.hue = self._check_input(hue, 'hue', center=0, bound=(-0.5, 0.5),\n                                     clip_first_on_zero=False)\n\n    def _check_input(self, value, name, center=1, bound=(0, float('inf')), clip_first_on_zero=True):\n        if isinstance(value, numbers.Number):\n            if value < 0:\n                raise ValueError(\"If {} is a single number, it must be non negative.\".format(name))\n            value = [center - value, center + value]\n            if clip_first_on_zero:\n                value[0] = max(value[0], 0)\n        elif isinstance(value, (tuple, list)) and len(value) == 2:\n            if not bound[0] <= value[0] <= value[1] <= bound[1]:\n                raise ValueError(\"{} values should be between {}\".format(name, bound))\n        else:\n            raise TypeError(\"{} should be a 
single number or a list/tuple with length 2.\".format(name))\n\n        # if value is 0 or (1., 1.) for brightness/contrast/saturation\n        # or (0., 0.) for hue, do nothing\n        if value[0] == value[1] == center:\n            value = None\n        return value\n\n    @staticmethod\n    def get_params(brightness, contrast, saturation, hue):\n        transforms = []\n\n        if brightness is not None:\n            brightness_factor = random.uniform(brightness[0], brightness[1])\n            transforms.append(Lambda_image(lambda img: F.adjust_brightness(img, brightness_factor)))\n\n        if contrast is not None:\n            contrast_factor = random.uniform(contrast[0], contrast[1])\n            transforms.append(Lambda_image(lambda img: F.adjust_contrast(img, contrast_factor)))\n\n        if saturation is not None:\n            saturation_factor = random.uniform(saturation[0], saturation[1])\n            transforms.append(Lambda_image(lambda img: F.adjust_saturation(img, saturation_factor)))\n\n        if hue is not None:\n            hue_factor = random.uniform(hue[0], hue[1])\n            transforms.append(Lambda_image(lambda img: F.adjust_hue(img, hue_factor)))\n\n        random.shuffle(transforms)\n        transform = Compose(transforms)\n\n        return transform\n\n    def __call__(self, img, mask):\n        transform = self.get_params(self.brightness, self.contrast,\n                                    self.saturation, self.hue)\n        return transform(img, mask)\n\n\nclass RandomRotation(object):\n    def __init__(self, degrees, resample=False, expand=False, center=None):\n        if isinstance(degrees, numbers.Number):\n            if degrees < 0:\n                raise ValueError(\"If degrees is a single number, it must be positive.\")\n            self.degrees = (-degrees, degrees)\n        else:\n            if len(degrees) != 2:\n                raise ValueError(\"If degrees is a sequence, it must be of len 2.\")\n            self.degrees 
= degrees\n\n        self.resample = resample\n        self.expand = expand\n        self.center = center\n\n    @staticmethod\n    def get_params(degrees):\n        angle = random.uniform(degrees[0], degrees[1])\n\n        return angle\n\n    def __call__(self, img, mask):\n        angle = self.get_params(self.degrees)\n\n        return F.rotate(img, angle, Image.BILINEAR, self.expand, self.center), \\\n               F.rotate(mask, angle, Image.NEAREST, self.expand, self.center)\n\n\nclass RandomAffine(object):\n    def __init__(self, degrees, translate=None, scale=None, shear=None, resample=False, fillcolor=0):\n        if isinstance(degrees, numbers.Number):\n            if degrees < 0:\n                raise ValueError(\"If degrees is a single number, it must be positive.\")\n            self.degrees = (-degrees, degrees)\n        else:\n            assert isinstance(degrees, (tuple, list)) and len(degrees) == 2, \\\n                \"degrees should be a list or tuple and it must be of length 2.\"\n            self.degrees = degrees\n\n        if translate is not None:\n            assert isinstance(translate, (tuple, list)) and len(translate) == 2, \\\n                \"translate should be a list or tuple and it must be of length 2.\"\n            for t in translate:\n                if not (0.0 <= t <= 1.0):\n                    raise ValueError(\"translation values should be between 0 and 1\")\n        self.translate = translate\n\n        if scale is not None:\n            assert isinstance(scale, (tuple, list)) and len(scale) == 2, \\\n                \"scale should be a list or tuple and it must be of length 2.\"\n            for s in scale:\n                if s <= 0:\n                    raise ValueError(\"scale values should be positive\")\n        self.scale = scale\n\n        if shear is not None:\n            if isinstance(shear, numbers.Number):\n                if shear < 0:\n                    raise ValueError(\"If shear is a single number, it 
must be positive.\")\n                self.shear = (-shear, shear)\n            else:\n                assert isinstance(shear, (tuple, list)) and len(shear) == 2, \\\n                    \"shear should be a list or tuple and it must be of length 2.\"\n                self.shear = shear\n        else:\n            self.shear = shear\n\n        self.resample = resample\n        self.fillcolor = fillcolor\n\n    @staticmethod\n    def get_params(degrees, translate, scale_ranges, shears, img_size):\n        angle = random.uniform(degrees[0], degrees[1])\n        if translate is not None:\n            max_dx = translate[0] * img_size[0]\n            max_dy = translate[1] * img_size[1]\n            translations = (np.round(random.uniform(-max_dx, max_dx)),\n                            np.round(random.uniform(-max_dy, max_dy)))\n        else:\n            translations = (0, 0)\n\n        if scale_ranges is not None:\n            scale = random.uniform(scale_ranges[0], scale_ranges[1])\n        else:\n            scale = 1.0\n\n        if shears is not None:\n            shear = random.uniform(shears[0], shears[1])\n        else:\n            shear = 0.0\n\n        return angle, translations, scale, shear\n\n    def __call__(self, img, mask):\n        ret = self.get_params(self.degrees, self.translate, self.scale, self.shear, img.size)\n        return F.affine(img, *ret, resample=Image.BILINEAR, fillcolor=self.fillcolor), \\\n               F.affine(mask, *ret, resample=Image.NEAREST, fillcolor=self.fillcolor)\n\nclass RandomAffineFromSet(object):\n    def __init__(self, degrees, translate=None, scale=None, shear=None, resample=False, fillcolor=0):\n        assert isinstance(degrees, (tuple, list)), \\\n            \"degrees should be a list or tuple.\"\n        self.degrees = degrees\n\n        if translate is not None:\n            assert isinstance(translate, (tuple, list)) and len(translate) == 2, \\\n                \"translate should be a list or tuple and it must 
be of length 2.\"\n            for t in translate:\n                if not (0.0 <= t <= 1.0):\n                    raise ValueError(\"translation values should be between 0 and 1\")\n        self.translate = translate\n\n        if scale is not None:\n            assert isinstance(scale, (tuple, list)) and len(scale) == 2, \\\n                \"scale should be a list or tuple and it must be of length 2.\"\n            for s in scale:\n                if s <= 0:\n                    raise ValueError(\"scale values should be positive\")\n        self.scale = scale\n\n        if shear is not None:\n            if isinstance(shear, numbers.Number):\n                if shear < 0:\n                    raise ValueError(\"If shear is a single number, it must be positive.\")\n                self.shear = (-shear, shear)\n            else:\n                assert isinstance(shear, (tuple, list)) and len(shear) == 2, \\\n                    \"shear should be a list or tuple and it must be of length 2.\"\n                self.shear = shear\n        else:\n            self.shear = shear\n\n        self.resample = resample\n        self.fillcolor = fillcolor\n\n    @staticmethod\n    def get_params(degrees, translate, scale_ranges, shears, img_size):\n        angle = random.choice(degrees)\n        if translate is not None:\n            max_dx = translate[0] * img_size[0]\n            max_dy = translate[1] * img_size[1]\n            translations = (np.round(random.uniform(-max_dx, max_dx)),\n                            np.round(random.uniform(-max_dy, max_dy)))\n        else:\n            translations = (0, 0)\n\n        if scale_ranges is not None:\n            scale = random.uniform(scale_ranges[0], scale_ranges[1])\n        else:\n            scale = 1.0\n\n        if shears is not None:\n            shear = random.uniform(shears[0], shears[1])\n        else:\n            shear = 0.0\n\n        return angle, translations, scale, shear\n\n    def __call__(self, img, mask):\n   
     ret = self.get_params(self.degrees, self.translate, self.scale, self.shear, img.size)\n        return F.affine(img, *ret, resample=Image.BILINEAR, fillcolor=self.fillcolor), \\\n               F.affine(mask, *ret, resample=Image.NEAREST, fillcolor=self.fillcolor)"
  },
  {
    "path": "datasets/vaih.py",
    "content": "from pathlib import Path\n\nimport h5py\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom matplotlib import pyplot as plt\nfrom mpi4py import MPI\nfrom torch.utils.data import Dataset, DataLoader\n\nfrom datasets.transforms import \\\n    Compose, ToPILImage, Resize, RandomHorizontalFlip, ToTensor, Normalize, \\\n    RandomAffine, RandomVerticalFlip, ColorJitter\n\n\ndef load_data(\n    *, data_dir, batch_size, image_size, class_cond=False, deterministic=False\n):\n    \"\"\"\n    For a dataset, create a generator over (images, kwargs) pairs.\n\n    Each image is an NCHW float tensor, and the kwargs dict contains zero or\n    more keys, each of which maps to a batched Tensor of its own.\n    The kwargs dict can be used for class labels, in which case the key is \"y\"\n    and the values are integer tensors of class labels.\n\n    :param data_dir: a dataset directory.\n    :param batch_size: the batch size of each returned pair.\n    :param image_size: the size to which images are resized.\n    :param class_cond: if True, include a \"y\" key in returned dicts for class\n                       label. 
If classes are not available and this is true, an\n                       exception will be raised.\n    :param deterministic: if True, yield results in a deterministic order.\n    \"\"\"\n\n    dataset = VaihDataset(\n        mode='train',\n        image_size=image_size,\n        shard=MPI.COMM_WORLD.Get_rank(),\n        num_shards=MPI.COMM_WORLD.Get_size(),\n    )\n    if deterministic:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=False, num_workers=0, drop_last=True\n        )\n    else:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=True, num_workers=0, drop_last=True\n        )\n    while True:\n        yield from loader\n\n\nclass VaihDataset(Dataset):\n\n    CLASSES = ('building',)\n\n    PALETTE = [[255, 0, 0]]\n\n    def __init__(self, mode, std=np.array([0.22645572 * 255, 0.15276193 * 255, 0.140702 * 255]),\n                 mean=np.array([0.47341759 * 255, 0.28791303 * 255, 0.2850705 * 255]), no_aug=False,\n                 image_size=256, max_data_size=None, shard=0, num_shards=1, small_image_size=None):\n\n        self.mode = mode\n        self.mean = torch.from_numpy(mean)\n        self.std = torch.from_numpy(std)\n\n        if mode == 'train' and not no_aug:\n            self.transformations = Compose([\n                ToPILImage(),\n                Resize(size=(image_size, image_size)),\n                RandomAffine(degrees=[0, 360], scale=(0.75, 1.5)),\n                ColorJitter(brightness=0.6, contrast=0.5, saturation=0.4, hue=0.025),\n                RandomVerticalFlip(),\n                RandomHorizontalFlip(),\n                ToTensor(),\n                Normalize(self.mean, self.std),\n            ])\n        else:\n            self.transformations = Compose([\n                ToPILImage(),\n                Resize(size=(image_size, image_size)),\n                ToTensor(),\n                Normalize(self.mean, self.std),\n            ])\n        if mode == 'train':\n            self.data_length = 100\n        else:\n            self.data_length = 68\n\n        if max_data_size is not None:\n            self.data_length = max_data_size\n\n        if self.mode == 'train':\n            self.data = h5py.File(\n                str(Path(__file__).absolute().parent.parent.parent / \"data/Vaihingen/full_training_vaih.hdf5\"), 'r')\n\n        else:\n            self.data = h5py.File(\n                str(Path(__file__).absolute().parent.parent.parent / \"data/Vaihingen/full_test_vaih.hdf5\"), 'r')\n\n        self.small_image_size = small_image_size\n        self.mask = self.data['mask_single']\n        self.imgs = self.data['imgs']\n        self.img_list = list(self.imgs)[shard::num_shards]\n        self.mask_list = list(self.mask)[shard::num_shards]\n\n    def __len__(self):\n        return len(self.img_list)\n\n    def __getitem__(self, item):\n        cimage = self.img_list[item]\n        img = np.array(self.imgs.get(cimage))\n        cmask = self.mask_list[item]\n        mask = np.array(self.mask.get(cmask))\n        img = img.astype(np.uint8)\n        mask = mask.astype(np.uint8)\n        img, mask = self.transformations(img, mask)\n        out_dict = {\"conditioned_image\": img}\n        mask = (2 * mask - 1.0).unsqueeze(0)\n        if self.small_image_size is not None:\n            out_dict[\"low_res\"] = F.interpolate(mask.unsqueeze(0), self.small_image_size, 
mode=\"nearest\").squeeze(0)\n        return mask, out_dict, str(Path(cimage).stem)\n\n\nif __name__ == '__main__':\n    mean = np.array([0, 0, 0])\n    std = np.array([1, 1, 1])\n    dataset = VaihDataset('train', mean=mean, std=std, image_size=256)\n    dataset2 = VaihDataset('train', mean=mean, std=std, image_size=256, no_aug=True)\n    for i in range(10):\n        mask, out_dict, _ = dataset[0]\n        img = out_dict[\"conditioned_image\"]\n        plt.imshow(img.permute(1,2,0).numpy().astype(np.uint8))\n        plt.show()\n\n        plt.imshow(mask.permute(1,2,0).numpy(), cmap='gray')\n        plt.show()\n\n        mask, out_dict, _ = dataset2[0]\n        img = out_dict[\"conditioned_image\"]\n        plt.imshow(img.permute(1,2,0).numpy().astype(np.uint8))\n        plt.show()"
  },
  {
    "path": "environment.yml",
    "content": "name: segdiff\nchannels:\n  - anaconda\n  - pytorch\n  - conda-forge\n  - defaults\ndependencies:\n  - python=3.8.12\n  - pip=21.2.4\n  - pytorch=1.9.0\n  - torchvision=0.10.0\n  - cudatoolkit=11.1\n  - mpi4py=3.1.2\n  - tqdm=4.62.3\n  - scikit-learn=0.24.2\n  - scikit-image=0.18.3\n  - matplotlib=3.4.3\n  - seaborn=0.11.2\n  - pip:\n    - opencv-python==4.5.1.48\n    - blobfile==1.2.3\n    - pycocotools==2.0.2\n    - gitpython==3.1.24\n    - kornia==0.5.11\n    - h5py==3.4.0\n    - imagecodecs==2021.11.20\n"
  },
  {
    "path": "image_sample_diff_city.py",
    "content": "\"\"\"\nSample segmentation masks from a trained model, using majority voting over\nmultiple generated samples, and save the results for evaluation.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nfrom pathlib import Path\n\nimport torch.distributed as dist\n\nfrom improved_diffusion.sampling_util import sampling_major_vote_func\nfrom improved_diffusion import dist_util, logger\nfrom datasets.city import create_dataset\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    add_dict_to_argparser,\n    args_to_dict,\n)\nfrom improved_diffusion.utils import set_random_seed\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n\n    original_logs_path = Path(args.model_path).parent\n    logs_path = original_logs_path / f\"{Path(args.model_path).stem}_major_vote\"\n\n    args.__dict__.update(json.loads((original_logs_path / 'args.json').read_text()))\n    logger.info(args.__dict__)\n    dist_util.setup_dist()\n\n    logger.configure(dir=str(logs_path), log_suffix=f\"val_{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.load_state_dict(\n        dist_util.load_state_dict(args.model_path, map_location=\"cpu\")\n    )\n    model.to(dist_util.dev())\n    model.eval()\n\n    test_dataset = create_dataset(\n        class_name=args.class_name,\n        mode='val',\n        expansion=args.expansion,\n    )\n\n    if args.__dict__.get(\"seed\") is None:\n        seed = 1234\n    else:\n        seed = int(args.__dict__.get(\"seed\"))\n    set_random_seed(seed, deterministic=True)\n    logger.log(\"sampling major vote val\")\n    (logs_path / \"major_vote\").mkdir(exist_ok=True)\n    step = 
int(Path(args.model_path).stem.split(\"_\")[-1])\n    sampling_major_vote_func(diffusion, model, str(logs_path / \"major_vote\"), test_dataset, logger, args.clip_denoised,\n                             step=step, n_rounds=len(test_dataset))\n\n    dist.barrier()\n    logger.log(\"sampling complete\")\n\n\ndef create_argparser():\n    defaults = dict(\n        clip_denoised=True,\n        num_samples=10000,\n        batch_size=16,\n        use_ddim=False,\n        model_path=\"\",\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "image_sample_diff_medical.py",
    "content": "\"\"\"\nSample segmentation masks from a trained model, using majority voting over\nmultiple generated samples, and save the results for evaluation.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nfrom pathlib import Path\n\nimport torch.distributed as dist\n\nfrom improved_diffusion import dist_util, logger\nfrom datasets.monu import create_dataset\nfrom improved_diffusion.sampling_util import sampling_major_vote_func\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    add_dict_to_argparser,\n    args_to_dict,\n)\nfrom improved_diffusion.utils import set_random_seed\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n\n    original_logs_path = Path(args.model_path).parent\n    logs_path = original_logs_path / f\"{Path(args.model_path).stem}_major_vote\"\n\n    args.__dict__.update(json.loads((original_logs_path / 'args.json').read_text()))\n    logger.info(args.__dict__)\n    dist_util.setup_dist()\n\n    logger.configure(dir=str(logs_path), log_suffix=f\"val_{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.load_state_dict(\n        dist_util.load_state_dict(args.model_path, map_location=\"cpu\")\n    )\n    model.to(dist_util.dev())\n    model.eval()\n\n    test_dataset = create_dataset(\n        mode='val',\n    )\n\n    if args.__dict__.get(\"seed\") is None:\n        seed = 1234\n    else:\n        seed = int(args.__dict__.get(\"seed\"))\n    set_random_seed(seed, deterministic=True)\n    logger.log(\"sampling major vote val\")\n    (logs_path / \"major_vote\").mkdir(exist_ok=True)\n    step = int(Path(args.model_path).stem.split(\"_\")[-1])\n    sampling_major_vote_func(diffusion, model, 
str(logs_path / \"major_vote\"), test_dataset, logger, args.clip_denoised,\n                             step=step, n_rounds=len(test_dataset))\n\n    dist.barrier()\n    logger.log(\"sampling complete\")\n\n\ndef create_argparser():\n    defaults = dict(\n        clip_denoised=True,\n        num_samples=10000,\n        batch_size=16,\n        use_ddim=False,\n        model_path=\"\",\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "image_sample_diff_vaih.py",
    "content": "\"\"\"\nSample segmentation masks from a trained model, using majority voting over\nmultiple generated samples, and save the results for evaluation.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nfrom pathlib import Path\n\nimport torch.distributed as dist\nfrom mpi4py import MPI\n\nfrom improved_diffusion import dist_util, logger\nfrom improved_diffusion.sampling_util import sampling_major_vote_func\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    add_dict_to_argparser,\n    args_to_dict,\n)\nfrom improved_diffusion.utils import set_random_seed\nfrom datasets.vaih import VaihDataset\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n\n    original_logs_path = Path(args.model_path).parent\n    logs_path = original_logs_path / f\"{Path(args.model_path).stem}_major_vote\"\n\n    args.__dict__.update(json.loads((original_logs_path / 'args.json').read_text()))\n    logger.info(args.__dict__)\n    dist_util.setup_dist()\n\n    logger.configure(dir=str(logs_path), log_suffix=f\"val_{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.load_state_dict(\n        dist_util.load_state_dict(args.model_path, map_location=\"cpu\")\n    )\n    model.to(dist_util.dev())\n    model.eval()\n\n    test_dataset = VaihDataset(\n        mode='val',\n        image_size=args.image_size,\n        shard=MPI.COMM_WORLD.Get_rank(),\n        num_shards=MPI.COMM_WORLD.Get_size(),\n    )\n\n    if args.__dict__.get(\"seed\") is None:\n        seed = 1234\n    else:\n        seed = int(args.__dict__.get(\"seed\"))\n    set_random_seed(seed, deterministic=True)\n    logger.log(\"sampling major vote val\")\n    (logs_path / 
\"major_vote\").mkdir(exist_ok=True)\n    step = int(Path(args.model_path).stem.split(\"_\")[-1])\n    sampling_major_vote_func(diffusion, model, str(logs_path / \"major_vote\"), test_dataset, logger, args.clip_denoised,\n                             step=step, n_rounds=len(test_dataset))\n\n    dist.barrier()\n    logger.log(\"sampling complete\")\n\n\ndef create_argparser():\n    defaults = dict(\n        clip_denoised=True,\n        num_samples=10000,\n        batch_size=16,\n        use_ddim=False,\n        model_path=\"\",\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "image_train_diff_city.py",
    "content": "\"\"\"\nTrain a diffusion model on images.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nimport os\nfrom pathlib import Path\n\nimport git\nfrom mpi4py import MPI\n\nfrom improved_diffusion import dist_util, logger\nfrom datasets.city import load_data, create_dataset\nfrom improved_diffusion.resample import create_named_schedule_sampler\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    args_to_dict,\n    add_dict_to_argparser,\n)\nfrom improved_diffusion.train_util import TrainLoop\nfrom improved_diffusion.utils import set_random_seed, set_random_seed_for_iterations\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n    args.use_fp16 = True\n    args.clip_denoised = False\n    args.learn_sigma = False\n    args.sigma_small = False\n    args.num_channels = 128\n    args.image_size = 128\n    args.num_res_blocks = 3\n    args.noise_schedule = \"linear\"\n    args.rescale_learned_sigmas = False\n    args.rescale_timesteps = False\n    args.use_scale_shift_norm = False\n    args.deeper_net = True\n\n    exp_name = f\"city_{args.rrdb_blocks}_{args.lr}_{args.batch_size}_{args.diffusion_steps}_{str(args.dropout)}_{args.class_name}_{MPI.COMM_WORLD.Get_rank()}\"\n    if args.expansion:\n        exp_name += \"_expansion\"\n    logs_root = Path(__file__).absolute().parent.parent / \"logs\"\n    log_path = logs_root / f\"{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}_{exp_name}\"\n    os.environ[\"OPENAI_LOGDIR\"] = str(log_path)\n    set_random_seed(MPI.COMM_WORLD.Get_rank(), deterministic=True)\n    set_random_seed_for_iterations(MPI.COMM_WORLD.Get_rank())\n    dist_util.setup_dist()\n    logger.configure(dir=str(log_path))\n\n    if args.resume_checkpoint:\n        resumed_checkpoint_arg = args.resume_checkpoint\n        args.__dict__.update(json.loads((Path(args.resume_checkpoint) / 
'args.json').read_text()))\n        args.resume_checkpoint = resumed_checkpoint_arg\n\n    logger.info(args.__dict__)\n\n    (Path(log_path) / 'args.json').write_text(json.dumps(args.__dict__, indent=4))\n    logger.info(f\"log folder path: {Path(log_path).resolve()}\")\n\n    repo = git.Repo(search_parent_directories=True)\n    sha = repo.head.object.hexsha\n\n    logger.log(f\"git commit hash {sha}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.to(dist_util.dev())\n    schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)\n\n    logger.log(\"creating data loader...\")\n    data = load_data(\n        data_dir=args.data_dir,\n        batch_size=args.batch_size,\n        image_size=args.image_size,\n        class_cond=args.class_cond,\n        class_name=args.class_name,\n        expansion=args.expansion\n    )\n    val_dataset = create_dataset(\n        class_name=args.class_name,\n        mode='val',\n        expansion=args.expansion,\n    )\n\n    logger.log(f\"gpu {MPI.COMM_WORLD.Get_rank()} / {MPI.COMM_WORLD.Get_size()} val length {len(val_dataset)}\")\n\n    logger.log(\"training...\")\n    TrainLoop(\n        model=model,\n        diffusion=diffusion,\n        data=data,\n        batch_size=args.batch_size,\n        microbatch=args.microbatch,\n        lr=args.lr,\n        ema_rate=args.ema_rate,\n        log_interval=args.log_interval,\n        save_interval=args.save_interval,\n        resume_checkpoint=args.resume_checkpoint,\n        use_fp16=args.use_fp16,\n        fp16_scale_growth=args.fp16_scale_growth,\n        schedule_sampler=schedule_sampler,\n        weight_decay=args.weight_decay,\n        lr_anneal_steps=args.lr_anneal_steps,\n        clip_denoised=args.clip_denoised,\n        logger=logger,\n        image_size=args.image_size,\n        val_dataset=val_dataset,\n       
 run_without_test=args.run_without_test,\n        args=args\n        # dist_util=dist_util,\n    ).run_loop(max_iter=300000, start_print_iter=args.start_print_iter)\n\n\ndef create_argparser():\n    defaults = dict(\n        data_dir=\"\",\n        schedule_sampler=\"uniform\",\n        lr=0.00002,\n        weight_decay=0.0,\n        lr_anneal_steps=0,\n        clip_denoised=False,\n        batch_size=4,\n        microbatch=-1,  # -1 disables microbatches\n        ema_rate=\"0.9999\",  # comma-separated list of EMA values\n        save_interval=5000,\n        start_print_iter=75000,\n        log_interval=200,\n        run_without_test=False,\n        resume_checkpoint=\"\",\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "image_train_diff_medical.py",
    "content": "\"\"\"\nTrain a diffusion model on images.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nimport os\nfrom pathlib import Path\n\nimport git\nfrom mpi4py import MPI\n\nfrom improved_diffusion import dist_util, logger\nfrom datasets.monu import load_data, create_dataset\nfrom improved_diffusion.resample import create_named_schedule_sampler\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    args_to_dict,\n    add_dict_to_argparser,\n)\nfrom improved_diffusion.train_util import TrainLoop\nfrom improved_diffusion.utils import set_random_seed, set_random_seed_for_iterations\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n    args.use_fp16 = True\n    args.clip_denoised = False\n    args.learn_sigma = False\n    args.sigma_small = False\n    args.image_size = 256\n    args.num_res_blocks = 3\n    args.noise_schedule = \"linear\"\n    args.rescale_learned_sigmas = False\n    args.rescale_timesteps = False\n    args.use_scale_shift_norm = False\n    args.deeper_net = True\n    # args.start_print_iter = 4\n    # args.save_interval = 4\n\n    exp_name = f\"monu_{args.rrdb_blocks}_{args.lr}_{args.batch_size}_{args.diffusion_steps}_{str(args.dropout)}_{MPI.COMM_WORLD.Get_rank()}\"\n    logs_root = Path(__file__).absolute().parent.parent / \"logs\"\n    log_path = logs_root / f\"{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}_{exp_name}\"\n    os.environ[\"OPENAI_LOGDIR\"] = str(log_path)\n    set_random_seed(MPI.COMM_WORLD.Get_rank(), deterministic=True)\n    set_random_seed_for_iterations(MPI.COMM_WORLD.Get_rank())\n    dist_util.setup_dist()\n    logger.configure(dir=str(log_path))\n\n    if args.resume_checkpoint:\n        resumed_checkpoint_arg = args.resume_checkpoint\n        args.__dict__.update(json.loads((Path(args.resume_checkpoint) / 'args.json').read_text()))\n        args.resume_checkpoint = 
resumed_checkpoint_arg\n\n    logger.info(args.__dict__)\n\n    (Path(log_path) / 'args.json').write_text(json.dumps(args.__dict__, indent=4))\n    logger.info(f\"log folder path: {Path(log_path).resolve()}\")\n\n    repo = git.Repo(search_parent_directories=True)\n    sha = repo.head.object.hexsha\n\n    logger.log(f\"git commit hash {sha}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.to(dist_util.dev())\n    schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)\n\n    logger.log(\"creating data loader...\")\n    data = load_data(\n        data_dir=args.data_dir,\n        batch_size=args.batch_size,\n        image_size=args.image_size,\n        class_cond=args.class_cond,\n        class_name=args.class_name,\n        expansion=args.expansion\n    )\n    val_dataset = create_dataset(\n        mode='val',\n        image_size=args.image_size\n    )\n\n    logger.log(f\"gpu {MPI.COMM_WORLD.Get_rank()} / {MPI.COMM_WORLD.Get_size()} val length {len(val_dataset)}\")\n\n    logger.log(\"training...\")\n    TrainLoop(\n        model=model,\n        diffusion=diffusion,\n        data=data,\n        batch_size=args.batch_size,\n        microbatch=args.microbatch,\n        lr=args.lr,\n        ema_rate=args.ema_rate,\n        log_interval=args.log_interval,\n        save_interval=args.save_interval,\n        resume_checkpoint=args.resume_checkpoint,\n        use_fp16=args.use_fp16,\n        fp16_scale_growth=args.fp16_scale_growth,\n        schedule_sampler=schedule_sampler,\n        weight_decay=args.weight_decay,\n        lr_anneal_steps=args.lr_anneal_steps,\n        clip_denoised=args.clip_denoised,\n        logger=logger,\n        image_size=args.image_size,\n        val_dataset=val_dataset,\n        run_without_test=args.run_without_test,\n        args=args\n        # dist_util=dist_util,\n    
).run_loop(max_iter=300000, start_print_iter=args.start_print_iter)\n\n\ndef create_argparser():\n    defaults = dict(\n        data_dir=\"\",\n        schedule_sampler=\"uniform\",\n        lr=0.00002,\n        weight_decay=0.0,\n        lr_anneal_steps=0,\n        clip_denoised=False,\n        batch_size=4,\n        microbatch=-1,  # -1 disables microbatches\n        ema_rate=\"0.9999\",  # comma-separated list of EMA values\n        save_interval=5000,\n        start_print_iter=75000,\n        log_interval=200,\n        run_without_test=False,\n        resume_checkpoint=\"\",\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "image_train_diff_vaih.py",
    "content": "\"\"\"\nTrain a diffusion model on images.\n\"\"\"\n\nimport argparse\nimport datetime\nimport json\nimport os\nfrom pathlib import Path\n\nimport git\nfrom mpi4py import MPI\n\nfrom improved_diffusion import dist_util, logger\nfrom datasets.vaih import load_data\nfrom improved_diffusion.resample import create_named_schedule_sampler\nfrom improved_diffusion.script_util import (\n    model_and_diffusion_defaults,\n    create_model_and_diffusion,\n    args_to_dict,\n    add_dict_to_argparser,\n)\nfrom improved_diffusion.train_util import TrainLoop\nfrom improved_diffusion.utils import set_random_seed, set_random_seed_for_iterations\nfrom datasets.vaih import VaihDataset\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\ndef main():\n    args = create_argparser().parse_args()\n    args.use_fp16 = True\n    args.clip_denoised = False\n    args.learn_sigma = False\n    args.sigma_small = False\n    args.num_channels = 128\n    args.image_size = 256\n    args.num_res_blocks = 3\n    args.noise_schedule = \"linear\"\n    args.rescale_learned_sigmas = False\n    args.rescale_timesteps = False\n    args.use_scale_shift_norm = False\n    args.deeper_net = True\n\n    exp_name = f\"vaih_256_{args.rrdb_blocks}_{args.lr}_{args.batch_size}_{args.diffusion_steps}_{str(args.dropout)}_{MPI.COMM_WORLD.Get_rank()}\"\n\n    logs_root = Path(__file__).absolute().parent.parent / \"logs\"\n    log_path = logs_root / f\"{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}_{exp_name}\"\n    os.environ[\"OPENAI_LOGDIR\"] = str(log_path)\n    set_random_seed(MPI.COMM_WORLD.Get_rank(), deterministic=True)\n    set_random_seed_for_iterations(MPI.COMM_WORLD.Get_rank())\n    dist_util.setup_dist()\n    logger.configure(dir=str(log_path))\n\n    if args.resume_checkpoint:\n        resumed_checkpoint_arg = args.resume_checkpoint\n        args.__dict__.update(json.loads((Path(args.resume_checkpoint) / 'args.json').read_text()))\n        args.resume_checkpoint = 
resumed_checkpoint_arg\n\n    logger.info(args.__dict__)\n\n    (Path(log_path) / 'args.json').write_text(json.dumps(args.__dict__, indent=4))\n    logger.info(f\"log folder path: {Path(log_path).resolve()}\")\n\n    repo = git.Repo(search_parent_directories=True)\n    sha = repo.head.object.hexsha\n\n    logger.log(f\"git commit hash {sha}\")\n\n    logger.log(\"creating model and diffusion...\")\n    model, diffusion = create_model_and_diffusion(\n        **args_to_dict(args, model_and_diffusion_defaults().keys())\n    )\n    model.to(dist_util.dev())\n    schedule_sampler = create_named_schedule_sampler(args.schedule_sampler, diffusion)\n\n    logger.log(\"creating data loader...\")\n    data = load_data(\n        data_dir=args.data_dir,\n        batch_size=args.batch_size,\n        image_size=args.image_size,\n        class_cond=args.class_cond\n    )\n    val_dataset = VaihDataset(\n        mode='val',\n        image_size=args.image_size,\n        shard=MPI.COMM_WORLD.Get_rank(),\n        num_shards=MPI.COMM_WORLD.Get_size(),\n    )\n\n    logger.log(f\"gpu {MPI.COMM_WORLD.Get_rank()} / {MPI.COMM_WORLD.Get_size()} val length {len(val_dataset)}\")\n\n    logger.log(\"training...\")\n    TrainLoop(\n        model=model,\n        diffusion=diffusion,\n        data=data,\n        batch_size=args.batch_size,\n        microbatch=args.microbatch,\n        lr=args.lr,\n        ema_rate=args.ema_rate,\n        log_interval=args.log_interval,\n        save_interval=args.save_interval,\n        resume_checkpoint=args.resume_checkpoint,\n        use_fp16=args.use_fp16,\n        fp16_scale_growth=args.fp16_scale_growth,\n        schedule_sampler=schedule_sampler,\n        weight_decay=args.weight_decay,\n        lr_anneal_steps=args.lr_anneal_steps,\n        clip_denoised=args.clip_denoised,\n        logger=logger,\n        image_size=args.image_size,\n        val_dataset=val_dataset,\n        run_without_test=args.run_without_test,\n        args=args\n        # 
dist_util=dist_util,\n    ).run_loop(max_iter=300000, start_print_iter=args.start_print_iter)\n\n\ndef create_argparser():\n    defaults = dict(\n        data_dir=\"\",\n        schedule_sampler=\"uniform\",\n        lr=0.00002,\n        weight_decay=0.0,\n        lr_anneal_steps=0,\n        clip_denoised=False,\n        batch_size=4,\n        microbatch=-1,  # -1 disables microbatches\n        ema_rate=\"0.9999\",  # comma-separated list of EMA values\n        save_interval=5000,\n        start_print_iter=75000,\n        log_interval=200,\n        run_without_test=False,\n        resume_checkpoint=\"\",\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n    )\n    defaults.update(model_and_diffusion_defaults())\n    parser = argparse.ArgumentParser()\n    add_dict_to_argparser(parser, defaults)\n    return parser\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "improved_diffusion/RRDB.py",
    "content": "import functools\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef make_layer(block, n_layers):\n    layers = []\n    for _ in range(n_layers):\n        layers.append(block())\n    return nn.Sequential(*layers)\n\n\nclass ResidualDenseBlock_5C(nn.Module):\n    def __init__(self, nf=64, gc=32, bias=True):\n        super(ResidualDenseBlock_5C, self).__init__()\n        # gc: growth channel, i.e. intermediate channels\n        self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)\n        self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)\n        self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)\n        self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)\n        self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)\n        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n        # initialization\n        # mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)\n\n    def forward(self, x):\n        x1 = self.lrelu(self.conv1(x))\n        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))\n        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))\n        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))\n        x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))\n        return x5 * 0.2 + x\n\n\nclass RRDB(nn.Module):\n    '''Residual in Residual Dense Block'''\n\n    def __init__(self, nf=1, gc=32):\n        super(RRDB, self).__init__()\n        self.RDB1 = ResidualDenseBlock_5C(nf, gc)\n        self.RDB2 = ResidualDenseBlock_5C(nf, gc)\n        self.RDB3 = ResidualDenseBlock_5C(nf, gc)\n\n    def forward(self, x):\n        out = self.RDB1(x)\n        out = self.RDB2(out)\n        out = self.RDB3(out)\n        return out * 0.2 + x\n\nclass RRDBNet(nn.Module):\n    def __init__(self, in_nc=3, out_nc=128, nf=64, nb=3, gc=32):\n        super(RRDBNet, self).__init__()\n        RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)\n\n   
     self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)\n        self.RRDB_trunk = make_layer(RRDB_block_f, nb)\n        self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)\n        self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)\n        self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)\n\n        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n    def forward(self, x):\n        fea = self.conv_first(x)\n        # residual trunk: add the aggregated RRDB features back onto the first conv's output\n        trunk = self.trunk_conv(self.RRDB_trunk(fea))\n        fea = fea + trunk\n        out = self.conv_last(self.lrelu(self.HRconv(fea)))\n\n        return out\n\n"
  },
  {
    "path": "improved_diffusion/__init__.py",
    "content": "\"\"\"\nCodebase for \"Improved Denoising Diffusion Probabilistic Models\".\n\"\"\"\n"
  },
  {
    "path": "improved_diffusion/dist_util.py",
    "content": "\"\"\"\nHelpers for distributed training.\n\"\"\"\n\nimport io\nimport os\nimport socket\n\nimport blobfile as bf\nfrom mpi4py import MPI\nimport torch as th\nimport torch.distributed as dist\n\n# Change this to reflect your cluster layout.\n# The GPU for a given rank is (rank % GPUS_PER_NODE).\nGPUS_PER_NODE = 8\n\nSETUP_RETRY_COUNT = 3\n\n\ndef setup_dist():\n    \"\"\"\n    Setup a distributed process group.\n    \"\"\"\n    if dist.is_initialized():\n        return\n\n    comm = MPI.COMM_WORLD\n    backend = \"gloo\" if not th.cuda.is_available() else \"nccl\"\n\n    if backend == \"gloo\":\n        hostname = \"localhost\"\n    else:\n        hostname = socket.gethostbyname(socket.getfqdn())\n    os.environ[\"MASTER_ADDR\"] = comm.bcast(hostname, root=0)\n    os.environ[\"RANK\"] = str(comm.rank)\n    os.environ[\"WORLD_SIZE\"] = str(comm.size)\n\n    port = comm.bcast(_find_free_port(), root=0)\n    os.environ[\"MASTER_PORT\"] = str(port)\n    dist.init_process_group(backend=backend, init_method=\"env://\")\n\n\ndef dev():\n    \"\"\"\n    Get the device to use for torch.distributed.\n    \"\"\"\n    if th.cuda.is_available():\n        return th.device(f\"cuda:{MPI.COMM_WORLD.Get_rank() % GPUS_PER_NODE}\")\n    return th.device(\"cpu\")\n\n\ndef load_state_dict(path, **kwargs):\n    \"\"\"\n    Load a PyTorch file without redundant fetches across MPI ranks.\n    \"\"\"\n    if MPI.COMM_WORLD.Get_rank() == 0:\n        with bf.BlobFile(path, \"rb\") as f:\n            data = f.read()\n    else:\n        data = None\n    data = MPI.COMM_WORLD.bcast(data)\n    return th.load(io.BytesIO(data), **kwargs)\n\n\ndef sync_params(params):\n    \"\"\"\n    Synchronize a sequence of Tensors across ranks from rank 0.\n    \"\"\"\n    for p in params:\n        with th.no_grad():\n            dist.broadcast(p, 0)\n\n\ndef _find_free_port():\n    try:\n        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n        s.bind((\"\", 0))\n        
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # note: no effect here, since the option is set after bind()\n        return s.getsockname()[1]\n    finally:\n        s.close()\n"
  },
  {
    "path": "improved_diffusion/fp16_util.py",
    "content": "\"\"\"\nHelpers to train with 16-bit precision.\n\"\"\"\n\nimport torch.nn as nn\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\n\ndef convert_module_to_f16(l):\n    \"\"\"\n    Convert primitive modules to float16.\n    \"\"\"\n    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):\n        l.weight.data = l.weight.data.half()\n        l.bias.data = l.bias.data.half()\n\n\ndef convert_module_to_f32(l):\n    \"\"\"\n    Convert primitive modules to float32, undoing convert_module_to_f16().\n    \"\"\"\n    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):\n        l.weight.data = l.weight.data.float()\n        l.bias.data = l.bias.data.float()\n\n\ndef make_master_params(model_params):\n    \"\"\"\n    Copy model parameters into a (differently-shaped) list of full-precision\n    parameters.\n    \"\"\"\n    master_params = _flatten_dense_tensors(\n        [param.detach().float() for param in model_params]\n    )\n    master_params = nn.Parameter(master_params)\n    master_params.requires_grad = True\n    return [master_params]\n\n\ndef model_grads_to_master_grads(model_params, master_params):\n    \"\"\"\n    Copy the gradients from the model parameters into the master parameters\n    from make_master_params().\n    \"\"\"\n    master_params[0].grad = _flatten_dense_tensors(\n        [param.grad.data.detach().float() for param in model_params]\n    )\n\n\ndef master_params_to_model_params(model_params, master_params):\n    \"\"\"\n    Copy the master parameter data back into the model parameters.\n    \"\"\"\n    # Without copying to a list, if a generator is passed, this will\n    # silently not copy any parameters.\n    model_params = list(model_params)\n\n    for param, master_param in zip(\n        model_params, unflatten_master_params(model_params, master_params)\n    ):\n        param.detach().copy_(master_param)\n\n\ndef unflatten_master_params(model_params, master_params):\n    \"\"\"\n    Unflatten the 
master parameters to look like model_params.\n    \"\"\"\n    return _unflatten_dense_tensors(master_params[0].detach(), model_params)\n\n\ndef zero_grad(model_params):\n    for param in model_params:\n        # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group\n        if param.grad is not None:\n            param.grad.detach_()\n            param.grad.zero_()\n"
  },
  {
    "path": "improved_diffusion/gaussian_diffusion.py",
    "content": "\"\"\"\nThis code started out as a PyTorch port of Ho et al's diffusion models:\nhttps://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py\n\nDocstrings have been added, as well as DDIM sampling and a new collection of beta schedules.\n\"\"\"\n\nimport enum\nimport math\n\nimport numpy as np\nimport torch as th\n\nfrom .nn import mean_flat\nfrom .losses import normal_kl, discretized_gaussian_log_likelihood\n\n\ndef get_named_beta_schedule(schedule_name, num_diffusion_timesteps):\n    \"\"\"\n    Get a pre-defined beta schedule for the given name.\n\n    The beta schedule library consists of beta schedules which remain similar\n    in the limit of num_diffusion_timesteps.\n    Beta schedules may be added, but should not be removed or changed once\n    they are committed to maintain backwards compatibility.\n    \"\"\"\n    if schedule_name == \"linear\":\n        # Linear schedule from Ho et al, extended to work for any number of\n        # diffusion steps.\n        scale = 1000 / num_diffusion_timesteps\n        beta_start = scale * 0.0001\n        beta_end = scale * 0.02\n        return np.linspace(\n            beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64\n        )\n    elif schedule_name == \"cosine\":\n        return betas_for_alpha_bar(\n            num_diffusion_timesteps,\n            lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2,\n        )\n    else:\n        raise NotImplementedError(f\"unknown beta schedule: {schedule_name}\")\n\n\ndef betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):\n    \"\"\"\n    Create a beta schedule that discretizes the given alpha_t_bar function,\n    which defines the cumulative product of (1-beta) over time from t = [0,1].\n\n    :param num_diffusion_timesteps: the number of betas to produce.\n    :param alpha_bar: a lambda that takes an argument t from 0 to 1 and\n                      
produces the cumulative product of (1-beta) up to that\n                      part of the diffusion process.\n    :param max_beta: the maximum beta to use; use values lower than 1 to\n                     prevent singularities.\n    \"\"\"\n    betas = []\n    for i in range(num_diffusion_timesteps):\n        t1 = i / num_diffusion_timesteps\n        t2 = (i + 1) / num_diffusion_timesteps\n        betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))\n    return np.array(betas)\n\n\nclass ModelMeanType(enum.Enum):\n    \"\"\"\n    Which type of output the model predicts.\n    \"\"\"\n\n    PREVIOUS_X = enum.auto()  # the model predicts x_{t-1}\n    START_X = enum.auto()  # the model predicts x_0\n    EPSILON = enum.auto()  # the model predicts epsilon\n\n\nclass ModelVarType(enum.Enum):\n    \"\"\"\n    What is used as the model's output variance.\n\n    The LEARNED_RANGE option has been added to allow the model to predict\n    values between FIXED_SMALL and FIXED_LARGE, making its job easier.\n    \"\"\"\n\n    LEARNED = enum.auto()\n    FIXED_SMALL = enum.auto()\n    FIXED_LARGE = enum.auto()\n    LEARNED_RANGE = enum.auto()\n\n\nclass LossType(enum.Enum):\n    MSE = enum.auto()  # use raw MSE loss (and KL when learning variances)\n    RESCALED_MSE = (\n        enum.auto()\n    )  # use raw MSE loss (with RESCALED_KL when learning variances)\n    KL = enum.auto()  # use the variational lower-bound\n    RESCALED_KL = enum.auto()  # like KL, but rescale to estimate the full VLB\n\n    def is_vb(self):\n        return self == LossType.KL or self == LossType.RESCALED_KL\n\n\nclass GaussianDiffusion:\n    \"\"\"\n    Utilities for training and sampling diffusion models.\n\n    Ported directly from here, and then adapted over time to further experimentation.\n    https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42\n\n    :param betas: a 1-D numpy array of betas for each diffusion 
timestep,\n                  starting at T and going to 1.\n    :param model_mean_type: a ModelMeanType determining what the model outputs.\n    :param model_var_type: a ModelVarType determining how variance is output.\n    :param loss_type: a LossType determining the loss function to use.\n    :param rescale_timesteps: if True, pass floating point timesteps into the\n                              model so that they are always scaled like in the\n                              original paper (0 to 1000).\n    \"\"\"\n\n    def __init__(\n        self,\n        *,\n        betas,\n        model_mean_type,\n        model_var_type,\n        loss_type,\n        rescale_timesteps=False,\n    ):\n        self.model_mean_type = model_mean_type\n        self.model_var_type = model_var_type\n        self.loss_type = loss_type\n        self.rescale_timesteps = rescale_timesteps\n\n        # Use float64 for accuracy.\n        betas = np.array(betas, dtype=np.float64)\n        self.betas = betas\n        assert len(betas.shape) == 1, \"betas must be 1-D\"\n        assert (betas > 0).all() and (betas <= 1).all()\n\n        self.num_timesteps = int(betas.shape[0])\n\n        alphas = 1.0 - betas\n        self.alphas_cumprod = np.cumprod(alphas, axis=0)\n        self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1])\n        self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0)\n        assert self.alphas_cumprod_prev.shape == (self.num_timesteps,)\n\n        # calculations for diffusion q(x_t | x_{t-1}) and others\n        self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod)\n        self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod)\n        self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod)\n        self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod)\n        self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1)\n\n        # calculations for posterior q(x_{t-1} | x_t, x_0)\n 
       self.posterior_variance = (\n            betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)\n        )\n        # log calculation clipped because the posterior variance is 0 at the\n        # beginning of the diffusion chain.\n        self.posterior_log_variance_clipped = np.log(\n            np.append(self.posterior_variance[1], self.posterior_variance[1:])\n        )\n        self.posterior_mean_coef1 = (\n            betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)\n        )\n        self.posterior_mean_coef2 = (\n            (1.0 - self.alphas_cumprod_prev)\n            * np.sqrt(alphas)\n            / (1.0 - self.alphas_cumprod)\n        )\n\n    def q_mean_variance(self, x_start, t):\n        \"\"\"\n        Get the distribution q(x_t | x_0).\n\n        :param x_start: the [N x C x ...] tensor of noiseless inputs.\n        :param t: the number of diffusion steps (minus 1). Here, 0 means one step.\n        :return: A tuple (mean, variance, log_variance), all of x_start's shape.\n        \"\"\"\n        mean = (\n            _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start\n        )\n        variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)\n        log_variance = _extract_into_tensor(\n            self.log_one_minus_alphas_cumprod, t, x_start.shape\n        )\n        return mean, variance, log_variance\n\n    def q_sample(self, x_start, t, noise=None):\n        \"\"\"\n        Diffuse the data for a given number of diffusion steps.\n\n        In other words, sample from q(x_t | x_0).\n\n        :param x_start: the initial data batch.\n        :param t: the number of diffusion steps (minus 1). 
Here, 0 means one step.\n        :param noise: if specified, the split-out normal noise.\n        :return: A noisy version of x_start.\n        \"\"\"\n        if noise is None:\n            noise = th.randn_like(x_start)\n        assert noise.shape == x_start.shape\n        return (\n            _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start\n            + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape)\n            * noise\n        )\n\n    def q_posterior_mean_variance(self, x_start, x_t, t):\n        \"\"\"\n        Compute the mean and variance of the diffusion posterior:\n\n            q(x_{t-1} | x_t, x_0)\n\n        \"\"\"\n        assert x_start.shape == x_t.shape\n        posterior_mean = (\n            _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start\n            + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t\n        )\n        posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape)\n        posterior_log_variance_clipped = _extract_into_tensor(\n            self.posterior_log_variance_clipped, t, x_t.shape\n        )\n        assert (\n            posterior_mean.shape[0]\n            == posterior_variance.shape[0]\n            == posterior_log_variance_clipped.shape[0]\n            == x_start.shape[0]\n        )\n        return posterior_mean, posterior_variance, posterior_log_variance_clipped\n\n    def p_mean_variance(\n        self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None\n    ):\n        \"\"\"\n        Apply the model to get p(x_{t-1} | x_t), as well as a prediction of\n        the initial x, x_0.\n\n        :param model: the model, which takes a signal and a batch of timesteps\n                      as input.\n        :param x: the [N x C x ...] 
tensor at time t.\n        :param t: a 1-D Tensor of timesteps.\n        :param clip_denoised: if True, clip the denoised signal into [-1, 1].\n        :param denoised_fn: if not None, a function which applies to the\n            x_start prediction before it is used to sample. Applies before\n            clip_denoised.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. This can be used for conditioning.\n        :return: a dict with the following keys:\n                 - 'mean': the model mean output.\n                 - 'variance': the model variance output.\n                 - 'log_variance': the log of 'variance'.\n                 - 'pred_xstart': the prediction for x_0.\n        \"\"\"\n        if model_kwargs is None:\n            model_kwargs = {}\n\n        B, C = x.shape[:2]\n        assert t.shape == (B,)\n        model_output = model(x, self._scale_timesteps(t), **model_kwargs)\n\n        if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]:\n            assert model_output.shape == (B, C * 2, *x.shape[2:])\n            model_output, model_var_values = th.split(model_output, C, dim=1)\n            if self.model_var_type == ModelVarType.LEARNED:\n                model_log_variance = model_var_values\n                model_variance = th.exp(model_log_variance)\n            else:\n                min_log = _extract_into_tensor(\n                    self.posterior_log_variance_clipped, t, x.shape\n                )\n                max_log = _extract_into_tensor(np.log(self.betas), t, x.shape)\n                # The model_var_values is [-1, 1] for [min_var, max_var].\n                frac = (model_var_values + 1) / 2\n                model_log_variance = frac * max_log + (1 - frac) * min_log\n                model_variance = th.exp(model_log_variance)\n        else:\n            model_variance, model_log_variance = {\n                # for fixedlarge, we set the initial 
(log-)variance like so\n                # to get a better decoder log likelihood.\n                ModelVarType.FIXED_LARGE: (\n                    np.append(self.posterior_variance[1], self.betas[1:]),\n                    np.log(np.append(self.posterior_variance[1], self.betas[1:])),\n                ),\n                ModelVarType.FIXED_SMALL: (\n                    self.posterior_variance,\n                    self.posterior_log_variance_clipped,\n                ),\n            }[self.model_var_type]\n            model_variance = _extract_into_tensor(model_variance, t, x.shape)\n            model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape)\n\n        def process_xstart(x):\n            if denoised_fn is not None:\n                x = denoised_fn(x)\n            if clip_denoised:\n                return x.clamp(-1, 1)\n            return x\n\n        if self.model_mean_type == ModelMeanType.PREVIOUS_X:\n            pred_xstart = process_xstart(\n                self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output)\n            )\n            model_mean = model_output\n        elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]:\n            if self.model_mean_type == ModelMeanType.START_X:\n                pred_xstart = process_xstart(model_output)\n            else:\n                pred_xstart = process_xstart(\n                    self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)\n                )\n            model_mean, _, _ = self.q_posterior_mean_variance(\n                x_start=pred_xstart, x_t=x, t=t\n            )\n        else:\n            raise NotImplementedError(self.model_mean_type)\n\n        assert (\n            model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape\n        )\n        return {\n            \"mean\": model_mean,\n            \"variance\": model_variance,\n            \"log_variance\": model_log_variance,\n            \"pred_xstart\": 
pred_xstart,\n        }\n\n    def _predict_xstart_from_eps(self, x_t, t, eps):\n        assert x_t.shape == eps.shape\n        return (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t\n            - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps\n        )\n\n    def _predict_xstart_from_xprev(self, x_t, t, xprev):\n        assert x_t.shape == xprev.shape\n        return (  # (xprev - coef2*x_t) / coef1\n            _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev\n            - _extract_into_tensor(\n                self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape\n            )\n            * x_t\n        )\n\n    def _predict_eps_from_xstart(self, x_t, t, pred_xstart):\n        return (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t\n            - pred_xstart\n        ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)\n\n    def _scale_timesteps(self, t):\n        if self.rescale_timesteps:\n            return t.float() * (1000.0 / self.num_timesteps)\n        return t\n\n    def p_sample(\n        self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None\n    ):\n        \"\"\"\n        Sample x_{t-1} from the model at the given timestep.\n\n        :param model: the model to sample from.\n        :param x: the current tensor at x_{t-1}.\n        :param t: the value of t, starting at 0 for the first diffusion step.\n        :param clip_denoised: if True, clip the x_start prediction to [-1, 1].\n        :param denoised_fn: if not None, a function which applies to the\n            x_start prediction before it is used to sample.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :return: a dict containing the following keys:\n                 - 'sample': a random sample from the model.\n                 - 'pred_xstart': a prediction of x_0.\n        \"\"\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        noise = th.randn_like(x)\n        nonzero_mask = (\n            (t != 0).float().view(-1, *([1] * (len(x.shape) - 1)))\n        )  # no noise when t == 0\n        sample = out[\"mean\"] + nonzero_mask * th.exp(0.5 * out[\"log_variance\"]) * noise\n        return {\"sample\": sample, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def p_sample_loop(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n    ):\n        \"\"\"\n        Generate samples from the model.\n\n        :param model: the model module.\n        :param shape: the shape of the samples, (N, C, H, W).\n        :param noise: if specified, the noise from the encoder to sample.\n                      Should be of the same shape as `shape`.\n        :param clip_denoised: if True, clip x_start predictions to [-1, 1].\n        :param denoised_fn: if not None, a function which applies to the\n            x_start prediction before it is used to sample.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :param device: if specified, the device to create the samples on.\n                       If not specified, use a model parameter's device.\n        :param progress: if True, show a tqdm progress bar.\n        :return: a non-differentiable batch of samples.\n        \"\"\"\n        final = None\n        for sample in self.p_sample_loop_progressive(\n            model,\n            shape,\n            noise=noise,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n            device=device,\n            progress=progress,\n        ):\n            final = sample\n        return final[\"sample\"]\n\n    def p_sample_loop_progressive(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n    ):\n        \"\"\"\n        Generate samples from the model and yield intermediate samples from\n        each timestep of diffusion.\n\n        Arguments are the same as p_sample_loop().\n        Returns a generator over dicts, where each dict is the return value of\n        p_sample().\n        \"\"\"\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n        if noise is not None:\n            img = noise\n        else:\n            img = th.randn(*shape).to(device=device)\n        indices = list(range(self.num_timesteps))[::-1]\n\n        if progress:\n            # Lazy import so that we don't depend on tqdm.\n            from tqdm.auto import tqdm\n\n            indices = tqdm(indices)\n\n        for i in indices:\n            t = th.tensor([i] * shape[0], device=device)\n            with th.no_grad():\n                out = self.p_sample(\n                    model,\n                    img,\n                    t,\n                    
clip_denoised=clip_denoised,\n                    denoised_fn=denoised_fn,\n                    model_kwargs=model_kwargs,\n                )\n                yield out\n                img = out[\"sample\"]\n\n    def ddim_sample(\n        self,\n        model,\n        x,\n        t,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        eta=0.0,\n    ):\n        \"\"\"\n        Sample x_{t-1} from the model using DDIM.\n\n        Same usage as p_sample().\n        \"\"\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        # Usually our model outputs epsilon, but we re-derive it\n        # in case we used x_start or x_prev prediction.\n        eps = self._predict_eps_from_xstart(x, t, out[\"pred_xstart\"])\n        alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape)\n        alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape)\n        sigma = (\n            eta\n            * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar))\n            * th.sqrt(1 - alpha_bar / alpha_bar_prev)\n        )\n        # Equation 12.\n        noise = th.randn_like(x)\n        mean_pred = (\n            out[\"pred_xstart\"] * th.sqrt(alpha_bar_prev)\n            + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps\n        )\n        nonzero_mask = (\n            (t != 0).float().view(-1, *([1] * (len(x.shape) - 1)))\n        )  # no noise when t == 0\n        sample = mean_pred + nonzero_mask * sigma * noise\n        return {\"sample\": sample, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def ddim_reverse_sample(\n        self,\n        model,\n        x,\n        t,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        eta=0.0,\n    ):\n        \"\"\"\n        Sample x_{t+1} from the model using DDIM 
reverse ODE.\n        \"\"\"\n        assert eta == 0.0, \"Reverse ODE only for deterministic path\"\n        out = self.p_mean_variance(\n            model,\n            x,\n            t,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n        )\n        # Usually our model outputs epsilon, but we re-derive it\n        # in case we used x_start or x_prev prediction.\n        eps = (\n            _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x\n            - out[\"pred_xstart\"]\n        ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape)\n        alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape)\n\n        # Equation 12. reversed\n        mean_pred = (\n            out[\"pred_xstart\"] * th.sqrt(alpha_bar_next)\n            + th.sqrt(1 - alpha_bar_next) * eps\n        )\n\n        return {\"sample\": mean_pred, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def ddim_sample_loop(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        model_kwargs=None,\n        device=None,\n        progress=False,\n        eta=0.0,\n    ):\n        \"\"\"\n        Generate samples from the model using DDIM.\n\n        Same usage as p_sample_loop().\n        \"\"\"\n        final = None\n        for sample in self.ddim_sample_loop_progressive(\n            model,\n            shape,\n            noise=noise,\n            clip_denoised=clip_denoised,\n            denoised_fn=denoised_fn,\n            model_kwargs=model_kwargs,\n            device=device,\n            progress=progress,\n            eta=eta,\n        ):\n            final = sample\n        return final[\"sample\"]\n\n    def ddim_sample_loop_progressive(\n        self,\n        model,\n        shape,\n        noise=None,\n        clip_denoised=True,\n        denoised_fn=None,\n        
model_kwargs=None,\n        device=None,\n        progress=False,\n        eta=0.0,\n    ):\n        \"\"\"\n        Use DDIM to sample from the model and yield intermediate samples from\n        each timestep of DDIM.\n\n        Same usage as p_sample_loop_progressive().\n        \"\"\"\n        if device is None:\n            device = next(model.parameters()).device\n        assert isinstance(shape, (tuple, list))\n        if noise is not None:\n            img = noise\n        else:\n            img = th.randn(*shape).to(device=device)\n        indices = list(range(self.num_timesteps))[::-1]\n\n        if progress:\n            # Lazy import so that we don't depend on tqdm.\n            from tqdm.auto import tqdm\n\n            indices = tqdm(indices)\n\n        for i in indices:\n            t = th.tensor([i] * shape[0], device=device)\n            with th.no_grad():\n                out = self.ddim_sample(\n                    model,\n                    img,\n                    t,\n                    clip_denoised=clip_denoised,\n                    denoised_fn=denoised_fn,\n                    model_kwargs=model_kwargs,\n                    eta=eta,\n                )\n                yield out\n                img = out[\"sample\"]\n\n    def _vb_terms_bpd(\n        self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None\n    ):\n        \"\"\"\n        Get a term for the variational lower-bound.\n\n        The resulting units are bits (rather than nats, as one might expect).\n        This allows for comparison to other papers.\n\n        :return: a dict with the following keys:\n                 - 'output': a shape [N] tensor of NLLs or KLs.\n                 - 'pred_xstart': the x_0 predictions.\n        \"\"\"\n        true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance(\n            x_start=x_start, x_t=x_t, t=t\n        )\n        out = self.p_mean_variance(\n            model, x_t, t, clip_denoised=clip_denoised, 
model_kwargs=model_kwargs\n        )\n        kl = normal_kl(\n            true_mean, true_log_variance_clipped, out[\"mean\"], out[\"log_variance\"]\n        )\n        kl = mean_flat(kl) / np.log(2.0)\n\n        decoder_nll = -discretized_gaussian_log_likelihood(\n            x_start, means=out[\"mean\"], log_scales=0.5 * out[\"log_variance\"]\n        )\n        assert decoder_nll.shape == x_start.shape\n        decoder_nll = mean_flat(decoder_nll) / np.log(2.0)\n\n        # At the first timestep return the decoder NLL,\n        # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t))\n        output = th.where((t == 0), decoder_nll, kl)\n        return {\"output\": output, \"pred_xstart\": out[\"pred_xstart\"]}\n\n    def training_losses(self, model, x_start, t, model_kwargs=None, noise=None):\n        \"\"\"\n        Compute training losses for a single timestep.\n\n        :param model: the model to evaluate loss on.\n        :param x_start: the [N x C x ...] tensor of inputs.\n        :param t: a batch of timestep indices.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. 
This can be used for conditioning.\n        :param noise: if specified, the specific Gaussian noise to try to remove.\n        :return: a dict with the key \"loss\" containing a tensor of shape [N].\n                 Some mean or variance settings may also have other keys.\n        \"\"\"\n        if model_kwargs is None:\n            model_kwargs = {}\n        if noise is None:\n            noise = th.randn_like(x_start)\n        x_t = self.q_sample(x_start, t, noise=noise)\n\n        terms = {}\n\n        if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL:\n            terms[\"loss\"] = self._vb_terms_bpd(\n                model=model,\n                x_start=x_start,\n                x_t=x_t,\n                t=t,\n                clip_denoised=False,\n                model_kwargs=model_kwargs,\n            )[\"output\"]\n            if self.loss_type == LossType.RESCALED_KL:\n                terms[\"loss\"] *= self.num_timesteps\n        elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE:\n            model_output = model(x_t, self._scale_timesteps(t), **model_kwargs)\n\n            if self.model_var_type in [\n                ModelVarType.LEARNED,\n                ModelVarType.LEARNED_RANGE,\n            ]:\n                B, C = x_t.shape[:2]\n                assert model_output.shape == (B, C * 2, *x_t.shape[2:])\n                model_output, model_var_values = th.split(model_output, C, dim=1)\n                # Learn the variance using the variational bound, but don't let\n                # it affect our mean prediction.\n                frozen_out = th.cat([model_output.detach(), model_var_values], dim=1)\n                terms[\"vb\"] = self._vb_terms_bpd(\n                    model=lambda *args, r=frozen_out: r,\n                    x_start=x_start,\n                    x_t=x_t,\n                    t=t,\n                    clip_denoised=False,\n                )[\"output\"]\n                if 
self.loss_type == LossType.RESCALED_MSE:\n                    # Divide by 1000 for equivalence with initial implementation.\n                    # Without a factor of 1/1000, the VB term hurts the MSE term.\n                    terms[\"vb\"] *= self.num_timesteps / 1000.0\n\n            target = {\n                ModelMeanType.PREVIOUS_X: self.q_posterior_mean_variance(\n                    x_start=x_start, x_t=x_t, t=t\n                )[0],\n                ModelMeanType.START_X: x_start,\n                ModelMeanType.EPSILON: noise,\n            }[self.model_mean_type]\n            assert model_output.shape == target.shape == x_start.shape\n            terms[\"mse\"] = mean_flat((target - model_output) ** 2)\n            terms[\"sum\"] = (target - model_output).pow(2).sum(dim=(1, 2, 3))\n            if \"vb\" in terms:\n                terms[\"loss\"] = terms[\"mse\"] + terms[\"vb\"]\n            else:\n                terms[\"loss\"] = terms[\"sum\"]\n        else:\n            raise NotImplementedError(self.loss_type)\n\n        return terms\n\n    def _prior_bpd(self, x_start):\n        \"\"\"\n        Get the prior KL term for the variational lower-bound, measured in\n        bits-per-dim.\n\n        This term can't be optimized, as it only depends on the encoder.\n\n        :param x_start: the [N x C x ...] 
tensor of inputs.\n        :return: a batch of [N] KL values (in bits), one per batch element.\n        \"\"\"\n        batch_size = x_start.shape[0]\n        t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)\n        qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)\n        kl_prior = normal_kl(\n            mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0\n        )\n        return mean_flat(kl_prior) / np.log(2.0)\n\n    def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None):\n        \"\"\"\n        Compute the entire variational lower-bound, measured in bits-per-dim,\n        as well as other related quantities.\n\n        :param model: the model to evaluate loss on.\n        :param x_start: the [N x C x ...] tensor of inputs.\n        :param clip_denoised: if True, clip denoised samples.\n        :param model_kwargs: if not None, a dict of extra keyword arguments to\n            pass to the model. This can be used for conditioning.\n\n        :return: a dict containing the following keys:\n                 - total_bpd: the total variational lower-bound, per batch element.\n                 - prior_bpd: the prior term in the lower-bound.\n                 - vb: an [N x T] tensor of terms in the lower-bound.\n                 - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep.\n                 - mse: an [N x T] tensor of epsilon MSEs for each timestep.\n        \"\"\"\n        device = x_start.device\n        batch_size = x_start.shape[0]\n\n        vb = []\n        xstart_mse = []\n        mse = []\n        for t in list(range(self.num_timesteps))[::-1]:\n            t_batch = th.tensor([t] * batch_size, device=device)\n            noise = th.randn_like(x_start)\n            x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise)\n            # Calculate VLB term at the current timestep\n            with th.no_grad():\n                out = self._vb_terms_bpd(\n      
              model,\n                    x_start=x_start,\n                    x_t=x_t,\n                    t=t_batch,\n                    clip_denoised=clip_denoised,\n                    model_kwargs=model_kwargs,\n                )\n            vb.append(out[\"output\"])\n            xstart_mse.append(mean_flat((out[\"pred_xstart\"] - x_start) ** 2))\n            eps = self._predict_eps_from_xstart(x_t, t_batch, out[\"pred_xstart\"])\n            mse.append(mean_flat((eps - noise) ** 2))\n\n        vb = th.stack(vb, dim=1)\n        xstart_mse = th.stack(xstart_mse, dim=1)\n        mse = th.stack(mse, dim=1)\n\n        prior_bpd = self._prior_bpd(x_start)\n        total_bpd = vb.sum(dim=1) + prior_bpd\n        return {\n            \"total_bpd\": total_bpd,\n            \"prior_bpd\": prior_bpd,\n            \"vb\": vb,\n            \"xstart_mse\": xstart_mse,\n            \"mse\": mse,\n        }\n\n\ndef _extract_into_tensor(arr, timesteps, broadcast_shape):\n    \"\"\"\n    Extract values from a 1-D numpy array for a batch of indices.\n\n    :param arr: the 1-D numpy array.\n    :param timesteps: a tensor of indices into the array to extract.\n    :param broadcast_shape: a larger shape of K dimensions with the batch\n                            dimension equal to the length of timesteps.\n    :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims.\n    \"\"\"\n    res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float()\n    while len(res.shape) < len(broadcast_shape):\n        res = res[..., None]\n    return res.expand(broadcast_shape)\n"
  },
  {
    "path": "improved_diffusion/image_datasets.py",
    "content": "from PIL import Image\nimport blobfile as bf\nfrom mpi4py import MPI\nimport numpy as np\nfrom torch.utils.data import DataLoader, Dataset\n\n\ndef load_data(\n    *, data_dir, batch_size, image_size, class_cond=False, deterministic=False\n):\n    \"\"\"\n    For a dataset, create a generator over (images, kwargs) pairs.\n\n    Each image is an NCHW float tensor, and the kwargs dict contains zero or\n    more keys, each of which maps to a batched Tensor of its own.\n    The kwargs dict can be used for class labels, in which case the key is \"y\"\n    and the values are integer tensors of class labels.\n\n    :param data_dir: a dataset directory.\n    :param batch_size: the batch size of each returned pair.\n    :param image_size: the size to which images are resized.\n    :param class_cond: if True, include a \"y\" key in returned dicts for class\n                       label. If classes are not available and this is true, an\n                       exception will be raised.\n    :param deterministic: if True, yield results in a deterministic order.\n    \"\"\"\n    if not data_dir:\n        raise ValueError(\"unspecified data directory\")\n    all_files = _list_image_files_recursively(data_dir)\n    classes = None\n    if class_cond:\n        # Assume classes are the first part of the filename,\n        # before an underscore.\n        class_names = [bf.basename(path).split(\"_\")[0] for path in all_files]\n        sorted_classes = {x: i for i, x in enumerate(sorted(set(class_names)))}\n        classes = [sorted_classes[x] for x in class_names]\n    dataset = ImageDataset(\n        image_size,\n        all_files,\n        classes=classes,\n        shard=MPI.COMM_WORLD.Get_rank(),\n        num_shards=MPI.COMM_WORLD.Get_size(),\n    )\n    if deterministic:\n        loader = DataLoader(\n            dataset, batch_size=batch_size, shuffle=False, num_workers=0, drop_last=True\n        )\n    else:\n        loader = DataLoader(\n            dataset, 
batch_size=batch_size, shuffle=True, num_workers=0, drop_last=True\n        )\n    while True:\n        yield from loader\n\n\ndef _list_image_files_recursively(data_dir):\n    results = []\n    for entry in sorted(bf.listdir(data_dir)):\n        full_path = bf.join(data_dir, entry)\n        ext = entry.split(\".\")[-1]\n        if \".\" in entry and ext.lower() in [\"jpg\", \"jpeg\", \"png\", \"gif\"]:\n            results.append(full_path)\n        elif bf.isdir(full_path):\n            results.extend(_list_image_files_recursively(full_path))\n    return results\n\n\nclass ImageDataset(Dataset):\n    def __init__(self, resolution, image_paths, classes=None, shard=0, num_shards=1):\n        super().__init__()\n        self.resolution = resolution\n        self.local_images = image_paths[shard:][::num_shards]\n        self.local_classes = None if classes is None else classes[shard:][::num_shards]\n\n    def __len__(self):\n        return len(self.local_images)\n\n    def __getitem__(self, idx):\n        path = self.local_images[idx]\n        with bf.BlobFile(path, \"rb\") as f:\n            pil_image = Image.open(f)\n            pil_image.load()\n\n        # We are not on a new enough PIL to support the `reducing_gap`\n        # argument, which uses BOX downsampling at powers of two first.\n        # Thus, we do it by hand to improve downsample quality.\n        while min(*pil_image.size) >= 2 * self.resolution:\n            pil_image = pil_image.resize(\n                tuple(x // 2 for x in pil_image.size), resample=Image.BOX\n            )\n\n        scale = self.resolution / min(*pil_image.size)\n        pil_image = pil_image.resize(\n            tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC\n        )\n\n        arr = np.array(pil_image.convert(\"RGB\"))\n        crop_y = (arr.shape[0] - self.resolution) // 2\n        crop_x = (arr.shape[1] - self.resolution) // 2\n        arr = arr[crop_y : crop_y + self.resolution, crop_x : crop_x + 
self.resolution]\n        arr = arr.astype(np.float32) / 127.5 - 1\n\n        out_dict = {}\n        if self.local_classes is not None:\n            out_dict[\"y\"] = np.array(self.local_classes[idx], dtype=np.int64)\n        return np.transpose(arr, [2, 0, 1]), out_dict\n"
  },
  {
    "path": "improved_diffusion/logger.py",
    "content": "\"\"\"\nLogger copied from OpenAI baselines to avoid extra RL-based dependencies:\nhttps://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/logger.py\n\"\"\"\n\nimport os\nimport sys\nimport shutil\nimport os.path as osp\nimport json\nimport time\nimport datetime\nimport tempfile\nimport warnings\nfrom collections import defaultdict\nfrom contextlib import contextmanager\n\nDEBUG = 10\nINFO = 20\nWARN = 30\nERROR = 40\n\nDISABLED = 50\n\n\nclass KVWriter(object):\n    def writekvs(self, kvs):\n        raise NotImplementedError\n\n\nclass SeqWriter(object):\n    def writeseq(self, seq):\n        raise NotImplementedError\n\n\nclass HumanOutputFormat(KVWriter, SeqWriter):\n    def __init__(self, filename_or_file):\n        if isinstance(filename_or_file, str):\n            self.file = open(filename_or_file, \"wt\")\n            self.own_file = True\n        else:\n            assert hasattr(filename_or_file, \"read\"), (\n                \"expected file or str, got %s\" % filename_or_file\n            )\n            self.file = filename_or_file\n            self.own_file = False\n\n    def writekvs(self, kvs):\n        # Create strings for printing\n        key2str = {}\n        for (key, val) in sorted(kvs.items()):\n            if hasattr(val, \"__float__\"):\n                valstr = \"%-8.3g\" % val\n            else:\n                valstr = str(val)\n            key2str[self._truncate(key)] = self._truncate(valstr)\n\n        # Find max widths\n        if len(key2str) == 0:\n            print(\"WARNING: tried to write empty key-value dict\")\n            return\n        else:\n            keywidth = max(map(len, key2str.keys()))\n            valwidth = max(map(len, key2str.values()))\n\n        # Write out the data\n        dashes = \"-\" * (keywidth + valwidth + 7)\n        lines = [dashes]\n        for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()):\n            lines.append(\n           
     \"| %s%s | %s%s |\"\n                % (key, \" \" * (keywidth - len(key)), val, \" \" * (valwidth - len(val)))\n            )\n        lines.append(dashes)\n        self.file.write(\"\\n\".join(lines) + \"\\n\")\n\n        # Flush the output to the file\n        self.file.flush()\n\n    def _truncate(self, s):\n        maxlen = 30\n        return s[: maxlen - 3] + \"...\" if len(s) > maxlen else s\n\n    def writeseq(self, seq):\n        seq = list(seq)\n        for (i, elem) in enumerate(seq):\n            self.file.write(f\"{datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')}  {elem}\")\n            if i < len(seq) - 1:  # add space unless this is the last one\n                self.file.write(\" \")\n        self.file.write(\"\\n\")\n        self.file.flush()\n\n    def close(self):\n        if self.own_file:\n            self.file.close()\n\n\nclass JSONOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, \"wt\")\n\n    def writekvs(self, kvs):\n        for k, v in sorted(kvs.items()):\n            if hasattr(v, \"dtype\"):\n                kvs[k] = float(v)\n        self.file.write(json.dumps(kvs) + \"\\n\")\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\n\nclass CSVOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, \"w+t\")\n        self.keys = []\n        self.sep = \",\"\n\n    def writekvs(self, kvs):\n        # Add our current row to the history\n        extra_keys = list(kvs.keys() - self.keys)\n        extra_keys.sort()\n        if extra_keys:\n            self.keys.extend(extra_keys)\n            self.file.seek(0)\n            lines = self.file.readlines()\n            self.file.seek(0)\n            for (i, k) in enumerate(self.keys):\n                if i > 0:\n                    self.file.write(\",\")\n                self.file.write(k)\n            self.file.write(\"\\n\")\n            for line in lines[1:]:\n            
    self.file.write(line[:-1])\n                self.file.write(self.sep * len(extra_keys))\n                self.file.write(\"\\n\")\n        for (i, k) in enumerate(self.keys):\n            if i > 0:\n                self.file.write(\",\")\n            v = kvs.get(k)\n            if v is not None:\n                self.file.write(str(v))\n        self.file.write(\"\\n\")\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\n\nclass TensorBoardOutputFormat(KVWriter):\n    \"\"\"\n    Dumps key/value pairs into TensorBoard's numeric format.\n    \"\"\"\n\n    def __init__(self, dir):\n        os.makedirs(dir, exist_ok=True)\n        self.dir = dir\n        self.step = 1\n        prefix = \"events\"\n        path = osp.join(osp.abspath(dir), prefix)\n        import tensorflow as tf\n        from tensorflow.python import pywrap_tensorflow\n        from tensorflow.core.util import event_pb2\n        from tensorflow.python.util import compat\n\n        self.tf = tf\n        self.event_pb2 = event_pb2\n        self.pywrap_tensorflow = pywrap_tensorflow\n        self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path))\n\n    def writekvs(self, kvs):\n        def summary_val(k, v):\n            kwargs = {\"tag\": k, \"simple_value\": float(v)}\n            return self.tf.Summary.Value(**kwargs)\n\n        summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()])\n        event = self.event_pb2.Event(wall_time=time.time(), summary=summary)\n        event.step = (\n            self.step\n        )  # is there any reason why you'd want to specify the step?\n        self.writer.WriteEvent(event)\n        self.writer.Flush()\n        self.step += 1\n\n    def close(self):\n        if self.writer:\n            self.writer.Close()\n            self.writer = None\n\n\ndef make_output_format(format, ev_dir, log_suffix=\"\"):\n    os.makedirs(ev_dir, exist_ok=True)\n    if format == \"stdout\":\n        return 
HumanOutputFormat(sys.stdout)\n    elif format == \"log\":\n        return HumanOutputFormat(osp.join(ev_dir, \"log%s.txt\" % log_suffix))\n    elif format == \"json\":\n        return JSONOutputFormat(osp.join(ev_dir, \"progress%s.json\" % log_suffix))\n    elif format == \"csv\":\n        return CSVOutputFormat(osp.join(ev_dir, \"progress%s.csv\" % log_suffix))\n    elif format == \"tensorboard\":\n        return TensorBoardOutputFormat(osp.join(ev_dir, \"tb%s\" % log_suffix))\n    else:\n        raise ValueError(\"Unknown format specified: %s\" % (format,))\n\n\n# ================================================================\n# API\n# ================================================================\n\n\ndef logkv(key, val):\n    \"\"\"\n    Log a value of some diagnostic\n    Call this once for each diagnostic quantity, each iteration\n    If called many times, last value will be used.\n    \"\"\"\n    get_current().logkv(key, val)\n\n\ndef logkv_mean(key, val):\n    \"\"\"\n    The same as logkv(), but if called many times, values averaged.\n    \"\"\"\n    get_current().logkv_mean(key, val)\n\n\ndef logkvs(d):\n    \"\"\"\n    Log a dictionary of key-value pairs\n    \"\"\"\n    for (k, v) in d.items():\n        logkv(k, v)\n\n\ndef dumpkvs():\n    \"\"\"\n    Write all of the diagnostics from the current iteration\n    \"\"\"\n    return get_current().dumpkvs()\n\n\ndef getkvs():\n    return get_current().name2val\n\n\ndef log(*args, level=INFO):\n    \"\"\"\n    Write the sequence of args, with no separators, to the console and output files (if you've configured an output file).\n    \"\"\"\n    get_current().log(*args, level=level)\n\n\ndef debug(*args):\n    log(*args, level=DEBUG)\n\n\ndef info(*args):\n    log(*args, level=INFO)\n\n\ndef warn(*args):\n    log(*args, level=WARN)\n\n\ndef error(*args):\n    log(*args, level=ERROR)\n\n\ndef set_level(level):\n    \"\"\"\n    Set logging threshold on current logger.\n    \"\"\"\n    
get_current().set_level(level)\n\n\ndef set_comm(comm):\n    get_current().set_comm(comm)\n\n\ndef get_dir():\n    \"\"\"\n    Get directory that log files are being written to.\n    will be None if there is no output directory (i.e., if you didn't call start)\n    \"\"\"\n    return get_current().get_dir()\n\n\nrecord_tabular = logkv\ndump_tabular = dumpkvs\n\n\n@contextmanager\ndef profile_kv(scopename):\n    logkey = \"wait_\" + scopename\n    tstart = time.time()\n    try:\n        yield\n    finally:\n        get_current().name2val[logkey] += time.time() - tstart\n\n\ndef profile(n):\n    \"\"\"\n    Usage:\n    @profile(\"my_func\")\n    def my_func(): code\n    \"\"\"\n\n    def decorator_with_name(func):\n        def func_wrapper(*args, **kwargs):\n            with profile_kv(n):\n                return func(*args, **kwargs)\n\n        return func_wrapper\n\n    return decorator_with_name\n\n\n# ================================================================\n# Backend\n# ================================================================\n\n\ndef get_current():\n    if Logger.CURRENT is None:\n        _configure_default_logger()\n\n    return Logger.CURRENT\n\n\nclass Logger(object):\n    DEFAULT = None  # A logger with no output files. 
(See right below class definition)\n    # So that you can still log to the terminal without setting up any output files\n    CURRENT = None  # Current logger being used by the free functions above\n\n    def __init__(self, dir, output_formats, comm=None):\n        self.name2val = defaultdict(float)  # values this iteration\n        self.name2cnt = defaultdict(int)\n        self.level = INFO\n        self.dir = dir\n        self.output_formats = output_formats\n        self.comm = comm\n\n    # Logging API, forwarded\n    # ----------------------------------------\n    def logkv(self, key, val):\n        self.name2val[key] = val\n\n    def logkv_mean(self, key, val):\n        oldval, cnt = self.name2val[key], self.name2cnt[key]\n        self.name2val[key] = oldval * cnt / (cnt + 1) + val / (cnt + 1)\n        self.name2cnt[key] = cnt + 1\n\n    def dumpkvs(self):\n        if self.comm is None:\n            d = self.name2val\n        else:\n            d = mpi_weighted_mean(\n                self.comm,\n                {\n                    name: (val, self.name2cnt.get(name, 1))\n                    for (name, val) in self.name2val.items()\n                },\n            )\n            if self.comm.rank != 0:\n                d[\"dummy\"] = 1  # so we don't get a warning about empty dict\n        out = d.copy()  # Return the dict for unit testing purposes\n        for fmt in self.output_formats:\n            if isinstance(fmt, KVWriter):\n                fmt.writekvs(d)\n        self.name2val.clear()\n        self.name2cnt.clear()\n        return out\n\n    def log(self, *args, level=INFO):\n        if self.level <= level:\n            self._do_log(args)\n\n    # Configuration\n    # ----------------------------------------\n    def set_level(self, level):\n        self.level = level\n\n    def set_comm(self, comm):\n        self.comm = comm\n\n    def get_dir(self):\n        return self.dir\n\n    def close(self):\n        for fmt in self.output_formats:\n         
   fmt.close()\n\n    # Misc\n    # ----------------------------------------\n    def _do_log(self, args):\n        for fmt in self.output_formats:\n            if isinstance(fmt, SeqWriter):\n                fmt.writeseq(map(str, args))\n\n\ndef get_rank_without_mpi_import():\n    # check environment variables here instead of importing mpi4py\n    # to avoid calling MPI_Init() when this module is imported\n    for varname in [\"PMI_RANK\", \"OMPI_COMM_WORLD_RANK\"]:\n        if varname in os.environ:\n            return int(os.environ[varname])\n    return 0\n\n\ndef mpi_weighted_mean(comm, local_name2valcount):\n    \"\"\"\n    Copied from: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/common/mpi_util.py#L110\n    Perform a weighted average over dicts that are each on a different node\n    Input: local_name2valcount: dict mapping key -> (value, count)\n    Returns: key -> mean\n    \"\"\"\n    all_name2valcount = comm.gather(local_name2valcount)\n    if comm.rank == 0:\n        name2sum = defaultdict(float)\n        name2count = defaultdict(float)\n        for n2vc in all_name2valcount:\n            for (name, (val, count)) in n2vc.items():\n                try:\n                    val = float(val)\n                except ValueError:\n                    if comm.rank == 0:\n                        warnings.warn(\n                            \"WARNING: tried to compute mean on non-float {}={}\".format(\n                                name, val\n                            )\n                        )\n                else:\n                    name2sum[name] += val * count\n                    name2count[name] += count\n        return {name: name2sum[name] / name2count[name] for name in name2sum}\n    else:\n        return {}\n\n\ndef configure(dir=None, format_strs=None, comm=None, log_suffix=\"\"):\n    \"\"\"\n    If comm is provided, average all numerical stats across that comm\n    \"\"\"\n    if dir is None:\n  
      dir = os.getenv(\"OPENAI_LOGDIR\")\n    if dir is None:\n        dir = osp.join(\n            tempfile.gettempdir(),\n            datetime.datetime.now().strftime(\"openai-%Y-%m-%d-%H-%M-%S-%f\"),\n        )\n    assert isinstance(dir, str)\n    dir = os.path.expanduser(dir)\n    os.makedirs(os.path.expanduser(dir), exist_ok=True)\n\n    rank = get_rank_without_mpi_import()\n    if rank > 0:\n        log_suffix = log_suffix + \"-rank%03i\" % rank\n\n    if format_strs is None:\n        if rank == 0:\n            format_strs = os.getenv(\"OPENAI_LOG_FORMAT\", \"stdout,log,csv\").split(\",\")\n        else:\n            format_strs = os.getenv(\"OPENAI_LOG_FORMAT_MPI\", \"log\").split(\",\")\n    format_strs = filter(None, format_strs)\n    output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs]\n\n    Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm)\n    if output_formats:\n        log(\"Logging to %s\" % dir)\n\n\ndef _configure_default_logger():\n    configure()\n    Logger.DEFAULT = Logger.CURRENT\n\n\ndef reset():\n    if Logger.CURRENT is not Logger.DEFAULT:\n        Logger.CURRENT.close()\n        Logger.CURRENT = Logger.DEFAULT\n        log(\"Reset logger\")\n\n\n@contextmanager\ndef scoped_configure(dir=None, format_strs=None, comm=None):\n    prevlogger = Logger.CURRENT\n    configure(dir=dir, format_strs=format_strs, comm=comm)\n    try:\n        yield\n    finally:\n        Logger.CURRENT.close()\n        Logger.CURRENT = prevlogger\n\n"
  },
  {
    "path": "improved_diffusion/losses.py",
    "content": "\"\"\"\nHelpers for various likelihood-based losses. These are ported from the original\nHo et al. diffusion models codebase:\nhttps://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/utils.py\n\"\"\"\n\nimport numpy as np\n\nimport torch as th\n\n\ndef normal_kl(mean1, logvar1, mean2, logvar2):\n    \"\"\"\n    Compute the KL divergence between two gaussians.\n\n    Shapes are automatically broadcasted, so batches can be compared to\n    scalars, among other use cases.\n    \"\"\"\n    tensor = None\n    for obj in (mean1, logvar1, mean2, logvar2):\n        if isinstance(obj, th.Tensor):\n            tensor = obj\n            break\n    assert tensor is not None, \"at least one argument must be a Tensor\"\n\n    # Force variances to be Tensors. Broadcasting helps convert scalars to\n    # Tensors, but it does not work for th.exp().\n    logvar1, logvar2 = [\n        x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor)\n        for x in (logvar1, logvar2)\n    ]\n\n    return 0.5 * (\n        -1.0\n        + logvar2\n        - logvar1\n        + th.exp(logvar1 - logvar2)\n        + ((mean1 - mean2) ** 2) * th.exp(-logvar2)\n    )\n\n\ndef approx_standard_normal_cdf(x):\n    \"\"\"\n    A fast approximation of the cumulative distribution function of the\n    standard normal.\n    \"\"\"\n    return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3))))\n\n\ndef discretized_gaussian_log_likelihood(x, *, means, log_scales):\n    \"\"\"\n    Compute the log-likelihood of a Gaussian distribution discretizing to a\n    given image.\n\n    :param x: the target images. 
It is assumed that these were uint8 values,\n              rescaled to the range [-1, 1].\n    :param means: the Gaussian mean Tensor.\n    :param log_scales: the Gaussian log stddev Tensor.\n    :return: a tensor like x of log probabilities (in nats).\n    \"\"\"\n    assert x.shape == means.shape == log_scales.shape\n    centered_x = x - means\n    inv_stdv = th.exp(-log_scales)\n    plus_in = inv_stdv * (centered_x + 1.0 / 255.0)\n    cdf_plus = approx_standard_normal_cdf(plus_in)\n    min_in = inv_stdv * (centered_x - 1.0 / 255.0)\n    cdf_min = approx_standard_normal_cdf(min_in)\n    log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12))\n    log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12))\n    cdf_delta = cdf_plus - cdf_min\n    log_probs = th.where(\n        x < -0.999,\n        log_cdf_plus,\n        th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))),\n    )\n    assert log_probs.shape == x.shape\n    return log_probs\n"
  },
  {
    "path": "improved_diffusion/metrics.py",
    "content": "import numpy as np\nfrom skimage.morphology import binary_dilation, disk\n\n\ndef WCov_metric(pred, gt_mask):\n    A1 = float(np.count_nonzero(pred))\n    A2 = float(np.count_nonzero(gt_mask))\n    if A1 >= A2: return A2 / A1\n    if A2 > A1: return A1 / A2\n\n\ndef FBound_metric(pred, gt_mask):\n    tmp1 = db_eval_boundary(pred, gt_mask, 1)[0]\n    tmp2 = db_eval_boundary(pred, gt_mask, 2)[0]\n    tmp3 = db_eval_boundary(pred, gt_mask, 3)[0]\n    tmp4 = db_eval_boundary(pred, gt_mask, 4)[0]\n    tmp5 = db_eval_boundary(pred, gt_mask, 5)[0]\n    return (tmp1 + tmp2 + tmp3 + tmp4 + tmp5) / 5.0\n\n\ndef db_eval_boundary(foreground_mask, gt_mask, bound_th):\n    \"\"\"\n    Compute mean,recall and decay from per-frame evaluation.\n    Calculates precision/recall for boundaries between foreground_mask and\n    gt_mask using morphological operators to speed it up.\n    Arguments:\n        foreground_mask (ndarray): binary segmentation image.\n        gt_mask         (ndarray): binary annotated image.\n    Returns:\n        F (float): boundaries F-measure\n        P (float): boundaries precision\n        R (float): boundaries recall\n    \"\"\"\n    assert np.atleast_3d(foreground_mask).shape[2] == 1\n\n    bound_pix = bound_th if bound_th >= 1 else \\\n        np.ceil(bound_th * np.linalg.norm(foreground_mask.shape))\n\n    # Get the pixel boundaries of both masks\n    fg_boundary = seg2bmap(foreground_mask)\n    gt_boundary = seg2bmap(gt_mask)\n\n    fg_dil = binary_dilation(fg_boundary, disk(bound_pix))\n    gt_dil = binary_dilation(gt_boundary, disk(bound_pix))\n\n    # Get the intersection\n    gt_match = gt_boundary * fg_dil\n    fg_match = fg_boundary * gt_dil\n\n    # Area of the intersection\n    n_fg = np.sum(fg_boundary)\n    n_gt = np.sum(gt_boundary)\n\n    # % Compute precision and recall\n    if n_fg == 0 and n_gt > 0:\n        precision = 1\n        recall = 0\n    elif n_fg > 0 and n_gt == 0:\n        precision = 0\n        recall = 1\n   
 elif n_fg == 0 and n_gt == 0:\n        precision = 1\n        recall = 1\n    else:\n        precision = np.sum(fg_match) / float(n_fg)\n        recall = np.sum(gt_match) / float(n_gt)\n\n    # Compute F measure\n    if precision + recall == 0:\n        F = 0\n    else:\n        F = 2 * precision * recall / (precision + recall)\n\n    return F, precision, recall, np.sum(fg_match), n_fg, np.sum(gt_match), n_gt\n\n\ndef seg2bmap(seg, width=None, height=None):\n    \"\"\"\n    From a segmentation, compute a binary boundary map with 1 pixel wide\n    boundaries.  The boundary pixels are offset by 1/2 pixel towards the\n    origin from the actual segment boundary.\n    Arguments:\n        seg     : Segments labeled from 1..k.\n        width   : Width of desired bmap  <= seg.shape[1]\n        height  : Height of desired bmap <= seg.shape[0]\n    Returns:\n        bmap (ndarray): Binary boundary map.\n     David Martin <dmartin@eecs.berkeley.edu>\n     January 2003\n    \"\"\"\n    seg = seg.astype(bool)\n\n    assert np.atleast_3d(seg).shape[2] == 1\n\n    width = seg.shape[1] if width is None else width\n    height = seg.shape[0] if height is None else height\n\n    h, w = seg.shape[:2]\n\n    ar1 = float(width) / float(height)\n    ar2 = float(w) / float(h)\n\n    # Boolean 'or' is required here: bitwise '|' binds tighter than the comparisons.\n    assert not (width > w or height > h or abs(ar1 - ar2) > 0.01), \\\n        \"Can't convert %dx%d seg to %dx%d bmap.\"
 % (w, h, width, height)\n\n    e = np.zeros_like(seg)\n    s = np.zeros_like(seg)\n    se = np.zeros_like(seg)\n\n    e[:, :-1] = seg[:, 1:]\n    s[:-1, :] = seg[1:, :]\n    se[:-1, :-1] = seg[1:, 1:]\n\n    b = seg ^ e | seg ^ s | seg ^ se\n    b[-1, :] = seg[-1, :] ^ e[-1, :]\n    b[:, -1] = seg[:, -1] ^ s[:, -1]\n    b[-1, -1] = 0\n\n    if w == width and h == height:\n        bmap = b\n    else:\n        bmap = np.zeros((height, width))\n        for x in range(w):\n            for y in range(h):\n                if b[y, x]:\n                    # Scale (y, x) into the target resolution; indices must be ints.\n                    j = int(np.floor(y * height / h))\n                    i = int(np.floor(x * width / w))\n                    bmap[j, i] = 1\n\n    return bmap"
  },
  {
    "path": "improved_diffusion/nn.py",
    "content": "\"\"\"\nVarious utilities for neural networks.\n\"\"\"\n\nimport math\n\nimport torch as th\nimport torch.nn as nn\n\n\n# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.\nclass SiLU(nn.Module):\n    def forward(self, x):\n        return x * th.sigmoid(x)\n\n\nclass GroupNorm32(nn.GroupNorm):\n    def forward(self, x):\n        return super().forward(x.float()).type(x.dtype)\n\n\ndef conv_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D convolution module.\n    \"\"\"\n    if dims == 1:\n        return nn.Conv1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.Conv2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.Conv3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef linear(*args, **kwargs):\n    \"\"\"\n    Create a linear module.\n    \"\"\"\n    return nn.Linear(*args, **kwargs)\n\n\ndef avg_pool_nd(dims, *args, **kwargs):\n    \"\"\"\n    Create a 1D, 2D, or 3D average pooling module.\n    \"\"\"\n    if dims == 1:\n        return nn.AvgPool1d(*args, **kwargs)\n    elif dims == 2:\n        return nn.AvgPool2d(*args, **kwargs)\n    elif dims == 3:\n        return nn.AvgPool3d(*args, **kwargs)\n    raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef update_ema(target_params, source_params, rate=0.99):\n    \"\"\"\n    Update target parameters to be closer to those of source parameters using\n    an exponential moving average.\n\n    :param target_params: the target parameter sequence.\n    :param source_params: the source parameter sequence.\n    :param rate: the EMA rate (closer to 1 means slower).\n    \"\"\"\n    for targ, src in zip(target_params, source_params):\n        targ.detach().mul_(rate).add_(src, alpha=1 - rate)\n\n\ndef swap_ema(target_params, source_params):\n    \"\"\"\n    Update target parameters to be closer to those of source parameters using\n    an exponential moving average.\n\n    :param target_params: the target parameter sequence.\n    
:param source_params: the source parameter sequence.\n    \"\"\"\n    for targ, src in zip(target_params, source_params):\n        temp = targ.data.clone()\n        targ.data.copy_(src.data)\n        src.data.copy_(temp)\n\n\ndef zero_module(module):\n    \"\"\"\n    Zero out the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().zero_()\n    return module\n\n\ndef scale_module(module, scale):\n    \"\"\"\n    Scale the parameters of a module and return it.\n    \"\"\"\n    for p in module.parameters():\n        p.detach().mul_(scale)\n    return module\n\n\ndef mean_flat(tensor):\n    \"\"\"\n    Take the mean over all non-batch dimensions.\n    \"\"\"\n    return tensor.mean(dim=list(range(1, len(tensor.shape))))\n\n\ndef normalization(channels):\n    \"\"\"\n    Make a standard normalization layer.\n\n    :param channels: number of input channels.\n    :return: an nn.Module for normalization.\n    \"\"\"\n    return GroupNorm32(32, channels)\n\n\ndef timestep_embedding(timesteps, dim, max_period=10000):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n\n    :param timesteps: a 1-D Tensor of N indices, one per batch element.\n                      These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an [N x dim] Tensor of positional embeddings.\n    \"\"\"\n    half = dim // 2\n    freqs = th.exp(\n        -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half\n    ).to(device=timesteps.device)\n    args = timesteps[:, None].float() * freqs[None]\n    embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)\n    if dim % 2:\n        embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)\n    return embedding\n\n\ndef checkpoint(func, inputs, params, flag):\n    \"\"\"\n    Evaluate a function without caching intermediate activations, allowing for\n    
reduced memory at the expense of extra compute in the backward pass.\n\n    :param func: the function to evaluate.\n    :param inputs: the argument sequence to pass to `func`.\n    :param params: a sequence of parameters `func` depends on but does not\n                   explicitly take as arguments.\n    :param flag: if False, disable gradient checkpointing.\n    \"\"\"\n    if flag:\n        args = tuple(inputs) + tuple(params)\n        return CheckpointFunction.apply(func, len(inputs), *args)\n    else:\n        return func(*inputs)\n\n\nclass CheckpointFunction(th.autograd.Function):\n    @staticmethod\n    def forward(ctx, run_function, length, *args):\n        ctx.run_function = run_function\n        ctx.input_tensors = list(args[:length])\n        ctx.input_params = list(args[length:])\n        with th.no_grad():\n            output_tensors = ctx.run_function(*ctx.input_tensors)\n        return output_tensors\n\n    @staticmethod\n    def backward(ctx, *output_grads):\n        ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]\n        with th.enable_grad():\n            # Fixes a bug where the first op in run_function modifies the\n            # Tensor storage in place, which is not allowed for detach()'d\n            # Tensors.\n            shallow_copies = [x.view_as(x) for x in ctx.input_tensors]\n            output_tensors = ctx.run_function(*shallow_copies)\n        input_grads = th.autograd.grad(\n            output_tensors,\n            ctx.input_tensors + ctx.input_params,\n            output_grads,\n            allow_unused=True,\n        )\n        del ctx.input_tensors\n        del ctx.input_params\n        del output_tensors\n        return (None, None) + input_grads\n"
  },
  {
    "path": "improved_diffusion/resample.py",
    "content": "from abc import ABC, abstractmethod\n\nimport numpy as np\nimport torch as th\nimport torch.distributed as dist\n\n\ndef create_named_schedule_sampler(name, diffusion):\n    \"\"\"\n    Create a ScheduleSampler from a library of pre-defined samplers.\n\n    :param name: the name of the sampler.\n    :param diffusion: the diffusion object to sample for.\n    \"\"\"\n    if name == \"uniform\":\n        return UniformSampler(diffusion)\n    elif name == \"loss-second-moment\":\n        return LossSecondMomentResampler(diffusion)\n    else:\n        raise NotImplementedError(f\"unknown schedule sampler: {name}\")\n\n\nclass ScheduleSampler(ABC):\n    \"\"\"\n    A distribution over timesteps in the diffusion process, intended to reduce\n    variance of the objective.\n\n    By default, samplers perform unbiased importance sampling, in which the\n    objective's mean is unchanged.\n    However, subclasses may override sample() to change how the resampled\n    terms are reweighted, allowing for actual changes in the objective.\n    \"\"\"\n\n    @abstractmethod\n    def weights(self):\n        \"\"\"\n        Get a numpy array of weights, one per diffusion step.\n\n        The weights needn't be normalized, but must be positive.\n        \"\"\"\n\n    def sample(self, batch_size, device):\n        \"\"\"\n        Importance-sample timesteps for a batch.\n\n        :param batch_size: the number of timesteps.\n        :param device: the torch device to save to.\n        :return: a tuple (timesteps, weights):\n                 - timesteps: a tensor of timestep indices.\n                 - weights: a tensor of weights to scale the resulting losses.\n        \"\"\"\n        w = self.weights()\n        p = w / np.sum(w)\n        indices_np = np.random.choice(len(p), size=(batch_size,), p=p)\n        indices = th.from_numpy(indices_np).long().to(device)\n        weights_np = 1 / (len(p) * p[indices_np])\n        weights = 
th.from_numpy(weights_np).float().to(device)\n        return indices, weights\n\n\nclass UniformSampler(ScheduleSampler):\n    def __init__(self, diffusion):\n        self.diffusion = diffusion\n        self._weights = np.ones([diffusion.num_timesteps])\n\n    def weights(self):\n        return self._weights\n\n\nclass LossAwareSampler(ScheduleSampler):\n    def update_with_local_losses(self, local_ts, local_losses):\n        \"\"\"\n        Update the reweighting using losses from a model.\n\n        Call this method from each rank with a batch of timesteps and the\n        corresponding losses for each of those timesteps.\n        This method will perform synchronization to make sure all of the ranks\n        maintain the exact same reweighting.\n\n        :param local_ts: an integer Tensor of timesteps.\n        :param local_losses: a 1D Tensor of losses.\n        \"\"\"\n        batch_sizes = [\n            th.tensor([0], dtype=th.int32, device=local_ts.device)\n            for _ in range(dist.get_world_size())\n        ]\n        dist.all_gather(\n            batch_sizes,\n            th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device),\n        )\n\n        # Pad all_gather batches to be the maximum batch size.\n        batch_sizes = [x.item() for x in batch_sizes]\n        max_bs = max(batch_sizes)\n\n        timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes]\n        loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes]\n        dist.all_gather(timestep_batches, local_ts)\n        dist.all_gather(loss_batches, local_losses)\n        timesteps = [\n            x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]\n        ]\n        losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]\n        self.update_with_all_losses(timesteps, losses)\n\n    @abstractmethod\n    def update_with_all_losses(self, ts, losses):\n        \"\"\"\n        Update the 
reweighting using losses from a model.\n\n        Sub-classes should override this method to update the reweighting\n        using losses from the model.\n\n        This method directly updates the reweighting without synchronizing\n        between workers. It is called by update_with_local_losses from all\n        ranks with identical arguments. Thus, it should have deterministic\n        behavior to maintain state across workers.\n\n        :param ts: a list of int timesteps.\n        :param losses: a list of float losses, one per timestep.\n        \"\"\"\n\n\nclass LossSecondMomentResampler(LossAwareSampler):\n    def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):\n        self.diffusion = diffusion\n        self.history_per_term = history_per_term\n        self.uniform_prob = uniform_prob\n        self._loss_history = np.zeros(\n            [diffusion.num_timesteps, history_per_term], dtype=np.float64\n        )\n        self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=int)\n\n    def weights(self):\n        if not self._warmed_up():\n            return np.ones([self.diffusion.num_timesteps], dtype=np.float64)\n        weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))\n        weights /= np.sum(weights)\n        weights *= 1 - self.uniform_prob\n        weights += self.uniform_prob / len(weights)\n        return weights\n\n    def update_with_all_losses(self, ts, losses):\n        for t, loss in zip(ts, losses):\n            if self._loss_counts[t] == self.history_per_term:\n                # Shift out the oldest loss term.\n                self._loss_history[t, :-1] = self._loss_history[t, 1:]\n                self._loss_history[t, -1] = loss\n            else:\n                self._loss_history[t, self._loss_counts[t]] = loss\n                self._loss_counts[t] += 1\n\n    def _warmed_up(self):\n        return (self._loss_counts == self.history_per_term).all()\n"
  },
  {
    "path": "improved_diffusion/respace.py",
    "content": "import numpy as np\nimport torch as th\n\nfrom .gaussian_diffusion import GaussianDiffusion\n\n\ndef space_timesteps(num_timesteps, section_counts):\n    \"\"\"\n    Create a list of timesteps to use from an original diffusion process,\n    given the number of timesteps we want to take from equally-sized portions\n    of the original process.\n\n    For example, if there's 300 timesteps and the section counts are [10,15,20]\n    then the first 100 timesteps are strided to be 10 timesteps, the second 100\n    are strided to be 15 timesteps, and the final 100 are strided to be 20.\n\n    If the stride is a string starting with \"ddim\", then the fixed striding\n    from the DDIM paper is used, and only one section is allowed.\n\n    :param num_timesteps: the number of diffusion steps in the original\n                          process to divide up.\n    :param section_counts: either a list of numbers, or a string containing\n                           comma-separated numbers, indicating the step count\n                           per section. 
As a special case, use \"ddimN\" where N\n                           is a number of steps to use the striding from the\n                           DDIM paper.\n    :return: a set of diffusion steps from the original process to use.\n    \"\"\"\n    if isinstance(section_counts, str):\n        if section_counts.startswith(\"ddim\"):\n            desired_count = int(section_counts[len(\"ddim\") :])\n            for i in range(1, num_timesteps):\n                if len(range(0, num_timesteps, i)) == desired_count:\n                    return set(range(0, num_timesteps, i))\n            raise ValueError(\n                f\"cannot create exactly {desired_count} steps with an integer stride\"\n            )\n        section_counts = [int(x) for x in section_counts.split(\",\")]\n    size_per = num_timesteps // len(section_counts)\n    extra = num_timesteps % len(section_counts)\n    start_idx = 0\n    all_steps = []\n    for i, section_count in enumerate(section_counts):\n        size = size_per + (1 if i < extra else 0)\n        if size < section_count:\n            raise ValueError(\n                f\"cannot divide section of {size} steps into {section_count}\"\n            )\n        if section_count <= 1:\n            frac_stride = 1\n        else:\n            frac_stride = (size - 1) / (section_count - 1)\n        cur_idx = 0.0\n        taken_steps = []\n        for _ in range(section_count):\n            taken_steps.append(start_idx + round(cur_idx))\n            cur_idx += frac_stride\n        all_steps += taken_steps\n        start_idx += size\n    return set(all_steps)\n\n\nclass SpacedDiffusion(GaussianDiffusion):\n    \"\"\"\n    A diffusion process which can skip steps in a base diffusion process.\n\n    :param use_timesteps: a collection (sequence or set) of timesteps from the\n                          original diffusion process to retain.\n    :param kwargs: the kwargs to create the base diffusion process.\n    \"\"\"\n\n    def __init__(self, 
use_timesteps, **kwargs):\n        self.use_timesteps = set(use_timesteps)\n        self.timestep_map = []\n        self.original_num_steps = len(kwargs[\"betas\"])\n\n        base_diffusion = GaussianDiffusion(**kwargs)  # pylint: disable=missing-kwoa\n        last_alpha_cumprod = 1.0\n        new_betas = []\n        for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):\n            if i in self.use_timesteps:\n                new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)\n                last_alpha_cumprod = alpha_cumprod\n                self.timestep_map.append(i)\n        kwargs[\"betas\"] = np.array(new_betas)\n        super().__init__(**kwargs)\n\n    def p_mean_variance(\n        self, model, *args, **kwargs\n    ):  # pylint: disable=signature-differs\n        return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)\n\n    def training_losses(\n        self, model, *args, **kwargs\n    ):  # pylint: disable=signature-differs\n        return super().training_losses(self._wrap_model(model), *args, **kwargs)\n\n    def _wrap_model(self, model):\n        if isinstance(model, _WrappedModel):\n            return model\n        return _WrappedModel(\n            model, self.timestep_map, self.rescale_timesteps, self.original_num_steps\n        )\n\n    def _scale_timesteps(self, t):\n        # Scaling is done by the wrapped model.\n        return t\n\n\nclass _WrappedModel:\n    def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps):\n        self.model = model\n        self.timestep_map = timestep_map\n        self.rescale_timesteps = rescale_timesteps\n        self.original_num_steps = original_num_steps\n\n    def __call__(self, x, ts, **kwargs):\n        map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)\n        new_ts = map_tensor[ts]\n        if self.rescale_timesteps:\n            new_ts = new_ts.float() * (1000.0 / self.original_num_steps)\n        return 
self.model(x, new_ts, **kwargs)\n"
  },
  {
    "path": "improved_diffusion/sampling_util.py",
    "content": "import math\nimport os\n\nimport numpy as np\nimport torch\nimport torch.distributed as dist\nimport torch.nn.functional as F\nimport torchvision.utils as tvu\nfrom PIL import Image\nfrom kornia import denormalize\nfrom sklearn.metrics import f1_score, jaccard_score\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom . import dist_util\nfrom .metrics import FBound_metric, WCov_metric\nfrom datasets.monu import MonuDataset\nfrom .utils import set_random_seed_for_iterations\n\ncityspallete = [\n    0, 0, 0,\n    128, 64, 128,\n    244, 35, 232,\n    70, 70, 70,\n    102, 102, 156,\n    190, 153, 153,\n    153, 153, 153,\n    250, 170, 30,\n    220, 220, 0,\n    107, 142, 35,\n    152, 251, 152,\n    0, 130, 180,\n    220, 20, 60,\n    255, 0, 0,\n    0, 0, 142,\n    0, 0, 70,\n    0, 60, 100,\n    0, 80, 100,\n    0, 0, 230,\n    119, 11, 32,\n]\n\n\ndef calculate_metrics(x, gt):\n    predict = x.detach().cpu().numpy().astype('uint8')\n    target = gt.detach().cpu().numpy().astype('uint8')\n    return f1_score(target.flatten(), predict.flatten()), jaccard_score(target.flatten(), predict.flatten()), \\\n           WCov_metric(predict, target), FBound_metric(predict, target)\n\n\ndef sampling_major_vote_func(diffusion_model, ddp_model, output_folder, dataset, logger, clip_denoised, step, n_rounds=3):\n    ddp_model.eval()\n    batch_size = 1\n    major_vote_number = 9\n    loader = DataLoader(dataset, batch_size=batch_size)\n    loader_iter = iter(loader)\n\n    f1_score_list = []\n    miou_list = []\n    fbound_list = []\n    wcov_list = []\n\n    with torch.no_grad():\n        for round_index in tqdm(\n                range(n_rounds), desc=\"Generating image samples for FID evaluation.\"\n        ):\n            gt_mask, condition_on, name = next(loader_iter)\n            set_random_seed_for_iterations(step + int(name[0].split(\"_\")[1]))\n            gt_mask = (gt_mask + 1.0) / 2.0\n            condition_on = 
condition_on[\"conditioned_image\"]\n            former_frame_for_feature_extraction = condition_on.to(dist_util.dev())\n\n            for i in range(gt_mask.shape[0]):\n                gt_img = Image.fromarray(gt_mask[i][0].detach().cpu().numpy().astype('uint8'))\n                gt_img.putpalette(cityspallete)\n                gt_img.save(\n                    os.path.join(output_folder, f\"{name[i]}_gt_palette.png\"))\n                gt_img = Image.fromarray((gt_mask[i][0].detach().cpu().numpy() - 1).astype(np.uint8))\n                gt_img.save(\n                    os.path.join(output_folder, f\"{name[i]}_gt.png\"))\n\n            for i in range(condition_on.shape[0]):\n                denorm_condition_on = denormalize(condition_on.clone(), mean=dataset.mean, std=dataset.std)\n                tvu.save_image(\n                    denorm_condition_on[i,] / 255.,\n                    os.path.join(output_folder, f\"{name[i]}_condition_on.png\")\n                )\n\n            if isinstance(dataset, MonuDataset):\n                _, _, W, H = former_frame_for_feature_extraction.shape\n                kernel_size = dataset.image_size\n                stride = 256\n                patches = []\n                for y, x in np.ndindex((((W - kernel_size) // stride) + 1, ((H - kernel_size) // stride) + 1)):\n                    y = y * stride\n                    x = x * stride\n                    patches.append(former_frame_for_feature_extraction[0,\n                        :,\n                        y: min(y + kernel_size, W),\n                        x: min(x + kernel_size, H)])\n                patches = torch.stack(patches)\n\n                major_vote_list = []\n                for i in range(major_vote_number):\n                    x_list = []\n\n                    for index in range(math.ceil(patches.shape[0] / 4)):\n                        model_kwargs = {\"conditioned_image\": patches[index * 4: min((index + 1) * 4, patches.shape[0])]}\n                
        x = diffusion_model.p_sample_loop(\n                                ddp_model,\n                                (model_kwargs[\"conditioned_image\"].shape[0], gt_mask.shape[1], model_kwargs[\"conditioned_image\"].shape[2], model_kwargs[\"conditioned_image\"].shape[3]),\n                                progress=True,\n                                clip_denoised=clip_denoised,\n                                model_kwargs=model_kwargs\n                            )\n\n                        x_list.append(x)\n                    out = torch.cat(x_list)\n\n                    output = torch.zeros((former_frame_for_feature_extraction.shape[0], gt_mask.shape[1], former_frame_for_feature_extraction.shape[2], former_frame_for_feature_extraction.shape[3]))\n                    idx_sum = torch.zeros((former_frame_for_feature_extraction.shape[0], gt_mask.shape[1], former_frame_for_feature_extraction.shape[2], former_frame_for_feature_extraction.shape[3]))\n                    for index, val in enumerate(out):\n                        y, x = np.unravel_index(index, (((W - kernel_size) // stride) + 1, ((H - kernel_size) // stride) + 1))\n                        y = y * stride\n                        x = x * stride\n\n                        idx_sum[0,\n                        :,\n                        y: min(y + kernel_size, W),\n                        x: min(x + kernel_size, H)] += 1\n\n                        output[0,\n                        :,\n                        y: min(y + kernel_size, W),\n                        x: min(x + kernel_size, H)] += val[:, :min(y + kernel_size, W) - y, :min(x + kernel_size, H) - x].cpu().data.numpy()\n\n                    output = output / idx_sum\n                    major_vote_list.append(output)\n\n                x = torch.cat(major_vote_list)\n\n            else:\n                model_kwargs = {\n                    \"conditioned_image\": torch.cat([former_frame_for_feature_extraction] * major_vote_number)}\n\n       
         x = diffusion_model.p_sample_loop(\n                    ddp_model,\n                    (major_vote_number, gt_mask.shape[1], former_frame_for_feature_extraction.shape[2],\n                     former_frame_for_feature_extraction.shape[3]),\n                    progress=True,\n                    clip_denoised=clip_denoised,\n                    model_kwargs=model_kwargs\n                )\n\n            x = (x + 1.0) / 2.0\n\n            if x.shape[2] != gt_mask.shape[2] or x.shape[3] != gt_mask.shape[3]:\n                x = F.interpolate(x, gt_mask.shape[2:], mode='bilinear')\n\n            x = torch.clamp(x, 0.0, 1.0)\n\n            # major vote result\n            x = x.mean(dim=0, keepdim=True).round()\n\n            for i in range(x.shape[0]):\n                # save as outer training ids\n                # current_output = x[i][0] + 1\n                # current_output[current_output == current_output.max()] = 0\n                out_img = Image.fromarray(x[i][0].detach().cpu().numpy().astype('uint8'))\n                out_img.putpalette(cityspallete)\n                out_img.save(\n                    os.path.join(output_folder, f\"{name[i]}_model_output_palette.png\"))\n                out_img = Image.fromarray((x[i][0].detach().cpu().numpy() - 1).astype(np.uint8))\n                out_img.save(\n                    os.path.join(output_folder, f\"{name[i]}_model_output.png\"))\n\n            for index, (gt_im, out_im) in enumerate(zip(gt_mask, x)):\n\n                f1, miou, wcov, fbound = calculate_metrics(out_im[0], gt_im[0])\n                f1_score_list.append(f1)\n                miou_list.append(miou)\n                wcov_list.append(wcov)\n                fbound_list.append(fbound)\n\n                logger.info(\n                    f\"{name[index]} iou {miou_list[-1]}, f1_Score {f1_score_list[-1]}, WCov {wcov_list[-1]}, boundF {fbound_list[-1]}\")\n\n    my_length = len(miou_list)\n    length_of_data = torch.tensor(len(miou_list), 
device=dist_util.dev())\n    gathered_length_of_data = [torch.tensor(1, device=dist_util.dev()) for _ in range(dist.get_world_size())]\n    dist.all_gather(gathered_length_of_data, length_of_data)\n    max_len = torch.max(torch.stack(gathered_length_of_data))\n\n    # pad every rank's metric lists with -1 sentinels so the all_gather\n    # tensors share a length; the sentinels are filtered out below\n    pad = int(max_len - my_length)\n    iou_tensor = torch.tensor(miou_list + [-1.0] * pad, device=dist_util.dev())\n    f1_tensor = torch.tensor(f1_score_list + [-1.0] * pad, device=dist_util.dev())\n    wcov_tensor = torch.tensor(wcov_list + [-1.0] * pad, device=dist_util.dev())\n    boundf_tensor = torch.tensor(fbound_list + [-1.0] * pad, device=dist_util.dev())\n    gathered_miou = [torch.ones_like(iou_tensor) * -1 for _ in range(dist.get_world_size())]\n    gathered_f1 = [torch.ones_like(f1_tensor) * -1 for _ in range(dist.get_world_size())]\n    gathered_wcov = [torch.ones_like(wcov_tensor) * -1 for _ in range(dist.get_world_size())]\n    gathered_boundf = [torch.ones_like(boundf_tensor) * -1 for _ in range(dist.get_world_size())]\n\n    dist.all_gather(gathered_miou, iou_tensor)\n    dist.all_gather(gathered_f1, f1_tensor)\n    dist.all_gather(gathered_wcov, wcov_tensor)\n    dist.all_gather(gathered_boundf, boundf_tensor)\n\n    logger.info(\"measure total avg\")\n    gathered_miou = torch.cat(gathered_miou)\n    gathered_miou = gathered_miou[gathered_miou != -1]\n    logger.info(f\"mean iou {gathered_miou.mean()}\")\n\n    gathered_f1 = torch.cat(gathered_f1)\n    gathered_f1 = gathered_f1[gathered_f1 != -1]\n    logger.info(f\"mean f1 {gathered_f1.mean()}\")\n    gathered_wcov = torch.cat(gathered_wcov)\n    gathered_wcov = gathered_wcov[gathered_wcov != -1]\n    logger.info(f\"mean WCov {gathered_wcov.mean()}\")\n    gathered_boundf = torch.cat(gathered_boundf)\n    gathered_boundf = gathered_boundf[gathered_boundf != -1]\n    logger.info(f\"mean boundF 
{gathered_boundf.mean()}\")\n\n    dist.barrier()\n    return gathered_miou.mean().item()\n"
  },
  {
    "path": "improved_diffusion/script_util.py",
    "content": "import argparse\nimport inspect\n\nfrom . import gaussian_diffusion as gd\nfrom .respace import SpacedDiffusion, space_timesteps\nfrom .unet import SuperResModel, UNetModel\n\nNUM_CLASSES = 1000\n\n\ndef model_and_diffusion_defaults():\n    \"\"\"\n    Defaults for image training.\n    \"\"\"\n    return dict(\n        image_size=64,\n        num_channels=128,\n        num_res_blocks=2,\n        num_heads=4,\n        num_heads_upsample=-1,\n        attention_resolutions=\"16,8\",\n        dropout=0.0,\n        rrdb_blocks=10,\n        deeper_net=False,\n        learn_sigma=False,\n        sigma_small=False,\n        class_cond=False,\n        class_name=\"train\",\n        expansion=False,\n        diffusion_steps=100,\n        noise_schedule=\"linear\",\n        timestep_respacing=\"\",\n        use_kl=False,\n        predict_xstart=False,\n        rescale_timesteps=True,\n        rescale_learned_sigmas=True,\n        use_checkpoint=False,\n        use_scale_shift_norm=True,\n        seed=None,\n    )\n\n\ndef create_model_and_diffusion(\n    image_size,\n    class_cond,\n    learn_sigma,\n    sigma_small,\n    num_channels,\n    num_res_blocks,\n    num_heads,\n    num_heads_upsample,\n    attention_resolutions,\n    dropout,\n    rrdb_blocks,\n    deeper_net,\n    class_name,\n    expansion,\n    diffusion_steps,\n    noise_schedule,\n    timestep_respacing,\n    use_kl,\n    predict_xstart,\n    rescale_timesteps,\n    rescale_learned_sigmas,\n    use_checkpoint,\n    use_scale_shift_norm,\n    seed,\n):\n    _ = seed  # hack to prevent unused variable\n    _ = expansion\n    _ = class_name\n    model = create_model(\n        image_size,\n        num_channels,\n        num_res_blocks,\n        learn_sigma=learn_sigma,\n        class_cond=class_cond,\n        use_checkpoint=use_checkpoint,\n        attention_resolutions=attention_resolutions,\n        num_heads=num_heads,\n        num_heads_upsample=num_heads_upsample,\n        
use_scale_shift_norm=use_scale_shift_norm,\n        dropout=dropout,\n        rrdb_blocks=rrdb_blocks,\n        deeper_net=deeper_net\n    )\n    diffusion = create_gaussian_diffusion(\n        steps=diffusion_steps,\n        learn_sigma=learn_sigma,\n        sigma_small=sigma_small,\n        noise_schedule=noise_schedule,\n        use_kl=use_kl,\n        predict_xstart=predict_xstart,\n        rescale_timesteps=rescale_timesteps,\n        rescale_learned_sigmas=rescale_learned_sigmas,\n        timestep_respacing=timestep_respacing,\n    )\n    return model, diffusion\n\n\ndef create_model(\n    image_size,\n    num_channels,\n    num_res_blocks,\n    learn_sigma,\n    class_cond,\n    use_checkpoint,\n    attention_resolutions,\n    num_heads,\n    num_heads_upsample,\n    use_scale_shift_norm,\n    dropout,\n    rrdb_blocks,\n    deeper_net\n):\n    if image_size == 256:\n        if deeper_net:\n            channel_mult = (1, 1, 1, 2, 2, 4, 4)\n        else:\n            channel_mult = (1, 1, 2, 2, 4, 4)\n    elif image_size == 128:\n        channel_mult = (1, 1, 2, 2, 4, 4)\n    elif image_size == 64:\n        channel_mult = (1, 2, 3, 4)\n    elif image_size == 32:\n        channel_mult = (1, 2, 2, 2)\n    else:\n        raise ValueError(f\"unsupported image size: {image_size}\")\n\n    attention_ds = []\n    for res in attention_resolutions.split(\",\"):\n        attention_ds.append(image_size // int(res))\n\n    return UNetModel(\n        in_channels=1,\n        model_channels=num_channels,\n        out_channels=(1 if not learn_sigma else 2),\n        num_res_blocks=num_res_blocks,\n        attention_resolutions=tuple(attention_ds),\n        dropout=dropout,\n        channel_mult=channel_mult,\n        num_classes=(NUM_CLASSES if class_cond else None),\n        use_checkpoint=use_checkpoint,\n        num_heads=num_heads,\n        num_heads_upsample=num_heads_upsample,\n        use_scale_shift_norm=use_scale_shift_norm,\n        rrdb_blocks=rrdb_blocks\n    
)\n\n\ndef sr_model_and_diffusion_defaults():\n    res = model_and_diffusion_defaults()\n    res[\"large_size\"] = 256\n    res[\"small_size\"] = 64\n    arg_names = inspect.getfullargspec(sr_create_model_and_diffusion)[0]\n    for k in res.copy().keys():\n        if k not in arg_names:\n            del res[k]\n    return res\n\n\ndef sr_create_model_and_diffusion(\n    large_size,\n    small_size,\n    class_cond,\n    learn_sigma,\n    num_channels,\n    num_res_blocks,\n    num_heads,\n    num_heads_upsample,\n    attention_resolutions,\n    dropout,\n    rrdb_blocks,\n    deeper_net,\n    diffusion_steps,\n    noise_schedule,\n    timestep_respacing,\n    use_kl,\n    predict_xstart,\n    rescale_timesteps,\n    rescale_learned_sigmas,\n    use_checkpoint,\n    use_scale_shift_norm,\n):\n    model = sr_create_model(\n        large_size,\n        small_size,\n        num_channels,\n        num_res_blocks,\n        learn_sigma=learn_sigma,\n        class_cond=class_cond,\n        use_checkpoint=use_checkpoint,\n        attention_resolutions=attention_resolutions,\n        num_heads=num_heads,\n        num_heads_upsample=num_heads_upsample,\n        use_scale_shift_norm=use_scale_shift_norm,\n        dropout=dropout,\n        rrdb_blocks=rrdb_blocks,\n        deeper_net=deeper_net,\n    )\n    diffusion = create_gaussian_diffusion(\n        steps=diffusion_steps,\n        learn_sigma=learn_sigma,\n        noise_schedule=noise_schedule,\n        use_kl=use_kl,\n        predict_xstart=predict_xstart,\n        rescale_timesteps=rescale_timesteps,\n        rescale_learned_sigmas=rescale_learned_sigmas,\n        timestep_respacing=timestep_respacing,\n    )\n    return model, diffusion\n\n\ndef sr_create_model(\n    large_size,\n    small_size,\n    num_channels,\n    num_res_blocks,\n    learn_sigma,\n    class_cond,\n    use_checkpoint,\n    attention_resolutions,\n    num_heads,\n    num_heads_upsample,\n    use_scale_shift_norm,\n    dropout,\n    rrdb_blocks,\n    
deeper_net,\n):\n    _ = small_size  # hack to prevent unused variable\n\n    if large_size == 256:\n        if deeper_net:\n            channel_mult = (1, 1, 1, 2, 2, 4, 4)\n        else:\n            channel_mult = (1, 1, 2, 2, 4, 4)\n    elif large_size == 64:\n        channel_mult = (1, 2, 3, 4)\n    else:\n        raise ValueError(f\"unsupported large size: {large_size}\")\n\n    attention_ds = []\n    for res in attention_resolutions.split(\",\"):\n        attention_ds.append(large_size // int(res))\n\n    return SuperResModel(\n        in_channels=1,\n        model_channels=num_channels,\n        out_channels=(1 if not learn_sigma else 2),\n        num_res_blocks=num_res_blocks,\n        attention_resolutions=tuple(attention_ds),\n        dropout=dropout,\n        channel_mult=channel_mult,\n        num_classes=(NUM_CLASSES if class_cond else None),\n        use_checkpoint=use_checkpoint,\n        num_heads=num_heads,\n        num_heads_upsample=num_heads_upsample,\n        use_scale_shift_norm=use_scale_shift_norm,\n        rrdb_blocks=rrdb_blocks,\n    )\n\n\ndef create_gaussian_diffusion(\n    *,\n    steps=1000,\n    learn_sigma=False,\n    sigma_small=False,\n    noise_schedule=\"linear\",\n    use_kl=False,\n    predict_xstart=False,\n    rescale_timesteps=False,\n    rescale_learned_sigmas=False,\n    timestep_respacing=\"\",\n):\n    betas = gd.get_named_beta_schedule(noise_schedule, steps)\n    if use_kl:\n        loss_type = gd.LossType.RESCALED_KL\n    elif rescale_learned_sigmas:\n        loss_type = gd.LossType.RESCALED_MSE\n    else:\n        loss_type = gd.LossType.MSE\n    if not timestep_respacing:\n        timestep_respacing = [steps]\n    return SpacedDiffusion(\n        use_timesteps=space_timesteps(steps, timestep_respacing),\n        betas=betas,\n        model_mean_type=(\n            gd.ModelMeanType.EPSILON if not predict_xstart else gd.ModelMeanType.START_X\n        ),\n        model_var_type=(\n            (\n                
gd.ModelVarType.FIXED_LARGE\n                if not sigma_small\n                else gd.ModelVarType.FIXED_SMALL\n            )\n            if not learn_sigma\n            else gd.ModelVarType.LEARNED_RANGE\n        ),\n        loss_type=loss_type,\n        rescale_timesteps=rescale_timesteps,\n    )\n\n\ndef add_dict_to_argparser(parser, default_dict):\n    for k, v in default_dict.items():\n        v_type = type(v)\n        if v is None:\n            v_type = str\n        elif isinstance(v, bool):\n            v_type = str2bool\n        parser.add_argument(f\"--{k}\", default=v, type=v_type)\n\n\ndef args_to_dict(args, keys):\n    return {k: getattr(args, k) for k in keys}\n\n\ndef str2bool(v):\n    \"\"\"\n    https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse\n    \"\"\"\n    if isinstance(v, bool):\n        return v\n    if v.lower() in (\"yes\", \"true\", \"t\", \"y\", \"1\"):\n        return True\n    elif v.lower() in (\"no\", \"false\", \"f\", \"n\", \"0\"):\n        return False\n    else:\n        raise argparse.ArgumentTypeError(\"boolean value expected\")\n"
  },
  {
    "path": "improved_diffusion/train_util.py",
    "content": "import copy\nimport functools\nimport os\nfrom pathlib import Path\n\nimport blobfile as bf\nimport numpy as np\nimport torch as th\nimport torch.distributed as dist\nfrom mpi4py import MPI\nfrom torch.nn.parallel.distributed import DistributedDataParallel as DDP\nfrom torch.optim import AdamW\nfrom tqdm import tqdm\n\nfrom . import dist_util, logger\nfrom .fp16_util import (\n    make_master_params,\n    master_params_to_model_params,\n    model_grads_to_master_grads,\n    unflatten_master_params,\n    zero_grad,\n)\nfrom .nn import update_ema\nfrom .resample import LossAwareSampler, UniformSampler\nfrom .sampling_util import sampling_major_vote_func\nfrom .utils import set_random_seed_for_iterations\n\n# For ImageNet experiments, this was a good default value.\n# We found that the lg_loss_scale quickly climbed to\n# 20-21 within the first ~1K steps of training.\nINITIAL_LOG_LOSS_SCALE = 20.0\n\n\nclass TrainLoop:\n    def __init__(\n        self,\n        *,\n        model,\n        diffusion,\n        data,\n        batch_size,\n        microbatch,\n        lr,\n        ema_rate,\n        log_interval,\n        save_interval,\n        resume_checkpoint,\n        logger,\n        image_size,\n        val_dataset,\n        clip_denoised=True,\n        use_fp16=False,\n        fp16_scale_growth=1e-3,\n        schedule_sampler=None,\n        weight_decay=0.0,\n        lr_anneal_steps=0,\n        run_without_test=False,\n        args=None\n    ):\n        self.model = model\n        self.diffusion = diffusion\n        self.data = data\n        self.batch_size = batch_size\n        self.microbatch = microbatch if microbatch > 0 else batch_size\n        self.lr = lr\n        self.args = args\n        self.ema_rate = (\n            [ema_rate]\n            if isinstance(ema_rate, float)\n            else [float(x) for x in ema_rate.split(\",\")]\n        )\n        self.log_interval = log_interval\n        self.save_interval = save_interval\n        
self.resume_checkpoint = resume_checkpoint\n        self.use_fp16 = use_fp16\n        self.fp16_scale_growth = fp16_scale_growth\n        self.schedule_sampler = schedule_sampler or UniformSampler(diffusion)\n        self.weight_decay = weight_decay\n        self.lr_anneal_steps = lr_anneal_steps\n\n        self.step = 1\n        self.resume_step = 0\n        self.global_batch = self.batch_size * dist.get_world_size()\n\n        self.model_params = list(self.model.parameters())\n        self.master_params = self.model_params\n        self.lg_loss_scale = INITIAL_LOG_LOSS_SCALE\n        self.sync_cuda = th.cuda.is_available()\n\n        self._load_and_sync_parameters(self.resume_checkpoint)\n\n        if self.use_fp16:\n            self._setup_fp16()\n\n        self.opt = AdamW(self.master_params, lr=self.lr, weight_decay=self.weight_decay)\n        if self.resume_checkpoint:\n            self._load_optimizer_state(resume_checkpoint)\n            # Model was resumed, either due to a restart or a checkpoint\n            # being specified at the command line.\n            self.ema_params = [\n                self._load_ema_parameters(rate, resume_checkpoint) for rate in self.ema_rate\n            ]\n        else:\n            self.ema_params = [\n                copy.deepcopy(self.master_params) for _ in range(len(self.ema_rate))\n            ]\n\n        if th.cuda.is_available():\n            self.use_ddp = True\n            self.ddp_model = DDP(\n                self.model,\n                device_ids=[dist_util.dev()],\n                output_device=dist_util.dev(),\n                broadcast_buffers=False,\n                bucket_cap_mb=128,\n                find_unused_parameters=False,\n            )\n            self.ema_model = copy.deepcopy(self.model).to(th.device(\"cpu\"))\n        else:\n            if dist.get_world_size() > 1:\n                logger.warn(\n                    \"Distributed training requires CUDA. 
\"\n                    \"Gradients will not be synchronized properly!\"\n                )\n            self.use_ddp = False\n            self.ddp_model = self.model\n\n        self.val_dataset = val_dataset\n        self.logger = logger\n        self.ema_val_best_iou = 0\n        self.val_best_iou = 0\n        self.clip_denoised = clip_denoised\n        self.val_current_model_name = \"\"\n        self.val_current_model_ema_name = \"\"\n        self.current_model_checkpoint_name = \"\"\n        self.run_without_test = run_without_test\n\n    def _load_and_sync_parameters(self, logs_path):\n        logger.log(f\"model folder path: {logs_path}\")\n        if logs_path and Path(logs_path).exists():\n            model_path = list(Path(logs_path).glob(\"model*.pt\"))[0]\n            self.resume_step = parse_resume_step_from_filename(str(model_path))\n            self.step = self.resume_step\n\n            logger.log(f\"loading model from checkpoint: {model_path} from step {self.step}...\")\n\n            self.model.load_state_dict(\n                dist_util.load_state_dict(\n                    str(model_path), map_location=dist_util.dev()\n                )\n            )\n\n        dist_util.sync_params(self.model.parameters())\n\n    def _load_ema_parameters(self, rate, logs_path):\n        ema_params = copy.deepcopy(self.master_params)\n\n        ema_checkpoint = Path(logs_path) / \"ema.pt\"\n\n        if ema_checkpoint.exists():\n            logger.log(f\"loading EMA from checkpoint: {str(ema_checkpoint)}...\")\n            state_dict = dist_util.load_state_dict(\n                str(ema_checkpoint), map_location=dist_util.dev()\n            )\n            ema_params = 
self._state_dict_to_master_params(state_dict)\n\n        dist_util.sync_params(ema_params)\n        return ema_params\n\n    def _load_optimizer_state(self, logs_path):\n\n        opt_checkpoint = Path(logs_path) / \"opt.pt\"\n\n        if opt_checkpoint.exists():\n            logger.log(f\"loading optimizer state from checkpoint: {str(opt_checkpoint)}\")\n            state_dict = dist_util.load_state_dict(\n                str(opt_checkpoint), map_location=dist_util.dev()\n            )\n            self.opt.load_state_dict(state_dict)\n\n    def _setup_fp16(self):\n        self.master_params = make_master_params(self.model_params)\n        self.model.convert_to_fp16()\n\n    def run_loop(self, max_iter=250000, start_print_iter=100000, vis_batch_size=8, n_rounds=3):\n        if dist.get_rank() == 0:\n            pbar = tqdm()\n        while self.step < max_iter:\n            self.ddp_model.train()\n            batch, cond, _ = next(self.data)\n            self.run_step(batch, cond)\n            if dist.get_rank() == 0:\n                pbar.update(1)\n            if self.step % self.log_interval == 0 and self.step != 0:\n                logger.log(\"interval\")\n                logger.dumpkvs()\n                logger.log(f\"class {self.args.class_name} lr {self.lr}, expansion {self.args.expansion}, \"\n                           f\"rrdb blocks {self.args.rrdb_blocks} gpus {MPI.COMM_WORLD.Get_size()}\")\n\n            if self.step % self.save_interval == 0:\n                logger.log(\"saving model checkpoint\")\n                self.save_state_dict()\n                dist.barrier()\n\n            if (self.step % self.save_interval == 0 and self.step >= start_print_iter) or self.step == 60000:\n                if self.run_without_test:\n                    if dist.get_rank() == 0:\n                        self.save_checkpoint(self.ema_rate[0], self.ema_params[0], name=\"model\")\n                else:\n                    
self.ddp_model.eval()\n\n                    logger.log(\"ema sampling\")\n                    output_folder = os.path.join(os.environ[\"OPENAI_LOGDIR\"], f\"{self.step}_val_ema_major\")\n                    # every rank reaches this point, so tolerate the folder\n                    # already existing instead of racing on os.mkdir\n                    os.makedirs(output_folder, exist_ok=True)\n                    self.ema_model = self.ema_model.to(dist_util.dev())\n                    self.ema_model.load_state_dict(self._master_params_to_state_dict(self.ema_params[0]))\n                    self.ema_model.eval()\n                    ema_val_miou = sampling_major_vote_func(self.diffusion, self.ema_model, output_folder=output_folder,\n                                                        dataset=self.val_dataset, logger=self.logger,\n                                                        clip_denoised=self.clip_denoised, step=self.step, n_rounds=len(self.val_dataset))\n                    self.ema_model = self.ema_model.to(th.device(\"cpu\"))  # release gpu memory\n\n                    if dist.get_rank() == 0:\n                        if self.ema_val_best_iou < ema_val_miou:\n                            logger.log(f\"best iou ema val: {ema_val_miou} step {self.step}\")\n                            self.ema_val_best_iou = ema_val_miou\n\n                            ema_filename = self.save_checkpoint(self.ema_rate[0], self.ema_params[0], name=f\"val_{ema_val_miou:.7f}\")\n\n                            if self.val_current_model_ema_name != \"\":\n                                ckpt_path = bf.join(get_blob_logdir(), self.val_current_model_ema_name)\n                                if os.path.exists(ckpt_path):\n                                    os.remove(ckpt_path)\n\n                            self.val_current_model_ema_name = ema_filename\n\n                set_random_seed_for_iterations(MPI.COMM_WORLD.Get_rank() + self.step)\n                dist.barrier()\n            self.step += 1\n\n    def run_step(self, batch, cond):\n        self.forward_backward(batch, cond)\n        if self.use_fp16:\n            
self.optimize_fp16()\n        else:\n            self.optimize_normal()\n        self.log_step()\n\n    def forward_backward(self, batch, cond):\n        zero_grad(self.model_params)\n        for i in range(0, batch.shape[0], self.microbatch):\n            micro = batch[i : i + self.microbatch].to(dist_util.dev())\n            micro_cond = {\n                k: v[i : i + self.microbatch].to(dist_util.dev())\n                for k, v in cond.items()\n            }\n            last_batch = (i + self.microbatch) >= batch.shape[0]\n            t, weights = self.schedule_sampler.sample(micro.shape[0], dist_util.dev())\n\n            compute_losses = functools.partial(\n                self.diffusion.training_losses,\n                self.ddp_model,\n                micro,\n                t,\n                model_kwargs=micro_cond,\n            )\n\n            if last_batch or not self.use_ddp:\n                losses = compute_losses()\n            else:\n                with self.ddp_model.no_sync():\n                    losses = compute_losses()\n\n            if isinstance(self.schedule_sampler, LossAwareSampler):\n                self.schedule_sampler.update_with_local_losses(\n                    t, losses[\"loss\"].detach()\n                )\n\n            loss = (losses[\"loss\"] * weights).mean()\n            log_loss_dict(\n                self.diffusion, t, {k: v * weights for k, v in losses.items()}\n            )\n            if self.use_fp16:\n                loss_scale = 2 ** self.lg_loss_scale\n                (loss * loss_scale).backward()\n            else:\n                loss.backward()\n\n    def optimize_fp16(self):\n        if any(not th.isfinite(p.grad).all() for p in self.model_params):\n            self.lg_loss_scale -= 1\n            logger.log(f\"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}\")\n            return\n\n        model_grads_to_master_grads(self.model_params, self.master_params)\n        
self.master_params[0].grad.mul_(1.0 / (2 ** self.lg_loss_scale))\n        self._log_grad_norm()\n        self._anneal_lr()\n        self.opt.step()\n        for rate, params in zip(self.ema_rate, self.ema_params):\n            update_ema(params, self.master_params, rate=rate)\n        master_params_to_model_params(self.model_params, self.master_params)\n        self.lg_loss_scale += self.fp16_scale_growth\n\n    def optimize_normal(self):\n        self._log_grad_norm()\n        self._anneal_lr()\n        self.opt.step()\n        for rate, params in zip(self.ema_rate, self.ema_params):\n            update_ema(params, self.master_params, rate=rate)\n\n    def _log_grad_norm(self):\n        sqsum = 0.0\n        for p in self.master_params:\n            sqsum += (p.grad ** 2).sum().item()\n        logger.logkv_mean(\"grad_norm\", np.sqrt(sqsum))\n\n    def _anneal_lr(self):\n        if not self.lr_anneal_steps:\n            return\n        frac_done = (self.step + self.resume_step) / self.lr_anneal_steps\n        lr = self.lr * (1 - frac_done)\n        for param_group in self.opt.param_groups:\n            param_group[\"lr\"] = lr\n\n    def log_step(self):\n        logger.logkv(\"step\", self.step + self.resume_step)\n        logger.logkv(\"samples\", (self.step + self.resume_step + 1) * self.global_batch)\n        if self.use_fp16:\n            logger.logkv(\"lg_loss_scale\", self.lg_loss_scale)\n\n    def save_checkpoint(self, rate, params, name):\n        state_dict = self._master_params_to_state_dict(params)\n        if dist.get_rank() == 0:\n            logger.log(f\"saving model {rate}...\")\n            if not rate:\n                filename = f\"model_{name}_{(self.step+self.resume_step):06d}.pt\"\n            else:\n                filename = f\"ema_{name}_{rate}_{(self.step+self.resume_step):06d}.pt\"\n            with bf.BlobFile(bf.join(get_blob_logdir(), filename), \"wb\") as f:\n                th.save(state_dict, f)\n            return filename\n\n    
def save_state_dict(self):\n\n        if dist.get_rank() == 0:\n            with bf.BlobFile(bf.join(get_blob_logdir(), \"opt.pt\"), \"wb\") as f:\n                th.save(self.opt.state_dict(), f)\n\n            with bf.BlobFile(bf.join(get_blob_logdir(), f\"model{self.step}.pt\"), \"wb\") as f:\n                th.save(self._master_params_to_state_dict(self.master_params), f)\n\n            if self.current_model_checkpoint_name != \"\":\n                ckpt_path = bf.join(get_blob_logdir(), self.current_model_checkpoint_name)\n                if os.path.exists(ckpt_path):\n                    os.remove(ckpt_path)\n\n            # keep only the filename; it is joined with the log dir when the\n            # stale checkpoint is removed on the next call\n            self.current_model_checkpoint_name = f\"model{self.step}.pt\"\n\n            with bf.BlobFile(bf.join(get_blob_logdir(), \"ema.pt\"), \"wb\") as f:\n                th.save(self._master_params_to_state_dict(self.ema_params[0]), f)\n\n    def save(self, name):\n\n        filename = self.save_checkpoint(0, self.master_params, name)\n        for rate, params in zip(self.ema_rate, self.ema_params):\n            filename_ema = self.save_checkpoint(rate, params, name)\n\n        # if dist.get_rank() == 0:\n        #     with 
bf.BlobFile(\n        #         bf.join(get_blob_logdir(), f\"opt{(self.step+self.resume_step):06d}.pt\"),\n        #         \"wb\",\n        #     ) as f:\n        #         th.save(self.opt.state_dict(), f)\n\n        # dist.barrier()\n\n        return filename, filename_ema\n\n    def _master_params_to_state_dict(self, master_params):\n        if self.use_fp16:\n            master_params = unflatten_master_params(\n                list(self.model.parameters()), master_params\n            )\n        state_dict = self.model.state_dict()\n        for i, (name, _value) in enumerate(self.model.named_parameters()):\n            assert name in state_dict\n            state_dict[name] = master_params[i]\n        return state_dict\n\n    def _state_dict_to_master_params(self, state_dict):\n        params = [state_dict[name] for name, _ in self.model.named_parameters()]\n        if self.use_fp16:\n            return make_master_params(params)\n        else:\n            return params\n\n\ndef parse_resume_step_from_filename(filename):\n    \"\"\"\n    Parse filenames of the form path/to/modelNNNNNN.pt, where NNNNNN is the\n    checkpoint's number of steps.\n    \"\"\"\n    split = filename.split(\"model\")\n    if len(split) < 2:\n        return 0\n    split1 = split[-1].split(\".\")[0]\n    try:\n        return int(split1)\n    except ValueError:\n        return 0\n\n\ndef get_blob_logdir():\n    return os.environ.get(\"DIFFUSION_BLOB_LOGDIR\", logger.get_dir())\n\n\ndef find_resume_checkpoint():\n    # On your infrastructure, you may want to override this to automatically\n    # discover the latest checkpoint on your blob storage, etc.\n    return None\n\n\ndef find_ema_checkpoint(main_checkpoint, step, rate):\n    if main_checkpoint is None:\n        return None\n    filename = f\"ema_{rate}_{(step):06d}.pt\"\n    path = bf.join(bf.dirname(main_checkpoint), filename)\n    if bf.exists(path):\n        return path\n    return None\n\n\ndef log_loss_dict(diffusion, ts, 
losses):\n    for key, values in losses.items():\n        logger.logkv_mean(key, values.mean().item())\n        # Log the quantiles (four quartiles, in particular).\n        for sub_t, sub_loss in zip(ts.cpu().numpy(), values.detach().cpu().numpy()):\n            quartile = int(4 * sub_t / diffusion.num_timesteps)\n            logger.logkv_mean(f\"{key}_q{quartile}\", sub_loss)\n"
  },
  {
    "path": "improved_diffusion/unet.py",
    "content": "from abc import abstractmethod\n\nimport math\n\nimport numpy as np\nimport torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom .RRDB import RRDBNet\nfrom .fp16_util import convert_module_to_f16, convert_module_to_f32\nfrom .nn import (\n    SiLU,\n    conv_nd,\n    linear,\n    avg_pool_nd,\n    zero_module,\n    normalization,\n    timestep_embedding,\n    checkpoint,\n)\n\n\nclass TimestepBlock(nn.Module):\n    \"\"\"\n    Any module where forward() takes timestep embeddings as a second argument.\n    \"\"\"\n\n    @abstractmethod\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the module to `x` given `emb` timestep embeddings.\n        \"\"\"\n\n\nclass TimestepEmbedSequential(nn.Sequential, TimestepBlock):\n    \"\"\"\n    A sequential module that passes timestep embeddings to the children that\n    support it as an extra input.\n    \"\"\"\n\n    def forward(self, x, emb):\n        for layer in self:\n            if isinstance(layer, TimestepBlock):\n                x = layer(x, emb)\n            else:\n                x = layer(x)\n        return x\n\n\nclass Upsample(nn.Module):\n    \"\"\"\n    An upsampling layer with an optional convolution.\n\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. 
If 3D, then\n                 upsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2):\n        super().__init__()\n        self.channels = channels\n        self.use_conv = use_conv\n        self.dims = dims\n        if use_conv:\n            self.conv = conv_nd(dims, channels, channels, 3, padding=1)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        if self.dims == 3:\n            x = F.interpolate(\n                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode=\"nearest\"\n            )\n        else:\n            x = F.interpolate(x, scale_factor=2, mode=\"nearest\")\n        if self.use_conv:\n            x = self.conv(x)\n        return x\n\n\nclass Downsample(nn.Module):\n    \"\"\"\n    A downsampling layer with an optional convolution.\n\n    :param channels: channels in the inputs and outputs.\n    :param use_conv: a bool determining if a convolution is applied.\n    :param dims: determines if the signal is 1D, 2D, or 3D. 
If 3D, then\n                 downsampling occurs in the inner-two dimensions.\n    \"\"\"\n\n    def __init__(self, channels, use_conv, dims=2):\n        super().__init__()\n        self.channels = channels\n        self.use_conv = use_conv\n        self.dims = dims\n        stride = 2 if dims != 3 else (1, 2, 2)\n        if use_conv:\n            self.op = conv_nd(dims, channels, channels, 3, stride=stride, padding=1)\n        else:\n            self.op = avg_pool_nd(stride)\n\n    def forward(self, x):\n        assert x.shape[1] == self.channels\n        return self.op(x)\n\n\nclass ResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n\n    :param channels: the number of input channels.\n    :param emb_channels: the number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param use_checkpoint: if True, use gradient checkpointing on this module.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        dropout,\n        out_channels=None,\n        use_conv=False,\n        use_scale_shift_norm=False,\n        dims=2,\n        use_checkpoint=False,\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.dropout = dropout\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n\n        self.in_layers = nn.Sequential(\n            normalization(channels),\n            SiLU(),\n            conv_nd(dims, channels, 
self.out_channels, 3, padding=1),\n        )\n        self.emb_layers = nn.Sequential(\n            SiLU(),\n            linear(\n                emb_channels,\n                2 * self.out_channels if use_scale_shift_norm else self.out_channels,\n            ),\n        )\n        self.out_layers = nn.Sequential(\n            normalization(self.out_channels),\n            SiLU(),\n            nn.Dropout(p=dropout),\n            zero_module(\n                conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)\n            ),\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = conv_nd(\n                dims, channels, self.out_channels, 3, padding=1\n            )\n        else:\n            self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)\n\n    def forward(self, x, emb):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] 
Tensor of outputs.\n        \"\"\"\n        return checkpoint(\n            self._forward, (x, emb), self.parameters(), self.use_checkpoint\n        )\n\n    def _forward(self, x, emb):\n        h = self.in_layers(x)\n        emb_out = self.emb_layers(emb).type(h.dtype)\n        while len(emb_out.shape) < len(h.shape):\n            emb_out = emb_out[..., None]\n        if self.use_scale_shift_norm:\n            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]\n            scale, shift = th.chunk(emb_out, 2, dim=1)\n            h = out_norm(h) * (1 + scale) + shift\n            h = out_rest(h)\n        else:\n            h = h + emb_out\n            h = self.out_layers(h)\n        return self.skip_connection(x) + h\n\n\nclass AttentionBlock(nn.Module):\n    \"\"\"\n    An attention block that allows spatial positions to attend to each other.\n\n    Originally ported from here, but adapted to the N-d case.\n    https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.\n    \"\"\"\n\n    def __init__(self, channels, num_heads=1, use_checkpoint=False):\n        super().__init__()\n        self.channels = channels\n        self.num_heads = num_heads\n        self.use_checkpoint = use_checkpoint\n\n        self.norm = normalization(channels)\n        self.qkv = conv_nd(1, channels, channels * 3, 1)\n        self.attention = QKVAttention()\n        self.proj_out = zero_module(conv_nd(1, channels, channels, 1))\n\n    def forward(self, x):\n        return checkpoint(self._forward, (x,), self.parameters(), self.use_checkpoint)\n\n    def _forward(self, x):\n        b, c, *spatial = x.shape\n        x = x.reshape(b, c, -1)\n        qkv = self.qkv(self.norm(x))\n        qkv = qkv.reshape(b * self.num_heads, -1, qkv.shape[2])\n        h = self.attention(qkv)\n        h = h.reshape(b, -1, h.shape[-1])\n        h = self.proj_out(h)\n        return (x + h).reshape(b, c, *spatial)\n\n\nclass 
QKVAttention(nn.Module):\n    \"\"\"\n    A module which performs QKV attention.\n    \"\"\"\n\n    def forward(self, qkv):\n        \"\"\"\n        Apply QKV attention.\n\n        :param qkv: an [N x (C * 3) x T] tensor of Qs, Ks, and Vs.\n        :return: an [N x C x T] tensor after attention.\n        \"\"\"\n        ch = qkv.shape[1] // 3\n        q, k, v = th.split(qkv, ch, dim=1)\n        scale = 1 / math.sqrt(math.sqrt(ch))\n        weight = th.einsum(\n            \"bct,bcs->bts\", q * scale, k * scale\n        )  # More stable with f16 than dividing afterwards\n        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)\n        return th.einsum(\"bts,bcs->bct\", weight, v)\n\n    @staticmethod\n    def count_flops(model, _x, y):\n        \"\"\"\n        A counter for the `thop` package to count the operations in an\n        attention operation.\n\n        Meant to be used like:\n\n            macs, params = thop.profile(\n                model,\n                inputs=(inputs, timestamps),\n                custom_ops={QKVAttention: QKVAttention.count_flops},\n            )\n\n        \"\"\"\n        b, c, *spatial = y[0].shape\n        num_spatial = int(np.prod(spatial))\n        # We perform two matmuls with the same number of ops.\n        # The first computes the weight matrix, the second computes\n        # the combination of the value vectors.\n        matmul_ops = 2 * b * (num_spatial ** 2) * c\n        model.total_ops += th.DoubleTensor([matmul_ops])\n\n\nclass UNetModel(nn.Module):\n    \"\"\"\n    The full UNet model with attention and timestep embedding.\n\n    :param in_channels: channels in the input Tensor.\n    :param model_channels: base channel count for the model.\n    :param out_channels: channels in the output Tensor.\n    :param num_res_blocks: number of residual blocks per downsample.\n    :param attention_resolutions: a collection of downsample rates at which\n        attention will take place. 
May be a set, list, or tuple.\n        For example, if this contains 4, then at 4x downsampling, attention\n        will be used.\n    :param dropout: the dropout probability.\n    :param channel_mult: channel multiplier for each level of the UNet.\n    :param conv_resample: if True, use learned convolutions for upsampling and\n        downsampling.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param num_classes: if specified (as an int), then this model will be\n        class-conditional with `num_classes` classes.\n    :param use_checkpoint: use gradient checkpointing to reduce memory usage.\n    :param num_heads: the number of attention heads in each attention layer.\n    \"\"\"\n\n    def __init__(\n        self,\n        in_channels,\n        model_channels,\n        out_channels,\n        num_res_blocks,\n        attention_resolutions,\n        dropout=0,\n        channel_mult=(1, 2, 4, 8),\n        conv_resample=True,\n        dims=2,\n        num_classes=None,\n        use_checkpoint=False,\n        num_heads=1,\n        num_heads_upsample=-1,\n        use_scale_shift_norm=False,\n        rrdb_blocks=3,\n    ):\n        super().__init__()\n\n        if num_heads_upsample == -1:\n            num_heads_upsample = num_heads\n\n        self.in_channels = in_channels\n        self.model_channels = model_channels\n        self.out_channels = out_channels\n        self.num_res_blocks = num_res_blocks\n        self.attention_resolutions = attention_resolutions\n        self.dropout = dropout\n        self.channel_mult = channel_mult\n        self.conv_resample = conv_resample\n        self.num_classes = num_classes\n        self.use_checkpoint = use_checkpoint\n        self.num_heads = num_heads\n        self.num_heads_upsample = num_heads_upsample\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            linear(model_channels, time_embed_dim),\n            SiLU(),\n            linear(time_embed_dim, 
time_embed_dim),\n        )\n\n        if self.num_classes is not None:\n            self.label_emb = nn.Embedding(num_classes, time_embed_dim)\n        self.rrdb = RRDBNet(nb=rrdb_blocks, out_nc=model_channels)\n        self.input_blocks = nn.ModuleList(\n            [\n                TimestepEmbedSequential(\n                    conv_nd(dims, in_channels, model_channels, 3, padding=1)\n                )\n            ]\n        )\n        input_block_chans = [model_channels]\n        ch = model_channels\n        ds = 1\n        for level, mult in enumerate(channel_mult):\n            for _ in range(num_res_blocks):\n                layers = [\n                    ResBlock(\n                        ch,\n                        time_embed_dim,\n                        dropout,\n                        out_channels=mult * model_channels,\n                        dims=dims,\n                        use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm,\n                    )\n                ]\n                ch = mult * model_channels\n                if ds in attention_resolutions:\n                    layers.append(\n                        AttentionBlock(\n                            ch, use_checkpoint=use_checkpoint, num_heads=num_heads\n                        )\n                    )\n                self.input_blocks.append(TimestepEmbedSequential(*layers))\n                input_block_chans.append(ch)\n            if level != len(channel_mult) - 1:\n                self.input_blocks.append(\n                    TimestepEmbedSequential(Downsample(ch, conv_resample, dims=dims))\n                )\n                input_block_chans.append(ch)\n                ds *= 2\n\n        self.middle_block = TimestepEmbedSequential(\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                
use_scale_shift_norm=use_scale_shift_norm,\n            ),\n            AttentionBlock(ch, use_checkpoint=use_checkpoint, num_heads=num_heads),\n            ResBlock(\n                ch,\n                time_embed_dim,\n                dropout,\n                dims=dims,\n                use_checkpoint=use_checkpoint,\n                use_scale_shift_norm=use_scale_shift_norm,\n            ),\n        )\n\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            for i in range(num_res_blocks + 1):\n                layers = [\n                    ResBlock(\n                        ch + input_block_chans.pop(),\n                        time_embed_dim,\n                        dropout,\n                        out_channels=model_channels * mult,\n                        dims=dims,\n                        use_checkpoint=use_checkpoint,\n                        use_scale_shift_norm=use_scale_shift_norm,\n                    )\n                ]\n                ch = model_channels * mult\n                if ds in attention_resolutions:\n                    layers.append(\n                        AttentionBlock(\n                            ch,\n                            use_checkpoint=use_checkpoint,\n                            num_heads=num_heads_upsample,\n                        )\n                    )\n                if level and i == num_res_blocks:\n                    layers.append(Upsample(ch, conv_resample, dims=dims))\n                    ds //= 2\n                self.output_blocks.append(TimestepEmbedSequential(*layers))\n\n        self.out = nn.Sequential(\n            normalization(ch),\n            SiLU(),\n            zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),\n        )\n\n    def convert_to_fp16(self):\n        \"\"\"\n        Convert the torso of the model to float16.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f16)\n        
self.middle_block.apply(convert_module_to_f16)\n        self.output_blocks.apply(convert_module_to_f16)\n        self.rrdb.apply(convert_module_to_f16)\n\n    def convert_to_fp32(self):\n        \"\"\"\n        Convert the torso of the model to float32.\n        \"\"\"\n        self.input_blocks.apply(convert_module_to_f32)\n        self.middle_block.apply(convert_module_to_f32)\n        self.output_blocks.apply(convert_module_to_f32)\n        self.rrdb.apply(convert_module_to_f32)\n\n    @property\n    def inner_dtype(self):\n        \"\"\"\n        Get the dtype used by the torso of the model.\n        \"\"\"\n        return next(self.input_blocks.parameters()).dtype\n\n    def forward(self, x, timesteps, y=None, conditioned_image=None):\n        \"\"\"\n        Apply the model to an input batch.\n\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :param conditioned_image: an [N x C' x ...] Tensor of the conditioning\n            image, encoded by the RRDB network and added to the features of\n            the first input block. Required despite the None default.\n        :return: an [N x C x ...] 
Tensor of outputs.\n        \"\"\"\n        assert (y is not None) == (\n            self.num_classes is not None\n        ), \"must specify y if and only if the model is class-conditional\"\n\n        hs = []\n        emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))\n\n        if self.num_classes is not None:\n            assert y.shape == (x.shape[0],)\n            emb = emb + self.label_emb(y)\n        former_frames_features = self.rrdb(conditioned_image.type(self.inner_dtype))\n        h = x.type(self.inner_dtype)\n        for i, module in enumerate(self.input_blocks):\n            h = module(h, emb)\n            if i == 0:\n                h = h + former_frames_features\n            hs.append(h)\n        h = self.middle_block(h, emb)\n        for module in self.output_blocks:\n            cat_in = th.cat([h, hs.pop()], dim=1)\n            h = module(cat_in, emb)\n        h = h.type(x.dtype)\n        return self.out(h)\n\n    def get_feature_vectors(self, x, timesteps, y=None):\n        \"\"\"\n        Apply the model and return all of the intermediate tensors.\n\n        :param x: an [N x C x ...] 
Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: a dict with the following keys:\n                 - 'down': a list of hidden state tensors from downsampling.\n                 - 'middle': the tensor of the output of the lowest-resolution\n                             block in the model.\n                 - 'up': a list of hidden state tensors from upsampling.\n        \"\"\"\n        hs = []\n        emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))\n        if self.num_classes is not None:\n            assert y.shape == (x.shape[0],)\n            emb = emb + self.label_emb(y)\n        result = dict(down=[], up=[])\n        h = x.type(self.inner_dtype)\n        for module in self.input_blocks:\n            h = module(h, emb)\n            hs.append(h)\n            result[\"down\"].append(h.type(x.dtype))\n        h = self.middle_block(h, emb)\n        result[\"middle\"] = h.type(x.dtype)\n        for module in self.output_blocks:\n            cat_in = th.cat([h, hs.pop()], dim=1)\n            h = module(cat_in, emb)\n            result[\"up\"].append(h.type(x.dtype))\n        return result\n\n\nclass SuperResModel(UNetModel):\n    \"\"\"\n    A UNetModel that performs super-resolution.\n\n    Expects an extra kwarg `low_res` to condition on a low-resolution image.\n    \"\"\"\n\n    def __init__(self, in_channels, *args, **kwargs):\n        super().__init__(in_channels * 2, *args, **kwargs)\n\n    def forward(self, x, timesteps, low_res=None, **kwargs):\n        _, _, new_height, new_width = x.shape\n        upsampled = F.interpolate(low_res, (new_height, new_width), mode=\"nearest\")\n        x = th.cat([x, upsampled], dim=1)\n        return super().forward(x, timesteps, **kwargs)\n\n    def get_feature_vectors(self, x, timesteps, low_res=None, **kwargs):\n        # NCHW layout, matching forward() above (the original unpacked\n        # the shape as NHWC here, which mis-sized the interpolation).\n        _, _, new_height, new_width = x.shape\n        upsampled = 
F.interpolate(low_res, (new_height, new_width), mode=\"nearest\")\n        x = th.cat([x, upsampled], dim=1)\n        return super().get_feature_vectors(x, timesteps, **kwargs)\n\n"
  },
  {
    "path": "improved_diffusion/utils.py",
    "content": "import random\n\nimport numpy as np\nimport torch\n\n\ndef set_random_seed(seed, deterministic=False):\n    \"\"\"Set random seed.\n    Args:\n        seed (int): Seed to be used.\n        deterministic (bool): Whether to set the deterministic option for\n            CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n            to True and `torch.backends.cudnn.benchmark` to False.\n            Default: False.\n    \"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed_all(seed)\n    if deterministic:\n        torch.backends.cudnn.deterministic = True\n        torch.backends.cudnn.benchmark = False\n\n\ndef set_random_seed_for_iterations(seed):\n    \"\"\"Set random seed.\n    Args:\n        seed (int): Seed to be used.\n        deterministic (bool): Whether to set the deterministic option for\n            CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n            to True and `torch.backends.cudnn.benchmark` to False.\n            Default: False.\n    \"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n    torch.manual_seed(seed)\n    torch.cuda.manual_seed(seed)\n"
  }
]