[
  {
    "path": ".devcontainer/devcontainer.json",
    "content": "// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:\n// https://github.com/microsoft/vscode-dev-containers/tree/v0.122.1/containers/docker-from-docker-compose\n// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.\n{\n\t\"name\": \"RTFNet\", // You can freely choose a name.\n\t\"dockerComposeFile\": \"docker-compose.yml\",\n\t\"service\": \"RTFNet\", // The name of the docker-compose service.\n\t\"workspaceFolder\": \"/workspace\",\n\t\n\t// Set *default* container specific settings.json values on container create.\n\t// \"settings\": { \n\t// \t\"terminal.integrated.shell.linux\": \"/bin/bash\"\n\t// },\n\n\t// Add the IDs of extensions you want installed when the container is created.\n\t// \"extensions\": [\n\t// \t\"ms-azuretools.vscode-docker\"\n\t// ]\n\n\t// Uncomment the next line if you want start specific services in your Docker Compose config.\n\t// \"runServices\": [],\n\n\t// Uncomment the next line if you want to keep your containers running after VS Code shuts down.\n\t// \"shutdownAction\": \"none\",\n\n\t// Use 'postCreateCommand' to run commands after the container is created.\n\t// \"postCreateCommand\": \"docker --version\",\n\n\t// Uncomment to connect as a non-root user. See https://aka.ms/vscode-remote/containers/non-root.\n\t// \"remoteUser\": \"vscode\"\n}"
  },
  {
    "path": ".devcontainer/docker-compose.yml",
    "content": "#-------------------------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.\n#-------------------------------------------------------------------------------------------------------------\n\nversion: '2.3'\nservices:\n  RTFNet:\n    # Uncomment the next line to use a non-root user for all processes. You can also\n    # simply use the \"remoteUser\" property in devcontainer.json if you just want VS Code\n    # and its sub-processes (terminals, tasks, debugging) to execute as the user. On Linux,\n    # you may need to update USER_UID and USER_GID in .devcontainer/Dockerfile to match your\n    # user if not 1000. See https://aka.ms/vscode-remote/containers/non-root for details.\n    # user: vscode\n    runtime: nvidia\n    image: docker_image_rtfnet # The name of the docker image\n    ports:\n      - '1234:6006' \n    volumes:\n      # Update this to wherever you want VS Code to mount the folder of your project\n      - ..:/workspace:cached # Do not change!\n      # - /home/sun/somefolder/:/somefolder # folder_in_local_computer:folder_in_docker_container\n\n      # Forwards the local Docker socket to the container.\n      - /var/run/docker.sock:/var/run/docker-host.sock \n    shm_size: 32g\n    devices: \n      - /dev/nvidia0\n      - /dev/nvidia1 # Please add or delete according to the number of your GPU cards\n\n    # Uncomment the next four lines if you will use a ptrace-based debuggers like C++, Go, and Rust.\n    # cap_add:\n    #  - SYS_PTRACE\n    # security_opt:\n    #   - seccomp:unconfined\n\n    # Overrides default command so things don't shut down after the process ends.\n    #entrypoint: /usr/local/share/docker-init.sh\n    command: sleep infinity\n"
  },
  {
    "path": "Dockerfile",
    "content": "FROM nvidia/cuda:12.5.0-devel-ubuntu22.04\n\nENV DEBIAN_FRONTEND=noninteractive\nENV DEBCONF_NOWARNINGS=yes\n\nRUN apt-get update && apt-get install -y vim python3 python3-pip\nRUN pip3 install -U scipy scikit-learn\nRUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121\n\nRUN pip3 install tensorboard torchsummary==1.5.1 numpy==1.23.0\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 \n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# RTFNet-pytorch\n\nThis is the official pytorch implementation of [RTFNet: RGB-Thermal Fusion Network for Semantic Segmentation of Urban Scenes](https://github.com/yuxiangsun/RTFNet/blob/master/doc/RAL2019_RTFNet.pdf) (IEEE RA-L). Some of the codes are borrowed from [MFNet](https://github.com/haqishen/MFNet-pytorch). Note that our implementations of the evaluation metrics (Acc and IoU) are different from those in MFNet. In addition, we consider the unlabelled class when computing the metrics.\n\nThe current version supports Python>=3.10.12, CUDA>=12.5.0 and PyTorch>=2.3.1, but it should work fine with lower versions of CUDA and PyTorch. Please modify the `Dockerfile` as you want. If you do not use docker, please manually install the dependencies listed in the `Dockerfile`.\n\n<img src=\"doc/network.png\" width=\"900px\"/>\n  \n## Introduction\n\nRTFNet is a data-fusion network for semantic segmentation using RGB and thermal images. It consists of two encoders and one decoder.\n \n## Dataset\n \nThe original dataset can be downloaded from the MFNet project [page](https://www.mi.t.u-tokyo.ac.jp/static/projects/mil_multispectral/), but you are encouraged to download our preprocessed dataset from [here](http://gofile.me/4jm56/CfukComo1).\n\n## Pretrained weights\n\nThe weights used in the paper:\n\nRTFNet 50: http://gofile.me/4jm56/9VygmBgPR\nRTFNet 152: http://gofile.me/4jm56/ODE2fxJKG\n\n## Usage\n\n* Assume you have [docker](https://docs.docker.com/install/linux/docker-ce/ubuntu/) and [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) installed. First, you need to build a docker image. Then, download the dataset:\n```\n$ cd ~ \n$ git clone https://github.com/yuxiangsun/RTFNet.git\n$ cd ~/RTFNet\n$ docker build -t docker_image_rtfnet .\n$ mkdir ~/RTFNet/dataset\n$ cd ~/RTFNet/dataset\n$ (download our preprocessed dataset.zip in this folder)\n$ unzip -d .. 
dataset.zip\n```\n\n* To reproduce our results (for different RTFNet variants, please manually change `num_resnet_layers` in `RTFNet.py` and `weight_name` in `run_demo.py`):\n```\n$ cd ~/RTFNet\n$ mkdir -p ~/RTFNet/weights_backup/RTFNet_50\n$ cd ~/RTFNet/weights_backup/RTFNet_50\n$ (download the RTFNet_50 weight in this folder)\n$ mkdir -p ~/RTFNet/weights_backup/RTFNet_152\n$ cd ~/RTFNet/weights_backup/RTFNet_152\n$ (download the RTFNet_152 weight in this folder)\n$ docker run -it --shm-size 8G -p 1234:6006 --name docker_container_rtfnet --gpus all -v ~/RTFNet:/workspace docker_image_rtfnet\n$ (you should now be inside the docker container)\n$ cd /workspace\n$ python3 run_demo.py\n```\nThe results will be saved in the `./runs` folder.\n\n* To train RTFNet (for different RTFNet variants, please manually change `num_resnet_layers` in `RTFNet.py`):\n```\n$ docker run -it --shm-size 8G -p 1234:6006 --name docker_container_rtfnet --gpus all -v ~/RTFNet:/workspace docker_image_rtfnet\n$ (you should now be inside the docker container)\n$ cd /workspace\n$ python3 train.py\n$ (fire up another terminal)\n$ docker exec -it docker_container_rtfnet bash\n$ cd /workspace\n$ tensorboard --bind_all --logdir=./runs/tensorboard_log/\n$ (open http://localhost:1234 in your browser to see the tensorboard)\n```\nThe results will be saved in the `./runs` folder.\n\nNote: Please change the smoothing factor on the Tensorboard web page to `0.999`; otherwise, you may not be able to see the patterns in the noisy plots. 
If you encounter the error `docker: Error response from daemon: could not select device driver`, please first install [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) on your computer!\n\n## Citation\n\nIf you use RTFNet in an academic work, please cite:\n\n```\n@ARTICLE{sun2019rtfnet,\nauthor={Yuxiang Sun and Weixun Zuo and Ming Liu}, \njournal={{IEEE Robotics and Automation Letters}}, \ntitle={{RTFNet: RGB-Thermal Fusion Network for Semantic Segmentation of Urban Scenes}}, \nyear={2019}, \nvolume={4}, \nnumber={3}, \npages={2576-2583}, \ndoi={10.1109/LRA.2019.2904733}, \nISSN={2377-3766}, \nmonth={July},}\n```\n\n## Demos\n\n<img src=\"doc/demo.png\" width=\"900px\"/>\n\n## About VSCode and Docker\n\nWe suggest using VSCode and Docker for deep learning research. Note that this repo already contains the `.devcontainer` folder, which is needed by VSCode.\nFor more details, please refer to this [tutorial](https://github.com/yuxiangsun/VSCode_Docker_Tutorial).\n\n## Contact\n\nsun.yuxiang@outlook.com\n\n"
  },
  {
    "path": "model/RTFNet.py",
    "content": "# coding:utf-8\n# By Yuxiang Sun, Aug. 2, 2019\n# Email: sun.yuxiang@outlook.com\n\nimport torch\nimport torch.nn as nn \nimport torchvision.models as models \n\nclass RTFNet(nn.Module):\n\n    def __init__(self, n_class):\n        super(RTFNet, self).__init__()\n\n        self.num_resnet_layers = 152\n\n        if self.num_resnet_layers == 18:\n            resnet_raw_model1 = models.resnet18(pretrained=True)\n            resnet_raw_model2 = models.resnet18(pretrained=True)\n            self.inplanes = 512\n        elif self.num_resnet_layers == 34:\n            resnet_raw_model1 = models.resnet34(pretrained=True)\n            resnet_raw_model2 = models.resnet34(pretrained=True)\n            self.inplanes = 512\n        elif self.num_resnet_layers == 50:\n            resnet_raw_model1 = models.resnet50(pretrained=True)\n            resnet_raw_model2 = models.resnet50(pretrained=True)\n            self.inplanes = 2048\n        elif self.num_resnet_layers == 101:\n            resnet_raw_model1 = models.resnet101(pretrained=True)\n            resnet_raw_model2 = models.resnet101(pretrained=True)\n            self.inplanes = 2048\n        elif self.num_resnet_layers == 152:\n            resnet_raw_model1 = models.resnet152(pretrained=True)\n            resnet_raw_model2 = models.resnet152(pretrained=True)\n            self.inplanes = 2048\n\n        ########  Thermal ENCODER  ########\n \n        self.encoder_thermal_conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) \n        self.encoder_thermal_conv1.weight.data = torch.unsqueeze(torch.mean(resnet_raw_model1.conv1.weight.data, dim=1), dim=1)\n        self.encoder_thermal_bn1 = resnet_raw_model1.bn1\n        self.encoder_thermal_relu = resnet_raw_model1.relu\n        self.encoder_thermal_maxpool = resnet_raw_model1.maxpool\n        self.encoder_thermal_layer1 = resnet_raw_model1.layer1\n        self.encoder_thermal_layer2 = resnet_raw_model1.layer2\n        
self.encoder_thermal_layer3 = resnet_raw_model1.layer3\n        self.encoder_thermal_layer4 = resnet_raw_model1.layer4\n\n        ########  RGB ENCODER  ########\n \n        self.encoder_rgb_conv1 = resnet_raw_model2.conv1\n        self.encoder_rgb_bn1 = resnet_raw_model2.bn1\n        self.encoder_rgb_relu = resnet_raw_model2.relu\n        self.encoder_rgb_maxpool = resnet_raw_model2.maxpool\n        self.encoder_rgb_layer1 = resnet_raw_model2.layer1\n        self.encoder_rgb_layer2 = resnet_raw_model2.layer2\n        self.encoder_rgb_layer3 = resnet_raw_model2.layer3\n        self.encoder_rgb_layer4 = resnet_raw_model2.layer4\n\n        ########  DECODER  ########\n\n        self.deconv1 = self._make_transpose_layer(TransBottleneck, self.inplanes//2, 2, stride=2) # // is integer division\n        self.deconv2 = self._make_transpose_layer(TransBottleneck, self.inplanes//2, 2, stride=2) # // is integer division\n        self.deconv3 = self._make_transpose_layer(TransBottleneck, self.inplanes//2, 2, stride=2) # // is integer division\n        self.deconv4 = self._make_transpose_layer(TransBottleneck, self.inplanes//2, 2, stride=2) # // is integer division\n        self.deconv5 = self._make_transpose_layer(TransBottleneck, n_class, 2, stride=2)\n \n    def _make_transpose_layer(self, block, planes, blocks, stride=1):\n\n        upsample = None\n        if stride != 1:\n            upsample = nn.Sequential(\n                nn.ConvTranspose2d(self.inplanes, planes, kernel_size=2, stride=stride, padding=0, bias=False),\n                nn.BatchNorm2d(planes),\n            ) \n        elif self.inplanes != planes:\n            upsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes, kernel_size=1, stride=stride, padding=0, bias=False),\n                nn.BatchNorm2d(planes),\n            ) \n \n        for m in (upsample.modules() if upsample is not None else []):  # upsample is None when stride == 1 and inplanes == planes\n            if isinstance(m, nn.ConvTranspose2d):\n                nn.init.xavier_uniform_(m.weight.data)\n           
 elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n        layers = []\n\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, self.inplanes))\n\n        layers.append(block(self.inplanes, planes, stride, upsample))\n        self.inplanes = planes\n\n        return nn.Sequential(*layers)\n \n    def forward(self, input):\n\n        rgb = input[:,:3]\n        thermal = input[:,3:]\n\n        verbose = False\n\n        # encoder\n\n        ######################################################################\n\n        if verbose: print(\"rgb.size() original: \", rgb.size())  # (480, 640)\n        if verbose: print(\"thermal.size() original: \", thermal.size()) # (480, 640)\n\n        ######################################################################\n\n        rgb = self.encoder_rgb_conv1(rgb)\n        if verbose: print(\"rgb.size() after conv1: \", rgb.size()) # (240, 320)\n        rgb = self.encoder_rgb_bn1(rgb)\n        if verbose: print(\"rgb.size() after bn1: \", rgb.size())  # (240, 320)\n        rgb = self.encoder_rgb_relu(rgb)\n        if verbose: print(\"rgb.size() after relu: \", rgb.size())  # (240, 320)\n\n        thermal = self.encoder_thermal_conv1(thermal)\n        if verbose: print(\"thermal.size() after conv1: \", thermal.size()) # (240, 320)\n        thermal = self.encoder_thermal_bn1(thermal)\n        if verbose: print(\"thermal.size() after bn1: \", thermal.size()) # (240, 320)\n        thermal = self.encoder_thermal_relu(thermal)\n        if verbose: print(\"thermal.size() after relu: \", thermal.size())  # (240, 320)\n\n        rgb = rgb + thermal\n\n        rgb = self.encoder_rgb_maxpool(rgb)\n        if verbose: print(\"rgb.size() after maxpool: \", rgb.size()) # (120, 160)\n\n        thermal = self.encoder_thermal_maxpool(thermal)\n        if verbose: print(\"thermal.size() after maxpool: \", thermal.size()) # (120, 160)\n\n        
######################################################################\n\n        rgb = self.encoder_rgb_layer1(rgb)\n        if verbose: print(\"rgb.size() after layer1: \", rgb.size()) # (120, 160)\n        thermal = self.encoder_thermal_layer1(thermal)\n        if verbose: print(\"thermal.size() after layer1: \", thermal.size()) # (120, 160)\n\n        rgb = rgb + thermal\n\n        ######################################################################\n \n        rgb = self.encoder_rgb_layer2(rgb)\n        if verbose: print(\"rgb.size() after layer2: \", rgb.size()) # (60, 80)\n        thermal = self.encoder_thermal_layer2(thermal)\n        if verbose: print(\"thermal.size() after layer2: \", thermal.size()) # (60, 80)\n\n        rgb = rgb + thermal\n\n        ######################################################################\n\n        rgb = self.encoder_rgb_layer3(rgb)\n        if verbose: print(\"rgb.size() after layer3: \", rgb.size()) # (30, 40)\n        thermal = self.encoder_thermal_layer3(thermal)\n        if verbose: print(\"thermal.size() after layer3: \", thermal.size()) # (30, 40)\n\n        rgb = rgb + thermal\n\n        ######################################################################\n\n        rgb = self.encoder_rgb_layer4(rgb)\n        if verbose: print(\"rgb.size() after layer4: \", rgb.size()) # (15, 20)\n        thermal = self.encoder_thermal_layer4(thermal)\n        if verbose: print(\"thermal.size() after layer4: \", thermal.size()) # (15, 20)\n\n        fuse = rgb + thermal\n\n        ######################################################################\n\n        # decoder\n\n        fuse = self.deconv1(fuse)\n        if verbose: print(\"fuse after deconv1: \", fuse.size()) # (30, 40)\n        fuse = self.deconv2(fuse)\n        if verbose: print(\"fuse after deconv2: \", fuse.size()) # (60, 80)\n        fuse = self.deconv3(fuse)\n        if verbose: print(\"fuse after deconv3: \", fuse.size()) # (120, 160)\n        fuse = 
self.deconv4(fuse)\n        if verbose: print(\"fuse after deconv4: \", fuse.size()) # (240, 320)\n        fuse = self.deconv5(fuse)\n        if verbose: print(\"fuse after deconv5: \", fuse.size()) # (480, 640)\n\n        return fuse\n  \nclass TransBottleneck(nn.Module):\n\n    def __init__(self, inplanes, planes, stride=1, upsample=None):\n        super(TransBottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)  \n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)  \n        self.bn2 = nn.BatchNorm2d(planes)\n\n        if upsample is not None and stride != 1:\n            self.conv3 = nn.ConvTranspose2d(planes, planes, kernel_size=2, stride=stride, padding=0, bias=False)  \n        else:\n            self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)  \n\n        self.bn3 = nn.BatchNorm2d(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.upsample = upsample\n        self.stride = stride\n \n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.xavier_uniform_(m.weight.data)\n            elif isinstance(m, nn.ConvTranspose2d):\n                nn.init.xavier_uniform_(m.weight.data)\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.upsample is not None:\n            residual = self.upsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\ndef unit_test():\n    num_minibatch = 2\n    rgb = 
torch.randn(num_minibatch, 3, 480, 640).cuda(0)\n    thermal = torch.randn(num_minibatch, 1, 480, 640).cuda(0)\n    rtf_net = RTFNet(9).cuda(0)\n    input = torch.cat((rgb, thermal), dim=1)\n    rtf_net(input)\n    #print('The model: ', rtf_net.modules)\n\nif __name__ == '__main__':\n    unit_test()\n"
  },
  {
    "path": "model/__init__.py",
    "content": "from .RTFNet import RTFNet\n"
  },
  {
    "path": "run_demo.py",
    "content": "# By Yuxiang Sun, Dec. 14, 2020\n# Email: sun.yuxiang@outlook.com\n\nimport os, argparse, time, datetime, sys, shutil, stat, torch\nimport numpy as np \nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom util.MF_dataset import MF_dataset \nfrom util.util import compute_results, visualize\nfrom sklearn.metrics import confusion_matrix\nfrom scipy.io import savemat \nfrom model import RTFNet\n\n#############################################################################################\nparser = argparse.ArgumentParser(description='Test with pytorch')\n#############################################################################################\nparser.add_argument('--model_name', '-m', type=str, default='RTFNet')\nparser.add_argument('--weight_name', '-w', type=str, default='RTFNet_152') # RTFNet_152, RTFNet_50, please change the number of layers in the network file\nparser.add_argument('--file_name', '-f', type=str, default='final.pth')\nparser.add_argument('--dataset_split', '-d', type=str, default='test') # test, test_day, test_night\nparser.add_argument('--gpu', '-g', type=int, default=0)\n#############################################################################################\nparser.add_argument('--img_height', '-ih', type=int, default=480) \nparser.add_argument('--img_width', '-iw', type=int, default=640)  \nparser.add_argument('--num_workers', '-j', type=int, default=16)\nparser.add_argument('--n_class', '-nc', type=int, default=9)\nparser.add_argument('--data_dir', '-dr', type=str, default='./dataset/')\nparser.add_argument('--model_dir', '-wd', type=str, default='./weights_backup/')\nargs = parser.parse_args()\n#############################################################################################\n \nif __name__ == '__main__':\n  \n    torch.cuda.set_device(args.gpu)\n    print(\"\\nthe pytorch version:\", torch.__version__)\n    print(\"the gpu count:\", torch.cuda.device_count())\n    
print(\"the current used gpu:\", torch.cuda.current_device(), '\\n')\n\n    # prepare save direcotry\n    if os.path.exists(\"./runs\"):\n        print(\"previous \\\"./runs\\\" folder exist, will delete this folder\")\n        shutil.rmtree(\"./runs\")\n    os.makedirs(\"./runs\")\n    os.chmod(\"./runs\", stat.S_IRWXO)  # allow the folder created by docker read, written, and execuated by local machine\n    model_dir = os.path.join(args.model_dir, args.weight_name)\n    if os.path.exists(model_dir) is False:\n        sys.exit(\"the %s does not exit.\" %(model_dir))\n    model_file = os.path.join(model_dir, args.file_name)\n    if os.path.exists(model_file) is True:\n        print('use the final model file.')\n    else:\n        sys.exit('no model file found.') \n    print('testing %s: %s on GPU #%d with pytorch' % (args.model_name, args.weight_name, args.gpu))\n    \n    conf_total = np.zeros((args.n_class, args.n_class))\n    model = eval(args.model_name)(n_class=args.n_class)\n    if args.gpu >= 0: model.cuda(args.gpu)\n    print('loading model file %s... 
' % model_file)\n    pretrained_weight = torch.load(model_file, map_location = lambda storage, loc: storage.cuda(args.gpu))\n    own_state = model.state_dict()\n    for name, param in pretrained_weight.items():\n        if name not in own_state:\n            continue\n        own_state[name].copy_(param)  \n    print('done!')\n\n    batch_size = 1\n    test_dataset  = MF_dataset(data_dir=args.data_dir, split=args.dataset_split, input_h=args.img_height, input_w=args.img_width)\n    test_loader  = DataLoader(\n        dataset     = test_dataset,\n        batch_size  = batch_size,\n        shuffle     = False,\n        num_workers = args.num_workers,\n        pin_memory  = True,\n        drop_last   = False\n    )\n    ave_time_cost = 0.0\n\n    model.eval()\n    with torch.no_grad():\n        for it, (images, labels, names) in enumerate(test_loader):\n            images = Variable(images).cuda(args.gpu)\n            labels = Variable(labels).cuda(args.gpu)\n            start_time = time.time()\n            logits = model(images)  # logits.size(): mini_batch*num_class*480*640\n            end_time = time.time()\n            if it >= 5: # ignore the first 5 frames\n                ave_time_cost += (end_time-start_time)\n            # convert tensor to numpy 1d array\n            label = labels.cpu().numpy().squeeze().flatten()\n            prediction = logits.argmax(1).cpu().numpy().squeeze().flatten() # prediction and label are both 1-d array, size: minibatch*640*480\n            # generate confusion matrix frame-by-frame\n            conf = confusion_matrix(y_true=label, y_pred=prediction, labels=[0,1,2,3,4,5,6,7,8]) # conf is an n_class*n_class matrix, vertical axis: groundtruth, horizontal axis: prediction\n            conf_total += conf\n            # save demo images\n            visualize(image_name=names, predictions=logits.argmax(1), weight_name=args.weight_name)\n            print(\"%s, %s, frame %d/%d, %s, time cost: %.2f ms, demo result saved.\"\n           
       %(args.model_name, args.weight_name, it+1, len(test_loader), names, (end_time-start_time)*1000))\n \n    precision_per_class, recall_per_class, iou_per_class = compute_results(conf_total)\n    conf_total_matfile = os.path.join(\"./runs\", 'conf_'+args.weight_name+'.mat')\n    savemat(conf_total_matfile,  {'conf': conf_total}) # 'conf' is the variable name when loaded in Matlab\n \n    print('\\n###########################################################################')\n    print('\\n%s: %s test results (with batch size %d) on %s using %s:' %(args.model_name, args.weight_name, batch_size, datetime.date.today(), torch.cuda.get_device_name(args.gpu))) \n    print('\\n* the tested dataset name: %s' % args.dataset_split)\n    print('* the tested image count: %d' % len(test_loader))\n    print('* the tested image size: %d*%d' %(args.img_height, args.img_width)) \n    print('* the weight name: %s' %args.weight_name) \n    print('* the file name: %s' %args.file_name) \n    print(\"* recall per class: \\n    unlabeled: %.6f, car: %.6f, person: %.6f, bike: %.6f, curve: %.6f, car_stop: %.6f, guardrail: %.6f, color_cone: %.6f, bump: %.6f\" \\\n          %(recall_per_class[0], recall_per_class[1], recall_per_class[2], recall_per_class[3], recall_per_class[4], recall_per_class[5], recall_per_class[6], recall_per_class[7], recall_per_class[8]))\n    print(\"* iou per class: \\n    unlabeled: %.6f, car: %.6f, person: %.6f, bike: %.6f, curve: %.6f, car_stop: %.6f, guardrail: %.6f, color_cone: %.6f, bump: %.6f\" \\\n          %(iou_per_class[0], iou_per_class[1], iou_per_class[2], iou_per_class[3], iou_per_class[4], iou_per_class[5], iou_per_class[6], iou_per_class[7], iou_per_class[8])) \n    print(\"\\n* average values (np.mean(x)): \\n recall: %.6f, iou: %.6f\" \\\n          %(recall_per_class.mean(), iou_per_class.mean()))\n    print(\"* average values (np.mean(np.nan_to_num(x))): \\n recall: %.6f, iou: %.6f\" \\\n          %(np.mean(np.nan_to_num(recall_per_class)), 
np.mean(np.nan_to_num(iou_per_class))))\n    print('\\n* the average time cost per frame (with batch size %d): %.2f ms, namely, the inference speed is %.2f fps' %(batch_size, ave_time_cost*1000/(len(test_loader)-5), 1.0/(ave_time_cost/(len(test_loader)-5)))) # the first 5 frames are ignored\n    #print('\\n* the total confusion matrix: ') \n    #np.set_printoptions(precision=8, threshold=np.inf, linewidth=np.inf, suppress=True)\n    #print(conf_total)\n    print('\\n###########################################################################')\n"
  },
  {
    "path": "train.py",
    "content": "# By Yuxiang Sun, Dec. 4, 2019\n# Email: sun.yuxiang@outlook.com\n\nimport os, argparse, time, datetime, stat, shutil\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nimport torchvision.utils as vutils\nfrom util.MF_dataset import MF_dataset\nfrom util.augmentation import RandomFlip, RandomCrop, RandomCropOut, RandomBrightness, RandomNoise\nfrom util.util import compute_results\nfrom sklearn.metrics import confusion_matrix\nfrom torch.utils.tensorboard import SummaryWriter\nfrom model import RTFNet\n\n#############################################################################################\nparser = argparse.ArgumentParser(description='Train with pytorch')\n############################################################################################# \nparser.add_argument('--model_name', '-m', type=str, default='RTFNet')\n#batch_size: RTFNet-152: 2; RTFNet-101: 2; RTFNet-50: 3; RTFNet-34: 10; RTFNet-18: 15;\nparser.add_argument('--batch_size', '-b', type=int, default=2) \nparser.add_argument('--lr_start', '-ls', type=float, default=0.01)\nparser.add_argument('--gpu', '-g', type=int, default=0)\n#############################################################################################\nparser.add_argument('--lr_decay', '-ld', type=float, default=0.95)\nparser.add_argument('--epoch_max', '-em', type=int, default=10000) # please stop training mannully \nparser.add_argument('--epoch_from', '-ef', type=int, default=0) \nparser.add_argument('--num_workers', '-j', type=int, default=8)\nparser.add_argument('--n_class', '-nc', type=int, default=9)\nparser.add_argument('--data_dir', '-dr', type=str, default='./dataset/')\nargs = parser.parse_args()\n#############################################################################################\n\naugmentation_methods = [\n    RandomFlip(prob=0.5),\n    RandomCrop(crop_rate=0.1, prob=1.0),\n    # 
RandomCropOut(crop_rate=0.2, prob=1.0),\n    # RandomBrightness(bright_range=0.15, prob=0.9),\n    # RandomNoise(noise_range=5, prob=0.9),\n]\n\ndef train(epo, model, train_loader, optimizer):\n    model.train()\n    for it, (images, labels, names) in enumerate(train_loader):\n        images = Variable(images).cuda(args.gpu)\n        labels = Variable(labels).cuda(args.gpu)\n        start_t = time.time()\n        optimizer.zero_grad()\n        logits = model(images)\n        loss = F.cross_entropy(logits, labels)  # Note that the cross_entropy function already includes the softmax\n        loss.backward()\n        optimizer.step()\n        lr_this_epo = 0\n        for param_group in optimizer.param_groups:\n            lr_this_epo = param_group['lr']\n        print('Train: %s, epo %s/%s, iter %s/%s, lr %.8f, %.2f img/sec, loss %.4f, time %s' \\\n            % (args.model_name, epo, args.epoch_max, it+1, len(train_loader), lr_this_epo, len(names)/(time.time()-start_t), float(loss),\n              datetime.datetime.now().replace(microsecond=0)-start_datetime))\n        if accIter['train'] % 1 == 0:\n            writer.add_scalar('Train/loss', loss, accIter['train'])\n        view_figure = True # note that I have not colorized the GT and predictions here\n        if accIter['train'] % 500 == 0:\n            if view_figure:\n                input_rgb_images = vutils.make_grid(images[:,:3], nrow=8, padding=10) # can only display 3-channel images, so images[:,:3]\n                writer.add_image('Train/input_rgb_images', input_rgb_images, accIter['train'])\n                scale = max(1, 255//args.n_class) # label (0,1,2..) 
is nearly invisible; multiply by a constant for visualization\n                groundtruth_tensor = labels.unsqueeze(1) * scale  # mini_batch*480*640 -> mini_batch*1*480*640\n                groundtruth_tensor = torch.cat((groundtruth_tensor, groundtruth_tensor, groundtruth_tensor), 1)  # change to 3-channel for visualization\n                groundtruth_images = vutils.make_grid(groundtruth_tensor, nrow=8, padding=10)\n                writer.add_image('Train/groundtruth_images', groundtruth_images, accIter['train'])\n                predicted_tensor = logits.argmax(1).unsqueeze(1) * scale # mini_batch*args.n_class*480*640 -> mini_batch*480*640 -> mini_batch*1*480*640\n                predicted_tensor = torch.cat((predicted_tensor, predicted_tensor, predicted_tensor),1) # change to 3-channel for visualization, mini_batch*1*480*640\n                predicted_images = vutils.make_grid(predicted_tensor, nrow=8, padding=10)\n                writer.add_image('Train/predicted_images', predicted_images, accIter['train'])\n        accIter['train'] = accIter['train'] + 1\n\ndef validation(epo, model, val_loader): \n    model.eval()\n    with torch.no_grad():\n        for it, (images, labels, names) in enumerate(val_loader):\n            images = Variable(images).cuda(args.gpu)\n            labels = Variable(labels).cuda(args.gpu)\n            start_t = time.time()\n            logits = model(images)\n            loss = F.cross_entropy(logits, labels)  # Note that the cross_entropy function already includes the softmax\n            print('Val: %s, epo %s/%s, iter %s/%s, %.2f img/sec, loss %.4f, time %s' \\\n                  % (args.model_name, epo, args.epoch_max, it + 1, len(val_loader), len(names)/(time.time()-start_t), float(loss),\n                    datetime.datetime.now().replace(microsecond=0)-start_datetime))\n            if accIter['val'] % 1 == 0:\n                writer.add_scalar('Validation/loss', loss, accIter['val'])\n   
         view_figure = False  # note: the GT and predictions are not colorized here\n            if accIter['val'] % 100 == 0:\n                if view_figure:\n                    input_rgb_images = vutils.make_grid(images[:, :3], nrow=8, padding=10)  # can only display 3-channel images, so images[:,:3]\n                    writer.add_image('Validation/input_rgb_images', input_rgb_images, accIter['val'])\n                    scale = max(1, 255 // args.n_class)  # label (0,1,2..) is invisible, so multiply by a constant for visualization\n                    groundtruth_tensor = labels.unsqueeze(1) * scale  # mini_batch*480*640 -> mini_batch*1*480*640\n                    groundtruth_tensor = torch.cat((groundtruth_tensor, groundtruth_tensor, groundtruth_tensor), 1)  # change to 3-channel for visualization\n                    groundtruth_images = vutils.make_grid(groundtruth_tensor, nrow=8, padding=10)\n                    writer.add_image('Validation/groundtruth_images', groundtruth_images, accIter['val'])\n                    predicted_tensor = logits.argmax(1).unsqueeze(1) * scale  # mini_batch*args.n_class*480*640 -> mini_batch*480*640 -> mini_batch*1*480*640\n                    predicted_tensor = torch.cat((predicted_tensor, predicted_tensor, predicted_tensor), 1)  # change to 3-channel for visualization, mini_batch*1*480*640\n                    predicted_images = vutils.make_grid(predicted_tensor, nrow=8, padding=10)\n                    writer.add_image('Validation/predicted_images', predicted_images, accIter['val'])\n            accIter['val'] += 1\n\ndef testing(epo, model, test_loader):\n    model.eval()\n    conf_total = np.zeros((args.n_class, args.n_class))\n    label_list = [\"unlabeled\", \"car\", \"person\", \"bike\", \"curve\", \"car_stop\", \"guardrail\", \"color_cone\", \"bump\"]\n    testing_results_file = os.path.join(weight_dir, 'testing_results_file.txt')\n    with torch.no_grad():\n        for it, (images, labels, names) in 
enumerate(test_loader):\n            images = Variable(images).cuda(args.gpu)\n            labels = Variable(labels).cuda(args.gpu)\n            logits = model(images)\n            label = labels.cpu().numpy().squeeze().flatten()\n            prediction = logits.argmax(1).cpu().numpy().squeeze().flatten()  # prediction and label are both 1-d arrays, size: mini_batch*480*640\n            conf = confusion_matrix(y_true=label, y_pred=prediction, labels=list(range(args.n_class)))  # conf is an args.n_class*args.n_class matrix, vertical axis: groundtruth, horizontal axis: prediction\n            conf_total += conf\n            print('Test: %s, epo %s/%s, iter %s/%s, time %s' % (args.model_name, epo, args.epoch_max, it+1, len(test_loader),\n                 datetime.datetime.now().replace(microsecond=0)-start_datetime))\n    precision, recall, IoU = compute_results(conf_total)\n    writer.add_scalar('Test/average_precision', precision.mean(), epo)\n    writer.add_scalar('Test/average_recall', recall.mean(), epo)\n    writer.add_scalar('Test/average_IoU', IoU.mean(), epo)\n    for i in range(len(precision)):\n        writer.add_scalar(\"Test(class)/precision_class_%s\" % label_list[i], precision[i], epo)\n        writer.add_scalar(\"Test(class)/recall_class_%s\" % label_list[i], recall[i], epo)\n        writer.add_scalar('Test(class)/IoU_%s' % label_list[i], IoU[i], epo)\n    if epo == 0:\n        with open(testing_results_file, 'w') as f:\n            f.write(\"# %s, initial lr: %s, batch size: %s, date: %s \\n\" % (args.model_name, args.lr_start, args.batch_size, datetime.date.today()))\n            f.write(\"# epoch: unlabeled, car, person, bike, curve, car_stop, guardrail, color_cone, bump, average(nan_to_num). 
(Acc %, IoU %)\\n\")\n    with open(testing_results_file, 'a') as f:\n        f.write(str(epo)+': ')\n        for i in range(len(precision)):\n            f.write('%0.4f, %0.4f, ' % (100*recall[i], 100*IoU[i]))\n        f.write('%0.4f, %0.4f\\n' % (100*np.mean(np.nan_to_num(recall)), 100*np.mean(np.nan_to_num(IoU))))\n    print('saving testing results.')\n    with open(testing_results_file, \"r\") as file:\n        writer.add_text('testing_results', file.read().replace('\\n', '  \\n'), epo)\n\nif __name__ == '__main__':\n\n    torch.cuda.set_device(args.gpu)\n    print(\"\\nthe PyTorch version:\", torch.__version__)\n    print(\"the GPU count:\", torch.cuda.device_count())\n    print(\"the currently used GPU:\", torch.cuda.current_device(), '\\n')\n\n    model = eval(args.model_name)(n_class=args.n_class)  # instantiate the model class whose name matches args.model_name\n    if args.gpu >= 0: model.cuda(args.gpu)\n    optimizer = torch.optim.SGD(model.parameters(), lr=args.lr_start, momentum=0.9, weight_decay=0.0005)\n    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=args.lr_decay, last_epoch=-1)\n\n    # preparing folders; note that this removes all previous results under ./runs\n    if os.path.exists(\"./runs\"):\n        shutil.rmtree(\"./runs\")\n    weight_dir = os.path.join(\"./runs\", args.model_name)\n    os.makedirs(weight_dir)\n    os.chmod(weight_dir, stat.S_IRWXO)  # allow the folder created in docker to be read, written, and executed by the local machine\n\n    writer = SummaryWriter(\"./runs/tensorboard_log\")\n    os.chmod(\"./runs/tensorboard_log\", stat.S_IRWXO)  # allow the folder created in docker to be read, written, and executed by the local machine\n    os.chmod(\"./runs\", stat.S_IRWXO)\n\n    print('training %s on GPU #%d with pytorch' % (args.model_name, args.gpu))\n    print('from epoch %d / %s' % (args.epoch_from, args.epoch_max))\n    print('weights will be saved in: %s' % weight_dir)\n\n    train_dataset = MF_dataset(data_dir=args.data_dir, split='train', transform=augmentation_methods)\n    val_dataset  = MF_dataset(data_dir=args.data_dir, 
split='val')\n    test_dataset = MF_dataset(data_dir=args.data_dir, split='test')\n\n    train_loader  = DataLoader(\n        dataset     = train_dataset,\n        batch_size  = args.batch_size,\n        shuffle     = True,\n        num_workers = args.num_workers,\n        pin_memory  = True,\n        drop_last   = False\n    )\n    val_loader  = DataLoader(\n        dataset     = val_dataset,\n        batch_size  = args.batch_size,\n        shuffle     = False,\n        num_workers = args.num_workers,\n        pin_memory  = True,\n        drop_last   = False\n    )\n    test_loader = DataLoader(\n        dataset      = test_dataset,\n        batch_size   = args.batch_size,\n        shuffle      = False,\n        num_workers  = args.num_workers,\n        pin_memory   = True,\n        drop_last    = False\n    )\n    start_datetime = datetime.datetime.now().replace(microsecond=0)\n    accIter = {'train': 0, 'val': 0}\n    for epo in range(args.epoch_from, args.epoch_max):\n        print('\\ntrain %s, epo #%s begin...' % (args.model_name, epo))\n        # scheduler.step() # if using PyTorch 0.4.1, call the scheduler here instead\n        train(epo, model, train_loader, optimizer)\n        validation(epo, model, val_loader)\n\n        checkpoint_model_file = os.path.join(weight_dir, str(epo) + '.pth')\n        print('saving checkpoint: %s' % checkpoint_model_file)\n        torch.save(model.state_dict(), checkpoint_model_file)\n\n        testing(epo, model, test_loader)  # testing is for reference only; you may comment out this line during training\n        scheduler.step()  # with PyTorch 1.1 or above, step the scheduler after the optimizer updates\n"
  },
  {
    "path": "util/MF_dataset.py",
    "content": "# By Yuxiang Sun, Jul. 3, 2021\n# Email: sun.yuxiang@outlook.com\n\nimport os, torch\nfrom torch.utils.data.dataset import Dataset\nimport numpy as np\nimport PIL\n\nclass MF_dataset(Dataset):\n\n    def __init__(self, data_dir, split, input_h=480, input_w=640, transform=[]):\n        super(MF_dataset, self).__init__()\n\n        assert split in ['train', 'val', 'test', 'test_day', 'test_night', 'val_test', 'most_wanted'], \\\n            'split must be \"train\"|\"val\"|\"test\"|\"test_day\"|\"test_night\"|\"val_test\"|\"most_wanted\"'\n\n        with open(os.path.join(data_dir, split+'.txt'), 'r') as f:\n            self.names = [name.strip() for name in f.readlines()]\n\n        self.data_dir  = data_dir\n        self.split     = split\n        self.input_h   = input_h\n        self.input_w   = input_w\n        self.transform = transform\n        self.n_data    = len(self.names)\n\n    def read_image(self, name, folder):\n        file_path = os.path.join(self.data_dir, '%s/%s.png' % (folder, name))\n        image     = np.asarray(PIL.Image.open(file_path))\n        return image\n\n    def __getitem__(self, index):\n        name  = self.names[index]\n        image = self.read_image(name, 'images')\n        label = self.read_image(name, 'labels')\n        for func in self.transform:\n            image, label = func(image, label)\n\n        image = np.asarray(PIL.Image.fromarray(image).resize((self.input_w, self.input_h)))\n        image = image.astype('float32')\n        image = np.transpose(image, (2,0,1))/255.0  # HWC -> CHW, normalized to [0,1]\n        label = np.asarray(PIL.Image.fromarray(label).resize((self.input_w, self.input_h), resample=PIL.Image.NEAREST))  # nearest-neighbour keeps label ids intact\n        label = label.astype('int64')\n\n        return torch.tensor(image), torch.tensor(label), name\n\n    def __len__(self):\n        return self.n_data\n"
  },
  {
    "path": "util/__init__.py",
    "content": "\n"
  },
  {
    "path": "util/augmentation.py",
    "content": "import numpy as np\nfrom PIL import Image\n\n\nclass RandomFlip():\n    def __init__(self, prob=0.5):\n        self.prob = prob\n\n    def __call__(self, image, label):\n        if np.random.rand() < self.prob:\n            image = image[:,::-1]  # horizontal flip\n            label = label[:,::-1]\n        return image, label\n\n\nclass RandomCrop():\n    def __init__(self, crop_rate=0.1, prob=1.0):\n        self.crop_rate = crop_rate\n        self.prob      = prob\n\n    def __call__(self, image, label):\n        if np.random.rand() < self.prob:\n            h, w, c = image.shape  # numpy images are (height, width, channels)\n\n            y1 = np.random.randint(0, h*self.crop_rate)\n            x1 = np.random.randint(0, w*self.crop_rate)\n            y2 = np.random.randint(h-h*self.crop_rate, h+1)\n            x2 = np.random.randint(w-w*self.crop_rate, w+1)\n\n            image = image[y1:y2, x1:x2]\n            label = label[y1:y2, x1:x2]\n\n        return image, label\n\n\nclass RandomCropOut():\n    def __init__(self, crop_rate=0.2, prob=1.0):\n        self.crop_rate = crop_rate\n        self.prob      = prob\n\n    def __call__(self, image, label):\n        if np.random.rand() < self.prob:\n            h, w, c = image.shape\n\n            y1 = np.random.randint(0, h*self.crop_rate)\n            x1 = np.random.randint(0, w*self.crop_rate)\n            y2 = int(y1 + h*self.crop_rate)\n            x2 = int(x1 + w*self.crop_rate)\n\n            image[y1:y2, x1:x2] = 0\n            label[y1:y2, x1:x2] = 0\n\n        return image, label\n\n\nclass RandomBrightness():\n    def __init__(self, bright_range=0.15, prob=0.9):\n        self.bright_range = bright_range\n        self.prob = prob\n\n    def __call__(self, image, label):\n        if np.random.rand() < self.prob:\n            bright_factor = np.random.uniform(1-self.bright_range, 1+self.bright_range)\n            image = (image * bright_factor).clip(0, 255).astype(image.dtype)  # clip to avoid uint8 overflow\n\n        return image, label\n\n\nclass RandomNoise():\n    def __init__(self, noise_range=5, prob=0.9):\n        self.noise_range = noise_range\n        self.prob = prob\n\n    def __call__(self, image, label):\n        if np.random.rand() < self.prob:\n            h, w, c = image.shape\n\n            noise = np.random.randint(\n                -self.noise_range,\n                self.noise_range,\n                (h, w, c)\n            )\n\n            image = (image + noise).clip(0,255).astype(image.dtype)\n\n        return image, label\n"
  },
  {
    "path": "util/util.py",
    "content": "# By Yuxiang Sun, Dec. 4, 2020\n# Email: sun.yuxiang@outlook.com\n\nimport numpy as np\nfrom PIL import Image\n\n# 0:unlabeled, 1:car, 2:person, 3:bike, 4:curve, 5:car_stop, 6:guardrail, 7:color_cone, 8:bump\ndef get_palette():\n    unlabelled = [0,0,0]\n    car        = [64,0,128]\n    person     = [64,64,0]\n    bike       = [0,128,192]\n    curve      = [0,0,192]\n    car_stop   = [128,128,0]\n    guardrail  = [64,64,128]\n    color_cone = [192,128,128]\n    bump       = [192,64,0]\n    palette    = np.array([unlabelled, car, person, bike, curve, car_stop, guardrail, color_cone, bump])\n    return palette\n\ndef visualize(image_name, predictions, weight_name):\n    palette = get_palette()\n    for (i, pred) in enumerate(predictions):\n        pred = pred.cpu().numpy()\n        img = np.zeros((pred.shape[0], pred.shape[1], 3), dtype=np.uint8)\n        for cid in range(0, len(palette)): # fix the mistake from the MFNet code on Dec.27, 2019\n            img[pred == cid] = palette[cid]\n        img = Image.fromarray(np.uint8(img))\n        img.save('runs/Pred_' + weight_name + '_' + image_name[i] + '.png')\n\ndef compute_results(conf_total):\n    n_class = conf_total.shape[0]\n    consider_unlabeled = True  # the unlabeled class must be included, so keep this True\n    if consider_unlabeled:\n        start_index = 0\n    else:\n        start_index = 1\n    precision_per_class = np.zeros(n_class)\n    recall_per_class = np.zeros(n_class)\n    iou_per_class = np.zeros(n_class)\n    for cid in range(start_index, n_class): # cid: class id\n        if conf_total[start_index:, cid].sum() == 0:\n            precision_per_class[cid] = np.nan\n        else:\n            precision_per_class[cid] = float(conf_total[cid, cid]) / float(conf_total[start_index:, cid].sum()) # precision = TP/(TP+FP)\n        if conf_total[cid, start_index:].sum() == 0:\n            recall_per_class[cid] = np.nan\n        else:\n            recall_per_class[cid] = float(conf_total[cid, cid]) / float(conf_total[cid, start_index:].sum()) # recall = TP/(TP+FN)\n        if (conf_total[cid, start_index:].sum() + conf_total[start_index:, cid].sum() - conf_total[cid, cid]) == 0:\n            iou_per_class[cid] = np.nan\n        else:\n            iou_per_class[cid] = float(conf_total[cid, cid]) / float((conf_total[cid, start_index:].sum() + conf_total[start_index:, cid].sum() - conf_total[cid, cid])) # IoU = TP/(TP+FP+FN)\n\n    return precision_per_class, recall_per_class, iou_per_class\n"
  }
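  ,
  {
    "path": "examples/usage_sketch.py",
    "content": "# Hypothetical usage sketch (NOT part of the original repo): it shows how\n# MF_dataset and compute_results fit together. The data_dir below is an\n# assumed path; point it at your own copy of the dataset before running.\nimport numpy as np\nfrom torch.utils.data import DataLoader\nfrom util.MF_dataset import MF_dataset\nfrom util.util import compute_results\n\nif __name__ == '__main__':\n    val_dataset = MF_dataset(data_dir='./dataset/', split='val')  # assumed dataset location\n    val_loader  = DataLoader(val_dataset, batch_size=2, shuffle=False)\n    images, labels, names = next(iter(val_loader))\n    print(images.shape, labels.shape)  # image tensors are C x 480 x 640 per sample\n    # compute_results expects an n_class x n_class confusion matrix\n    # (rows: ground truth, columns: prediction); a tiny 2-class example:\n    conf = np.array([[50.0, 2.0],\n                     [3.0, 45.0]])\n    precision, recall, iou = compute_results(conf)\n    print('precision:', precision, 'recall:', recall, 'IoU:', iou)\n"
  }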
]