[
  {
    "path": ".gitignore",
    "content": "work_dirs/\npredicts/\noutput/\ndata/\ndata\n\n__pycache__/\n*/*.un~\n.*.swp\n\n\n\n*.egg-info/\n*.egg\n\noutput.txt\n.vscode/*\n.DS_Store\ntmp.*\n*.pt\n*.pth\n*.un~\n"
  },
  {
    "path": "INSTALL.md",
    "content": "\n# Install\n\n1. Clone the RESA repository\n    ```\n    git clone https://github.com/zjulearning/resa.git\n    ```\n    We call this directory as `$RESA_ROOT`\n\n2. Create a conda virtual environment and activate it (conda is optional)\n\n    ```Shell\n    conda create -n resa python=3.8 -y\n    conda activate resa\n    ```\n\n3. Install dependencies\n\n    ```Shell\n    # Install pytorch firstly, the cudatoolkit version should be same in your system. (you can also use pip to install pytorch and torchvision)\n    conda install pytorch torchvision cudatoolkit=10.1 -c pytorch\n\n    # Or you can install via pip\n    pip install torch torchvision\n\n    # Install python packages\n    pip install -r requirements.txt\n    ```\n\n4. Data preparation\n\n    Download [CULane](https://xingangpan.github.io/projects/CULane.html) and [Tusimple](https://github.com/TuSimple/tusimple-benchmark/issues/3). Then extract them to `$CULANEROOT` and `$TUSIMPLEROOT`. Create link to `data` directory.\n    \n    ```Shell\n    cd $RESA_ROOT\n    ln -s $CULANEROOT data/CULane\n    ln -s $TUSIMPLEROOT data/tusimple\n    ```\n\n    For Tusimple, the segmentation annotation is not provided, hence we need to generate segmentation from the json annotation. 
\n\n    ```Shell\n    python scripts/convert_tusimple.py --root $TUSIMPLEROOT\n    # this will generate segmentation labels and two list files: train_gt.txt and test.txt\n    ```\n\n    For CULane, you should have a structure like this:\n    ```\n    $RESA_ROOT/data/CULane/driver_xx_xxframe    # data folders x6\n    $RESA_ROOT/data/CULane/laneseg_label_w16    # lane segmentation labels\n    $RESA_ROOT/data/CULane/list                 # data lists\n    ```\n\n    For Tusimple, you should have a structure like this:\n    ```\n    $RESA_ROOT/data/tusimple/clips # data folders\n    $RESA_ROOT/data/tusimple/label_data_xxxx.json # label json files x4\n    $RESA_ROOT/data/tusimple/test_tasks_0627.json # test tasks json file\n    $RESA_ROOT/data/tusimple/test_label.json # test label json file\n    ```\n\n5. Install CULane evaluation tools. \n\n    This tool requires OpenCV C++. Please follow [this guide](https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html) to install OpenCV C++, or simply install it with `sudo apt-get install libopencv-dev`.\n\n    Then compile the evaluation tool of CULane.\n    ```Shell\n    cd $RESA_ROOT/runner/evaluator/culane/lane_evaluation\n    make\n    cd -\n    ```\n    \n    Note that the default `opencv` version is 3. If you use OpenCV 2, change `OPENCV_VERSION := 3` to `OPENCV_VERSION := 2` in the `Makefile`."
  },
  {
    "path": "LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2021 Tu Zheng\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "README.md",
    "content": "# RESA \nPyTorch implementation of the paper \"[RESA: Recurrent Feature-Shift Aggregator for Lane Detection](https://arxiv.org/abs/2008.13719)\".\n\nOur paper has been accepted by AAAI2021.\n\n**News**: We also release RESA on [LaneDet](https://github.com/Turoad/lanedet). It's also recommended for you to try LaneDet.\n\n## Introduction\n![intro](intro.png \"intro\")\n- RESA shifts sliced\nfeature map recurrently in vertical and horizontal directions\nand enables each pixel to gather global information.\n- RESA achieves SOTA results on CULane and Tusimple Dataset.\n\n## Get started\n1. Clone the RESA repository\n    ```\n    git clone https://github.com/zjulearning/resa.git\n    ```\n    We call this directory as `$RESA_ROOT`\n\n2. Create a conda virtual environment and activate it (conda is optional)\n\n    ```Shell\n    conda create -n resa python=3.8 -y\n    conda activate resa\n    ```\n\n3. Install dependencies\n\n    ```Shell\n    # Install pytorch firstly, the cudatoolkit version should be same in your system. (you can also use pip to install pytorch and torchvision)\n    conda install pytorch torchvision cudatoolkit=10.1 -c pytorch\n\n    # Or you can install via pip\n    pip install torch torchvision\n\n    # Install python packages\n    pip install -r requirements.txt\n    ```\n\n4. Data preparation\n\n    Download [CULane](https://xingangpan.github.io/projects/CULane.html) and [Tusimple](https://github.com/TuSimple/tusimple-benchmark/issues/3). Then extract them to `$CULANEROOT` and `$TUSIMPLEROOT`. 
Create symlinks in the `data` directory.\n    \n    ```Shell\n    cd $RESA_ROOT\n    mkdir -p data\n    ln -s $CULANEROOT data/CULane\n    ln -s $TUSIMPLEROOT data/tusimple\n    ```\n\n    For CULane, you should have a structure like this:\n    ```\n    $CULANEROOT/driver_xx_xxframe    # data folders x6\n    $CULANEROOT/laneseg_label_w16    # lane segmentation labels\n    $CULANEROOT/list                 # data lists\n    ```\n\n    For Tusimple, you should have a structure like this:\n    ```\n    $TUSIMPLEROOT/clips # data folders\n    $TUSIMPLEROOT/label_data_xxxx.json # label json files x4\n    $TUSIMPLEROOT/test_tasks_0627.json # test tasks json file\n    $TUSIMPLEROOT/test_label.json # test label json file\n    ```\n\n    For Tusimple, the segmentation annotation is not provided, so we need to generate segmentation masks from the JSON annotations. \n\n    ```Shell\n    python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT\n    # this will generate the seg_label directory\n    ```\n\n5. Install CULane evaluation tools. \n\n    This tool requires OpenCV C++. Please follow [this guide](https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html) to install OpenCV C++, or simply install it with `sudo apt-get install libopencv-dev`.\n\n    Then compile the evaluation tool of CULane.\n    ```Shell\n    cd $RESA_ROOT/runner/evaluator/culane/lane_evaluation\n    make\n    cd -\n    ```\n    \n    Note that the default `opencv` version is 3. 
If you use OpenCV 2, change `OPENCV_VERSION := 3` to `OPENCV_VERSION := 2` in the `Makefile`.\n\n\n## Training\n\nFor training, run\n\n```Shell\npython main.py [configs/path_to_your_config] --gpus [gpu_ids]\n```\n\n\nFor example, run\n```Shell\npython main.py configs/culane.py --gpus 0 1 2 3\n```\n\n## Testing\nFor testing, run\n```Shell\npython main.py [configs/path_to_your_config] --validate --load_from [path_to_your_model] --gpus [gpu_ids]\n```\n\nFor example, run\n```Shell\npython main.py configs/culane.py --validate --load_from culane_resnet50.pth --gpus 0 1 2 3\n\npython main.py configs/tusimple.py --validate --load_from tusimple_resnet34.pth --gpus 0 1 2 3\n```\n\n\nWe provide two trained ResNet models, one for CULane and one for Tusimple. Download our best-performing models here (Tusimple: [GoogleDrive](https://drive.google.com/file/d/1M1xi82y0RoWUwYYG9LmZHXWSD2D60o0D/view?usp=sharing)/[BaiduDrive(code:s5ii)](https://pan.baidu.com/s/1CgJFrt9OHe-RUNooPpHRGA),\nCULane: [GoogleDrive](https://drive.google.com/file/d/1pcqq9lpJ4ixJgFVFndlPe42VgVsjgn0Q/view?usp=sharing)/[BaiduDrive(code:rlwj)](https://pan.baidu.com/s/1ODKAZxpKrZIPXyaNnxcV3g)).\n\n## Visualization\nTo visualize predictions, just add `--view`.\n\nFor example:\n```Shell\npython main.py configs/culane.py --validate --load_from culane_resnet50.pth --gpus 0 1 2 3 --view\n```\nYou will find the results in `work_dirs/[DATASET]/xxx/vis`.\n\n## Citation\nIf you use our method, please consider citing:\n```BibTeX\n@inproceedings{zheng2021resa,\n  title={RESA: Recurrent Feature-Shift Aggregator for Lane Detection},\n  author={Zheng, Tu and Fang, Hao and Zhang, Yi and Tang, Wenjian and Yang, Zheng and Liu, Haifeng and Cai, Deng},\n  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n  volume={35},\n  number={4},\n  pages={3547--3554},\n  year={2021}\n}\n```\n\n<!-- ## Thanks\n\nThe evaluation code is modified from [SCNN](https://github.com/XingangPan/SCNN) and [Tusimple 
Benchmark](https://github.com/TuSimple/tusimple-benchmark). -->\n"
  },
  {
    "path": "configs/culane.py",
    "content": "net = dict(\n    type='RESANet',\n)\n\nbackbone = dict(\n    type='ResNetWrapper',\n    resnet='resnet50',\n    pretrained=True,\n    replace_stride_with_dilation=[False, True, True],\n    out_conv=True,\n    fea_stride=8,\n)\n\nresa = dict(\n    type='RESA',\n    alpha=2.0,\n    iter=4,\n    input_channel=128,\n    conv_stride=9,\n)\n\ndecoder = 'PlainDecoder'        \n\ntrainer = dict(\n    type='RESA'\n)\n\nevaluator = dict(\n    type='CULane',        \n)\n\noptimizer = dict(\n  type='sgd',\n  lr=0.025,\n  weight_decay=1e-4,\n  momentum=0.9\n)\n\nepochs = 12\nbatch_size = 8\ntotal_iter = (88880 // batch_size) * epochs\nimport math\nscheduler = dict(\n    type = 'LambdaLR',\n    lr_lambda = lambda _iter : math.pow(1 - _iter/total_iter, 0.9)\n)\n\nloss_type = 'dice_loss'\nseg_loss_weight = 2.\neval_ep = 6\nsave_ep = epochs\n\nbg_weight = 0.4\n\nimg_norm = dict(\n    mean=[103.939, 116.779, 123.68],\n    std=[1., 1., 1.]\n)\n\nimg_height = 288\nimg_width = 800\ncut_height = 240 \n\ndataset_path = './data/CULane'\ndataset = dict(\n    train=dict(\n        type='CULane',\n        img_path=dataset_path,\n        data_list='train_gt.txt',\n    ),\n    val=dict(\n        type='CULane',\n        img_path=dataset_path,\n        data_list='test.txt',\n    ),\n    test=dict(\n        type='CULane',\n        img_path=dataset_path,\n        data_list='test.txt',\n    )\n)\n\n\nworkers = 12\nnum_classes = 4 + 1\nignore_label = 255\nlog_interval = 500\n"
  },
  {
    "path": "configs/tusimple.py",
    "content": "net = dict(\n    type='RESANet',\n)\n\nbackbone = dict(\n    type='ResNetWrapper',\n    resnet='resnet34',\n    pretrained=True,\n    replace_stride_with_dilation=[False, True, True],\n    out_conv=True,\n    fea_stride=8,\n)\n\nresa = dict(\n    type='RESA',\n    alpha=2.0,\n    iter=5,\n    input_channel=128,\n    conv_stride=9,\n)\n\ndecoder = 'BUSD'        \n\ntrainer = dict(\n    type='RESA'\n)\n\nevaluator = dict(\n    type='Tusimple',        \n    thresh = 0.60\n)\n\noptimizer = dict(\n  type='sgd',\n  lr=0.020,\n  weight_decay=1e-4,\n  momentum=0.9\n)\n\ntotal_iter = 80000\nimport math\nscheduler = dict(\n    type = 'LambdaLR',\n    lr_lambda = lambda _iter : math.pow(1 - _iter/total_iter, 0.9)\n)\n\nbg_weight = 0.4\n\nimg_norm = dict(\n    mean=[103.939, 116.779, 123.68],\n    std=[1., 1., 1.]\n)\n\nimg_height = 368\nimg_width = 640\ncut_height = 160\nseg_label = \"seg_label\"\n\ndataset_path = './data/tusimple'\ntest_json_file = './data/tusimple/test_label.json'\n\ndataset = dict(\n    train=dict(\n        type='TuSimple',\n        img_path=dataset_path,\n        data_list='train_val_gt.txt',\n    ),\n    val=dict(\n        type='TuSimple',\n        img_path=dataset_path,\n        data_list='test_gt.txt'\n    ),\n    test=dict(\n        type='TuSimple',\n        img_path=dataset_path,\n        data_list='test_gt.txt'\n    )\n)\n\n\nloss_type = 'cross_entropy'\nseg_loss_weight = 1.0\n\n\nbatch_size = 4\nworkers = 12\nnum_classes = 6 + 1\nignore_label = 255\nepochs = 300\nlog_interval = 100\neval_ep = 1\nsave_ep = epochs\nlog_note = ''\n"
  },
  {
    "path": "datasets/__init__.py",
    "content": "from .registry import build_dataset, build_dataloader\n\nfrom .tusimple import TuSimple\nfrom .culane import CULane\n"
  },
  {
    "path": "datasets/base_dataset.py",
    "content": "import os.path as osp\nimport os\nimport numpy as np\nimport cv2\nimport torch\nfrom torch.utils.data import Dataset\nimport torchvision\nimport utils.transforms as tf\nfrom .registry import DATASETS\n\n\n@DATASETS.register_module\nclass BaseDataset(Dataset):\n    def __init__(self, img_path, data_list, list_path='list', cfg=None):\n        self.cfg = cfg\n        self.img_path = img_path\n        self.list_path = osp.join(img_path, list_path)\n        self.data_list = data_list\n        self.is_training = ('train' in data_list)\n\n        self.img_name_list = []\n        self.full_img_path_list = []\n        self.label_list = []\n        self.exist_list = []\n\n        self.transform = self.transform_train() if self.is_training else self.transform_val()\n\n        self.init()\n\n    def transform_train(self):\n        raise NotImplementedError()\n\n    def transform_val(self):\n        val_transform = torchvision.transforms.Compose([\n            tf.SampleResize((self.cfg.img_width, self.cfg.img_height)),\n            tf.GroupNormalize(mean=(self.cfg.img_norm['mean'], (0, )), std=(\n                self.cfg.img_norm['std'], (1, ))),\n        ])\n        return val_transform\n\n    def view(self, img, coords, file_path=None):\n        for coord in coords:\n            for x, y in coord:\n                if x <= 0 or y <= 0:\n                    continue\n                x, y = int(x), int(y)\n                cv2.circle(img, (x, y), 4, (255, 0, 0), 2)\n\n        if file_path is not None:\n            if not os.path.exists(osp.dirname(file_path)):\n                os.makedirs(osp.dirname(file_path))\n            cv2.imwrite(file_path, img)\n\n\n    def init(self):\n        raise NotImplementedError()\n\n\n    def __len__(self):\n        return len(self.full_img_path_list)\n\n    def __getitem__(self, idx):\n        img = cv2.imread(self.full_img_path_list[idx]).astype(np.float32)\n        img = img[self.cfg.cut_height:, :, :]\n\n        if 
self.is_training:\n            label = cv2.imread(self.label_list[idx], cv2.IMREAD_UNCHANGED)\n            if len(label.shape) > 2:\n                label = label[:, :, 0]\n            label = label.squeeze()\n            label = label[self.cfg.cut_height:, :]\n            exist = self.exist_list[idx]\n            if self.transform:\n                img, label = self.transform((img, label))\n            label = torch.from_numpy(label).contiguous().long()\n        else:\n            img, = self.transform((img,))\n\n        img = torch.from_numpy(img).permute(2, 0, 1).contiguous().float()\n        meta = {'full_img_path': self.full_img_path_list[idx],\n                'img_name': self.img_name_list[idx]}\n\n        data = {'img': img, 'meta': meta}\n        if self.is_training:\n            data.update({'label': label, 'exist': exist})\n        return data\n"
  },
  {
    "path": "datasets/culane.py",
    "content": "import os\nimport os.path as osp\nimport numpy as np\nimport torchvision\nimport utils.transforms as tf\nfrom .base_dataset import BaseDataset\nfrom .registry import DATASETS\nimport cv2\nimport torch\n\n\n@DATASETS.register_module\nclass CULane(BaseDataset):\n    def __init__(self, img_path, data_list, cfg=None):\n        super().__init__(img_path, data_list, cfg=cfg)\n        self.ori_imgh = 590\n        self.ori_imgw = 1640\n\n    def init(self):\n        with open(osp.join(self.list_path, self.data_list)) as f:\n            for line in f:\n                line_split = line.strip().split(\" \")\n                self.img_name_list.append(line_split[0])\n                self.full_img_path_list.append(self.img_path + line_split[0])\n                if not self.is_training:\n                    continue\n                self.label_list.append(self.img_path + line_split[1])\n                self.exist_list.append(\n                    np.array([int(line_split[2]), int(line_split[3]),\n                              int(line_split[4]), int(line_split[5])]))\n\n    def transform_train(self):\n        train_transform = torchvision.transforms.Compose([\n            tf.GroupRandomRotation(degree=(-2, 2)),\n            tf.GroupRandomHorizontalFlip(),\n            tf.SampleResize((self.cfg.img_width, self.cfg.img_height)),\n            tf.GroupNormalize(mean=(self.cfg.img_norm['mean'], (0, )), std=(\n                self.cfg.img_norm['std'], (1, ))),\n        ])\n        return train_transform\n\n    def probmap2lane(self, probmaps, exists, pts=18):\n        coords = []\n        probmaps = probmaps[1:, ...]\n        exists = exists > 0.5\n        for probmap, exist in zip(probmaps, exists):\n            if exist == 0:\n                continue\n            probmap = cv2.blur(probmap, (9, 9), borderType=cv2.BORDER_REPLICATE)\n            thr = 0.3\n            coordinate = np.zeros(pts)\n            cut_height = self.cfg.cut_height\n            for i in 
range(pts):\n                line = probmap[round(\n                    self.cfg.img_height-i*20/(self.ori_imgh-cut_height)*self.cfg.img_height)-1]\n\n                if np.max(line) > thr:\n                    coordinate[i] = np.argmax(line)+1\n            if np.sum(coordinate > 0) < 2:\n                continue\n    \n            img_coord = np.zeros((pts, 2))\n            img_coord[:, :] = -1\n            for idx, value in enumerate(coordinate):\n                if value > 0:\n                    img_coord[idx][0] = round(value*self.ori_imgw/self.cfg.img_width-1)\n                    img_coord[idx][1] = round(self.ori_imgh-idx*20-1)\n    \n            img_coord = img_coord.astype(int)\n            coords.append(img_coord)\n    \n        return coords\n"
  },
  {
    "path": "datasets/registry.py",
    "content": "from utils import Registry, build_from_cfg\n\nimport torch\n\nDATASETS = Registry('datasets')\n\ndef build(cfg, registry, default_args=None):\n    if isinstance(cfg, list):\n        modules = [\n            build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg\n        ]\n        return nn.Sequential(*modules)\n    else:\n        return build_from_cfg(cfg, registry, default_args)\n\n\ndef build_dataset(split_cfg, cfg):\n    args = split_cfg.copy()\n    args.pop('type')\n    args = args.to_dict()\n    args['cfg'] = cfg\n    return build(split_cfg, DATASETS, default_args=args)\n\ndef build_dataloader(split_cfg, cfg, is_train=True):\n    if is_train:\n        shuffle = True\n    else:\n        shuffle = False\n\n    dataset = build_dataset(split_cfg, cfg)\n\n    data_loader = torch.utils.data.DataLoader(\n        dataset, batch_size = cfg.batch_size, shuffle = shuffle,\n        num_workers = cfg.workers, pin_memory = False, drop_last = False)\n\n    return data_loader\n"
  },
  {
    "path": "datasets/tusimple.py",
    "content": "import os.path as osp\nimport numpy as np\nimport cv2\nimport torchvision\nimport utils.transforms as tf\nfrom .base_dataset import BaseDataset\nfrom .registry import DATASETS\n\n\n@DATASETS.register_module\nclass TuSimple(BaseDataset):\n    def __init__(self, img_path, data_list, cfg=None):\n        super().__init__(img_path, data_list, 'seg_label/list', cfg)\n\n    def transform_train(self):\n        input_mean = self.cfg.img_norm['mean']\n        train_transform = torchvision.transforms.Compose([\n            tf.GroupRandomRotation(),\n            tf.GroupRandomHorizontalFlip(),\n            tf.SampleResize((self.cfg.img_width, self.cfg.img_height)),\n            tf.GroupNormalize(mean=(self.cfg.img_norm['mean'], (0, )), std=(\n                self.cfg.img_norm['std'], (1, ))),\n        ])\n        return train_transform\n\n\n    def init(self):\n        with open(osp.join(self.list_path, self.data_list)) as f:\n            for line in f:\n                line_split = line.strip().split(\" \")\n                self.img_name_list.append(line_split[0])\n                self.full_img_path_list.append(self.img_path + line_split[0])\n                if not self.is_training:\n                    continue\n                self.label_list.append(self.img_path + line_split[1])\n                self.exist_list.append(\n                    np.array([int(line_split[2]), int(line_split[3]),\n                              int(line_split[4]), int(line_split[5]),\n                              int(line_split[6]), int(line_split[7])\n                              ]))\n\n    def fix_gap(self, coordinate):\n        if any(x > 0 for x in coordinate):\n            start = [i for i, x in enumerate(coordinate) if x > 0][0]\n            end = [i for i, x in reversed(list(enumerate(coordinate))) if x > 0][0]\n            lane = coordinate[start:end+1]\n            if any(x < 0 for x in lane):\n                gap_start = [i for i, x in enumerate(\n                    
lane[:-1]) if x > 0 and lane[i+1] < 0]\n                gap_end = [i+1 for i,\n                           x in enumerate(lane[:-1]) if x < 0 and lane[i+1] > 0]\n                gap_id = [i for i, x in enumerate(lane) if x < 0]\n                if len(gap_start) == 0 or len(gap_end) == 0:\n                    return coordinate\n                for id in gap_id:\n                    for i in range(len(gap_start)):\n                        if i >= len(gap_end):\n                            return coordinate\n                        if id > gap_start[i] and id < gap_end[i]:\n                            gap_width = float(gap_end[i] - gap_start[i])\n                            lane[id] = int((id - gap_start[i]) / gap_width * lane[gap_end[i]] + (\n                                gap_end[i] - id) / gap_width * lane[gap_start[i]])\n                if not all(x > 0 for x in lane):\n                    print(\"Gaps still exist!\")\n                coordinate[start:end+1] = lane\n        return coordinate\n\n    def is_short(self, lane):\n        start = [i for i, x in enumerate(lane) if x > 0]\n        if not start:\n            return 1\n        else:\n            return 0\n\n    def get_lane(self, prob_map, y_px_gap, pts, thresh, resize_shape=None):\n        \"\"\"\n        Arguments:\n        ----------\n        prob_map: prob map for single lane, np array size (h, w)\n        resize_shape:  reshape size target, (H, W)\n    \n        Return:\n        ----------\n        coords: x coords bottom up every y_px_gap px, 0 for non-exist, in resized shape\n        \"\"\"\n        if resize_shape is None:\n            resize_shape = prob_map.shape\n        h, w = prob_map.shape\n        H, W = resize_shape\n        H -= self.cfg.cut_height\n    \n        coords = np.zeros(pts)\n        coords[:] = -1.0\n        for i in range(pts):\n            y = int((H - 10 - i * y_px_gap) * h / H)\n            if y < 0:\n                break\n            line = prob_map[y, :]\n            id 
= np.argmax(line)\n            if line[id] > thresh:\n                coords[i] = int(id / w * W)\n        if (coords > 0).sum() < 2:\n            coords = np.zeros(pts)\n        self.fix_gap(coords)\n        #print(coords.shape)\n\n        return coords\n\n    def probmap2lane(self, seg_pred, exist, resize_shape=(720, 1280), smooth=True, y_px_gap=10, pts=56, thresh=0.6):\n        \"\"\"\n        Arguments:\n        ----------\n        seg_pred:      np.array size (5, h, w)\n        resize_shape:  reshape size target, (H, W)\n        exist:       list of existence, e.g. [0, 1, 1, 0]\n        smooth:      whether to smooth the probability or not\n        y_px_gap:    y pixel gap for sampling\n        pts:     how many points for one lane\n        thresh:  probability threshold\n    \n        Return:\n        ----------\n        coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]\n        \"\"\"\n        if resize_shape is None:\n            resize_shape = seg_pred.shape[1:]  # seg_pred (5, h, w)\n        _, h, w = seg_pred.shape\n        H, W = resize_shape\n        coordinates = []\n    \n        for i in range(self.cfg.num_classes - 1):\n            prob_map = seg_pred[i + 1]\n            if smooth:\n                prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)\n            coords = self.get_lane(prob_map, y_px_gap, pts, thresh, resize_shape)\n            if self.is_short(coords):\n                continue\n            coordinates.append(\n                [[coords[j], H - 10 - j * y_px_gap] if coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in\n                 range(pts)])\n    \n    \n        if len(coordinates) == 0:\n            coords = np.zeros(pts)\n            coordinates.append(\n                [[coords[j], H - 10 - j * y_px_gap] if coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in\n                 range(pts)])\n        #print(coordinates)\n    \n        return coordinates\n"
  },
  {
    "path": "main.py",
    "content": "import os\nimport os.path as osp\nimport time\nimport shutil\nimport torch\nimport torchvision\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.nn.functional as F\nimport torch.optim\nimport cv2\nimport numpy as np\nimport models\nimport argparse\nfrom utils.config import Config\nfrom runner.runner import Runner \nfrom datasets import build_dataloader\n\n\ndef main():\n    args = parse_args()\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = ','.join(str(gpu) for gpu in args.gpus)\n\n    cfg = Config.fromfile(args.config)\n    cfg.gpus = len(args.gpus)\n\n    cfg.load_from = args.load_from\n    cfg.finetune_from = args.finetune_from\n    cfg.view = args.view\n\n    cfg.work_dirs = args.work_dirs + '/' + cfg.dataset.train.type\n\n    cudnn.benchmark = True\n    cudnn.fastest = True\n\n    runner = Runner(cfg)\n\n    if args.validate:\n        val_loader = build_dataloader(cfg.dataset.val, cfg, is_train=False)\n        runner.validate(val_loader)\n    else:\n        runner.train()\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Train a detector')\n    parser.add_argument('config', help='train config file path')\n    parser.add_argument(\n        '--work_dirs', type=str, default='work_dirs',\n        help='work dirs')\n    parser.add_argument(\n        '--load_from', default=None,\n        help='the checkpoint file to resume from')\n    parser.add_argument(\n        '--finetune_from', default=None,\n        help='whether to finetune from the checkpoint')\n    parser.add_argument(\n        '--validate',\n        action='store_true',\n        help='whether to evaluate the checkpoint during training')\n    parser.add_argument(\n        '--view',\n        action='store_true',\n        help='whether to show visualization result')\n    parser.add_argument('--gpus', nargs='+', type=int, default='0')\n    parser.add_argument('--seed', type=int,\n                        default=None, help='random seed')\n    args = 
parser.parse_args()\n\n    return args\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "models/__init__.py",
    "content": "from .resa import *\n"
  },
  {
    "path": "models/decoder.py",
    "content": "from torch import nn\nimport torch.nn.functional as F\n\nclass PlainDecoder(nn.Module):\n    def __init__(self, cfg):\n        super(PlainDecoder, self).__init__()\n        self.cfg = cfg\n\n        self.dropout = nn.Dropout2d(0.1)\n        self.conv8 = nn.Conv2d(128, cfg.num_classes, 1)\n\n    def forward(self, x):\n        x = self.dropout(x)\n        x = self.conv8(x)\n        x = F.interpolate(x, size=[self.cfg.img_height,  self.cfg.img_width],\n                           mode='bilinear', align_corners=False)\n\n        return x\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)\n\n\nclass non_bottleneck_1d(nn.Module):\n    def __init__(self, chann, dropprob, dilated):\n        super().__init__()\n\n        self.conv3x1_1 = nn.Conv2d(\n            chann, chann, (3, 1), stride=1, padding=(1, 0), bias=True)\n\n        self.conv1x3_1 = nn.Conv2d(\n            chann, chann, (1, 3), stride=1, padding=(0, 1), bias=True)\n\n        self.bn1 = nn.BatchNorm2d(chann, eps=1e-03)\n\n        self.conv3x1_2 = nn.Conv2d(chann, chann, (3, 1), stride=1, padding=(1 * dilated, 0), bias=True,\n                                   dilation=(dilated, 1))\n\n        self.conv1x3_2 = nn.Conv2d(chann, chann, (1, 3), stride=1, padding=(0, 1 * dilated), bias=True,\n                                   dilation=(1, dilated))\n\n        self.bn2 = nn.BatchNorm2d(chann, eps=1e-03)\n\n        self.dropout = nn.Dropout2d(dropprob)\n\n    def forward(self, input):\n        output = self.conv3x1_1(input)\n        output = F.relu(output)\n        output = self.conv1x3_1(output)\n        output = self.bn1(output)\n        output = F.relu(output)\n\n        output = self.conv3x1_2(output)\n        output = F.relu(output)\n        output = self.conv1x3_2(output)\n        output = self.bn2(output)\n\n        if (self.dropout.p != 0):\n            output = 
self.dropout(output)\n\n        # +input = identity (residual connection)\n        return F.relu(output + input)\n\n\nclass UpsamplerBlock(nn.Module):\n    def __init__(self, ninput, noutput, up_width, up_height):\n        super().__init__()\n\n        self.conv = nn.ConvTranspose2d(\n            ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True)\n\n        self.bn = nn.BatchNorm2d(noutput, eps=1e-3, track_running_stats=True)\n\n        self.follows = nn.ModuleList()\n        self.follows.append(non_bottleneck_1d(noutput, 0, 1))\n        self.follows.append(non_bottleneck_1d(noutput, 0, 1))\n\n        # interpolate\n        self.up_width = up_width\n        self.up_height = up_height\n        self.interpolate_conv = conv1x1(ninput, noutput)\n        self.interpolate_bn = nn.BatchNorm2d(\n            noutput, eps=1e-3, track_running_stats=True)\n\n    def forward(self, input):\n        output = self.conv(input)\n        output = self.bn(output)\n        out = F.relu(output)\n        for follow in self.follows:\n            out = follow(out)\n\n        interpolate_output = self.interpolate_conv(input)\n        interpolate_output = self.interpolate_bn(interpolate_output)\n        interpolate_output = F.relu(interpolate_output)\n\n        interpolate = F.interpolate(interpolate_output, size=[self.up_height,  self.up_width],\n                                    mode='bilinear', align_corners=False)\n\n        return out + interpolate\n\nclass BUSD(nn.Module):\n    def __init__(self, cfg):\n        super().__init__()\n        img_height = cfg.img_height\n        img_width = cfg.img_width\n        num_classes = cfg.num_classes\n\n        self.layers = nn.ModuleList()\n\n        self.layers.append(UpsamplerBlock(ninput=128, noutput=64,\n                                          up_height=int(img_height)//4, up_width=int(img_width)//4))\n        self.layers.append(UpsamplerBlock(ninput=64, noutput=32,\n                                          
up_height=int(img_height)//2, up_width=int(img_width)//2))\n        self.layers.append(UpsamplerBlock(ninput=32, noutput=16,\n                                          up_height=int(img_height)//1, up_width=int(img_width)//1))\n\n        self.output_conv = conv1x1(16, num_classes)\n\n    def forward(self, input):\n        output = input\n\n        for layer in self.layers:\n            output = layer(output)\n\n        output = self.output_conv(output)\n\n        return output\n"
  },
  {
    "path": "models/registry.py",
    "content": "from utils import Registry, build_from_cfg\n\nNET = Registry('net')\n\ndef build(cfg, registry, default_args=None):\n    if isinstance(cfg, list):\n        modules = [\n            build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg\n        ]\n        return nn.Sequential(*modules)\n    else:\n        return build_from_cfg(cfg, registry, default_args)\n\n\ndef build_net(cfg):\n    return build(cfg.net, NET, default_args=dict(cfg=cfg))\n"
  },
  {
    "path": "models/resa.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\n\nfrom models.registry import NET\nfrom .resnet import ResNetWrapper \nfrom .decoder import BUSD, PlainDecoder \n\n\nclass RESA(nn.Module):\n    def __init__(self, cfg):\n        super(RESA, self).__init__()\n        self.iter = cfg.resa.iter\n        chan = cfg.resa.input_channel\n        fea_stride = cfg.backbone.fea_stride\n        self.height = cfg.img_height // fea_stride\n        self.width = cfg.img_width // fea_stride\n        self.alpha = cfg.resa.alpha\n        conv_stride = cfg.resa.conv_stride\n\n        for i in range(self.iter):\n            conv_vert1 = nn.Conv2d(\n                chan, chan, (1, conv_stride),\n                padding=(0, conv_stride//2), groups=1, bias=False)\n            conv_vert2 = nn.Conv2d(\n                chan, chan, (1, conv_stride),\n                padding=(0, conv_stride//2), groups=1, bias=False)\n\n            setattr(self, 'conv_d'+str(i), conv_vert1)\n            setattr(self, 'conv_u'+str(i), conv_vert2)\n\n            conv_hori1 = nn.Conv2d(\n                chan, chan, (conv_stride, 1),\n                padding=(conv_stride//2, 0), groups=1, bias=False)\n            conv_hori2 = nn.Conv2d(\n                chan, chan, (conv_stride, 1),\n                padding=(conv_stride//2, 0), groups=1, bias=False)\n\n            setattr(self, 'conv_r'+str(i), conv_hori1)\n            setattr(self, 'conv_l'+str(i), conv_hori2)\n\n            idx_d = (torch.arange(self.height) + self.height //\n                     2**(self.iter - i)) % self.height\n            setattr(self, 'idx_d'+str(i), idx_d)\n\n            idx_u = (torch.arange(self.height) - self.height //\n                     2**(self.iter - i)) % self.height\n            setattr(self, 'idx_u'+str(i), idx_u)\n\n            idx_r = (torch.arange(self.width) + self.width //\n                     2**(self.iter - i)) % self.width\n            setattr(self, 'idx_r'+str(i), idx_r)\n\n        
    idx_l = (torch.arange(self.width) - self.width //\n                     2**(self.iter - i)) % self.width\n            setattr(self, 'idx_l'+str(i), idx_l)\n\n    def forward(self, x):\n        x = x.clone()\n\n        for direction in ['d', 'u']:\n            for i in range(self.iter):\n                conv = getattr(self, 'conv_' + direction + str(i))\n                idx = getattr(self, 'idx_' + direction + str(i))\n                x.add_(self.alpha * F.relu(conv(x[..., idx, :])))\n\n        for direction in ['r', 'l']:\n            for i in range(self.iter):\n                conv = getattr(self, 'conv_' + direction + str(i))\n                idx = getattr(self, 'idx_' + direction + str(i))\n                x.add_(self.alpha * F.relu(conv(x[..., idx])))\n\n        return x\n\n\n\nclass ExistHead(nn.Module):\n    def __init__(self, cfg=None):\n        super(ExistHead, self).__init__()\n        self.cfg = cfg\n\n        self.dropout = nn.Dropout2d(0.1)  # ???\n        self.conv8 = nn.Conv2d(128, cfg.num_classes, 1)\n\n        stride = cfg.backbone.fea_stride * 2\n        self.fc9 = nn.Linear(\n            int(cfg.num_classes * cfg.img_width / stride * cfg.img_height / stride), 128)\n        self.fc10 = nn.Linear(128, cfg.num_classes-1)\n\n    def forward(self, x):\n        x = self.dropout(x)\n        x = self.conv8(x)\n\n        x = F.softmax(x, dim=1)\n        x = F.avg_pool2d(x, 2, stride=2, padding=0)\n        x = x.view(-1, x.numel() // x.shape[0])\n        x = self.fc9(x)\n        x = F.relu(x)\n        x = self.fc10(x)\n        x = torch.sigmoid(x)\n\n        return x\n\n\n@NET.register_module\nclass RESANet(nn.Module):\n    def __init__(self, cfg):\n        super(RESANet, self).__init__()\n        self.cfg = cfg\n        self.backbone = ResNetWrapper(cfg)\n        self.resa = RESA(cfg)\n        self.decoder = eval(cfg.decoder)(cfg)\n        self.heads = ExistHead(cfg) \n\n    def forward(self, batch):\n        fea = self.backbone(batch)\n        fea = 
self.resa(fea)\n        seg = self.decoder(fea)\n        exist = self.heads(fea)\n\n        output = {'seg': seg, 'exist': exist}\n\n        return output\n"
  },
  {
    "path": "models/resnet.py",
    "content": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torch.hub import load_state_dict_from_url\n\n\n# This code is borrow from torchvision.\n\nmodel_urls = {\n    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\n    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\n    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\n    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\n    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\n    'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',\n    'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',\n    'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',\n    'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',\n}\n\n\ndef conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=dilation, groups=groups, bias=False, dilation=dilation)\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,\n                 base_width=64, dilation=1, norm_layer=None):\n        super(BasicBlock, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        if groups != 1 or base_width != 64:\n            raise ValueError(\n                'BasicBlock only supports groups=1 and base_width=64')\n        # if dilation > 1:\n        #     raise NotImplementedError(\n        #         \"Dilation > 1 not supported in 
BasicBlock\")\n        # Both self.conv1 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv3x3(inplanes, planes, stride, dilation=dilation)\n        self.bn1 = norm_layer(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes, dilation=dilation)\n        self.bn2 = norm_layer(planes)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,\n                 base_width=64, dilation=1, norm_layer=None):\n        super(Bottleneck, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        width = int(planes * (base_width / 64.)) * groups\n        # Both self.conv2 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv1x1(inplanes, width)\n        self.bn1 = norm_layer(width)\n        self.conv2 = conv3x3(width, width, stride, groups, dilation)\n        self.bn2 = norm_layer(width)\n        self.conv3 = conv1x1(width, planes * self.expansion)\n        self.bn3 = norm_layer(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if 
self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass ResNetWrapper(nn.Module):\n\n    def __init__(self, cfg):\n        super(ResNetWrapper, self).__init__()\n        self.cfg = cfg\n        self.in_channels = [64, 128, 256, 512]\n        if 'in_channels' in cfg.backbone:\n            self.in_channels = cfg.backbone.in_channels\n        self.model = eval(cfg.backbone.resnet)(\n            pretrained=cfg.backbone.pretrained,\n            replace_stride_with_dilation=cfg.backbone.replace_stride_with_dilation, in_channels=self.in_channels)\n        self.out = None\n        if cfg.backbone.out_conv:\n            out_channel = 512\n            for chan in reversed(self.in_channels):\n                if chan < 0: continue\n                out_channel = chan\n                break\n            self.out = conv1x1(\n                out_channel * self.model.expansion, 128)\n\n    def forward(self, x):\n        x = self.model(x)\n        if self.out:\n            x = self.out(x)\n        return x\n\n\nclass ResNet(nn.Module):\n\n    def __init__(self, block, layers, zero_init_residual=False,\n                 groups=1, width_per_group=64, replace_stride_with_dilation=None,\n                 norm_layer=None, in_channels=None):\n        super(ResNet, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        self._norm_layer = norm_layer\n\n        self.inplanes = 64\n        self.dilation = 1\n        if replace_stride_with_dilation is None:\n            # each element in the tuple indicates if we should replace\n            # the 2x2 stride with a dilated convolution instead\n            replace_stride_with_dilation = [False, False, False]\n        if len(replace_stride_with_dilation) != 3:\n            raise ValueError(\"replace_stride_with_dilation should be None \"\n                             \"or a 3-element tuple, got 
{}\".format(replace_stride_with_dilation))\n        self.groups = groups\n        self.base_width = width_per_group\n        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = norm_layer(self.inplanes)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.in_channels = in_channels\n        self.layer1 = self._make_layer(block, in_channels[0], layers[0])\n        self.layer2 = self._make_layer(block, in_channels[1], layers[1], stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block, in_channels[2], layers[2], stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n        if in_channels[3] > 0:\n            self.layer4 = self._make_layer(block, in_channels[3], layers[3], stride=2,\n                                           dilate=replace_stride_with_dilation[2])\n        self.expansion = block.expansion\n\n        # self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        # self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(\n                    m.weight, mode='fan_out', nonlinearity='relu')\n            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n        # Zero-initialize the last BN in each residual branch,\n        # so that the residual branch starts with zeros, and each residual block behaves like an identity.\n        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, Bottleneck):\n                    
nn.init.constant_(m.bn3.weight, 0)\n                elif isinstance(m, BasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        norm_layer = self._norm_layer\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                norm_layer(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,\n                            self.base_width, previous_dilation, norm_layer))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(block(self.inplanes, planes, groups=self.groups,\n                                base_width=self.base_width, dilation=self.dilation,\n                                norm_layer=norm_layer))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        if self.in_channels[3] > 0:\n            x = self.layer4(x)\n\n        # x = self.avgpool(x)\n        # x = torch.flatten(x, 1)\n        # x = self.fc(x)\n\n        return x\n\n\ndef _resnet(arch, block, layers, pretrained, progress, **kwargs):\n    model = ResNet(block, layers, **kwargs)\n    if pretrained:\n        state_dict = load_state_dict_from_url(model_urls[arch],\n                                              progress=progress)\n        model.load_state_dict(state_dict, strict=False)\n    return model\n\n\ndef resnet18(pretrained=False, progress=True, 
**kwargs):\n    r\"\"\"ResNet-18 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet34(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-34 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet50(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-50 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet101(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-101 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet152(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-152 model 
from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnext50_32x4d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-50 32x4d model from\n    `\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 4\n    return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef resnext101_32x8d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-101 32x8d model from\n    `\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 8\n    return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet50_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-50-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. 
last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet101_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-101-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n"
  },
  {
    "path": "requirement.txt",
    "content": "pandas\naddict\nsklearn\nopencv-python\npytorch_warmup\nscikit-image\ntqdm\ntermcolor\n"
  },
  {
    "path": "runner/__init__.py",
    "content": "from .evaluator import *\nfrom .resa_trainer import *\n\nfrom .registry import build_evaluator \n"
  },
  {
    "path": "runner/evaluator/__init__.py",
    "content": "from .tusimple.tusimple import Tusimple\nfrom .culane.culane import CULane\n"
  },
  {
    "path": "runner/evaluator/culane/culane.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\nfrom runner.logger import get_logger\n\nfrom runner.registry import EVALUATOR \nimport json\nimport os\nimport subprocess\nfrom shutil import rmtree\nimport cv2\nimport numpy as np\n\ndef check():\n    import subprocess\n    import sys\n    FNULL = open(os.devnull, 'w')\n    result = subprocess.call(\n        './runner/evaluator/culane/lane_evaluation/evaluate', stdout=FNULL, stderr=FNULL)\n    if result > 1:\n        print('There is something wrong with evaluate tool, please compile it.')\n        sys.exit()\n\ndef read_helper(path):\n    lines = open(path, 'r').readlines()[1:]\n    lines = ' '.join(lines)\n    values = lines.split(' ')[1::2]\n    keys = lines.split(' ')[0::2]\n    keys = [key[:-1] for key in keys]\n    res = {k : v for k,v in zip(keys,values)}\n    return res\n\ndef call_culane_eval(data_dir, output_path='./output'):\n    if data_dir[-1] != '/':\n        data_dir = data_dir + '/'\n    detect_dir=os.path.join(output_path, 'lines')+'/'\n\n    w_lane=30\n    iou=0.5;  # Set iou to 0.3 or 0.5\n    im_w=1640\n    im_h=590\n    frame=1\n    list0 = os.path.join(data_dir,'list/test_split/test0_normal.txt')\n    list1 = os.path.join(data_dir,'list/test_split/test1_crowd.txt')\n    list2 = os.path.join(data_dir,'list/test_split/test2_hlight.txt')\n    list3 = os.path.join(data_dir,'list/test_split/test3_shadow.txt')\n    list4 = os.path.join(data_dir,'list/test_split/test4_noline.txt')\n    list5 = os.path.join(data_dir,'list/test_split/test5_arrow.txt')\n    list6 = os.path.join(data_dir,'list/test_split/test6_curve.txt')\n    list7 = os.path.join(data_dir,'list/test_split/test7_cross.txt')\n    list8 = os.path.join(data_dir,'list/test_split/test8_night.txt')\n    if not os.path.exists(os.path.join(output_path,'txt')):\n        os.mkdir(os.path.join(output_path,'txt'))\n    out0 = os.path.join(output_path,'txt','out0_normal.txt')\n    out1 = 
os.path.join(output_path,'txt','out1_crowd.txt')\n    out2 = os.path.join(output_path,'txt','out2_hlight.txt')\n    out3 = os.path.join(output_path,'txt','out3_shadow.txt')\n    out4 = os.path.join(output_path,'txt','out4_noline.txt')\n    out5 = os.path.join(output_path,'txt','out5_arrow.txt')\n    out6 = os.path.join(output_path,'txt','out6_curve.txt')\n    out7 = os.path.join(output_path,'txt','out7_cross.txt')\n    out8 = os.path.join(output_path,'txt','out8_night.txt')\n\n    eval_cmd = './runner/evaluator/culane/lane_evaluation/evaluate'\n\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list0,w_lane,iou,im_w,im_h,frame,out0))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list1,w_lane,iou,im_w,im_h,frame,out1))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list2,w_lane,iou,im_w,im_h,frame,out2))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list3,w_lane,iou,im_w,im_h,frame,out3))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list4,w_lane,iou,im_w,im_h,frame,out4))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list5,w_lane,iou,im_w,im_h,frame,out5))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list6,w_lane,iou,im_w,im_h,frame,out6))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list7,w_lane,iou,im_w,im_h,frame,out7))\n    os.system('%s -a %s -d %s -i %s -l %s -w %s -t %s -c %s -r %s -f %s -o %s'%(eval_cmd,data_dir,detect_dir,data_dir,list8,w_lane,iou,im_w,im_h,frame,out8))\n    res_all = 
{}\n    res_all['normal'] = read_helper(out0)\n    res_all['crowd']= read_helper(out1)\n    res_all['night']= read_helper(out8)\n    res_all['noline'] = read_helper(out4)\n    res_all['shadow'] = read_helper(out3)\n    res_all['arrow']= read_helper(out5)\n    res_all['hlight'] = read_helper(out2)\n    res_all['curve']= read_helper(out6)\n    res_all['cross']= read_helper(out7)\n    return res_all\n\n@EVALUATOR.register_module\nclass CULane(nn.Module):\n    def __init__(self, cfg):\n        super(CULane, self).__init__()\n        # Firstly, check the evaluation tool\n        check()\n        self.cfg = cfg \n        self.blur = torch.nn.Conv2d(\n            5, 5, 9, padding=4, bias=False, groups=5).cuda()\n        torch.nn.init.constant_(self.blur.weight, 1 / 81)\n        self.logger = get_logger('resa')\n        self.out_dir = os.path.join(self.cfg.work_dir, 'lines')\n        if cfg.view:\n            self.view_dir = os.path.join(self.cfg.work_dir, 'vis')\n\n    def evaluate(self, dataset, output, batch):\n        seg, exists = output['seg'], output['exist']\n        predictmaps = F.softmax(seg, dim=1).cpu().numpy()\n        exists = exists.cpu().numpy()\n        batch_size = seg.size(0)\n        img_name = batch['meta']['img_name']\n        img_path = batch['meta']['full_img_path']\n        for i in range(batch_size):\n            coords = dataset.probmap2lane(predictmaps[i], exists[i])\n            outname = self.out_dir + img_name[i][:-4] + '.lines.txt'\n            outdir = os.path.dirname(outname)\n            if not os.path.exists(outdir):\n                os.makedirs(outdir)\n            f = open(outname, 'w')\n            for coord in coords:\n                for x, y in coord:\n                    if x < 0 and y < 0:\n                        continue\n                    f.write('%d %d ' % (x, y))\n                f.write('\\n')\n            f.close()\n\n            if self.cfg.view:\n                img = cv2.imread(img_path[i]).astype(np.float32)\n       
         dataset.view(img, coords, self.view_dir+img_name[i])\n\n\n    def summarize(self):\n        self.logger.info('summarize result...')\n        eval_list_path = os.path.join(\n            self.cfg.dataset_path, \"list\", self.cfg.dataset.val.data_list)\n        #prob2lines(self.prob_dir, self.out_dir, eval_list_path, self.cfg)\n        res = call_culane_eval(self.cfg.dataset_path, output_path=self.cfg.work_dir)\n        TP,FP,FN = 0,0,0\n        out_str = 'Copypaste: '\n        for k, v in res.items():\n            val = float(v['Fmeasure']) if 'nan' not in v['Fmeasure'] else 0\n            val_tp, val_fp, val_fn = int(v['tp']), int(v['fp']), int(v['fn'])\n            val_p, val_r, val_f1 = float(v['precision']), float(v['recall']), float(v['Fmeasure'])\n            TP += val_tp\n            FP += val_fp\n            FN += val_fn\n            self.logger.info(k + ': ' + str(v))\n            out_str += k\n            for metric, value in v.items():\n                out_str += ' ' + str(value).rstrip('\\n')\n            out_str += ' '\n        P = TP * 1.0 / (TP + FP + 1e-9)\n        R = TP * 1.0 / (TP + FN + 1e-9)\n        F = 2*P*R/(P + R + 1e-9)\n        overall_result_str = ('Overall Precision: %f Recall: %f F1: %f' % (P, R, F))\n        self.logger.info(overall_result_str)\n        out_str = out_str + overall_result_str\n        self.logger.info(out_str)\n\n        # delete the tmp output\n        rmtree(self.out_dir)\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/.gitignore",
    "content": "build/\nevaluate\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/Makefile",
    "content": "PROJECT_NAME:= evaluate\n\n# config ----------------------------------\nOPENCV_VERSION := 3\n\nINCLUDE_DIRS := include\nLIBRARY_DIRS := lib /usr/local/lib\n\nCOMMON_FLAGS := -DCPU_ONLY\nCXXFLAGS := -std=c++11 -fopenmp\nLDFLAGS := -fopenmp -Wl,-rpath,./lib\nBUILD_DIR := build\n\n\n# make rules -------------------------------\nCXX ?= g++\nBUILD_DIR ?= ./build\n\nLIBRARIES += opencv_core opencv_highgui opencv_imgproc \nifeq ($(OPENCV_VERSION), 3)\n\t\tLIBRARIES += opencv_imgcodecs\nendif\n\nCXXFLAGS += $(COMMON_FLAGS) $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))\nLDFLAGS +=  $(COMMON_FLAGS) $(foreach includedir,$(LIBRARY_DIRS),-L$(includedir)) $(foreach library,$(LIBRARIES),-l$(library))\nSRC_DIRS += $(shell find * -type d -exec bash -c \"find {} -maxdepth 1 \\( -name '*.cpp' -o -name '*.proto' \\) | grep -q .\" \\; -print)\nCXX_SRCS += $(shell find src/ -name \"*.cpp\")\nCXX_TARGETS:=$(patsubst %.cpp, $(BUILD_DIR)/%.o, $(CXX_SRCS))\nALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)))\n\n.PHONY: all\nall: $(PROJECT_NAME)\n\n.PHONY: $(ALL_BUILD_DIRS)\n$(ALL_BUILD_DIRS):\n\t@mkdir -p $@\n\n$(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)\n\t@echo \"CXX\" $<\n\t@$(CXX) $(CXXFLAGS) -c -o $@ $<\n\n$(PROJECT_NAME): $(CXX_TARGETS)\n\t@echo \"CXX/LD\" $@\n\t@$(CXX) -o $@ $^ $(LDFLAGS)\n\n.PHONY: clean\nclean:\n\t@rm -rf $(CXX_TARGETS)\n\t@rm -rf $(PROJECT_NAME)\n\t@rm -rf $(BUILD_DIR)\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/include/counter.hpp",
    "content": "#ifndef COUNTER_HPP\n#define COUNTER_HPP\n\n#include \"lane_compare.hpp\"\n#include \"hungarianGraph.hpp\"\n#include <iostream>\n#include <algorithm>\n#include <tuple>\n#include <vector>\n#include <opencv2/core/core.hpp>\n\nusing namespace std;\nusing namespace cv;\n\n// Before using this class, lanes should be resized to (im_width, im_height) with resize_lane() in lane_compare.hpp\nclass Counter\n{\n\tpublic:\n\t\tCounter(int _im_width, int _im_height, double _iou_threshold=0.4, int _lane_width=10):tp(0),fp(0),fn(0){\n\t\t\tim_width = _im_width;\n\t\t\tim_height = _im_height;\n\t\t\tsim_threshold = _iou_threshold;\n\t\t\tlane_compare = new LaneCompare(_im_width, _im_height,  _lane_width, LaneCompare::IOU);\n\t\t};\n\t\tdouble get_precision(void);\n\t\tdouble get_recall(void);\n\t\tlong getTP(void);\n\t\tlong getFP(void);\n\t\tlong getFN(void);\n\t\tvoid setTP(long);\n\t\tvoid setFP(long);\n\t\tvoid setFN(long);\n\t\t// Match annotation and detection lanes with the Hungarian algorithm first,\n\t\t// then return the match indices together with tp, fp, tn and fn counts\n\t\ttuple<vector<int>, long, long, long, long> count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes);\n\t\tvoid makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2);\n\n\tprivate:\n\t\tdouble sim_threshold;\n\t\tint im_width;\n\t\tint im_height;\n\t\tlong tp;\n\t\tlong fp;\n\t\tlong fn;\n\t\tLaneCompare *lane_compare;\n};\n#endif\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/include/hungarianGraph.hpp",
    "content": "#ifndef HUNGARIAN_GRAPH_HPP\n#define HUNGARIAN_GRAPH_HPP\n#include <cmath>\n#include <vector>\nusing namespace std;\n\nstruct pipartiteGraph {\n    vector<vector<double> > mat;\n    vector<bool> leftUsed, rightUsed;\n    vector<double> leftWeight, rightWeight;\n    vector<int> rightMatch, leftMatch;\n    int leftNum, rightNum;\n    bool matchDfs(int u) {\n        leftUsed[u] = true;\n        for (int v = 0; v < rightNum; v++) {\n            if (!rightUsed[v] && fabs(leftWeight[u] + rightWeight[v] - mat[u][v]) < 1e-2) {\n                rightUsed[v] = true;\n                if (rightMatch[v] == -1 || matchDfs(rightMatch[v])) {\n                    rightMatch[v] = u;\n                    leftMatch[u] = v;\n                    return true;\n                }\n            }\n        }\n        return false;\n    }\n    void resize(int leftNum, int rightNum) {\n        this->leftNum = leftNum;\n        this->rightNum = rightNum;\n        leftMatch.resize(leftNum);\n        rightMatch.resize(rightNum);\n        leftUsed.resize(leftNum);\n        rightUsed.resize(rightNum);\n        leftWeight.resize(leftNum);\n        rightWeight.resize(rightNum);\n        mat.resize(leftNum);\n        for (int i = 0; i < leftNum; i++) mat[i].resize(rightNum);\n    }\n    void match() {\n        for (int i = 0; i < leftNum; i++) leftMatch[i] = -1;\n        for (int i = 0; i < rightNum; i++) rightMatch[i] = -1;\n        for (int i = 0; i < rightNum; i++) rightWeight[i] = 0;\n        for (int i = 0; i < leftNum; i++) {\n            leftWeight[i] = -1e5;\n            for (int j = 0; j < rightNum; j++) {\n                if (leftWeight[i] < mat[i][j]) leftWeight[i] = mat[i][j];\n            }\n        }\n\n        for (int u = 0; u < leftNum; u++) {\n            while (1) {\n                for (int i = 0; i < leftNum; i++) leftUsed[i] = false;\n                for (int i = 0; i < rightNum; i++) rightUsed[i] = false;\n                if (matchDfs(u)) break;\n                double d = 1e10;\n  
              for (int i = 0; i < leftNum; i++) {\n                    if (leftUsed[i] ) {\n                        for (int j = 0; j < rightNum; j++) {\n                            if (!rightUsed[j]) d = min(d, leftWeight[i] + rightWeight[j] - mat[i][j]);\n                        }\n                    }\n                }\n                if (d == 1e10) return ;\n                for (int i = 0; i < leftNum; i++) if (leftUsed[i]) leftWeight[i] -= d;\n                for (int i = 0; i < rightNum; i++) if (rightUsed[i]) rightWeight[i] += d;\n            }\n        }\n    }\n};\n\n\n#endif // HUNGARIAN_GRAPH_HPP\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/include/lane_compare.hpp",
    "content": "#ifndef LANE_COMPARE_HPP\n#define LANE_COMPARE_HPP\n\n#include \"spline.hpp\"\n#include <vector>\n#include <iostream>\n#include <opencv2/core/version.hpp>\n#include <opencv2/core/core.hpp>\n\n#if CV_VERSION_EPOCH == 2\n#define OPENCV2\n#elif CV_VERSION_MAJOR == 3\n#define  OPENCV3\n#else\n#error Not support this OpenCV version\n#endif\n\n#ifdef OPENCV3\n#include <opencv2/imgproc.hpp>\n#elif defined(OPENCV2)\n#include <opencv2/imgproc/imgproc.hpp>\n#endif\n\nusing namespace std;\nusing namespace cv;\n\nclass LaneCompare{\n\tpublic:\n\t\tenum CompareMode{\n\t\t\tIOU,\n\t\t\tCaltech\n\t\t};\n\n\t\tLaneCompare(int _im_width, int _im_height, int _lane_width = 10, CompareMode _compare_mode = IOU){\n\t\t\tim_width = _im_width;\n\t\t\tim_height = _im_height;\n\t\t\tcompare_mode = _compare_mode;\n\t\t\tlane_width = _lane_width;\n\t\t}\n\n\t\tdouble get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2);\n\t\tvoid resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height);\n\tprivate:\n\t\tCompareMode compare_mode;\n\t\tint im_width;\n\t\tint im_height;\n\t\tint lane_width;\n\t\tSpline splineSolver;\n};\n\n#endif\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/include/spline.hpp",
    "content": "#ifndef SPLINE_HPP\n#define SPLINE_HPP\n#include <vector>\n#include <cstdio>\n#include <math.h>\n#include <opencv2/core/core.hpp>\n\nusing namespace cv;\nusing namespace std;\n\nstruct Func {\n    double a_x;\n    double b_x;\n    double c_x;\n    double d_x;\n    double a_y;\n    double b_y;\n    double c_y;\n    double d_y;\n    double h;\n};\nclass Spline {\npublic:\n\tvector<Point2f> splineInterpTimes(const vector<Point2f> &tmp_line, int times);\n    vector<Point2f> splineInterpStep(vector<Point2f> tmp_line, double step);\n\tvector<Func> cal_fun(const vector<Point2f> &point_v);\n};\n#endif\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/src/counter.cpp",
    "content": "/*************************************************************************\n\t> File Name: counter.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Thu Jul 14 20:23:08 2016\n ************************************************************************/\n\n#include \"counter.hpp\"\n\ndouble Counter::get_precision(void)\n{\n\tcerr<<\"tp: \"<<tp<<\" fp: \"<<fp<<\" fn: \"<<fn<<endl;\n\tif(tp+fp == 0)\n\t{\n\t\tcerr<<\"no positive detection\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fp);\n}\n\ndouble Counter::get_recall(void)\n{\n\tif(tp+fn == 0)\n\t{\n\t\tcerr<<\"no ground truth positive\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fn);\n}\n\nlong Counter::getTP(void)\n{\n\treturn tp;\n}\n\nlong Counter::getFP(void)\n{\n\treturn fp;\n}\n\nlong Counter::getFN(void)\n{\n\treturn fn;\n}\n\nvoid Counter::setTP(long value) \n{\n\ttp = value;\n}\n\nvoid Counter::setFP(long value)\n{\n  fp = value;\n}\n\nvoid Counter::setFN(long value)\n{\n\tfn = value;\n}\n\ntuple<vector<int>, long, long, long, long> Counter::count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes)\n{\n\tvector<int> anno_match(anno_lanes.size(), -1);\n\tvector<int> detect_match;\n\tif(anno_lanes.empty())\n\t{\n\t\treturn make_tuple(anno_match, 0, detect_lanes.size(), 0, 0);\n\t}\n\n\tif(detect_lanes.empty())\n\t{\n\t\treturn make_tuple(anno_match, 0, 0, 0, anno_lanes.size());\n\t}\n\t// hungarian match first\n\t\n\t// first calc similarity matrix\n\tvector<vector<double> > similarity(anno_lanes.size(), vector<double>(detect_lanes.size(), 0));\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tconst vector<Point2f> &curr_anno_lane = anno_lanes[i];\n\t\tfor(int j=0; j<detect_lanes.size(); j++)\n\t\t{\n\t\t\tconst vector<Point2f> &curr_detect_lane = detect_lanes[j];\n\t\t\tsimilarity[i][j] = lane_compare->get_lane_similarity(curr_anno_lane, 
curr_detect_lane);\n\t\t}\n\t}\n\n\n\n\tmakeMatch(similarity, anno_match, detect_match);\n\n\t\n\tint curr_tp = 0;\n\t// count and add\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tif(anno_match[i]>=0 && similarity[i][anno_match[i]] > sim_threshold)\n\t\t{\n\t\t\tcurr_tp++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tanno_match[i] = -1;\n\t\t}\n\t}\n\tint curr_fn = anno_lanes.size() - curr_tp;\n\tint curr_fp = detect_lanes.size() - curr_tp;\n\treturn make_tuple(anno_match, curr_tp, curr_fp, 0, curr_fn);\n}\n\n\nvoid Counter::makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2) {\n\tint m = similarity.size();\n\tint n = similarity[0].size();\n    pipartiteGraph gra;\n    bool have_exchange = false;\n    if (m > n) {\n        have_exchange = true;\n        swap(m, n);\n    }\n    gra.resize(m, n);\n    for (int i = 0; i < gra.leftNum; i++) {\n        for (int j = 0; j < gra.rightNum; j++) {\n\t\t\tif(have_exchange)\n\t\t\t\tgra.mat[i][j] = similarity[j][i];\n\t\t\telse\n\t\t\t\tgra.mat[i][j] = similarity[i][j];\n        }\n    }\n    gra.match();\n    match1 = gra.leftMatch;\n    match2 = gra.rightMatch;\n    if (have_exchange) swap(match1, match2);\n}\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/src/evaluate.cpp",
    "content": "/*************************************************************************\n        > File Name: evaluate.cpp\n        > Author: Xingang Pan, Jun Li\n        > Mail: px117@ie.cuhk.edu.hk\n        > Created Time: Thu Jul 14 18:28:45 2016\n ************************************************************************/\n\n#include \"counter.hpp\"\n#include \"spline.hpp\"\n#include <unistd.h>\n#include <iostream>\n#include <fstream>\n#include <sstream>\n#include <cstdlib>\n#include <string>\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\nusing namespace std;\nusing namespace cv;\n\nvoid help(void) {\n  cout << \"./evaluate [OPTIONS]\" << endl;\n  cout << \"-h                  : print usage help\" << endl;\n  cout << \"-a                  : directory for annotation files (default: \"\n          \"/data/driving/eval_data/anno_label/)\" << endl;\n  cout << \"-d                  : directory for detection files (default: \"\n          \"/data/driving/eval_data/predict_label/)\" << endl;\n  cout << \"-i                  : directory for image files (default: \"\n          \"/data/driving/eval_data/img/)\" << endl;\n  cout << \"-l                  : list of images used for evaluation (default: \"\n          \"/data/driving/eval_data/img/all.txt)\" << endl;\n  cout << \"-w                  : width of the lanes (default: 10)\" << endl;\n  cout << \"-t                  : IoU threshold (default: 0.4)\" << endl;\n  cout << \"-c                  : cols (max image width) (default: 1920)\"\n       << endl;\n  cout << \"-r                  : rows (max image height) (default: 1080)\"\n       << endl;\n  cout << \"-s                  : show visualization\" << endl;\n  cout << \"-f                  : start frame in the test set (default: 1)\"\n       << endl;\n  cout << \"-o                  : output file for the metrics (default: \"\n          \"./output.txt)\" << endl;\n  cout << \"-p                  : directory to save visualization images\"\n       << endl;\n}\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f>> &lanes);\nvoid visualize(string &full_im_name, vector<vector<Point2f>> &anno_lanes,\n               
vector<vector<Point2f>> &detect_lanes, vector<int> anno_match,\n               int width_lane, string save_path = \"\");\n\nint main(int argc, char **argv) {\n  // process params\n  string anno_dir = \"/data/driving/eval_data/anno_label/\";\n  string detect_dir = \"/data/driving/eval_data/predict_label/\";\n  string im_dir = \"/data/driving/eval_data/img/\";\n  string list_im_file = \"/data/driving/eval_data/img/all.txt\";\n  string output_file = \"./output.txt\";\n  int width_lane = 10;\n  double iou_threshold = 0.4;\n  int im_width = 1920;\n  int im_height = 1080;\n  int oc;\n  bool show = false;\n  int frame = 1;\n  string save_path = \"\";\n  while ((oc = getopt(argc, argv, \"ha:d:i:l:w:t:c:r:sf:o:p:\")) != -1) {\n    switch (oc) {\n    case 'h':\n      help();\n      return 0;\n    case 'a':\n      anno_dir = optarg;\n      break;\n    case 'd':\n      detect_dir = optarg;\n      break;\n    case 'i':\n      im_dir = optarg;\n      break;\n    case 'l':\n      list_im_file = optarg;\n      break;\n    case 'w':\n      width_lane = atoi(optarg);\n      break;\n    case 't':\n      iou_threshold = atof(optarg);\n      break;\n    case 'c':\n      im_width = atoi(optarg);\n      break;\n    case 'r':\n      im_height = atoi(optarg);\n      break;\n    case 's':\n      show = true;\n      break;\n    case 'p':\n      save_path = optarg;\n      break;\n    case 'f':\n      frame = atoi(optarg);\n      break;\n    case 'o':\n      output_file = optarg;\n      break;\n    }\n  }\n\n  cout << \"------------Configuration---------\" << endl;\n  cout << \"anno_dir: \" << anno_dir << endl;\n  cout << \"detect_dir: \" << detect_dir << endl;\n  cout << \"im_dir: \" << im_dir << endl;\n  cout << \"list_im_file: \" << list_im_file << endl;\n  cout << \"width_lane: \" << width_lane << endl;\n  cout << \"iou_threshold: \" << iou_threshold << endl;\n  cout << \"im_width: \" << im_width << endl;\n  cout << \"im_height: \" << im_height << endl;\n  cout << 
\"-----------------------------------\" << endl;\n  cout << \"Evaluating the results...\" << endl;\n  // this is the max_width and max_height\n\n  if (width_lane < 1) {\n    cerr << \"width_lane must be positive\" << endl;\n    help();\n    return 1;\n  }\n\n  ifstream ifs_im_list(list_im_file, ios::in);\n  if (ifs_im_list.fail()) {\n    cerr << \"Error: file \" << list_im_file << \" not exist!\" << endl;\n    return 1;\n  }\n\n  Counter counter(im_width, im_height, iou_threshold, width_lane);\n\n  vector<int> anno_match;\n  string sub_im_name;\n  // pre-load filelist\n  vector<string> filelists;\n  while (getline(ifs_im_list, sub_im_name)) {\n    filelists.push_back(sub_im_name);\n  }\n  ifs_im_list.close();\n\n  vector<tuple<vector<int>, long, long, long, long>> tuple_lists;\n  tuple_lists.resize(filelists.size());\n\n#pragma omp parallel for\n  for (size_t i = 0; i < filelists.size(); i++) {\n    auto sub_im_name = filelists[i];\n    string full_im_name = im_dir + sub_im_name;\n    string sub_txt_name =\n        sub_im_name.substr(0, sub_im_name.find_last_of(\".\")) + \".lines.txt\";\n    string anno_file_name = anno_dir + sub_txt_name;\n    string detect_file_name = detect_dir + sub_txt_name;\n    vector<vector<Point2f>> anno_lanes;\n    vector<vector<Point2f>> detect_lanes;\n    read_lane_file(anno_file_name, anno_lanes);\n    read_lane_file(detect_file_name, detect_lanes);\n    // cerr<<count<<\": \"<<full_im_name<<endl;\n    tuple_lists[i] = counter.count_im_pair(anno_lanes, detect_lanes);\n    if (show) {\n      auto anno_match = get<0>(tuple_lists[i]);\n      visualize(full_im_name, anno_lanes, detect_lanes, anno_match, width_lane);\n      waitKey(0);\n    }\n    if (save_path != \"\") {\n      auto anno_match = get<0>(tuple_lists[i]);\n      visualize(full_im_name, anno_lanes, detect_lanes, anno_match, width_lane,\n                save_path);\n    }\n  }\n\n  long tp = 0, fp = 0, tn = 0, fn = 0;\n  for (auto result : tuple_lists) {\n    tp += 
get<1>(result);\n    fp += get<2>(result);\n    // tn = get<3>(result);\n    fn += get<4>(result);\n  }\n  counter.setTP(tp);\n  counter.setFP(fp);\n  counter.setFN(fn);\n\n  double precision = counter.get_precision();\n  double recall = counter.get_recall();\n  double F = 2 * precision * recall / (precision + recall);\n  cerr << \"finished process file\" << endl;\n  cout << \"precision: \" << precision << endl;\n  cout << \"recall: \" << recall << endl;\n  cout << \"Fmeasure: \" << F << endl;\n  cout << \"----------------------------------\" << endl;\n\n  ofstream ofs_out_file;\n  ofs_out_file.open(output_file, ios::out);\n  ofs_out_file << \"file: \" << output_file << endl;\n  ofs_out_file << \"tp: \" << counter.getTP() << \" fp: \" << counter.getFP()\n               << \" fn: \" << counter.getFN() << endl;\n  ofs_out_file << \"precision: \" << precision << endl;\n  ofs_out_file << \"recall: \" << recall << endl;\n  ofs_out_file << \"Fmeasure: \" << F << endl << endl;\n  ofs_out_file.close();\n  return 0;\n}\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f>> &lanes) {\n  lanes.clear();\n  ifstream ifs_lane(file_name, ios::in);\n  if (ifs_lane.fail()) {\n    return;\n  }\n\n  string str_line;\n  while (getline(ifs_lane, str_line)) {\n    vector<Point2f> curr_lane;\n    stringstream ss;\n    ss << str_line;\n    double x, y;\n    while (ss >> x >> y) {\n      curr_lane.push_back(Point2f(x, y));\n    }\n    lanes.push_back(curr_lane);\n  }\n\n  ifs_lane.close();\n}\n\nvoid visualize(string &full_im_name, vector<vector<Point2f>> &anno_lanes,\n               vector<vector<Point2f>> &detect_lanes, vector<int> anno_match,\n               int width_lane, string save_path) {\n  Mat img = imread(full_im_name, 1);\n  Mat img2 = imread(full_im_name, 1);\n  vector<Point2f> curr_lane;\n  vector<Point2f> p_interp;\n  Spline splineSolver;\n  Scalar color_B = Scalar(255, 0, 0);\n  Scalar color_G = Scalar(0, 255, 0);\n  Scalar color_R = Scalar(0, 0, 255);\n  
Scalar color_P = Scalar(255, 0, 255);\n  Scalar color;\n  for (int i = 0; i < anno_lanes.size(); i++) {\n    curr_lane = anno_lanes[i];\n    if (curr_lane.size() == 2) {\n      p_interp = curr_lane;\n    } else {\n      p_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n    }\n    if (anno_match[i] >= 0) {\n      color = color_G;\n    } else {\n      // unmatched ground-truth lane (missed detection): draw it in purple\n      color = color_P;\n    }\n    for (int n = 0; n < p_interp.size() - 1; n++) {\n      line(img, p_interp[n], p_interp[n + 1], color, width_lane);\n      line(img2, p_interp[n], p_interp[n + 1], color, 2);\n    }\n  }\n  bool detected;\n  for (int i = 0; i < detect_lanes.size(); i++) {\n    detected = false;\n    curr_lane = detect_lanes[i];\n    if (curr_lane.size() == 2) {\n      p_interp = curr_lane;\n    } else {\n      p_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n    }\n    for (int n = 0; n < anno_lanes.size(); n++) {\n      if (anno_match[n] == i) {\n        detected = true;\n        break;\n      }\n    }\n    if (detected == true) {\n      color = color_B;\n    } else {\n      color = color_R;\n    }\n    for (int n = 0; n < p_interp.size() - 1; n++) {\n      line(img, p_interp[n], p_interp[n + 1], color, width_lane);\n      line(img2, p_interp[n], p_interp[n + 1], color, 2);\n    }\n  }\n  if (save_path != \"\") {\n    size_t pos = 0;\n    string s = full_im_name;\n    std::string token;\n    std::string delimiter = \"/\";\n    vector<string> names;\n    while ((pos = s.find(delimiter)) != std::string::npos) {\n      token = s.substr(0, pos);\n      names.emplace_back(token);\n      s.erase(0, pos + delimiter.length());\n    }\n    names.emplace_back(s);\n    string file_name = names[3] + '_' + names[4] + '_' + names[5];\n    // cout << file_name << endl;\n    imwrite(save_path + '/' + file_name, img);\n  } else {\n    namedWindow(\"visualize\", 1);\n    imshow(\"visualize\", img);\n    namedWindow(\"visualize2\", 1);\n    imshow(\"visualize2\", img2);\n  }\n}\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/src/lane_compare.cpp",
    "content": "/*************************************************************************\n\t> File Name: lane_compare.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Fri Jul 15 10:26:32 2016\n ************************************************************************/\n\n#include \"lane_compare.hpp\"\n\ndouble LaneCompare::get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2)\n{\n\tif(lane1.size()<2 || lane2.size()<2)\n\t{\n\t\tcerr<<\"lane size must be greater or equal to 2\"<<endl;\n\t\treturn 0;\n\t}\n\tMat im1 = Mat::zeros(im_height, im_width, CV_8UC1);\n\tMat im2 = Mat::zeros(im_height, im_width, CV_8UC1);\n\t// draw lines on im1 and im2\n\tvector<Point2f> p_interp1;\n\tvector<Point2f> p_interp2;\n\tif(lane1.size() == 2)\n\t{\n\t\tp_interp1 = lane1;\n\t}\n\telse\n\t{\n\t\tp_interp1 = splineSolver.splineInterpTimes(lane1, 50);\n\t}\n\n\tif(lane2.size() == 2)\n\t{\n\t\tp_interp2 = lane2;\n\t}\n\telse\n\t{\n\t\tp_interp2 = splineSolver.splineInterpTimes(lane2, 50);\n\t}\n\t\n\tScalar color_white = Scalar(1);\n\tfor(int n=0; n<p_interp1.size()-1; n++)\n\t{\n\t\tline(im1, p_interp1[n], p_interp1[n+1], color_white, lane_width);\n\t}\n\tfor(int n=0; n<p_interp2.size()-1; n++)\n\t{\n\t\tline(im2, p_interp2[n], p_interp2[n+1], color_white, lane_width);\n\t}\n\n\tdouble sum_1 = cv::sum(im1).val[0];\n\tdouble sum_2 = cv::sum(im2).val[0];\n\tdouble inter_sum = cv::sum(im1.mul(im2)).val[0];\n\tdouble union_sum = sum_1 + sum_2 - inter_sum; \n\tdouble iou = inter_sum / union_sum;\n\treturn iou;\n}\n\n\n// resize the lane from Size(curr_width, curr_height) to Size(im_width, im_height)\nvoid LaneCompare::resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height)\n{\n\tif(curr_width == im_width && curr_height == im_height)\n\t{\n\t\treturn;\n\t}\n\tdouble x_scale = im_width/(double)curr_width;\n\tdouble y_scale = im_height/(double)curr_height;\n\tfor(int n=0; n<curr_lane.size(); 
n++)\n\t{\n\t\tcurr_lane[n] = Point2f(curr_lane[n].x*x_scale, curr_lane[n].y*y_scale);\n\t}\n}\n\n"
  },
  {
    "path": "runner/evaluator/culane/lane_evaluation/src/spline.cpp",
    "content": "#include <vector>\n#include <iostream>\n#include \"spline.hpp\"\nusing namespace std;\nusing namespace cv;\n\nvector<Point2f> Spline::splineInterpTimes(const vector<Point2f>& tmp_line, int times) {\n    vector<Point2f> res;\n\n    if(tmp_line.size() == 2) {\n        double x1 = tmp_line[0].x;\n        double y1 = tmp_line[0].y;\n        double x2 = tmp_line[1].x;\n        double y2 = tmp_line[1].y;\n\n        for (int k = 0; k <= times; k++) {\n            double xi =  x1 + double((x2 - x1) * k) / times;\n            double yi =  y1 + double((y2 - y1) * k) / times;\n            res.push_back(Point2f(xi, yi));\n        }\n    }\n\n    else if(tmp_line.size() > 2)\n    {\n        vector<Func> tmp_func;\n        tmp_func = this->cal_fun(tmp_line);\n        if (tmp_func.empty()) {\n            cout << \"in splineInterpTimes: cal_fun failed\" << endl;\n            return res;\n        }\n        for(int j = 0; j < tmp_func.size(); j++)\n        {\n            double delta = tmp_func[j].h / times;\n            for(int k = 0; k < times; k++)\n            {\n                double t1 = delta*k;\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n    }\n\telse {\n\t\tcerr << \"in splineInterpTimes: not enough points\" << endl;\n\t}\n    return res;\n}\nvector<Point2f> Spline::splineInterpStep(vector<Point2f> tmp_line, double step) {\n\tvector<Point2f> res;\n\t/*\n\tif (tmp_line.size() == 2) {\n\t\tdouble x1 = tmp_line[0].x;\n\t\tdouble y1 = tmp_line[0].y;\n\t\tdouble x2 = tmp_line[1].x;\n\t\tdouble y2 = tmp_line[1].y;\n\n\t\tfor (double yi = std::min(y1, y2); yi < std::max(y1, y2); yi += step) {\n            double xi;\n\t\t\tif (yi == y1) 
xi = x1;\n\t\t\telse xi = (x2 - x1) / (y2 - y1) * (yi - y1) + x1;\n\t\t\tres.push_back(Point2f(xi, yi));\n\t\t}\n\t}*/\n\tif (tmp_line.size() == 2) {\n\t\tdouble x1 = tmp_line[0].x;\n\t\tdouble y1 = tmp_line[0].y;\n\t\tdouble x2 = tmp_line[1].x;\n\t\tdouble y2 = tmp_line[1].y;\n\t\ttmp_line[1].x = (x1 + x2) / 2;\n\t\ttmp_line[1].y = (y1 + y2) / 2;\n\t\ttmp_line.push_back(Point2f(x2, y2));\n\t}\n\tif (tmp_line.size() > 2) {\n\t\tvector<Func> tmp_func;\n\t\ttmp_func = this->cal_fun(tmp_line);\n\t\tdouble ystart = tmp_line[0].y;\n\t\tdouble yend = tmp_line[tmp_line.size() - 1].y;\n\t\tbool down;\n\t\tif (ystart < yend) down = 1;\n\t\telse down = 0;\n\t\tif (tmp_func.empty()) {\n\t\t\tcerr << \"in splineInterpStep: cal_fun failed\" << endl;\n\t\t}\n\n\t\tfor(int j = 0; j < tmp_func.size(); j++)\n        {\n            for(double t1 = 0; t1 < tmp_func[j].h; t1 += step)\n            {\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n\t}\n    else {\n        cerr << \"in splineInterpStep: not enough points\" << endl;\n    }\n    return res;\n}\n\nvector<Func> Spline::cal_fun(const vector<Point2f> &point_v)\n{\n    vector<Func> func_v;\n    int n = point_v.size();\n    if(n<=2) {\n        cout << \"in cal_fun: point number less than 3\" << endl;\n        return func_v;\n    }\n\n    func_v.resize(point_v.size()-1);\n\n    vector<double> Mx(n);\n    vector<double> My(n);\n    vector<double> A(n-2);\n    vector<double> B(n-2);\n    vector<double> C(n-2);\n    vector<double> Dx(n-2);\n    vector<double> Dy(n-2);\n    vector<double> h(n-1);\n    //vector<func> func_v(n-1);\n\n    for(int i = 0; i < n-1; i++)\n    {\n        h[i] = sqrt(pow(point_v[i+1].x 
- point_v[i].x, 2) + pow(point_v[i+1].y - point_v[i].y, 2));\n    }\n\n    for(int i = 0; i < n-2; i++)\n    {\n        A[i] = h[i];\n        B[i] = 2*(h[i]+h[i+1]);\n        C[i] = h[i+1];\n\n        Dx[i] =  6*( (point_v[i+2].x - point_v[i+1].x)/h[i+1] - (point_v[i+1].x - point_v[i].x)/h[i] );\n        Dy[i] =  6*( (point_v[i+2].y - point_v[i+1].y)/h[i+1] - (point_v[i+1].y - point_v[i].y)/h[i] );\n    }\n\n    //TDMA\n    C[0] = C[0] / B[0];\n    Dx[0] = Dx[0] / B[0];\n    Dy[0] = Dy[0] / B[0];\n    for(int i = 1; i < n-2; i++)\n    {\n        double tmp = B[i] - A[i]*C[i-1];\n        C[i] = C[i] / tmp;\n        Dx[i] = (Dx[i] - A[i]*Dx[i-1]) / tmp;\n        Dy[i] = (Dy[i] - A[i]*Dy[i-1]) / tmp;\n    }\n    Mx[n-2] = Dx[n-3];\n    My[n-2] = Dy[n-3];\n    for(int i = n-4; i >= 0; i--)\n    {\n        Mx[i+1] = Dx[i] - C[i]*Mx[i+2];\n        My[i+1] = Dy[i] - C[i]*My[i+2];\n    }\n\n    Mx[0] = 0;\n    Mx[n-1] = 0;\n    My[0] = 0;\n    My[n-1] = 0;\n\n    for(int i = 0; i < n-1; i++)\n    {\n        func_v[i].a_x = point_v[i].x;\n        func_v[i].b_x = (point_v[i+1].x - point_v[i].x)/h[i] - (2*h[i]*Mx[i] + h[i]*Mx[i+1]) / 6;\n        func_v[i].c_x = Mx[i]/2;\n        func_v[i].d_x = (Mx[i+1] - Mx[i]) / (6*h[i]);\n\n        func_v[i].a_y = point_v[i].y;\n        func_v[i].b_y = (point_v[i+1].y - point_v[i].y)/h[i] - (2*h[i]*My[i] + h[i]*My[i+1]) / 6;\n        func_v[i].c_y = My[i]/2;\n        func_v[i].d_y = (My[i+1] - My[i]) / (6*h[i]);\n\n        func_v[i].h = h[i];\n    }\n    return func_v;\n}\n"
  },
  {
    "path": "runner/evaluator/culane/prob2lines.py",
    "content": "import os\nimport numpy as np\nimport pandas as pd\nfrom PIL import Image\n\n\ndef getLane(probmap, pts, cfg=None):\n    # Sample pts rows of the probability map bottom-up (20 px apart in the\n    # original 590-px-high image) and keep the most confident column per row.\n    thr = 0.3\n    coordinate = np.zeros(pts)\n    cut_height = 0\n    if cfg.cut_height:\n        cut_height = cfg.cut_height\n    for i in range(pts):\n        line = probmap[round(cfg.img_height-i*20/(590-cut_height)*cfg.img_height)-1]\n        if np.max(line)/255 > thr:\n            coordinate[i] = np.argmax(line)+1\n    if np.sum(coordinate > 0) < 2:\n        coordinate = np.zeros(pts)\n    return coordinate\n\n\ndef prob2lines(prob_dir, out_dir, list_file, cfg=None):\n    lists = pd.read_csv(list_file, sep=' ', header=None,\n                        names=('img', 'probmap', 'label1', 'label2', 'label3', 'label4'))\n    pts = 18\n\n    for k, im in enumerate(lists['img'], 1):\n        existPath = prob_dir + im[:-4] + '.exist.txt'\n        outname = out_dir + im[:-4] + '.lines.txt'\n        prefix = '/'.join(outname.split('/')[:-1])\n        os.makedirs(prefix, exist_ok=True)\n        f = open(outname, 'w')\n\n        labels = list(pd.read_csv(existPath, sep=' ', header=None).iloc[0])\n        coordinates = np.zeros((4, pts))\n        for i in range(4):\n            if labels[i] == 1:\n                probfile = prob_dir + im[:-4] + '_{0}_avg.png'.format(i+1)\n                probmap = np.array(Image.open(probfile))\n                coordinates[i] = getLane(probmap, pts, cfg)\n\n                if np.sum(coordinates[i] > 0) > 1:\n                    for idx, value in enumerate(coordinates[i]):\n                        if value > 0:\n                            f.write('%d %d ' % (\n                                round(value*1640/cfg.img_width)-1, round(590-idx*20)-1))\n                    f.write('\\n')\n        f.close()\n"
  },
  {
    "path": "runner/evaluator/tusimple/getLane.py",
    "content": "import cv2\nimport numpy as np\n\ndef isShort(lane):\n    start = [i for i, x in enumerate(lane) if x > 0]\n    if not start:\n        return 1\n    else:\n        return 0\n\ndef fixGap(coordinate):\n    if any(x > 0 for x in coordinate):\n        start = [i for i, x in enumerate(coordinate) if x > 0][0]\n        end = [i for i, x in reversed(list(enumerate(coordinate))) if x > 0][0]\n        lane = coordinate[start:end+1]\n        if any(x < 0 for x in lane):\n            gap_start = [i for i, x in enumerate(\n                lane[:-1]) if x > 0 and lane[i+1] < 0]\n            gap_end = [i+1 for i,\n                       x in enumerate(lane[:-1]) if x < 0 and lane[i+1] > 0]\n            gap_id = [i for i, x in enumerate(lane) if x < 0]\n            if len(gap_start) == 0 or len(gap_end) == 0:\n                return coordinate\n            for id in gap_id:\n                for i in range(len(gap_start)):\n                    if i >= len(gap_end):\n                        return coordinate\n                    if id > gap_start[i] and id < gap_end[i]:\n                        gap_width = float(gap_end[i] - gap_start[i])\n                        lane[id] = int((id - gap_start[i]) / gap_width * lane[gap_end[i]] + (\n                            gap_end[i] - id) / gap_width * lane[gap_start[i]])\n            if not all(x > 0 for x in lane):\n                print(\"Gaps still exist!\")\n            coordinate[start:end+1] = lane\n    return coordinate\n\ndef getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape=None, cfg=None):\n    \"\"\"\n    Arguments:\n    ----------\n    prob_map: prob map for single lane, np array size (h, w)\n    resize_shape:  reshape size target, (H, W)\n\n    Return:\n    ----------\n    coords: x coords bottom up every y_px_gap px, 0 for non-exist, in resized shape\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = prob_map.shape\n    h, w = prob_map.shape\n    H, W = resize_shape\n    H -= 
cfg.cut_height\n\n    coords = np.zeros(pts)\n    coords[:] = -1.0\n    for i in range(pts):\n        y = int((H - 10 - i * y_px_gap) * h / H)\n        if y < 0:\n            break\n        line = prob_map[y, :]\n        id = np.argmax(line)\n        if line[id] > thresh:\n            coords[i] = int(id / w * W)\n    if (coords > 0).sum() < 2:\n        coords = np.zeros(pts)\n    fixGap(coords)\n    return coords\n\n\ndef prob2lines_tusimple(seg_pred, exist, resize_shape=None, smooth=True, y_px_gap=10, pts=None, thresh=0.3, cfg=None):\n    \"\"\"\n    Arguments:\n    ----------\n    seg_pred:      np.array size (5, h, w)\n    resize_shape:  reshape size target, (H, W)\n    exist:       list of existence, e.g. [0, 1, 1, 0]\n    smooth:      whether to smooth the probability or not\n    y_px_gap:    y pixel gap for sampling\n    pts:     how many points for one lane\n    thresh:  probability threshold\n\n    Return:\n    ----------\n    coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = seg_pred.shape[1:]  # seg_pred (5, h, w)\n    _, h, w = seg_pred.shape\n    H, W = resize_shape\n    coordinates = []\n\n    if pts is None:\n        pts = round(H / 2 / y_px_gap)\n\n    seg_pred = np.ascontiguousarray(np.transpose(seg_pred, (1, 2, 0)))\n    for i in range(cfg.num_classes - 1):\n        prob_map = seg_pred[..., i + 1]\n        if smooth:\n            prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)\n        coords = getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape, cfg)\n        if isShort(coords):\n            continue\n        coordinates.append(\n            [[coords[j], H - 10 - j * y_px_gap] if coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in\n             range(pts)])\n\n\n    if len(coordinates) == 0:\n        coords = np.zeros(pts)\n        coordinates.append(\n            [[coords[j], H - 10 - j * y_px_gap] if 
coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in\n             range(pts)])\n\n\n    return coordinates\n"
  },
  {
    "path": "runner/evaluator/tusimple/lane.py",
    "content": "import numpy as np\nfrom sklearn.linear_model import LinearRegression\nimport json as json\n\n\nclass LaneEval(object):\n    lr = LinearRegression()\n    pixel_thresh = 20\n    pt_thresh = 0.85\n\n    @staticmethod\n    def get_angle(xs, y_samples):\n        xs, ys = xs[xs >= 0], y_samples[xs >= 0]\n        if len(xs) > 1:\n            LaneEval.lr.fit(ys[:, None], xs)\n            k = LaneEval.lr.coef_[0]\n            theta = np.arctan(k)\n        else:\n            theta = 0\n        return theta\n\n    @staticmethod\n    def line_accuracy(pred, gt, thresh):\n        pred = np.array([p if p >= 0 else -100 for p in pred])\n        gt = np.array([g if g >= 0 else -100 for g in gt])\n        return np.sum(np.where(np.abs(pred - gt) < thresh, 1., 0.)) / len(gt)\n\n    @staticmethod\n    def bench(pred, gt, y_samples, running_time):\n        if any(len(p) != len(y_samples) for p in pred):\n            raise Exception('Format of lanes error.')\n        if running_time > 200 or len(gt) + 2 < len(pred):\n            return 0., 0., 1.\n        angles = [LaneEval.get_angle(\n            np.array(x_gts), np.array(y_samples)) for x_gts in gt]\n        threshs = [LaneEval.pixel_thresh / np.cos(angle) for angle in angles]\n        line_accs = []\n        fp, fn = 0., 0.\n        matched = 0.\n        for x_gts, thresh in zip(gt, threshs):\n            accs = [LaneEval.line_accuracy(\n                np.array(x_preds), np.array(x_gts), thresh) for x_preds in pred]\n            max_acc = np.max(accs) if len(accs) > 0 else 0.\n            if max_acc < LaneEval.pt_thresh:\n                fn += 1\n            else:\n                matched += 1\n            line_accs.append(max_acc)\n        fp = len(pred) - matched\n        if len(gt) > 4 and fn > 0:\n            fn -= 1\n        s = sum(line_accs)\n        if len(gt) > 4:\n            s -= min(line_accs)\n        return s / max(min(4.0, len(gt)), 1.), fp / len(pred) if len(pred) > 0 else 0., fn / max(min(len(gt), 
4.), 1.)\n\n    @staticmethod\n    def bench_one_submit(pred_file, gt_file):\n        try:\n            json_pred = [json.loads(line)\n                         for line in open(pred_file).readlines()]\n        except BaseException as e:\n            raise Exception('Fail to load json file of the prediction.')\n        json_gt = [json.loads(line) for line in open(gt_file).readlines()]\n        if len(json_gt) != len(json_pred):\n            raise Exception(\n                'We do not get the predictions of all the test tasks')\n        gts = {l['raw_file']: l for l in json_gt}\n        accuracy, fp, fn = 0., 0., 0.\n        for pred in json_pred:\n            if 'raw_file' not in pred or 'lanes' not in pred or 'run_time' not in pred:\n                raise Exception(\n                    'raw_file or lanes or run_time not in some predictions.')\n            raw_file = pred['raw_file']\n            pred_lanes = pred['lanes']\n            run_time = pred['run_time']\n            if raw_file not in gts:\n                raise Exception(\n                    'Some raw_file from your predictions do not exist in the test tasks.')\n            gt = gts[raw_file]\n            gt_lanes = gt['lanes']\n            y_samples = gt['h_samples']\n            try:\n                a, p, n = LaneEval.bench(\n                    pred_lanes, gt_lanes, y_samples, run_time)\n            except BaseException as e:\n                raise Exception('Format of lanes error.')\n            accuracy += a\n            fp += p\n            fn += n\n        num = len(gts)\n        # the first return parameter is the default ranking parameter\n        return json.dumps([\n            {'name': 'Accuracy', 'value': accuracy / num, 'order': 'desc'},\n            {'name': 'FP', 'value': fp / num, 'order': 'asc'},\n            {'name': 'FN', 'value': fn / num, 'order': 'asc'}\n        ]), accuracy / num\n\n\nif __name__ == '__main__':\n    import sys\n    try:\n        if len(sys.argv) != 3:\n         
   raise Exception('Invalid input arguments')\n        print(LaneEval.bench_one_submit(sys.argv[1], sys.argv[2]))\n    except Exception as e:\n        # Exception objects have no .message attribute in Python 3\n        print(e)\n        sys.exit(str(e))\n"
  },
  {
    "path": "runner/evaluator/tusimple/tusimple.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\nfrom runner.logger import get_logger\n\nfrom runner.registry import EVALUATOR \nimport json\nimport os\nimport cv2\n\nfrom .lane import LaneEval\n\ndef split_path(path):\n    \"\"\"split path tree into list\"\"\"\n    folders = []\n    while True:\n        path, folder = os.path.split(path)\n        if folder != \"\":\n            folders.insert(0, folder)\n        else:\n            if path != \"\":\n                folders.insert(0, path)\n            break\n    return folders\n\n\n@EVALUATOR.register_module\nclass Tusimple(nn.Module):\n    def __init__(self, cfg):\n        super(Tusimple, self).__init__()\n        self.cfg = cfg \n        exp_dir = os.path.join(self.cfg.work_dir, \"output\")\n        if not os.path.exists(exp_dir):\n            os.mkdir(exp_dir)\n        self.out_path = os.path.join(exp_dir, \"coord_output\")\n        if not os.path.exists(self.out_path):\n            os.mkdir(self.out_path)\n        self.dump_to_json = [] \n        self.thresh = cfg.evaluator.thresh\n        self.logger = get_logger('resa')\n        if cfg.view:\n            self.view_dir = os.path.join(self.cfg.work_dir, 'vis')\n\n    def evaluate_pred(self, dataset, seg_pred, exist_pred, batch):\n        img_name = batch['meta']['img_name']\n        img_path = batch['meta']['full_img_path']\n        for b in range(len(seg_pred)):\n            seg = seg_pred[b]\n            exist = [1 if exist_pred[b, i] >\n                     0.5 else 0 for i in range(self.cfg.num_classes-1)]\n            lane_coords = dataset.probmap2lane(seg, exist, thresh = self.thresh)\n            for i in range(len(lane_coords)):\n                lane_coords[i] = sorted(\n                    lane_coords[i], key=lambda pair: pair[1])\n\n            path_tree = split_path(img_name[b])\n            save_dir, save_name = path_tree[-3:-1], path_tree[-1]\n            save_dir = os.path.join(self.out_path, *save_dir)\n       
     save_name = save_name[:-3] + \"lines.txt\"\n            save_name = os.path.join(save_dir, save_name)\n            if not os.path.exists(save_dir):\n                os.makedirs(save_dir, exist_ok=True)\n\n            with open(save_name, \"w\") as f:\n                for l in lane_coords:\n                    for (x, y) in l:\n                        print(\"{} {}\".format(x, y), end=\" \", file=f)\n                    print(file=f)\n\n            json_dict = {}\n            json_dict['lanes'] = []\n            json_dict['h_sample'] = []\n            json_dict['raw_file'] = os.path.join(*path_tree[-4:])\n            json_dict['run_time'] = 0\n            for l in lane_coords:\n                if len(l) == 0:\n                    continue\n                json_dict['lanes'].append([])\n                for (x, y) in l:\n                    json_dict['lanes'][-1].append(int(x))\n            for (x, y) in lane_coords[0]:\n                json_dict['h_sample'].append(y)\n            self.dump_to_json.append(json.dumps(json_dict))\n            if self.cfg.view:\n                img = cv2.imread(img_path[b])\n                new_img_name = img_name[b].replace('/', '_')\n                save_dir = os.path.join(self.view_dir, new_img_name)\n                dataset.view(img, lane_coords, save_dir)\n\n\n    def evaluate(self, dataset, output, batch):\n        seg_pred, exist_pred = output['seg'], output['exist']\n        seg_pred = F.softmax(seg_pred, dim=1)\n        seg_pred = seg_pred.detach().cpu().numpy()\n        exist_pred = exist_pred.detach().cpu().numpy()\n        self.evaluate_pred(dataset, seg_pred, exist_pred, batch)\n\n    def summarize(self):\n        best_acc = 0\n        output_file = os.path.join(self.out_path, 'predict_test.json')\n        with open(output_file, \"w+\") as f:\n            for line in self.dump_to_json:\n                print(line, end=\"\\n\", file=f)\n\n        eval_result, acc = LaneEval.bench_one_submit(output_file,\n                 
           self.cfg.test_json_file)\n\n        self.logger.info(eval_result)\n        self.dump_to_json = []\n        best_acc = max(acc, best_acc)\n        return best_acc\n"
  },
  {
    "path": "runner/logger.py",
    "content": "import logging\n\nlogger_initialized = {}\n\ndef get_logger(name, log_file=None, log_level=logging.INFO):\n    \"\"\"Initialize and get a logger by name.\n    If the logger has not been initialized, this method will initialize the\n    logger by adding one or two handlers; otherwise the initialized logger\n    will be returned directly. During initialization, a StreamHandler is\n    always added. If `log_file` is specified, a FileHandler will also be\n    added.\n    Args:\n        name (str): Logger name.\n        log_file (str | None): The log filename. If specified, a FileHandler\n            will be added to the logger.\n        log_level (int): The logger level.\n    Returns:\n        logging.Logger: The expected logger.\n    \"\"\"\n    logger = logging.getLogger(name)\n    if name in logger_initialized:\n        return logger\n    # handle hierarchical names\n    # e.g., logger \"a\" is initialized, then logger \"a.b\" will skip the\n    # initialization since it is a child of \"a\".\n    for logger_name in logger_initialized:\n        if name.startswith(logger_name):\n            return logger\n\n    stream_handler = logging.StreamHandler()\n    handlers = [stream_handler]\n\n    if log_file is not None:\n        file_handler = logging.FileHandler(log_file, 'w')\n        handlers.append(file_handler)\n\n    formatter = logging.Formatter(\n        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n    for handler in handlers:\n        handler.setFormatter(formatter)\n        handler.setLevel(log_level)\n        logger.addHandler(handler)\n\n    logger.setLevel(log_level)\n\n    logger_initialized[name] = True\n\n    return logger\n"
  },
  {
    "path": "runner/net_utils.py",
    "content": "import os\n\nimport torch\n\n\ndef save_model(net, optim, scheduler, recorder, is_best=False):\n    model_dir = os.path.join(recorder.work_dir, 'ckpt')\n    os.makedirs(model_dir, exist_ok=True)\n    epoch = recorder.epoch\n    ckpt_name = 'best' if is_best else epoch\n    torch.save({\n        'net': net.state_dict(),\n        'optim': optim.state_dict(),\n        'scheduler': scheduler.state_dict(),\n        'recorder': recorder.state_dict(),\n        'epoch': epoch\n    }, os.path.join(model_dir, '{}.pth'.format(ckpt_name)))\n\n\ndef load_network_specified(net, model_dir, logger=None):\n    # load only the weights whose name and shape match the current network\n    pretrained_net = torch.load(model_dir)['net']\n    net_state = net.state_dict()\n    state = {}\n    for k, v in pretrained_net.items():\n        if k not in net_state.keys() or v.size() != net_state[k].size():\n            if logger:\n                logger.info('skip weights: ' + k)\n            continue\n        state[k] = v\n    net.load_state_dict(state, strict=False)\n\n\ndef load_network(net, model_dir, finetune_from=None, logger=None):\n    if finetune_from:\n        if logger:\n            logger.info('Finetune model from: ' + finetune_from)\n        load_network_specified(net, finetune_from, logger)\n        return\n    pretrained_model = torch.load(model_dir)\n    net.load_state_dict(pretrained_model['net'], strict=True)\n"
  },
  {
    "path": "runner/optimizer.py",
    "content": "import torch\n\n\n_optimizer_factory = {\n    'adam': torch.optim.Adam,\n    'sgd': torch.optim.SGD\n}\n\n\ndef build_optimizer(cfg, net):\n    params = []\n    lr = cfg.optimizer.lr\n    weight_decay = cfg.optimizer.weight_decay\n\n    for key, value in net.named_parameters():\n        if not value.requires_grad:\n            continue\n        params += [{\"params\": [value], \"lr\": lr, \"weight_decay\": weight_decay}]\n\n    if 'adam' in cfg.optimizer.type:\n        optimizer = _optimizer_factory[cfg.optimizer.type](params, lr, weight_decay=weight_decay)\n    else:\n        optimizer = _optimizer_factory[cfg.optimizer.type](\n                params, lr, weight_decay=weight_decay, momentum=cfg.optimizer.momentum)\n\n    return optimizer\n"
  },
  {
    "path": "runner/recorder.py",
    "content": "from collections import deque, defaultdict\nimport torch\nimport os\nimport datetime\nfrom .logger import get_logger\n\n\nclass SmoothedValue(object):\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=20):\n        self.deque = deque(maxlen=window_size)\n        self.total = 0.0\n        self.count = 0\n\n    def update(self, value):\n        self.deque.append(value)\n        self.count += 1\n        self.total += value\n\n    @property\n    def median(self):\n        d = torch.tensor(list(self.deque))\n        return d.median().item()\n\n    @property\n    def avg(self):\n        d = torch.tensor(list(self.deque))\n        return d.mean().item()\n\n    @property\n    def global_avg(self):\n        return self.total / self.count\n\n\nclass Recorder(object):\n    def __init__(self, cfg):\n        self.cfg = cfg\n        self.work_dir = self.get_work_dir()\n        cfg.work_dir = self.work_dir\n        self.log_path = os.path.join(self.work_dir, 'log.txt')\n\n        self.logger = get_logger('resa', self.log_path)\n        self.logger.info('Config: \\n' + cfg.text)\n\n        # scalars\n        self.epoch = 0\n        self.step = 0\n        self.loss_stats = defaultdict(SmoothedValue)\n        self.batch_time = SmoothedValue()\n        self.data_time = SmoothedValue()\n        self.max_iter = self.cfg.total_iter \n        self.lr = 0.\n\n    def get_work_dir(self):\n        now = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')\n        hyper_param_str = '_lr_%1.0e_b_%d' % (self.cfg.optimizer.lr, self.cfg.batch_size)\n        work_dir = os.path.join(self.cfg.work_dirs, now + hyper_param_str)\n        if not os.path.exists(work_dir):\n            os.makedirs(work_dir)\n        return work_dir\n\n    def update_loss_stats(self, loss_dict):\n        for k, v in loss_dict.items():\n            
self.loss_stats[k].update(v.detach().cpu())\n\n    def record(self, prefix, step=-1, loss_stats=None, image_stats=None):\n        self.logger.info(self)\n        # self.write(str(self))\n\n    def write(self, content):\n        with open(self.log_path, 'a+') as f:\n            f.write(content)\n            f.write('\\n')\n\n    def state_dict(self):\n        scalar_dict = {}\n        scalar_dict['step'] = self.step\n        return scalar_dict\n\n    def load_state_dict(self, scalar_dict):\n        self.step = scalar_dict['step']\n\n    def __str__(self):\n        loss_state = []\n        for k, v in self.loss_stats.items():\n            loss_state.append('{}: {:.4f}'.format(k, v.avg))\n        loss_state = '  '.join(loss_state)\n\n        recording_state = '  '.join(['epoch: {}', 'step: {}', 'lr: {:.4f}', '{}', 'data: {:.4f}', 'batch: {:.4f}', 'eta: {}'])\n        eta_seconds = self.batch_time.global_avg * (self.max_iter - self.step)\n        eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))\n        return recording_state.format(self.epoch, self.step, self.lr, loss_state, self.data_time.avg, self.batch_time.avg, eta_string)\n\n\ndef build_recorder(cfg):\n    return Recorder(cfg)\n\n"
  },
  {
    "path": "runner/registry.py",
    "content": "from torch import nn\n\nfrom utils import Registry, build_from_cfg\n\nTRAINER = Registry('trainer')\nEVALUATOR = Registry('evaluator')\n\ndef build(cfg, registry, default_args=None):\n    if isinstance(cfg, list):\n        modules = [\n            build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg\n        ]\n        return nn.Sequential(*modules)\n    else:\n        return build_from_cfg(cfg, registry, default_args)\n\ndef build_trainer(cfg):\n    return build(cfg.trainer, TRAINER, default_args=dict(cfg=cfg))\n\ndef build_evaluator(cfg):\n    return build(cfg.evaluator, EVALUATOR, default_args=dict(cfg=cfg))\n"
  },
  {
    "path": "runner/resa_trainer.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\n\nfrom runner.registry import TRAINER\n\ndef dice_loss(input, target):\n    input = input.contiguous().view(input.size()[0], -1)\n    target = target.contiguous().view(target.size()[0], -1).float()\n\n    a = torch.sum(input * target, 1)\n    b = torch.sum(input * input, 1) + 0.001\n    c = torch.sum(target * target, 1) + 0.001\n    d = (2 * a) / (b + c)\n    return (1-d).mean()\n\n@TRAINER.register_module\nclass RESA(nn.Module):\n    def __init__(self, cfg):\n        super(RESA, self).__init__()\n        self.cfg = cfg\n        self.loss_type = cfg.loss_type\n        if self.loss_type == 'cross_entropy':\n            weights = torch.ones(cfg.num_classes)\n            weights[0] = cfg.bg_weight\n            weights = weights.cuda()\n            self.criterion = torch.nn.NLLLoss(ignore_index=self.cfg.ignore_label,\n                                              weight=weights).cuda()\n\n        self.criterion_exist = torch.nn.BCEWithLogitsLoss().cuda()\n\n    def forward(self, net, batch):\n        output = net(batch['img'])\n\n        loss_stats = {}\n        loss = 0.\n\n        if self.loss_type == 'dice_loss':\n            target = F.one_hot(batch['label'], num_classes=self.cfg.num_classes).permute(0, 3, 1, 2)\n            seg_loss = dice_loss(F.softmax(\n                output['seg'], dim=1)[:, 1:], target[:, 1:])\n        else:\n            seg_loss = self.criterion(F.log_softmax(\n                output['seg'], dim=1), batch['label'].long())\n\n        loss += seg_loss * self.cfg.seg_loss_weight\n\n        loss_stats.update({'seg_loss': seg_loss})\n\n        if 'exist' in output:\n            exist_loss = 0.1 * \\\n                self.criterion_exist(output['exist'], batch['exist'].float())\n            loss += exist_loss\n            loss_stats.update({'exist_loss': exist_loss})\n\n        ret = {'loss': loss, 'loss_stats': loss_stats}\n\n        return ret\n"
  },
  {
    "path": "runner/runner.py",
    "content": "import time\nimport torch\nimport numpy as np\nfrom tqdm import tqdm\nimport pytorch_warmup as warmup\n\nfrom models.registry import build_net\nfrom .registry import build_trainer, build_evaluator\nfrom .optimizer import build_optimizer\nfrom .scheduler import build_scheduler\nfrom datasets import build_dataloader\nfrom .recorder import build_recorder\nfrom .net_utils import save_model, load_network\n\n\nclass Runner(object):\n    def __init__(self, cfg):\n        self.cfg = cfg\n        self.recorder = build_recorder(self.cfg)\n        self.net = build_net(self.cfg)\n        self.net = torch.nn.parallel.DataParallel(\n                self.net, device_ids = range(self.cfg.gpus)).cuda()\n        self.recorder.logger.info('Network: \\n' + str(self.net))\n        self.resume()\n        self.optimizer = build_optimizer(self.cfg, self.net)\n        self.scheduler = build_scheduler(self.cfg, self.optimizer)\n        self.evaluator = build_evaluator(self.cfg)\n        self.warmup_scheduler = warmup.LinearWarmup(\n            self.optimizer, warmup_period=5000)\n        self.metric = 0.\n\n    def resume(self):\n        if not self.cfg.load_from and not self.cfg.finetune_from:\n            return\n        load_network(self.net, self.cfg.load_from,\n                finetune_from=self.cfg.finetune_from, logger=self.recorder.logger)\n\n    def to_cuda(self, batch):\n        for k in batch:\n            if k == 'meta':\n                continue\n            batch[k] = batch[k].cuda()\n        return batch\n    \n    def train_epoch(self, epoch, train_loader):\n        self.net.train()\n        end = time.time()\n        max_iter = len(train_loader)\n        for i, data in enumerate(train_loader):\n            if self.recorder.step >= self.cfg.total_iter:\n                break\n            date_time = time.time() - end\n            self.recorder.step += 1\n            data = self.to_cuda(data)\n            output = self.trainer.forward(self.net, data)\n         
   self.optimizer.zero_grad()\n            loss = output['loss']\n            loss.backward()\n            self.optimizer.step()\n            self.scheduler.step()\n            self.warmup_scheduler.dampen()\n            batch_time = time.time() - end\n            end = time.time()\n            self.recorder.update_loss_stats(output['loss_stats'])\n            self.recorder.batch_time.update(batch_time)\n            self.recorder.data_time.update(date_time)\n\n            if i % self.cfg.log_interval == 0 or i == max_iter - 1:\n                lr = self.optimizer.param_groups[0]['lr']\n                self.recorder.lr = lr\n                self.recorder.record('train')\n\n    def train(self):\n        self.recorder.logger.info('start training...')\n        self.trainer = build_trainer(self.cfg)\n        train_loader = build_dataloader(self.cfg.dataset.train, self.cfg, is_train=True)\n        val_loader = build_dataloader(self.cfg.dataset.val, self.cfg, is_train=False)\n\n        for epoch in range(self.cfg.epochs):\n            self.recorder.epoch = epoch\n            self.train_epoch(epoch, train_loader)\n            if (epoch + 1) % self.cfg.save_ep == 0 or epoch == self.cfg.epochs - 1:\n                self.save_ckpt()\n            if (epoch + 1) % self.cfg.eval_ep == 0 or epoch == self.cfg.epochs - 1:\n                self.validate(val_loader)\n            if self.recorder.step >= self.cfg.total_iter:\n                break\n\n    def validate(self, val_loader):\n        self.net.eval()\n        for i, data in enumerate(tqdm(val_loader, desc=f'Validate')):\n            data = self.to_cuda(data)\n            with torch.no_grad():\n                output = self.net(data['img'])\n                self.evaluator.evaluate(val_loader.dataset, output, data)\n\n        metric = self.evaluator.summarize()\n        if not metric:\n            return\n        if metric > self.metric:\n            self.metric = metric\n            self.save_ckpt(is_best=True)\n        
self.recorder.logger.info('Best metric: ' + str(self.metric))\n\n    def save_ckpt(self, is_best=False):\n        save_model(self.net, self.optimizer, self.scheduler,\n                self.recorder, is_best)\n"
  },
  {
    "path": "runner/scheduler.py",
    "content": "import torch\nimport math\n\n\n_scheduler_factory = {\n    'LambdaLR': torch.optim.lr_scheduler.LambdaLR,\n}\n\n\ndef build_scheduler(cfg, optimizer):\n\n    assert cfg.scheduler.type in _scheduler_factory\n\n    cfg_cp = cfg.scheduler.copy()\n    cfg_cp.pop('type')\n\n    scheduler = _scheduler_factory[cfg.scheduler.type](optimizer, **cfg_cp)\n\n\n    return scheduler \n"
  },
  {
    "path": "tools/generate_seg_tusimple.py",
    "content": "import json\nimport numpy as np\nimport cv2\nimport os\nimport argparse\n\nTRAIN_SET = ['label_data_0313.json', 'label_data_0601.json']\nVAL_SET = ['label_data_0531.json']\nTRAIN_VAL_SET = TRAIN_SET + VAL_SET\nTEST_SET = ['test_label.json']\n\ndef gen_label_for_json(args, image_set):\n    H, W = 720, 1280\n    SEG_WIDTH = 30\n    save_dir = args.savedir\n\n    os.makedirs(os.path.join(args.root, args.savedir, \"list\"), exist_ok=True)\n    list_f = open(os.path.join(args.root, args.savedir, \"list\", \"{}_gt.txt\".format(image_set)), \"w\")\n\n    json_path = os.path.join(args.root, args.savedir, \"{}.json\".format(image_set))\n    with open(json_path) as f:\n        for line in f:\n            label = json.loads(line)\n            # ---------- clean and sort lanes -------------\n            lanes = []\n            _lanes = []\n            slope = [] # identify 0th, 1st, 2nd, 3rd, 4th, 5th lane through slope\n            for i in range(len(label['lanes'])):\n                l = [(x, y) for x, y in zip(label['lanes'][i], label['h_samples']) if x >= 0]\n                if (len(l)>1):\n                    _lanes.append(l)\n                    slope.append(np.arctan2(l[-1][1]-l[0][1], l[0][0]-l[-1][0]) / np.pi * 180)\n            _lanes = [_lanes[i] for i in np.argsort(slope)]\n            slope = [slope[i] for i in np.argsort(slope)]\n\n            idx = [None for i in range(6)]\n            for i in range(len(slope)):\n                if slope[i] <= 90:\n                    idx[2] = i\n                    idx[1] = i-1 if i > 0 else None\n                    idx[0] = i-2 if i > 1 else None\n                else:\n                    idx[3] = i\n                    idx[4] = i+1 if i+1 < len(slope) else None\n                    idx[5] = i+2 if i+2 < len(slope) else None\n                    break\n            for i in range(6):\n                lanes.append([] if idx[i] is None else _lanes[idx[i]])\n\n            # 
---------------------------------------------\n\n            img_path = label['raw_file']\n            seg_img = np.zeros((H, W, 3))\n            list_str = []  # str to be written to list.txt\n            for i in range(len(lanes)):\n                coords = lanes[i]\n                if len(coords) < 4:\n                    list_str.append('0')\n                    continue\n                for j in range(len(coords)-1):\n                    cv2.line(seg_img, coords[j], coords[j+1], (i+1, i+1, i+1), SEG_WIDTH//2)\n                list_str.append('1')\n\n            seg_path = img_path.split(\"/\")\n            seg_path, img_name = os.path.join(args.root, args.savedir, seg_path[1], seg_path[2]), seg_path[3]\n            os.makedirs(seg_path, exist_ok=True)\n            seg_path = os.path.join(seg_path, img_name[:-3]+\"png\")\n            cv2.imwrite(seg_path, seg_img)\n\n            seg_path = \"/\".join([args.savedir, *img_path.split(\"/\")[1:3], img_name[:-3]+\"png\"])\n            if seg_path[0] != '/':\n                seg_path = '/' + seg_path\n            if img_path[0] != '/':\n                img_path = '/' + img_path\n            list_str.insert(0, seg_path)\n            list_str.insert(0, img_path)\n            list_str = \" \".join(list_str) + \"\\n\"\n            list_f.write(list_str)\n\n\ndef generate_json_file(save_dir, json_file, image_set):\n    with open(os.path.join(save_dir, json_file), \"w\") as outfile:\n        for json_name in (image_set):\n            with open(os.path.join(args.root, json_name)) as infile:\n                for line in infile:\n                    outfile.write(line)\n\ndef generate_label(args):\n    save_dir = os.path.join(args.root, args.savedir)\n    os.makedirs(save_dir, exist_ok=True)\n    generate_json_file(save_dir, \"train_val.json\", TRAIN_VAL_SET)\n    generate_json_file(save_dir, \"test.json\", TEST_SET)\n\n    print(\"generating train_val set...\")\n    gen_label_for_json(args, 'train_val')\n    
print(\"generating test set...\")\n    gen_label_for_json(args, 'test')\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--root', required=True, help='The root of the Tusimple dataset')\n    parser.add_argument('--savedir', type=str, default='seg_label', help='The directory (relative to --root) to save the generated segmentation labels')\n    args = parser.parse_args()\n\n    generate_label(args)\n"
  },
  {
    "path": "utils/__init__.py",
    "content": "from .config import Config\nfrom .registry import Registry, build_from_cfg\n"
  },
  {
    "path": "utils/config.py",
    "content": "# Copyright (c) Open-MMLab. All rights reserved.\nimport ast\nimport os.path as osp\nimport shutil\nimport sys\nimport tempfile\nfrom argparse import Action, ArgumentParser\nfrom collections import abc\nfrom importlib import import_module\n\nfrom addict import Dict\nfrom yapf.yapflib.yapf_api import FormatCode\n\n\nBASE_KEY = '_base_'\nDELETE_KEY = '_delete_'\nRESERVED_KEYS = ['filename', 'text', 'pretty_text']\n\ndef check_file_exist(filename, msg_tmpl='file \"{}\" does not exist'):\n    if not osp.isfile(filename):\n        raise FileNotFoundError(msg_tmpl.format(filename))\n\n\n\nclass ConfigDict(Dict):\n\n    def __missing__(self, name):\n        raise KeyError(name)\n\n    def __getattr__(self, name):\n        try:\n            value = super(ConfigDict, self).__getattr__(name)\n        except KeyError:\n            ex = AttributeError(f\"'{self.__class__.__name__}' object has no \"\n                                f\"attribute '{name}'\")\n        except Exception as e:\n            ex = e\n        else:\n            return value\n        raise ex\n\n\ndef add_args(parser, cfg, prefix=''):\n    for k, v in cfg.items():\n        if isinstance(v, str):\n            parser.add_argument('--' + prefix + k)\n        elif isinstance(v, int):\n            parser.add_argument('--' + prefix + k, type=int)\n        elif isinstance(v, float):\n            parser.add_argument('--' + prefix + k, type=float)\n        elif isinstance(v, bool):\n            parser.add_argument('--' + prefix + k, action='store_true')\n        elif isinstance(v, dict):\n            add_args(parser, v, prefix + k + '.')\n        elif isinstance(v, abc.Iterable):\n            parser.add_argument('--' + prefix + k, type=type(v[0]), nargs='+')\n        else:\n            print(f'cannot parse key {prefix + k} of type {type(v)}')\n    return parser\n\n\nclass Config:\n    \"\"\"A facility for config and config files.\n    It supports common file formats as configs: python/json/yaml. 
The interface\n    is the same as a dict object and also allows accessing config values as\n    attributes.\n    Example:\n        >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1])))\n        >>> cfg.a\n        1\n        >>> cfg.b\n        {'b1': [0, 1]}\n        >>> cfg.b.b1\n        [0, 1]\n        >>> cfg = Config.fromfile('tests/data/config/a.py')\n        >>> cfg.filename\n        \"/home/kchen/projects/mmcv/tests/data/config/a.py\"\n        >>> cfg.item4\n        'test'\n        >>> cfg\n        \"Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: \"\n        \"{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}\"\n    \"\"\"\n\n    @staticmethod\n    def _validate_py_syntax(filename):\n        with open(filename) as f:\n            content = f.read()\n        try:\n            ast.parse(content)\n        except SyntaxError:\n            raise SyntaxError('There are syntax errors in config '\n                              f'file {filename}')\n\n    @staticmethod\n    def _file2dict(filename):\n        filename = osp.abspath(osp.expanduser(filename))\n        check_file_exist(filename)\n        if filename.endswith('.py'):\n            with tempfile.TemporaryDirectory() as temp_config_dir:\n                temp_config_file = tempfile.NamedTemporaryFile(\n                    dir=temp_config_dir, suffix='.py')\n                temp_config_name = osp.basename(temp_config_file.name)\n                shutil.copyfile(filename,\n                                osp.join(temp_config_dir, temp_config_name))\n                temp_module_name = osp.splitext(temp_config_name)[0]\n                sys.path.insert(0, temp_config_dir)\n                Config._validate_py_syntax(filename)\n                mod = import_module(temp_module_name)\n                sys.path.pop(0)\n                cfg_dict = {\n                    name: value\n                    for name, value in mod.__dict__.items()\n                    if not name.startswith('__')\n        
        }\n                # delete imported module\n                del sys.modules[temp_module_name]\n                # close temp file\n                temp_config_file.close()\n        elif filename.endswith(('.yml', '.yaml', '.json')):\n            import mmcv\n            cfg_dict = mmcv.load(filename)\n        else:\n            raise IOError('Only py/yml/yaml/json type are supported now!')\n\n        cfg_text = filename + '\\n'\n        with open(filename, 'r') as f:\n            cfg_text += f.read()\n\n        if BASE_KEY in cfg_dict:\n            cfg_dir = osp.dirname(filename)\n            base_filename = cfg_dict.pop(BASE_KEY)\n            base_filename = base_filename if isinstance(\n                base_filename, list) else [base_filename]\n\n            cfg_dict_list = list()\n            cfg_text_list = list()\n            for f in base_filename:\n                _cfg_dict, _cfg_text = Config._file2dict(osp.join(cfg_dir, f))\n                cfg_dict_list.append(_cfg_dict)\n                cfg_text_list.append(_cfg_text)\n\n            base_cfg_dict = dict()\n            for c in cfg_dict_list:\n                if len(base_cfg_dict.keys() & c.keys()) > 0:\n                    raise KeyError('Duplicate key is not allowed among bases')\n                base_cfg_dict.update(c)\n\n            base_cfg_dict = Config._merge_a_into_b(cfg_dict, base_cfg_dict)\n            cfg_dict = base_cfg_dict\n\n            # merge cfg_text\n            cfg_text_list.append(cfg_text)\n            cfg_text = '\\n'.join(cfg_text_list)\n\n        return cfg_dict, cfg_text\n\n    @staticmethod\n    def _merge_a_into_b(a, b):\n        # merge dict `a` into dict `b` (non-inplace). 
values in `a` will\n        # overwrite `b`.\n        # copy first to avoid inplace modification\n        b = b.copy()\n        for k, v in a.items():\n            if isinstance(v, dict) and k in b and not v.pop(DELETE_KEY, False):\n                if not isinstance(b[k], dict):\n                    raise TypeError(\n                        f'{k}={v} in child config cannot inherit from base '\n                        f'because {k} is a dict in the child config but is of '\n                        f'type {type(b[k])} in base config. You may set '\n                        f'`{DELETE_KEY}=True` to ignore the base config')\n                b[k] = Config._merge_a_into_b(v, b[k])\n            else:\n                b[k] = v\n        return b\n\n    @staticmethod\n    def fromfile(filename):\n        cfg_dict, cfg_text = Config._file2dict(filename)\n        return Config(cfg_dict, cfg_text=cfg_text, filename=filename)\n\n    @staticmethod\n    def auto_argparser(description=None):\n        \"\"\"Generate argparser from config file automatically (experimental)\n        \"\"\"\n        partial_parser = ArgumentParser(description=description)\n        partial_parser.add_argument('config', help='config file path')\n        cfg_file = partial_parser.parse_known_args()[0].config\n        cfg = Config.fromfile(cfg_file)\n        parser = ArgumentParser(description=description)\n        parser.add_argument('config', help='config file path')\n        add_args(parser, cfg)\n        return parser, cfg\n\n    def __init__(self, cfg_dict=None, cfg_text=None, filename=None):\n        if cfg_dict is None:\n            cfg_dict = dict()\n        elif not isinstance(cfg_dict, dict):\n            raise TypeError('cfg_dict must be a dict, but '\n                            f'got {type(cfg_dict)}')\n        for key in cfg_dict:\n            if key in RESERVED_KEYS:\n                raise KeyError(f'{key} is reserved for config file')\n\n        super(Config, self).__setattr__('_cfg_dict', 
ConfigDict(cfg_dict))\n        super(Config, self).__setattr__('_filename', filename)\n        if cfg_text:\n            text = cfg_text\n        elif filename:\n            with open(filename, 'r') as f:\n                text = f.read()\n        else:\n            text = ''\n        super(Config, self).__setattr__('_text', text)\n\n    @property\n    def filename(self):\n        return self._filename\n\n    @property\n    def text(self):\n        return self._text\n\n    @property\n    def pretty_text(self):\n\n        indent = 4\n\n        def _indent(s_, num_spaces):\n            s = s_.split('\\n')\n            if len(s) == 1:\n                return s_\n            first = s.pop(0)\n            s = [(num_spaces * ' ') + line for line in s]\n            s = '\\n'.join(s)\n            s = first + '\\n' + s\n            return s\n\n        def _format_basic_types(k, v, use_mapping=False):\n            if isinstance(v, str):\n                v_str = f\"'{v}'\"\n            else:\n                v_str = str(v)\n\n            if use_mapping:\n                k_str = f\"'{k}'\" if isinstance(k, str) else str(k)\n                attr_str = f'{k_str}: {v_str}'\n            else:\n                attr_str = f'{str(k)}={v_str}'\n            attr_str = _indent(attr_str, indent)\n\n            return attr_str\n\n        def _format_list(k, v, use_mapping=False):\n            # check if all items in the list are dict\n            if all(isinstance(_, dict) for _ in v):\n                v_str = '[\\n'\n                v_str += '\\n'.join(\n                    f'dict({_indent(_format_dict(v_), indent)}),'\n                    for v_ in v).rstrip(',')\n                if use_mapping:\n                    k_str = f\"'{k}'\" if isinstance(k, str) else str(k)\n                    attr_str = f'{k_str}: {v_str}'\n                else:\n                    attr_str = f'{str(k)}={v_str}'\n                attr_str = _indent(attr_str, indent) + ']'\n            else:\n                
attr_str = _format_basic_types(k, v, use_mapping)\n            return attr_str\n\n        def _contain_invalid_identifier(dict_str):\n            contain_invalid_identifier = False\n            for key_name in dict_str:\n                contain_invalid_identifier |= \\\n                    (not str(key_name).isidentifier())\n            return contain_invalid_identifier\n\n        def _format_dict(input_dict, outest_level=False):\n            r = ''\n            s = []\n\n            use_mapping = _contain_invalid_identifier(input_dict)\n            if use_mapping:\n                r += '{'\n            for idx, (k, v) in enumerate(input_dict.items()):\n                is_last = idx >= len(input_dict) - 1\n                end = '' if outest_level or is_last else ','\n                if isinstance(v, dict):\n                    v_str = '\\n' + _format_dict(v)\n                    if use_mapping:\n                        k_str = f\"'{k}'\" if isinstance(k, str) else str(k)\n                        attr_str = f'{k_str}: dict({v_str}'\n                    else:\n                        attr_str = f'{str(k)}=dict({v_str}'\n                    attr_str = _indent(attr_str, indent) + ')' + end\n                elif isinstance(v, list):\n                    attr_str = _format_list(k, v, use_mapping) + end\n                else:\n                    attr_str = _format_basic_types(k, v, use_mapping) + end\n\n                s.append(attr_str)\n            r += '\\n'.join(s)\n            if use_mapping:\n                r += '}'\n            return r\n\n        cfg_dict = self._cfg_dict.to_dict()\n        text = _format_dict(cfg_dict, outest_level=True)\n        # copied from setup.cfg\n        yapf_style = dict(\n            based_on_style='pep8',\n            blank_line_before_nested_class_or_def=True,\n            split_before_expression_after_opening_paren=True)\n        text, _ = FormatCode(text, style_config=yapf_style, verify=True)\n\n        return text\n\n    def 
__repr__(self):\n        return f'Config (path: {self.filename}): {self._cfg_dict.__repr__()}'\n\n    def __len__(self):\n        return len(self._cfg_dict)\n\n    def __getattr__(self, name):\n        return getattr(self._cfg_dict, name)\n\n    def __getitem__(self, name):\n        return self._cfg_dict.__getitem__(name)\n\n    def __setattr__(self, name, value):\n        if isinstance(value, dict):\n            value = ConfigDict(value)\n        self._cfg_dict.__setattr__(name, value)\n\n    def __setitem__(self, name, value):\n        if isinstance(value, dict):\n            value = ConfigDict(value)\n        self._cfg_dict.__setitem__(name, value)\n\n    def __iter__(self):\n        return iter(self._cfg_dict)\n\n    def dump(self, file=None):\n        cfg_dict = super(Config, self).__getattribute__('_cfg_dict').to_dict()\n        if self.filename.endswith('.py'):\n            if file is None:\n                return self.pretty_text\n            else:\n                with open(file, 'w') as f:\n                    f.write(self.pretty_text)\n        else:\n            import mmcv\n            if file is None:\n                file_format = self.filename.split('.')[-1]\n                return mmcv.dump(cfg_dict, file_format=file_format)\n            else:\n                mmcv.dump(cfg_dict, file)\n\n    def merge_from_dict(self, options):\n        \"\"\"Merge list into cfg_dict\n        Merge the dict parsed by MultipleKVAction into this cfg.\n        Examples:\n            >>> options = {'model.backbone.depth': 50,\n            ...            'model.backbone.with_cp':True}\n            >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet'))))\n            >>> cfg.merge_from_dict(options)\n            >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict')\n            >>> assert cfg_dict == dict(\n            ...     
model=dict(backbone=dict(depth=50, with_cp=True)))\n        Args:\n            options (dict): dict of configs to merge from.\n        \"\"\"\n        option_cfg_dict = {}\n        for full_key, v in options.items():\n            d = option_cfg_dict\n            key_list = full_key.split('.')\n            for subkey in key_list[:-1]:\n                d.setdefault(subkey, ConfigDict())\n                d = d[subkey]\n            subkey = key_list[-1]\n            d[subkey] = v\n\n        cfg_dict = super(Config, self).__getattribute__('_cfg_dict')\n        super(Config, self).__setattr__(\n            '_cfg_dict', Config._merge_a_into_b(option_cfg_dict, cfg_dict))\n\n\nclass DictAction(Action):\n    \"\"\"\n    argparse action to split an argument into KEY=VALUE form\n    on the first = and append to a dictionary. List options should\n    be passed as comma separated values, i.e KEY=V1,V2,V3\n    \"\"\"\n\n    @staticmethod\n    def _parse_int_float_bool(val):\n        try:\n            return int(val)\n        except ValueError:\n            pass\n        try:\n            return float(val)\n        except ValueError:\n            pass\n        if val.lower() in ['true', 'false']:\n            return True if val.lower() == 'true' else False\n        return val\n\n    def __call__(self, parser, namespace, values, option_string=None):\n        options = {}\n        for kv in values:\n            key, val = kv.split('=', maxsplit=1)\n            val = [self._parse_int_float_bool(v) for v in val.split(',')]\n            if len(val) == 1:\n                val = val[0]\n            options[key] = val\n        setattr(namespace, self.dest, options)\n"
  },
  {
    "path": "utils/registry.py",
"content": "import inspect\n\n# borrow from mmdetection\n\ndef is_str(x):\n    \"\"\"Whether the input is a string instance.\"\"\"\n    return isinstance(x, str)\n\nclass Registry(object):\n\n    def __init__(self, name):\n        self._name = name\n        self._module_dict = dict()\n\n    def __repr__(self):\n        format_str = self.__class__.__name__ + '(name={}, items={})'.format(\n            self._name, list(self._module_dict.keys()))\n        return format_str\n\n    @property\n    def name(self):\n        return self._name\n\n    @property\n    def module_dict(self):\n        return self._module_dict\n\n    def get(self, key):\n        return self._module_dict.get(key, None)\n\n    def _register_module(self, module_class):\n        \"\"\"Register a module.\n\n        Args:\n            module (:obj:`nn.Module`): Module to be registered.\n        \"\"\"\n        if not inspect.isclass(module_class):\n            raise TypeError('module must be a class, but got {}'.format(\n                type(module_class)))\n        module_name = module_class.__name__\n        if module_name in self._module_dict:\n            raise KeyError('{} is already registered in {}'.format(\n                module_name, self.name))\n        self._module_dict[module_name] = module_class\n\n    def register_module(self, cls):\n        self._register_module(cls)\n        return cls\n\n\ndef build_from_cfg(cfg, registry, default_args=None):\n    \"\"\"Build a module from config dict.\n\n    Args:\n        cfg (dict): Config dict. 
It should at least contain the key \"type\".\n        registry (:obj:`Registry`): The registry to search the type from.\n        default_args (dict, optional): Default initialization arguments.\n\n    Returns:\n        obj: The constructed object.\n    \"\"\"\n    assert isinstance(cfg, dict) and 'type' in cfg\n    assert isinstance(default_args, dict) or default_args is None\n    args = {}\n    obj_type = cfg.type \n    if is_str(obj_type):\n        obj_cls = registry.get(obj_type)\n        if obj_cls is None:\n            raise KeyError('{} is not in the {} registry'.format(\n                obj_type, registry.name))\n    elif inspect.isclass(obj_type):\n        obj_cls = obj_type\n    else:\n        raise TypeError('type must be a str or valid type, but got {}'.format(\n            type(obj_type)))\n    if default_args is not None:\n        for name, value in default_args.items():\n            args.setdefault(name, value)\n    return obj_cls(**args)\n"
  },
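  {
    "path": "examples/registry_demo.py",
    "content": "# NOTE: illustrative sketch, NOT part of the original repository. It shows how\n# `Registry`, `build_from_cfg`, and `Config` from utils/ fit together: a class\n# is registered under its own name via the decorator, and a config dict whose\n# 'type' key names that class selects it; constructor arguments arrive through\n# `default_args` (RESA passes the full cfg this way, e.g.\n# default_args=dict(cfg=cfg)). The file name and ToyNet class are hypothetical.\nfrom utils import Config, Registry, build_from_cfg\n\nNET = Registry('net')\n\n\n@NET.register_module\nclass ToyNet(object):\n    def __init__(self, cfg):\n        self.cfg = cfg\n\n\nif __name__ == '__main__':\n    # 'type' picks the registered class; the rest of cfg rides in via default_args\n    cfg = Config(dict(net=dict(type='ToyNet')))\n    net = build_from_cfg(cfg.net, NET, default_args=dict(cfg=cfg))\n    assert isinstance(net, ToyNet)\n"
  },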
  {
    "path": "utils/transforms.py",
"content": "import random\nimport cv2\nimport numpy as np\nimport numbers\nimport collections.abc\n\n# copy from: https://github.com/cardwing/Codes-for-Lane-Detection/blob/master/ERFNet-CULane-PyTorch/utils/transforms.py\n\n__all__ = ['GroupRandomCrop', 'GroupCenterCrop', 'GroupRandomPad', 'GroupCenterPad',\n           'GroupRandomScale', 'GroupRandomHorizontalFlip', 'GroupNormalize']\n\n\nclass SampleResize(object):\n    def __init__(self, size):\n        assert (isinstance(size, collections.abc.Iterable) and len(size) == 2)\n        self.size = size\n\n    def __call__(self, sample):\n        out = list()\n        out.append(cv2.resize(sample[0], self.size,\n                              interpolation=cv2.INTER_CUBIC))\n        if len(sample) > 1:\n            out.append(cv2.resize(sample[1], self.size,\n                                  interpolation=cv2.INTER_NEAREST))\n        return out\n\n\nclass GroupRandomCrop(object):\n    def __init__(self, size):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n\n    def __call__(self, img_group):\n        h, w = img_group[0].shape[0:2]\n        th, tw = self.size\n\n        out_images = list()\n        h1 = random.randint(0, max(0, h - th))\n        w1 = random.randint(0, max(0, w - tw))\n        h2 = min(h1 + th, h)\n        w2 = min(w1 + tw, w)\n\n        for img in img_group:\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(img[h1:h2, w1:w2, ...])\n        return out_images\n\n\nclass GroupRandomCropRatio(object):\n    def __init__(self, size):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n\n    def __call__(self, img_group):\n        h, w = img_group[0].shape[0:2]\n        tw, th = self.size\n\n        out_images = list()\n        h1 = random.randint(0, max(0, h - th))\n        w1 = 
random.randint(0, max(0, w - tw))\n        h2 = min(h1 + th, h)\n        w2 = min(w1 + tw, w)\n\n        for img in img_group:\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(img[h1:h2, w1:w2, ...])\n        return out_images\n\n\nclass GroupCenterCrop(object):\n    def __init__(self, size):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n\n    def __call__(self, img_group):\n        h, w = img_group[0].shape[0:2]\n        th, tw = self.size\n\n        out_images = list()\n        h1 = max(0, int((h - th) / 2))\n        w1 = max(0, int((w - tw) / 2))\n        h2 = min(h1 + th, h)\n        w2 = min(w1 + tw, w)\n\n        for img in img_group:\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(img[h1:h2, w1:w2, ...])\n        return out_images\n\n\nclass GroupRandomPad(object):\n    def __init__(self, size, padding):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n        self.padding = padding\n\n    def __call__(self, img_group):\n        assert (len(self.padding) == len(img_group))\n        h, w = img_group[0].shape[0:2]\n        th, tw = self.size\n\n        out_images = list()\n        h1 = random.randint(0, max(0, th - h))\n        w1 = random.randint(0, max(0, tw - w))\n        h2 = max(th - h - h1, 0)\n        w2 = max(tw - w - w1, 0)\n\n        for img, padding in zip(img_group, self.padding):\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(cv2.copyMakeBorder(\n                img, h1, h2, w1, w2, cv2.BORDER_CONSTANT, value=padding))\n            if len(img.shape) > len(out_images[-1].shape):\n                out_images[-1] = out_images[-1][...,\n                                                np.newaxis]  # single channel image\n        return 
out_images\n\n\nclass GroupCenterPad(object):\n    def __init__(self, size, padding):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n        self.padding = padding\n\n    def __call__(self, img_group):\n        assert (len(self.padding) == len(img_group))\n        h, w = img_group[0].shape[0:2]\n        th, tw = self.size\n\n        out_images = list()\n        h1 = max(0, int((th - h) / 2))\n        w1 = max(0, int((tw - w) / 2))\n        h2 = max(th - h - h1, 0)\n        w2 = max(tw - w - w1, 0)\n\n        for img, padding in zip(img_group, self.padding):\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(cv2.copyMakeBorder(\n                img, h1, h2, w1, w2, cv2.BORDER_CONSTANT, value=padding))\n            if len(img.shape) > len(out_images[-1].shape):\n                out_images[-1] = out_images[-1][...,\n                                                np.newaxis]  # single channel image\n        return out_images\n\n\nclass GroupConcerPad(object):\n    def __init__(self, size, padding):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n        self.padding = padding\n\n    def __call__(self, img_group):\n        assert (len(self.padding) == len(img_group))\n        h, w = img_group[0].shape[0:2]\n        th, tw = self.size\n\n        out_images = list()\n        h1 = 0\n        w1 = 0\n        h2 = max(th - h - h1, 0)\n        w2 = max(tw - w - w1, 0)\n\n        for img, padding in zip(img_group, self.padding):\n            assert (img.shape[0] == h and img.shape[1] == w)\n            out_images.append(cv2.copyMakeBorder(\n                img, h1, h2, w1, w2, cv2.BORDER_CONSTANT, value=padding))\n            if len(img.shape) > len(out_images[-1].shape):\n                out_images[-1] = out_images[-1][...,\n                            
                    np.newaxis]  # single channel image\n        return out_images\n\n\nclass GroupRandomScaleNew(object):\n    def __init__(self, size=(976, 208), interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)):\n        self.size = size\n        self.interpolation = interpolation\n\n    def __call__(self, img_group):\n        assert (len(self.interpolation) == len(img_group))\n        scale_w, scale_h = self.size[0] * 1.0 / 1640, self.size[1] * 1.0 / 590\n        out_images = list()\n        for img, interpolation in zip(img_group, self.interpolation):\n            out_images.append(cv2.resize(img, None, fx=scale_w,\n                                         fy=scale_h, interpolation=interpolation))\n            if len(img.shape) > len(out_images[-1].shape):\n                out_images[-1] = out_images[-1][...,\n                                                np.newaxis]  # single channel image\n        return out_images\n\n\nclass GroupRandomScale(object):\n    def __init__(self, size=(0.5, 1.5), interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)):\n        self.size = size\n        self.interpolation = interpolation\n\n    def __call__(self, img_group):\n        assert (len(self.interpolation) == len(img_group))\n        scale = random.uniform(self.size[0], self.size[1])\n        out_images = list()\n        for img, interpolation in zip(img_group, self.interpolation):\n            out_images.append(cv2.resize(img, None, fx=scale,\n                                         fy=scale, interpolation=interpolation))\n            if len(img.shape) > len(out_images[-1].shape):\n                out_images[-1] = out_images[-1][...,\n                                                np.newaxis]  # single channel image\n        return out_images\n\n\nclass GroupRandomMultiScale(object):\n    def __init__(self, size=(0.5, 1.5), interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)):\n        self.size = size\n        self.interpolation = interpolation\n\n    def 
__call__(self, img_group):\n        assert (len(self.interpolation) == len(img_group))\n        scales = [0.5, 1.0, 1.5]  # random.uniform(self.size[0], self.size[1])\n        out_images = list()\n        for scale in scales:\n            for img, interpolation in zip(img_group, self.interpolation):\n                out_images.append(cv2.resize(\n                    img, None, fx=scale, fy=scale, interpolation=interpolation))\n                if len(img.shape) > len(out_images[-1].shape):\n                    out_images[-1] = out_images[-1][...,\n                                                    np.newaxis]  # single channel image\n        return out_images\n\n\nclass GroupRandomScaleRatio(object):\n    def __init__(self, size=(680, 762, 562, 592), interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)):\n        self.size = size\n        self.interpolation = interpolation\n        self.origin_id = [0, 1360, 580, 768, 255, 300, 680, 710, 312, 1509, 800, 1377, 880, 910, 1188, 128, 960, 1784,\n                          1414, 1150, 512, 1162, 950, 750, 1575, 708, 2111, 1848, 1071, 1204, 892, 639, 2040, 1524, 832, 1122, 1224, 2295]\n\n    def __call__(self, img_group):\n        assert (len(self.interpolation) == len(img_group))\n        w_scale = random.randint(self.size[0], self.size[1])\n        h_scale = random.randint(self.size[2], self.size[3])\n        h, w, _ = img_group[0].shape\n        out_images = list()\n        out_images.append(cv2.resize(img_group[0], None, fx=w_scale*1.0/w, fy=h_scale*1.0/h,\n                                     interpolation=self.interpolation[0]))  # fx=w_scale*1.0/w, fy=h_scale*1.0/h\n        ### process label map ###\n        origin_label = cv2.resize(\n            img_group[1], None, fx=w_scale*1.0/w, fy=h_scale*1.0/h, interpolation=self.interpolation[1])\n        origin_label = origin_label.astype(int)\n        label = origin_label[:, :, 0] * 5 + \\\n            origin_label[:, :, 1] * 3 + origin_label[:, :, 2]\n        new_label = 
np.ones(label.shape) * 100\n        new_label = new_label.astype(int)\n        for cnt in range(37):\n            new_label = (\n                label == self.origin_id[cnt]) * (cnt - 100) + new_label\n        new_label = (label == self.origin_id[37]) * (36 - 100) + new_label\n        assert(100 not in np.unique(new_label))\n        out_images.append(new_label)\n        return out_images\n\n\nclass GroupRandomRotation(object):\n    def __init__(self, degree=(-10, 10), interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST), padding=None):\n        self.degree = degree\n        self.interpolation = interpolation\n        self.padding = padding\n        if self.padding is None:\n            self.padding = [0, 0]\n\n    def __call__(self, img_group):\n        assert (len(self.interpolation) == len(img_group))\n        v = random.random()\n        if v < 0.5:\n            degree = random.uniform(self.degree[0], self.degree[1])\n            h, w = img_group[0].shape[0:2]\n            center = (w / 2, h / 2)\n            map_matrix = cv2.getRotationMatrix2D(center, degree, 1.0)\n            out_images = list()\n            for img, interpolation, padding in zip(img_group, self.interpolation, self.padding):\n                out_images.append(cv2.warpAffine(\n                    img, map_matrix, (w, h), flags=interpolation, borderMode=cv2.BORDER_CONSTANT, borderValue=padding))\n                if len(img.shape) > len(out_images[-1].shape):\n                    out_images[-1] = out_images[-1][...,\n                                                    np.newaxis]  # single channel image\n            return out_images\n        else:\n            return img_group\n\n\nclass GroupRandomBlur(object):\n    def __init__(self, applied):\n        self.applied = applied\n\n    def __call__(self, img_group):\n        assert (len(self.applied) == len(img_group))\n        v = random.random()\n        if v < 0.5:\n            out_images = []\n            for img, a in zip(img_group, 
self.applied):\n                if a:\n                    img = cv2.GaussianBlur(\n                        img, (5, 5), random.uniform(1e-6, 0.6))\n                out_images.append(img)\n                if len(img.shape) > len(out_images[-1].shape):\n                    out_images[-1] = out_images[-1][...,\n                                                    np.newaxis]  # single channel image\n            return out_images\n        else:\n            return img_group\n\n\nclass GroupRandomHorizontalFlip(object):\n    \"\"\"Randomly horizontally flips the given numpy Image with a probability of 0.5\n    \"\"\"\n\n    def __init__(self, is_flow=False):\n        self.is_flow = is_flow\n\n    def __call__(self, img_group, is_flow=False):\n        v = random.random()\n        if v < 0.5:\n            out_images = [np.fliplr(img) for img in img_group]\n            if self.is_flow:\n                for i in range(0, len(out_images), 2):\n                    # invert flow pixel values when flipping\n                    out_images[i] = -out_images[i]\n            return out_images\n        else:\n            return img_group\n\n\nclass GroupNormalize(object):\n    def __init__(self, mean, std):\n        self.mean = mean\n        self.std = std\n\n    def __call__(self, img_group):\n        out_images = list()\n        for img, m, s in zip(img_group, self.mean, self.std):\n            if len(m) == 1:\n                img = img - np.array(m)  # single channel image\n                img = img / np.array(s)\n            else:\n                img = img - np.array(m)[np.newaxis, np.newaxis, ...]\n                img = img / np.array(s)[np.newaxis, np.newaxis, ...]\n            out_images.append(img)\n\n        return out_images\n"
  }
]