[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 HarryHan\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# SCNN lane detection in Pytorch\n\nSCNN is a segmentation-tasked lane detection algorithm, described in ['Spatial As Deep: Spatial CNN for Traffic Scene Understanding'](https://arxiv.org/abs/1712.06080). The [official implementation](<https://github.com/XingangPan/SCNN>) is in lua torch.\n\nThis repository contains a re-implementation in Pytorch.\n\n\n\n### Updates\n\n- 2019 / 08 / 14: Code refined including more convenient test & evaluation script.\n- 2019 / 08 / 12: Trained model on both dataset provided.\n- 2019 / 05 / 08: Evaluation is provided.\n- 2019 / 04 / 23: Trained model converted from [official t7 model](https://github.com/XingangPan/SCNN#Testing) is provided.\n\n<br/>\n\n## Data preparation\n\n### CULane\n\nThe dataset is available in [CULane](https://xingangpan.github.io/projects/CULane.html). Please download and unzip the files in one folder, which later is represented as `CULane_path`.  Then modify the path of `CULane_path` in `config.py`. Also, modify the path of `CULane_path` as `data_dir`  in `utils/lane_evaluation/CULane/Run.sh` .\n```\nCULane_path\n├── driver_100_30frame\n├── driver_161_90frame\n├── driver_182_30frame\n├── driver_193_90frame\n├── driver_23_30frame\n├── driver_37_30frame\n├── laneseg_label_w16\n├── laneseg_label_w16_test\n└── list\n```\n\n **Note: absolute path is encouraged.**\n\n\n\n\n\n### Tusimple\nThe dataset is available in [here](https://github.com/TuSimple/tusimple-benchmark/issues/3). Please download and unzip the files in one folder, which later is represented as `Tusimple_path`. Then modify the path of `Tusimple_path` in `config.py`.\n```\nTusimple_path\n├── clips\n├── label_data_0313.json\n├── label_data_0531.json\n├── label_data_0601.json\n└── test_label.json\n```\n\n**Note:  seg\\_label images and gt.txt, as in CULane dataset format,  will be generated the first time `Tusimple` object is instantiated. 
This may take some time.**\n\n\n\n<br/>\n\n## Trained Models Provided\n\n* The model trained on the CULane dataset can be converted from the [official implementation](https://github.com/XingangPan/SCNN#Testing), which can be downloaded [here](https://drive.google.com/open?id=1Wv3r3dCYNBwJdKl_WPEfrEOt-XGaROKu). Please put the `vgg_SCNN_DULR_w9.t7` file into `experiments/vgg_SCNN_DULR_w9`.\n\n  ```bash\n  python experiments/vgg_SCNN_DULR_w9/t7_to_pt.py\n  ```\n\n  The model will be saved to `experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.pth`.\n\n  **Note**: `torch.utils.serialization` is obsolete in PyTorch 1.0+. You can directly download **the converted model [here](https://drive.google.com/open?id=1bBdN3yhoOQBC9pRtBUxzeRrKJdF7uVTJ)**.\n\n\n\n* My model trained on Tusimple can be downloaded [here](https://drive.google.com/open?id=1IwEenTekMt-t6Yr5WJU9_kv4d_Pegd_Q). Its config file is in `exp0`.\n\n| Accuracy | FP     | FN     |\n| -------- | ------ | ------ |\n| 94.16%   | 0.0735 | 0.0825 |\n\n\n\n\n\n* My model trained on CULane can be downloaded [here](https://drive.google.com/open?id=1AZn23w8RbMh1P6lJcVcf6PcTIWJvQg9u). Its config file is in `exp10`.\n\n| Category  | F1-measure        |\n| --------- | ----------------- |\n| Normal    | 90.26             |\n| Crowded   | 68.23             |\n| HLight    | 61.84             |\n| Shadow    | 61.16             |\n| No line   | 43.44             |\n| Arrow     | 84.64             |\n| Curve     | 61.74             |\n| Crossroad | 2728 (FP measure) |\n| Night     | 65.32             |\n\n\n\n\n\n<br/>\n\n\n## Demo Test\n\nFor a single-image demo test:\n\n```shell\npython demo_test.py   -i demo/demo.jpg \\\n                      -w experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.pth \\\n                      [--visualize / -v]\n```\n\n![](demo/demo_result.jpg \"demo_result\")\n\n\n\n<br/>\n\n## Train\n\n1. Specify an experiment directory, e.g. `experiments/exp0`.\n\n2. 
Modify the hyperparameters in `experiments/exp0/cfg.json`.\n\n3. Start training:\n\n   ```shell\n   python train.py --exp_dir ./experiments/exp0 [--resume/-r]\n   ```\n\n4. Monitor training on TensorBoard:\n\n   ```bash\n   tensorboard --logdir='experiments/exp0'\n   ```\n\n**Note**\n\n\n- My model is trained with `torch.nn.DataParallel`. Modify this according to your hardware configuration.\n- Currently the backbone is VGG16 from torchvision. Following the paper, two modifications are made to the torchvision model: i) the dilation of the last three conv layers is changed to 2; ii) the last two max-pooling layers are removed.\n\n\n\n<br/>\n\n## Evaluation\n\n* The CULane evaluation code is ported from the [official implementation](<https://github.com/XingangPan/SCNN>), and an extra `CMakeLists.txt` is provided.\n\n  1. Build the C++ code first:\n\n     ```bash\n     cd utils/lane_evaluation/CULane\n     mkdir build && cd build\n     cmake ..\n     make\n     ```\n\n  2. Then set `root` to the absolute project path in `utils/lane_evaluation/CULane/Run.sh`.\n\n  Run the evaluation script; results will be saved into the corresponding `exp_dir` directory:\n\n  ```shell\n  python test_CULane.py --exp_dir ./experiments/exp10\n  ```\n\n\n\n* The Tusimple evaluation code is ported from the [tusimple repo](https://github.com/TuSimple/tusimple-benchmark/blob/master/evaluate/lane.py).\n\n  ```shell\n  python test_tusimple.py --exp_dir ./experiments/exp0\n  ```\n\n\n\n\n\n## Acknowledgement\n\nThis repo is built on the [official implementation](<https://github.com/XingangPan/SCNN>).\n\n"
  },
  {
    "path": "config.py",
    "content": "Dataset_Path = dict(\n    CULane = \"/home/lion/Dataset/CULane/data/CULane\",\n    Tusimple = \"/home/lion/Dataset/tusimple\"\n)\n"
  },
  {
    "path": "dataset/CULane.py",
    "content": "import cv2\nimport os\nimport numpy as np\n\nimport torch\nfrom torch.utils.data import Dataset\n\n\nclass CULane(Dataset):\n    def __init__(self, path, image_set, transforms=None):\n        super(CULane, self).__init__()\n        assert image_set in ('train', 'val', 'test'), \"image_set is not valid!\"\n        self.data_dir_path = path\n        self.image_set = image_set\n        self.transforms = transforms\n\n        if image_set != 'test':\n            self.createIndex()\n        else:\n            self.createIndex_test()\n\n\n    def createIndex(self):\n        listfile = os.path.join(self.data_dir_path, \"list\", \"{}_gt.txt\".format(self.image_set))\n\n        self.img_list = []\n        self.segLabel_list = []\n        self.exist_list = []\n        with open(listfile) as f:\n            for line in f:\n                line = line.strip()\n                l = line.split(\" \")\n                self.img_list.append(os.path.join(self.data_dir_path, l[0][1:]))   # l[0][1:]  get rid of the first '/' so as for os.path.join\n                self.segLabel_list.append(os.path.join(self.data_dir_path, l[1][1:]))\n                self.exist_list.append([int(x) for x in l[2:]])\n\n    def createIndex_test(self):\n        listfile = os.path.join(self.data_dir_path, \"list\", \"{}.txt\".format(self.image_set))\n\n        self.img_list = []\n        with open(listfile) as f:\n            for line in f:\n                line = line.strip()\n                self.img_list.append(os.path.join(self.data_dir_path, line[1:]))  # l[0][1:]  get rid of the first '/' so as for os.path.join\n\n    def __getitem__(self, idx):\n        img = cv2.imread(self.img_list[idx])\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n        if self.image_set != 'test':\n            segLabel = cv2.imread(self.segLabel_list[idx])[:, :, 0]\n            exist = np.array(self.exist_list[idx])\n        else:\n            segLabel = None\n            exist = None\n\n        sample = 
{'img': img,\n                  'segLabel': segLabel,\n                  'exist': exist,\n                  'img_name': self.img_list[idx]}\n        if self.transforms is not None:\n            sample = self.transforms(sample)\n        return sample\n\n    def __len__(self):\n        return len(self.img_list)\n\n    @staticmethod\n    def collate(batch):\n        if isinstance(batch[0]['img'], torch.Tensor):\n            img = torch.stack([b['img'] for b in batch])\n        else:\n            img = [b['img'] for b in batch]\n\n        if batch[0]['segLabel'] is None:\n            segLabel = None\n            exist = None\n        elif isinstance(batch[0]['segLabel'], torch.Tensor):\n            segLabel = torch.stack([b['segLabel'] for b in batch])\n            exist = torch.stack([b['exist'] for b in batch])\n        else:\n            segLabel = [b['segLabel'] for b in batch]\n            exist = [b['exist'] for b in batch]\n\n        samples = {'img': img,\n                  'segLabel': segLabel,\n                  'exist': exist,\n                  'img_name': [x['img_name'] for x in batch]}\n\n        return samples"
  },
  {
    "path": "dataset/Tusimple.py",
    "content": "import json\nimport os\n\nimport cv2\nimport numpy as np\nimport torch\nfrom torch.utils.data import Dataset\n\n\nclass Tusimple(Dataset):\n    \"\"\"\n    image_set is splitted into three partitions: train, val, test.\n    train includes label_data_0313.json, label_data_0601.json\n    val includes label_data_0531.json\n    test includes test_label.json\n    \"\"\"\n    TRAIN_SET = ['label_data_0313.json', 'label_data_0601.json']\n    VAL_SET = ['label_data_0531.json']\n    TEST_SET = ['test_label.json']\n\n    def __init__(self, path, image_set, transforms=None):\n        super(Tusimple, self).__init__()\n        assert image_set in ('train', 'val', 'test'), \"image_set is not valid!\"\n        self.data_dir_path = path\n        self.image_set = image_set\n        self.transforms = transforms\n\n        if not os.path.exists(os.path.join(path, \"seg_label\")):\n            print(\"Label is going to get generated into dir: {} ...\".format(os.path.join(path, \"seg_label\")))\n            self.generate_label()\n        self.createIndex()\n\n    def createIndex(self):\n        self.img_list = []\n        self.segLabel_list = []\n        self.exist_list = []\n\n        listfile = os.path.join(self.data_dir_path, \"seg_label\", \"list\", \"{}_gt.txt\".format(self.image_set))\n        if not os.path.exists(listfile):\n            raise FileNotFoundError(\"List file doesn't exist. Label has to be generated! 
...\")\n\n        with open(listfile) as f:\n            for line in f:\n                line = line.strip()\n                l = line.split(\" \")\n                self.img_list.append(os.path.join(self.data_dir_path, l[0][1:]))  # l[0][1:]  get rid of the first '/' so as for os.path.join\n                self.segLabel_list.append(os.path.join(self.data_dir_path, l[1][1:]))\n                self.exist_list.append([int(x) for x in l[2:]])\n\n    def __getitem__(self, idx):\n        img = cv2.imread(self.img_list[idx])\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n        if self.image_set != 'test':\n            segLabel = cv2.imread(self.segLabel_list[idx])[:, :, 0]\n            exist = np.array(self.exist_list[idx])\n        else:\n            segLabel = None\n            exist = None\n\n        sample = {'img': img,\n                  'segLabel': segLabel,\n                  'exist': exist,\n                  'img_name': self.img_list[idx]}\n        if self.transforms is not None:\n            sample = self.transforms(sample)\n        return sample\n\n    def __len__(self):\n        return len(self.img_list)\n\n    def generate_label(self):\n        save_dir = os.path.join(self.data_dir_path, \"seg_label\")\n        os.makedirs(save_dir, exist_ok=True)\n\n        # --------- merge json into one file ---------\n        with open(os.path.join(save_dir, \"train.json\"), \"w\") as outfile:\n            for json_name in self.TRAIN_SET:\n                with open(os.path.join(self.data_dir_path, json_name)) as infile:\n                    for line in infile:\n                        outfile.write(line)\n\n        with open(os.path.join(save_dir, \"val.json\"), \"w\") as outfile:\n            for json_name in self.VAL_SET:\n                with open(os.path.join(self.data_dir_path, json_name)) as infile:\n                    for line in infile:\n                        outfile.write(line)\n\n        with open(os.path.join(save_dir, \"test.json\"), \"w\") as 
outfile:\n            for json_name in self.TEST_SET:\n                with open(os.path.join(self.data_dir_path, json_name)) as infile:\n                    for line in infile:\n                        outfile.write(line)\n\n        self._gen_label_for_json('train')\n        print(\"train set is done\")\n        self._gen_label_for_json('val')\n        print(\"val set is done\")\n        self._gen_label_for_json('test')\n        print(\"test set is done\")\n\n    def _gen_label_for_json(self, image_set):\n        H, W = 720, 1280\n        SEG_WIDTH = 30\n        save_dir = \"seg_label\"\n\n        os.makedirs(os.path.join(self.data_dir_path, save_dir, \"list\"), exist_ok=True)\n        list_f = open(os.path.join(self.data_dir_path, save_dir, \"list\", \"{}_gt.txt\".format(image_set)), \"w\")\n\n        json_path = os.path.join(self.data_dir_path, save_dir, \"{}.json\".format(image_set))\n        with open(json_path) as f:\n            for line in f:\n                label = json.loads(line)\n\n                # ---------- clean and sort lanes -------------\n                lanes = []\n                _lanes = []\n                slope = [] # identify 1st, 2nd, 3rd, 4th lane through slope\n                for i in range(len(label['lanes'])):\n                    l = [(x, y) for x, y in zip(label['lanes'][i], label['h_samples']) if x >= 0]\n                    if (len(l)>1):\n                        _lanes.append(l)\n                        slope.append(np.arctan2(l[-1][1]-l[0][1], l[0][0]-l[-1][0]) / np.pi * 180)\n                _lanes = [_lanes[i] for i in np.argsort(slope)]\n                slope = [slope[i] for i in np.argsort(slope)]\n\n                idx_1 = None\n                idx_2 = None\n                idx_3 = None\n                idx_4 = None\n                for i in range(len(slope)):\n                    if slope[i]<=90:\n                        idx_2 = i\n                        idx_1 = i-1 if i>0 else None\n                    else:\n           
             idx_3 = i\n                        idx_4 = i+1 if i+1 < len(slope) else None\n                        break\n                lanes.append([] if idx_1 is None else _lanes[idx_1])\n                lanes.append([] if idx_2 is None else _lanes[idx_2])\n                lanes.append([] if idx_3 is None else _lanes[idx_3])\n                lanes.append([] if idx_4 is None else _lanes[idx_4])\n                # ---------------------------------------------\n\n                img_path = label['raw_file']\n                seg_img = np.zeros((H, W, 3))\n                list_str = []  # str to be written to list.txt\n                for i in range(len(lanes)):\n                    coords = lanes[i]\n                    if len(coords) < 4:\n                        list_str.append('0')\n                        continue\n                    for j in range(len(coords)-1):\n                        cv2.line(seg_img, coords[j], coords[j+1], (i+1, i+1, i+1), SEG_WIDTH//2)\n                    list_str.append('1')\n\n                seg_path = img_path.split(\"/\")\n                seg_path, img_name = os.path.join(self.data_dir_path, save_dir, seg_path[1], seg_path[2]), seg_path[3]\n                os.makedirs(seg_path, exist_ok=True)\n                seg_path = os.path.join(seg_path, img_name[:-3]+\"png\")\n                cv2.imwrite(seg_path, seg_img)\n\n                seg_path = \"/\".join([save_dir, *img_path.split(\"/\")[1:3], img_name[:-3]+\"png\"])\n                if seg_path[0] != '/':\n                    seg_path = '/' + seg_path\n                if img_path[0] != '/':\n                    img_path = '/' + img_path\n                list_str.insert(0, seg_path)\n                list_str.insert(0, img_path)\n                list_str = \" \".join(list_str) + \"\\n\"\n                list_f.write(list_str)\n\n        list_f.close()\n\n    @staticmethod\n    def collate(batch):\n        if isinstance(batch[0]['img'], torch.Tensor):\n            img = 
torch.stack([b['img'] for b in batch])\n        else:\n            img = [b['img'] for b in batch]\n\n        if batch[0]['segLabel'] is None:\n            segLabel = None\n            exist = None\n        elif isinstance(batch[0]['segLabel'], torch.Tensor):\n            segLabel = torch.stack([b['segLabel'] for b in batch])\n            exist = torch.stack([b['exist'] for b in batch])\n        else:\n            segLabel = [b['segLabel'] for b in batch]\n            exist = [b['exist'] for b in batch]\n\n        samples = {'img': img,\n                   'segLabel': segLabel,\n                   'exist': exist,\n                   'img_name': [x['img_name'] for x in batch]}\n\n        return samples"
  },
  {
    "path": "dataset/__init__.py",
    "content": "from .CULane import CULane\nfrom .Tusimple import Tusimple"
  },
  {
    "path": "demo_test.py",
    "content": "import argparse\nimport cv2\nimport torch\n\nfrom model import SCNN\nfrom utils.prob2lines import getLane\nfrom utils.transforms import *\n\nnet = SCNN(input_size=(800, 288), pretrained=False)\nmean=(0.3598, 0.3653, 0.3662) # CULane mean, std\nstd=(0.2573, 0.2663, 0.2756)\ntransform_img = Resize((800, 288))\ntransform_to_net = Compose(ToTensor(), Normalize(mean=mean, std=std))\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--img_path\", '-i', type=str, default=\"demo/demo.jpg\", help=\"Path to demo img\")\n    parser.add_argument(\"--weight_path\", '-w', type=str, help=\"Path to model weights\")\n    parser.add_argument(\"--visualize\", '-v', action=\"store_true\", default=False, help=\"Visualize the result\")\n    args = parser.parse_args()\n    return args\n\n\ndef main():\n    args = parse_args()\n    img_path = args.img_path\n    weight_path = args.weight_path\n\n    img = cv2.imread(img_path)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    img = transform_img({'img': img})['img']\n    x = transform_to_net({'img': img})['img']\n    x.unsqueeze_(0)\n\n    save_dict = torch.load(weight_path, map_location='cpu')\n    net.load_state_dict(save_dict['net'])\n    net.eval()\n\n    seg_pred, exist_pred = net(x)[:2]\n    seg_pred = seg_pred.detach().cpu().numpy()\n    exist_pred = exist_pred.detach().cpu().numpy()\n    seg_pred = seg_pred[0]\n    exist = [1 if exist_pred[0, i] > 0.5 else 0 for i in range(4)]\n\n    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)\n    lane_img = np.zeros_like(img)\n    color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8')\n    coord_mask = np.argmax(seg_pred, axis=0)\n    for i in range(0, 4):\n        if exist_pred[0, i] > 0.5:\n            lane_img[coord_mask == (i + 1)] = color[i]\n    img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=img, beta=1., gamma=0.)\n    cv2.imwrite(\"demo/demo_result.jpg\", img)\n\n    for x in 
getLane.prob2lines_CULane(seg_pred, exist):\n        print(x)\n\n    if args.visualize:\n        print([1 if exist_pred[0, i] > 0.5 else 0 for i in range(4)])\n        cv2.imshow(\"\", img)\n        cv2.waitKey(0)\n        cv2.destroyAllWindows()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "experiments/exp0/cfg.json",
    "content": "{\n  \"device\": \"cuda:0\",\n  \"MAX_EPOCHES\": 60,\n\n  \"dataset\": {\n    \"dataset_name\": \"Tusimple\",\n    \"batch_size\": 32,\n    \"resize_shape\": [512, 288]\n  },\n\n  \"optim\": {\n    \"lr\": 15e-2,\n    \"momentum\": 0.9,\n    \"weight_decay\": 1e-4,\n    \"nesterov\": true\n  },\n\n  \"lr_scheduler\": {\n    \"warmup\": 20,\n    \"max_iter\": 1500,\n    \"min_lrs\": 1e-10\n  },\n\n  \"model\": {\n    \"scale_exist\": 0.07\n  }\n}"
  },
  {
    "path": "experiments/exp10/cfg.json",
    "content": "{\n  \"device\": \"cuda:0\",\n  \"MAX_EPOCHES\": 30,\n\n  \"dataset\": {\n    \"dataset_name\": \"CULane\",\n    \"batch_size\": 128,\n    \"resize_shape\": [800, 288]\n  },\n\n  \"optim\": {\n    \"lr\": 16e-2,\n    \"momentum\": 0.9,\n    \"weight_decay\": 1e-3,\n    \"nesterov\": true\n  },\n\n  \"lr_scheduler\": {\n    \"warmup\": 50,\n    \"max_iter\": 8000\n  }\n}"
  },
  {
    "path": "experiments/vgg_SCNN_DULR_w9/cfg.json",
    "content": "{\n  \"device\": \"cuda:0\",\n\n  \"dataset\": {\n    \"dataset_name\": \"CULane\",\n    \"resize_shape\": [800, 288]\n  }\n\n\n}"
  },
  {
    "path": "experiments/vgg_SCNN_DULR_w9/t7_to_pt.py",
    "content": "import sys\nimport os\nabs_file_path = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(os.path.join(abs_file_path, \"..\", \"..\")) # add path\n\nimport torch\nimport torch.nn as nn\nimport collections\nfrom torch.utils.serialization import load_lua\nfrom model import SCNN\n\nmodel1 = load_lua('experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.t7', unknown_classes=True)\nmodel2 = collections.OrderedDict()\n\nmodel2['backbone.0.weight'] = model1.modules[0].weight\nmodel2['backbone.1.weight'] = model1.modules[1].weight\nmodel2['backbone.1.bias'] = model1.modules[1].bias\nmodel2['backbone.1.running_mean'] = model1.modules[1].running_mean\nmodel2['backbone.1.running_var'] = model1.modules[1].running_var\nmodel2['backbone.3.weight'] = model1.modules[3].weight\nmodel2['backbone.4.weight'] = model1.modules[4].weight\nmodel2['backbone.4.bias'] = model1.modules[4].bias\nmodel2['backbone.4.running_mean'] = model1.modules[4].running_mean\nmodel2['backbone.4.running_var'] = model1.modules[4].running_var\n\nmodel2['backbone.7.weight'] = model1.modules[7].weight\nmodel2['backbone.8.weight'] = model1.modules[8].weight\nmodel2['backbone.8.bias'] = model1.modules[8].bias\nmodel2['backbone.8.running_mean'] = model1.modules[8].running_mean\nmodel2['backbone.8.running_var'] = model1.modules[8].running_var\nmodel2['backbone.10.weight'] = model1.modules[10].weight\nmodel2['backbone.11.weight'] = model1.modules[11].weight\nmodel2['backbone.11.bias'] = model1.modules[11].bias\nmodel2['backbone.11.running_mean'] = model1.modules[11].running_mean\nmodel2['backbone.11.running_var'] = model1.modules[11].running_var\n\nmodel2['backbone.14.weight'] = model1.modules[14].weight\nmodel2['backbone.15.weight'] = model1.modules[15].weight\nmodel2['backbone.15.bias'] = model1.modules[15].bias\nmodel2['backbone.15.running_mean'] = model1.modules[15].running_mean\nmodel2['backbone.15.running_var'] = model1.modules[15].running_var\nmodel2['backbone.17.weight'] = 
model1.modules[17].weight\nmodel2['backbone.18.weight'] = model1.modules[18].weight\nmodel2['backbone.18.bias'] = model1.modules[18].bias\nmodel2['backbone.18.running_mean'] = model1.modules[18].running_mean\nmodel2['backbone.18.running_var'] = model1.modules[18].running_var\nmodel2['backbone.20.weight'] = model1.modules[20].weight\nmodel2['backbone.21.weight'] = model1.modules[21].weight\nmodel2['backbone.21.bias'] = model1.modules[21].bias\nmodel2['backbone.21.running_mean'] = model1.modules[21].running_mean\nmodel2['backbone.21.running_var'] = model1.modules[21].running_var\n\nmodel2['backbone.24.weight'] = model1.modules[24].weight\nmodel2['backbone.25.weight'] = model1.modules[25].weight\nmodel2['backbone.25.bias'] = model1.modules[25].bias\nmodel2['backbone.25.running_mean'] = model1.modules[25].running_mean\nmodel2['backbone.25.running_var'] = model1.modules[25].running_var\nmodel2['backbone.27.weight'] = model1.modules[27].weight\nmodel2['backbone.28.weight'] = model1.modules[28].weight\nmodel2['backbone.28.bias'] = model1.modules[28].bias\nmodel2['backbone.28.running_mean'] = model1.modules[28].running_mean\nmodel2['backbone.28.running_var'] = model1.modules[28].running_var\nmodel2['backbone.30.weight'] = model1.modules[30].weight\nmodel2['backbone.31.weight'] = model1.modules[31].weight\nmodel2['backbone.31.bias'] = model1.modules[31].bias\nmodel2['backbone.31.running_mean'] = model1.modules[31].running_mean\nmodel2['backbone.31.running_var'] = model1.modules[31].running_var\n\nmodel2['backbone.34.weight'] = model1.modules[33].weight\nmodel2['backbone.35.weight'] = model1.modules[34].weight\nmodel2['backbone.35.bias'] = model1.modules[34].bias\nmodel2['backbone.35.running_mean'] = model1.modules[34].running_mean\nmodel2['backbone.35.running_var'] = model1.modules[34].running_var\nmodel2['backbone.37.weight'] = model1.modules[36].weight\nmodel2['backbone.38.weight'] = model1.modules[37].weight\nmodel2['backbone.38.bias'] = 
model1.modules[37].bias\nmodel2['backbone.38.running_mean'] = model1.modules[37].running_mean\nmodel2['backbone.38.running_var'] = model1.modules[37].running_var\nmodel2['backbone.40.weight'] = model1.modules[39].weight\nmodel2['backbone.41.weight'] = model1.modules[40].weight\nmodel2['backbone.41.bias'] = model1.modules[40].bias\nmodel2['backbone.41.running_mean'] = model1.modules[40].running_mean\nmodel2['backbone.41.running_var'] = model1.modules[40].running_var\n\nmodel2['layer1.0.weight'] = model1.modules[42].modules[0].weight\nmodel2['layer1.1.weight'] = model1.modules[42].modules[1].weight\nmodel2['layer1.1.bias'] = model1.modules[42].modules[1].bias\nmodel2['layer1.1.running_mean'] = model1.modules[42].modules[1].running_mean\nmodel2['layer1.1.running_var'] = model1.modules[42].modules[1].running_var\nmodel2['layer1.3.weight'] = model1.modules[42].modules[3].weight\nmodel2['layer1.4.weight'] = model1.modules[42].modules[4].weight\nmodel2['layer1.4.bias'] = model1.modules[42].modules[4].bias\nmodel2['layer1.4.running_mean'] = model1.modules[42].modules[4].running_mean\nmodel2['layer1.4.running_var'] = model1.modules[42].modules[4].running_var\n\nmodel2['message_passing.up_down.weight'] = model1.modules[42].modules[6].modules[0].modules[0].modules[2].modules[0].modules[1].modules[1].modules[0].weight\nmodel2['message_passing.down_up.weight'] = model1.modules[42].modules[6].modules[0].modules[0].modules[140].modules[1].modules[2].modules[0].modules[0].weight\nmodel2['message_passing.left_right.weight'] = model1.modules[42].modules[6].modules[1].modules[0].modules[2].modules[0].modules[1].modules[1].modules[0].weight\nmodel2['message_passing.right_left.weight'] = model1.modules[42].modules[6].modules[1].modules[0].modules[396].modules[1].modules[2].modules[0].modules[0].weight\n\nmodel2['layer2.1.weight'] = model1.modules[42].modules[8].weight\nmodel2['layer2.1.bias'] = model1.modules[42].modules[8].bias\nmodel2['fc.0.weight'] = 
model1.modules[43].modules[1].modules[3].weight\nmodel2['fc.0.bias'] = model1.modules[43].modules[1].modules[3].bias\nmodel2['fc.2.weight'] = model1.modules[43].modules[1].modules[5].weight\nmodel2['fc.2.bias'] = model1.modules[43].modules[1].modules[5].bias\n\nsave_name = os.path.join('experiments', 'vgg_SCNN_DULR_w9', 'vgg_SCNN_DULR_w9.pth')\ntorch.save(model2, save_name)\n\n# load and save again\nnet = SCNN(input_size=(800, 288), pretrained=False)\nd = torch.load(save_name)\nnet.load_state_dict(d, strict=False)\nfor m in net.backbone.modules():\n    if isinstance(m, nn.Conv2d):\n        if m.bias is not None:\n            m.bias.data.zero_()\n\n\nsave_dict = {\n    \"epoch\": 0,\n    \"net\": net.state_dict(),\n     \"optim\": None,\n    \"lr_scheduler\": None\n}\n\nif not os.path.exists(os.path.join('experiments', 'vgg_SCNN_DULR_w9')):\n    os.makedirs(os.path.join('experiments', 'vgg_SCNN_DULR_w9'), exist_ok=True)\ntorch.save(save_dict, save_name)"
  },
  {
    "path": "model.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\n\n\nclass SCNN(nn.Module):\n    def __init__(\n            self,\n            input_size,\n            ms_ks=9,\n            pretrained=True\n    ):\n        \"\"\"\n        Argument\n            ms_ks: kernel size in message passing conv\n        \"\"\"\n        super(SCNN, self).__init__()\n        self.pretrained = pretrained\n        self.net_init(input_size, ms_ks)\n        if not pretrained:\n            self.weight_init()\n\n        self.scale_background = 0.4\n        self.scale_seg = 1.0\n        self.scale_exist = 0.1\n\n        self.ce_loss = nn.CrossEntropyLoss(weight=torch.tensor([self.scale_background, 1, 1, 1, 1]))\n        self.bce_loss = nn.BCELoss()\n\n    def forward(self, img, seg_gt=None, exist_gt=None):\n        x = self.backbone(img)\n        x = self.layer1(x)\n        x = self.message_passing_forward(x)\n        x = self.layer2(x)\n\n        seg_pred = F.interpolate(x, scale_factor=8, mode='bilinear', align_corners=True)\n        x = self.layer3(x)\n        x = x.view(-1, self.fc_input_feature)\n        exist_pred = self.fc(x)\n\n        if seg_gt is not None and exist_gt is not None:\n            loss_seg = self.ce_loss(seg_pred, seg_gt)\n            loss_exist = self.bce_loss(exist_pred, exist_gt)\n            loss = loss_seg * self.scale_seg + loss_exist * self.scale_exist\n        else:\n            loss_seg = torch.tensor(0, dtype=img.dtype, device=img.device)\n            loss_exist = torch.tensor(0, dtype=img.dtype, device=img.device)\n            loss = torch.tensor(0, dtype=img.dtype, device=img.device)\n\n        return seg_pred, exist_pred, loss_seg, loss_exist, loss\n\n    def message_passing_forward(self, x):\n        Vertical = [True, True, False, False]\n        Reverse = [False, True, False, True]\n        for ms_conv, v, r in zip(self.message_passing, Vertical, Reverse):\n            x = 
self.message_passing_once(x, ms_conv, v, r)\n        return x\n\n    def message_passing_once(self, x, conv, vertical=True, reverse=False):\n        \"\"\"\n        Argument:\n        ----------\n        x: input tensor\n        vertical: vertical message passing or horizontal\n        reverse: False for up-down or left-right, True for down-up or right-left\n        \"\"\"\n        nB, C, H, W = x.shape\n        if vertical:\n            slices = [x[:, :, i:(i + 1), :] for i in range(H)]\n            dim = 2\n        else:\n            slices = [x[:, :, :, i:(i + 1)] for i in range(W)]\n            dim = 3\n        if reverse:\n            slices = slices[::-1]\n\n        out = [slices[0]]\n        for i in range(1, len(slices)):\n            out.append(slices[i] + F.relu(conv(out[i - 1])))\n        if reverse:\n            out = out[::-1]\n        return torch.cat(out, dim=dim)\n\n    def net_init(self, input_size, ms_ks):\n        input_w, input_h = input_size\n        self.fc_input_feature = 5 * int(input_w/16) * int(input_h/16)\n        self.backbone = models.vgg16_bn(pretrained=self.pretrained).features\n\n        # ----------------- process backbone -----------------\n        for i in [34, 37, 40]:\n            conv = self.backbone._modules[str(i)]\n            dilated_conv = nn.Conv2d(\n                conv.in_channels, conv.out_channels, conv.kernel_size, stride=conv.stride,\n                padding=tuple(p * 2 for p in conv.padding), dilation=2, bias=(conv.bias is not None)\n            )\n            dilated_conv.load_state_dict(conv.state_dict())\n            self.backbone._modules[str(i)] = dilated_conv\n        self.backbone._modules.pop('33')\n        self.backbone._modules.pop('43')\n\n        # ----------------- SCNN part -----------------\n        self.layer1 = nn.Sequential(\n            nn.Conv2d(512, 1024, 3, padding=4, dilation=4, bias=False),\n            nn.BatchNorm2d(1024),\n            nn.ReLU(),\n            nn.Conv2d(1024, 128, 1, 
bias=False),\n            nn.BatchNorm2d(128),\n            nn.ReLU()  # (nB, 128, 36, 100)\n        )\n\n        # ----------------- add message passing -----------------\n        self.message_passing = nn.ModuleList()\n        self.message_passing.add_module('up_down', nn.Conv2d(128, 128, (1, ms_ks), padding=(0, ms_ks // 2), bias=False))\n        self.message_passing.add_module('down_up', nn.Conv2d(128, 128, (1, ms_ks), padding=(0, ms_ks // 2), bias=False))\n        self.message_passing.add_module('left_right',\n                                        nn.Conv2d(128, 128, (ms_ks, 1), padding=(ms_ks // 2, 0), bias=False))\n        self.message_passing.add_module('right_left',\n                                        nn.Conv2d(128, 128, (ms_ks, 1), padding=(ms_ks // 2, 0), bias=False))\n        # (nB, 128, 36, 100)\n\n        # ----------------- SCNN part -----------------\n        self.layer2 = nn.Sequential(\n            nn.Dropout2d(0.1),\n            nn.Conv2d(128, 5, 1)  # get (nB, 5, 36, 100)\n        )\n\n        self.layer3 = nn.Sequential(\n            nn.Softmax(dim=1),  # (nB, 5, 36, 100)\n            nn.AvgPool2d(2, 2),  # (nB, 5, 18, 50)\n        )\n        self.fc = nn.Sequential(\n            nn.Linear(self.fc_input_feature, 128),\n            nn.ReLU(),\n            nn.Linear(128, 4),\n            nn.Sigmoid()\n        )\n\n    def weight_init(self):\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                m.reset_parameters()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data[:] = 1.\n                m.bias.data.zero_()\n"
  },
  {
    "path": "requirements.txt",
    "content": "numpy\nopencv-python\ntorch>=0.4.1\ntorchvision"
  },
  {
    "path": "test_CULane.py",
    "content": "import argparse\nimport json\nimport os\n\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nimport dataset\nfrom config import *\nfrom model import SCNN\nfrom utils.prob2lines import getLane\nfrom utils.transforms import *\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--exp_dir\", type=str, default=\"./experiments/exp10\")\n    args = parser.parse_args()\n    return args\n\n\n# ------------ config ------------\nargs = parse_args()\nexp_dir = args.exp_dir\nexp_name = exp_dir.split('/')[-1]\n\nwith open(os.path.join(exp_dir, \"cfg.json\")) as f:\n    exp_cfg = json.load(f)\nresize_shape = tuple(exp_cfg['dataset']['resize_shape'])\ndevice = torch.device('cuda')\n\n\ndef split_path(path):\n    \"\"\"split path tree into list\"\"\"\n    folders = []\n    while True:\n        path, folder = os.path.split(path)\n        if folder != \"\":\n            folders.insert(0, folder)\n        else:\n            if path != \"\":\n                folders.insert(0, path)\n            break\n    return folders\n\n\n# ------------ data and model ------------\n# # CULane mean, std\n# mean=(0.3598, 0.3653, 0.3662)\n# std=(0.2573, 0.2663, 0.2756)\n# Imagenet mean, std\nmean = (0.485, 0.456, 0.406)\nstd = (0.229, 0.224, 0.225)\ndataset_name = exp_cfg['dataset'].pop('dataset_name')\nDataset_Type = getattr(dataset, dataset_name)\ntransform = Compose(Resize(resize_shape), ToTensor(),\n                    Normalize(mean=mean, std=std))\ntest_dataset = Dataset_Type(Dataset_Path[dataset_name], \"test\", transform)\ntest_loader = DataLoader(test_dataset, batch_size=64, collate_fn=test_dataset.collate, num_workers=4)\n\nnet = SCNN(resize_shape, pretrained=False)\nsave_name = os.path.join(exp_dir, exp_dir.split('/')[-1] + '_best.pth')\nsave_dict = torch.load(save_name, map_location='cpu')\nprint(\"\\nloading\", save_name, \"...... 
From Epoch: \", save_dict['epoch'])\nnet.load_state_dict(save_dict['net'])\nnet = torch.nn.DataParallel(net.to(device))\nnet.eval()\n\n# ------------ test ------------\nout_path = os.path.join(exp_dir, \"coord_output\")\nevaluation_path = os.path.join(exp_dir, \"evaluate\")\nif not os.path.exists(out_path):\n    os.mkdir(out_path)\nif not os.path.exists(evaluation_path):\n    os.mkdir(evaluation_path)\n\nprogressbar = tqdm(range(len(test_loader)))\nwith torch.no_grad():\n    for batch_idx, sample in enumerate(test_loader):\n        img = sample['img'].to(device)\n        img_name = sample['img_name']\n\n        seg_pred, exist_pred = net(img)[:2]\n        seg_pred = F.softmax(seg_pred, dim=1)\n        seg_pred = seg_pred.detach().cpu().numpy()\n        exist_pred = exist_pred.detach().cpu().numpy()\n\n        for b in range(len(seg_pred)):\n            seg = seg_pred[b]\n            exist = [1 if exist_pred[b, i] > 0.5 else 0 for i in range(4)]\n            lane_coords = getLane.prob2lines_CULane(seg, exist, resize_shape=(590, 1640), y_px_gap=20, pts=18)\n\n            path_tree = split_path(img_name[b])\n            save_dir, save_name = path_tree[-3:-1], path_tree[-1]\n            save_dir = os.path.join(out_path, *save_dir)\n            save_name = save_name[:-3] + \"lines.txt\"\n            save_name = os.path.join(save_dir, save_name)\n            if not os.path.exists(save_dir):\n                os.makedirs(save_dir)\n\n            with open(save_name, \"w\") as f:\n                for l in lane_coords:\n                    for (x, y) in l:\n                        print(\"{} {}\".format(x, y), end=\" \", file=f)\n                    print(file=f)\n\n        progressbar.update(1)\nprogressbar.close()\n\n# ---- evaluate ----\nos.system(\"sh utils/lane_evaluation/CULane/Run.sh \" + exp_name)\n"
  },
  {
    "path": "test_tusimple.py",
    "content": "import argparse\nimport json\nimport os\n\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nimport dataset\nfrom config import *\nfrom model import SCNN\nfrom utils.prob2lines import getLane\nfrom utils.transforms import *\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--exp_dir\", type=str, default=\"./experiments/exp0\")\n    args = parser.parse_args()\n    return args\n\n\n# ------------ config ------------\nargs = parse_args()\nexp_dir = args.exp_dir\nexp_name = exp_dir.split('/')[-1]\n\nwith open(os.path.join(exp_dir, \"cfg.json\")) as f:\n    exp_cfg = json.load(f)\nresize_shape = tuple(exp_cfg['dataset']['resize_shape'])\ndevice = torch.device('cuda')\n\n\ndef split_path(path):\n    \"\"\"split path tree into list\"\"\"\n    folders = []\n    while True:\n        path, folder = os.path.split(path)\n        if folder != \"\":\n            folders.insert(0, folder)\n        else:\n            if path != \"\":\n                folders.insert(0, path)\n            break\n    return folders\n\n\n# ------------ data and model ------------\n# # CULane mean, std\n# mean=(0.3598, 0.3653, 0.3662)\n# std=(0.2573, 0.2663, 0.2756)\n# Imagenet mean, std\nmean = (0.485, 0.456, 0.406)\nstd = (0.229, 0.224, 0.225)\ntransform = Compose(Resize(resize_shape), ToTensor(),\n                    Normalize(mean=mean, std=std))\ndataset_name = exp_cfg['dataset'].pop('dataset_name')\nDataset_Type = getattr(dataset, dataset_name)\ntest_dataset = Dataset_Type(Dataset_Path['Tusimple'], \"test\", transform)\ntest_loader = DataLoader(test_dataset, batch_size=32, collate_fn=test_dataset.collate, num_workers=4)\n\nnet = SCNN(input_size=resize_shape, pretrained=False)\nsave_name = os.path.join(exp_dir, exp_dir.split('/')[-1] + '_best.pth')\nsave_dict = torch.load(save_name, map_location='cpu')\nprint(\"\\nloading\", save_name, \"...... 
From Epoch: \", save_dict['epoch'])\nnet.load_state_dict(save_dict['net'])\nnet = torch.nn.DataParallel(net.to(device))\nnet.eval()\n\n# ------------ test ------------\nout_path = os.path.join(exp_dir, \"coord_output\")\nevaluation_path = os.path.join(exp_dir, \"evaluate\")\nif not os.path.exists(out_path):\n    os.mkdir(out_path)\nif not os.path.exists(evaluation_path):\n    os.mkdir(evaluation_path)\ndump_to_json = []\n\nprogressbar = tqdm(range(len(test_loader)))\nwith torch.no_grad():\n    for batch_idx, sample in enumerate(test_loader):\n        img = sample['img'].to(device)\n        img_name = sample['img_name']\n\n        seg_pred, exist_pred = net(img)[:2]\n        seg_pred = F.softmax(seg_pred, dim=1)\n        seg_pred = seg_pred.detach().cpu().numpy()\n        exist_pred = exist_pred.detach().cpu().numpy()\n\n        for b in range(len(seg_pred)):\n            seg = seg_pred[b]\n            exist = [1 if exist_pred[b, i] > 0.5 else 0 for i in range(4)]\n            lane_coords = getLane.prob2lines_tusimple(seg, exist, resize_shape=(720, 1280), y_px_gap=10, pts=56)\n            for i in range(len(lane_coords)):\n                lane_coords[i] = sorted(lane_coords[i], key=lambda pair: pair[1])\n\n            path_tree = split_path(img_name[b])\n            save_dir, save_name = path_tree[-3:-1], path_tree[-1]\n            save_dir = os.path.join(out_path, *save_dir)\n            save_name = save_name[:-3] + \"lines.txt\"\n            save_name = os.path.join(save_dir, save_name)\n            if not os.path.exists(save_dir):\n                os.makedirs(save_dir, exist_ok=True)\n\n            with open(save_name, \"w\") as f:\n                for l in lane_coords:\n                    for (x, y) in l:\n                        print(\"{} {}\".format(x, y), end=\" \", file=f)\n                    print(file=f)\n\n            json_dict = {}\n            json_dict['lanes'] = []\n            json_dict['h_sample'] = []\n            json_dict['raw_file'] = 
os.path.join(*path_tree[-4:])\n            json_dict['run_time'] = 0\n            for l in lane_coords:\n                if len(l) == 0:\n                    continue\n                json_dict['lanes'].append([])\n                for (x, y) in l:\n                    json_dict['lanes'][-1].append(int(x))\n            for (x, y) in lane_coords[0]:\n                json_dict['h_sample'].append(y)\n            dump_to_json.append(json.dumps(json_dict))\n\n        progressbar.update(1)\nprogressbar.close()\n\nwith open(os.path.join(out_path, \"predict_test.json\"), \"w\") as f:\n    for line in dump_to_json:\n        print(line, end=\"\\n\", file=f)\n\n# ---- evaluate ----\nfrom utils.lane_evaluation.tusimple.lane import LaneEval\n\neval_result = LaneEval.bench_one_submit(os.path.join(out_path, \"predict_test.json\"),\n                                        os.path.join(Dataset_Path['Tusimple'], 'test_label.json'))\nprint(eval_result)\nwith open(os.path.join(evaluation_path, \"evaluation_result.txt\"), \"w\") as f:\n    print(eval_result, file=f)\n"
  },
  {
    "path": "train.py",
    "content": "import argparse\nimport json\nimport os\nimport shutil\nimport time\n\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom config import *\nimport dataset\nfrom model import SCNN\nfrom utils.tensorboard import TensorBoard\nfrom utils.transforms import *\nfrom utils.lr_scheduler import PolyLR\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--exp_dir\", type=str, default=\"./experiments/exp0\")\n    parser.add_argument(\"--resume\", \"-r\", action=\"store_true\")\n    args = parser.parse_args()\n    return args\nargs = parse_args()\n\n# ------------ config ------------\nexp_dir = args.exp_dir\nwhile exp_dir[-1]=='/':\n    exp_dir = exp_dir[:-1]\nexp_name = exp_dir.split('/')[-1]\n\nwith open(os.path.join(exp_dir, \"cfg.json\")) as f:\n    exp_cfg = json.load(f)\nresize_shape = tuple(exp_cfg['dataset']['resize_shape'])\n\ndevice = torch.device(exp_cfg['device'])\ntensorboard = TensorBoard(exp_dir)\n\n# ------------ train data ------------\n# # CULane mean, std\n# mean=(0.3598, 0.3653, 0.3662)\n# std=(0.2573, 0.2663, 0.2756)\n# Imagenet mean, std\nmean=(0.485, 0.456, 0.406)\nstd=(0.229, 0.224, 0.225)\ntransform_train = Compose(Resize(resize_shape), Rotation(2), ToTensor(),\n                          Normalize(mean=mean, std=std))\ndataset_name = exp_cfg['dataset'].pop('dataset_name')\nDataset_Type = getattr(dataset, dataset_name)\ntrain_dataset = Dataset_Type(Dataset_Path[dataset_name], \"train\", transform_train)\ntrain_loader = DataLoader(train_dataset, batch_size=exp_cfg['dataset']['batch_size'], shuffle=True, collate_fn=train_dataset.collate, num_workers=8)\n\n# ------------ val data ------------\ntransform_val_img = Resize(resize_shape)\ntransform_val_x = Compose(ToTensor(), Normalize(mean=mean, std=std))\ntransform_val = Compose(transform_val_img, transform_val_x)\nval_dataset = Dataset_Type(Dataset_Path[dataset_name], \"val\", transform_val)\nval_loader = 
DataLoader(val_dataset, batch_size=8, collate_fn=val_dataset.collate, num_workers=4)\n\n# ------------ preparation ------------\nnet = SCNN(resize_shape, pretrained=True)\nnet = net.to(device)\nnet = torch.nn.DataParallel(net)\n\noptimizer = optim.SGD(net.parameters(), **exp_cfg['optim'])\nlr_scheduler = PolyLR(optimizer, 0.9, **exp_cfg['lr_scheduler'])\nbest_val_loss = 1e6\n\n\ndef train(epoch):\n    print(\"Train Epoch: {}\".format(epoch))\n    net.train()\n    train_loss = 0\n    train_loss_seg = 0\n    train_loss_exist = 0\n    progressbar = tqdm(range(len(train_loader)))\n\n    for batch_idx, sample in enumerate(train_loader):\n        img = sample['img'].to(device)\n        segLabel = sample['segLabel'].to(device)\n        exist = sample['exist'].to(device)\n\n        optimizer.zero_grad()\n        seg_pred, exist_pred, loss_seg, loss_exist, loss = net(img, segLabel, exist)\n        if isinstance(net, torch.nn.DataParallel):\n            loss_seg = loss_seg.sum()\n            loss_exist = loss_exist.sum()\n            loss = loss.sum()\n        loss.backward()\n        optimizer.step()\n        lr_scheduler.step()\n\n        iter_idx = epoch * len(train_loader) + batch_idx\n        train_loss = loss.item()\n        train_loss_seg = loss_seg.item()\n        train_loss_exist = loss_exist.item()\n        progressbar.set_description(\"batch loss: {:.3f}\".format(loss.item()))\n        progressbar.update(1)\n\n        lr = optimizer.param_groups[0]['lr']\n        tensorboard.scalar_summary(exp_name + \"/train_loss\", train_loss, iter_idx)\n        tensorboard.scalar_summary(exp_name + \"/train_loss_seg\", train_loss_seg, iter_idx)\n        tensorboard.scalar_summary(exp_name + \"/train_loss_exist\", train_loss_exist, iter_idx)\n        tensorboard.scalar_summary(exp_name + \"/learning_rate\", lr, iter_idx)\n\n    progressbar.close()\n    tensorboard.writer.flush()\n\n    if epoch % 1 == 0:\n        save_dict = {\n            \"epoch\": epoch,\n            \"net\": 
net.module.state_dict() if isinstance(net, torch.nn.DataParallel) else net.state_dict(),\n            \"optim\": optimizer.state_dict(),\n            \"lr_scheduler\": lr_scheduler.state_dict(),\n            \"best_val_loss\": best_val_loss\n        }\n        save_name = os.path.join(exp_dir, exp_name + '.pth')\n        torch.save(save_dict, save_name)\n        print(\"model is saved: {}\".format(save_name))\n\n    print(\"------------------------\\n\")\n\n\ndef val(epoch):\n    global best_val_loss\n\n    print(\"Val Epoch: {}\".format(epoch))\n\n    net.eval()\n    val_loss = 0\n    val_loss_seg = 0\n    val_loss_exist = 0\n    progressbar = tqdm(range(len(val_loader)))\n\n    with torch.no_grad():\n        for batch_idx, sample in enumerate(val_loader):\n            img = sample['img'].to(device)\n            segLabel = sample['segLabel'].to(device)\n            exist = sample['exist'].to(device)\n\n            seg_pred, exist_pred, loss_seg, loss_exist, loss = net(img, segLabel, exist)\n            if isinstance(net, torch.nn.DataParallel):\n                loss_seg = loss_seg.sum()\n                loss_exist = loss_exist.sum()\n                loss = loss.sum()\n\n            # visualize validation every 5 frame, 50 frames in all\n            gap_num = 5\n            if batch_idx%gap_num == 0 and batch_idx < 50 * gap_num:\n                origin_imgs = []\n                seg_pred = seg_pred.detach().cpu().numpy()\n                exist_pred = exist_pred.detach().cpu().numpy()\n\n                for b in range(len(img)):\n                    img_name = sample['img_name'][b]\n                    img = cv2.imread(img_name)\n                    img = transform_val_img({'img': img})['img']\n\n                    lane_img = np.zeros_like(img)\n                    color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8')\n\n                    coord_mask = np.argmax(seg_pred[b], axis=0)\n                    for i in range(0, 4):\n   
                     if exist_pred[b, i] > 0.5:\n                            lane_img[coord_mask==(i+1)] = color[i]\n                    img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=img, beta=1., gamma=0.)\n                    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n                    lane_img = cv2.cvtColor(lane_img, cv2.COLOR_BGR2RGB)\n                    cv2.putText(lane_img, \"{}\".format([1 if exist_pred[b, i]>0.5 else 0 for i in range(4)]), (20, 20), cv2.FONT_HERSHEY_SIMPLEX, 1.1, (255, 255, 255), 2)\n                    origin_imgs.append(img)\n                    origin_imgs.append(lane_img)\n                tensorboard.image_summary(\"img_{}\".format(batch_idx), origin_imgs, epoch)\n\n            val_loss += loss.item()\n            val_loss_seg += loss_seg.item()\n            val_loss_exist += loss_exist.item()\n\n            progressbar.set_description(\"batch loss: {:.3f}\".format(loss.item()))\n            progressbar.update(1)\n\n    progressbar.close()\n    iter_idx = (epoch + 1) * len(train_loader)  # keep align with training process iter_idx\n    tensorboard.scalar_summary(\"val_loss\", val_loss, iter_idx)\n    tensorboard.scalar_summary(\"val_loss_seg\", val_loss_seg, iter_idx)\n    tensorboard.scalar_summary(\"val_loss_exist\", val_loss_exist, iter_idx)\n    tensorboard.writer.flush()\n\n    print(\"------------------------\\n\")\n    if val_loss < best_val_loss:\n        best_val_loss = val_loss\n        save_name = os.path.join(exp_dir, exp_name + '.pth')\n        copy_name = os.path.join(exp_dir, exp_name + '_best.pth')\n        shutil.copyfile(save_name, copy_name)\n\n\ndef main():\n    global best_val_loss\n    if args.resume:\n        save_dict = torch.load(os.path.join(exp_dir, exp_name + '.pth'))\n        if isinstance(net, torch.nn.DataParallel):\n            net.module.load_state_dict(save_dict['net'])\n        else:\n            net.load_state_dict(save_dict['net'])\n        optimizer.load_state_dict(save_dict['optim'])\n  
      lr_scheduler.load_state_dict(save_dict['lr_scheduler'])\n        start_epoch = save_dict['epoch'] + 1\n        best_val_loss = save_dict.get(\"best_val_loss\", 1e6)\n    else:\n        start_epoch = 0\n\n    exp_cfg['MAX_EPOCHES'] = int(np.ceil(exp_cfg['lr_scheduler']['max_iter'] / len(train_loader)))\n    for epoch in range(start_epoch, exp_cfg['MAX_EPOCHES']):\n        train(epoch)\n        if epoch % 1 == 0:\n            print(\"\\nValidation For Experiment: \", exp_dir)\n            print(time.strftime('%H:%M:%S', time.localtime()))\n            val(epoch)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\nproject (evaluate)\n\nSET(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR})\nset(CMAKE_CXX_STANDARD 11)\nset(CMAKE_CXX_FLAGS \"-DCPU_ONLY -fopenmp\")\n\nfind_package(OpenCV REQUIRED)\ninclude_directories(\"${PROJECT_SOURCE_DIR}/include\")\n\nadd_executable(evaluate \n\t${PROJECT_SOURCE_DIR}/src/evaluate.cpp \n\t${PROJECT_SOURCE_DIR}/src/counter.cpp \n\t${PROJECT_SOURCE_DIR}/src/lane_compare.cpp \n\t${PROJECT_SOURCE_DIR}/src/spline.cpp\n)\ntarget_link_libraries(evaluate ${OpenCV_LIBS})\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/Run.sh",
    "content": "root=/home/lion/SCNN_Pytorch/\nexp=$1\ndata_dir=/home/lion/Dataset/CULane/data/CULane/\ndetect_dir=${root}/experiments/${exp}/coord_output/\nbin_dir=${root}/utils/lane_evaluation/CULane\n\nw_lane=30;\niou=0.5;  # Set iou to 0.3 or 0.5\nim_w=1640\nim_h=590\nframe=1\nlist0=${data_dir}list/test_split/test0_normal.txt\nlist1=${data_dir}list/test_split/test1_crowd.txt\nlist2=${data_dir}list/test_split/test2_hlight.txt\nlist3=${data_dir}list/test_split/test3_shadow.txt\nlist4=${data_dir}list/test_split/test4_noline.txt\nlist5=${data_dir}list/test_split/test5_arrow.txt\nlist6=${data_dir}list/test_split/test6_curve.txt\nlist7=${data_dir}list/test_split/test7_cross.txt\nlist8=${data_dir}list/test_split/test8_night.txt\nout0=${detect_dir}../evaluate/out0_normal.txt\nout1=${detect_dir}../evaluate/out1_crowd.txt\nout2=${detect_dir}../evaluate/out2_hlight.txt\nout3=${detect_dir}../evaluate/out3_shadow.txt\nout4=${detect_dir}../evaluate/out4_noline.txt\nout5=${detect_dir}../evaluate/out5_arrow.txt\nout6=${detect_dir}../evaluate/out6_curve.txt\nout7=${detect_dir}../evaluate/out7_cross.txt\nout8=${detect_dir}../evaluate/out8_night.txt\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list0 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out0\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list1 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out1\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list2 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out2\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list3 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out3\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list4 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out4\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list5 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out5\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list6 -w 
$w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out6\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list7 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out7\n${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list8 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out8\ncat ${detect_dir}/../evaluate/out*.txt > ${detect_dir}/../evaluate/${exp}_iou${iou}_split.txt\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/include/counter.hpp",
    "content": "#ifndef COUNTER_HPP\n#define COUNTER_HPP\n\n#include \"lane_compare.hpp\"\n#include \"hungarianGraph.hpp\"\n#include <iostream>\n#include <algorithm>\n#include <vector>\n#include <opencv2/core/core.hpp>\n\nusing namespace std;\nusing namespace cv;\n\n// before coming to use functions of this class, the lanes should resize to im_width and im_height using resize_lane() in lane_compare.hpp\nclass Counter\n{\n\tpublic:\n\t\tCounter(int _im_width, int _im_height, double _iou_threshold=0.4, int _lane_width=10):tp(0),fp(0),fn(0){\n\t\t\tim_width = _im_width;\n\t\t\tim_height = _im_height;\n\t\t\tsim_threshold = _iou_threshold;\n\t\t\tlane_compare = new LaneCompare(_im_width, _im_height,  _lane_width, LaneCompare::IOU);\n\t\t};\n\t\tdouble get_precision(void);\n\t\tdouble get_recall(void);\n\t\tlong getTP(void);\n\t\tlong getFP(void);\n\t\tlong getFN(void);\n\t\t// direct add tp, fp, tn and fn\n\t\t// first match with hungarian\n\t\tvector<int> count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes);\n\t\tvoid makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2);\n\n\tprivate:\n\t\tdouble sim_threshold;\n\t\tint im_width;\n\t\tint im_height;\n\t\tlong tp;\n\t\tlong fp;\n\t\tlong fn;\n\t\tLaneCompare *lane_compare;\n};\n#endif\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/include/hungarianGraph.hpp",
    "content": "﻿#ifndef HUNGARIAN_GRAPH_HPP\n#define HUNGARIAN_GRAPH_HPP\n#include <vector>\nusing namespace std;\n\nstruct pipartiteGraph {\n    vector<vector<double> > mat;\n    vector<bool> leftUsed, rightUsed;\n    vector<double> leftWeight, rightWeight;\n    vector<int>rightMatch, leftMatch;\n    int leftNum, rightNum;\n    bool matchDfs(int u) {\n        leftUsed[u] = true;\n        for (int v = 0; v < rightNum; v++) {\n            if (!rightUsed[v] && fabs(leftWeight[u] + rightWeight[v] - mat[u][v]) < 1e-2) {\n                rightUsed[v] = true;\n                if (rightMatch[v] == -1 || matchDfs(rightMatch[v])) {\n                    rightMatch[v] = u;\n                    leftMatch[u] = v;\n                    return true;\n                }\n            }\n        }\n        return false;\n    }\n    void resize(int leftNum, int rightNum) {\n        this->leftNum = leftNum;\n        this->rightNum = rightNum;\n        leftMatch.resize(leftNum);\n        rightMatch.resize(rightNum);\n        leftUsed.resize(leftNum);\n        rightUsed.resize(rightNum);\n        leftWeight.resize(leftNum);\n        rightWeight.resize(rightNum);\n        mat.resize(leftNum);\n        for (int i = 0; i < leftNum; i++) mat[i].resize(rightNum);\n    }\n    void match() {\n        for (int i = 0; i < leftNum; i++) leftMatch[i] = -1;\n        for (int i = 0; i < rightNum; i++) rightMatch[i] = -1;\n        for (int i = 0; i < rightNum; i++) rightWeight[i] = 0;\n        for (int i = 0; i < leftNum; i++) {\n            leftWeight[i] = -1e5;\n            for (int j = 0; j < rightNum; j++) {\n                if (leftWeight[i] < mat[i][j]) leftWeight[i] = mat[i][j];\n            }\n        }\n\n        for (int u = 0; u < leftNum; u++) {\n            while (1) {\n                for (int i = 0; i < leftNum; i++) leftUsed[i] = false;\n                for (int i = 0; i < rightNum; i++) rightUsed[i] = false;\n                if (matchDfs(u)) break;\n                double d = 1e10;\n  
              for (int i = 0; i < leftNum; i++) {\n                    if (leftUsed[i] ) {\n                        for (int j = 0; j < rightNum; j++) {\n                            if (!rightUsed[j]) d = min(d, leftWeight[i] + rightWeight[j] - mat[i][j]);\n                        }\n                    }\n                }\n                if (d == 1e10) return ;\n                for (int i = 0; i < leftNum; i++) if (leftUsed[i]) leftWeight[i] -= d;\n                for (int i = 0; i < rightNum; i++) if (rightUsed[i]) rightWeight[i] += d;\n            }\n        }\n    }\n};\n\n\n#endif // HUNGARIAN_GRAPH_HPP\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/include/lane_compare.hpp",
    "content": "#ifndef LANE_COMPARE_HPP\n#define LANE_COMPARE_HPP\n\n#include \"spline.hpp\"\n#include <vector>\n#include <iostream>\n#include <opencv2/core/core.hpp>\n\nusing namespace std;\nusing namespace cv;\n\nclass LaneCompare{\n\tpublic:\n\t\tenum CompareMode{\n\t\t\tIOU,\n\t\t\tCaltech\n\t\t};\n\n\t\tLaneCompare(int _im_width, int _im_height, int _lane_width = 10, CompareMode _compare_mode = IOU){\n\t\t\tim_width = _im_width;\n\t\t\tim_height = _im_height;\n\t\t\tcompare_mode = _compare_mode;\n\t\t\tlane_width = _lane_width;\n\t\t}\n\n\t\tdouble get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2);\n\t\tvoid resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height);\n\tprivate:\n\t\tCompareMode compare_mode;\n\t\tint im_width;\n\t\tint im_height;\n\t\tint lane_width;\n\t\tSpline splineSolver;\n};\n\n#endif\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/include/spline.hpp",
    "content": "#ifndef SPLINE_HPP\n#define SPLINE_HPP\n#include <vector>\n#include <cstdio>\n#include <math.h>\n#include <opencv2/core/core.hpp>\n\nusing namespace cv;\nusing namespace std;\n\nstruct Func {\n    double a_x;\n    double b_x;\n    double c_x;\n    double d_x;\n    double a_y;\n    double b_y;\n    double c_y;\n    double d_y;\n    double h;\n};\nclass Spline {\npublic:\n\tvector<Point2f> splineInterpTimes(const vector<Point2f> &tmp_line, int times);\n    vector<Point2f> splineInterpStep(vector<Point2f> tmp_line, double step);\n\tvector<Func> cal_fun(const vector<Point2f> &point_v);\n};\n#endif\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src/counter.cpp",
    "content": "/*************************************************************************\n\t> File Name: counter.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Thu Jul 14 20:23:08 2016\n ************************************************************************/\n\n#include \"counter.hpp\"\n#include <thread>\n\ndouble Counter::get_precision(void)\n{\n\tcerr<<\"tp: \"<<tp<<\" fp: \"<<fp<<\" fn: \"<<fn<<endl;\n\tif(tp+fp == 0)\n\t{\n\t\tcerr<<\"no positive detection\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fp);\n}\n\ndouble Counter::get_recall(void)\n{\n\tif(tp+fn == 0)\n\t{\n\t\tcerr<<\"no ground truth positive\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fn);\n}\n\nlong Counter::getTP(void)\n{\n\treturn tp;\n}\n\nlong Counter::getFP(void)\n{\n\treturn fp;\n}\n\nlong Counter::getFN(void)\n{\n\treturn fn;\n}\n\nvector<int> Counter::count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes)\n{\n\tvector<int> anno_match(anno_lanes.size(), -1);\n\tvector<int> detect_match;\n\tif(anno_lanes.empty())\n\t{\n\t\tfp += detect_lanes.size();\n\t\treturn anno_match;\n\t}\n\n\tif(detect_lanes.empty())\n\t{\n\t\tfn += anno_lanes.size();\n\t\treturn anno_match;\n\t}\n\t// hungarian match first\n\t\n\t// first calc similarity matrix\n\tvector<vector<double> > similarity(anno_lanes.size(), vector<double>(detect_lanes.size(), 0));\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tconst vector<Point2f> &curr_anno_lane = anno_lanes[i];\n\t\tfor(int j=0; j<detect_lanes.size(); j++)\n\t\t{\n\t\t\tconst vector<Point2f> &curr_detect_lane = detect_lanes[j];\n\t\t\tsimilarity[i][j] = lane_compare->get_lane_similarity(ref(curr_anno_lane), ref(curr_detect_lane));\n\t\t}\n\t}\n\n\n\n\tmakeMatch(ref(similarity), ref(anno_match), ref(detect_match));\n\n\t\n\tint curr_tp = 0;\n\t// count and add\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tif(anno_match[i]>=0 && 
similarity[i][anno_match[i]] > sim_threshold)\n\t\t{\n\t\t\tcurr_tp++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tanno_match[i] = -1;\n\t\t}\n\t}\n\tint curr_fn = anno_lanes.size() - curr_tp;\n\tint curr_fp = detect_lanes.size() - curr_tp;\n\ttp += curr_tp;\n\tfn += curr_fn;\n\tfp += curr_fp;\n\treturn anno_match;\n}\n\n\nvoid Counter::makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2) {\n\tint m = similarity.size();\n\tint n = similarity[0].size();\n    pipartiteGraph gra;\n    bool have_exchange = false;\n    if (m > n) {\n        have_exchange = true;\n        swap(m, n);\n    }\n    gra.resize(m, n);\n    for (int i = 0; i < gra.leftNum; i++) {\n        for (int j = 0; j < gra.rightNum; j++) {\n\t\t\tif(have_exchange)\n\t\t\t\tgra.mat[i][j] = similarity[j][i];\n\t\t\telse\n\t\t\t\tgra.mat[i][j] = similarity[i][j];\n        }\n    }\n    gra.match();\n    match1 = gra.leftMatch;\n    match2 = gra.rightMatch;\n    if (have_exchange) swap(match1, match2);\n}\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src/evaluate.cpp",
    "content": "/*************************************************************************\n\t> File Name: evaluate.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: 2016年07月14日 星期四 18时28分45秒\n ************************************************************************/\n\n#include \"counter.hpp\"\n#include \"spline.hpp\"\n#include <unistd.h>\n#include <iostream>\n#include <fstream>\n#include <sstream>\n#include <cstdlib>\n#include <string>\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc.hpp>\n\n#include <vector>\n#include <thread>\n#include <mutex>\n\nusing namespace std;\nusing namespace cv;\n\nvoid help(void)\n{\n\tcout<<\"./evaluate [OPTIONS]\"<<endl;\n\tcout<<\"-h                  : print usage help\"<<endl;\n\tcout<<\"-a                  : directory for annotation files (default: /data/driving/eval_data/anno_label/)\"<<endl;\n\tcout<<\"-d                  : directory for detection files (default: /data/driving/eval_data/predict_label/)\"<<endl;\n\tcout<<\"-i                  : directory for image files (default: /data/driving/eval_data/img/)\"<<endl;\n\tcout<<\"-l                  : list of images used for evaluation (default: /data/driving/eval_data/img/all.txt)\"<<endl;\n\tcout<<\"-w                  : width of the lanes (default: 10)\"<<endl;\n\tcout<<\"-t                  : threshold of iou (default: 0.4)\"<<endl;\n\tcout<<\"-c                  : cols (max image width) (default: 1920)\"<<endl;\n\tcout<<\"-r                  : rows (max image height) (default: 1080)\"<<endl;\n\tcout<<\"-s                  : show visualization\"<<endl;\n\tcout<<\"-f                  : start frame in the test set (default: 1)\"<<endl;\n}\n\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes);\nvoid visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane);\nvoid 
worker_func(vector<string> &lines_list_v, int start, int end, int &tp, int &fp, int &fn);\nvoid update_tp_fp_fn(int &tp, int &fp, int &fn, int _tp, int _fp, int _fn);\n\ndouble get_precision(int tp, int fp, int fn)\n{\n\tcerr<<\"tp: \"<<tp<<\" fp: \"<<fp<<\" fn: \"<<fn<<endl;\n\tif(tp+fp == 0)\n\t{\n\t\tcerr<<\"no positive detection\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fp);\n}\n\ndouble get_recall(int tp, int fp, int fn)\n{\n\tif(tp+fn == 0)\n\t{\n\t\tcerr<<\"no ground truth positive\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fn);\n}\n\nmutex myMutex; \nstring anno_dir = \"/data/driving/eval_data/anno_label/\";\nstring detect_dir = \"/data/driving/eval_data/predict_label/\";\nstring im_dir = \"/data/driving/eval_data/img/\";\nstring list_im_file = \"/data/driving/eval_data/img/all.txt\";\nstring output_file = \"./output.txt\";\nint width_lane = 10;\ndouble iou_threshold = 0.4;\nint im_width = 1920;\nint im_height = 1080;\nint oc;\nbool show = false;\nint frame = 1;\nint NUM_PROCESS=20;\n\nint main(int argc, char **argv)\n{\n\t// process params\n\t\n\twhile((oc = getopt(argc, argv, \"ha:d:i:l:w:t:c:r:sf:o:p:\")) != -1)\n\t{\n\t\tswitch(oc)\n\t\t{\n\t\t\tcase 'h':\n\t\t\t\thelp();\n\t\t\t\treturn 0;\n\t\t\tcase 'a':\n\t\t\t\tanno_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'd':\n\t\t\t\tdetect_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'i':\n\t\t\t\tim_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tlist_im_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'w':\n\t\t\t\twidth_lane = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tiou_threshold = atof(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tim_width = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tim_height = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tshow = true;\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tframe = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'o':\n\t\t\t\toutput_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t    NUM_PROCESS = 
atoi(optarg);\n\t\t\t    break;\n\t\t}\n\t}\n\n\n\tcerr<<\"------------Configuration---------\"<<endl;\n\tcerr << \"using multi-thread, num:\" << NUM_PROCESS << endl;\n\tcerr<<\"anno_dir: \"<<anno_dir<<endl;\n\tcerr<<\"detect_dir: \"<<detect_dir<<endl;\n\tcerr<<\"im_dir: \"<<im_dir<<endl;\n\tcerr<<\"list_im_file: \"<<list_im_file<<endl;\n\tcerr<<\"width_lane: \"<<width_lane<<endl;\n\tcerr<<\"iou_threshold: \"<<iou_threshold<<endl;\n\tcerr<<\"im_width: \"<<im_width<<endl;\n\tcerr<<\"im_height: \"<<im_height<<endl;\n\tcerr<<\"-----------------------------------\"<<endl;\n\tcerr<<\"Evaluating the results...\"<<endl;\n\t// this is the max_width and max_height\n\n\tif(width_lane<1)\n\t{\n\t\tcerr<<\"width_lane must be positive\"<<endl;\n\t\thelp();\n\t\treturn 1;\n\t}\n\n\n\tifstream ifs_im_list(list_im_file, ios::in);\n\tif(ifs_im_list.fail())\n\t{\n\t\tcerr<<\"Error: file \"<<list_im_file<<\" not exist!\"<<endl;\n\t\treturn 1;\n\t}\n\n\n\tvector<string> lines_list_v;\n\tstring line;\n\twhile(getline(ifs_im_list, line)) {\n\t\tlines_list_v.push_back(line);\n\t}\n\tifs_im_list.close();\n\n\tint TP=0, FP=0, FN=0; //result\n\tint NUM = lines_list_v.size();\n    int batch_size = NUM / NUM_PROCESS;\n\tvector<thread> thread_v;\n\tfor (int i=0; i<NUM_PROCESS; i++){\n\t\t\tint _start=batch_size*i, _end= batch_size*(i+1);\n\t\t\t_end = (_end>NUM) ? 
NUM:_end;\n\t\t\tif (i == NUM_PROCESS-1) _end = NUM; // last thread also takes the NUM % NUM_PROCESS remainder\n\t\t\tthread_v.push_back(thread(worker_func, ref(lines_list_v), _start, _end, ref(TP), ref(FP), ref(FN)));\n\t}\n\n\tfor (int i=0; i<thread_v.size(); i++)\n\t\tthread_v[i].join();\n\n\tcerr << \"list images num: \" << lines_list_v.size() << endl;\n\n\tdouble precision = get_precision(TP, FP, FN);\n\tdouble recall = get_recall(TP, FP, FN);\n\tdouble F = 2 * precision * recall / (precision + recall);\n\tcerr<<\"finished process file\"<<endl;\n\tcerr<<\"precision: \"<<precision<<endl;\n\tcerr<<\"recall: \"<<recall<<endl;\n\tcerr<<\"Fmeasure: \"<<F<<endl;\n\tcerr<<\"----------------------------------\"<<endl;\n\n\tofstream ofs_out_file;\n\tofs_out_file.open(output_file, ios::out);\n\tofs_out_file<<\"file: \"<<output_file<<endl;\n\tofs_out_file<<\"tp: \"<< TP <<\" fp: \"<< FP <<\" fn: \"<< FN <<endl;\n\tofs_out_file<<\"precision: \"<<precision<<endl;\n\tofs_out_file<<\"recall: 
\"<<recall<<endl;\n\tofs_out_file<<\"Fmeasure: \"<<F<<endl<<endl;\n\tofs_out_file.close();\n\treturn 0;\n}\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes)\n{\n\tlanes.clear();\n\tifstream ifs_lane(file_name, ios::in);\n\tif(ifs_lane.fail())\n\t{\n\t\treturn;\n\t}\n\n\tstring str_line;\n\twhile(getline(ifs_lane, str_line))\n\t{\n\t\tvector<Point2f> curr_lane;\n\t\tstringstream ss;\n\t\tss<<str_line;\n\t\tdouble x,y;\n\t\twhile(ss>>x>>y)\n\t\t{\n\t\t\tcurr_lane.push_back(Point2f(x, y));\n\t\t}\n\t\tlanes.push_back(curr_lane);\n\t}\n\n\tifs_lane.close();\n}\n\nvoid visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane)\n{\n\tMat img = imread(full_im_name, 1);\n\tMat img2 = imread(full_im_name, 1);\n\tvector<Point2f> curr_lane;\n\tvector<Point2f> p_interp;\n\tSpline splineSolver;\n\tScalar color_B = Scalar(255, 0, 0);\n\tScalar color_G = Scalar(0, 255, 0);\n\tScalar color_R = Scalar(0, 0, 255);\n\tScalar color_P = Scalar(255, 0, 255);\n\tScalar color;\n\tfor (int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tcurr_lane = anno_lanes[i];\n\t\tif(curr_lane.size() == 2)\n\t\t{\n\t\t\tp_interp = curr_lane;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tp_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n\t\t}\n\t\tif (anno_match[i] >= 0)\n\t\t{\n\t\t\tcolor = color_G;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcolor = color_G;\n\t\t}\n\t\tfor (int n=0; n<p_interp.size()-1; n++)\n\t\t{\n\t\t\tcv::line(img, p_interp[n], p_interp[n+1], color, width_lane);\n\t\t\tcv::line(img2, p_interp[n], p_interp[n+1], color, 2);\n\t\t}\n\t}\n\tbool detected;\n\tfor (int i=0; i<detect_lanes.size(); i++)\n\t{\n\t\tdetected = false;\n\t\tcurr_lane = detect_lanes[i];\n\t\tif(curr_lane.size() == 2)\n\t\t{\n\t\t\tp_interp = curr_lane;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tp_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n\t\t}\n\t\tfor (int n=0; n<anno_lanes.size(); n++)\n\t\t{\n\t\t\tif 
(anno_match[n] == i)\n\t\t\t{\n\t\t\t\tdetected = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (detected == true)\n\t\t{\n\t\t\tcolor = color_B;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcolor = color_R;\n\t\t}\n\t\tfor (int n=0; n<p_interp.size()-1; n++)\n\t\t{\n\t\t\tcv::line(img, p_interp[n], p_interp[n+1], color, width_lane);\n\t\t\tcv::line(img2, p_interp[n], p_interp[n+1], color, 2);\n\t\t}\n\t}\n\tnamedWindow(\"visualize\", 1);\n\timshow(\"visualize\", img);\n\tnamedWindow(\"visualize2\", 1);\n\timshow(\"visualize2\", img2);\n}\n\nvoid update_tp_fp_fn(int &tp, int &fp, int &fn, int _tp, int _fp, int _fn)\n{\n\tstd::lock_guard<std::mutex> guard(myMutex);\n\ttp += _tp;\n\tfp += _fp;\n\tfn += _fn;\n}\n\nvoid worker_func(vector<string> &lines_list_v, int start, int end, int &tp, int &fp, int &fn)\n{\n\tCounter counter(im_width, im_height, iou_threshold, width_lane);\n\n\tvector<int> anno_match;\n\tstring sub_im_name;\n\tint count = 0;\n\n\tfor (int i=start; i<end; i++) {\n\t\tsub_im_name = lines_list_v[i];\n\t\tcount++;\n\n\t\tstring full_im_name = im_dir + sub_im_name;\n\t\tstring sub_txt_name =  sub_im_name.substr(0, sub_im_name.find_last_of(\".\")) + \".lines.txt\";\n\t\tstring anno_file_name = anno_dir + sub_txt_name;\n\t\tstring detect_file_name = detect_dir + sub_txt_name;\n\t\tvector<vector<Point2f> > anno_lanes;\n\t\tvector<vector<Point2f> > detect_lanes;\n\n\t\tread_lane_file(anno_file_name, ref(anno_lanes));\n\t\tread_lane_file(detect_file_name, ref(detect_lanes));\n\t\t\n\t\tanno_match = counter.count_im_pair(ref(anno_lanes), ref(detect_lanes));\n\t}\n\n\tupdate_tp_fp_fn(ref(tp), ref(fp), ref(fn), counter.getTP(), counter.getFP(), counter.getFN());\n}"
  },
  {
    "path": "utils/lane_evaluation/CULane/src/lane_compare.cpp",
    "content": "/*************************************************************************\n\t> File Name: lane_compare.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Fri Jul 15 10:26:32 2016\n ************************************************************************/\n\n#include \"lane_compare.hpp\"\n#include <opencv2/core/core.hpp>\n#include <opencv2/imgproc.hpp>\n\n\ndouble LaneCompare::get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2)\n{\n\tif(lane1.size()<2 || lane2.size()<2)\n\t{\n\t\tcerr<<\"lane size must be greater or equal to 2\"<<endl;\n\t\treturn 0;\n\t}\n\tMat im1 = Mat::zeros(im_height, im_width, CV_8UC1);\n\tMat im2 = Mat::zeros(im_height, im_width, CV_8UC1);\n\t// draw lines on im1 and im2\n\tvector<Point2f> p_interp1;\n\tvector<Point2f> p_interp2;\n\tif(lane1.size() == 2)\n\t{\n\t\tp_interp1 = lane1;\n\t}\n\telse\n\t{\n\t\tp_interp1 = splineSolver.splineInterpTimes(lane1, 50);\n\t}\n\n\tif(lane2.size() == 2)\n\t{\n\t\tp_interp2 = lane2;\n\t}\n\telse\n\t{\n\t\tp_interp2 = splineSolver.splineInterpTimes(lane2, 50);\n\t}\n\t\n\tScalar color_white = Scalar(1);\n\tfor(int n=0; n<p_interp1.size()-1; n++)\n\t{\n\t\tcv::line(im1, p_interp1[n], p_interp1[n+1], color_white, lane_width);\n\t}\n\tfor(int n=0; n<p_interp2.size()-1; n++)\n\t{\n\t\tcv::line(im2, p_interp2[n], p_interp2[n+1], color_white, lane_width);\n\t}\n\n\tdouble sum_1 = cv::sum(im1).val[0];\n\tdouble sum_2 = cv::sum(im2).val[0];\n\tdouble inter_sum = cv::sum(im1.mul(im2)).val[0];\n\tdouble union_sum = sum_1 + sum_2 - inter_sum; \n\tdouble iou = inter_sum / union_sum;\n\treturn iou;\n}\n\n\n// resize the lane from Size(curr_width, curr_height) to Size(im_width, im_height)\nvoid LaneCompare::resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height)\n{\n\tif(curr_width == im_width && curr_height == im_height)\n\t{\n\t\treturn;\n\t}\n\tdouble x_scale = im_width/(double)curr_width;\n\tdouble y_scale = 
im_height/(double)curr_height;\n\tfor(int n=0; n<curr_lane.size(); n++)\n\t{\n\t\tcurr_lane[n] = Point2f(curr_lane[n].x*x_scale, curr_lane[n].y*y_scale);\n\t}\n}\n\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src/spline.cpp",
    "content": "#include <vector>\n#include <iostream>\n#include \"spline.hpp\"\nusing namespace std;\nusing namespace cv;\n\nvector<Point2f> Spline::splineInterpTimes(const vector<Point2f>& tmp_line, int times) {\n    vector<Point2f> res;\n\n    if(tmp_line.size() == 2) {\n        double x1 = tmp_line[0].x;\n        double y1 = tmp_line[0].y;\n        double x2 = tmp_line[1].x;\n        double y2 = tmp_line[1].y;\n\n        for (int k = 0; k <= times; k++) {\n            double xi =  x1 + double((x2 - x1) * k) / times;\n            double yi =  y1 + double((y2 - y1) * k) / times;\n            res.push_back(Point2f(xi, yi));\n        }\n    }\n\n    else if(tmp_line.size() > 2)\n    {\n        vector<Func> tmp_func;\n        tmp_func = this->cal_fun(tmp_line);\n        if (tmp_func.empty()) {\n            cout << \"in splineInterpTimes: cal_fun failed\" << endl;\n            return res;\n        }\n        for(int j = 0; j < tmp_func.size(); j++)\n        {\n            double delta = tmp_func[j].h / times;\n            for(int k = 0; k < times; k++)\n            {\n                double t1 = delta*k;\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n    }\n\telse {\n\t\tcerr << \"in splineInterpTimes: not enough points\" << endl;\n\t}\n    return res;\n}\nvector<Point2f> Spline::splineInterpStep(vector<Point2f> tmp_line, double step) {\n\tvector<Point2f> res;\n\t/*\n\tif (tmp_line.size() == 2) {\n\t\tdouble x1 = tmp_line[0].x;\n\t\tdouble y1 = tmp_line[0].y;\n\t\tdouble x2 = tmp_line[1].x;\n\t\tdouble y2 = tmp_line[1].y;\n\n\t\tfor (double yi = std::min(y1, y2); yi < std::max(y1, y2); yi += step) {\n            double xi;\n\t\t\tif (yi == y1) 
xi = x1;\n\t\t\telse xi = (x2 - x1) / (y2 - y1) * (yi - y1) + x1;\n\t\t\tres.push_back(Point2f(xi, yi));\n\t\t}\n\t}*/\n\tif (tmp_line.size() == 2) {\n\t\tdouble x1 = tmp_line[0].x;\n\t\tdouble y1 = tmp_line[0].y;\n\t\tdouble x2 = tmp_line[1].x;\n\t\tdouble y2 = tmp_line[1].y;\n\t\ttmp_line[1].x = (x1 + x2) / 2;\n\t\ttmp_line[1].y = (y1 + y2) / 2;\n\t\ttmp_line.push_back(Point2f(x2, y2));\n\t}\n\tif (tmp_line.size() > 2) {\n\t\tvector<Func> tmp_func;\n\t\ttmp_func = this->cal_fun(tmp_line);\n\t\tdouble ystart = tmp_line[0].y;\n\t\tdouble yend = tmp_line[tmp_line.size() - 1].y;\n\t\tbool down;\n\t\tif (ystart < yend) down = 1;\n\t\telse down = 0;\n\t\tif (tmp_func.empty()) {\n\t\t\tcerr << \"in splineInterpStep: cal_fun failed\" << endl;\n\t\t}\n\n\t\tfor(int j = 0; j < tmp_func.size(); j++)\n        {\n            for(double t1 = 0; t1 < tmp_func[j].h; t1 += step)\n            {\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n\t}\n    else {\n        cerr << \"in splineInterpStep: not enough points\" << endl;\n    }\n    return res;\n}\n\nvector<Func> Spline::cal_fun(const vector<Point2f> &point_v)\n{\n    vector<Func> func_v;\n    int n = point_v.size();\n    if(n<=2) {\n        cout << \"in cal_fun: point number less than 3\" << endl;\n        return func_v;\n    }\n\n    func_v.resize(point_v.size()-1);\n\n    vector<double> Mx(n);\n    vector<double> My(n);\n    vector<double> A(n-2);\n    vector<double> B(n-2);\n    vector<double> C(n-2);\n    vector<double> Dx(n-2);\n    vector<double> Dy(n-2);\n    vector<double> h(n-1);\n    //vector<func> func_v(n-1);\n\n    for(int i = 0; i < n-1; i++)\n    {\n        h[i] = sqrt(pow(point_v[i+1].x 
- point_v[i].x, 2) + pow(point_v[i+1].y - point_v[i].y, 2));\n    }\n\n    for(int i = 0; i < n-2; i++)\n    {\n        A[i] = h[i];\n        B[i] = 2*(h[i]+h[i+1]);\n        C[i] = h[i+1];\n\n        Dx[i] =  6*( (point_v[i+2].x - point_v[i+1].x)/h[i+1] - (point_v[i+1].x - point_v[i].x)/h[i] );\n        Dy[i] =  6*( (point_v[i+2].y - point_v[i+1].y)/h[i+1] - (point_v[i+1].y - point_v[i].y)/h[i] );\n    }\n\n    //TDMA\n    C[0] = C[0] / B[0];\n    Dx[0] = Dx[0] / B[0];\n    Dy[0] = Dy[0] / B[0];\n    for(int i = 1; i < n-2; i++)\n    {\n        double tmp = B[i] - A[i]*C[i-1];\n        C[i] = C[i] / tmp;\n        Dx[i] = (Dx[i] - A[i]*Dx[i-1]) / tmp;\n        Dy[i] = (Dy[i] - A[i]*Dy[i-1]) / tmp;\n    }\n    Mx[n-2] = Dx[n-3];\n    My[n-2] = Dy[n-3];\n    for(int i = n-4; i >= 0; i--)\n    {\n        Mx[i+1] = Dx[i] - C[i]*Mx[i+2];\n        My[i+1] = Dy[i] - C[i]*My[i+2];\n    }\n\n    Mx[0] = 0;\n    Mx[n-1] = 0;\n    My[0] = 0;\n    My[n-1] = 0;\n\n    for(int i = 0; i < n-1; i++)\n    {\n        func_v[i].a_x = point_v[i].x;\n        func_v[i].b_x = (point_v[i+1].x - point_v[i].x)/h[i] - (2*h[i]*Mx[i] + h[i]*Mx[i+1]) / 6;\n        func_v[i].c_x = Mx[i]/2;\n        func_v[i].d_x = (Mx[i+1] - Mx[i]) / (6*h[i]);\n\n        func_v[i].a_y = point_v[i].y;\n        func_v[i].b_y = (point_v[i+1].y - point_v[i].y)/h[i] - (2*h[i]*My[i] + h[i]*My[i+1]) / 6;\n        func_v[i].c_y = My[i]/2;\n        func_v[i].d_y = (My[i+1] - My[i]) / (6*h[i]);\n\n        func_v[i].h = h[i];\n    }\n    return func_v;\n}\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src_origin/counter.cpp",
    "content": "/*************************************************************************\n\t> File Name: counter.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Thu Jul 14 20:23:08 2016\n ************************************************************************/\n\n#include \"counter.hpp\"\n\ndouble Counter::get_precision(void)\n{\n\tcerr<<\"tp: \"<<tp<<\" fp: \"<<fp<<\" fn: \"<<fn<<endl;\n\tif(tp+fp == 0)\n\t{\n\t\tcerr<<\"no positive detection\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fp);\n}\n\ndouble Counter::get_recall(void)\n{\n\tif(tp+fn == 0)\n\t{\n\t\tcerr<<\"no ground truth positive\"<<endl;\n\t\treturn -1;\n\t}\n\treturn tp/double(tp + fn);\n}\n\nlong Counter::getTP(void)\n{\n\treturn tp;\n}\n\nlong Counter::getFP(void)\n{\n\treturn fp;\n}\n\nlong Counter::getFN(void)\n{\n\treturn fn;\n}\n\nvector<int> Counter::count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes)\n{\n\tvector<int> anno_match(anno_lanes.size(), -1);\n\tvector<int> detect_match;\n\tif(anno_lanes.empty())\n\t{\n\t\tfp += detect_lanes.size();\n\t\treturn anno_match;\n\t}\n\n\tif(detect_lanes.empty())\n\t{\n\t\tfn += anno_lanes.size();\n\t\treturn anno_match;\n\t}\n\t// hungarian match first\n\t\n\t// first calc similarity matrix\n\tvector<vector<double> > similarity(anno_lanes.size(), vector<double>(detect_lanes.size(), 0));\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tconst vector<Point2f> &curr_anno_lane = anno_lanes[i];\n\t\tfor(int j=0; j<detect_lanes.size(); j++)\n\t\t{\n\t\t\tconst vector<Point2f> &curr_detect_lane = detect_lanes[j];\n\t\t\tsimilarity[i][j] = lane_compare->get_lane_similarity(curr_anno_lane, curr_detect_lane);\n\t\t}\n\t}\n\n\n\n\tmakeMatch(similarity, anno_match, detect_match);\n\n\t\n\tint curr_tp = 0;\n\t// count and add\n\tfor(int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tif(anno_match[i]>=0 && similarity[i][anno_match[i]] > 
sim_threshold)\n\t\t{\n\t\t\tcurr_tp++;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tanno_match[i] = -1;\n\t\t}\n\t}\n\tint curr_fn = anno_lanes.size() - curr_tp;\n\tint curr_fp = detect_lanes.size() - curr_tp;\n\ttp += curr_tp;\n\tfn += curr_fn;\n\tfp += curr_fp;\n\treturn anno_match;\n}\n\n\nvoid Counter::makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2) {\n\tint m = similarity.size();\n\tint n = similarity[0].size();\n    pipartiteGraph gra;\n    bool have_exchange = false;\n    if (m > n) {\n        have_exchange = true;\n        swap(m, n);\n    }\n    gra.resize(m, n);\n    for (int i = 0; i < gra.leftNum; i++) {\n        for (int j = 0; j < gra.rightNum; j++) {\n\t\t\tif(have_exchange)\n\t\t\t\tgra.mat[i][j] = similarity[j][i];\n\t\t\telse\n\t\t\t\tgra.mat[i][j] = similarity[i][j];\n        }\n    }\n    gra.match();\n    match1 = gra.leftMatch;\n    match2 = gra.rightMatch;\n    if (have_exchange) swap(match1, match2);\n}\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src_origin/evaluate.cpp",
    "content": "/*************************************************************************\n\t> File Name: evaluate.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: 2016年07月14日 星期四 18时28分45秒\n ************************************************************************/\n\n#include \"counter.hpp\"\n#include \"spline.hpp\"\n#include <unistd.h>\n#include <iostream>\n#include <fstream>\n#include <sstream>\n#include <cstdlib>\n#include <string>\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc.hpp>\n\nusing namespace std;\nusing namespace cv;\n\nvoid help(void)\n{\n\tcout<<\"./evaluate [OPTIONS]\"<<endl;\n\tcout<<\"-h                  : print usage help\"<<endl;\n\tcout<<\"-a                  : directory for annotation files (default: /data/driving/eval_data/anno_label/)\"<<endl;\n\tcout<<\"-d                  : directory for detection files (default: /data/driving/eval_data/predict_label/)\"<<endl;\n\tcout<<\"-i                  : directory for image files (default: /data/driving/eval_data/img/)\"<<endl;\n\tcout<<\"-l                  : list of images used for evaluation (default: /data/driving/eval_data/img/all.txt)\"<<endl;\n\tcout<<\"-w                  : width of the lanes (default: 10)\"<<endl;\n\tcout<<\"-t                  : threshold of iou (default: 0.4)\"<<endl;\n\tcout<<\"-c                  : cols (max image width) (default: 1920)\"<<endl;\n\tcout<<\"-r                  : rows (max image height) (default: 1080)\"<<endl;\n\tcout<<\"-s                  : show visualization\"<<endl;\n\tcout<<\"-f                  : start frame in the test set (default: 1)\"<<endl;\n}\n\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes);\nvoid visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane);\n\nint main(int argc, char **argv)\n{\n\t// process params\n\tstring 
anno_dir = \"/data/driving/eval_data/anno_label/\";\n\tstring detect_dir = \"/data/driving/eval_data/predict_label/\";\n\tstring im_dir = \"/data/driving/eval_data/img/\";\n\tstring list_im_file = \"/data/driving/eval_data/img/all.txt\";\n\tstring output_file = \"./output.txt\";\n\tint width_lane = 10;\n\tdouble iou_threshold = 0.4;\n\tint im_width = 1920;\n\tint im_height = 1080;\n\tint oc;\n\tbool show = false;\n\tint frame = 1;\n\twhile((oc = getopt(argc, argv, \"ha:d:i:l:w:t:c:r:sf:o:\")) != -1)\n\t{\n\t\tswitch(oc)\n\t\t{\n\t\t\tcase 'h':\n\t\t\t\thelp();\n\t\t\t\treturn 0;\n\t\t\tcase 'a':\n\t\t\t\tanno_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'd':\n\t\t\t\tdetect_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'i':\n\t\t\t\tim_dir = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tlist_im_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'w':\n\t\t\t\twidth_lane = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tiou_threshold = atof(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tim_width = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tim_height = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tshow = true;\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tframe = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'o':\n\t\t\t\toutput_file = optarg;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\n\tcout<<\"------------Configuration---------\"<<endl;\n\tcout<<\"anno_dir: \"<<anno_dir<<endl;\n\tcout<<\"detect_dir: \"<<detect_dir<<endl;\n\tcout<<\"im_dir: \"<<im_dir<<endl;\n\tcout<<\"list_im_file: \"<<list_im_file<<endl;\n\tcout<<\"width_lane: \"<<width_lane<<endl;\n\tcout<<\"iou_threshold: \"<<iou_threshold<<endl;\n\tcout<<\"im_width: \"<<im_width<<endl;\n\tcout<<\"im_height: \"<<im_height<<endl;\n\tcout<<\"-----------------------------------\"<<endl;\n\tcout<<\"Evaluating the results...\"<<endl;\n\t// this is the max_width and max_height\n\n\tif(width_lane<1)\n\t{\n\t\tcerr<<\"width_lane must be positive\"<<endl;\n\t\thelp();\n\t\treturn 1;\n\t}\n\n\n\tifstream 
ifs_im_list(list_im_file, ios::in);\n\tif(ifs_im_list.fail())\n\t{\n\t\tcerr<<\"Error: file \"<<list_im_file<<\" not exist!\"<<endl;\n\t\treturn 1;\n\t}\n\n\n\tCounter counter(im_width, im_height, iou_threshold, width_lane);\n\t\n\tvector<int> anno_match;\n\tstring sub_im_name;\n\tint count = 0;\n\twhile(getline(ifs_im_list, sub_im_name))\n\t{\n\t\tcount++;\n\t\tif (count < frame)\n\t\t\tcontinue;\n\t\tstring full_im_name = im_dir + sub_im_name;\n\t\tstring sub_txt_name =  sub_im_name.substr(0, sub_im_name.find_last_of(\".\")) + \".lines.txt\";\n\t\tstring anno_file_name = anno_dir + sub_txt_name;\n\t\tstring detect_file_name = detect_dir + sub_txt_name;\n\t\tvector<vector<Point2f> > anno_lanes;\n\t\tvector<vector<Point2f> > detect_lanes;\n\t\tread_lane_file(anno_file_name, anno_lanes);\n\t\tread_lane_file(detect_file_name, detect_lanes);\n\t\t//cerr<<count<<\": \"<<full_im_name<<endl;\n\t\tanno_match = counter.count_im_pair(anno_lanes, detect_lanes);\n\t\tif (show)\n\t\t{\n\t\t\tvisualize(full_im_name, anno_lanes, detect_lanes, anno_match, width_lane);\n\t\t\twaitKey(0);\n\t\t}\n\t}\n\tifs_im_list.close();\n\t\n\tdouble precision = counter.get_precision();\n\tdouble recall = counter.get_recall();\n\tdouble F = 2 * precision * recall / (precision + recall);\t\n\tcerr<<\"finished process file\"<<endl;\n\tcout<<\"precision: \"<<precision<<endl;\n\tcout<<\"recall: \"<<recall<<endl;\n\tcout<<\"Fmeasure: \"<<F<<endl;\n\tcout<<\"----------------------------------\"<<endl;\n\n\tofstream ofs_out_file;\n\tofs_out_file.open(output_file, ios::out);\n\tofs_out_file<<\"file: \"<<output_file<<endl;\n\tofs_out_file<<\"tp: \"<<counter.getTP()<<\" fp: \"<<counter.getFP()<<\" fn: \"<<counter.getFN()<<endl;\n\tofs_out_file<<\"precision: \"<<precision<<endl;\n\tofs_out_file<<\"recall: \"<<recall<<endl;\n\tofs_out_file<<\"Fmeasure: \"<<F<<endl<<endl;\n\tofs_out_file.close();\n\treturn 0;\n}\n\nvoid read_lane_file(const string &file_name, vector<vector<Point2f> > 
&lanes)\n{\n\tlanes.clear();\n\tifstream ifs_lane(file_name, ios::in);\n\tif(ifs_lane.fail())\n\t{\n\t\treturn;\n\t}\n\n\tstring str_line;\n\twhile(getline(ifs_lane, str_line))\n\t{\n\t\tvector<Point2f> curr_lane;\n\t\tstringstream ss;\n\t\tss<<str_line;\n\t\tdouble x,y;\n\t\twhile(ss>>x>>y)\n\t\t{\n\t\t\tcurr_lane.push_back(Point2f(x, y));\n\t\t}\n\t\tlanes.push_back(curr_lane);\n\t}\n\n\tifs_lane.close();\n}\n\nvoid visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane)\n{\n\tMat img = imread(full_im_name, 1);\n\tMat img2 = imread(full_im_name, 1);\n\tvector<Point2f> curr_lane;\n\tvector<Point2f> p_interp;\n\tSpline splineSolver;\n\tScalar color_B = Scalar(255, 0, 0);\n\tScalar color_G = Scalar(0, 255, 0);\n\tScalar color_R = Scalar(0, 0, 255);\n\tScalar color_P = Scalar(255, 0, 255);\n\tScalar color;\n\tfor (int i=0; i<anno_lanes.size(); i++)\n\t{\n\t\tcurr_lane = anno_lanes[i];\n\t\tif(curr_lane.size() == 2)\n\t\t{\n\t\t\tp_interp = curr_lane;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tp_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n\t\t}\n\t\tif (anno_match[i] >= 0)\n\t\t{\n\t\t\tcolor = color_G;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcolor = color_G;\n\t\t}\n\t\tfor (int n=0; n<p_interp.size()-1; n++)\n\t\t{\n\t\t\tcv::line(img, p_interp[n], p_interp[n+1], color, width_lane);\n\t\t\tcv::line(img2, p_interp[n], p_interp[n+1], color, 2);\n\t\t}\n\t}\n\tbool detected;\n\tfor (int i=0; i<detect_lanes.size(); i++)\n\t{\n\t\tdetected = false;\n\t\tcurr_lane = detect_lanes[i];\n\t\tif(curr_lane.size() == 2)\n\t\t{\n\t\t\tp_interp = curr_lane;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tp_interp = splineSolver.splineInterpTimes(curr_lane, 50);\n\t\t}\n\t\tfor (int n=0; n<anno_lanes.size(); n++)\n\t\t{\n\t\t\tif (anno_match[n] == i)\n\t\t\t{\n\t\t\t\tdetected = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (detected == true)\n\t\t{\n\t\t\tcolor = color_B;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tcolor = 
color_R;\n\t\t}\n\t\tfor (int n=0; n<p_interp.size()-1; n++)\n\t\t{\n\t\t\tcv::line(img, p_interp[n], p_interp[n+1], color, width_lane);\n\t\t\tcv::line(img2, p_interp[n], p_interp[n+1], color, 2);\n\t\t}\n\t}\n\tnamedWindow(\"visualize\", 1);\n\timshow(\"visualize\", img);\n\tnamedWindow(\"visualize2\", 1);\n\timshow(\"visualize2\", img2);\n}\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src_origin/lane_compare.cpp",
    "content": "/*************************************************************************\n\t> File Name: lane_compare.cpp\n\t> Author: Xingang Pan, Jun Li\n\t> Mail: px117@ie.cuhk.edu.hk\n\t> Created Time: Fri Jul 15 10:26:32 2016\n ************************************************************************/\n\n#include \"lane_compare.hpp\"\n#include <opencv2/core/core.hpp>\n#include <opencv2/imgproc.hpp>\n\n\ndouble LaneCompare::get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2)\n{\n\tif(lane1.size()<2 || lane2.size()<2)\n\t{\n\t\tcerr<<\"lane size must be greater or equal to 2\"<<endl;\n\t\treturn 0;\n\t}\n\tMat im1 = Mat::zeros(im_height, im_width, CV_8UC1);\n\tMat im2 = Mat::zeros(im_height, im_width, CV_8UC1);\n\t// draw lines on im1 and im2\n\tvector<Point2f> p_interp1;\n\tvector<Point2f> p_interp2;\n\tif(lane1.size() == 2)\n\t{\n\t\tp_interp1 = lane1;\n\t}\n\telse\n\t{\n\t\tp_interp1 = splineSolver.splineInterpTimes(lane1, 50);\n\t}\n\n\tif(lane2.size() == 2)\n\t{\n\t\tp_interp2 = lane2;\n\t}\n\telse\n\t{\n\t\tp_interp2 = splineSolver.splineInterpTimes(lane2, 50);\n\t}\n\t\n\tScalar color_white = Scalar(1);\n\tfor(int n=0; n<p_interp1.size()-1; n++)\n\t{\n\t\tcv::line(im1, p_interp1[n], p_interp1[n+1], color_white, lane_width);\n\t}\n\tfor(int n=0; n<p_interp2.size()-1; n++)\n\t{\n\t\tcv::line(im2, p_interp2[n], p_interp2[n+1], color_white, lane_width);\n\t}\n\n\tdouble sum_1 = cv::sum(im1).val[0];\n\tdouble sum_2 = cv::sum(im2).val[0];\n\tdouble inter_sum = cv::sum(im1.mul(im2)).val[0];\n\tdouble union_sum = sum_1 + sum_2 - inter_sum; \n\tdouble iou = inter_sum / union_sum;\n\treturn iou;\n}\n\n\n// resize the lane from Size(curr_width, curr_height) to Size(im_width, im_height)\nvoid LaneCompare::resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height)\n{\n\tif(curr_width == im_width && curr_height == im_height)\n\t{\n\t\treturn;\n\t}\n\tdouble x_scale = im_width/(double)curr_width;\n\tdouble y_scale = 
im_height/(double)curr_height;\n\tfor(int n=0; n<curr_lane.size(); n++)\n\t{\n\t\tcurr_lane[n] = Point2f(curr_lane[n].x*x_scale, curr_lane[n].y*y_scale);\n\t}\n}\n\n"
  },
  {
    "path": "utils/lane_evaluation/CULane/src_origin/spline.cpp",
    "content": "#include <vector>\n#include <iostream>\n#include \"spline.hpp\"\nusing namespace std;\nusing namespace cv;\n\nvector<Point2f> Spline::splineInterpTimes(const vector<Point2f>& tmp_line, int times) {\n    vector<Point2f> res;\n\n    if(tmp_line.size() == 2) {\n        double x1 = tmp_line[0].x;\n        double y1 = tmp_line[0].y;\n        double x2 = tmp_line[1].x;\n        double y2 = tmp_line[1].y;\n\n        for (int k = 0; k <= times; k++) {\n            double xi =  x1 + double((x2 - x1) * k) / times;\n            double yi =  y1 + double((y2 - y1) * k) / times;\n            res.push_back(Point2f(xi, yi));\n        }\n    }\n\n    else if(tmp_line.size() > 2)\n    {\n        vector<Func> tmp_func;\n        tmp_func = this->cal_fun(tmp_line);\n        if (tmp_func.empty()) {\n            cout << \"in splineInterpTimes: cal_fun failed\" << endl;\n            return res;\n        }\n        for(int j = 0; j < tmp_func.size(); j++)\n        {\n            double delta = tmp_func[j].h / times;\n            for(int k = 0; k < times; k++)\n            {\n                double t1 = delta*k;\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n    }\n\telse {\n\t\tcerr << \"in splineInterpTimes: not enough points\" << endl;\n\t}\n    return res;\n}\nvector<Point2f> Spline::splineInterpStep(vector<Point2f> tmp_line, double step) {\n\tvector<Point2f> res;\n\t/*\n\tif (tmp_line.size() == 2) {\n\t\tdouble x1 = tmp_line[0].x;\n\t\tdouble y1 = tmp_line[0].y;\n\t\tdouble x2 = tmp_line[1].x;\n\t\tdouble y2 = tmp_line[1].y;\n\n\t\tfor (double yi = std::min(y1, y2); yi < std::max(y1, y2); yi += step) {\n            double xi;\n\t\t\tif (yi == y1) 
xi = x1;\n            else xi = (x2 - x1) / (y2 - y1) * (yi - y1) + x1;\n            res.push_back(Point2f(xi, yi));\n        }\n    }*/\n    if (tmp_line.size() == 2) {\n        double x1 = tmp_line[0].x;\n        double y1 = tmp_line[0].y;\n        double x2 = tmp_line[1].x;\n        double y2 = tmp_line[1].y;\n        tmp_line[1].x = (x1 + x2) / 2;\n        tmp_line[1].y = (y1 + y2) / 2;\n        tmp_line.push_back(Point2f(x2, y2));\n    }\n    if (tmp_line.size() > 2) {\n        vector<Func> tmp_func = this->cal_fun(tmp_line);\n        if (tmp_func.empty()) {\n            cerr << \"in splineInterpStep: cal_fun failed\" << endl;\n            return res;\n        }\n\n        for(size_t j = 0; j < tmp_func.size(); j++)\n        {\n            for(double t1 = 0; t1 < tmp_func[j].h; t1 += step)\n            {\n                double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);\n                double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);\n                res.push_back(Point2f(x1, y1));\n            }\n        }\n        res.push_back(tmp_line[tmp_line.size() - 1]);\n    }\n    else {\n        cerr << \"in splineInterpStep: not enough points\" << endl;\n    }\n    return res;\n}\n\nvector<Func> Spline::cal_fun(const vector<Point2f> &point_v)\n{\n    vector<Func> func_v;\n    int n = point_v.size();\n    if(n<=2) {\n        cerr << \"in cal_fun: point number less than 3\" << endl;\n        return func_v;\n    }\n\n    func_v.resize(point_v.size()-1);\n\n    vector<double> Mx(n);\n    vector<double> My(n);\n    vector<double> A(n-2);\n    vector<double> B(n-2);\n    vector<double> C(n-2);\n    vector<double> Dx(n-2);\n    vector<double> Dy(n-2);\n    vector<double> h(n-1);\n\n    for(int i = 0; i < n-1; i++)\n    {\n        h[i] = sqrt(pow(point_v[i+1].x 
- point_v[i].x, 2) + pow(point_v[i+1].y - point_v[i].y, 2));\n    }\n\n    for(int i = 0; i < n-2; i++)\n    {\n        A[i] = h[i];\n        B[i] = 2*(h[i]+h[i+1]);\n        C[i] = h[i+1];\n\n        Dx[i] =  6*( (point_v[i+2].x - point_v[i+1].x)/h[i+1] - (point_v[i+1].x - point_v[i].x)/h[i] );\n        Dy[i] =  6*( (point_v[i+2].y - point_v[i+1].y)/h[i+1] - (point_v[i+1].y - point_v[i].y)/h[i] );\n    }\n\n    //TDMA\n    C[0] = C[0] / B[0];\n    Dx[0] = Dx[0] / B[0];\n    Dy[0] = Dy[0] / B[0];\n    for(int i = 1; i < n-2; i++)\n    {\n        double tmp = B[i] - A[i]*C[i-1];\n        C[i] = C[i] / tmp;\n        Dx[i] = (Dx[i] - A[i]*Dx[i-1]) / tmp;\n        Dy[i] = (Dy[i] - A[i]*Dy[i-1]) / tmp;\n    }\n    Mx[n-2] = Dx[n-3];\n    My[n-2] = Dy[n-3];\n    for(int i = n-4; i >= 0; i--)\n    {\n        Mx[i+1] = Dx[i] - C[i]*Mx[i+2];\n        My[i+1] = Dy[i] - C[i]*My[i+2];\n    }\n\n    Mx[0] = 0;\n    Mx[n-1] = 0;\n    My[0] = 0;\n    My[n-1] = 0;\n\n    for(int i = 0; i < n-1; i++)\n    {\n        func_v[i].a_x = point_v[i].x;\n        func_v[i].b_x = (point_v[i+1].x - point_v[i].x)/h[i] - (2*h[i]*Mx[i] + h[i]*Mx[i+1]) / 6;\n        func_v[i].c_x = Mx[i]/2;\n        func_v[i].d_x = (Mx[i+1] - Mx[i]) / (6*h[i]);\n\n        func_v[i].a_y = point_v[i].y;\n        func_v[i].b_y = (point_v[i+1].y - point_v[i].y)/h[i] - (2*h[i]*My[i] + h[i]*My[i+1]) / 6;\n        func_v[i].c_y = My[i]/2;\n        func_v[i].d_y = (My[i+1] - My[i]) / (6*h[i]);\n\n        func_v[i].h = h[i];\n    }\n    return func_v;\n}\n"
  },
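The `cal_fun` routine in `spline.cpp` fits a natural cubic spline by solving a tridiagonal system for the second derivatives with TDMA (the Thomas algorithm): forward elimination normalizes the super-diagonal and right-hand side, then back substitution recovers the unknowns. As a cross-check of that pattern, here is a minimal Python sketch (the name `thomas_solve` is mine, not part of the repo):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Mirrors the forward elimination / back substitution in Spline::cal_fun.
    """
    n = len(b)
    c = list(c)
    d = list(d)
    # Forward elimination: normalize row 0, then eliminate the sub-diagonal.
    c[0] = c[0] / b[0]
    d[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c[i - 1]
        if i < n - 1:
            c[i] = c[i] / m
        d[i] = (d[i] - a[i] * d[i - 1]) / m
    # Back substitution: d now holds the solution.
    for i in range(n - 2, -1, -1):
        d[i] = d[i] - c[i] * d[i + 1]
    return d
```

With `b = [2, 2, 2]` and off-diagonals of 1, the right-hand side `[4, 8, 8]` recovers the solution `[1, 2, 3]`.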
  {
    "path": "utils/lane_evaluation/tusimple/lane.py",
    "content": "import numpy as np\nfrom sklearn.linear_model import LinearRegression\nimport json as json\n\n\nclass LaneEval(object):\n    lr = LinearRegression()\n    pixel_thresh = 20\n    pt_thresh = 0.85\n\n    @staticmethod\n    def get_angle(xs, y_samples):\n        xs, ys = xs[xs >= 0], y_samples[xs >= 0]\n        if len(xs) > 1:\n            LaneEval.lr.fit(ys[:, None], xs)\n            k = LaneEval.lr.coef_[0]\n            theta = np.arctan(k)\n        else:\n            theta = 0\n        return theta\n\n    @staticmethod\n    def line_accuracy(pred, gt, thresh):\n        pred = np.array([p if p >= 0 else -100 for p in pred])\n        gt = np.array([g if g >= 0 else -100 for g in gt])\n        return np.sum(np.where(np.abs(pred - gt) < thresh, 1., 0.)) / len(gt)\n\n    @staticmethod\n    def bench(pred, gt, y_samples, running_time):\n        if any(len(p) != len(y_samples) for p in pred):\n            raise Exception('Format of lanes error.')\n        if running_time > 200 or len(gt) + 2 < len(pred):\n            return 0., 0., 1.\n        angles = [LaneEval.get_angle(np.array(x_gts), np.array(y_samples)) for x_gts in gt]\n        threshs = [LaneEval.pixel_thresh / np.cos(angle) for angle in angles]\n        line_accs = []\n        fp, fn = 0., 0.\n        matched = 0.\n        for x_gts, thresh in zip(gt, threshs):\n            accs = [LaneEval.line_accuracy(np.array(x_preds), np.array(x_gts), thresh) for x_preds in pred]\n            max_acc = np.max(accs) if len(accs) > 0 else 0.\n            if max_acc < LaneEval.pt_thresh:\n                fn += 1\n            else:\n                matched += 1\n            line_accs.append(max_acc)\n        fp = len(pred) - matched\n        if len(gt) > 4 and fn > 0:\n            fn -= 1\n        s = sum(line_accs)\n        if len(gt) > 4:\n            s -= min(line_accs)\n        return s / max(min(4.0, len(gt)), 1.), fp / len(pred) if len(pred) > 0 else 0., fn / max(min(len(gt), 4.) 
, 1.)\n\n    @staticmethod\n    def bench_one_submit(pred_file, gt_file):\n        try:\n            json_pred = [json.loads(line) for line in open(pred_file).readlines()]\n        except BaseException as e:\n            raise Exception('Fail to load json file of the prediction.')\n        json_gt = [json.loads(line) for line in open(gt_file).readlines()]\n        if len(json_gt) != len(json_pred):\n            raise Exception('We do not get the predictions of all the test tasks')\n        gts = {l['raw_file']: l for l in json_gt}\n        accuracy, fp, fn = 0., 0., 0.\n        for pred in json_pred:\n            if 'raw_file' not in pred or 'lanes' not in pred or 'run_time' not in pred:\n                raise Exception('raw_file or lanes or run_time not in some predictions.')\n            raw_file = pred['raw_file']\n            pred_lanes = pred['lanes']\n            run_time = pred['run_time']\n            if raw_file not in gts:\n                raise Exception('Some raw_file from your predictions do not exist in the test tasks.')\n            gt = gts[raw_file]\n            gt_lanes = gt['lanes']\n            y_samples = gt['h_samples']\n            try:\n                a, p, n = LaneEval.bench(pred_lanes, gt_lanes, y_samples, run_time)\n            except BaseException as e:\n                raise Exception('Format of lanes error.')\n            accuracy += a\n            fp += p\n            fn += n\n        num = len(gts)\n        # the first return parameter is the default ranking parameter\n        return json.dumps([\n            {'name': 'Accuracy', 'value': accuracy / num, 'order': 'desc'},\n            {'name': 'FP', 'value': fp / num, 'order': 'asc'},\n            {'name': 'FN', 'value': fn / num, 'order': 'asc'}\n        ])\n\n\nif __name__ == '__main__':\n    import sys\n    try:\n        if len(sys.argv) != 3:\n            raise Exception('Invalid input arguments')\n        print(LaneEval.bench_one_submit(sys.argv[1], sys.argv[2]))\n    except 
Exception as e:\n        print(e)\n        sys.exit(str(e))\n"
  },
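`LaneEval.line_accuracy` above counts the fraction of ground-truth points whose predicted x-coordinate falls within the angle-adjusted pixel threshold. A standalone NumPy restatement (not importing the repo) with a worked example; note that points missing (`x < 0`) in both prediction and ground truth are mapped to the same sentinel value, so they count as matches:

```python
import numpy as np

def line_accuracy(pred, gt, thresh):
    # Missing points (x < 0) become -100 on both sides, so a point absent
    # in both pred and gt matches exactly (|(-100) - (-100)| = 0 < thresh).
    pred = np.array([p if p >= 0 else -100 for p in pred])
    gt = np.array([g if g >= 0 else -100 for g in gt])
    return np.sum(np.abs(pred - gt) < thresh) / len(gt)

# 102 is within 20 px of 100; 130 is not within 20 px of 110 (diff == 20,
# and the comparison is strict); the two "missing" points match each other.
acc = line_accuracy([100, 110, -2], [102, 130, -2], thresh=20)
```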
  {
    "path": "utils/lr_scheduler.py",
    "content": "from torch.optim.lr_scheduler import _LRScheduler\n\n\nclass PolyLR(_LRScheduler):\n    def __init__(self, optimizer, pow, max_iter, min_lrs=1e-20, last_epoch=-1, warmup=0):\n        \"\"\"\n        :param warmup: how many steps for linearly warmup lr\n        \"\"\"\n        self.pow = pow\n        self.max_iter = max_iter\n        if not isinstance(min_lrs, list) and not isinstance(min_lrs, tuple):\n            self.min_lrs = [min_lrs] * len(optimizer.param_groups)\n\n        assert isinstance(warmup, int), \"The type of warmup is incorrect, got {}\".format(type(warmup))\n        self.warmup = max(warmup, 0)\n\n        super(PolyLR, self).__init__(optimizer, last_epoch)\n\n    def get_lr(self):\n        if self.last_epoch < self.warmup:\n            return [base_lr / self.warmup * (self.last_epoch+1) for base_lr in self.base_lrs]\n\n        if self.last_epoch < self.max_iter:\n            coeff = (1 - (self.last_epoch-self.warmup) / (self.max_iter-self.warmup)) ** self.pow\n        else:\n            coeff = 0\n        return [(base_lr - min_lr) * coeff + min_lr\n                for base_lr, min_lr in zip(self.base_lrs, self.min_lrs)]\n"
  },
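`PolyLR.get_lr` combines a linear warmup with polynomial decay toward `min_lr`, reaching it exactly at `max_iter`. A torch-free sketch of the same per-step formula (the helper name `poly_lr` is mine):

```python
def poly_lr(base_lr, step, max_iter, power, min_lr=1e-20, warmup=0):
    # Linear warmup: scale the lr up over the first `warmup` steps.
    if step < warmup:
        return base_lr / warmup * (step + 1)
    # Polynomial decay from base_lr toward min_lr after the warmup phase.
    if step < max_iter:
        coeff = (1 - (step - warmup) / (max_iter - warmup)) ** power
    else:
        coeff = 0.0
    return (base_lr - min_lr) * coeff + min_lr
```

For example, with `base_lr=0.1`, `max_iter=100`, `power=0.9`, the lr starts at 0.1 and decays to `min_lr` by step 100; with `warmup=5` the first step uses 0.1 / 5 = 0.02.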
  {
    "path": "utils/prob2lines/getLane.py",
    "content": "import cv2\nimport numpy as np\n\n\ndef getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape=None):\n    \"\"\"\n    Arguments:\n    ----------\n    prob_map: prob map for single lane, np array size (h, w)\n    resize_shape:  reshape size target, (H, W)\n\n    Return:\n    ----------\n    coords: x coords bottom up every y_px_gap px, 0 for non-exist, in resized shape\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = prob_map.shape\n    h, w = prob_map.shape\n    H, W = resize_shape\n\n    coords = np.zeros(pts)\n    for i in range(pts):\n        y = int((H - 10 - i * y_px_gap) * h / H)\n        if y < 0:\n            break\n        line = prob_map[y, :]\n        id = np.argmax(line)\n        if line[id] > thresh:\n            coords[i] = int(id / w * W)\n    if (coords > 0).sum() < 2:\n        coords = np.zeros(pts)\n    return coords\n\n\ndef prob2lines_tusimple(seg_pred, exist, resize_shape=None, smooth=True, y_px_gap=10, pts=None, thresh=0.3):\n    \"\"\"\n    Arguments:\n    ----------\n    seg_pred:      np.array size (5, h, w)\n    resize_shape:  reshape size target, (H, W)\n    exist:       list of existence, e.g. 
[0, 1, 1, 0]\n    smooth:      whether to smooth the probability or not\n    y_px_gap:    y pixel gap for sampling\n    pts:     how many points for one lane\n    thresh:  probability threshold\n\n    Return:\n    ----------\n    coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = seg_pred.shape[1:]  # seg_pred (5, h, w)\n    _, h, w = seg_pred.shape\n    H, W = resize_shape\n    coordinates = []\n\n    if pts is None:\n        pts = round(H / 2 / y_px_gap)\n\n    seg_pred = np.ascontiguousarray(np.transpose(seg_pred, (1, 2, 0)))\n    for i in range(4):\n        prob_map = seg_pred[..., i + 1]\n        if smooth:\n            prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)\n        if exist[i] > 0:\n            coords = getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape)\n            if (coords>0).sum() < 2:\n                continue\n            coordinates.append(\n                [[coords[j], H - 10 - j * y_px_gap] if coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in\n                 range(pts)])\n\n    return coordinates\n\n\ndef getLane_CULane(prob_map, y_px_gap, pts, thresh, resize_shape=None):\n    \"\"\"\n    Arguments:\n    ----------\n    prob_map: prob map for single lane, np array size (h, w)\n    resize_shape:  reshape size target, (H, W)\n    Return:\n    ----------\n    coords: x coords bottom up every y_px_gap px, 0 for non-exist, in resized shape\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = prob_map.shape\n    h, w = prob_map.shape\n    H, W = resize_shape\n\n    coords = np.zeros(pts)\n    for i in range(pts):\n        y = int(h - i * y_px_gap / H * h - 1)\n        if y < 0:\n            break\n        line = prob_map[y, :]\n        id = np.argmax(line)\n        if line[id] > thresh:\n            coords[i] = int(id / w * W)\n    if (coords > 0).sum() < 2:\n        coords = 
np.zeros(pts)\n    return coords\n\n\ndef prob2lines_CULane(seg_pred, exist, resize_shape=None, smooth=True, y_px_gap=20, pts=None, thresh=0.3):\n    \"\"\"\n    Arguments:\n    ----------\n    seg_pred: np.array size (5, h, w)\n    resize_shape:  reshape size target, (H, W)\n    exist:   list of existence, e.g. [0, 1, 1, 0]\n    smooth:  whether to smooth the probability or not\n    y_px_gap: y pixel gap for sampling\n    pts:     how many points for one lane\n    thresh:  probability threshold\n    Return:\n    ----------\n    coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]\n    \"\"\"\n    if resize_shape is None:\n        resize_shape = seg_pred.shape[1:]  # seg_pred (5, h, w)\n    _, h, w = seg_pred.shape\n    H, W = resize_shape\n    coordinates = []\n\n    if pts is None:\n        pts = round(H / 2 / y_px_gap)\n\n    seg_pred = np.ascontiguousarray(np.transpose(seg_pred, (1, 2, 0)))\n    for i in range(4):\n        prob_map = seg_pred[..., i + 1]\n        if smooth:\n            prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)\n        if exist[i] > 0:\n            coords = getLane_CULane(prob_map, y_px_gap, pts, thresh, resize_shape)\n            if (coords>0).sum() < 2:\n                continue\n            coordinates.append([[coords[j], H - 1 - j * y_px_gap] for j in range(pts) if coords[j] > 0])\n\n    return coordinates\n"
  },
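`getLane_CULane` extracts one lane by scanning probability-map rows bottom-up every `y_px_gap` pixels (in target scale) and taking each row's argmax when it clears `thresh`. A self-contained copy of that sampling loop, run on a synthetic map without the cv2 smoothing step:

```python
import numpy as np

def get_lane(prob_map, y_px_gap, pts, thresh, resize_shape=None):
    # Standalone copy of getLane_CULane's sampling loop: walk rows bottom-up,
    # take the argmax of each sampled row, keep it if above the threshold,
    # and rescale the x coordinate from map width w to target width W.
    if resize_shape is None:
        resize_shape = prob_map.shape
    h, w = prob_map.shape
    H, W = resize_shape
    coords = np.zeros(pts)
    for i in range(pts):
        y = int(h - i * y_px_gap / H * h - 1)
        if y < 0:
            break
        row = prob_map[y, :]
        j = np.argmax(row)
        if row[j] > thresh:
            coords[i] = int(j / w * W)
    # A lane with fewer than two detected points is discarded.
    if (coords > 0).sum() < 2:
        coords = np.zeros(pts)
    return coords

# A synthetic 8x8 map with a vertical "lane" at column 5:
prob = np.zeros((8, 8))
prob[:, 5] = 0.9
coords = get_lane(prob, y_px_gap=1, pts=4, thresh=0.3)
```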
  {
    "path": "utils/tensorboard.py",
    "content": "# Code copied from pytorch-tutorial https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/04-utils/tensorboard/logger.py \nimport tensorflow as tf\nimport numpy as np\nfrom PIL import Image\nimport scipy.misc \ntry:\n    from StringIO import StringIO  # Python 2.7\nexcept ImportError:\n    from io import BytesIO         # Python 3.x\n\n\nclass TensorBoard(object):\n    \n    def __init__(self, log_dir):\n        \"\"\"Create a summary writer logging to log_dir.\"\"\"\n        self.writer = tf.summary.FileWriter(log_dir)\n\n    def scalar_summary(self, tag, value, step):\n        \"\"\"Log a scalar variable.\"\"\"\n        summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])\n        self.writer.add_summary(summary, step)\n\n    def image_summary(self, tag, images, step):\n        \"\"\"Log a list of images.\"\"\"\n\n        img_summaries = []\n        for i, img in enumerate(images):\n            # Write the image to a string\n            try:\n                s = StringIO()\n            except:\n                s = BytesIO()\n            # scipy.misc.toimage(img).save(s, format=\"png\")\n            Image.fromarray(img).save(s, format='png')\n\n\n            # Create an Image object\n            img_sum = tf.Summary.Image(encoded_image_string=s.getvalue(),\n                                       height=img.shape[0],\n                                       width=img.shape[1])\n            # Create a Summary value\n            img_summaries.append(tf.Summary.Value(tag='%s/%d' % (tag, i), image=img_sum))\n\n        # Create and write Summary\n        summary = tf.Summary(value=img_summaries)\n        self.writer.add_summary(summary, step)\n        \n    def histo_summary(self, tag, values, step, bins=1000):\n        \"\"\"Log a histogram of the tensor of values.\"\"\"\n\n        # Create a histogram using numpy\n        counts, bin_edges = np.histogram(values, bins=bins)\n\n        # Fill the fields of the histogram 
proto\n        hist = tf.HistogramProto()\n        hist.min = float(np.min(values))\n        hist.max = float(np.max(values))\n        hist.num = int(np.prod(values.shape))\n        hist.sum = float(np.sum(values))\n        hist.sum_squares = float(np.sum(values**2))\n\n        # Drop the start of the first bin\n        bin_edges = bin_edges[1:]\n\n        # Add bin edges and counts\n        for edge in bin_edges:\n            hist.bucket_limit.append(edge)\n        for c in counts:\n            hist.bucket.append(c)\n\n        # Create and write Summary\n        summary = tf.Summary(value=[tf.Summary.Value(tag=tag, histo=hist)])\n        self.writer.add_summary(summary, step)\n        self.writer.flush()\n"
  },
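`histo_summary` drops the first edge returned by `np.histogram` because `np.histogram` yields `bins + 1` edges while TensorFlow's `HistogramProto` expects one `bucket_limit` per bucket. A small check of that alignment:

```python
import numpy as np

# np.histogram returns `bins` counts but `bins + 1` edges; dropping the
# first edge leaves exactly one upper limit per bucket, as the proto expects.
values = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
counts, bin_edges = np.histogram(values, bins=4)
bin_edges = bin_edges[1:]
```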
  {
    "path": "utils/transforms/__init__.py",
    "content": "from .transforms import *\nfrom .data_augmentation import *\n"
  },
  {
    "path": "utils/transforms/data_augmentation.py",
    "content": "import random\nimport numpy as np\nimport cv2\n\nfrom utils.transforms.transforms import CustomTransform\n\n\nclass RandomFlip(CustomTransform):\n    def __init__(self, prob_x=0, prob_y=0):\n        \"\"\"\n        Arguments:\n        ----------\n        prob_x: range [0, 1], probability to use horizontal flip, setting to 0 means disabling flip\n        prob_y: range [0, 1], probability to use vertical flip\n        \"\"\"\n        self.prob_x = prob_x\n        self.prob_y = prob_y\n\n    def __call__(self, sample):\n        img = sample.get('img').copy()\n        segLabel = sample.get('segLabel', None)\n        if segLabel is not None:\n            segLabel = segLabel.copy()\n\n        flip_x = np.random.choice([False, True], p=(1 - self.prob_x, self.prob_x))\n        flip_y = np.random.choice([False, True], p=(1 - self.prob_y, self.prob_y))\n        if flip_x:\n            img = np.ascontiguousarray(np.flip(img, axis=1))\n            if segLabel is not None:\n                segLabel = np.ascontiguousarray(np.flip(segLabel, axis=1))\n\n        if flip_y:\n            img = np.ascontiguousarray(np.flip(img, axis=0))\n            if segLabel is not None:\n                segLabel = np.ascontiguousarray(np.flip(segLabel, axis=0))\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        _sample['segLabel'] = segLabel\n        return _sample\n\n\nclass Darkness(CustomTransform):\n    def __init__(self, coeff):\n        assert coeff >= 1., \"Darkness coefficient must be greater than 1\"\n        self.coeff = coeff\n\n    def __call__(self, sample):\n        img = sample.get('img')\n        coeff = np.random.uniform(1., self.coeff)\n        img = (img.astype('float32') / coeff).astype('uint8')\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        return _sample"
  },
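`Darkness` in `data_augmentation.py` darkens an image by dividing pixel intensities by a factor drawn uniformly from `[1, coeff]`. A deterministic sketch of the same operation (the helper name `darken` is mine; the random draw is replaced by an explicit factor):

```python
import numpy as np

def darken(img, coeff):
    # Divide intensities in float space, then cast back to uint8,
    # as the Darkness transform does with its randomly drawn coefficient.
    return (img.astype('float32') / coeff).astype('uint8')

out = darken(np.array([[100, 200]], dtype='uint8'), 2.0)
```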
  {
    "path": "utils/transforms/transforms.py",
    "content": "import cv2\n\nimport numpy as np\nimport torch\nfrom torchvision.transforms import Normalize as Normalize_th\n\n\nclass CustomTransform:\n    def __call__(self, *args, **kwargs):\n        raise NotImplementedError\n\n    def __str__(self):\n        return self.__class__.__name__\n\n    def __eq__(self, name):\n        return str(self) == name\n\n    def __iter__(self):\n        def iter_fn():\n            for t in [self]:\n                yield t\n        return iter_fn()\n\n    def __contains__(self, name):\n        for t in self.__iter__():\n            if isinstance(t, Compose):\n                if name in t:\n                    return True\n            elif name == t:\n                return True\n        return False\n\n\nclass Compose(CustomTransform):\n    \"\"\"\n    All transform in Compose should be able to accept two non None variable, img and boxes\n    \"\"\"\n    def __init__(self, *transforms):\n        self.transforms = [*transforms]\n\n    def __call__(self, sample):\n        for t in self.transforms:\n            sample = t(sample)\n        return sample\n\n    def __iter__(self):\n        return iter(self.transforms)\n\n    def modules(self):\n        yield self\n        for t in self.transforms:\n            if isinstance(t, Compose):\n                for _t in t.modules():\n                    yield _t\n            else:\n                yield t\n\n\nclass Resize(CustomTransform):\n    def __init__(self, size):\n        if isinstance(size, int):\n            size = (size, size)\n        self.size = size  #(W, H)\n\n    def __call__(self, sample):\n        img = sample.get('img')\n        segLabel = sample.get('segLabel', None)\n\n        img = cv2.resize(img, self.size, interpolation=cv2.INTER_CUBIC)\n        if segLabel is not None:\n            segLabel = cv2.resize(segLabel, self.size, interpolation=cv2.INTER_NEAREST)\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        _sample['segLabel'] = segLabel\n   
     return _sample\n\n    def reset_size(self, size):\n        if isinstance(size, int):\n            size = (size, size)\n        self.size = size\n\n\nclass RandomResize(Resize):\n    \"\"\"\n    Resize to (w, h), where w randomly samples from (minW, maxW) and h randomly samples from (minH, maxH)\n    \"\"\"\n    def __init__(self, minW, maxW, minH=None, maxH=None, batch=False):\n        if minH is None or maxH is None:\n            minH, maxH = minW, maxW\n        super(RandomResize, self).__init__((minW, minH))\n        self.minW = minW\n        self.maxW = maxW\n        self.minH = minH\n        self.maxH = maxH\n        self.batch = batch\n\n    def random_set_size(self):\n        w = np.random.randint(self.minW, self.maxW+1)\n        h = np.random.randint(self.minH, self.maxH+1)\n        self.reset_size((w, h))\n\n\nclass Rotation(CustomTransform):\n    def __init__(self, theta):\n        self.theta = theta\n\n    def __call__(self, sample):\n        img = sample.get('img')\n        segLabel = sample.get('segLabel', None)\n\n        u = np.random.uniform()\n        degree = (u-0.5) * self.theta\n        R = cv2.getRotationMatrix2D((img.shape[1]//2, img.shape[0]//2), degree, 1)\n        img = cv2.warpAffine(img, R, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)\n        if segLabel is not None:\n            segLabel = cv2.warpAffine(segLabel, R, (segLabel.shape[1], segLabel.shape[0]), flags=cv2.INTER_NEAREST)\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        _sample['segLabel'] = segLabel\n        return _sample\n\n    def reset_theta(self, theta):\n        self.theta = theta\n\n\nclass Normalize(CustomTransform):\n    def __init__(self, mean, std):\n        self.transform = Normalize_th(mean, std)\n\n    def __call__(self, sample):\n        img = sample.get('img')\n\n        img = self.transform(img)\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        return _sample\n\n\nclass ToTensor(CustomTransform):\n 
   def __init__(self, dtype=torch.float):\n        self.dtype=dtype\n\n    def __call__(self, sample):\n        img = sample.get('img')\n        segLabel = sample.get('segLabel', None)\n        exist = sample.get('exist', None)\n\n        img = img.transpose(2, 0, 1)\n        img = torch.from_numpy(img).type(self.dtype) / 255.\n        if segLabel is not None:\n            segLabel = torch.from_numpy(segLabel).type(torch.long)\n        if exist is not None:\n            exist = torch.from_numpy(exist).type(torch.float32)  # BCEloss requires float tensor\n\n        _sample = sample.copy()\n        _sample['img'] = img\n        _sample['segLabel'] = segLabel\n        _sample['exist'] = exist\n        return _sample\n\n\n"
  }
]
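`RandomFlip` in `utils/transforms/data_augmentation.py` flips the image and its segmentation label together so each pixel stays aligned with its class. A NumPy-only sketch of the same joint flip (the helper name `random_flip` is mine, and the probabilities are sampled per call as in the transform):

```python
import numpy as np

def random_flip(img, seg, prob_x=0.5, prob_y=0.0, rng=None):
    # Flip image and label with the same decision so they stay aligned;
    # np.ascontiguousarray keeps the flipped views C-contiguous in memory.
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < prob_x:  # horizontal flip (axis 1)
        img = np.ascontiguousarray(np.flip(img, axis=1))
        seg = np.ascontiguousarray(np.flip(seg, axis=1))
    if rng.random() < prob_y:  # vertical flip (axis 0)
        img = np.ascontiguousarray(np.flip(img, axis=0))
        seg = np.ascontiguousarray(np.flip(seg, axis=0))
    return img, seg

img = np.arange(12).reshape(3, 4)
seg = np.arange(12).reshape(3, 4)
out_img, out_seg = random_flip(img, seg, prob_x=1.0, prob_y=0.0)
```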