Repository: harryhan618/SCNN_Pytorch
Branch: master
Commit: bbe0acff4681
Files: 37
Total size: 110.2 KB
Directory structure:
gitextract_j36c990x/
├── LICENSE
├── README.md
├── config.py
├── dataset/
│ ├── CULane.py
│ ├── Tusimple.py
│ └── __init__.py
├── demo_test.py
├── experiments/
│ ├── exp0/
│ │ └── cfg.json
│ ├── exp10/
│ │ └── cfg.json
│ └── vgg_SCNN_DULR_w9/
│ ├── cfg.json
│ └── t7_to_pt.py
├── model.py
├── requirements.txt
├── test_CULane.py
├── test_tusimple.py
├── train.py
└── utils/
├── lane_evaluation/
│ ├── CULane/
│ │ ├── CMakeLists.txt
│ │ ├── Run.sh
│ │ ├── include/
│ │ │ ├── counter.hpp
│ │ │ ├── hungarianGraph.hpp
│ │ │ ├── lane_compare.hpp
│ │ │ └── spline.hpp
│ │ ├── src/
│ │ │ ├── counter.cpp
│ │ │ ├── evaluate.cpp
│ │ │ ├── lane_compare.cpp
│ │ │ └── spline.cpp
│ │ └── src_origin/
│ │ ├── counter.cpp
│ │ ├── evaluate.cpp
│ │ ├── lane_compare.cpp
│ │ └── spline.cpp
│ └── tusimple/
│ └── lane.py
├── lr_scheduler.py
├── prob2lines/
│ └── getLane.py
├── tensorboard.py
└── transforms/
├── __init__.py
├── data_augmentation.py
└── transforms.py
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2019 HarryHan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# SCNN lane detection in Pytorch
SCNN is a segmentation-based lane detection algorithm, described in ['Spatial As Deep: Spatial CNN for Traffic Scene Understanding'](https://arxiv.org/abs/1712.06080). The [official implementation](https://github.com/XingangPan/SCNN) is in Lua Torch.
This repository contains a re-implementation in PyTorch.
### Updates
- 2019 / 08 / 14: Code refactored, including more convenient test & evaluation scripts.
- 2019 / 08 / 12: Trained models on both datasets provided.
- 2019 / 05 / 08: Evaluation is provided.
- 2019 / 04 / 23: Trained model converted from [official t7 model](https://github.com/XingangPan/SCNN#Testing) is provided.
<br/>
## Data preparation
### CULane
The dataset is available at [CULane](https://xingangpan.github.io/projects/CULane.html). Please download and unzip the files into one folder, hereafter referred to as `CULane_path`. Then set `CULane_path` in `config.py`, and also set `CULane_path` as `data_dir` in `utils/lane_evaluation/CULane/Run.sh`.
```
CULane_path
├── driver_100_30frame
├── driver_161_90frame
├── driver_182_30frame
├── driver_193_90frame
├── driver_23_30frame
├── driver_37_30frame
├── laneseg_label_w16
├── laneseg_label_w16_test
└── list
```
**Note: an absolute path is encouraged.**
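For reference, `config.py` is just a small dict of dataset roots; a minimal sketch of the expected edit (the paths below are placeholders, not real locations):

```python
# config.py holds the dataset root paths; replace the placeholder values with your own.
Dataset_Path = dict(
    CULane="/absolute/path/to/CULane",      # the same path goes into Run.sh as data_dir
    Tusimple="/absolute/path/to/tusimple"
)
```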
### Tusimple
The dataset is available [here](https://github.com/TuSimple/tusimple-benchmark/issues/3). Please download and unzip the files into one folder, hereafter referred to as `Tusimple_path`. Then set `Tusimple_path` in `config.py`.
```
Tusimple_path
├── clips
├── label_data_0313.json
├── label_data_0531.json
├── label_data_0601.json
└── test_label.json
```
**Note: seg\_label images and gt.txt files, in CULane dataset format, are generated the first time a `Tusimple` object is instantiated. This may take a while.**
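Each line of the generated gt.txt follows the CULane list format: an image path, a seg-label path, and four 0/1 lane-existence flags, space-separated. A minimal parsing sketch, mirroring what `createIndex` does (the sample line and root path here are made up):

```python
import os

# Made-up example line in the generated gt.txt format:
# <image path> <seg label path> <four 0/1 lane-existence flags>
line = "/clips/0313-1/6040/20.jpg /seg_label/0313-1/6040/20.png 1 1 1 0"

tokens = line.strip().split(" ")
# Paths start with '/'; strip it so os.path.join with the dataset root works
img_path = os.path.join("/data/tusimple", tokens[0][1:])
seg_path = os.path.join("/data/tusimple", tokens[1][1:])
exist = [int(x) for x in tokens[2:]]
```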
<br/>
## Trained Model Provided
* The model trained on the CULane dataset can be converted from the [official implementation](https://github.com/XingangPan/SCNN#Testing), which can be downloaded [here](https://drive.google.com/open?id=1Wv3r3dCYNBwJdKl_WPEfrEOt-XGaROKu). Put the `vgg_SCNN_DULR_w9.t7` file into `experiments/vgg_SCNN_DULR_w9` and run:
```bash
python experiments/vgg_SCNN_DULR_w9/t7_to_pt.py
```
The converted model will be saved to `experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.pth`.
**Note**: `torch.utils.serialization` was removed in PyTorch 1.0+. Alternatively, you can directly download **the converted model [here](https://drive.google.com/open?id=1bBdN3yhoOQBC9pRtBUxzeRrKJdF7uVTJ)**.
* My model trained on Tusimple can be downloaded [here](https://drive.google.com/open?id=1IwEenTekMt-t6Yr5WJU9_kv4d_Pegd_Q). Its config file is in `exp0`.
| Accuracy | FP | FN |
| -------- | ---- | ---- |
| 94.16% |0.0735|0.0825|
* My model trained on CULane can be downloaded [here](https://drive.google.com/open?id=1AZn23w8RbMh1P6lJcVcf6PcTIWJvQg9u). Its config file is in `exp10`.
| Category | F1-measure |
| --------- | ------------------- |
| Normal | 90.26 |
| Crowded | 68.23 |
| HLight | 61.84 |
| Shadow | 61.16 |
| No line | 43.44 |
| Arrow | 84.64 |
| Curve | 61.74 |
| Crossroad | 2728 (FP measure) |
| Night | 65.32 |
<br/>
## Demo Test
For single image demo test:
```shell
python demo_test.py -i demo/demo.jpg
-w experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.pth
[--visualize / -v]
```

<br/>
## Train
1. Specify an experiment directory, e.g. `experiments/exp0`.
2. Modify the hyperparameters in `experiments/exp0/cfg.json`.
3. Start training:
```shell
python train.py --exp_dir ./experiments/exp0 [--resume/-r]
```
4. Monitor on tensorboard:
```bash
tensorboard --logdir='experiments/exp0'
```
**Note**
- My model is trained with `torch.nn.DataParallel`; modify this according to your hardware configuration.
- Currently the backbone is vgg16 from torchvision. Following the paper, two modifications are made to the torchvision model: i) the dilation of the last three conv layers is changed to 2; ii) the last two max-pooling layers are removed.
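The two changes are complementary: a 3x3 conv with dilation 2 needs its padding doubled to 2 in order to preserve spatial size, which the standard conv output-size formula makes explicit. A quick framework-independent sanity check:

```python
def conv_out(size, k=3, stride=1, pad=1, dilation=1):
    """Conv output size: floor((size + 2*pad - dilation*(k-1) - 1) / stride) + 1."""
    return (size + 2 * pad - dilation * (k - 1) - 1) // stride + 1

# A plain 3x3 conv with padding 1 preserves spatial size...
assert conv_out(100) == 100
# ...and so does dilation 2 once padding is also doubled to 2 (as done in model.py)
assert conv_out(100, pad=2, dilation=2) == 100
```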
<br/>
## Evaluation
* CULane evaluation code is ported from the [official implementation](https://github.com/XingangPan/SCNN), with an extra `CMakeLists.txt` provided.
1. Build the C++ code first:
```bash
cd utils/lane_evaluation/CULane
mkdir build && cd build
cmake ..
make
```
2. Set `root` to the absolute project path in `utils/lane_evaluation/CULane/Run.sh`.
Then run the evaluation script. Results will be saved into the corresponding `exp_dir` directory:
```shell
python test_CULane.py --exp_dir ./experiments/exp10
```
* Tusimple evaluation code is ported from the [tusimple repo](https://github.com/TuSimple/tusimple-benchmark/blob/master/evaluate/lane.py).
```shell
python test_tusimple.py --exp_dir ./experiments/exp0
```
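For orientation, the TuSimple accuracy metric counts a predicted x as correct when it lies within a pixel threshold of the ground-truth x at the same h_sample. A simplified sketch of that idea (the official lane.py additionally scales the threshold by lane angle; the function name and fixed threshold here are illustrative):

```python
def point_accuracy(pred_xs, gt_xs, thresh=20):
    """Fraction of ground-truth points matched within `thresh` pixels.
    x == -2 marks a missing point (TuSimple convention)."""
    correct = total = 0
    for p, g in zip(pred_xs, gt_xs):
        if g == -2:          # no ground-truth point at this h_sample
            continue
        total += 1
        if p != -2 and abs(p - g) <= thresh:
            correct += 1
    return correct / total if total else 0.0

# Two of the three ground-truth points are matched within 20 px
acc = point_accuracy([100, 150, -2], [105, 160, 200])
```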
## Acknowledgement
This repo is built upon the [official implementation](https://github.com/XingangPan/SCNN).
================================================
FILE: config.py
================================================
Dataset_Path = dict(
CULane = "/home/lion/Dataset/CULane/data/CULane",
Tusimple = "/home/lion/Dataset/tusimple"
)
================================================
FILE: dataset/CULane.py
================================================
import cv2
import os
import numpy as np
import torch
from torch.utils.data import Dataset
class CULane(Dataset):
def __init__(self, path, image_set, transforms=None):
super(CULane, self).__init__()
assert image_set in ('train', 'val', 'test'), "image_set is not valid!"
self.data_dir_path = path
self.image_set = image_set
self.transforms = transforms
if image_set != 'test':
self.createIndex()
else:
self.createIndex_test()
def createIndex(self):
listfile = os.path.join(self.data_dir_path, "list", "{}_gt.txt".format(self.image_set))
self.img_list = []
self.segLabel_list = []
self.exist_list = []
with open(listfile) as f:
for line in f:
line = line.strip()
l = line.split(" ")
self.img_list.append(os.path.join(self.data_dir_path, l[0][1:])) # l[0][1:] strips the leading '/' so os.path.join works
self.segLabel_list.append(os.path.join(self.data_dir_path, l[1][1:]))
self.exist_list.append([int(x) for x in l[2:]])
def createIndex_test(self):
listfile = os.path.join(self.data_dir_path, "list", "{}.txt".format(self.image_set))
self.img_list = []
with open(listfile) as f:
for line in f:
line = line.strip()
self.img_list.append(os.path.join(self.data_dir_path, line[1:])) # line[1:] strips the leading '/' so os.path.join works
def __getitem__(self, idx):
img = cv2.imread(self.img_list[idx])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if self.image_set != 'test':
segLabel = cv2.imread(self.segLabel_list[idx])[:, :, 0]
exist = np.array(self.exist_list[idx])
else:
segLabel = None
exist = None
sample = {'img': img,
'segLabel': segLabel,
'exist': exist,
'img_name': self.img_list[idx]}
if self.transforms is not None:
sample = self.transforms(sample)
return sample
def __len__(self):
return len(self.img_list)
@staticmethod
def collate(batch):
if isinstance(batch[0]['img'], torch.Tensor):
img = torch.stack([b['img'] for b in batch])
else:
img = [b['img'] for b in batch]
if batch[0]['segLabel'] is None:
segLabel = None
exist = None
elif isinstance(batch[0]['segLabel'], torch.Tensor):
segLabel = torch.stack([b['segLabel'] for b in batch])
exist = torch.stack([b['exist'] for b in batch])
else:
segLabel = [b['segLabel'] for b in batch]
exist = [b['exist'] for b in batch]
samples = {'img': img,
'segLabel': segLabel,
'exist': exist,
'img_name': [x['img_name'] for x in batch]}
return samples
================================================
FILE: dataset/Tusimple.py
================================================
import json
import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset
class Tusimple(Dataset):
"""
image_set is split into three partitions: train, val, test.
train includes label_data_0313.json, label_data_0601.json
val includes label_data_0531.json
test includes test_label.json
"""
TRAIN_SET = ['label_data_0313.json', 'label_data_0601.json']
VAL_SET = ['label_data_0531.json']
TEST_SET = ['test_label.json']
def __init__(self, path, image_set, transforms=None):
super(Tusimple, self).__init__()
assert image_set in ('train', 'val', 'test'), "image_set is not valid!"
self.data_dir_path = path
self.image_set = image_set
self.transforms = transforms
if not os.path.exists(os.path.join(path, "seg_label")):
print("Labels will be generated into dir: {} ...".format(os.path.join(path, "seg_label")))
self.generate_label()
self.createIndex()
def createIndex(self):
self.img_list = []
self.segLabel_list = []
self.exist_list = []
listfile = os.path.join(self.data_dir_path, "seg_label", "list", "{}_gt.txt".format(self.image_set))
if not os.path.exists(listfile):
raise FileNotFoundError("List file doesn't exist. Label has to be generated! ...")
with open(listfile) as f:
for line in f:
line = line.strip()
l = line.split(" ")
self.img_list.append(os.path.join(self.data_dir_path, l[0][1:])) # l[0][1:] strips the leading '/' so os.path.join works
self.segLabel_list.append(os.path.join(self.data_dir_path, l[1][1:]))
self.exist_list.append([int(x) for x in l[2:]])
def __getitem__(self, idx):
img = cv2.imread(self.img_list[idx])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if self.image_set != 'test':
segLabel = cv2.imread(self.segLabel_list[idx])[:, :, 0]
exist = np.array(self.exist_list[idx])
else:
segLabel = None
exist = None
sample = {'img': img,
'segLabel': segLabel,
'exist': exist,
'img_name': self.img_list[idx]}
if self.transforms is not None:
sample = self.transforms(sample)
return sample
def __len__(self):
return len(self.img_list)
def generate_label(self):
save_dir = os.path.join(self.data_dir_path, "seg_label")
os.makedirs(save_dir, exist_ok=True)
# --------- merge json into one file ---------
with open(os.path.join(save_dir, "train.json"), "w") as outfile:
for json_name in self.TRAIN_SET:
with open(os.path.join(self.data_dir_path, json_name)) as infile:
for line in infile:
outfile.write(line)
with open(os.path.join(save_dir, "val.json"), "w") as outfile:
for json_name in self.VAL_SET:
with open(os.path.join(self.data_dir_path, json_name)) as infile:
for line in infile:
outfile.write(line)
with open(os.path.join(save_dir, "test.json"), "w") as outfile:
for json_name in self.TEST_SET:
with open(os.path.join(self.data_dir_path, json_name)) as infile:
for line in infile:
outfile.write(line)
self._gen_label_for_json('train')
print("train set is done")
self._gen_label_for_json('val')
print("val set is done")
self._gen_label_for_json('test')
print("test set is done")
def _gen_label_for_json(self, image_set):
H, W = 720, 1280
SEG_WIDTH = 30
save_dir = "seg_label"
os.makedirs(os.path.join(self.data_dir_path, save_dir, "list"), exist_ok=True)
list_f = open(os.path.join(self.data_dir_path, save_dir, "list", "{}_gt.txt".format(image_set)), "w")
json_path = os.path.join(self.data_dir_path, save_dir, "{}.json".format(image_set))
with open(json_path) as f:
for line in f:
label = json.loads(line)
# ---------- clean and sort lanes -------------
lanes = []
_lanes = []
slope = [] # identify 1st, 2nd, 3rd, 4th lane through slope
for i in range(len(label['lanes'])):
l = [(x, y) for x, y in zip(label['lanes'][i], label['h_samples']) if x >= 0]
if len(l) > 1:
_lanes.append(l)
slope.append(np.arctan2(l[-1][1]-l[0][1], l[0][0]-l[-1][0]) / np.pi * 180)
_lanes = [_lanes[i] for i in np.argsort(slope)]
slope = [slope[i] for i in np.argsort(slope)]
idx_1 = None
idx_2 = None
idx_3 = None
idx_4 = None
for i in range(len(slope)):
if slope[i]<=90:
idx_2 = i
idx_1 = i-1 if i>0 else None
else:
idx_3 = i
idx_4 = i+1 if i+1 < len(slope) else None
break
lanes.append([] if idx_1 is None else _lanes[idx_1])
lanes.append([] if idx_2 is None else _lanes[idx_2])
lanes.append([] if idx_3 is None else _lanes[idx_3])
lanes.append([] if idx_4 is None else _lanes[idx_4])
# ---------------------------------------------
img_path = label['raw_file']
seg_img = np.zeros((H, W, 3))
list_str = [] # str to be written to list.txt
for i in range(len(lanes)):
coords = lanes[i]
if len(coords) < 4:
list_str.append('0')
continue
for j in range(len(coords)-1):
cv2.line(seg_img, coords[j], coords[j+1], (i+1, i+1, i+1), SEG_WIDTH//2)
list_str.append('1')
seg_path = img_path.split("/")
seg_path, img_name = os.path.join(self.data_dir_path, save_dir, seg_path[1], seg_path[2]), seg_path[3]
os.makedirs(seg_path, exist_ok=True)
seg_path = os.path.join(seg_path, img_name[:-3]+"png")
cv2.imwrite(seg_path, seg_img)
seg_path = "/".join([save_dir, *img_path.split("/")[1:3], img_name[:-3]+"png"])
if seg_path[0] != '/':
seg_path = '/' + seg_path
if img_path[0] != '/':
img_path = '/' + img_path
list_str.insert(0, seg_path)
list_str.insert(0, img_path)
list_str = " ".join(list_str) + "\n"
list_f.write(list_str)
list_f.close()
@staticmethod
def collate(batch):
if isinstance(batch[0]['img'], torch.Tensor):
img = torch.stack([b['img'] for b in batch])
else:
img = [b['img'] for b in batch]
if batch[0]['segLabel'] is None:
segLabel = None
exist = None
elif isinstance(batch[0]['segLabel'], torch.Tensor):
segLabel = torch.stack([b['segLabel'] for b in batch])
exist = torch.stack([b['exist'] for b in batch])
else:
segLabel = [b['segLabel'] for b in batch]
exist = [b['exist'] for b in batch]
samples = {'img': img,
'segLabel': segLabel,
'exist': exist,
'img_name': [x['img_name'] for x in batch]}
return samples
================================================
FILE: dataset/__init__.py
================================================
from .CULane import CULane
from .Tusimple import Tusimple
================================================
FILE: demo_test.py
================================================
import argparse
import cv2
import torch
from model import SCNN
from utils.prob2lines import getLane
from utils.transforms import *
net = SCNN(input_size=(800, 288), pretrained=False)
# CULane mean and std
mean = (0.3598, 0.3653, 0.3662)
std = (0.2573, 0.2663, 0.2756)
transform_img = Resize((800, 288))
transform_to_net = Compose(ToTensor(), Normalize(mean=mean, std=std))
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--img_path", '-i', type=str, default="demo/demo.jpg", help="Path to demo img")
parser.add_argument("--weight_path", '-w', type=str, help="Path to model weights")
parser.add_argument("--visualize", '-v', action="store_true", default=False, help="Visualize the result")
args = parser.parse_args()
return args
def main():
args = parse_args()
img_path = args.img_path
weight_path = args.weight_path
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = transform_img({'img': img})['img']
x = transform_to_net({'img': img})['img']
x.unsqueeze_(0)
save_dict = torch.load(weight_path, map_location='cpu')
net.load_state_dict(save_dict['net'])
net.eval()
seg_pred, exist_pred = net(x)[:2]
seg_pred = seg_pred.detach().cpu().numpy()
exist_pred = exist_pred.detach().cpu().numpy()
seg_pred = seg_pred[0]
exist = [1 if exist_pred[0, i] > 0.5 else 0 for i in range(4)]
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
lane_img = np.zeros_like(img)
color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8')
coord_mask = np.argmax(seg_pred, axis=0)
for i in range(0, 4):
if exist_pred[0, i] > 0.5:
lane_img[coord_mask == (i + 1)] = color[i]
img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=img, beta=1., gamma=0.)
cv2.imwrite("demo/demo_result.jpg", img)
for x in getLane.prob2lines_CULane(seg_pred, exist):
print(x)
if args.visualize:
print([1 if exist_pred[0, i] > 0.5 else 0 for i in range(4)])
cv2.imshow("", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
================================================
FILE: experiments/exp0/cfg.json
================================================
{
"device": "cuda:0",
"MAX_EPOCHES": 60,
"dataset": {
"dataset_name": "Tusimple",
"batch_size": 32,
"resize_shape": [512, 288]
},
"optim": {
"lr": 15e-2,
"momentum": 0.9,
"weight_decay": 1e-4,
"nesterov": true
},
"lr_scheduler": {
"warmup": 20,
"max_iter": 1500,
"min_lrs": 1e-10
},
"model": {
"scale_exist": 0.07
}
}
================================================
FILE: experiments/exp10/cfg.json
================================================
{
"device": "cuda:0",
"MAX_EPOCHES": 30,
"dataset": {
"dataset_name": "CULane",
"batch_size": 128,
"resize_shape": [800, 288]
},
"optim": {
"lr": 16e-2,
"momentum": 0.9,
"weight_decay": 1e-3,
"nesterov": true
},
"lr_scheduler": {
"warmup": 50,
"max_iter": 8000
}
}
================================================
FILE: experiments/vgg_SCNN_DULR_w9/cfg.json
================================================
{
"device": "cuda:0",
"dataset": {
"dataset_name": "CULane",
"resize_shape": [800, 288]
}
}
================================================
FILE: experiments/vgg_SCNN_DULR_w9/t7_to_pt.py
================================================
import sys
import os
abs_file_path = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(abs_file_path, "..", "..")) # add path
import torch
import torch.nn as nn
import collections
from torch.utils.serialization import load_lua
from model import SCNN
model1 = load_lua('experiments/vgg_SCNN_DULR_w9/vgg_SCNN_DULR_w9.t7', unknown_classes=True)
model2 = collections.OrderedDict()
model2['backbone.0.weight'] = model1.modules[0].weight
model2['backbone.1.weight'] = model1.modules[1].weight
model2['backbone.1.bias'] = model1.modules[1].bias
model2['backbone.1.running_mean'] = model1.modules[1].running_mean
model2['backbone.1.running_var'] = model1.modules[1].running_var
model2['backbone.3.weight'] = model1.modules[3].weight
model2['backbone.4.weight'] = model1.modules[4].weight
model2['backbone.4.bias'] = model1.modules[4].bias
model2['backbone.4.running_mean'] = model1.modules[4].running_mean
model2['backbone.4.running_var'] = model1.modules[4].running_var
model2['backbone.7.weight'] = model1.modules[7].weight
model2['backbone.8.weight'] = model1.modules[8].weight
model2['backbone.8.bias'] = model1.modules[8].bias
model2['backbone.8.running_mean'] = model1.modules[8].running_mean
model2['backbone.8.running_var'] = model1.modules[8].running_var
model2['backbone.10.weight'] = model1.modules[10].weight
model2['backbone.11.weight'] = model1.modules[11].weight
model2['backbone.11.bias'] = model1.modules[11].bias
model2['backbone.11.running_mean'] = model1.modules[11].running_mean
model2['backbone.11.running_var'] = model1.modules[11].running_var
model2['backbone.14.weight'] = model1.modules[14].weight
model2['backbone.15.weight'] = model1.modules[15].weight
model2['backbone.15.bias'] = model1.modules[15].bias
model2['backbone.15.running_mean'] = model1.modules[15].running_mean
model2['backbone.15.running_var'] = model1.modules[15].running_var
model2['backbone.17.weight'] = model1.modules[17].weight
model2['backbone.18.weight'] = model1.modules[18].weight
model2['backbone.18.bias'] = model1.modules[18].bias
model2['backbone.18.running_mean'] = model1.modules[18].running_mean
model2['backbone.18.running_var'] = model1.modules[18].running_var
model2['backbone.20.weight'] = model1.modules[20].weight
model2['backbone.21.weight'] = model1.modules[21].weight
model2['backbone.21.bias'] = model1.modules[21].bias
model2['backbone.21.running_mean'] = model1.modules[21].running_mean
model2['backbone.21.running_var'] = model1.modules[21].running_var
model2['backbone.24.weight'] = model1.modules[24].weight
model2['backbone.25.weight'] = model1.modules[25].weight
model2['backbone.25.bias'] = model1.modules[25].bias
model2['backbone.25.running_mean'] = model1.modules[25].running_mean
model2['backbone.25.running_var'] = model1.modules[25].running_var
model2['backbone.27.weight'] = model1.modules[27].weight
model2['backbone.28.weight'] = model1.modules[28].weight
model2['backbone.28.bias'] = model1.modules[28].bias
model2['backbone.28.running_mean'] = model1.modules[28].running_mean
model2['backbone.28.running_var'] = model1.modules[28].running_var
model2['backbone.30.weight'] = model1.modules[30].weight
model2['backbone.31.weight'] = model1.modules[31].weight
model2['backbone.31.bias'] = model1.modules[31].bias
model2['backbone.31.running_mean'] = model1.modules[31].running_mean
model2['backbone.31.running_var'] = model1.modules[31].running_var
model2['backbone.34.weight'] = model1.modules[33].weight
model2['backbone.35.weight'] = model1.modules[34].weight
model2['backbone.35.bias'] = model1.modules[34].bias
model2['backbone.35.running_mean'] = model1.modules[34].running_mean
model2['backbone.35.running_var'] = model1.modules[34].running_var
model2['backbone.37.weight'] = model1.modules[36].weight
model2['backbone.38.weight'] = model1.modules[37].weight
model2['backbone.38.bias'] = model1.modules[37].bias
model2['backbone.38.running_mean'] = model1.modules[37].running_mean
model2['backbone.38.running_var'] = model1.modules[37].running_var
model2['backbone.40.weight'] = model1.modules[39].weight
model2['backbone.41.weight'] = model1.modules[40].weight
model2['backbone.41.bias'] = model1.modules[40].bias
model2['backbone.41.running_mean'] = model1.modules[40].running_mean
model2['backbone.41.running_var'] = model1.modules[40].running_var
model2['layer1.0.weight'] = model1.modules[42].modules[0].weight
model2['layer1.1.weight'] = model1.modules[42].modules[1].weight
model2['layer1.1.bias'] = model1.modules[42].modules[1].bias
model2['layer1.1.running_mean'] = model1.modules[42].modules[1].running_mean
model2['layer1.1.running_var'] = model1.modules[42].modules[1].running_var
model2['layer1.3.weight'] = model1.modules[42].modules[3].weight
model2['layer1.4.weight'] = model1.modules[42].modules[4].weight
model2['layer1.4.bias'] = model1.modules[42].modules[4].bias
model2['layer1.4.running_mean'] = model1.modules[42].modules[4].running_mean
model2['layer1.4.running_var'] = model1.modules[42].modules[4].running_var
model2['message_passing.up_down.weight'] = model1.modules[42].modules[6].modules[0].modules[0].modules[2].modules[0].modules[1].modules[1].modules[0].weight
model2['message_passing.down_up.weight'] = model1.modules[42].modules[6].modules[0].modules[0].modules[140].modules[1].modules[2].modules[0].modules[0].weight
model2['message_passing.left_right.weight'] = model1.modules[42].modules[6].modules[1].modules[0].modules[2].modules[0].modules[1].modules[1].modules[0].weight
model2['message_passing.right_left.weight'] = model1.modules[42].modules[6].modules[1].modules[0].modules[396].modules[1].modules[2].modules[0].modules[0].weight
model2['layer2.1.weight'] = model1.modules[42].modules[8].weight
model2['layer2.1.bias'] = model1.modules[42].modules[8].bias
model2['fc.0.weight'] = model1.modules[43].modules[1].modules[3].weight
model2['fc.0.bias'] = model1.modules[43].modules[1].modules[3].bias
model2['fc.2.weight'] = model1.modules[43].modules[1].modules[5].weight
model2['fc.2.bias'] = model1.modules[43].modules[1].modules[5].bias
save_name = os.path.join('experiments', 'vgg_SCNN_DULR_w9', 'vgg_SCNN_DULR_w9.pth')
torch.save(model2, save_name)
# load and save again
net = SCNN(input_size=(800, 288), pretrained=False)
d = torch.load(save_name)
net.load_state_dict(d, strict=False)
for m in net.backbone.modules():
if isinstance(m, nn.Conv2d):
if m.bias is not None:
m.bias.data.zero_()
save_dict = {
"epoch": 0,
"net": net.state_dict(),
"optim": None,
"lr_scheduler": None
}
if not os.path.exists(os.path.join('experiments', 'vgg_SCNN_DULR_w9')):
os.makedirs(os.path.join('experiments', 'vgg_SCNN_DULR_w9'), exist_ok=True)
torch.save(save_dict, save_name)
================================================
FILE: model.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
class SCNN(nn.Module):
def __init__(
self,
input_size,
ms_ks=9,
pretrained=True
):
"""
Argument
ms_ks: kernel size in message passing conv
"""
super(SCNN, self).__init__()
self.pretrained = pretrained
self.net_init(input_size, ms_ks)
if not pretrained:
self.weight_init()
self.scale_background = 0.4
self.scale_seg = 1.0
self.scale_exist = 0.1
self.ce_loss = nn.CrossEntropyLoss(weight=torch.tensor([self.scale_background, 1, 1, 1, 1]))
self.bce_loss = nn.BCELoss()
def forward(self, img, seg_gt=None, exist_gt=None):
x = self.backbone(img)
x = self.layer1(x)
x = self.message_passing_forward(x)
x = self.layer2(x)
seg_pred = F.interpolate(x, scale_factor=8, mode='bilinear', align_corners=True)
x = self.layer3(x)
x = x.view(-1, self.fc_input_feature)
exist_pred = self.fc(x)
if seg_gt is not None and exist_gt is not None:
loss_seg = self.ce_loss(seg_pred, seg_gt)
loss_exist = self.bce_loss(exist_pred, exist_gt)
loss = loss_seg * self.scale_seg + loss_exist * self.scale_exist
else:
loss_seg = torch.tensor(0, dtype=img.dtype, device=img.device)
loss_exist = torch.tensor(0, dtype=img.dtype, device=img.device)
loss = torch.tensor(0, dtype=img.dtype, device=img.device)
return seg_pred, exist_pred, loss_seg, loss_exist, loss
def message_passing_forward(self, x):
Vertical = [True, True, False, False]
Reverse = [False, True, False, True]
for ms_conv, v, r in zip(self.message_passing, Vertical, Reverse):
x = self.message_passing_once(x, ms_conv, v, r)
return x
def message_passing_once(self, x, conv, vertical=True, reverse=False):
"""
Argument:
----------
x: input tensor
vertical: vertical message passing or horizontal
reverse: False for up-down or left-right, True for down-up or right-left
"""
nB, C, H, W = x.shape
if vertical:
slices = [x[:, :, i:(i + 1), :] for i in range(H)]
dim = 2
else:
slices = [x[:, :, :, i:(i + 1)] for i in range(W)]
dim = 3
if reverse:
slices = slices[::-1]
out = [slices[0]]
for i in range(1, len(slices)):
out.append(slices[i] + F.relu(conv(out[i - 1])))
if reverse:
out = out[::-1]
return torch.cat(out, dim=dim)
def net_init(self, input_size, ms_ks):
input_w, input_h = input_size
self.fc_input_feature = 5 * int(input_w/16) * int(input_h/16)
self.backbone = models.vgg16_bn(pretrained=self.pretrained).features
# ----------------- process backbone -----------------
for i in [34, 37, 40]:
conv = self.backbone._modules[str(i)]
dilated_conv = nn.Conv2d(
conv.in_channels, conv.out_channels, conv.kernel_size, stride=conv.stride,
padding=tuple(p * 2 for p in conv.padding), dilation=2, bias=(conv.bias is not None)
)
dilated_conv.load_state_dict(conv.state_dict())
self.backbone._modules[str(i)] = dilated_conv
self.backbone._modules.pop('33')
self.backbone._modules.pop('43')
# ----------------- SCNN part -----------------
self.layer1 = nn.Sequential(
nn.Conv2d(512, 1024, 3, padding=4, dilation=4, bias=False),
nn.BatchNorm2d(1024),
nn.ReLU(),
nn.Conv2d(1024, 128, 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU() # (nB, 128, 36, 100)
)
# ----------------- add message passing -----------------
self.message_passing = nn.ModuleList()
self.message_passing.add_module('up_down', nn.Conv2d(128, 128, (1, ms_ks), padding=(0, ms_ks // 2), bias=False))
self.message_passing.add_module('down_up', nn.Conv2d(128, 128, (1, ms_ks), padding=(0, ms_ks // 2), bias=False))
self.message_passing.add_module('left_right',
nn.Conv2d(128, 128, (ms_ks, 1), padding=(ms_ks // 2, 0), bias=False))
self.message_passing.add_module('right_left',
nn.Conv2d(128, 128, (ms_ks, 1), padding=(ms_ks // 2, 0), bias=False))
# (nB, 128, 36, 100)
# ----------------- SCNN part -----------------
self.layer2 = nn.Sequential(
nn.Dropout2d(0.1),
nn.Conv2d(128, 5, 1) # get (nB, 5, 36, 100)
)
self.layer3 = nn.Sequential(
nn.Softmax(dim=1), # (nB, 5, 36, 100)
nn.AvgPool2d(2, 2), # (nB, 5, 18, 50)
)
self.fc = nn.Sequential(
nn.Linear(self.fc_input_feature, 128),
nn.ReLU(),
nn.Linear(128, 4),
nn.Sigmoid()
)
def weight_init(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.reset_parameters()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data[:] = 1.
m.bias.data.zero_()
================================================
FILE: requirements.txt
================================================
numpy
opencv-python
torch>=0.4.1
torchvision
================================================
FILE: test_CULane.py
================================================
import argparse
import json
import os
import torch.nn.functional as F
from torch.utils.data import DataLoader
from tqdm import tqdm
import dataset
from config import *
from model import SCNN
from utils.prob2lines import getLane
from utils.transforms import *
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--exp_dir", type=str, default="./experiments/exp10")
args = parser.parse_args()
return args
# ------------ config ------------
args = parse_args()
exp_dir = args.exp_dir
exp_name = exp_dir.split('/')[-1]
with open(os.path.join(exp_dir, "cfg.json")) as f:
exp_cfg = json.load(f)
resize_shape = tuple(exp_cfg['dataset']['resize_shape'])
device = torch.device('cuda')
def split_path(path):
"""split path tree into list"""
folders = []
while True:
path, folder = os.path.split(path)
if folder != "":
folders.insert(0, folder)
else:
if path != "":
folders.insert(0, path)
break
return folders
# ------------ data and model ------------
# # CULane mean, std
# mean=(0.3598, 0.3653, 0.3662)
# std=(0.2573, 0.2663, 0.2756)
# Imagenet mean, std
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
dataset_name = exp_cfg['dataset'].pop('dataset_name')
Dataset_Type = getattr(dataset, dataset_name)
transform = Compose(Resize(resize_shape), ToTensor(),
Normalize(mean=mean, std=std))
test_dataset = Dataset_Type(Dataset_Path[dataset_name], "test", transform)
test_loader = DataLoader(test_dataset, batch_size=64, collate_fn=test_dataset.collate, num_workers=4)
net = SCNN(resize_shape, pretrained=False)
save_name = os.path.join(exp_dir, exp_name + '_best.pth')
save_dict = torch.load(save_name, map_location='cpu')
print("\nloading", save_name, "...... From Epoch: ", save_dict['epoch'])
net.load_state_dict(save_dict['net'])
net = torch.nn.DataParallel(net.to(device))
net.eval()
# ------------ test ------------
out_path = os.path.join(exp_dir, "coord_output")
evaluation_path = os.path.join(exp_dir, "evaluate")
if not os.path.exists(out_path):
os.mkdir(out_path)
if not os.path.exists(evaluation_path):
os.mkdir(evaluation_path)
progressbar = tqdm(range(len(test_loader)))
with torch.no_grad():
for batch_idx, sample in enumerate(test_loader):
img = sample['img'].to(device)
img_name = sample['img_name']
seg_pred, exist_pred = net(img)[:2]
seg_pred = F.softmax(seg_pred, dim=1)
seg_pred = seg_pred.detach().cpu().numpy()
exist_pred = exist_pred.detach().cpu().numpy()
for b in range(len(seg_pred)):
seg = seg_pred[b]
exist = [1 if exist_pred[b, i] > 0.5 else 0 for i in range(4)]
lane_coords = getLane.prob2lines_CULane(seg, exist, resize_shape=(590, 1640), y_px_gap=20, pts=18)
path_tree = split_path(img_name[b])
save_dir, save_name = path_tree[-3:-1], path_tree[-1]
save_dir = os.path.join(out_path, *save_dir)
save_name = save_name[:-3] + "lines.txt"
save_name = os.path.join(save_dir, save_name)
os.makedirs(save_dir, exist_ok=True)
with open(save_name, "w") as f:
    for l in lane_coords:
        if len(l) == 0:  # skip lanes with no sampled points
            continue
        for (x, y) in l:
            print("{} {}".format(x, y), end=" ", file=f)
        print(file=f)
progressbar.update(1)
progressbar.close()
# ---- evaluate ----
os.system("sh utils/lane_evaluation/CULane/Run.sh " + exp_name)
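The coordinate extraction above is delegated to `prob2lines_CULane` in `utils/prob2lines/getLane.py`, which is not included in this excerpt. As a rough single-lane sketch of the idea (an assumption about its behavior, not the repo's actual implementation): walk up the probability map at fixed vertical gaps and keep the most likely x per sampled row.

```python
import numpy as np

def prob_to_lane_sketch(prob_map, y_px_gap=20, pts=18, thresh=0.3):
    """Minimal sketch: from a single-lane probability map (H x W),
    sample one (x, y) per row at fixed y gaps, bottom-up, keeping a
    point only where the probability clears the threshold."""
    h, w = prob_map.shape
    coords = []
    for i in range(pts):
        y = h - 1 - i * y_px_gap
        if y < 0:
            break
        row = prob_map[y]
        x = int(np.argmax(row))
        if row[x] > thresh:
            coords.append((x, y))
    return coords
```

The real function additionally handles the per-lane channels of the softmax output and rescales coordinates to the original (590, 1640) resolution.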
================================================
FILE: test_tusimple.py
================================================
import argparse
import json
import os
import torch.nn.functional as F
from torch.utils.data import DataLoader
from tqdm import tqdm
import dataset
from config import *
from model import SCNN
from utils.prob2lines import getLane
from utils.transforms import *
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--exp_dir", type=str, default="./experiments/exp0")
args = parser.parse_args()
return args
# ------------ config ------------
args = parse_args()
exp_dir = args.exp_dir.rstrip('/')  # tolerate a trailing slash in --exp_dir
exp_name = exp_dir.split('/')[-1]
with open(os.path.join(exp_dir, "cfg.json")) as f:
exp_cfg = json.load(f)
resize_shape = tuple(exp_cfg['dataset']['resize_shape'])
device = torch.device('cuda')
def split_path(path):
"""split path tree into list"""
folders = []
while True:
path, folder = os.path.split(path)
if folder != "":
folders.insert(0, folder)
else:
if path != "":
folders.insert(0, path)
break
return folders
# ------------ data and model ------------
# # CULane mean, std
# mean=(0.3598, 0.3653, 0.3662)
# std=(0.2573, 0.2663, 0.2756)
# Imagenet mean, std
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
transform = Compose(Resize(resize_shape), ToTensor(),
Normalize(mean=mean, std=std))
dataset_name = exp_cfg['dataset'].pop('dataset_name')
Dataset_Type = getattr(dataset, dataset_name)
test_dataset = Dataset_Type(Dataset_Path[dataset_name], "test", transform)
test_loader = DataLoader(test_dataset, batch_size=32, collate_fn=test_dataset.collate, num_workers=4)
net = SCNN(input_size=resize_shape, pretrained=False)
save_name = os.path.join(exp_dir, exp_name + '_best.pth')
save_dict = torch.load(save_name, map_location='cpu')
print("\nloading", save_name, "...... From Epoch: ", save_dict['epoch'])
net.load_state_dict(save_dict['net'])
net = torch.nn.DataParallel(net.to(device))
net.eval()
# ------------ test ------------
out_path = os.path.join(exp_dir, "coord_output")
evaluation_path = os.path.join(exp_dir, "evaluate")
if not os.path.exists(out_path):
os.mkdir(out_path)
if not os.path.exists(evaluation_path):
os.mkdir(evaluation_path)
dump_to_json = []
progressbar = tqdm(range(len(test_loader)))
with torch.no_grad():
for batch_idx, sample in enumerate(test_loader):
img = sample['img'].to(device)
img_name = sample['img_name']
seg_pred, exist_pred = net(img)[:2]
seg_pred = F.softmax(seg_pred, dim=1)
seg_pred = seg_pred.detach().cpu().numpy()
exist_pred = exist_pred.detach().cpu().numpy()
for b in range(len(seg_pred)):
seg = seg_pred[b]
exist = [1 if exist_pred[b, i] > 0.5 else 0 for i in range(4)]
lane_coords = getLane.prob2lines_tusimple(seg, exist, resize_shape=(720, 1280), y_px_gap=10, pts=56)
for i in range(len(lane_coords)):
lane_coords[i] = sorted(lane_coords[i], key=lambda pair: pair[1])
path_tree = split_path(img_name[b])
save_dir, save_name = path_tree[-3:-1], path_tree[-1]
save_dir = os.path.join(out_path, *save_dir)
save_name = save_name[:-3] + "lines.txt"
save_name = os.path.join(save_dir, save_name)
os.makedirs(save_dir, exist_ok=True)
with open(save_name, "w") as f:
for l in lane_coords:
for (x, y) in l:
print("{} {}".format(x, y), end=" ", file=f)
print(file=f)
json_dict = {}
json_dict['lanes'] = []
json_dict['h_sample'] = []
json_dict['raw_file'] = os.path.join(*path_tree[-4:])
json_dict['run_time'] = 0
for l in lane_coords:
if len(l) == 0:
continue
json_dict['lanes'].append([])
for (x, y) in l:
json_dict['lanes'][-1].append(int(x))
# take the y samples from the first non-empty lane; lane_coords[0] may be empty
for l in lane_coords:
    if len(l) > 0:
        json_dict['h_sample'] = [y for (x, y) in l]
        break
dump_to_json.append(json.dumps(json_dict))
progressbar.update(1)
progressbar.close()
with open(os.path.join(out_path, "predict_test.json"), "w") as f:
for line in dump_to_json:
print(line, end="\n", file=f)
# ---- evaluate ----
from utils.lane_evaluation.tusimple.lane import LaneEval
eval_result = LaneEval.bench_one_submit(os.path.join(out_path, "predict_test.json"),
os.path.join(Dataset_Path['Tusimple'], 'test_label.json'))
print(eval_result)
with open(os.path.join(evaluation_path, "evaluation_result.txt"), "w") as f:
print(eval_result, file=f)
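Each entry appended to `dump_to_json` above becomes one line of the submission file: a JSON object per image. A minimal sketch of building one such record, using the same keys as the loop above (`lanes`, `h_sample`, `raw_file`, `run_time`):

```python
import json

def make_record(lane_coords, raw_file):
    """Serialize one image's lane coordinates as a single JSON line,
    mirroring the keys used in the test loop above."""
    lanes = [[int(x) for x, y in l] for l in lane_coords if len(l) > 0]
    # y samples come from the first non-empty lane (all lanes share the same ys)
    h_sample = next(([y for x, y in l] for l in lane_coords if len(l) > 0), [])
    return json.dumps({'lanes': lanes, 'h_sample': h_sample,
                       'raw_file': raw_file, 'run_time': 0})
```

The exact key names expected on the evaluation side are defined by `utils/lane_evaluation/tusimple/lane.py`, which is not shown in this excerpt.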
================================================
FILE: train.py
================================================
import argparse
import json
import os
import shutil
import time
import torch.optim as optim
from torch.utils.data import DataLoader
from tqdm import tqdm
from config import *
import dataset
from model import SCNN
from utils.tensorboard import TensorBoard
from utils.transforms import *
from utils.lr_scheduler import PolyLR
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--exp_dir", type=str, default="./experiments/exp0")
parser.add_argument("--resume", "-r", action="store_true")
args = parser.parse_args()
return args
args = parse_args()
# ------------ config ------------
exp_dir = args.exp_dir
exp_dir = exp_dir.rstrip('/')
exp_name = exp_dir.split('/')[-1]
with open(os.path.join(exp_dir, "cfg.json")) as f:
exp_cfg = json.load(f)
resize_shape = tuple(exp_cfg['dataset']['resize_shape'])
device = torch.device(exp_cfg['device'])
tensorboard = TensorBoard(exp_dir)
# ------------ train data ------------
# # CULane mean, std
# mean=(0.3598, 0.3653, 0.3662)
# std=(0.2573, 0.2663, 0.2756)
# Imagenet mean, std
mean = (0.485, 0.456, 0.406)
std = (0.229, 0.224, 0.225)
transform_train = Compose(Resize(resize_shape), Rotation(2), ToTensor(),
Normalize(mean=mean, std=std))
dataset_name = exp_cfg['dataset'].pop('dataset_name')
Dataset_Type = getattr(dataset, dataset_name)
train_dataset = Dataset_Type(Dataset_Path[dataset_name], "train", transform_train)
train_loader = DataLoader(train_dataset, batch_size=exp_cfg['dataset']['batch_size'], shuffle=True, collate_fn=train_dataset.collate, num_workers=8)
# ------------ val data ------------
transform_val_img = Resize(resize_shape)
transform_val_x = Compose(ToTensor(), Normalize(mean=mean, std=std))
transform_val = Compose(transform_val_img, transform_val_x)
val_dataset = Dataset_Type(Dataset_Path[dataset_name], "val", transform_val)
val_loader = DataLoader(val_dataset, batch_size=8, collate_fn=val_dataset.collate, num_workers=4)
# ------------ preparation ------------
net = SCNN(resize_shape, pretrained=True)
net = net.to(device)
net = torch.nn.DataParallel(net)
optimizer = optim.SGD(net.parameters(), **exp_cfg['optim'])
lr_scheduler = PolyLR(optimizer, 0.9, **exp_cfg['lr_scheduler'])
best_val_loss = 1e6
def train(epoch):
print("Train Epoch: {}".format(epoch))
net.train()
train_loss = 0
train_loss_seg = 0
train_loss_exist = 0
progressbar = tqdm(range(len(train_loader)))
for batch_idx, sample in enumerate(train_loader):
img = sample['img'].to(device)
segLabel = sample['segLabel'].to(device)
exist = sample['exist'].to(device)
optimizer.zero_grad()
seg_pred, exist_pred, loss_seg, loss_exist, loss = net(img, segLabel, exist)
if isinstance(net, torch.nn.DataParallel):
loss_seg = loss_seg.sum()
loss_exist = loss_exist.sum()
loss = loss.sum()
loss.backward()
optimizer.step()
lr_scheduler.step()
iter_idx = epoch * len(train_loader) + batch_idx
train_loss = loss.item()
train_loss_seg = loss_seg.item()
train_loss_exist = loss_exist.item()
progressbar.set_description("batch loss: {:.3f}".format(loss.item()))
progressbar.update(1)
lr = optimizer.param_groups[0]['lr']
tensorboard.scalar_summary(exp_name + "/train_loss", train_loss, iter_idx)
tensorboard.scalar_summary(exp_name + "/train_loss_seg", train_loss_seg, iter_idx)
tensorboard.scalar_summary(exp_name + "/train_loss_exist", train_loss_exist, iter_idx)
tensorboard.scalar_summary(exp_name + "/learning_rate", lr, iter_idx)
progressbar.close()
tensorboard.writer.flush()
if epoch % 1 == 0:
save_dict = {
"epoch": epoch,
"net": net.module.state_dict() if isinstance(net, torch.nn.DataParallel) else net.state_dict(),
"optim": optimizer.state_dict(),
"lr_scheduler": lr_scheduler.state_dict(),
"best_val_loss": best_val_loss
}
save_name = os.path.join(exp_dir, exp_name + '.pth')
torch.save(save_dict, save_name)
print("model is saved: {}".format(save_name))
print("------------------------\n")
def val(epoch):
global best_val_loss
print("Val Epoch: {}".format(epoch))
net.eval()
val_loss = 0
val_loss_seg = 0
val_loss_exist = 0
progressbar = tqdm(range(len(val_loader)))
with torch.no_grad():
for batch_idx, sample in enumerate(val_loader):
img = sample['img'].to(device)
segLabel = sample['segLabel'].to(device)
exist = sample['exist'].to(device)
seg_pred, exist_pred, loss_seg, loss_exist, loss = net(img, segLabel, exist)
if isinstance(net, torch.nn.DataParallel):
loss_seg = loss_seg.sum()
loss_exist = loss_exist.sum()
loss = loss.sum()
# visualize every 5th validation batch, up to 50 batches in all
gap_num = 5
if batch_idx%gap_num == 0 and batch_idx < 50 * gap_num:
origin_imgs = []
seg_pred = seg_pred.detach().cpu().numpy()
exist_pred = exist_pred.detach().cpu().numpy()
for b in range(len(seg_pred)):  # index by seg_pred: img is reassigned below
img_name = sample['img_name'][b]
img = cv2.imread(img_name)
img = transform_val_img({'img': img})['img']
lane_img = np.zeros_like(img)
color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8')
coord_mask = np.argmax(seg_pred[b], axis=0)
for i in range(0, 4):
if exist_pred[b, i] > 0.5:
lane_img[coord_mask==(i+1)] = color[i]
img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=img, beta=1., gamma=0.)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
lane_img = cv2.cvtColor(lane_img, cv2.COLOR_BGR2RGB)
cv2.putText(lane_img, "{}".format([1 if exist_pred[b, i]>0.5 else 0 for i in range(4)]), (20, 20), cv2.FONT_HERSHEY_SIMPLEX, 1.1, (255, 255, 255), 2)
origin_imgs.append(img)
origin_imgs.append(lane_img)
tensorboard.image_summary("img_{}".format(batch_idx), origin_imgs, epoch)
val_loss += loss.item()
val_loss_seg += loss_seg.item()
val_loss_exist += loss_exist.item()
progressbar.set_description("batch loss: {:.3f}".format(loss.item()))
progressbar.update(1)
progressbar.close()
iter_idx = (epoch + 1) * len(train_loader)  # keep aligned with the training iter_idx
tensorboard.scalar_summary("val_loss", val_loss, iter_idx)
tensorboard.scalar_summary("val_loss_seg", val_loss_seg, iter_idx)
tensorboard.scalar_summary("val_loss_exist", val_loss_exist, iter_idx)
tensorboard.writer.flush()
print("------------------------\n")
if val_loss < best_val_loss:
best_val_loss = val_loss
save_name = os.path.join(exp_dir, exp_name + '.pth')
copy_name = os.path.join(exp_dir, exp_name + '_best.pth')
shutil.copyfile(save_name, copy_name)
def main():
global best_val_loss
if args.resume:
save_dict = torch.load(os.path.join(exp_dir, exp_name + '.pth'))
if isinstance(net, torch.nn.DataParallel):
net.module.load_state_dict(save_dict['net'])
else:
net.load_state_dict(save_dict['net'])
optimizer.load_state_dict(save_dict['optim'])
lr_scheduler.load_state_dict(save_dict['lr_scheduler'])
start_epoch = save_dict['epoch'] + 1
best_val_loss = save_dict.get("best_val_loss", 1e6)
else:
start_epoch = 0
exp_cfg['MAX_EPOCHS'] = int(np.ceil(exp_cfg['lr_scheduler']['max_iter'] / len(train_loader)))
for epoch in range(start_epoch, exp_cfg['MAX_EPOCHS']):
train(epoch)
if epoch % 1 == 0:
print("\nValidation For Experiment: ", exp_dir)
print(time.strftime('%H:%M:%S', time.localtime()))
val(epoch)
if __name__ == "__main__":
main()
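`PolyLR` (from `utils/lr_scheduler.py`, not shown in this excerpt) is constructed above with power 0.9 and a `max_iter` taken from the config. Assuming it implements the standard "poly" learning-rate policy, each step anneals the rate as `lr = base_lr * (1 - iter / max_iter) ** power`, which can be sketched as:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Standard 'poly' decay: smoothly anneal from base_lr to 0 over max_iter."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power
```

With power 0.9 the decay is nearly linear early on and accelerates toward zero as `cur_iter` approaches `max_iter`.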
================================================
FILE: utils/lane_evaluation/CULane/CMakeLists.txt
================================================
cmake_minimum_required(VERSION 2.8)
project (evaluate)
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR})
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_FLAGS "-DCPU_ONLY -fopenmp")
find_package(OpenCV REQUIRED)
include_directories("${PROJECT_SOURCE_DIR}/include")
add_executable(evaluate
${PROJECT_SOURCE_DIR}/src/evaluate.cpp
${PROJECT_SOURCE_DIR}/src/counter.cpp
${PROJECT_SOURCE_DIR}/src/lane_compare.cpp
${PROJECT_SOURCE_DIR}/src/spline.cpp
)
target_link_libraries(evaluate ${OpenCV_LIBS})
================================================
FILE: utils/lane_evaluation/CULane/Run.sh
================================================
root=/home/lion/SCNN_Pytorch/
exp=$1
data_dir=/home/lion/Dataset/CULane/data/CULane/
detect_dir=${root}/experiments/${exp}/coord_output/
bin_dir=${root}/utils/lane_evaluation/CULane
w_lane=30;
iou=0.5; # Set iou to 0.3 or 0.5
im_w=1640
im_h=590
frame=1
list0=${data_dir}list/test_split/test0_normal.txt
list1=${data_dir}list/test_split/test1_crowd.txt
list2=${data_dir}list/test_split/test2_hlight.txt
list3=${data_dir}list/test_split/test3_shadow.txt
list4=${data_dir}list/test_split/test4_noline.txt
list5=${data_dir}list/test_split/test5_arrow.txt
list6=${data_dir}list/test_split/test6_curve.txt
list7=${data_dir}list/test_split/test7_cross.txt
list8=${data_dir}list/test_split/test8_night.txt
out0=${detect_dir}../evaluate/out0_normal.txt
out1=${detect_dir}../evaluate/out1_crowd.txt
out2=${detect_dir}../evaluate/out2_hlight.txt
out3=${detect_dir}../evaluate/out3_shadow.txt
out4=${detect_dir}../evaluate/out4_noline.txt
out5=${detect_dir}../evaluate/out5_arrow.txt
out6=${detect_dir}../evaluate/out6_curve.txt
out7=${detect_dir}../evaluate/out7_cross.txt
out8=${detect_dir}../evaluate/out8_night.txt
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list0 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out0
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list1 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out1
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list2 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out2
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list3 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out3
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list4 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out4
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list5 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out5
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list6 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out6
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list7 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out7
${bin_dir}/evaluate -a $data_dir -d $detect_dir -i $data_dir -l $list8 -w $w_lane -t $iou -c $im_w -r $im_h -f $frame -o $out8
cat ${detect_dir}/../evaluate/out*.txt > ${detect_dir}/../evaluate/${exp}_iou${iou}_split.txt
================================================
FILE: utils/lane_evaluation/CULane/include/counter.hpp
================================================
#ifndef COUNTER_HPP
#define COUNTER_HPP
#include "lane_compare.hpp"
#include "hungarianGraph.hpp"
#include <iostream>
#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
using namespace std;
using namespace cv;
// Before calling this class's functions, resize the lanes to im_width and im_height using resize_lane() in lane_compare.hpp.
class Counter
{
public:
Counter(int _im_width, int _im_height, double _iou_threshold=0.4, int _lane_width=10):tp(0),fp(0),fn(0){
im_width = _im_width;
im_height = _im_height;
sim_threshold = _iou_threshold;
lane_compare = new LaneCompare(_im_width, _im_height, _lane_width, LaneCompare::IOU);
};
double get_precision(void);
double get_recall(void);
long getTP(void);
long getFP(void);
long getFN(void);
// directly accumulate tp, fp and fn
// first match with hungarian
vector<int> count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes);
void makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2);
private:
double sim_threshold;
int im_width;
int im_height;
long tp;
long fp;
long fn;
LaneCompare *lane_compare;
};
#endif
================================================
FILE: utils/lane_evaluation/CULane/include/hungarianGraph.hpp
================================================
#ifndef HUNGARIAN_GRAPH_HPP
#define HUNGARIAN_GRAPH_HPP
#include <vector>
using namespace std;
struct pipartiteGraph {
vector<vector<double> > mat;
vector<bool> leftUsed, rightUsed;
vector<double> leftWeight, rightWeight;
vector<int>rightMatch, leftMatch;
int leftNum, rightNum;
bool matchDfs(int u) {
leftUsed[u] = true;
for (int v = 0; v < rightNum; v++) {
if (!rightUsed[v] && fabs(leftWeight[u] + rightWeight[v] - mat[u][v]) < 1e-2) {
rightUsed[v] = true;
if (rightMatch[v] == -1 || matchDfs(rightMatch[v])) {
rightMatch[v] = u;
leftMatch[u] = v;
return true;
}
}
}
return false;
}
void resize(int leftNum, int rightNum) {
this->leftNum = leftNum;
this->rightNum = rightNum;
leftMatch.resize(leftNum);
rightMatch.resize(rightNum);
leftUsed.resize(leftNum);
rightUsed.resize(rightNum);
leftWeight.resize(leftNum);
rightWeight.resize(rightNum);
mat.resize(leftNum);
for (int i = 0; i < leftNum; i++) mat[i].resize(rightNum);
}
void match() {
for (int i = 0; i < leftNum; i++) leftMatch[i] = -1;
for (int i = 0; i < rightNum; i++) rightMatch[i] = -1;
for (int i = 0; i < rightNum; i++) rightWeight[i] = 0;
for (int i = 0; i < leftNum; i++) {
leftWeight[i] = -1e5;
for (int j = 0; j < rightNum; j++) {
if (leftWeight[i] < mat[i][j]) leftWeight[i] = mat[i][j];
}
}
for (int u = 0; u < leftNum; u++) {
while (1) {
for (int i = 0; i < leftNum; i++) leftUsed[i] = false;
for (int i = 0; i < rightNum; i++) rightUsed[i] = false;
if (matchDfs(u)) break;
double d = 1e10;
for (int i = 0; i < leftNum; i++) {
if (leftUsed[i] ) {
for (int j = 0; j < rightNum; j++) {
if (!rightUsed[j]) d = min(d, leftWeight[i] + rightWeight[j] - mat[i][j]);
}
}
}
if (d == 1e10) return ;
for (int i = 0; i < leftNum; i++) if (leftUsed[i]) leftWeight[i] -= d;
for (int i = 0; i < rightNum; i++) if (rightUsed[i]) rightWeight[i] += d;
}
}
}
};
#endif // HUNGARIAN_GRAPH_HPP
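`pipartiteGraph::match()` above is a Kuhn-Munkres (Hungarian) solver that finds a maximum-weight assignment between annotated and detected lanes. For the small lane counts involved, the same result can be obtained by brute force, which is a convenient way to sanity-check the solver; a Python sketch:

```python
from itertools import permutations

def max_weight_match(sim):
    """Brute-force maximum-weight bipartite matching (rows -> columns).
    sim[i][j] is the similarity between left node i and right node j;
    returns, for each row, the column it is assigned to."""
    m, n = len(sim), len(sim[0])
    assert m <= n  # the C++ code swaps sides to guarantee this
    best, best_perm = float('-inf'), None
    for perm in permutations(range(n), m):
        s = sum(sim[i][perm[i]] for i in range(m))
        if s > best:
            best, best_perm = s, perm
    return list(best_perm)
```

The brute force is O(n!), so it is only a verification aid; the dual-variable solver in the header runs in polynomial time.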
================================================
FILE: utils/lane_evaluation/CULane/include/lane_compare.hpp
================================================
#ifndef LANE_COMPARE_HPP
#define LANE_COMPARE_HPP
#include "spline.hpp"
#include <vector>
#include <iostream>
#include <opencv2/core/core.hpp>
using namespace std;
using namespace cv;
class LaneCompare{
public:
enum CompareMode{
IOU,
Caltech
};
LaneCompare(int _im_width, int _im_height, int _lane_width = 10, CompareMode _compare_mode = IOU){
im_width = _im_width;
im_height = _im_height;
compare_mode = _compare_mode;
lane_width = _lane_width;
}
double get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2);
void resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height);
private:
CompareMode compare_mode;
int im_width;
int im_height;
int lane_width;
Spline splineSolver;
};
#endif
================================================
FILE: utils/lane_evaluation/CULane/include/spline.hpp
================================================
#ifndef SPLINE_HPP
#define SPLINE_HPP
#include <vector>
#include <cstdio>
#include <math.h>
#include <opencv2/core/core.hpp>
using namespace cv;
using namespace std;
struct Func {
double a_x;
double b_x;
double c_x;
double d_x;
double a_y;
double b_y;
double c_y;
double d_y;
double h;
};
class Spline {
public:
vector<Point2f> splineInterpTimes(const vector<Point2f> &tmp_line, int times);
vector<Point2f> splineInterpStep(vector<Point2f> tmp_line, double step);
vector<Func> cal_fun(const vector<Point2f> &point_v);
};
#endif
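The Spline class densifies sparse lane control points before the lanes are rasterized and compared. As a rough illustration of the `splineInterpTimes` interface only (piecewise-linear here, whereas the C++ implementation fits cubic segments):

```python
def interp_times(points, times):
    """Piecewise-linear stand-in for Spline::splineInterpTimes: insert
    `times` evenly spaced points along each pair of control points."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        for k in range(times):
            t = k / times
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(points[-1])
    return out
```

Cubic interpolation matters for curved lanes; the linear version above only conveys the resampling shape of the API.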
================================================
FILE: utils/lane_evaluation/CULane/src/counter.cpp
================================================
/*************************************************************************
> File Name: counter.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Thu Jul 14 20:23:08 2016
************************************************************************/
#include "counter.hpp"
#include <thread>
double Counter::get_precision(void)
{
cerr<<"tp: "<<tp<<" fp: "<<fp<<" fn: "<<fn<<endl;
if(tp+fp == 0)
{
cerr<<"no positive detection"<<endl;
return -1;
}
return tp/double(tp + fp);
}
double Counter::get_recall(void)
{
if(tp+fn == 0)
{
cerr<<"no ground truth positive"<<endl;
return -1;
}
return tp/double(tp + fn);
}
long Counter::getTP(void)
{
return tp;
}
long Counter::getFP(void)
{
return fp;
}
long Counter::getFN(void)
{
return fn;
}
vector<int> Counter::count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes)
{
vector<int> anno_match(anno_lanes.size(), -1);
vector<int> detect_match;
if(anno_lanes.empty())
{
fp += detect_lanes.size();
return anno_match;
}
if(detect_lanes.empty())
{
fn += anno_lanes.size();
return anno_match;
}
// hungarian match first
// first calc similarity matrix
vector<vector<double> > similarity(anno_lanes.size(), vector<double>(detect_lanes.size(), 0));
for(int i=0; i<anno_lanes.size(); i++)
{
const vector<Point2f> &curr_anno_lane = anno_lanes[i];
for(int j=0; j<detect_lanes.size(); j++)
{
const vector<Point2f> &curr_detect_lane = detect_lanes[j];
similarity[i][j] = lane_compare->get_lane_similarity(curr_anno_lane, curr_detect_lane);
}
}
makeMatch(similarity, anno_match, detect_match);
int curr_tp = 0;
// count and add
for(int i=0; i<anno_lanes.size(); i++)
{
if(anno_match[i]>=0 && similarity[i][anno_match[i]] > sim_threshold)
{
curr_tp++;
}
else
{
anno_match[i] = -1;
}
}
int curr_fn = anno_lanes.size() - curr_tp;
int curr_fp = detect_lanes.size() - curr_tp;
tp += curr_tp;
fn += curr_fn;
fp += curr_fp;
return anno_match;
}
void Counter::makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2) {
int m = similarity.size();
int n = similarity[0].size();
pipartiteGraph gra;
bool have_exchange = false;
if (m > n) {
have_exchange = true;
swap(m, n);
}
gra.resize(m, n);
for (int i = 0; i < gra.leftNum; i++) {
for (int j = 0; j < gra.rightNum; j++) {
if(have_exchange)
gra.mat[i][j] = similarity[j][i];
else
gra.mat[i][j] = similarity[i][j];
}
}
gra.match();
match1 = gra.leftMatch;
match2 = gra.rightMatch;
if (have_exchange) swap(match1, match2);
}
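`count_im_pair` above thresholds the matched similarities to turn the assignment into tp/fp/fn counts. The counting step in isolation, as a Python sketch (`match[i]` is the detected-lane index matched to annotation i, or -1):

```python
def count_pair(sim, match, n_detect, thresh):
    """tp = matched annotations whose similarity clears the threshold;
    the remaining annotations are fn, the remaining detections fp."""
    tp = sum(1 for i, j in enumerate(match) if j >= 0 and sim[i][j] > thresh)
    fn = len(match) - tp
    fp = n_detect - tp
    return tp, fp, fn
```

This mirrors the C++ logic: a matched pair below the IoU threshold counts against both sides, as a false negative and a false positive.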
================================================
FILE: utils/lane_evaluation/CULane/src/evaluate.cpp
================================================
/*************************************************************************
> File Name: evaluate.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Thu Jul 14 18:28:45 2016
************************************************************************/
#include "counter.hpp"
#include "spline.hpp"
#include <unistd.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <cstdlib>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <thread>
#include <mutex>
using namespace std;
using namespace cv;
void help(void)
{
cout<<"./evaluate [OPTIONS]"<<endl;
cout<<"-h : print usage help"<<endl;
cout<<"-a : directory for annotation files (default: /data/driving/eval_data/anno_label/)"<<endl;
cout<<"-d : directory for detection files (default: /data/driving/eval_data/predict_label/)"<<endl;
cout<<"-i : directory for image files (default: /data/driving/eval_data/img/)"<<endl;
cout<<"-l : list of images used for evaluation (default: /data/driving/eval_data/img/all.txt)"<<endl;
cout<<"-w : width of the lanes (default: 10)"<<endl;
cout<<"-t : threshold of iou (default: 0.4)"<<endl;
cout<<"-c : cols (max image width) (default: 1920)"<<endl;
cout<<"-r : rows (max image height) (default: 1080)"<<endl;
cout<<"-s : show visualization"<<endl;
cout<<"-f : start frame in the test set (default: 1)"<<endl;
cout<<"-o : output file for results (default: ./output.txt)"<<endl;
cout<<"-p : number of worker threads (default: 20)"<<endl;
}
void read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes);
void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane);
void worker_func(vector<string> &lines_list_v, int start, int end, int &tp, int &fp, int &fn);
void update_tp_fp_fn(int &tp, int &fp, int &fn, int _tp, int _fp, int _fn);
double get_precision(int tp, int fp, int fn)
{
cerr<<"tp: "<<tp<<" fp: "<<fp<<" fn: "<<fn<<endl;
if(tp+fp == 0)
{
cerr<<"no positive detection"<<endl;
return -1;
}
return tp/double(tp + fp);
}
double get_recall(int tp, int fp, int fn)
{
if(tp+fn == 0)
{
cerr<<"no ground truth positive"<<endl;
return -1;
}
return tp/double(tp + fn);
}
mutex myMutex;
string anno_dir = "/data/driving/eval_data/anno_label/";
string detect_dir = "/data/driving/eval_data/predict_label/";
string im_dir = "/data/driving/eval_data/img/";
string list_im_file = "/data/driving/eval_data/img/all.txt";
string output_file = "./output.txt";
int width_lane = 10;
double iou_threshold = 0.4;
int im_width = 1920;
int im_height = 1080;
int oc;
bool show = false;
int frame = 1;
int NUM_PROCESS=20;
int main(int argc, char **argv)
{
// process params
while((oc = getopt(argc, argv, "ha:d:i:l:w:t:c:r:sf:o:p:")) != -1)
{
switch(oc)
{
case 'h':
help();
return 0;
case 'a':
anno_dir = optarg;
break;
case 'd':
detect_dir = optarg;
break;
case 'i':
im_dir = optarg;
break;
case 'l':
list_im_file = optarg;
break;
case 'w':
width_lane = atoi(optarg);
break;
case 't':
iou_threshold = atof(optarg);
break;
case 'c':
im_width = atoi(optarg);
break;
case 'r':
im_height = atoi(optarg);
break;
case 's':
show = true;
break;
case 'f':
frame = atoi(optarg);
break;
case 'o':
output_file = optarg;
break;
case 'p':
NUM_PROCESS = atoi(optarg);
break;
}
}
cerr<<"------------Configuration---------"<<endl;
cerr << "using multi-thread, num:" << NUM_PROCESS << endl;
cerr<<"anno_dir: "<<anno_dir<<endl;
cerr<<"detect_dir: "<<detect_dir<<endl;
cerr<<"im_dir: "<<im_dir<<endl;
cerr<<"list_im_file: "<<list_im_file<<endl;
cerr<<"width_lane: "<<width_lane<<endl;
cerr<<"iou_threshold: "<<iou_threshold<<endl;
cerr<<"im_width: "<<im_width<<endl;
cerr<<"im_height: "<<im_height<<endl;
cerr<<"-----------------------------------"<<endl;
cerr<<"Evaluating the results..."<<endl;
// this is the max_width and max_height
if(width_lane<1)
{
cerr<<"width_lane must be positive"<<endl;
help();
return 1;
}
ifstream ifs_im_list(list_im_file, ios::in);
if(ifs_im_list.fail())
{
cerr<<"Error: file "<<list_im_file<<" not exist!"<<endl;
return 1;
}
vector<string> lines_list_v;
string line;
while(getline(ifs_im_list, line)) {
lines_list_v.push_back(line);
}
ifs_im_list.close();
int TP=0, FP=0, FN=0; //result
int NUM = lines_list_v.size();
int batch_size = NUM / NUM_PROCESS;
vector<thread> thread_v;
for (int i=0; i<NUM_PROCESS; i++){
int _start = batch_size * i, _end = batch_size * (i + 1);
// the last thread takes the remainder, otherwise NUM % NUM_PROCESS images are never evaluated
if (i == NUM_PROCESS - 1) _end = NUM;
thread_v.push_back(thread(worker_func, ref(lines_list_v), _start, _end, ref(TP), ref(FP), ref(FN)));
}
for (int i=0; i<thread_v.size(); i++)
thread_v[i].join();
// Counter counter(im_width, im_height, iou_threshold, width_lane);
// vector<int> anno_match;
// string sub_im_name;
// int count = 0;
// while(getline(ifs_im_list, sub_im_name))
// {
// count++;
// if (count < frame)
// continue;
// string full_im_name = im_dir + sub_im_name;
// string sub_txt_name = sub_im_name.substr(0, sub_im_name.find_last_of(".")) + ".lines.txt";
// string anno_file_name = anno_dir + sub_txt_name;
// string detect_file_name = detect_dir + sub_txt_name;
// vector<vector<Point2f> > anno_lanes;
// vector<vector<Point2f> > detect_lanes;
// read_lane_file(anno_file_name, anno_lanes);
// read_lane_file(detect_file_name, detect_lanes);
// //cerr<<count<<": "<<full_im_name<<endl;
// anno_match = counter.count_im_pair(anno_lanes, detect_lanes);
// if (show)
// {
// visualize(full_im_name, anno_lanes, detect_lanes, anno_match, width_lane);
// waitKey(0);
// }
// }
// ifs_im_list.close();
cerr << "list images num: " << lines_list_v.size() << endl;
double precision = get_precision(TP, FP, FN);
double recall = get_recall(TP, FP, FN);
double F = 2 * precision * recall / (precision + recall);
cerr<<"finished process file"<<endl;
cerr<<"precision: "<<precision<<endl;
cerr<<"recall: "<<recall<<endl;
cerr<<"Fmeasure: "<<F<<endl;
cerr<<"----------------------------------"<<endl;
ofstream ofs_out_file;
ofs_out_file.open(output_file, ios::out);
ofs_out_file<<"file: "<<output_file<<endl;
ofs_out_file<<"tp: "<< TP <<" fp: "<< FP <<" fn: "<< FN <<endl;
ofs_out_file<<"precision: "<<precision<<endl;
ofs_out_file<<"recall: "<<recall<<endl;
ofs_out_file<<"Fmeasure: "<<F<<endl<<endl;
ofs_out_file.close();
return 0;
}
void read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes)
{
lanes.clear();
ifstream ifs_lane(file_name, ios::in);
if(ifs_lane.fail())
{
return;
}
string str_line;
while(getline(ifs_lane, str_line))
{
vector<Point2f> curr_lane;
stringstream ss;
ss<<str_line;
double x,y;
while(ss>>x>>y)
{
curr_lane.push_back(Point2f(x, y));
}
lanes.push_back(curr_lane);
}
ifs_lane.close();
}
void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane)
{
Mat img = imread(full_im_name, 1);
Mat img2 = imread(full_im_name, 1);
vector<Point2f> curr_lane;
vector<Point2f> p_interp;
Spline splineSolver;
Scalar color_B = Scalar(255, 0, 0);
Scalar color_G = Scalar(0, 255, 0);
Scalar color_R = Scalar(0, 0, 255);
Scalar color_P = Scalar(255, 0, 255);
Scalar color;
for (int i=0; i<anno_lanes.size(); i++)
{
curr_lane = anno_lanes[i];
if(curr_lane.size() == 2)
{
p_interp = curr_lane;
}
else
{
p_interp = splineSolver.splineInterpTimes(curr_lane, 50);
}
if (anno_match[i] >= 0)
{
color = color_G; // matched annotation
}
else
{
color = color_P; // unmatched annotation
}
for (int n=0; n<p_interp.size()-1; n++)
{
cv::line(img, p_interp[n], p_interp[n+1], color, width_lane);
cv::line(img2, p_interp[n], p_interp[n+1], color, 2);
}
}
bool detected;
for (int i=0; i<detect_lanes.size(); i++)
{
detected = false;
curr_lane = detect_lanes[i];
if(curr_lane.size() == 2)
{
p_interp = curr_lane;
}
else
{
p_interp = splineSolver.splineInterpTimes(curr_lane, 50);
}
for (int n=0; n<anno_lanes.size(); n++)
{
if (anno_match[n] == i)
{
detected = true;
break;
}
}
if (detected == true)
{
color = color_B;
}
else
{
color = color_R;
}
for (int n=0; n<p_interp.size()-1; n++)
{
cv::line(img, p_interp[n], p_interp[n+1], color, width_lane);
cv::line(img2, p_interp[n], p_interp[n+1], color, 2);
}
}
namedWindow("visualize", 1);
imshow("visualize", img);
namedWindow("visualize2", 1);
imshow("visualize2", img2);
}
void update_tp_fp_fn(int &tp, int &fp, int &fn, int _tp, int _fp, int _fn)
{
std::lock_guard<std::mutex> guard(myMutex);
tp += _tp;
fp += _fp;
fn += _fn;
}
void worker_func(vector<string> &lines_list_v, int start, int end, int &tp, int &fp, int &fn)
{
Counter counter(im_width, im_height, iou_threshold, width_lane);
vector<int> anno_match;
string sub_im_name;
int count = 0;
for (int i=start; i<end; i++) {
sub_im_name = lines_list_v[i];
count++;
string full_im_name = im_dir + sub_im_name;
string sub_txt_name = sub_im_name.substr(0, sub_im_name.find_last_of(".")) + ".lines.txt";
string anno_file_name = anno_dir + sub_txt_name;
string detect_file_name = detect_dir + sub_txt_name;
vector<vector<Point2f> > anno_lanes;
vector<vector<Point2f> > detect_lanes;
read_lane_file(anno_file_name, anno_lanes);
read_lane_file(detect_file_name, detect_lanes);
anno_match = counter.count_im_pair(anno_lanes, detect_lanes);
}
update_tp_fp_fn(tp, fp, fn, counter.getTP(), counter.getFP(), counter.getFN());
}
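The tail of this multi-threaded evaluator merges per-worker counts into shared totals under a mutex (`update_tp_fp_fn` above). A minimal Python sketch of the same pattern; the per-worker counts here are made up for illustration:

```python
import threading

totals = {"tp": 0, "fp": 0, "fn": 0}
lock = threading.Lock()

def update_tp_fp_fn(tp, fp, fn):
    # Each worker accumulates privately, then merges once under the lock,
    # mirroring the mutex-guarded update above.
    with lock:
        totals["tp"] += tp
        totals["fp"] += fp
        totals["fn"] += fn

threads = [threading.Thread(target=update_tp_fp_fn, args=c)
           for c in [(50, 5, 3), (30, 15, 7)]]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(totals)  # {'tp': 80, 'fp': 20, 'fn': 10}
```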
================================================
FILE: utils/lane_evaluation/CULane/src/lane_compare.cpp
================================================
/*************************************************************************
> File Name: lane_compare.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Fri Jul 15 10:26:32 2016
************************************************************************/
#include "lane_compare.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc.hpp>
double LaneCompare::get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2)
{
if(lane1.size()<2 || lane2.size()<2)
{
cerr<<"lane size must be greater than or equal to 2"<<endl;
return 0;
}
Mat im1 = Mat::zeros(im_height, im_width, CV_8UC1);
Mat im2 = Mat::zeros(im_height, im_width, CV_8UC1);
// draw lines on im1 and im2
vector<Point2f> p_interp1;
vector<Point2f> p_interp2;
if(lane1.size() == 2)
{
p_interp1 = lane1;
}
else
{
p_interp1 = splineSolver.splineInterpTimes(lane1, 50);
}
if(lane2.size() == 2)
{
p_interp2 = lane2;
}
else
{
p_interp2 = splineSolver.splineInterpTimes(lane2, 50);
}
Scalar color_white = Scalar(1);
for(int n=0; n<p_interp1.size()-1; n++)
{
cv::line(im1, p_interp1[n], p_interp1[n+1], color_white, lane_width);
}
for(int n=0; n<p_interp2.size()-1; n++)
{
cv::line(im2, p_interp2[n], p_interp2[n+1], color_white, lane_width);
}
double sum_1 = cv::sum(im1).val[0];
double sum_2 = cv::sum(im2).val[0];
double inter_sum = cv::sum(im1.mul(im2)).val[0];
double union_sum = sum_1 + sum_2 - inter_sum;
double iou = inter_sum / union_sum;
return iou;
}
// resize the lane from Size(curr_width, curr_height) to Size(im_width, im_height)
void LaneCompare::resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height)
{
if(curr_width == im_width && curr_height == im_height)
{
return;
}
double x_scale = im_width/(double)curr_width;
double y_scale = im_height/(double)curr_height;
for(int n=0; n<curr_lane.size(); n++)
{
curr_lane[n] = Point2f(curr_lane[n].x*x_scale, curr_lane[n].y*y_scale);
}
}
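`get_lane_similarity` rasterizes both lanes as fixed-width masks and scores the pair by mask IoU. A Python sketch of the overlap computation, assuming each lane has already been rasterized to a set of pixels (`lane_iou` is an illustrative name; the real code draws with `cv::line` and sums `Mat`s):

```python
def lane_iou(px1, px2):
    # IoU of two rasterized lane masks given as sets of (x, y) pixels,
    # mirroring inter_sum / (sum_1 + sum_2 - inter_sum) above.
    a, b = set(px1), set(px2)
    inter = len(a & b)
    union = len(a) + len(b) - inter
    return inter / union if union else 0.0

# Two 1-pixel-wide horizontal segments overlapping on 5 of 15 columns:
m1 = {(x, 0) for x in range(10)}
m2 = {(x, 0) for x in range(5, 15)}
print(lane_iou(m1, m2))  # 5 / 15
```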
================================================
FILE: utils/lane_evaluation/CULane/src/spline.cpp
================================================
#include <vector>
#include <iostream>
#include "spline.hpp"
using namespace std;
using namespace cv;
vector<Point2f> Spline::splineInterpTimes(const vector<Point2f>& tmp_line, int times) {
vector<Point2f> res;
if(tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
for (int k = 0; k <= times; k++) {
double xi = x1 + double((x2 - x1) * k) / times;
double yi = y1 + double((y2 - y1) * k) / times;
res.push_back(Point2f(xi, yi));
}
}
else if(tmp_line.size() > 2)
{
vector<Func> tmp_func;
tmp_func = this->cal_fun(tmp_line);
if (tmp_func.empty()) {
cout << "in splineInterpTimes: cal_fun failed" << endl;
return res;
}
for(int j = 0; j < tmp_func.size(); j++)
{
double delta = tmp_func[j].h / times;
for(int k = 0; k < times; k++)
{
double t1 = delta*k;
double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);
double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);
res.push_back(Point2f(x1, y1));
}
}
res.push_back(tmp_line[tmp_line.size() - 1]);
}
else {
cerr << "in splineInterpTimes: not enough points" << endl;
}
return res;
}
vector<Point2f> Spline::splineInterpStep(vector<Point2f> tmp_line, double step) {
vector<Point2f> res;
/*
if (tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
for (double yi = std::min(y1, y2); yi < std::max(y1, y2); yi += step) {
double xi;
if (yi == y1) xi = x1;
else xi = (x2 - x1) / (y2 - y1) * (yi - y1) + x1;
res.push_back(Point2f(xi, yi));
}
}*/
if (tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
tmp_line[1].x = (x1 + x2) / 2;
tmp_line[1].y = (y1 + y2) / 2;
tmp_line.push_back(Point2f(x2, y2));
}
if (tmp_line.size() > 2) {
vector<Func> tmp_func;
tmp_func = this->cal_fun(tmp_line);
double ystart = tmp_line[0].y;
double yend = tmp_line[tmp_line.size() - 1].y;
bool down;
if (ystart < yend) down = 1;
else down = 0;
if (tmp_func.empty()) {
cerr << "in splineInterpStep: cal_fun failed" << endl;
}
for(int j = 0; j < tmp_func.size(); j++)
{
for(double t1 = 0; t1 < tmp_func[j].h; t1 += step)
{
double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);
double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);
res.push_back(Point2f(x1, y1));
}
}
res.push_back(tmp_line[tmp_line.size() - 1]);
}
else {
cerr << "in splineInterpStep: not enough points" << endl;
}
return res;
}
vector<Func> Spline::cal_fun(const vector<Point2f> &point_v)
{
vector<Func> func_v;
int n = point_v.size();
if(n<=2) {
cout << "in cal_fun: point number less than 3" << endl;
return func_v;
}
func_v.resize(point_v.size()-1);
vector<double> Mx(n);
vector<double> My(n);
vector<double> A(n-2);
vector<double> B(n-2);
vector<double> C(n-2);
vector<double> Dx(n-2);
vector<double> Dy(n-2);
vector<double> h(n-1);
//vector<func> func_v(n-1);
for(int i = 0; i < n-1; i++)
{
h[i] = sqrt(pow(point_v[i+1].x - point_v[i].x, 2) + pow(point_v[i+1].y - point_v[i].y, 2));
}
for(int i = 0; i < n-2; i++)
{
A[i] = h[i];
B[i] = 2*(h[i]+h[i+1]);
C[i] = h[i+1];
Dx[i] = 6*( (point_v[i+2].x - point_v[i+1].x)/h[i+1] - (point_v[i+1].x - point_v[i].x)/h[i] );
Dy[i] = 6*( (point_v[i+2].y - point_v[i+1].y)/h[i+1] - (point_v[i+1].y - point_v[i].y)/h[i] );
}
//TDMA
C[0] = C[0] / B[0];
Dx[0] = Dx[0] / B[0];
Dy[0] = Dy[0] / B[0];
for(int i = 1; i < n-2; i++)
{
double tmp = B[i] - A[i]*C[i-1];
C[i] = C[i] / tmp;
Dx[i] = (Dx[i] - A[i]*Dx[i-1]) / tmp;
Dy[i] = (Dy[i] - A[i]*Dy[i-1]) / tmp;
}
Mx[n-2] = Dx[n-3];
My[n-2] = Dy[n-3];
for(int i = n-4; i >= 0; i--)
{
Mx[i+1] = Dx[i] - C[i]*Mx[i+2];
My[i+1] = Dy[i] - C[i]*My[i+2];
}
Mx[0] = 0;
Mx[n-1] = 0;
My[0] = 0;
My[n-1] = 0;
for(int i = 0; i < n-1; i++)
{
func_v[i].a_x = point_v[i].x;
func_v[i].b_x = (point_v[i+1].x - point_v[i].x)/h[i] - (2*h[i]*Mx[i] + h[i]*Mx[i+1]) / 6;
func_v[i].c_x = Mx[i]/2;
func_v[i].d_x = (Mx[i+1] - Mx[i]) / (6*h[i]);
func_v[i].a_y = point_v[i].y;
func_v[i].b_y = (point_v[i+1].y - point_v[i].y)/h[i] - (2*h[i]*My[i] + h[i]*My[i+1]) / 6;
func_v[i].c_y = My[i]/2;
func_v[i].d_y = (My[i+1] - My[i]) / (6*h[i]);
func_v[i].h = h[i];
}
return func_v;
}
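`cal_fun` fits a natural cubic spline by solving a tridiagonal system with the Thomas algorithm (the TDMA block above). A standalone Python sketch of that solver, with the same forward-sweep / back-substitution structure; argument names are illustrative:

```python
def thomas(sub, diag, sup, rhs):
    # Thomas algorithm (TDMA) for a tridiagonal system:
    # sub = sub-diagonal, diag = diagonal, sup = super-diagonal, rhs = D.
    n = len(rhs)
    c, d = sup[:], rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):                  # forward sweep
        m = diag[i] - sub[i] * c[i - 1]
        if i < n - 1:
            c[i] /= m
        d[i] = (d[i] - sub[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):         # back substitution
        d[i] -= c[i] * d[i + 1]
    return d

# Solves [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]; the solution is (1, 2, 3).
print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```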
================================================
FILE: utils/lane_evaluation/CULane/src_origin/counter.cpp
================================================
/*************************************************************************
> File Name: counter.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Thu Jul 14 20:23:08 2016
************************************************************************/
#include "counter.hpp"
double Counter::get_precision(void)
{
cerr<<"tp: "<<tp<<" fp: "<<fp<<" fn: "<<fn<<endl;
if(tp+fp == 0)
{
cerr<<"no positive detection"<<endl;
return -1;
}
return tp/double(tp + fp);
}
double Counter::get_recall(void)
{
if(tp+fn == 0)
{
cerr<<"no ground truth positive"<<endl;
return -1;
}
return tp/double(tp + fn);
}
long Counter::getTP(void)
{
return tp;
}
long Counter::getFP(void)
{
return fp;
}
long Counter::getFN(void)
{
return fn;
}
vector<int> Counter::count_im_pair(const vector<vector<Point2f> > &anno_lanes, const vector<vector<Point2f> > &detect_lanes)
{
vector<int> anno_match(anno_lanes.size(), -1);
vector<int> detect_match;
if(anno_lanes.empty())
{
fp += detect_lanes.size();
return anno_match;
}
if(detect_lanes.empty())
{
fn += anno_lanes.size();
return anno_match;
}
// hungarian match first
// first calc similarity matrix
vector<vector<double> > similarity(anno_lanes.size(), vector<double>(detect_lanes.size(), 0));
for(int i=0; i<anno_lanes.size(); i++)
{
const vector<Point2f> &curr_anno_lane = anno_lanes[i];
for(int j=0; j<detect_lanes.size(); j++)
{
const vector<Point2f> &curr_detect_lane = detect_lanes[j];
similarity[i][j] = lane_compare->get_lane_similarity(curr_anno_lane, curr_detect_lane);
}
}
makeMatch(similarity, anno_match, detect_match);
int curr_tp = 0;
// count and add
for(int i=0; i<anno_lanes.size(); i++)
{
if(anno_match[i]>=0 && similarity[i][anno_match[i]] > sim_threshold)
{
curr_tp++;
}
else
{
anno_match[i] = -1;
}
}
int curr_fn = anno_lanes.size() - curr_tp;
int curr_fp = detect_lanes.size() - curr_tp;
tp += curr_tp;
fn += curr_fn;
fp += curr_fp;
return anno_match;
}
void Counter::makeMatch(const vector<vector<double> > &similarity, vector<int> &match1, vector<int> &match2) {
int m = similarity.size();
int n = similarity[0].size();
pipartiteGraph gra;
bool have_exchange = false;
if (m > n) {
have_exchange = true;
swap(m, n);
}
gra.resize(m, n);
for (int i = 0; i < gra.leftNum; i++) {
for (int j = 0; j < gra.rightNum; j++) {
if(have_exchange)
gra.mat[i][j] = similarity[j][i];
else
gra.mat[i][j] = similarity[i][j];
}
}
gra.match();
match1 = gra.leftMatch;
match2 = gra.rightMatch;
if (have_exchange) swap(match1, match2);
}
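`count_im_pair` first matches annotations to detections, then credits a matched pair as a true positive only if its IoU clears `sim_threshold`. A Python sketch of that counting step, assuming the match has already been computed (`count_pair` and its inputs are illustrative names):

```python
def count_pair(similarity, anno_match, sim_threshold=0.4):
    # similarity[i][j]: IoU of annotation i vs detection j;
    # anno_match[i]: matched detection index, or -1 if unmatched.
    # A match only counts as TP if its IoU clears the threshold.
    n_det = len(similarity[0]) if similarity else 0
    tp = sum(1 for i, j in enumerate(anno_match)
             if j >= 0 and similarity[i][j] > sim_threshold)
    return tp, n_det - tp, len(anno_match) - tp  # (tp, fp, fn)

sim = [[0.9, 0.1],
       [0.2, 0.3]]
print(count_pair(sim, [0, 1]))  # lane 1's IoU (0.3) fails the 0.4 threshold
```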
================================================
FILE: utils/lane_evaluation/CULane/src_origin/evaluate.cpp
================================================
/*************************************************************************
> File Name: evaluate.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Thu Jul 14 18:28:45 2016
************************************************************************/
#include "counter.hpp"
#include "spline.hpp"
#include <unistd.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <cstdlib>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
using namespace std;
using namespace cv;
void help(void)
{
cout<<"./evaluate [OPTIONS]"<<endl;
cout<<"-h : print usage help"<<endl;
cout<<"-a : directory for annotation files (default: /data/driving/eval_data/anno_label/)"<<endl;
cout<<"-d : directory for detection files (default: /data/driving/eval_data/predict_label/)"<<endl;
cout<<"-i : directory for image files (default: /data/driving/eval_data/img/)"<<endl;
cout<<"-l : list of images used for evaluation (default: /data/driving/eval_data/img/all.txt)"<<endl;
cout<<"-w : width of the lanes (default: 10)"<<endl;
cout<<"-t : threshold of iou (default: 0.4)"<<endl;
cout<<"-c : cols (max image width) (default: 1920)"<<endl;
cout<<"-r : rows (max image height) (default: 1080)"<<endl;
cout<<"-s : show visualization"<<endl;
cout<<"-f : start frame in the test set (default: 1)"<<endl;
}
void read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes);
void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane);
int main(int argc, char **argv)
{
// process params
string anno_dir = "/data/driving/eval_data/anno_label/";
string detect_dir = "/data/driving/eval_data/predict_label/";
string im_dir = "/data/driving/eval_data/img/";
string list_im_file = "/data/driving/eval_data/img/all.txt";
string output_file = "./output.txt";
int width_lane = 10;
double iou_threshold = 0.4;
int im_width = 1920;
int im_height = 1080;
int oc;
bool show = false;
int frame = 1;
while((oc = getopt(argc, argv, "ha:d:i:l:w:t:c:r:sf:o:")) != -1)
{
switch(oc)
{
case 'h':
help();
return 0;
case 'a':
anno_dir = optarg;
break;
case 'd':
detect_dir = optarg;
break;
case 'i':
im_dir = optarg;
break;
case 'l':
list_im_file = optarg;
break;
case 'w':
width_lane = atoi(optarg);
break;
case 't':
iou_threshold = atof(optarg);
break;
case 'c':
im_width = atoi(optarg);
break;
case 'r':
im_height = atoi(optarg);
break;
case 's':
show = true;
break;
case 'f':
frame = atoi(optarg);
break;
case 'o':
output_file = optarg;
break;
}
}
cout<<"------------Configuration---------"<<endl;
cout<<"anno_dir: "<<anno_dir<<endl;
cout<<"detect_dir: "<<detect_dir<<endl;
cout<<"im_dir: "<<im_dir<<endl;
cout<<"list_im_file: "<<list_im_file<<endl;
cout<<"width_lane: "<<width_lane<<endl;
cout<<"iou_threshold: "<<iou_threshold<<endl;
cout<<"im_width: "<<im_width<<endl;
cout<<"im_height: "<<im_height<<endl;
cout<<"-----------------------------------"<<endl;
cout<<"Evaluating the results..."<<endl;
// this is the max_width and max_height
if(width_lane<1)
{
cerr<<"width_lane must be positive"<<endl;
help();
return 1;
}
ifstream ifs_im_list(list_im_file, ios::in);
if(ifs_im_list.fail())
{
cerr<<"Error: file "<<list_im_file<<" not exist!"<<endl;
return 1;
}
Counter counter(im_width, im_height, iou_threshold, width_lane);
vector<int> anno_match;
string sub_im_name;
int count = 0;
while(getline(ifs_im_list, sub_im_name))
{
count++;
if (count < frame)
continue;
string full_im_name = im_dir + sub_im_name;
string sub_txt_name = sub_im_name.substr(0, sub_im_name.find_last_of(".")) + ".lines.txt";
string anno_file_name = anno_dir + sub_txt_name;
string detect_file_name = detect_dir + sub_txt_name;
vector<vector<Point2f> > anno_lanes;
vector<vector<Point2f> > detect_lanes;
read_lane_file(anno_file_name, anno_lanes);
read_lane_file(detect_file_name, detect_lanes);
//cerr<<count<<": "<<full_im_name<<endl;
anno_match = counter.count_im_pair(anno_lanes, detect_lanes);
if (show)
{
visualize(full_im_name, anno_lanes, detect_lanes, anno_match, width_lane);
waitKey(0);
}
}
ifs_im_list.close();
double precision = counter.get_precision();
double recall = counter.get_recall();
double F = 2 * precision * recall / (precision + recall);
cerr<<"finished process file"<<endl;
cout<<"precision: "<<precision<<endl;
cout<<"recall: "<<recall<<endl;
cout<<"Fmeasure: "<<F<<endl;
cout<<"----------------------------------"<<endl;
ofstream ofs_out_file;
ofs_out_file.open(output_file, ios::out);
ofs_out_file<<"file: "<<output_file<<endl;
ofs_out_file<<"tp: "<<counter.getTP()<<" fp: "<<counter.getFP()<<" fn: "<<counter.getFN()<<endl;
ofs_out_file<<"precision: "<<precision<<endl;
ofs_out_file<<"recall: "<<recall<<endl;
ofs_out_file<<"Fmeasure: "<<F<<endl<<endl;
ofs_out_file.close();
return 0;
}
void read_lane_file(const string &file_name, vector<vector<Point2f> > &lanes)
{
lanes.clear();
ifstream ifs_lane(file_name, ios::in);
if(ifs_lane.fail())
{
return;
}
string str_line;
while(getline(ifs_lane, str_line))
{
vector<Point2f> curr_lane;
stringstream ss;
ss<<str_line;
double x,y;
while(ss>>x>>y)
{
curr_lane.push_back(Point2f(x, y));
}
lanes.push_back(curr_lane);
}
ifs_lane.close();
}
void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lanes, vector<vector<Point2f> > &detect_lanes, vector<int> anno_match, int width_lane)
{
Mat img = imread(full_im_name, 1);
Mat img2 = imread(full_im_name, 1);
vector<Point2f> curr_lane;
vector<Point2f> p_interp;
Spline splineSolver;
Scalar color_B = Scalar(255, 0, 0);
Scalar color_G = Scalar(0, 255, 0);
Scalar color_R = Scalar(0, 0, 255);
Scalar color_P = Scalar(255, 0, 255);
Scalar color;
for (int i=0; i<anno_lanes.size(); i++)
{
curr_lane = anno_lanes[i];
if(curr_lane.size() == 2)
{
p_interp = curr_lane;
}
else
{
p_interp = splineSolver.splineInterpTimes(curr_lane, 50);
}
if (anno_match[i] >= 0)
{
color = color_G;
}
else
{
color = color_G;
}
for (int n=0; n<p_interp.size()-1; n++)
{
cv::line(img, p_interp[n], p_interp[n+1], color, width_lane);
cv::line(img2, p_interp[n], p_interp[n+1], color, 2);
}
}
bool detected;
for (int i=0; i<detect_lanes.size(); i++)
{
detected = false;
curr_lane = detect_lanes[i];
if(curr_lane.size() == 2)
{
p_interp = curr_lane;
}
else
{
p_interp = splineSolver.splineInterpTimes(curr_lane, 50);
}
for (int n=0; n<anno_lanes.size(); n++)
{
if (anno_match[n] == i)
{
detected = true;
break;
}
}
if (detected == true)
{
color = color_B;
}
else
{
color = color_R;
}
for (int n=0; n<p_interp.size()-1; n++)
{
cv::line(img, p_interp[n], p_interp[n+1], color, width_lane);
cv::line(img2, p_interp[n], p_interp[n+1], color, 2);
}
}
namedWindow("visualize", 1);
imshow("visualize", img);
namedWindow("visualize2", 1);
imshow("visualize2", img2);
}
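For reference, the reduction from accumulated TP/FP/FN to the metrics reported at the end of `main` can be sketched in Python (this sketch assumes non-empty denominators; the C++ getters return -1 when there are no positives):

```python
def prf(tp, fp, fn):
    # Precision, recall, and F-measure as computed by the evaluator.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

print(prf(80, 20, 10))  # precision 0.8, recall ~0.889, F ~0.842
```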
================================================
FILE: utils/lane_evaluation/CULane/src_origin/lane_compare.cpp
================================================
/*************************************************************************
> File Name: lane_compare.cpp
> Author: Xingang Pan, Jun Li
> Mail: px117@ie.cuhk.edu.hk
> Created Time: Fri Jul 15 10:26:32 2016
************************************************************************/
#include "lane_compare.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc.hpp>
double LaneCompare::get_lane_similarity(const vector<Point2f> &lane1, const vector<Point2f> &lane2)
{
if(lane1.size()<2 || lane2.size()<2)
{
cerr<<"lane size must be greater or equal to 2"<<endl;
return 0;
}
Mat im1 = Mat::zeros(im_height, im_width, CV_8UC1);
Mat im2 = Mat::zeros(im_height, im_width, CV_8UC1);
// draw lines on im1 and im2
vector<Point2f> p_interp1;
vector<Point2f> p_interp2;
if(lane1.size() == 2)
{
p_interp1 = lane1;
}
else
{
p_interp1 = splineSolver.splineInterpTimes(lane1, 50);
}
if(lane2.size() == 2)
{
p_interp2 = lane2;
}
else
{
p_interp2 = splineSolver.splineInterpTimes(lane2, 50);
}
Scalar color_white = Scalar(1);
for(int n=0; n<p_interp1.size()-1; n++)
{
cv::line(im1, p_interp1[n], p_interp1[n+1], color_white, lane_width);
}
for(int n=0; n<p_interp2.size()-1; n++)
{
cv::line(im2, p_interp2[n], p_interp2[n+1], color_white, lane_width);
}
double sum_1 = cv::sum(im1).val[0];
double sum_2 = cv::sum(im2).val[0];
double inter_sum = cv::sum(im1.mul(im2)).val[0];
double union_sum = sum_1 + sum_2 - inter_sum;
double iou = inter_sum / union_sum;
return iou;
}
// resize the lane from Size(curr_width, curr_height) to Size(im_width, im_height)
void LaneCompare::resize_lane(vector<Point2f> &curr_lane, int curr_width, int curr_height)
{
if(curr_width == im_width && curr_height == im_height)
{
return;
}
double x_scale = im_width/(double)curr_width;
double y_scale = im_height/(double)curr_height;
for(int n=0; n<curr_lane.size(); n++)
{
curr_lane[n] = Point2f(curr_lane[n].x*x_scale, curr_lane[n].y*y_scale);
}
}
================================================
FILE: utils/lane_evaluation/CULane/src_origin/spline.cpp
================================================
#include <vector>
#include <iostream>
#include "spline.hpp"
using namespace std;
using namespace cv;
vector<Point2f> Spline::splineInterpTimes(const vector<Point2f>& tmp_line, int times) {
vector<Point2f> res;
if(tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
for (int k = 0; k <= times; k++) {
double xi = x1 + double((x2 - x1) * k) / times;
double yi = y1 + double((y2 - y1) * k) / times;
res.push_back(Point2f(xi, yi));
}
}
else if(tmp_line.size() > 2)
{
vector<Func> tmp_func;
tmp_func = this->cal_fun(tmp_line);
if (tmp_func.empty()) {
cout << "in splineInterpTimes: cal_fun failed" << endl;
return res;
}
for(int j = 0; j < tmp_func.size(); j++)
{
double delta = tmp_func[j].h / times;
for(int k = 0; k < times; k++)
{
double t1 = delta*k;
double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);
double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);
res.push_back(Point2f(x1, y1));
}
}
res.push_back(tmp_line[tmp_line.size() - 1]);
}
else {
cerr << "in splineInterpTimes: not enough points" << endl;
}
return res;
}
vector<Point2f> Spline::splineInterpStep(vector<Point2f> tmp_line, double step) {
vector<Point2f> res;
/*
if (tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
for (double yi = std::min(y1, y2); yi < std::max(y1, y2); yi += step) {
double xi;
if (yi == y1) xi = x1;
else xi = (x2 - x1) / (y2 - y1) * (yi - y1) + x1;
res.push_back(Point2f(xi, yi));
}
}*/
if (tmp_line.size() == 2) {
double x1 = tmp_line[0].x;
double y1 = tmp_line[0].y;
double x2 = tmp_line[1].x;
double y2 = tmp_line[1].y;
tmp_line[1].x = (x1 + x2) / 2;
tmp_line[1].y = (y1 + y2) / 2;
tmp_line.push_back(Point2f(x2, y2));
}
if (tmp_line.size() > 2) {
vector<Func> tmp_func;
tmp_func = this->cal_fun(tmp_line);
double ystart = tmp_line[0].y;
double yend = tmp_line[tmp_line.size() - 1].y;
bool down;
if (ystart < yend) down = 1;
else down = 0;
if (tmp_func.empty()) {
cerr << "in splineInterpStep: cal_fun failed" << endl;
}
for(int j = 0; j < tmp_func.size(); j++)
{
for(double t1 = 0; t1 < tmp_func[j].h; t1 += step)
{
double x1 = tmp_func[j].a_x + tmp_func[j].b_x*t1 + tmp_func[j].c_x*pow(t1,2) + tmp_func[j].d_x*pow(t1,3);
double y1 = tmp_func[j].a_y + tmp_func[j].b_y*t1 + tmp_func[j].c_y*pow(t1,2) + tmp_func[j].d_y*pow(t1,3);
res.push_back(Point2f(x1, y1));
}
}
res.push_back(tmp_line[tmp_line.size() - 1]);
}
else {
cerr << "in splineInterpStep: not enough points" << endl;
}
return res;
}
vector<Func> Spline::cal_fun(const vector<Point2f> &point_v)
{
vector<Func> func_v;
int n = point_v.size();
if(n<=2) {
cout << "in cal_fun: point number less than 3" << endl;
return func_v;
}
func_v.resize(point_v.size()-1);
vector<double> Mx(n);
vector<double> My(n);
vector<double> A(n-2);
vector<double> B(n-2);
vector<double> C(n-2);
vector<double> Dx(n-2);
vector<double> Dy(n-2);
vector<double> h(n-1);
//vector<func> func_v(n-1);
for(int i = 0; i < n-1; i++)
{
h[i] = sqrt(pow(point_v[i+1].x - point_v[i].x, 2) + pow(point_v[i+1].y - point_v[i].y, 2));
}
for(int i = 0; i < n-2; i++)
{
A[i] = h[i];
B[i] = 2*(h[i]+h[i+1]);
C[i] = h[i+1];
Dx[i] = 6*( (point_v[i+2].x - point_v[i+1].x)/h[i+1] - (point_v[i+1].x - point_v[i].x)/h[i] );
Dy[i] = 6*( (point_v[i+2].y - point_v[i+1].y)/h[i+1] - (point_v[i+1].y - point_v[i].y)/h[i] );
}
//TDMA
C[0] = C[0] / B[0];
Dx[0] = Dx[0] / B[0];
Dy[0] = Dy[0] / B[0];
for(int i = 1; i < n-2; i++)
{
double tmp = B[i] - A[i]*C[i-1];
C[i] = C[i] / tmp;
Dx[i] = (Dx[i] - A[i]*Dx[i-1]) / tmp;
Dy[i] = (Dy[i] - A[i]*Dy[i-1]) / tmp;
}
Mx[n-2] = Dx[n-3];
My[n-2] = Dy[n-3];
for(int i = n-4; i >= 0; i--)
{
Mx[i+1] = Dx[i] - C[i]*Mx[i+2];
My[i+1] = Dy[i] - C[i]*My[i+2];
}
Mx[0] = 0;
Mx[n-1] = 0;
My[0] = 0;
My[n-1] = 0;
for(int i = 0; i < n-1; i++)
{
func_v[i].a_x = point_v[i].x;
func_v[i].b_x = (point_v[i+1].x - point_v[i].x)/h[i] - (2*h[i]*Mx[i] + h[i]*Mx[i+1]) / 6;
func_v[i].c_x = Mx[i]/2;
func_v[i].d_x = (Mx[i+1] - Mx[i]) / (6*h[i]);
func_v[i].a_y = point_v[i].y;
func_v[i].b_y = (point_v[i+1].y - point_v[i].y)/h[i] - (2*h[i]*My[i] + h[i]*My[i+1]) / 6;
func_v[i].c_y = My[i]/2;
func_v[i].d_y = (My[i+1] - My[i]) / (6*h[i]);
func_v[i].h = h[i];
}
return func_v;
}
================================================
FILE: utils/lane_evaluation/tusimple/lane.py
================================================
import numpy as np
from sklearn.linear_model import LinearRegression
import json
class LaneEval(object):
lr = LinearRegression()
pixel_thresh = 20
pt_thresh = 0.85
@staticmethod
def get_angle(xs, y_samples):
xs, ys = xs[xs >= 0], y_samples[xs >= 0]
if len(xs) > 1:
LaneEval.lr.fit(ys[:, None], xs)
k = LaneEval.lr.coef_[0]
theta = np.arctan(k)
else:
theta = 0
return theta
@staticmethod
def line_accuracy(pred, gt, thresh):
pred = np.array([p if p >= 0 else -100 for p in pred])
gt = np.array([g if g >= 0 else -100 for g in gt])
return np.sum(np.where(np.abs(pred - gt) < thresh, 1., 0.)) / len(gt)
@staticmethod
def bench(pred, gt, y_samples, running_time):
if any(len(p) != len(y_samples) for p in pred):
raise Exception('Format of lanes error.')
if running_time > 200 or len(gt) + 2 < len(pred):
return 0., 0., 1.
angles = [LaneEval.get_angle(np.array(x_gts), np.array(y_samples)) for x_gts in gt]
threshs = [LaneEval.pixel_thresh / np.cos(angle) for angle in angles]
line_accs = []
fp, fn = 0., 0.
matched = 0.
for x_gts, thresh in zip(gt, threshs):
accs = [LaneEval.line_accuracy(np.array(x_preds), np.array(x_gts), thresh) for x_preds in pred]
max_acc = np.max(accs) if len(accs) > 0 else 0.
if max_acc < LaneEval.pt_thresh:
fn += 1
else:
matched += 1
line_accs.append(max_acc)
fp = len(pred) - matched
if len(gt) > 4 and fn > 0:
fn -= 1
s = sum(line_accs)
if len(gt) > 4:
s -= min(line_accs)
return s / max(min(4.0, len(gt)), 1.), fp / len(pred) if len(pred) > 0 else 0., fn / max(min(len(gt), 4.) , 1.)
@staticmethod
def bench_one_submit(pred_file, gt_file):
try:
json_pred = [json.loads(line) for line in open(pred_file).readlines()]
except BaseException as e:
raise Exception('Fail to load json file of the prediction.')
json_gt = [json.loads(line) for line in open(gt_file).readlines()]
if len(json_gt) != len(json_pred):
raise Exception('We do not get the predictions of all the test tasks')
gts = {l['raw_file']: l for l in json_gt}
accuracy, fp, fn = 0., 0., 0.
for pred in json_pred:
if 'raw_file' not in pred or 'lanes' not in pred or 'run_time' not in pred:
raise Exception('raw_file or lanes or run_time not in some predictions.')
raw_file = pred['raw_file']
pred_lanes = pred['lanes']
run_time = pred['run_time']
if raw_file not in gts:
raise Exception('Some raw_file from your predictions do not exist in the test tasks.')
gt = gts[raw_file]
gt_lanes = gt['lanes']
y_samples = gt['h_samples']
try:
a, p, n = LaneEval.bench(pred_lanes, gt_lanes, y_samples, run_time)
except BaseException as e:
raise Exception('Format of lanes error.')
accuracy += a
fp += p
fn += n
num = len(gts)
# the first return parameter is the default ranking parameter
return json.dumps([
{'name': 'Accuracy', 'value': accuracy / num, 'order': 'desc'},
{'name': 'FP', 'value': fp / num, 'order': 'asc'},
{'name': 'FN', 'value': fn / num, 'order': 'asc'}
])
if __name__ == '__main__':
    import sys
    try:
        if len(sys.argv) != 3:
            raise Exception('Invalid input arguments')
        print(LaneEval.bench_one_submit(sys.argv[1], sys.argv[2]))
    except Exception as e:
        # Exception.message does not exist in Python 3; print the exception itself.
        print(e)
        sys.exit(1)
================================================
FILE: utils/lr_scheduler.py
================================================
from torch.optim.lr_scheduler import _LRScheduler
class PolyLR(_LRScheduler):
    def __init__(self, optimizer, pow, max_iter, min_lrs=1e-20, last_epoch=-1, warmup=0):
        """
        :param warmup: number of steps over which to linearly warm up the lr
        """
        self.pow = pow
        self.max_iter = max_iter
        if isinstance(min_lrs, (list, tuple)):
            self.min_lrs = list(min_lrs)
        else:
            self.min_lrs = [min_lrs] * len(optimizer.param_groups)
        assert isinstance(warmup, int), "The type of warmup is incorrect, got {}".format(type(warmup))
        self.warmup = max(warmup, 0)
        super(PolyLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch < self.warmup:
            return [base_lr / self.warmup * (self.last_epoch + 1) for base_lr in self.base_lrs]
        if self.last_epoch < self.max_iter:
            coeff = (1 - (self.last_epoch - self.warmup) / (self.max_iter - self.warmup)) ** self.pow
        else:
            coeff = 0
        return [(base_lr - min_lr) * coeff + min_lr
                for base_lr, min_lr in zip(self.base_lrs, self.min_lrs)]
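The schedule `PolyLR.get_lr` computes is a linear warmup followed by polynomial decay toward `min_lr`. A plain-Python sketch of the formula for a single parameter group (`poly_lr` is an illustrative name; `step` plays the role of `last_epoch`):

```python
def poly_lr(base_lr, step, max_iter, power, min_lr=1e-20, warmup=0):
    # Linear warmup for the first `warmup` steps, then polynomial decay
    # from base_lr toward min_lr, reaching min_lr at max_iter.
    if step < warmup:
        return base_lr / warmup * (step + 1)
    if step < max_iter:
        coeff = (1 - (step - warmup) / (max_iter - warmup)) ** power
    else:
        coeff = 0.0
    return (base_lr - min_lr) * coeff + min_lr

print(poly_lr(0.1, 0, 100, 0.9, warmup=5))    # first warmup step: 0.02
print(poly_lr(0.1, 100, 100, 0.9, warmup=5))  # fully decayed: 1e-20
```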
================================================
FILE: utils/prob2lines/getLane.py
================================================
import cv2
import numpy as np
def getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape=None):
"""
Arguments:
----------
prob_map: prob map for single lane, np array size (h, w)
resize_shape: reshape size target, (H, W)
Return:
----------
coords: x coords sampled bottom-up every y_px_gap px; 0 where the lane is absent; in resized shape
"""
if resize_shape is None:
resize_shape = prob_map.shape
h, w = prob_map.shape
H, W = resize_shape
coords = np.zeros(pts)
for i in range(pts):
y = int((H - 10 - i * y_px_gap) * h / H)
if y < 0:
break
line = prob_map[y, :]
idx = np.argmax(line)
if line[idx] > thresh:
coords[i] = int(idx / w * W)
if (coords > 0).sum() < 2:
coords = np.zeros(pts)
return coords
def prob2lines_tusimple(seg_pred, exist, resize_shape=None, smooth=True, y_px_gap=10, pts=None, thresh=0.3):
"""
Arguments:
----------
seg_pred: np.array size (5, h, w)
resize_shape: reshape size target, (H, W)
exist: list of existence, e.g. [0, 1, 1, 0]
smooth: whether to smooth the probability or not
y_px_gap: y pixel gap for sampling
pts: how many points for one lane
thresh: probability threshold
Return:
----------
coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]
"""
if resize_shape is None:
resize_shape = seg_pred.shape[1:] # seg_pred (5, h, w)
_, h, w = seg_pred.shape
H, W = resize_shape
coordinates = []
if pts is None:
pts = round(H / 2 / y_px_gap)
seg_pred = np.ascontiguousarray(np.transpose(seg_pred, (1, 2, 0)))
for i in range(4):
prob_map = seg_pred[..., i + 1]
if smooth:
prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)
if exist[i] > 0:
coords = getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape)
if (coords>0).sum() < 2:
continue
coordinates.append(
[[coords[j], H - 10 - j * y_px_gap] if coords[j] > 0 else [-1, H - 10 - j * y_px_gap] for j in
range(pts)])
return coordinates
def getLane_CULane(prob_map, y_px_gap, pts, thresh, resize_shape=None):
"""
Arguments:
----------
prob_map: prob map for single lane, np array size (h, w)
resize_shape: reshape size target, (H, W)
Return:
----------
coords: x coords sampled bottom-up every y_px_gap px; 0 where the lane is absent; in resized shape
"""
if resize_shape is None:
resize_shape = prob_map.shape
h, w = prob_map.shape
H, W = resize_shape
coords = np.zeros(pts)
for i in range(pts):
y = int(h - i * y_px_gap / H * h - 1)
if y < 0:
break
line = prob_map[y, :]
idx = np.argmax(line)
if line[idx] > thresh:
coords[i] = int(idx / w * W)
if (coords > 0).sum() < 2:
coords = np.zeros(pts)
return coords
def prob2lines_CULane(seg_pred, exist, resize_shape=None, smooth=True, y_px_gap=20, pts=None, thresh=0.3):
"""
Arguments:
----------
seg_pred: np.array size (5, h, w)
resize_shape: reshape size target, (H, W)
exist: list of existence, e.g. [0, 1, 1, 0]
smooth: whether to smooth the probability or not
y_px_gap: y pixel gap for sampling
pts: how many points for one lane
thresh: probability threshold
Return:
----------
coordinates: [x, y] list of lanes, e.g.: [ [[9, 569], [50, 549]] ,[[630, 569], [647, 549]] ]
"""
if resize_shape is None:
resize_shape = seg_pred.shape[1:] # seg_pred (5, h, w)
_, h, w = seg_pred.shape
H, W = resize_shape
coordinates = []
if pts is None:
pts = round(H / 2 / y_px_gap)
seg_pred = np.ascontiguousarray(np.transpose(seg_pred, (1, 2, 0)))
for i in range(4):
prob_map = seg_pred[..., i + 1]
if smooth:
prob_map = cv2.blur(prob_map, (9, 9), borderType=cv2.BORDER_REPLICATE)
if exist[i] > 0:
coords = getLane_CULane(prob_map, y_px_gap, pts, thresh, resize_shape)
if (coords>0).sum() < 2:
continue
coordinates.append([[coords[j], H - 1 - j * y_px_gap] for j in range(pts) if coords[j] > 0])
return coordinates
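Both `getLane_*` functions extract at most one x coordinate per sampled row by thresholded argmax over that row of the probability map, then rescale to the target width. The per-row step, sketched without NumPy (`row_max_x` is an illustrative name):

```python
def row_max_x(prob_row, thresh, w, W):
    # Take the argmax of one probability row; if it clears the threshold,
    # map it from map width w to target width W, else report 0 (absent).
    x = max(range(len(prob_row)), key=lambda i: prob_row[i])
    return int(x / w * W) if prob_row[x] > thresh else 0

row = [0.0, 0.1, 0.8, 0.2]
print(row_max_x(row, 0.3, 4, 1640))  # argmax at index 2 -> 2/4 * 1640 = 820
print(row_max_x(row, 0.9, 4, 1640))  # peak below threshold -> 0
```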
================================================
FILE: utils/tensorboard.py
================================================
# Code copied from pytorch-tutorial https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/04-utils/tensorboard/logger.py
import tensorflow as tf
import numpy as np
from PIL import Image
try:
from StringIO import StringIO # Python 2.7
except ImportError:
from io import BytesIO # Python 3.x
class TensorBoard(object):
def __init__(self, log_dir):
"""Create a summary writer logging to log_dir."""
self.writer = tf.summary.FileWriter(log_dir)
def scalar_summary(self, tag, value, step):
"""Log a scalar variable."""
summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])
self.writer.add_summary(summary, step)
def image_summary(self, tag, images, step):
"""Log a list of images."""
img_summaries = []
for i, img in enumerate(images):
            # Write the image to an in-memory buffer
            try:
                s = StringIO()
            except NameError:  # Python 3: StringIO was never imported
                s = BytesIO()
# scipy.misc.toimage(img).save(s, format="png")
Image.fromarray(img).save(s, format='png')
# Create an Image object
img_sum = tf.Summary.Image(encoded_image_string=s.getvalue(),
height=img.shape[0],
width=img.shape[1])
# Create a Summary value
img_summaries.append(tf.Summary.Value(tag='%s/%d' % (tag, i), image=img_sum))
# Create and write Summary
summary = tf.Summary(value=img_summaries)
self.writer.add_summary(summary, step)
def histo_summary(self, tag, values, step, bins=1000):
"""Log a histogram of the tensor of values."""
# Create a histogram using numpy
counts, bin_edges = np.histogram(values, bins=bins)
# Fill the fields of the histogram proto
hist = tf.HistogramProto()
hist.min = float(np.min(values))
hist.max = float(np.max(values))
hist.num = int(np.prod(values.shape))
hist.sum = float(np.sum(values))
hist.sum_squares = float(np.sum(values**2))
# Drop the start of the first bin
bin_edges = bin_edges[1:]
# Add bin edges and counts
for edge in bin_edges:
hist.bucket_limit.append(edge)
for c in counts:
hist.bucket.append(c)
# Create and write Summary
summary = tf.Summary(value=[tf.Summary.Value(tag=tag, histo=hist)])
self.writer.add_summary(summary, step)
self.writer.flush()
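The histogram statistics that `histo_summary` packs into a `HistogramProto` can be reproduced without TensorFlow. A minimal sketch with illustrative values (not from the repo):

```python
import numpy as np

# Mirror the fields histo_summary fills on the HistogramProto.
values = np.array([1.0, 2.0, 2.0, 3.0])
counts, bin_edges = np.histogram(values, bins=4)
stats = {
    "min": float(values.min()),
    "max": float(values.max()),
    "num": int(values.size),
    "sum": float(values.sum()),
    "sum_squares": float((values ** 2).sum()),
}
# Drop the leftmost edge: the proto stores only upper bucket limits,
# matching the bin_edges[1:] slice in histo_summary above.
bucket_limits = bin_edges[1:].tolist()
```

Note that `np.histogram` makes the rightmost bin inclusive of the maximum, so the value 3.0 lands in the last bucket.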
================================================
FILE: utils/transforms/__init__.py
================================================
from .transforms import *
from .data_augmentation import *
================================================
FILE: utils/transforms/data_augmentation.py
================================================
import numpy as np
from utils.transforms.transforms import CustomTransform
class RandomFlip(CustomTransform):
def __init__(self, prob_x=0, prob_y=0):
"""
Arguments:
----------
prob_x: range [0, 1], probability to use horizontal flip, setting to 0 means disabling flip
prob_y: range [0, 1], probability to use vertical flip
"""
self.prob_x = prob_x
self.prob_y = prob_y
def __call__(self, sample):
img = sample.get('img').copy()
segLabel = sample.get('segLabel', None)
if segLabel is not None:
segLabel = segLabel.copy()
flip_x = np.random.choice([False, True], p=(1 - self.prob_x, self.prob_x))
flip_y = np.random.choice([False, True], p=(1 - self.prob_y, self.prob_y))
if flip_x:
img = np.ascontiguousarray(np.flip(img, axis=1))
if segLabel is not None:
segLabel = np.ascontiguousarray(np.flip(segLabel, axis=1))
if flip_y:
img = np.ascontiguousarray(np.flip(img, axis=0))
if segLabel is not None:
segLabel = np.ascontiguousarray(np.flip(segLabel, axis=0))
_sample = sample.copy()
_sample['img'] = img
_sample['segLabel'] = segLabel
return _sample
class Darkness(CustomTransform):
def __init__(self, coeff):
        assert coeff >= 1., "Darkness coefficient must be at least 1"
self.coeff = coeff
def __call__(self, sample):
img = sample.get('img')
coeff = np.random.uniform(1., self.coeff)
img = (img.astype('float32') / coeff).astype('uint8')
_sample = sample.copy()
_sample['img'] = img
return _sample
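The `Darkness` transform divides every pixel by a random coefficient in `[1, coeff]`, which can only darken the image. A standalone sketch with illustrative values (the shape and pixel value below are assumptions, not repo data):

```python
import numpy as np

# Sketch of the Darkness idea: scale pixels down by a random factor.
rng = np.random.default_rng(0)
coeff_max = 2.0
img = np.full((2, 2, 3), 200, dtype=np.uint8)

coeff = rng.uniform(1.0, coeff_max)           # coeff in [1, 2]
dark = (img.astype("float32") / coeff).astype("uint8")
```

Because `coeff >= 1`, the output never exceeds the input, and with `coeff <= 2` a pixel of 200 stays at or above 100; the round trip through `float32` avoids uint8 overflow during the division.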
================================================
FILE: utils/transforms/transforms.py
================================================
import cv2
import numpy as np
import torch
from torchvision.transforms import Normalize as Normalize_th
class CustomTransform:
def __call__(self, *args, **kwargs):
raise NotImplementedError
def __str__(self):
return self.__class__.__name__
def __eq__(self, name):
return str(self) == name
def __iter__(self):
def iter_fn():
for t in [self]:
yield t
return iter_fn()
def __contains__(self, name):
for t in self.__iter__():
if isinstance(t, Compose):
if name in t:
return True
elif name == t:
return True
return False
class Compose(CustomTransform):
"""
All transform in Compose should be able to accept two non None variable, img and boxes
"""
def __init__(self, *transforms):
self.transforms = [*transforms]
def __call__(self, sample):
for t in self.transforms:
sample = t(sample)
return sample
def __iter__(self):
return iter(self.transforms)
def modules(self):
yield self
for t in self.transforms:
if isinstance(t, Compose):
for _t in t.modules():
yield _t
else:
yield t
class Resize(CustomTransform):
def __init__(self, size):
if isinstance(size, int):
size = (size, size)
self.size = size #(W, H)
def __call__(self, sample):
img = sample.get('img')
segLabel = sample.get('segLabel', None)
img = cv2.resize(img, self.size, interpolation=cv2.INTER_CUBIC)
if segLabel is not None:
segLabel = cv2.resize(segLabel, self.size, interpolation=cv2.INTER_NEAREST)
_sample = sample.copy()
_sample['img'] = img
_sample['segLabel'] = segLabel
return _sample
def reset_size(self, size):
if isinstance(size, int):
size = (size, size)
self.size = size
class RandomResize(Resize):
"""
Resize to (w, h), where w randomly samples from (minW, maxW) and h randomly samples from (minH, maxH)
"""
def __init__(self, minW, maxW, minH=None, maxH=None, batch=False):
if minH is None or maxH is None:
minH, maxH = minW, maxW
super(RandomResize, self).__init__((minW, minH))
self.minW = minW
self.maxW = maxW
self.minH = minH
self.maxH = maxH
self.batch = batch
def random_set_size(self):
w = np.random.randint(self.minW, self.maxW+1)
h = np.random.randint(self.minH, self.maxH+1)
self.reset_size((w, h))
class Rotation(CustomTransform):
def __init__(self, theta):
self.theta = theta
def __call__(self, sample):
img = sample.get('img')
segLabel = sample.get('segLabel', None)
u = np.random.uniform()
degree = (u-0.5) * self.theta
R = cv2.getRotationMatrix2D((img.shape[1]//2, img.shape[0]//2), degree, 1)
img = cv2.warpAffine(img, R, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
if segLabel is not None:
segLabel = cv2.warpAffine(segLabel, R, (segLabel.shape[1], segLabel.shape[0]), flags=cv2.INTER_NEAREST)
_sample = sample.copy()
_sample['img'] = img
_sample['segLabel'] = segLabel
return _sample
def reset_theta(self, theta):
self.theta = theta
class Normalize(CustomTransform):
def __init__(self, mean, std):
self.transform = Normalize_th(mean, std)
def __call__(self, sample):
img = sample.get('img')
img = self.transform(img)
_sample = sample.copy()
_sample['img'] = img
return _sample
class ToTensor(CustomTransform):
def __init__(self, dtype=torch.float):
self.dtype=dtype
def __call__(self, sample):
img = sample.get('img')
segLabel = sample.get('segLabel', None)
exist = sample.get('exist', None)
img = img.transpose(2, 0, 1)
img = torch.from_numpy(img).type(self.dtype) / 255.
if segLabel is not None:
segLabel = torch.from_numpy(segLabel).type(torch.long)
if exist is not None:
exist = torch.from_numpy(exist).type(torch.float32) # BCEloss requires float tensor
_sample = sample.copy()
_sample['img'] = img
_sample['segLabel'] = segLabel
_sample['exist'] = exist
return _sample
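The pipeline pattern above, where every transform takes and returns a sample dict and `Compose` chains them, can be sketched in isolation. The `Scale` transform below is a hypothetical stand-in for `Resize`/`ToTensor`, not a class from the repo:

```python
import numpy as np

class Scale:
    """Toy transform: multiply the image by a factor, dict in, dict out."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, sample):
        out = sample.copy()          # never mutate the caller's dict
        out["img"] = sample["img"] * self.factor
        return out

class Compose:
    """Minimal chain of sample-dict transforms, as in the class above."""
    def __init__(self, *transforms):
        self.transforms = list(transforms)

    def __call__(self, sample):
        for t in self.transforms:
            sample = t(sample)
        return sample

pipeline = Compose(Scale(2.0), Scale(0.5))
result = pipeline({"img": np.ones((2, 2))})
```

Copying the dict before writing (`sample.copy()`) is the same convention every `__call__` in the file follows: keys the transform does not touch pass through unchanged.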
================================================
SYMBOL INDEX (106 symbols across 19 files)
================================================
FILE: dataset/CULane.py
class CULane (line 9) | class CULane(Dataset):
method __init__ (line 10) | def __init__(self, path, image_set, transforms=None):
method createIndex (line 23) | def createIndex(self):
method createIndex_test (line 37) | def createIndex_test(self):
method __getitem__ (line 46) | def __getitem__(self, idx):
method __len__ (line 64) | def __len__(self):
method collate (line 68) | def collate(batch):
FILE: dataset/Tusimple.py
class Tusimple (line 10) | class Tusimple(Dataset):
method __init__ (line 21) | def __init__(self, path, image_set, transforms=None):
method createIndex (line 33) | def createIndex(self):
method __getitem__ (line 50) | def __getitem__(self, idx):
method __len__ (line 68) | def __len__(self):
method generate_label (line 71) | def generate_label(self):
method _gen_label_for_json (line 101) | def _gen_label_for_json(self, image_set):
method collate (line 175) | def collate(batch):
FILE: demo_test.py
function parse_args (line 16) | def parse_args():
function main (line 25) | def main():
FILE: model.py
class SCNN (line 7) | class SCNN(nn.Module):
method __init__ (line 8) | def __init__(
method forward (line 31) | def forward(self, img, seg_gt=None, exist_gt=None):
method message_passing_forward (line 53) | def message_passing_forward(self, x):
method message_passing_once (line 60) | def message_passing_once(self, x, conv, vertical=True, reverse=False):
method net_init (line 85) | def net_init(self, input_size, ms_ks):
method weight_init (line 139) | def weight_init(self):
FILE: test_CULane.py
function parse_args (line 16) | def parse_args():
function split_path (line 34) | def split_path(path):
FILE: test_tusimple.py
function parse_args (line 16) | def parse_args():
function split_path (line 34) | def split_path(path):
FILE: train.py
function parse_args (line 19) | def parse_args():
function train (line 71) | def train(epoch):
function val (line 125) | def val(epoch):
function main (line 197) | def main():
FILE: utils/lane_evaluation/CULane/include/counter.hpp
class Counter (line 15) | class Counter
method Counter (line 18) | Counter(int _im_width, int _im_height, double _iou_threshold=0.4, int ...
FILE: utils/lane_evaluation/CULane/include/hungarianGraph.hpp
type pipartiteGraph (line 6) | struct pipartiteGraph {
method matchDfs (line 12) | bool matchDfs(int u) {
method resize (line 26) | void resize(int leftNum, int rightNum) {
method match (line 38) | void match() {
FILE: utils/lane_evaluation/CULane/include/lane_compare.hpp
class LaneCompare (line 12) | class LaneCompare{
type CompareMode (line 14) | enum CompareMode{
method LaneCompare (line 19) | LaneCompare(int _im_width, int _im_height, int _lane_width = 10, Compa...
FILE: utils/lane_evaluation/CULane/include/spline.hpp
type Func (line 11) | struct Func {
class Spline (line 22) | class Spline {
FILE: utils/lane_evaluation/CULane/src/evaluate.cpp
function help (line 27) | void help(void)
function get_precision (line 49) | double get_precision(int tp, int fp, int fn)
function get_recall (line 60) | double get_recall(int tp, int fp, int fn)
function main (line 85) | int main(int argc, char **argv)
function read_lane_file (line 239) | void read_lane_file(const string &file_name, vector<vector<Point2f> > &l...
function visualize (line 265) | void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lane...
function update_tp_fp_fn (line 343) | void update_tp_fp_fn(int &tp, int &fp, int &fn, int _tp, int _fp, int _fn)
function worker_func (line 351) | void worker_func(vector<string> &lines_list_v, int start, int end, int &...
FILE: utils/lane_evaluation/CULane/src_origin/evaluate.cpp
function help (line 23) | void help(void)
function main (line 43) | int main(int argc, char **argv)
function read_lane_file (line 179) | void read_lane_file(const string &file_name, vector<vector<Point2f> > &l...
function visualize (line 205) | void visualize(string &full_im_name, vector<vector<Point2f> > &anno_lane...
FILE: utils/lane_evaluation/tusimple/lane.py
class LaneEval (line 6) | class LaneEval(object):
method get_angle (line 12) | def get_angle(xs, y_samples):
method line_accuracy (line 23) | def line_accuracy(pred, gt, thresh):
method bench (line 29) | def bench(pred, gt, y_samples, running_time):
method bench_one_submit (line 56) | def bench_one_submit(pred_file, gt_file):
FILE: utils/lr_scheduler.py
class PolyLR (line 4) | class PolyLR(_LRScheduler):
method __init__ (line 5) | def __init__(self, optimizer, pow, max_iter, min_lrs=1e-20, last_epoch...
method get_lr (line 19) | def get_lr(self):
FILE: utils/prob2lines/getLane.py
function getLane_tusimple (line 5) | def getLane_tusimple(prob_map, y_px_gap, pts, thresh, resize_shape=None):
function prob2lines_tusimple (line 35) | def prob2lines_tusimple(seg_pred, exist, resize_shape=None, smooth=True,...
function getLane_CULane (line 76) | def getLane_CULane(prob_map, y_px_gap, pts, thresh, resize_shape=None):
function prob2lines_CULane (line 105) | def prob2lines_CULane(seg_pred, exist, resize_shape=None, smooth=True, y...
FILE: utils/tensorboard.py
class TensorBoard (line 12) | class TensorBoard(object):
method __init__ (line 14) | def __init__(self, log_dir):
method scalar_summary (line 18) | def scalar_summary(self, tag, value, step):
method image_summary (line 23) | def image_summary(self, tag, images, step):
method histo_summary (line 48) | def histo_summary(self, tag, values, step, bins=1000):
FILE: utils/transforms/data_augmentation.py
class RandomFlip (line 8) | class RandomFlip(CustomTransform):
method __init__ (line 9) | def __init__(self, prob_x=0, prob_y=0):
method __call__ (line 19) | def __call__(self, sample):
class Darkness (line 43) | class Darkness(CustomTransform):
method __init__ (line 44) | def __init__(self, coeff):
method __call__ (line 48) | def __call__(self, sample):
FILE: utils/transforms/transforms.py
class CustomTransform (line 8) | class CustomTransform:
method __call__ (line 9) | def __call__(self, *args, **kwargs):
method __str__ (line 12) | def __str__(self):
method __eq__ (line 15) | def __eq__(self, name):
method __iter__ (line 18) | def __iter__(self):
method __contains__ (line 24) | def __contains__(self, name):
class Compose (line 34) | class Compose(CustomTransform):
method __init__ (line 38) | def __init__(self, *transforms):
method __call__ (line 41) | def __call__(self, sample):
method __iter__ (line 46) | def __iter__(self):
method modules (line 49) | def modules(self):
class Resize (line 59) | class Resize(CustomTransform):
method __init__ (line 60) | def __init__(self, size):
method __call__ (line 65) | def __call__(self, sample):
method reset_size (line 78) | def reset_size(self, size):
class RandomResize (line 84) | class RandomResize(Resize):
method __init__ (line 88) | def __init__(self, minW, maxW, minH=None, maxH=None, batch=False):
method random_set_size (line 98) | def random_set_size(self):
class Rotation (line 104) | class Rotation(CustomTransform):
method __init__ (line 105) | def __init__(self, theta):
method __call__ (line 108) | def __call__(self, sample):
method reset_theta (line 124) | def reset_theta(self, theta):
class Normalize (line 128) | class Normalize(CustomTransform):
method __init__ (line 129) | def __init__(self, mean, std):
method __call__ (line 132) | def __call__(self, sample):
class ToTensor (line 142) | class ToTensor(CustomTransform):
method __init__ (line 143) | def __init__(self, dtype=torch.float):
method __call__ (line 146) | def __call__(self, sample):