Repository: OpenGVLab/DriveMLM
Branch: main
Commit: 3b3adaf8a74e
Files: 1
Total size: 2.4 KB
Directory structure:
gitextract_yim4kxgr/
└── README.md
================================================
FILE CONTENTS
================================================
================================================
FILE: README.md
================================================
# DriveMLM
<!-- ## Description -->
Large language models (LLMs) have opened up new possibilities for intelligent agents, endowing them with human-like thinking and cognitive abilities. In this work, we delve into the potential of LLMs in autonomous driving (AD). We introduce DriveMLM, an LLM-based AD framework that can perform closed-loop autonomous driving in realistic simulators. To this end, (1) we bridge the gap between language decisions and vehicle control commands by standardizing the decision states according to an off-the-shelf motion planning module; (2) we employ a multi-modal LLM (MLLM) to model the behavior-planning module of a modular AD system, which takes driving rules, user commands, and inputs from various sensors (e.g., camera, LiDAR) as input and makes driving decisions with accompanying explanations; this model can be plugged into existing AD systems such as Apollo for closed-loop driving; (3) we design an effective data engine to collect a dataset that includes decision states and corresponding explanation annotations for model training and evaluation. Extensive experiments show that our model achieves a driving score of 76.1 on CARLA Town05 Long, surpassing the Apollo baseline by 4.7 points under the same settings, demonstrating its effectiveness. We hope this work can serve as a baseline for autonomous driving with LLMs.
<img width="600" alt="image" src="assest/fig_motiv.jpg">
## 🗓️ Schedule
- [ ] Release dataset and annotations
- [ ] Release code and models
## 🏠 Overview
<img width="800" alt="image" src="assest/fig_main.jpg">
## 🎁 Major Features
* Following human instruction. <br> <img width="400" alt="image" src="assest/vis_1.jpg"> <br>
* Handling more scenarios. <br> <img width="800" alt="image" src="assest/vis_2.jpg"> <br>
* Examples on nuScenes. <br> <img width="800" alt="image" src="assest/vis_3.jpg"> <br>
## 🎫 License
This project is released under the [Apache 2.0 license](LICENSE).
## 🖊️ Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{wang2023drivemlm,
title={DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving},
author={Wang, Wenhai and Xie, Jiangwei and Hu, ChuanYang and Zou, Haoming and Fan, Jianan and Tong, Wenwen and Wen, Yang and Wu, Silei and Deng, Hanming and Li, Zhiqi and others},
journal={arXiv preprint arXiv:2312.09245},
year={2023}
}
```