Multimodal large language models (MLLMs) have emerged as a prominent area of interest within the research community, given their proficiency in processing and reasoning over non-textual data such as images and videos. This study extends the application of MLLMs to autonomous driving by introducing DriveGPT4, a novel interpretable end-to-end autonomous driving system based on LLMs. Capable of processing multi-frame video inputs and textual queries, DriveGPT4 interprets vehicle actions, offers the relevant reasoning, and effectively answers a diverse range of user questions. Furthermore, DriveGPT4 predicts low-level vehicle control signals in an end-to-end fashion. These capabilities are achieved through a bespoke visual instruction tuning dataset, specifically tailored for autonomous driving, in conjunction with a mix-finetuning training strategy. DriveGPT4 represents the pioneering effort to leverage LLMs for the development of an interpretable end-to-end autonomous driving solution. Evaluations on the BDD-X dataset demonstrate the superior qualitative and quantitative performance of DriveGPT4. Additionally, fine-tuning on domain-specific data enables DriveGPT4 to achieve comparable or even better results on autonomous driving grounding than GPT4-V. The code and dataset will be publicly available.