Multimodal large language models (MLLMs) have emerged as a prominent area of interest within the research community, given their proficiency in processing and reasoning over non-textual data such as images and videos. This study extends the application of MLLMs to autonomous driving by introducing DriveGPT4, a novel interpretable end-to-end autonomous driving system based on LLMs. Capable of processing multi-frame video inputs and textual queries, DriveGPT4 interprets vehicle actions, provides the corresponding reasoning, and effectively answers a diverse range of user questions. Furthermore, DriveGPT4 predicts low-level vehicle control signals in an end-to-end fashion. These capabilities are achieved through a bespoke visual instruction tuning dataset tailored for autonomous driving, combined with a mix-finetuning training strategy. DriveGPT4 represents the first effort to leverage LLMs for an interpretable end-to-end autonomous driving solution. Evaluations on the BDD-X dataset demonstrate the superior qualitative and quantitative performance of DriveGPT4. Additionally, fine-tuning on domain-specific data enables DriveGPT4 to achieve comparable or even better results on autonomous driving grounding than GPT4-V.