Autonomous driving increasingly relies on Visual Question Answering (VQA) to enable vehicles to understand complex surroundings by analyzing visual inputs and textual queries. A paramount concern for VQA in this domain is the stringent requirement for low latency and real-time processing, as delays directly impact real-world safety in this safety-critical application. However, current state-of-the-art VQA models, particularly large vision-language models (VLMs), often prioritize performance over computational efficiency. These models typically process dense patch tokens for every frame, leading to prohibitive computational costs (FLOPs) and significant inference latency, especially with long video sequences. This limits their practical deployment in real-time autonomous driving scenarios. To tackle this issue, we propose SRC-Pipeline, an efficient VLM framework for autonomous driving VQA tasks. It learns to compress the tokens of early frames into a small number of high-level tokens while retaining full patch tokens for recent frames. Experiments on autonomous driving video question answering tasks show that our approach achieves a 66% reduction in FLOPs while maintaining comparable performance, enabling VLMs to operate more effectively in real-time, safety-critical autonomous driving settings.
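To make the compression idea concrete, the following is a minimal sketch of one plausible realization: early-frame patch tokens are summarized into a few learned tokens via cross-attention, while the most recent frames keep all patch tokens before the combined sequence is fed to the VLM. All names, dimensions, and the choice of cross-attention pooling are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class EarlyFrameCompressor(nn.Module):
    """Hypothetical sketch: compress the patch tokens of an early frame into a
    few learned summary tokens via cross-attention. Dimensions are illustrative."""

    def __init__(self, dim: int = 768, num_summary_tokens: int = 8, num_heads: int = 8):
        super().__init__()
        # Learned query tokens that summarize one early frame
        self.summary_queries = nn.Parameter(torch.randn(num_summary_tokens, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_patches, dim) dense patch tokens of one early frame
        b = frame_tokens.size(0)
        queries = self.summary_queries.unsqueeze(0).expand(b, -1, -1)
        summary, _ = self.cross_attn(queries, frame_tokens, frame_tokens)
        return summary  # (batch, num_summary_tokens, dim)


def build_video_tokens(frames, compressor, num_recent: int = 2) -> torch.Tensor:
    """frames: list of (batch, num_patches, dim) tensors, oldest frame first.
    Early frames are compressed; the last `num_recent` frames keep full patch tokens."""
    early, recent = frames[:-num_recent], frames[-num_recent:]
    compressed = [compressor(f) for f in early]
    # Concatenated token sequence passed to the VLM backbone
    return torch.cat(compressed + recent, dim=1)
```

Under these assumptions, replacing hundreds of patch tokens per early frame with a handful of summary tokens is where the FLOPs savings would come from, since the language-model attention cost grows with the total visual token count.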