In this paper, we propose a novel framework for enhancing visual comprehension in autonomous driving systems by integrating visual language models (VLMs) with an additional visual perception module specialised in object detection. We extend the Llama-Adapter architecture by incorporating a YOLOS-based detection network alongside the CLIP perception network, addressing limitations in object detection and localisation. Our approach introduces camera ID-separators to improve multi-view processing, which is crucial for comprehensive environmental awareness. Experiments on the DriveLM visual question answering challenge demonstrate significant improvements over baseline models, with enhanced performance on the ChatGPT score, BLEU score, and CIDEr metrics, indicating that model answers lie closer to the ground truth. Our method represents a promising step towards more capable and interpretable autonomous driving systems. We also discuss possible safety enhancements enabled by the detection modality.
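The camera ID-separator idea mentioned above can be sketched minimally: each camera view's visual tokens are prefixed with a dedicated separator token before concatenation, so the language model can attribute features to the correct viewpoint. This is an illustrative assumption, not the paper's implementation; the camera names follow nuScenes conventions (which underlie DriveLM), and the `build_multiview_sequence` helper and `<CAM_*>` token format are hypothetical.

```python
# Hypothetical sketch of multi-view token sequencing with camera
# ID-separators. Camera channel names follow nuScenes conventions;
# the separator format and helper name are illustrative assumptions.

CAMERAS = [
    "CAM_FRONT", "CAM_FRONT_LEFT", "CAM_FRONT_RIGHT",
    "CAM_BACK", "CAM_BACK_LEFT", "CAM_BACK_RIGHT",
]

def build_multiview_sequence(per_camera_tokens):
    """Concatenate per-camera token lists in a fixed camera order,
    prefixing each view with its camera ID-separator token."""
    sequence = []
    for cam in CAMERAS:
        sequence.append(f"<{cam}>")            # camera ID-separator token
        sequence.extend(per_camera_tokens[cam])  # that view's visual tokens
    return sequence

# Usage example with two placeholder feature tokens per camera.
tokens = {cam: [f"{cam.lower()}_feat_{i}" for i in range(2)] for cam in CAMERAS}
seq = build_multiview_sequence(tokens)
```

In an actual VLM the separators would be learnable embeddings rather than strings; the sketch only shows the ordering and delimitation scheme.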