Vision Large Language Models (VLMs) hold great promise for autonomous driving. Although VLMs can comprehend driving scenes and make decisions in complex scenarios, their integration into safety-critical autonomous driving systems poses serious security risks. In this paper, we propose BadVLMDriver, the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects. Unlike existing backdoor attacks against VLMs that rely on digital modifications, BadVLMDriver uses common physical items, such as a red balloon, to induce unsafe actions like sudden acceleration, highlighting a significant real-world threat to autonomous vehicle safety. To execute BadVLMDriver, we develop an automated pipeline that uses natural language instructions to generate backdoor training samples with embedded malicious behaviors. This approach allows flexible selection of triggers and behaviors, enhancing the stealth and practicality of the attack across diverse scenarios. We conduct extensive experiments evaluating BadVLMDriver on two representative VLMs, five trigger objects, and two types of malicious backdoor behaviors. BadVLMDriver achieves a 92% attack success rate in inducing sudden acceleration when the vehicle encounters a pedestrian holding a red balloon. BadVLMDriver thus not only demonstrates a critical security risk but also underscores the urgent need for robust defense mechanisms against such vulnerabilities in autonomous driving technologies.