Many adversarial attacks on autonomous-driving perception models fail to cause system-level failures once deployed in a full driving stack. The main reason is that, once deployed in a system (e.g., within a simulator), attacks tend to be spatially and temporally short-lived owing to the vehicle's dynamics, and hence rarely influence the vehicle's behaviour. In this paper, we address both limitations by introducing a system-level attack in which multiple dynamic elements (e.g., two pedestrians) carry adversarial patches (e.g., on their clothing) and jointly amplify their effect through coordination and motion. We evaluate our attacks in the CARLA simulator using a state-of-the-art autonomous driving agent. At the system level, single-pedestrian attacks fail in all 10 runs, while dynamic collusion by two pedestrians induces full vehicle stops in up to 50\% of runs; static collusion yields no successful attacks at all. These results show that system-level failures arise only when adversarial signals persist over time and are amplified through coordinated actors, exposing a gap between model-level robustness and end-to-end safety.