In autonomous driving (AD), accurate perception is indispensable for safe and secure driving. Due to its safety criticality, the security of AD perception has been widely studied. Among the attacks on AD perception, physical adversarial object evasion attacks are especially severe. However, we find that all existing works evaluate their attack effect only at the level of the targeted AI component, not at the system level, i.e., with the full system semantics and context such as the complete AD pipeline. This raises a critical research question: can these existing attacks actually achieve system-level attack effects (e.g., traffic rule violations) in real-world AD contexts? In this work, we conduct the first measurement study of whether and how effectively existing attack designs can cause system-level effects, focusing on STOP sign evasion attacks due to their popularity and severity. Our evaluation shows that none of the representative prior works achieves any system-level effect. We identify two design limitations in the prior works: 1) an object size distribution in pixel sampling that is inconsistent with the physical model, and 2) a lack of consideration of the vehicle plant model and the AD system model. We then propose SysAdv, a novel system-driven attack design for the AD context; our evaluation shows that it significantly improves system-level effects, increasing the violation rate by around 70%.