Autonomous vehicles (AVs), especially vision-based AVs, are rapidly being deployed without human operators. Because AVs operate in safety-critical environments, understanding their robustness in adversarial settings is an important research problem. Prior physical adversarial attacks on vision-based AVs predominantly target immediate safety failures (e.g., a crash, a traffic-rule violation, or a transient lane departure) by inducing a short-lived perception or control error. This paper demonstrates a qualitatively different risk: a long-horizon route integrity compromise, in which an attacker gradually steers a victim AV away from its intended route and toward an attacker-chosen destination while the victim continues to drive "normally." This poses a danger not only to the victim vehicle itself but also to any passengers inside it. We design and implement JackZebra, the first adversarial framework that performs route-level hijacking of a vision-based end-to-end driving stack using a physically plausible attacker vehicle with a reconfigurable display mounted on its rear. The central challenge is temporal persistence: adversarial influence must remain effective across changing viewpoints, lighting, weather, traffic, and the victim's continual replanning -- without triggering conspicuous failures. Our key insight is to treat route hijacking as a closed-loop control problem and to convert adversarial patches into steering primitives that can be selected online via an interactive adjustment loop. The patches are further optimized against worst-case background and sensor variations so that their adversarial effect on the victim persists. Our evaluation shows that JackZebra successfully hijacks victim vehicles, causing them to deviate from their original routes and stop at adversarial destinations with a high success rate.
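To make the closed-loop framing concrete, the following is a minimal conceptual sketch (not the authors' implementation) of an interactive adjustment loop in which precomputed adversarial patches act as discrete steering primitives selected online to reduce the victim's heading error toward the attacker's desired route. All names, values, and the idealized plant response are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SteeringPrimitive:
    """A precomputed adversarial patch with an expected per-step effect."""
    name: str           # hypothetical identifier for the patch
    induced_yaw: float  # assumed per-step heading change (rad) it induces

# Hypothetical primitive library: each patch would be pre-optimized against
# worst-case background and sensor variation, per the abstract.
PRIMITIVES = [
    SteeringPrimitive("drift_left",  +0.02),
    SteeringPrimitive("keep_lane",    0.00),
    SteeringPrimitive("drift_right", -0.02),
]

def select_primitive(heading_error: float) -> SteeringPrimitive:
    """Greedy one-step selection: pick the primitive whose expected effect
    best cancels the current heading error."""
    return min(PRIMITIVES, key=lambda p: abs(heading_error + p.induced_yaw))

def hijack_loop(initial_error: float, steps: int) -> float:
    """Closed-loop sketch: observe deviation, display the chosen primitive,
    and model the victim's (idealized) response; returns residual error."""
    error = initial_error
    for _ in range(steps):
        patch = select_primitive(error)
        # display_on_rear_screen(patch)  # physical actuation, omitted here
        error += patch.induced_yaw       # idealized plant response
    return error

residual = hijack_loop(initial_error=-0.1, steps=10)
```

The greedy selector stands in for whatever online policy the framework actually uses; the point of the sketch is only the feedback structure: sense deviation, choose a primitive, actuate, repeat.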