Deep learning-based perception pipelines in autonomous ground vehicles are vulnerable to both adversarial manipulation and network-layer disruption. We present a systematic, on-hardware experimental evaluation of five attack classes on low-cost autonomous vehicle platforms (JetRacer and Yahboom): FGSM, PGD, man-in-the-middle (MitM), denial-of-service (DoS), and phantom attacks. Using a standardized 13-second experimental protocol and comprehensive automated logging, we systematically characterize three dimensions of attack behavior: (i) control deviation, (ii) computational cost, and (iii) runtime responsiveness. Our analysis reveals that distinct attack classes produce consistent and separable "fingerprints" across these dimensions: perception attacks (MitM output manipulation and phantom projection) generate high steering-deviation signatures with nominal computational overhead, PGD produces combined steering-perturbation and computational-load signatures across multiple dimensions, and DoS exhibits frame-rate and latency degradation signatures with minimal control-plane perturbation. We demonstrate that our fingerprinting framework generalizes across both digital attacks (adversarial perturbations, network manipulation) and environmental attacks (projected false features), providing a foundation for attack-aware monitoring systems and targeted, signature-based defense mechanisms.