Although Deep Neural Networks (DNNs) have been widely deployed in real-world applications, they remain vulnerable to adversarial examples. Adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their form. Compared with digital attacks, which generate perturbations on digital pixels, physical attacks are more practical in real-world settings. Because of the serious security risks posed by physical adversarial examples, many studies in recent years have evaluated the physical adversarial robustness of DNNs. In this paper, we provide a comprehensive survey of current physical adversarial attacks and defenses in computer vision. We establish a taxonomy that organizes physical attacks by attack task, attack form, and attack method, offering readers a systematic understanding of the topic from multiple perspectives. For physical defenses, we categorize them into pre-processing, in-processing, and post-processing defenses for DNN models, ensuring comprehensive coverage of adversarial defenses. Based on this survey, we discuss the open challenges facing this research field and provide an outlook on future directions.