Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial examples, raising broad security concerns about their applications. Beyond attacks in the digital world, adversarial examples in the physical world pose significant practical challenges and safety concerns. However, current research on physical adversarial examples (PAEs) lacks a comprehensive understanding of their unique characteristics, which limits their significance and hinders deeper insight. In this paper, we address this gap by thoroughly examining the characteristics of PAEs within a practical workflow encompassing training, manufacturing, and re-sampling processes. By analyzing the links among physical adversarial attacks, we identify manufacturing and re-sampling as the primary sources of the distinct attributes and particularities of PAEs. Leveraging this knowledge, we develop a comprehensive analysis and classification framework for PAEs based on their specific characteristics, covering over 100 studies on physical-world adversarial examples. Furthermore, we investigate defense strategies against PAEs and identify open challenges and opportunities for future research. We aim to provide a fresh, thorough, and systematic understanding of PAEs, thereby promoting the development of robust adversarial learning and its application in open-world scenarios. Within the proposed framework, we also provide the community with a continuously updated list of physical-world adversarial example resources, including papers, code, \etc.