Adversarial attacks pose significant challenges for 3D object recognition, especially in multi-view settings where objects can be observed from varying angles. This paper introduces View-Invariant Adversarial Perturbations (VIAP), a novel method for crafting robust adversarial examples that remain effective across multiple viewpoints. Unlike traditional methods, VIAP enables targeted attacks that manipulate recognition systems into classifying objects as specific, pre-determined labels, all with a single universal perturbation. Using a dataset of 1,210 images spanning 121 diverse rendered 3D objects, we demonstrate the effectiveness of VIAP in both targeted and untargeted settings. Our untargeted attacks produce a single adversarial perturbation robust to 3D transformations, while our targeted attacks achieve strong results, with top-1 accuracies exceeding 95% across a range of epsilon values. These findings highlight VIAP's potential for real-world applications, such as testing the robustness of 3D recognition systems. The proposed method sets a new benchmark for view-invariant adversarial robustness, advancing adversarial machine learning for 3D object recognition.
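The core idea of a view-invariant universal perturbation can be illustrated with a toy sketch: one perturbation is optimized so that its averaged gradient over many views of the same object drives the prediction toward a chosen target label, with PGD-style signed steps projected into an epsilon ball. The linear-softmax "model", the permutation-based "view transforms", and all dimensions below are illustrative assumptions, not the paper's actual architecture or rendering pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a recognition model: softmax over a fixed linear map.
# W, b, and the dimensions are assumptions for illustration only.
NUM_CLASSES, DIM = 5, 64
W = rng.normal(size=(NUM_CLASSES, DIM))
b = np.zeros(NUM_CLASSES)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad_x(x, target):
    """Cross-entropy toward `target` and its gradient w.r.t. the input x."""
    p = softmax(W @ x + b)
    loss = -np.log(p[target] + 1e-12)
    grad = W.T @ (p - np.eye(NUM_CLASSES)[target])  # d(loss)/dx
    return loss, grad

# Each "view" is modeled as a fixed permutation of the input features --
# a crude stand-in for rendering the object from a different angle.
views = [rng.permutation(DIM) for _ in range(6)]

def attack_loss(x, delta, target):
    """Mean targeted loss over all views sharing the one perturbation."""
    return float(np.mean(
        [loss_and_grad_x((x + delta)[v], target)[0] for v in views]))

# Universal targeted perturbation: one delta, gradient averaged over views,
# signed descent steps clipped to the epsilon ball (PGD-style).
x = rng.normal(size=DIM)
target, eps, step = 3, 0.5, 0.05
delta = np.zeros(DIM)
for _ in range(50):
    g = np.zeros(DIM)
    for v in views:
        _, gx = loss_and_grad_x((x + delta)[v], target)
        g_full = np.empty(DIM)
        g_full[v] = gx  # undo the permutation: gradient back in delta space
        g += g_full
    # minimize the targeted loss, stay within the L-infinity budget
    delta = np.clip(delta - step * np.sign(g / len(views)), -eps, eps)
```

Because the single `delta` is updated with the gradient averaged over all views, it trades off per-view effectiveness for robustness across viewpoints, which is the property the abstract claims for VIAP.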