Neural Radiance Field (NeRF) represents a significant advancement in computer vision, offering implicit neural-network-based scene representation and novel view synthesis capabilities. Its applications span diverse fields including robotics, urban mapping, autonomous navigation, and virtual/augmented reality, some of which are considered high-risk AI applications. However, despite its widespread adoption, the robustness and security of NeRF remain largely unexplored. In this study, we contribute to this area by introducing the Illusory Poisoning Attack against Neural Radiance Fields (IPA-NeRF). This attack embeds a hidden backdoor view into NeRF, causing it to produce a predetermined output, i.e., an illusion, when presented with the specified backdoor view, while maintaining normal performance on standard inputs. Our attack is specifically designed to deceive users or downstream models at a particular position while ensuring that no abnormality in NeRF is detectable from other viewpoints. Experimental results demonstrate the effectiveness of our Illusory Poisoning Attack, which successfully presents the desired illusion at the specified viewpoint without impacting other views. Notably, we achieve this attack by introducing small perturbations solely into the training set. The code can be found at https://github.com/jiang-wenxiang/IPA-NeRF.