As Neural Radiance Fields (NeRFs) have emerged as a powerful tool for 3D scene representation and novel view synthesis, protecting their intellectual property (IP) from unauthorized use is becoming increasingly crucial. In this work, we aim to protect the IP of NeRFs by injecting adversarial perturbations that disrupt their unauthorized applications. However, perturbing the 3D geometry of NeRFs can easily deform the underlying scene structure and thus substantially degrade rendering quality, leading prior attempts to avoid geometric perturbations altogether or to restrict them to explicit representations such as meshes. To overcome this limitation, we introduce a learnable sensitivity that quantifies the spatially varying impact of geometric perturbations on rendering quality. Building on this, we propose AegisRF, a novel framework comprising two components: a Perturbation Field, which injects adversarial perturbations into the pre-rendering outputs (color and volume density) of NeRF models to fool an unauthorized downstream target model, and a Sensitivity Field, which learns this sensitivity to adaptively constrain geometric perturbations, preserving rendering quality while disrupting unauthorized use. Our experimental evaluations demonstrate the generalized applicability of AegisRF across diverse downstream tasks and modalities, including multi-view image classification and voxel-based 3D localization, while maintaining high visual fidelity. Codes are available at https://github.com/wkim97/AegisRF.
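To make the core mechanism concrete, the following is a minimal NumPy sketch of the idea described above: adversarial offsets are added to a NeRF's pre-rendering outputs (per-point color and volume density), while a per-point sensitivity value in [0, 1] attenuates the geometric (density) perturbation where rendering quality would suffer most. All array shapes, bounds (`eps_c`, `eps_s`), and the specific attenuation rule are illustrative assumptions, not the paper's actual learned fields or losses, which are parameterized as neural networks and trained adversarially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for NeRF pre-rendering outputs at N sampled 3D points.
N = 1024
color = rng.uniform(0.0, 1.0, size=(N, 3))    # per-point RGB
density = rng.uniform(0.0, 5.0, size=(N, 1))  # per-point volume density

# Perturbation Field (hypothetical): bounded adversarial offsets for color
# and density; in AegisRF these come from a learned network, not noise.
eps_c, eps_s = 0.05, 0.5
delta_color = rng.uniform(-eps_c, eps_c, size=(N, 3))
delta_density = rng.uniform(-eps_s, eps_s, size=(N, 1))

# Sensitivity Field (hypothetical): a value in [0, 1] per point estimating
# how strongly a geometric perturbation there degrades rendering quality.
sensitivity = rng.uniform(0.0, 1.0, size=(N, 1))

# Adaptive constraint: attenuate density perturbations at sensitive points,
# leave them near full strength elsewhere; keep outputs in valid ranges.
adv_color = np.clip(color + delta_color, 0.0, 1.0)
adv_density = np.maximum(density + (1.0 - sensitivity) * delta_density, 0.0)

# The attenuated geometric change never exceeds the raw perturbation.
raw_change = np.abs(delta_density)
applied_change = np.abs(adv_density - density)
print(bool(np.all(applied_change <= raw_change + 1e-12)))
```

The design point the sketch illustrates is that the constraint is spatially adaptive: rather than a single global bound on geometric perturbations, each 3D point gets its own budget, so structurally fragile regions stay intact while less sensitive regions absorb the adversarial signal.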