Neural Radiance Fields (NeRF) have revolutionized 3D computer vision and graphics, enabling novel view synthesis and influencing sectors such as extended reality and e-commerce. However, NeRF's dependence on extensive data collection, including sensitive scene image data, introduces significant privacy risks when users upload this data for model training. To address this concern, we first propose SplitNeRF, a training framework that incorporates split learning (SL) techniques to enable privacy-preserving collaborative model training between clients and servers without sharing local data. Despite its benefits, we identify vulnerabilities in SplitNeRF by developing two attack methods, Surrogate Model Attack and Scene-aided Surrogate Model Attack, which exploit the shared gradient data and a few leaked scene images to reconstruct private scene information. To counter these threats, we introduce $S^2$NeRF, a secure variant of SplitNeRF that integrates effective defense mechanisms. By injecting decaying noise scaled by the gradient norm into the shared gradient information, $S^2$NeRF preserves privacy while maintaining high utility of the NeRF model. Our extensive evaluations across multiple datasets demonstrate the effectiveness of $S^2$NeRF against privacy breaches, confirming its viability for secure NeRF training in sensitive applications.
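The defense described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of gradient-norm-scaled noise with a geometric decay schedule; the function name, the `sigma0` and `decay` parameters, and the choice of Gaussian noise are all assumptions, not the paper's exact mechanism.

```python
import numpy as np

def perturb_gradient(grad, step, sigma0=0.1, decay=0.99, rng=None):
    """Add Gaussian noise to a shared gradient before sending it to the server.

    The noise standard deviation scales with the gradient's L2 norm and
    decays geometrically with the training step (an assumed schedule),
    so early, information-rich gradients are perturbed most while later
    training retains model utility.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    sigma = sigma0 * (decay ** step) * norm
    return grad + rng.normal(0.0, sigma, size=grad.shape)
```

As the step count grows, `decay ** step` approaches zero and the shared gradient converges to the unperturbed one, which is how a scheme like this can trade off privacy (early in training) against final model utility.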