Advances in neural radiance fields (NeRFs) have enabled 3D scene reconstruction and novel view synthesis. Yet efficiently editing these representations while retaining photorealism remains an open challenge. Recent methods face three primary limitations: they are too slow for interactive use, lack precision at object boundaries, and struggle to ensure multi-view consistency. We introduce IReNe to address these limitations, enabling swift, near real-time color editing in NeRF. Given a pre-trained NeRF model and a single training image with user-applied color edits, IReNe adjusts the network parameters in seconds. The edited model then generates novel scene views that faithfully reproduce the color changes from the training image while controlling object boundaries and view-dependent effects. Object-boundary control is achieved by integrating a trainable segmentation module into the model. Efficiency comes from retraining only the weights of the last network layer. We observe that neurons in this layer can be classified into those responsible for view-dependent appearance and those contributing to diffuse appearance. We introduce an automated approach to identify these neuron types and fine-tune only the weights of the diffuse neurons, which further accelerates training and ensures consistent color edits across views. Thorough validation on a new dataset with edited object colors shows significant quantitative and qualitative improvements over competitors, with speed-ups of 5× to 500×.
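The core idea of selective fine-tuning can be illustrated with a minimal sketch. It assumes a simple variance-based criterion (neurons whose activations barely change with view direction are treated as diffuse) and a plain masked gradient step; the function names, the `threshold` parameter, and the classification rule are hypothetical stand-ins, not the paper's actual procedure.

```python
def classify_neurons(activations, threshold=1e-3):
    """activations: list of per-view activation vectors for one 3D point.
    Hypothetical criterion: a neuron whose response barely varies across
    view directions is labeled diffuse; strongly varying ones are
    treated as view-dependent. Returns a boolean mask (True = diffuse)."""
    n = len(activations[0])
    diffuse = []
    for j in range(n):
        vals = [row[j] for row in activations]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        diffuse.append(var < threshold)
    return diffuse

def finetune_last_layer(weights, grads, diffuse, lr=1e-2):
    """One gradient step that updates only last-layer weights attached to
    diffuse neurons; view-dependent weights stay frozen, which is what
    keeps color edits consistent across views."""
    return [
        [w - lr * g if diffuse[j] else w
         for j, (w, g) in enumerate(zip(w_row, g_row))]
        for w_row, g_row in zip(weights, grads)
    ]
```

In a real implementation the same effect is typically achieved by masking gradients (or zeroing rows of the optimizer update) on the last layer of the trained NeRF MLP rather than editing weight lists directly.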