Neural Radiance Fields (NeRFs) exhibit unmatched fidelity on large, real-world scenes. A common approach for scaling NeRFs is to partition the scene into regions, each of which is assigned its own parameters. When implemented naively, such an approach is limited by poor test-time scaling and inconsistent appearance and geometry. We instead propose InterNeRF, a novel architecture for rendering a target view using a subset of the model's parameters. Our approach enables out-of-core training and rendering, increasing total model capacity with only a modest increase in training time. We demonstrate significant improvements in multi-room scenes while remaining competitive on standard benchmarks.
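The core idea of rendering a view from a parameter subset can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual architecture: it assumes the scene is partitioned into a regular grid of regions, each with its own parameter block, and that rendering a target view only requires the blocks for regions near the camera, so the rest can stay out of core (e.g., on disk). The grid size, region size, and selection radius below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual method): a scene partitioned
# into a 3D grid of regions, each holding its own parameter block. Rendering
# a target view touches only regions near the camera, so the remaining
# blocks can stay out of core.

GRID = 4           # regions per axis (assumed)
REGION_SIZE = 1.0  # world-space edge length of each region (assumed)

def region_centers(grid=GRID, size=REGION_SIZE):
    """Centers of a grid x grid x grid partition of the scene."""
    coords = (np.arange(grid) + 0.5) * size
    return np.stack(
        np.meshgrid(coords, coords, coords, indexing="ij"), axis=-1
    ).reshape(-1, 3)

def active_regions(camera_pos, radius=1.5):
    """Indices of the regions whose parameter blocks this view needs."""
    dists = np.linalg.norm(region_centers() - camera_pos, axis=1)
    return np.flatnonzero(dists <= radius)

cam = np.array([0.5, 0.5, 0.5])
idx = active_regions(cam)
# Only these parameter blocks would be paged in for rendering this view.
print(len(idx), "of", GRID**3, "regions active")
```

Under these assumptions, each view activates a small fraction of the grid, so total capacity grows with the number of regions while per-view memory stays roughly constant.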