Neural Radiance Fields (NeRF) have achieved notable success in creating powerful 3D media representations, owing to their exceptional reconstruction capabilities. However, the computational demands of volume rendering pose significant challenges during model training. Existing acceleration techniques often involve redesigning the model architecture, which limits compatibility across different frameworks. Furthermore, these methods tend to overlook the substantial memory costs they incur. In response to these challenges, we introduce an expansive supervision mechanism that efficiently balances computational load, rendering quality, and flexibility for neural radiance field training. The mechanism operates by selectively rendering a small but crucial subset of pixels and expanding their values to estimate the error across the entire area at each iteration. Compared with conventional supervision, our method effectively bypasses redundant rendering processes, yielding notable reductions in both time and memory consumption. Experimental results demonstrate that integrating expansive supervision into existing state-of-the-art acceleration frameworks achieves 69% memory savings and 42% time savings, with negligible compromise in visual quality.
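The core idea above can be sketched in a few lines: render only a sampled subset of pixels, measure their error against the ground truth, and expand that subset error as an estimate of the full-image loss. This is a minimal illustration under simplifying assumptions; the function names are hypothetical, the subset here is drawn uniformly at random (the paper's selection of "crucial" pixels and its expansion rule are more sophisticated), and a real NeRF renderer would replace `render_fn`.

```python
import numpy as np

def expansive_supervision_loss(render_fn, gt_image, sample_frac=0.25, rng=None):
    """Estimate the full-image training error by rendering only a
    sampled subset of pixels and expanding their mean error to the
    whole area (hypothetical sketch, not the paper's exact scheme)."""
    rng = rng or np.random.default_rng(0)
    h, w = gt_image.shape[:2]
    n_total = h * w
    n_sample = max(1, int(sample_frac * n_total))
    # Select a small subset of pixel coordinates; here uniform random,
    # whereas the actual mechanism picks a "crucial" subset.
    flat_idx = rng.choice(n_total, size=n_sample, replace=False)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    # Render only the sampled pixels, skipping the rest of the image.
    rendered = render_fn(ys, xs)
    subset_err = np.mean((rendered - gt_image[ys, xs]) ** 2)
    # Expand: treat the subset's mean error as the full-area estimate.
    return subset_err

# Toy usage: a "renderer" that returns ground truth plus a constant bias.
gt = np.zeros((8, 8, 3))
perfect = lambda ys, xs: gt[ys, xs]
biased = lambda ys, xs: gt[ys, xs] + 0.5
print(expansive_supervision_loss(perfect, gt))  # → 0.0
print(expansive_supervision_loss(biased, gt))   # → 0.25
```

Because the supervision signal is computed from a fraction of the pixels, each iteration avoids the volume-rendering cost of the remaining area, which is where the reported time and memory savings come from.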