Modern Neural Radiance Fields (NeRFs) learn a mapping from position to volumetric density using proposal network samplers. Compared to the coarse-to-fine sampling approach with two NeRFs, this offers significant potential for acceleration through lower network capacity. Given that NeRFs utilize most of their network capacity to estimate radiance, they could store valuable density information in their parameters or their deep features. To investigate this proposition, we take a step back and analyze large, trained ReLU-MLPs used in coarse-to-fine sampling. Building on our novel activation visualization method, we find that trained NeRFs, Mip-NeRFs, and proposal network samplers map samples with high density to local minima along a ray in activation feature space. We show how these large MLPs can be accelerated by transforming intermediate activations into a weight estimate, without any modifications to the training protocol or the network architecture. With our approach, we can reduce the computational requirements of trained NeRFs by up to 50% with only a slight loss in rendering quality. Extensive experimental evaluation on a variety of datasets and architectures demonstrates the effectiveness of our approach. Consequently, our methodology provides valuable insight into the inner workings of NeRFs.
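The core observation above — that high-density samples appear as local minima along a ray in activation feature space, and that such minima can be turned into a sampling-weight estimate — can be sketched as follows. This is a minimal illustration under assumptions of our own: the per-sample score (distance of each activation vector to the ray-level mean activation) and the bump-based weight construction are hypothetical proxies, not the paper's actual transform.

```python
import numpy as np

def ray_feature_minima(activations):
    """Find samples whose activation vectors are local minima along a ray.

    `activations` has shape (n_samples, n_features), one row per ray sample.
    As a hypothetical proxy score, each sample's distance to the ray-level
    mean activation is used; interior samples scoring below both neighbors
    are returned as candidate high-density locations.
    """
    mean = activations.mean(axis=0)
    dist = np.linalg.norm(activations - mean, axis=1)  # per-sample score
    # A sample is an interior local minimum if it scores below both neighbors.
    interior = (dist[1:-1] < dist[:-2]) & (dist[1:-1] < dist[2:])
    return np.flatnonzero(interior) + 1  # shift back to original indices

def weights_from_minima(n_samples, minima, sharpness=4.0):
    """Turn minima indices into a normalized sampling-weight estimate
    by placing a peaked Gaussian bump at each detected minimum."""
    t = np.arange(n_samples)[:, None]
    bumps = np.exp(-sharpness * (t - minima[None, :]) ** 2)
    w = bumps.sum(axis=1)
    return w / w.sum()

# Toy ray with 1-D "activations": samples 1 and 3 sit closest to the mean.
acts = np.array([[0.0], [1.0], [5.0], [1.0], [0.0]])
minima = ray_feature_minima(acts)       # -> indices [1, 3]
weights = weights_from_minima(len(acts), minima)
```

The resulting weights could then drive importance sampling for the fine pass, replacing a dedicated proposal network evaluation for rays whose trained activations already encode the density structure.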