While Neural Radiance Fields (NeRFs) have demonstrated exceptional quality, their protracted training duration remains a limitation. Generalizable and MVS-based NeRFs, although capable of reducing training time, often trade off rendering quality. This paper presents a novel approach, BoostMVSNeRFs, to enhance the rendering quality of MVS-based NeRFs in large-scale scenes. We first identify limitations of MVS-based NeRF methods, such as restricted viewport coverage and artifacts caused by limited input views. We then address these limitations by proposing a new method that selects and combines multiple cost volumes during volume rendering. Our method requires no training and can adapt to any MVS-based NeRF method in a feed-forward fashion to improve rendering quality. Furthermore, our approach is end-to-end trainable, allowing fine-tuning on specific scenes. We demonstrate the effectiveness of our method through experiments on large-scale datasets, showing significant rendering quality improvements in large-scale scenes and unbounded outdoor scenarios. We release the source code of BoostMVSNeRFs at https://su-terry.github.io/BoostMVSNeRFs/.