Since the introduction of NeRFs, considerable attention has been focused on improving their training and inference times, leading to the development of Fast-NeRF models. Despite demonstrating impressive rendering speed and quality, the rapid convergence of such models poses challenges for further improving reconstruction quality. Common strategies to improve rendering quality involve augmenting model parameters or increasing the number of sampled points. However, these computationally intensive approaches encounter limitations in achieving significant quality enhancements. This study introduces a model-agnostic framework inspired by Sparsely-Gated Mixture of Experts to enhance rendering quality without escalating computational complexity. Our approach enables specialization in rendering different scene components by employing a mixture of experts with varying resolutions. We present a novel gate formulation designed to maximize expert capabilities and propose a resolution-based routing technique to effectively induce sparsity and decompose scenes. Our work significantly improves reconstruction quality while maintaining competitive performance.
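To make the sparsely-gated routing idea concrete, below is a minimal sketch of top-k gating in the style of Sparsely-Gated Mixture of Experts: each input activates only its k highest-scoring experts, so most experts are skipped and compute stays roughly constant. The names `top_k_gate` and `W_g` are illustrative, not the paper's implementation, and the resolution-based routing described above is abstracted away.

```python
import numpy as np

def top_k_gate(x, W_g, k=1):
    """Sparse gating: route an input to its top-k experts only.

    x   : (d,) input feature (e.g. an encoded sample point)
    W_g : (d, n_experts) gating weight matrix (hypothetical name)
    Returns an (n_experts,) weight vector with exactly k nonzero
    entries that sum to 1, i.e. a sparse mixture over experts.
    """
    logits = x @ W_g                         # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    gate = np.zeros_like(logits)
    # softmax restricted to the selected experts -> sparse, normalized weights
    e = np.exp(logits[top] - logits[top].max())
    gate[top] = e / e.sum()
    return gate

# Toy setup: the experts could be radiance fields at different resolutions.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
W_g = rng.normal(size=(d, n_experts))
x = rng.normal(size=d)
g = top_k_gate(x, W_g, k=1)   # only one expert receives this sample
```

Because unselected experts get a gate weight of exactly zero, they can be skipped entirely at inference time, which is how sparsity keeps the cost independent of the total number of experts.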