In recent years, implicit online dense mapping methods have achieved high-quality reconstruction results, showcasing great potential in robotics, AR/VR, and digital twin applications. However, existing methods struggle with slow texture modeling, which limits their real-time performance. To address these limitations, we propose a NeRF-based dense mapping method that enables faster and higher-quality reconstruction. To improve texture modeling, we introduce quasi-heterogeneous feature grids, which inherit the fast querying ability of uniform feature grids while adapting to varying levels of texture complexity. In addition, we present a gradient-aided coverage-maximizing strategy for keyframe selection, so that the selected keyframes focus more closely on richly textured regions while covering weakly textured areas more broadly. Experimental results demonstrate that our method surpasses existing NeRF-based approaches in texture fidelity, geometry accuracy, and time consumption. The code for our method will be available at: https://github.com/SYSU-STAR/H3-Mapping.