3D Gaussian Splatting has achieved impressive performance in novel view synthesis with real-time rendering capabilities. However, reconstructing high-quality surfaces with fine details using 3D Gaussians remains a challenging task. In this work, we introduce GausSurf, a novel approach to high-quality surface reconstruction that employs geometry guidance from multi-view consistency in texture-rich areas and normal priors in texture-less areas of a scene. We observe that a scene can be divided into two primary regions: 1) texture-rich and 2) texture-less areas. To enforce multi-view consistency in texture-rich areas, we enhance the reconstruction quality by incorporating a traditional patch-match-based Multi-View Stereo (MVS) approach to guide the geometry optimization in an iterative scheme. This scheme allows for mutual reinforcement between the optimization of Gaussians and patch-match refinement, which significantly improves the reconstruction results and accelerates the training process. Meanwhile, for the texture-less areas, we leverage normal priors from a pre-trained normal estimation model to guide optimization. Extensive experiments on the DTU and Tanks and Temples datasets demonstrate that our method surpasses state-of-the-art methods in terms of reconstruction quality and computation time.