Surface reconstruction from multi-view images is a core challenge in 3D vision. Recent studies have explored signed distance fields (SDFs) within Neural Radiance Fields (NeRF) to achieve high-fidelity surface reconstruction. However, these approaches suffer from slow training and rendering speeds compared to 3D Gaussian splatting (3DGS). Current state-of-the-art techniques attempt to fuse depth information to extract geometry from 3DGS, but they frequently produce incomplete reconstructions and fragmented surfaces. In this paper, we introduce GSurf, a novel end-to-end method for learning a signed distance field directly from Gaussian primitives. The continuous and smooth nature of the SDF addresses common issues in the 3DGS family, such as holes caused by noisy or missing depth data. By using Gaussian splatting for rendering, GSurf avoids the redundant volume rendering typically required by other integrations of GS and SDFs. Consequently, GSurf achieves faster training and rendering speeds while delivering 3D reconstruction quality comparable to that of neural implicit surface methods such as VolSDF and NeuS. Experimental results across various benchmark datasets demonstrate the effectiveness of our method in producing high-fidelity 3D reconstructions.