3D Gaussian Splatting (3DGS) has emerged as a promising approach for 3D scene representation, offering reduced computational overhead compared to Neural Radiance Fields (NeRF). However, 3DGS is susceptible to high-frequency artifacts and performs poorly under sparse viewpoint conditions, limiting its applicability in robotics and computer vision. To address these limitations, we introduce SVS-GS, a novel framework for Sparse Viewpoint Scene reconstruction that integrates a 3D Gaussian smoothing filter to suppress artifacts. Furthermore, our approach incorporates a Depth Gradient Profile Prior (DGPP) loss with a dynamic depth mask to sharpen edges, and a 2D diffusion prior with a Score Distillation Sampling (SDS) loss to enhance geometric consistency in novel view synthesis. Experimental evaluations on the MipNeRF-360 and SeaThru-NeRF datasets demonstrate that SVS-GS markedly improves 3D reconstruction from sparse viewpoints, offering a robust and efficient solution for scene understanding in robotics and computer vision applications.
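The abstract does not detail the 3D Gaussian smoothing filter, but a standard formulation of such a filter (used elsewhere for anti-aliasing, e.g. in Mip-Splatting) relies on the fact that convolving a Gaussian primitive with an isotropic low-pass Gaussian simply adds a scaled identity to its covariance, bounding each primitive's minimum extent and suppressing high-frequency artifacts. A minimal NumPy sketch of this general idea, where `smoothing_var` is a hypothetical parameter and not taken from the paper:

```python
import numpy as np

def smooth_gaussian_covariance(cov: np.ndarray, smoothing_var: float) -> np.ndarray:
    """Convolve a 3D Gaussian (covariance `cov`) with an isotropic
    low-pass Gaussian of variance `smoothing_var`.

    Convolution of two Gaussians adds their covariances, so the
    smoothed primitive has covariance cov + smoothing_var * I.
    This enforces a minimum spatial extent per primitive, which
    damps high-frequency (aliasing-like) artifacts.
    """
    return cov + smoothing_var * np.eye(3)

# A nearly degenerate (needle-thin) Gaussian gains a minimum extent:
cov = np.diag([1.0, 1.0, 1e-6])  # almost flat along z
smoothed = smooth_gaussian_covariance(cov, smoothing_var=0.1)
print(np.diag(smoothed))
```

This is only an illustration of the covariance-addition identity; the actual filter in SVS-GS may differ in parameterization and where it is applied in the rendering pipeline.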