3D Gaussian Splatting (3DGS) has shown significant advantages in novel view synthesis (NVS), particularly in achieving high rendering speeds and high-quality results. However, its geometric accuracy in 3D reconstruction remains limited due to the lack of explicit geometric constraints during optimization. This paper introduces CDGS, a confidence-aware depth regularization approach developed to enhance 3DGS. We leverage multi-cue confidence maps derived from monocular depth estimates and sparse Structure-from-Motion (SfM) depth to adaptively adjust depth supervision during optimization. Our method demonstrates improved geometric detail preservation in early training stages and achieves competitive performance in both NVS quality and geometric accuracy. Experiments on the publicly available Tanks and Temples benchmark dataset show that our method achieves more stable convergence behavior and more accurate geometric reconstruction, with improvements of up to 2.31 dB in PSNR for NVS and consistently lower geometric errors in M3C2 distance metrics. Notably, our method reaches F-scores comparable to the original 3DGS with only 50% of the training iterations. We expect this work to facilitate the development of efficient and accurate 3D reconstruction systems for real-world applications such as digital twin creation, heritage preservation, and forestry.
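The confidence-aware depth supervision described above can be illustrated with a minimal sketch. This is an assumed formulation, not the paper's exact method: the function name, the two confidence cues (agreement with sparse SfM depth, and down-weighting near depth edges where monocular estimates are typically unreliable), and the L1 supervision term are all illustrative choices.

```python
import numpy as np

def confidence_weighted_depth_loss(rendered_depth, mono_depth, sfm_depth, sfm_mask):
    """Illustrative sketch of a confidence-aware depth regularization term.

    All names and cue choices here are assumptions for illustration,
    not the exact formulation used by CDGS.
    """
    # Cue 1: agreement between monocular depth and sparse SfM depth,
    # evaluated only at pixels where SfM provides points (sfm_mask).
    rel_err = np.abs(mono_depth - sfm_depth) / (sfm_depth + 1e-6)
    agree = np.where(sfm_mask, np.exp(-rel_err), np.ones_like(mono_depth))

    # Cue 2: down-weight high-gradient (edge) regions, where monocular
    # depth estimates tend to be less reliable.
    gy, gx = np.gradient(mono_depth)
    grad_conf = np.exp(-np.hypot(gx, gy))

    # Multi-cue confidence map in [0, 1].
    confidence = agree * grad_conf

    # Confidence-weighted L1 depth supervision.
    return float(np.mean(confidence * np.abs(rendered_depth - mono_depth)))
```

In this sketch the confidence map simply rescales a per-pixel depth loss, so unreliable regions contribute less gradient signal during optimization; the paper's adaptive adjustment over training iterations is not modeled here.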