Abstract representations of 3D scenes play a crucial role in computer vision, enabling applications ranging from mapping, localization, and surface reconstruction to higher-level tasks such as SLAM and rendering. Among these representations, line segments are widely used because they succinctly capture the structural features of a scene. However, existing 3D line reconstruction methods face significant challenges: methods relying on 2D projections suffer from instability caused by multi-view matching errors and occlusions, while direct 3D approaches are hampered by noise and sparsity in 3D point clouds. This paper introduces LineGS, a novel method that combines geometry-guided 3D line reconstruction with a 3D Gaussian splatting model to address these challenges and improve representation quality. The method leverages the high density of Gaussian points along scene edges to refine and optimize initial line segments generated by traditional geometric approaches. By aligning these segments with the underlying geometric features of the scene, LineGS achieves a more precise and reliable representation of 3D structure. Experiments show significant improvements in both geometric accuracy and model compactness over baseline methods.
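The core refinement idea stated above (snapping an initial line segment onto the dense band of Gaussian centers along a scene edge) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the support radius, and the PCA-based re-fit are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of geometry-guided segment refinement: collect Gaussian
# centers near an initial 3D segment, re-fit a line to them by PCA, and project
# the original endpoints onto the refined line. All names and the `radius`
# threshold are illustrative assumptions, not the method's actual API.
import numpy as np

def refine_segment(p0, p1, centers, radius=0.1):
    """Re-fit the segment (p0, p1) to Gaussian centers within `radius` of it."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.linalg.norm(d)
    u = d / length                       # unit direction of the initial segment
    # Perpendicular distance of each center to the line through p0 along u.
    v = np.asarray(centers, float) - p0
    t = v @ u                            # projection parameter along the line
    perp = v - np.outer(t, u)            # perpendicular offset of each center
    near = (np.linalg.norm(perp, axis=1) < radius) \
           & (t > -radius) & (t < length + radius)
    pts = np.asarray(centers, float)[near]
    if len(pts) < 2:
        return p0, p1                    # not enough support: keep the segment
    # PCA line fit: mean of supporting centers + principal direction.
    mean = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - mean)
    axis = vh[0]
    # Project the original endpoints onto the refined line.
    q0 = mean + ((p0 - mean) @ axis) * axis
    q1 = mean + ((p1 - mean) @ axis) * axis
    return q0, q1
```

For example, a segment hovering slightly off a row of edge-aligned Gaussian centers is pulled onto the line those centers define, while its endpoints keep their positions along the edge.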