Semantic segmentation on point clouds is critical for 3D scene understanding. However, sparse and irregular point distributions provide limited appearance evidence, making geometry-only features insufficient to distinguish objects with similar shapes but distinct appearances (e.g., color, texture, material). We propose Gaussian-to-Point (G2P), which transfers appearance-aware attributes from 3D Gaussian Splatting to point clouds for more discriminative and appearance-consistent segmentation. G2P addresses the misalignment between optimized Gaussians and the original point geometry by establishing point-wise correspondences. By leveraging Gaussian opacity attributes, we resolve the geometric ambiguity that limits existing models, and Gaussian scale attributes enable precise boundary localization in complex 3D scenes. Extensive experiments demonstrate that our approach achieves superior performance on standard benchmarks and shows significant improvements on geometrically challenging classes, all without any 2D or language supervision.
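The core transfer step can be illustrated with a minimal sketch. This is not the paper's implementation: the function name is hypothetical, and a brute-force nearest-neighbor lookup stands in for whatever correspondence scheme G2P actually uses; it only shows the general idea of aligning per-Gaussian attributes (opacity, scale, color) to the original point geometry.

```python
import numpy as np

def transfer_gaussian_attributes(points, centers, attrs):
    """Copy each point's nearest Gaussian's attribute vector (illustrative only).

    points  : (N, 3) original point-cloud coordinates
    centers : (M, 3) optimized 3D Gaussian centers
    attrs   : (M, D) per-Gaussian attributes, e.g. [opacity, scale, ...]
    returns : (N, D) appearance features aligned point-wise to the cloud
    """
    # Pairwise squared distances between every point and every Gaussian center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Point-wise correspondence: index of the closest Gaussian per point.
    nearest = d2.argmin(axis=1)
    return attrs[nearest]

# Toy example: 2 points, 3 Gaussians carrying [opacity, scale] attributes.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
ctr = np.array([[0.1, 0.0, 0.0], [0.9, 1.0, 1.0], [5.0, 5.0, 5.0]])
att = np.array([[0.9, 0.02], [0.5, 0.10], [0.1, 0.50]])
feat = transfer_gaussian_attributes(pts, ctr, att)  # shape (2, 2)
```

The resulting per-point attribute vectors could then be concatenated with geometric features before segmentation; in practice a k-d tree or GPU k-NN would replace the O(NM) distance matrix.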