3D Gaussian Splatting (3DGS) enables photorealistic rendering but suffers from artefacts due to sparse Structure-from-Motion (SfM) initialisation. To address this limitation, we propose GP-GS, a Gaussian Process (GP) based densification framework for 3DGS optimisation. GP-GS formulates point cloud densification as a continuous regression problem, where a GP learns a local mapping from 2D pixel coordinates to 3D position and colour attributes. An adaptive neighbourhood-based sampling strategy generates candidate pixels for inference, while GP-predicted uncertainty is used to filter unreliable predictions, reducing noise and preserving geometric structure. Extensive experiments on synthetic and real-world benchmarks demonstrate that GP-GS consistently improves reconstruction quality and rendering fidelity, achieving up to 1.12 dB PSNR improvement over strong baselines.