3D Gaussian Splatting (3DGS) creates a radiance field consisting of 3D Gaussians to represent a scene. With sparse training views, 3DGS easily suffers from overfitting, negatively impacting rendering quality. This paper introduces a new co-regularization perspective for improving sparse-view 3DGS. When training two 3D Gaussian radiance fields simultaneously, we observe that, owing to the randomness of the densification procedure, the two fields exhibit point disagreement and rendering disagreement, both of which can predict reconstruction quality without supervision. We further quantify the two disagreements and demonstrate their negative correlation with accurate reconstruction, which allows us to identify inaccurate reconstruction without access to ground-truth information. Based on this study, we propose CoR-GS, which identifies and suppresses inaccurate reconstruction using the two disagreements: (1) Co-pruning regards Gaussians that exhibit high point disagreement as lying at inaccurate positions and prunes them. (2) Pseudo-view co-regularization regards pixels that exhibit high rendering disagreement as inaccurate and suppresses that disagreement. Results on LLFF, Mip-NeRF360, DTU, and Blender demonstrate that CoR-GS effectively regularizes the scene geometry, reconstructs compact representations, and achieves state-of-the-art novel view synthesis quality under sparse training views.
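The co-pruning idea above can be illustrated with a minimal sketch. The function below is a hypothetical, simplified stand-in for the method, not the authors' implementation: it measures point disagreement as the nearest-neighbor distance between the Gaussian centers of the two radiance fields, and prunes Gaussians whose nearest counterpart in the other field is farther than a threshold `tau` (an assumed hyperparameter).

```python
import numpy as np

def co_prune(centers_a, centers_b, tau):
    """Illustrative co-pruning sketch: a Gaussian with no nearby
    counterpart in the other field shows high point disagreement
    and is treated as inaccurately positioned, so it is pruned.

    centers_a, centers_b: (N, 3) and (M, 3) arrays of Gaussian centers
    from the two co-trained radiance fields; tau: distance threshold.
    """
    # Pairwise distances between the two sets of centers.
    d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
    # Keep a Gaussian only if its nearest neighbor in the other field
    # is within tau (low point disagreement).
    keep_a = d.min(axis=1) <= tau
    keep_b = d.min(axis=0) <= tau
    return centers_a[keep_a], centers_b[keep_b]
```

In practice each field holds far more per-Gaussian state (covariance, opacity, spherical-harmonic coefficients), and the same keep-mask would be applied to all of those attributes; the sketch keeps only the centers to show the matching logic.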