We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS) that leverages comprehensive multi-view photometric error signals within a unified Metropolis-Hastings approach. Vanilla 3DGS relies heavily on heuristic density-control mechanisms (e.g., cloning, splitting, and pruning), which can lead to redundant computation or the premature removal of beneficial Gaussians. Our framework overcomes these limitations by reformulating densification and pruning as a probabilistic sampling process, dynamically inserting and relocating Gaussians based on aggregated multi-view errors and opacity scores. Guided by Bayesian acceptance tests derived from these error-based importance scores, our method substantially reduces reliance on heuristics, offers greater flexibility, and adaptively infers Gaussian distributions without requiring a predefined scene complexity. Experiments on benchmark datasets, including Mip-NeRF360, Tanks and Temples, and Deep Blending, show that our approach reduces the number of Gaussians needed, achieving faster convergence while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.
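To make the acceptance mechanism concrete, below is a minimal sketch of a Metropolis-Hastings-style acceptance test over per-Gaussian importance scores. It assumes the importance of each Gaussian is a convex blend of its normalized aggregated multi-view photometric error and its opacity; all function names (`importance`, `mh_accept`), the blend weight `lam`, and the relocation heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): a Metropolis-Hastings-style
# acceptance test for relocating a Gaussian. The importance score is an
# assumed blend of aggregated multi-view photometric error and opacity.
import numpy as np

rng = np.random.default_rng(0)

def importance(multi_view_error: np.ndarray, opacity: np.ndarray,
               lam: float = 0.5) -> np.ndarray:
    """Per-Gaussian importance: convex blend of normalized aggregated
    multi-view photometric error and normalized opacity (assumed form)."""
    err = multi_view_error / (multi_view_error.sum() + 1e-12)
    opa = opacity / (opacity.sum() + 1e-12)
    return lam * err + (1.0 - lam) * opa

def mh_accept(score_current: float, score_proposed: float) -> bool:
    """Symmetric-proposal Metropolis-Hastings test: always accept moves to
    higher-importance regions; otherwise accept with probability equal to
    the importance ratio."""
    ratio = score_proposed / (score_current + 1e-12)
    return rng.random() < min(1.0, ratio)

# Toy usage: decide whether to relocate a low-importance Gaussian toward
# the region of a high-importance (high-error) one.
errors = rng.random(1000)      # stand-in for aggregated per-Gaussian errors
opacities = rng.random(1000)   # stand-in for per-Gaussian opacities
scores = importance(errors, opacities)

src = int(scores.argmin())     # relocation candidate (least useful)
dst = int(scores.argmax())     # proposed target region (most needed)
if mh_accept(scores[src], scores[dst]):
    print(f"relocate Gaussian {src} toward region of Gaussian {dst}")
```

Because the acceptance probability is the ratio of importance scores rather than a hard threshold, low-error Gaussians are removed or relocated stochastically instead of being pruned outright, which is what lets the method avoid the premature-removal failure mode of heuristic density control.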