Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method for densifying Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates the corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
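The core idea, up-sampling per-Gaussian features and decoding fine Gaussians in one forward pass rather than iteratively splitting raw parameters, can be illustrated with a minimal sketch. This is not the paper's implementation; the shapes, the factor `K`, and the linear maps `W_up` and `W_dec` are placeholder assumptions standing in for learned network layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N coarse Gaussians, F-dim features,
# K fine Gaussians generated per coarse Gaussian.
N, F, K = 4, 16, 8

feats = rng.normal(size=(N, F))                    # features from a feed-forward model
W_up = rng.normal(size=(F, K * F)) * 0.1           # stand-in for a learned up-sampler
W_dec = rng.normal(size=(F, 3 + 3 + 4 + 1)) * 0.1  # stand-in decoder: position offset,
                                                   # scale, rotation quaternion, opacity

# Single forward pass: N coarse features -> N*K fine features -> fine Gaussian params.
fine_feats = (feats @ W_up).reshape(N * K, F)
fine_params = fine_feats @ W_dec

print(fine_feats.shape)   # (32, 16)
print(fine_params.shape)  # (32, 11)
```

In contrast to 3D-GS split/clone heuristics applied over many optimization iterations, all fine Gaussians here are emitted at once from the feature representation, which is where the learned prior would enter in a real model.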