Per-scene optimization methods such as 3D Gaussian Splatting provide state-of-the-art novel view synthesis quality but extrapolate poorly to under-observed areas. Methods that leverage generative priors to correct artifacts in these areas hold promise but currently suffer from two shortcomings. The first is scalability: existing methods use image diffusion models or bidirectional video models that are limited in the number of views they can generate in a single pass (and thus require a costly iterative distillation process for consistency). The second is quality itself: generators used in prior work tend to produce outputs that are inconsistent with existing scene content and fail entirely in completely unobserved regions. To address these shortcomings, we propose a two-stage pipeline built on two key insights. First, we train a powerful bidirectional generative model with a novel opacity mixing strategy that encourages consistency with existing observations while retaining the model's ability to extrapolate novel content in unseen areas. Second, we distill it into a causal auto-regressive model that generates hundreds of frames in a single pass. This model can directly produce novel views or serve as pseudo-supervision to improve the underlying 3D representation in a simple and highly efficient manner. We evaluate our method extensively and demonstrate that it generates plausible reconstructions in scenarios where existing approaches fail completely. On commonly benchmarked datasets, we outperform all existing baselines by a wide margin, exceeding prior state-of-the-art methods by 1-3 dB PSNR.