We present SemanticNVS, a camera-conditioned multi-view diffusion model for novel view synthesis (NVS) that improves generation quality and consistency by integrating pre-trained semantic feature extractors. Existing NVS methods perform well for views near the input view, but under long-range camera motion they tend to generate semantically implausible and distorted images, exhibiting severe degradation. We speculate that this degradation arises because current models fail to fully understand their conditioning or the intermediate generated scene content. We therefore propose to integrate pre-trained semantic feature extractors, incorporating stronger scene semantics as conditioning to achieve high-quality generation even at distant viewpoints. We investigate two strategies: (1) warped semantic features and (2) an alternating scheme of understanding and generation at each denoising step. Experimental results on multiple datasets demonstrate clear qualitative and quantitative improvements (4.69%-15.26% in FID) over state-of-the-art alternatives.
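To make strategy (2) concrete, the following is a minimal sketch of an alternating understanding/generation loop during sampling: at every denoising step, semantic features are re-extracted from the current intermediate estimate and fed back as conditioning. All components here (SemanticEncoder, Denoiser, the update rule) are hypothetical placeholders chosen for illustration, not the paper's actual architecture or scheduler.

```python
# Minimal sketch of the alternating understanding/generation sampling loop.
# Assumptions: SemanticEncoder stands in for a frozen pre-trained feature
# extractor; Denoiser stands in for a camera-conditioned diffusion backbone;
# the update rule is a toy stand-in, not a real DDPM/DDIM step.
import torch
import torch.nn as nn


class SemanticEncoder(nn.Module):
    """Placeholder for a frozen pre-trained semantic feature extractor."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Conv2d(3, dim, kernel_size=3, padding=1)

    def forward(self, img):
        return self.net(img)


class Denoiser(nn.Module):
    """Placeholder for a camera-conditioned multi-view diffusion backbone."""
    def __init__(self, dim: int = 64, cam_dim: int = 16):
        super().__init__()
        self.net = nn.Conv2d(3 + dim + cam_dim, 3, kernel_size=3, padding=1)

    def forward(self, x_t, sem_feat, cam_embed):
        # Broadcast the camera embedding spatially and concatenate with the
        # noisy image and the semantic features as conditioning channels.
        cam = cam_embed[:, :, None, None].expand(-1, -1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, sem_feat, cam], dim=1))


@torch.no_grad()
def alternating_sampler(denoiser, encoder, cam_embed,
                        steps: int = 50, size=(1, 3, 64, 64)):
    x_t = torch.randn(size)
    for _ in range(steps):
        # Understanding: re-encode the current intermediate estimate into
        # semantic features at every denoising step.
        sem_feat = encoder(x_t)
        # Generation: one denoising step conditioned on the camera pose and
        # the freshly extracted scene semantics.
        eps = denoiser(x_t, sem_feat, cam_embed)
        x_t = x_t - eps / steps  # toy update rule for illustration only
    return x_t


if __name__ == "__main__":
    sample = alternating_sampler(Denoiser(), SemanticEncoder(),
                                 cam_embed=torch.randn(1, 16))
    print(sample.shape)  # torch.Size([1, 3, 64, 64])
```

The design choice illustrated here is that semantic conditioning is refreshed from the evolving sample rather than computed once from the input view, which is what distinguishes the alternating scheme from the warped-feature strategy (1).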