Recent advances in large reconstruction and generative models have significantly improved scene reconstruction and novel view generation. However, due to compute limitations, each inference with these large models is confined to a small area, making long-range consistent scene generation challenging. To address this, we propose StarGen, a novel framework that employs a pre-trained video diffusion model in an autoregressive manner for long-range scene generation. Each video clip is generated conditioned on the 3D warping of spatially adjacent images and the temporally overlapping image from previously generated clips, which improves spatiotemporal consistency in long-range scene generation under precise pose control. This spatiotemporal conditioning is compatible with various input conditions, facilitating diverse tasks including sparse view interpolation, perpetual view generation, and layout-conditioned city generation. Quantitative and qualitative evaluations demonstrate StarGen's superior scalability, fidelity, and pose accuracy compared with state-of-the-art methods. Project page: https://zju3dv.github.io/StarGen.
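To make the autoregressive scheme concrete, below is a minimal, runnable sketch of the generation loop under stated assumptions: every identifier (`Clip`, `warp_to_views`, `denoise_clip`, `generate_long_sequence`) and the clip-length/overlap parameters are hypothetical illustrations, not StarGen's actual API, and the 3D warping and video-diffusion steps are stubbed with placeholders.

```python
# Minimal sketch of StarGen-style autoregressive clip generation.
# All names below are hypothetical; the warping and diffusion steps are
# placeholders standing in for the real depth-based warp and diffusion model.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Clip:
    frames: np.ndarray  # (T, H, W, 3) generated RGB frames
    poses: np.ndarray   # (T, 4, 4) camera-to-world matrices


def warp_to_views(frames: np.ndarray, src_poses: np.ndarray,
                  tgt_poses: np.ndarray) -> np.ndarray:
    """Placeholder for the 3D-warping spatial condition: reproject source
    frames into the target poses (in practice via estimated depth). Here
    we simply tile the last source frame so the sketch runs end to end."""
    num_targets = len(tgt_poses)
    return np.repeat(frames[-1:], num_targets, axis=0)


def denoise_clip(spatial_cond: np.ndarray, temporal_cond: np.ndarray,
                 poses: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for one pose-controlled video-diffusion inference
    conditioned on warped spatial context and the overlapping frame(s);
    this stub just returns noise of the correct shape."""
    num_frames, height, width, _ = spatial_cond.shape
    return rng.random((num_frames, height, width, 3), dtype=np.float32)


def generate_long_sequence(first_clip: Clip, pose_trajectory: np.ndarray,
                           clip_len: int = 16, overlap: int = 1) -> List[Clip]:
    """Autoregressively extend the scene along `pose_trajectory`: each new
    clip is conditioned on (a) the previous clip's frames warped into the
    new poses (spatial condition) and (b) the last `overlap` frame(s) of
    the previous clip reused as its first frame(s) (temporal condition)."""
    rng = np.random.default_rng(0)
    clips = [first_clip]
    step = clip_len - overlap
    for start in range(0, len(pose_trajectory) - clip_len + 1, step):
        prev = clips[-1]
        tgt_poses = pose_trajectory[start:start + clip_len]
        spatial = warp_to_views(prev.frames, prev.poses, tgt_poses)
        temporal = prev.frames[-overlap:]
        frames = denoise_clip(spatial, temporal, tgt_poses, rng)
        frames[:overlap] = temporal  # pin overlapping frame(s) for consistency
        clips.append(Clip(frames=frames, poses=tgt_poses))
    return clips


if __name__ == "__main__":
    T, H, W = 16, 64, 64
    seed = Clip(frames=np.zeros((T, H, W, 3), dtype=np.float32),
                poses=np.tile(np.eye(4), (T, 1, 1)))
    trajectory = np.tile(np.eye(4), (64, 1, 1))  # dummy camera path
    clips = generate_long_sequence(seed, trajectory)
    print(f"generated {len(clips)} clips covering "
          f"{sum(len(c.frames) for c in clips)} frames")
```

The sliding-window step (`clip_len - overlap`) is what lets each new clip reuse frames from its predecessor as the temporal condition, while the warped frames carry longer-range spatial context between clips.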