Video diffusion models have made substantial progress in a variety of video generation applications. However, training models for long video generation requires significant computational and data resources, posing a challenge to the development of long video diffusion models. This paper investigates a straightforward and training-free approach to extending an existing short video diffusion model (e.g., one pre-trained on 16-frame videos) to consistent long video generation (e.g., 128 frames). Our preliminary observation is that directly applying a short video diffusion model to generate long videos leads to severe quality degradation. Further investigation reveals that this degradation stems primarily from the distortion of high-frequency components in long videos, characterized by a decrease in spatial high-frequency components and an increase in temporal high-frequency components. Motivated by this, we propose FreeLong, a novel method that balances the frequency distribution of long video features during the denoising process. FreeLong blends the low-frequency components of global video features, which encapsulate the entire video sequence, with the high-frequency components of local video features that focus on shorter subsequences of frames. This maintains global consistency while incorporating diverse, high-quality spatiotemporal details from local windows, enhancing both the consistency and fidelity of long video generation. We evaluate FreeLong on multiple base video diffusion models and observe significant improvements. In addition, our method supports coherent multi-prompt generation, ensuring both visual coherence and seamless transitions between scenes.
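To make the frequency-blending idea concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: it combines the low-frequency spectrum of a global feature tensor with the high-frequency spectrum of a local one via a 3D FFT over the temporal and spatial axes. The function name `freq_blend`, the box-shaped low-pass mask, and the `cutoff` parameter are all illustrative assumptions.

```python
import torch

def freq_blend(global_feat: torch.Tensor, local_feat: torch.Tensor,
               cutoff: float = 0.25) -> torch.Tensor:
    """Hypothetical sketch of spectral blending for video features.

    global_feat, local_feat: tensors of shape (B, C, T, H, W).
    cutoff: normalized half-width of the low-pass region (assumption).
    """
    dims = (-3, -2, -1)  # temporal + spatial axes

    # Move both feature maps to the frequency domain, centering the spectrum.
    g = torch.fft.fftshift(torch.fft.fftn(global_feat, dim=dims), dim=dims)
    l = torch.fft.fftshift(torch.fft.fftn(local_feat, dim=dims), dim=dims)

    # Box low-pass mask around the spectrum center (one simple choice of filter).
    B, C, T, H, W = global_feat.shape
    mask = torch.zeros(T, H, W, device=global_feat.device)
    t0, h0, w0 = int(T * cutoff), int(H * cutoff), int(W * cutoff)
    mask[T // 2 - t0 : T // 2 + t0 + 1,
         H // 2 - h0 : H // 2 + h0 + 1,
         W // 2 - w0 : W // 2 + w0 + 1] = 1.0

    # Low frequencies from the global branch, high frequencies from the local branch.
    blended = g * mask + l * (1.0 - mask)
    blended = torch.fft.ifftn(torch.fft.ifftshift(blended, dim=dims), dim=dims)
    return blended.real
```

In this sketch the global branch supplies the slowly varying layout that keeps the long video consistent, while the local branch restores the fine spatiotemporal detail that the paper observes is distorted when a short-video model is applied directly to long sequences; a smoother (e.g., Gaussian) mask would be an equally plausible filter choice.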