Recent advances in text-to-image (T2I) diffusion models have enabled impressive image generation capabilities guided by text prompts. However, extending these techniques to video generation remains challenging, with existing text-to-video (T2V) methods often struggling to produce high-quality and motion-consistent videos. In this work, we introduce Control-A-Video, a controllable T2V diffusion model that generates videos conditioned on text prompts and reference control maps such as edge and depth maps. To address video quality and motion consistency issues, we propose novel strategies that incorporate content priors and motion priors into the diffusion-based generation process. Specifically, we employ a first-frame conditioning scheme to transfer video generation from the image domain. In addition, we introduce residual-based and optical-flow-based noise initialization to infuse motion priors from reference videos, promoting relevance among frame latents and reducing flicker. Furthermore, we present a Spatio-Temporal Reward Feedback Learning (ST-ReFL) algorithm that optimizes the video diffusion model with multiple reward models targeting video quality and motion consistency, yielding superior outputs. Comprehensive experiments demonstrate that our framework generates higher-quality, more consistent videos than existing state-of-the-art methods for controllable text-to-video generation.
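The abstract only names the optical-flow-based noise initialization; the details live in the paper. As a rough illustration of the general idea, here is a minimal PyTorch sketch, not the authors' implementation: the first frame's latent noise is sampled i.i.d., and each later frame's noise is the previous frame's noise warped by the reference video's optical flow, blended with fresh noise and renormalized so the marginal distribution stays approximately standard Gaussian. The function names (`warp_with_flow`, `flow_based_noise`), the `blend` parameter, and the flow convention (channel 0 = horizontal displacement in pixels) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def warp_with_flow(noise_prev: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a noise tensor (B, C, H, W) with a dense flow field (B, 2, H, W).

    Assumption: flow[:, 0] is the horizontal and flow[:, 1] the vertical
    displacement, both in pixels.
    """
    b, _, h, w = noise_prev.shape
    # Build a pixel-coordinate grid and shift it by the flow.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(noise_prev.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                                   # (B, 2, H, W)
    # Normalize to [-1, 1] as expected by grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = coords.permute(0, 2, 3, 1)                                   # (B, H, W, 2)
    return F.grid_sample(noise_prev, grid, align_corners=True)


def flow_based_noise(num_frames: int, shape, flows, blend: float = 0.5) -> torch.Tensor:
    """Initialize per-frame noise for a video diffusion sampler.

    The first frame gets i.i.d. Gaussian noise; each later frame blends fresh
    noise with the previous frame's noise warped along the reference flow,
    which correlates neighboring frame latents and reduces flicker.
    """
    noises = [torch.randn(shape)]  # shape = (B, C, H, W)
    for t in range(1, num_frames):
        warped = warp_with_flow(noises[-1], flows[t - 1])
        mixed = blend * warped + (1.0 - blend) * torch.randn(shape)
        # Renormalize so the per-sample variance stays close to 1,
        # matching the Gaussian prior assumed by the diffusion model.
        mixed = mixed / mixed.std(dim=(1, 2, 3), keepdim=True)
        noises.append(mixed)
    return torch.stack(noises, dim=1)  # (B, T, C, H, W)
```

The `blend` weight trades temporal correlation against per-frame diversity: a higher value carries more structure over from the previous frame, while `blend = 0` degenerates to independent noise per frame. The residual-based variant mentioned in the abstract follows the same spirit but modulates the shared and per-frame noise using frame differences of the reference video instead of warping by flow.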