In recent years, the field of text-to-video (T2V) generation has made significant strides. Despite this progress, a gap remains between theoretical advances and practical application, amplified by issues such as degraded image quality and flickering artifacts. Recent work on enhancing video diffusion models (VDMs) through feedback learning has shown promising results. However, these methods still exhibit notable limitations, such as misaligned feedback and poor scalability. To tackle these issues, we introduce OnlineVPO, a more efficient preference learning approach tailored specifically to video diffusion models. Our method features two novel designs. First, instead of directly using image-based reward feedback, we leverage a video quality assessment (VQA) model trained on synthetic data as the reward model, providing distribution- and modality-aligned feedback to the video diffusion model. Second, we introduce an online DPO algorithm to address the off-policy optimization and scalability issues in existing video preference learning frameworks. By employing the video reward model to offer concise video feedback on the fly, OnlineVPO provides effective and efficient preference guidance. Extensive experiments on open-source video diffusion models demonstrate that OnlineVPO is a simple, effective, and, more importantly, scalable preference learning algorithm for video diffusion models, offering valuable insights for future advances in this domain.
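To make the online DPO recipe concrete, below is a minimal PyTorch sketch of a single update step under toy assumptions: the current policy samples two candidate videos per prompt, a frozen video reward model scores them on the fly to form a preference pair, and the pair feeds a Diffusion-DPO-style loss against a frozen reference copy of the policy. All names here (VideoDiffusionPolicy, online_dpo_step, the norm-based reward stand-in) are hypothetical illustrations, not the paper's actual implementation, sampler, or reward model.

```python
# Minimal sketch of one online DPO update for a video diffusion model.
# All modules are toy stand-ins, not the OnlineVPO implementation.

import copy
import torch
import torch.nn.functional as F

class VideoDiffusionPolicy(torch.nn.Module):
    """Toy denoiser standing in for a video diffusion model (UNet/DiT)."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim)

    def forward(self, noisy_latents, t):
        # Predict the noise added at timestep t (epsilon-prediction).
        return self.net(noisy_latents)

    @torch.no_grad()
    def sample(self, prompt_emb):
        # Placeholder sampler: returns one fake video latent per prompt.
        return torch.randn(prompt_emb.shape[0], 8)

def online_dpo_step(policy, ref, reward_model, prompt_emb, beta=0.1):
    # 1) On-policy rollouts: two candidate videos per prompt.
    vid_a, vid_b = policy.sample(prompt_emb), policy.sample(prompt_emb)
    # 2) On-the-fly video-level feedback from the frozen reward model.
    r_a, r_b = reward_model(vid_a), reward_model(vid_b)
    a_wins = (r_a >= r_b).unsqueeze(-1)
    win, lose = torch.where(a_wins, vid_a, vid_b), torch.where(a_wins, vid_b, vid_a)
    # 3) Diffusion-DPO-style loss: compare the denoising errors of the
    #    policy and the reference at a random timestep (toy forward process).
    noise = torch.randn_like(win)
    t = torch.randint(0, 1000, (win.shape[0],))
    noisy_w, noisy_l = win + noise, lose + noise
    err = lambda m, x: F.mse_loss(m(x, t), noise, reduction="none").mean(-1)
    logits = -(err(policy, noisy_w) - err(ref, noisy_w)) \
             + (err(policy, noisy_l) - err(ref, noisy_l))
    return -F.logsigmoid(beta * logits).mean()

if __name__ == "__main__":
    policy = VideoDiffusionPolicy()
    ref = copy.deepcopy(policy).requires_grad_(False)  # frozen reference policy
    reward_model = lambda v: v.norm(dim=-1)            # stand-in video scorer
    opt = torch.optim.AdamW(policy.parameters(), lr=1e-5)
    loss = online_dpo_step(policy, ref, reward_model, torch.randn(4, 8))
    loss.backward()
    opt.step()
```

Because the preference pairs are drawn from the current policy at every step, the feedback stays on-policy; this is the property that distinguishes the online variant from offline DPO on a fixed preference dataset.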