We have made significant progress towards building foundational video diffusion models. As these models are trained on large-scale unsupervised data, it has become crucial to adapt them to specific downstream tasks. Adapting them via supervised fine-tuning requires collecting target datasets of videos, which is challenging and tedious. In this work, we utilize pre-trained reward models, learned via preferences on top of powerful vision discriminative models, to adapt video diffusion models. These reward models provide dense gradient information with respect to the generated RGB pixels, which is critical for efficient learning in complex search spaces such as videos. We show that backpropagating gradients from these reward models to a video diffusion model enables compute- and sample-efficient alignment. We present results across a variety of reward models and video diffusion models, demonstrating that our approach learns far more efficiently, in terms of reward queries and computation, than prior gradient-free approaches. Our code, model weights, and more visualizations are available at https://vader-vid.github.io.
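The core mechanism, generating a sample by iterative denoising, scoring it with a frozen reward model, and backpropagating the reward gradient into the diffusion weights, can be sketched in a few lines. Below is a minimal, self-contained PyTorch sketch under stated assumptions: `Denoiser` and `RewardModel` are hypothetical toy stand-ins rather than the paper's actual models, the Euler-style update and schedule are illustrative, and tracking gradients only through the last few denoising steps (truncated backpropagation, a common trick for bounding memory in this setting) is assumed here rather than taken from the abstract.

```python
# Minimal sketch of reward-gradient alignment for a diffusion model.
# Assumptions: `Denoiser` and `RewardModel` are hypothetical toy stand-ins,
# not the models from the paper; the Euler-style update is illustrative only.
# Gradients are tracked only through the last K denoising steps
# (truncated backprop) to keep memory bounded.

import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy stand-in for a video diffusion denoiser (predicts noise)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor, t: int) -> torch.Tensor:
        # A real model would also condition on the timestep t and a text prompt.
        return self.net(x)

class RewardModel(nn.Module):
    """Toy stand-in for a frozen, differentiable reward on generated pixels."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).mean()

denoiser = Denoiser()
reward = RewardModel().requires_grad_(False)  # reward weights stay frozen
opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)

T, K = 50, 1  # total denoising steps; differentiate only the last K

for it in range(100):
    x = torch.randn(8, 64)  # a batch of latent "videos"
    with torch.no_grad():   # early steps: no graph, so memory stays flat
        for t in range(T - K):
            x = x - 0.02 * denoiser(x, t)  # toy Euler-style update
    for t in range(T - K, T):              # final K steps: differentiable
        x = x - 0.02 * denoiser(x, t)
    loss = -reward(x)  # maximize reward by minimizing its negative
    opt.zero_grad()
    loss.backward()    # reward gradient flows through x into the denoiser
    opt.step()
```

In a full pipeline one would presumably swap in a real sampler, decode latents to frames before scoring, and update lightweight adapters instead of all denoiser weights, but the gradient path from reward to generator stays the same.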