Diffusion probabilistic models have shown significant progress in video generation; however, their computational efficiency is limited by the large number of sampling steps required, and reducing sampling steps often compromises video quality or generation diversity. In this work, we introduce a distillation method that combines variational score distillation and consistency distillation to achieve few-step video generation while maintaining both high quality and diversity. We also propose a latent reward model fine-tuning approach to further enhance video generation performance according to any specified reward metric; this approach reduces memory usage and does not require the reward to be differentiable. Our method demonstrates state-of-the-art performance in few-step generation for 10-second videos (128 frames at 12 FPS). The distilled student model achieves a score of 82.57 on VBench, surpassing the teacher model as well as the baseline models Gen-3, T2V-Turbo, and Kling. One-step distillation accelerates the teacher model's diffusion sampling by up to 278.6 times, enabling near real-time generation. Human evaluations further validate the superior performance of our 4-step student models compared to the teacher model with 50-step DDIM sampling.
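To make the combined objective concrete, the following is a minimal, hypothetical sketch of how a consistency-distillation term and a variational-score-distillation (VSD) term might be summed into one training loss. All function names, the surrogate VSD loss, and the weighting parameter `lam` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def consistency_loss(student_pred, target_pred):
    # Consistency distillation: match the student's few-step prediction
    # to the consistency target (e.g., from an EMA teacher trajectory).
    return float(np.mean((student_pred - target_pred) ** 2))

def vsd_direction(teacher_score, fake_score):
    # VSD: the update direction is the difference between the teacher's
    # score and the score of an auxiliary "fake" model fit to the student.
    return teacher_score - fake_score

def combined_loss(student_pred, target_pred, sample,
                  teacher_score, fake_score, lam=1.0):
    # Hypothetical weighted sum of the two distillation objectives.
    cd = consistency_loss(student_pred, target_pred)
    # Surrogate term whose gradient w.r.t. `sample` is the VSD direction
    # (the direction itself is treated as a constant, i.e., detached).
    vsd = float(np.mean(vsd_direction(teacher_score, fake_score) * sample))
    return cd + lam * vsd
```

In practice the two terms would be computed on noised latents at sampled timesteps and balanced by a schedule rather than a fixed `lam`; this sketch only shows the additive structure the abstract describes.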