Diffusion models have made significant strides in image generation, mastering tasks such as unconditional image synthesis, text-to-image translation, and image-to-image conversion. However, they fall short in video prediction, mainly because they treat videos as collections of independent images and rely on external constraints, such as temporal attention mechanisms, to enforce temporal coherence. In our paper, we introduce a novel model class that treats video as a continuous multi-dimensional process rather than a series of discrete frames. We also report a 75\% reduction in the sampling steps required to generate a new frame, making our framework more efficient at inference time. Through extensive experimentation, we establish state-of-the-art performance in video prediction, validated on benchmark datasets including KTH, BAIR, Human3.6M, and UCF101. Navigate to the project page https://www.cs.umd.edu/~gauravsh/cvp/supp/website.html for video results.