This paper introduces Video Latent Flow Matching (VLFM), an efficient video modeling process. Unlike prior works, which randomly sample latent patches for video generation, our method builds on strong pre-trained image generation models, modeling a caption-guided flow of latent patches that can be decoded into time-dependent video frames. We first conjecture that the frames of a video are differentiable with respect to time in some latent space. Based on this conjecture, we introduce the HiPPO framework to approximate the optimal polynomial projection for generating the probability path. Our approach enjoys the theoretical benefits of bounded universal approximation error and timescale robustness. Moreover, VLFM possesses interpolation and extrapolation abilities, enabling video generation at arbitrary frame rates. We conduct experiments on several text-to-video datasets to demonstrate the effectiveness of our method.
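As background for the abstract's use of HiPPO, the sketch below shows the standard HiPPO-LegS construction (Gu et al., 2020): a lower-triangular transition matrix that, integrated as an ODE, maintains the optimal projection of an input signal onto scaled Legendre polynomials. This is a generic illustration of the HiPPO operator with forward-Euler discretization, not the paper's VLFM implementation; the function names `hippo_legs` and `project` are ours.

```python
import numpy as np

def hippo_legs(N):
    """HiPPO-LegS transition matrices (Gu et al., 2020).

    A[n, k] = sqrt(2n+1) * sqrt(2k+1) for n > k,
              n + 1                   for n == k,
              0                       for n < k;
    B[n]    = sqrt(2n+1).
    """
    n = np.arange(N)
    A = np.sqrt(2 * n[:, None] + 1) * np.sqrt(2 * n[None, :] + 1)
    A = np.tril(A, -1) + np.diag(n + 1)
    B = np.sqrt(2 * n + 1)[:, None]
    return A, B

def project(signal, dt, N=8):
    """Online Legendre coefficients of a 1-D signal.

    Integrates the scaled-Legendre ODE  c'(t) = (1/t) * (-A c + B u)
    with a forward Euler step (a simple, illustrative discretization).
    """
    A, B = hippo_legs(N)
    c = np.zeros((N, 1))
    for k, u in enumerate(signal, start=1):
        t = k * dt
        c = c + (dt / t) * (-A @ c + B * u)
    return c.ravel()
```

For a constant signal the coefficient vector settles onto the first basis function: `project(np.ones(100), dt=0.01)` returns approximately `[1, 0, 0, ...]`, since a constant is exactly the degree-0 Legendre polynomial under the LegS measure.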