Generating videos with realistic and physically plausible motion is one of the main recent challenges in computer vision. While diffusion models achieve compelling results in image generation, video diffusion models are limited by heavy training costs and large model sizes, resulting in videos that remain biased toward the training dataset. In this work we propose MotionCraft, a new zero-shot video generator that crafts physics-based, realistic videos. MotionCraft warps the noise latent space of an image diffusion model, such as Stable Diffusion, by applying an optical flow derived from a physics simulation. We show that warping the noise latent space applies the desired motion coherently while allowing the model to generate missing elements consistent with the scene evolution, whereas applying the flow in pixel space would produce artefacts or missing content. We compare our method with the state-of-the-art Text2Video-Zero, reporting qualitative and quantitative improvements, and demonstrate the effectiveness of our approach in generating videos with finely prescribed, complex motion dynamics. Project page: https://mezzelfo.github.io/MotionCraft/
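The core operation described above, warping a latent tensor with a flow field, can be sketched as a minimal nearest-neighbour backward warp in NumPy. This is an illustrative simplification, not the paper's implementation: the function name is hypothetical, and the actual method operates on diffusion noise latents with bilinear sampling inside the denoising loop.

```python
import numpy as np

def warp_latent(latent, flow):
    """Backward-warp a latent tensor (C, H, W) with an optical flow (2, H, W).

    flow[0] and flow[1] hold per-pixel (dx, dy) displacements; each output
    position (y, x) samples the source at (x - dx, y - dy), using
    nearest-neighbour sampling with border clamping. Hypothetical sketch of
    latent-space warping; the real method is more elaborate.
    """
    C, H, W = latent.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_x = np.clip(np.round(xs - flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[1]).astype(int), 0, H - 1)
    # Advanced indexing gathers the source pixels for every channel at once.
    return latent[:, src_y, src_x]

# Example: a uniform flow of dx = 1 shifts the latent one pixel to the right.
latent = np.arange(12, dtype=float).reshape(1, 3, 4)
flow = np.zeros((2, 3, 4))
flow[0] = 1.0  # dx = 1 everywhere, dy = 0
warped = warp_latent(latent, flow)
```

Applying the same warp in pixel space would leave holes at disoccluded borders; here the clamped sampling merely repeats edge values, which is where a diffusion model can instead synthesize the missing content.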