Recent advances in text-to-video generation have harnessed the power of diffusion models to create visually compelling content conditioned on text prompts. However, these methods usually incur high computational costs and often struggle to produce videos with coherent physical motions. To tackle these issues, we propose GPT4Motion, a training-free framework that leverages the planning capability of large language models such as GPT, the physical simulation strength of Blender, and the excellent image generation ability of text-to-image diffusion models to enhance the quality of video synthesis. Specifically, GPT4Motion employs GPT-4 to generate a Blender script from a user's textual prompt; the script commands Blender's built-in physics engine to craft fundamental scene components that encapsulate coherent physical motions across frames. These components are then fed into Stable Diffusion to generate a video aligned with the textual prompt. Experimental results on three basic physical motion scenarios, including rigid object drop and collision, cloth draping and swinging, and liquid flow, demonstrate that GPT4Motion can efficiently generate high-quality videos while maintaining motion coherency and entity consistency. GPT4Motion offers new insights into text-to-video research, enhancing its quality and broadening its horizons for further exploration.
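To make the first stage of this pipeline concrete, the sketch below shows the kind of Blender Python script GPT-4 might emit for the rigid object drop and collision scenario. The object sizes, physics parameters, and frame range are illustrative assumptions, not the paper's actual prompt template or output; only standard `bpy` calls are used.

```python
# Minimal sketch (assumed, not the paper's exact script): a rigid-body drop
# scene that Blender's built-in physics engine can simulate across frames.
import bpy

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 60  # assumed clip length

# Ground plane: a passive rigid body that catches the falling object.
bpy.ops.mesh.primitive_plane_add(size=10.0, location=(0.0, 0.0, 0.0))
ground = bpy.context.active_object
bpy.ops.rigidbody.object_add()
ground.rigid_body.type = 'PASSIVE'

# Falling sphere: an active rigid body dropped from above the plane.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.12, location=(0.0, 0.0, 2.0))
ball = bpy.context.active_object
bpy.ops.rigidbody.object_add()
ball.rigid_body.type = 'ACTIVE'
ball.rigid_body.mass = 0.6          # illustrative mass in kg
ball.rigid_body.restitution = 0.8   # bounciness, so the drop yields a visible collision

# Bake the physics caches so every frame carries coherent simulated motion,
# which can then be rendered into per-frame conditions for Stable Diffusion.
bpy.ops.ptcache.bake_all(bake=True)
```

The per-frame renders of such a scene are what GPT4Motion passes to Stable Diffusion as physics-consistent conditioning.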