Recent large-scale pre-trained diffusion models have demonstrated a powerful generative ability to produce high-quality videos from detailed text descriptions. However, controlling the motion of objects in videos generated by an arbitrary video diffusion model remains a challenging problem. In this paper, we propose Motion-Zero, a novel zero-shot framework for moving-object trajectory control that enables bounding-box-trajectory-controlled text-to-video generation. To this end, an initial noise prior module is designed to provide a position-based prior that improves both the appearance stability of the moving object and its positional accuracy. In addition, based on the attention maps of the U-Net, spatial constraints are applied directly to the denoising process of the diffusion model, which further ensures the positional and spatial consistency of moving objects during inference. Furthermore, temporal consistency is guaranteed by a proposed shift temporal attention mechanism. Our method can be flexibly applied to various state-of-the-art video diffusion models without any training. Extensive experiments demonstrate that our method can control the motion trajectories of objects and generate high-quality videos. Our project page is https://vpx-ecnu.github.io/MotionZero-website/