Recent advances in video generation, particularly in diffusion models, have driven notable progress in text-to-video (T2V) and image-to-video (I2V) synthesis. However, challenges remain in effectively integrating dynamic motion signals and flexible spatial constraints. Existing T2V methods typically rely on text prompts, which inherently lack precise control over the spatial layout of the generated content. In contrast, I2V methods are limited by their dependence on real images, which restricts the editability of the synthesized content. Although some methods incorporate ControlNet to introduce image-based conditioning, they often lack explicit motion control and require computationally expensive training. To address these limitations, we propose AnyI2V, a training-free framework that animates any conditional image with user-defined motion trajectories. AnyI2V supports a broader range of modalities as the conditional image, including data types such as meshes and point clouds that ControlNet does not support, enabling more flexible and versatile video generation. Additionally, it supports mixed conditional inputs and enables style transfer and editing via LoRA and text prompts. Extensive experiments demonstrate that the proposed AnyI2V achieves superior performance and offers a new perspective on spatial- and motion-controlled video generation. Code is available at https://henghuiding.com/AnyI2V/.