Recent advancements in diffusion models have significantly improved video generation and editing capabilities. However, multi-grained video editing, which encompasses class-level, instance-level, and part-level modifications, remains a formidable challenge. The major difficulties in multi-grained editing include semantic misalignment of text-to-region control and feature coupling within the diffusion model. To address these difficulties, we present VideoGrain, a zero-shot approach that modulates space-time (cross- and self-) attention mechanisms to achieve fine-grained control over video content. We enhance text-to-region control by amplifying each local prompt's attention to its corresponding spatial-disentangled region while minimizing interactions with irrelevant areas in cross-attention. Additionally, we improve feature separation by increasing intra-region awareness and reducing inter-region interference in self-attention. Extensive experiments demonstrate our method achieves state-of-the-art performance in real-world scenarios. Our code, data, and demos are available at https://knightyxp.github.io/VideoGrain_project_page/
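The core attention modulations described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, bias magnitudes, and mask convention are illustrative assumptions. The idea in both cases is the same: add a positive bias to attention logits for token-region (or pixel-pixel) pairs that should interact, and a negative bias to pairs that should not, before the softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modulate_attention(scores, pair_mask, amp=2.0, sup=-2.0):
    """Bias raw attention logits, then normalize.

    scores:    (queries, keys) raw attention logits.
    pair_mask: same shape; 1 where the query-key pair should interact
               (e.g. a pixel and the text token of its region in
               cross-attention, or two pixels of the same region in
               self-attention), 0 otherwise.
    amp/sup:   illustrative bias magnitudes (hypothetical values).
    """
    biased = scores + np.where(pair_mask == 1, amp, sup)
    return softmax(biased, axis=-1)

# Toy example: 2 pixels x 2 text tokens, uniform raw scores.
scores = np.zeros((2, 2))
# Pixel 0 belongs to token 0's region, pixel 1 to token 1's region.
region_mask = np.array([[1, 0], [0, 1]])
attn = modulate_attention(scores, region_mask)
# Without modulation each pixel would attend 0.5/0.5; with it,
# attention concentrates on each pixel's own region prompt.
print(attn)
```

In cross-attention this amplifies each local prompt's weight on its assigned region while suppressing irrelevant areas; reusing the same biasing with a pixel-pixel same-region mask in self-attention increases intra-region awareness and reduces inter-region interference.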