Flow matching models have recently emerged as an efficient alternative to diffusion, especially for text-guided image generation and editing, offering faster inference through continuous-time dynamics. However, existing flow-based editors predominantly support global or single-instruction edits and struggle with multi-instance scenarios, where multiple parts of a reference input must be edited independently without semantic interference. We identify this limitation as a consequence of globally conditioned velocity fields and joint attention mechanisms, which entangle concurrent edits. To address this issue, we introduce Instance-Disentangled Attention, a mechanism that partitions joint attention operations, enforcing binding between instance-specific textual instructions and spatial regions during velocity field estimation. We evaluate our approach on both natural image editing and a newly introduced benchmark of text-dense infographics with region-level editing instructions. Experimental results demonstrate that our approach promotes edit disentanglement and locality while preserving global output coherence, enabling single-pass, instance-level editing.
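The core idea of partitioning joint attention so that each instruction only influences its assigned region can be illustrated with a masked attention sketch. This is not the paper's implementation; the function name, the instance-id convention (with `-1` marking shared/global tokens), and the additive mask value are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def instance_masked_attention(q, k, v, inst_q, inst_k):
    """Illustrative sketch: query token i may attend to key token j only
    if they carry the same instance id, or if j is a shared/global token
    (id -1). This blocks cross-instance leakage between concurrent edits."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    allowed = (inst_q[:, None] == inst_k[None, :]) | (inst_k[None, :] == -1)
    scores = np.where(allowed, scores, -1e9)  # mask out cross-instance pairs
    return softmax(scores, axis=-1) @ v

# Two instances, two tokens each; v = identity so the output rows
# expose the attention weights directly.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = np.eye(4)
inst = np.array([0, 0, 1, 1])
out = instance_masked_attention(q, k, v, inst, inst)
```

With this mask, queries belonging to instance 0 place zero weight on keys of instance 1 and vice versa, which is the disentanglement property the abstract describes at the level of a single attention call.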