Text-guided image editing aims to modify specific regions of an image according to natural language instructions while preserving the overall structure and background fidelity. Existing methods utilize masks derived from the cross-attention maps of diffusion models to identify the target regions for modification. However, since cross-attention mechanisms focus on semantic relevance, they struggle to preserve image integrity. As a result, these methods often lack spatial consistency, leading to editing artifacts and distortions. In this work, we address these limitations and introduce LOCATEdit, which enhances cross-attention maps through a graph-based approach that uses self-attention-derived patch relationships to maintain smooth, coherent attention across image regions, ensuring that alterations are limited to the designated targets while the surrounding structure is retained. LOCATEdit consistently and substantially outperforms existing baselines on PIE-Bench, demonstrating state-of-the-art performance and effectiveness on a variety of editing tasks. Code can be found at https://github.com/LOCATEdit/LOCATEdit/
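To make the graph-based idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of how a cross-attention map can be smoothed with patch affinities taken from self-attention: the symmetrized self-attention matrix is treated as a weighted graph over image patches, and the cross-attention scores are regularized with the graph Laplacian, so patches that attend strongly to each other receive similar edit-mask values. The function name, the unnormalized Laplacian, and the smoothing strength `lam` are all illustrative assumptions.

```python
import numpy as np

def smooth_cross_attention(cross_attn, self_attn, lam=1.0):
    """Illustrative graph-Laplacian smoothing of a cross-attention map.

    cross_attn: (N,) attention of N image patches to a target text token.
    self_attn:  (N, N) self-attention among patches, used as edge weights.
    lam: smoothing strength (assumed hyperparameter, not from the paper).

    Solves (I + lam * L) x = cross_attn, where L = D - W is the graph
    Laplacian of the symmetrized self-attention affinity matrix W.
    """
    W = 0.5 * (self_attn + self_attn.T)   # symmetrize patch affinities
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # unnormalized graph Laplacian
    n = cross_attn.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, cross_attn)

# Toy example: 4 patches; 0-1 and 2-3 are strongly connected pairs.
self_attn = np.array([[0.0, 0.9, 0.1, 0.0],
                      [0.9, 0.0, 0.0, 0.1],
                      [0.1, 0.0, 0.0, 0.9],
                      [0.0, 0.1, 0.9, 0.0]])
cross_attn = np.array([1.0, 0.0, 0.0, 0.0])  # noisy: only patch 0 fires
smoothed = smooth_cross_attention(cross_attn, self_attn)
```

In the toy example, patch 1 (strongly linked to patch 0) ends up with a noticeably higher smoothed score than the weakly linked patches 2 and 3, which is the spatial-consistency effect the abstract describes.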