Diffusion models have recently emerged as a powerful class of generative models, achieving remarkable results in image generation. However, when applied to object removal, they still suffer from issues such as generating random artifacts and an inability to repaint the removed foreground region with appropriate content. To address these problems, we propose Attentive Eraser, a tuning-free method that equips pre-trained diffusion models for stable and effective object removal. First, motivated by the observation that self-attention maps govern the structure and shape details of generated images, we propose Attention Activation and Suppression (AAS), which re-engineers the self-attention mechanism within pre-trained diffusion models according to a given mask, prioritizing the background over the foreground object during the reverse generation process. We further introduce Self-Attention Redirection Guidance (SARG), which uses the self-attention redirected by AAS to steer the generation process, effectively removing the foreground object within the mask while generating content that is both plausible and coherent. Experiments across a variety of pre-trained diffusion models demonstrate the stability and effectiveness of Attentive Eraser, which outperforms even training-based methods. Moreover, Attentive Eraser can be applied to diverse diffusion model architectures and checkpoints, offering excellent scalability. Code is available at https://github.com/Anonym0u3/AttentiveEraser.
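To make the core idea concrete, the sketch below illustrates one plausible way to suppress self-attention toward masked foreground tokens and renormalize the remaining mass onto the background, in the spirit of AAS. The function name, mask representation, and suppression constant are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def redirect_self_attention(scores, fg_mask, suppress=-1e9):
    """Hypothetical sketch in the spirit of AAS: block attention toward
    foreground tokens so every query attends to background content instead.

    scores  -- raw (pre-softmax) self-attention logits, shape (tokens, tokens)
    fg_mask -- boolean vector marking foreground (to-be-removed) tokens
    """
    scores = scores.copy()
    # Suppression: foreground keys are masked out for all queries, so the
    # removed region is repainted from background context only.
    scores[:, fg_mask] = suppress
    # Softmax renormalization redistributes ("activates") the remaining
    # probability mass over the background tokens.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Applying this to uniform logits, each row of the returned attention matrix assigns (near-)zero weight to foreground tokens and spreads the full probability mass over background tokens, which is the qualitative behavior the abstract describes.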