A significant research effort is focused on exploiting the remarkable capacities of pretrained diffusion models for image editing. These methods either finetune the model or invert the image in the latent space of the pretrained model. However, they suffer from two problems: (1) unsatisfying results in selected regions and unexpected changes in non-selected regions; (2) they require careful text-prompt editing, where the prompt must include all visual objects in the input image. To address these problems, we propose two improvements: (1) optimizing only the input of the value linear network in the cross-attention layers is sufficiently powerful to reconstruct a real image; (2) we propose attention regularization to preserve the object-like attention maps after reconstruction and editing, enabling accurate style editing without invoking significant structural changes. We further improve the editing technique used for the unconditional branch of classifier-free guidance, as used by P2P. Extensive prompt-editing experiments on a variety of images demonstrate, qualitatively and quantitatively, that our method has superior editing capabilities compared to existing and concurrent works. See our accompanying code for StyleDiffusion: \url{https://github.com/sen-mao/StyleDiffusion}.
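To make the first improvement concrete, below is a minimal NumPy sketch of a cross-attention layer in which the value projection receives its own, separately optimizable input, while queries come from image features and keys from the frozen text embedding. All names, shapes, and weights here are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x, ctx_qk, ctx_v, Wq, Wk, Wv):
    # Queries from image features, keys from the (frozen) text embedding,
    # values from a separate input -- the tensor that would be optimized
    # to reconstruct the real image in the approach sketched above.
    q, k, v = x @ Wq, ctx_qk @ Wk, ctx_v @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # object-like attention map
    return attn @ v, attn

rng = np.random.default_rng(0)
d, dc = 64, 32                          # illustrative feature dimensions
Wq = rng.standard_normal((d, d)) * 0.1
Wk = rng.standard_normal((dc, d)) * 0.1
Wv = rng.standard_normal((dc, d)) * 0.1
x = rng.standard_normal((16, d))        # image features (stand-in)
text = rng.standard_normal((8, dc))     # text embedding, kept frozen
v_in = text.copy()                      # only this input would be optimized
out, attn = cross_attention(x, text, v_in, Wq, Wk, Wv)
```

Because the attention map depends only on the queries and keys, updating just the value input leaves the object-like attention structure untouched, which is what the attention regularization then preserves during editing.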