Recent advances in image editing leverage latent diffusion models (LDMs) for versatile, text-prompt-driven edits across diverse tasks. Yet maintaining pixel-level edge structure, which is crucial for tasks such as photorealistic style transfer and image tone adjustment, remains a challenge for latent-diffusion-based editing. To overcome this limitation, we propose a novel Structure Preservation Loss (SPL) that leverages local linear models to quantify structural differences between the input and edited images. Our training-free approach integrates SPL directly into the diffusion model's generative process to ensure structural fidelity. This core mechanism is complemented by a post-processing step that mitigates LDM decoding distortions, a masking strategy for precise edit localization, and a color preservation loss that maintains hues in unedited regions. Experiments confirm that SPL enhances structural fidelity and delivers state-of-the-art performance in latent-diffusion-based image editing. Our code will be publicly released at https://github.com/gongms00/SPL.
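To make the local-linear-model idea behind SPL concrete, the following is a minimal sketch of a structure-preservation loss under the local linear assumption (as in guided image filtering): within each local window, the edited image should be well approximated by an affine function of the input image, and the unexplained residual serves as the structural difference. The window radius, the regularizer `eps`, and all function names here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def box_filter(x: torch.Tensor, radius: int) -> torch.Tensor:
    """Local window mean via average pooling (x: [B, C, H, W])."""
    k = 2 * radius + 1
    return F.avg_pool2d(x, kernel_size=k, stride=1, padding=radius,
                        count_include_pad=False)


def structure_preservation_loss(inp: torch.Tensor, edit: torch.Tensor,
                                radius: int = 4, eps: float = 1e-4) -> torch.Tensor:
    """Penalize the part of `edit` that a local affine model of `inp` cannot explain.

    For every window we fit edit ~= a * inp + b in closed form and measure the
    residual at the window center; a low residual indicates that local edge
    structure of the input is preserved in the edited image.
    """
    mean_i = box_filter(inp, radius)
    mean_e = box_filter(edit, radius)
    corr_ii = box_filter(inp * inp, radius)
    corr_ie = box_filter(inp * edit, radius)

    var_i = corr_ii - mean_i * mean_i      # local variance of the input
    cov_ie = corr_ie - mean_i * mean_e     # local covariance between input and edit

    a = cov_ie / (var_i + eps)             # closed-form affine slope per window
    b = mean_e - a * mean_i                # closed-form affine offset per window

    residual = edit - (a * inp + b)        # structure the local model cannot explain
    return residual.pow(2).mean()
```

In a training-free setting this loss would typically be evaluated on decoded images and its gradient used to steer the latents during sampling; the exact guidance schedule is a design choice not specified by the abstract.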