Text-guided image editing models have achieved great success in general domains. However, directly applying these models to the fashion domain can run into two issues: (1) inaccurate localization of the editing region; (2) weak editing magnitude. To address these issues, we propose the MADiff model. Specifically, to identify the editing region more accurately, we propose MaskNet, in which the foreground region, DensePose, and mask prompts from a large language model are fed into a lightweight UNet to predict the mask of the editing region. To strengthen the editing magnitude, we propose the Attention-Enhanced Diffusion Model, where the noise map, attention map, and the mask from MaskNet are fed into the proposed Attention Processor to produce a refined noise map. By integrating the refined noise map into the diffusion model, the edited image aligns better with the target prompt. Given the absence of benchmarks for fashion image editing, we constructed a dataset named Fashion-E, comprising 28,390 image-text pairs in the training set and 2,639 image-text pairs spanning four fashion tasks in the evaluation set. Extensive experiments on Fashion-E demonstrate that our method accurately predicts the mask of the editing region and significantly enhances editing magnitude in fashion image editing compared to state-of-the-art methods.
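To make the MaskNet idea concrete, below is a minimal PyTorch sketch of a lightweight UNet-style mask predictor. It is an illustrative assumption, not the paper's architecture: the channel sizes, the two-level encoder/decoder, and the FiLM-style fusion of the LLM mask-prompt embedding are all hypothetical choices standing in for details the abstract does not specify.

```python
# Hypothetical sketch of MaskNet: foreground map + DensePose map + LLM mask-prompt
# embedding -> soft editing-region mask. Architecture details are assumptions.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    def __init__(self, prompt_dim: int = 768, base_ch: int = 32):
        super().__init__()
        # Spatial inputs stacked channel-wise: foreground (1 ch) + DensePose (3 ch) = 4 ch.
        self.enc = nn.Sequential(nn.Conv2d(4, base_ch, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU()
        )
        # Project the LLM mask-prompt embedding into the bottleneck channels so the
        # text guidance can modulate spatial features (simple additive conditioning).
        self.prompt_proj = nn.Linear(prompt_dim, base_ch * 2)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU()
        )
        self.head = nn.Conv2d(base_ch, 1, 3, padding=1)

    def forward(self, foreground, densepose, prompt_emb):
        x = torch.cat([foreground, densepose], dim=1)          # (B, 4, H, W)
        x = self.down(self.enc(x))                             # (B, 2C, H/2, W/2)
        x = x + self.prompt_proj(prompt_emb)[:, :, None, None] # broadcast text condition
        x = self.up(x)                                         # back to (B, C, H, W)
        return torch.sigmoid(self.head(x))                     # soft mask in [0, 1]

# Usage: mask = MaskNet()(fg, dp, emb) with fg (B,1,H,W), dp (B,3,H,W), emb (B,768).
```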
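For the second mechanism, here is a hedged sketch of mask-guided noise refinement: the MaskNet mask gates how strongly the prompt-driven attention response amplifies the predicted noise, so edits are strengthened inside the editing region and left untouched outside it. The convex gating rule below is an illustrative assumption; the abstract does not give the Attention Processor's exact formula.

```python
# Hypothetical noise-refinement step combining noise map, attention map, and
# the MaskNet mask, as the Attention Processor does at a high level.
import torch

def refine_noise(noise: torch.Tensor, attn_map: torch.Tensor,
                 mask: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """noise:    (B, C, H, W) diffusion noise prediction
    attn_map: (B, 1, H, W) cross-attention response to the target prompt, in [0, 1]
    mask:     (B, 1, H, W) soft editing mask from MaskNet, in [0, 1]
    alpha:    assumed scalar controlling editing magnitude
    """
    gate = mask * attn_map              # prompt-relevant pixels inside the edit region
    return noise * (1.0 + alpha * gate) # amplify the noise update where the gate is high
```

The refined noise map would then replace the raw prediction at each denoising step, which is one plausible way the stronger editing magnitude reported in the abstract could be realized.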