Personalized diffusion models have gained popularity for adapting pre-trained text-to-image models to generate images of specific subjects with minimal training data. However, these models are vulnerable to minor adversarial perturbations, which degrade performance when the fine-tuning data is corrupted. This vulnerability has been exploited to craft protective perturbations for sensitive images, such as portraits, to prevent unauthorized generation. In response, diffusion-based purification methods have been proposed to remove these perturbations and restore generation performance. However, existing methods tend to over-purify the images, causing information loss. In this paper, we take a closer look at the fine-tuning process of personalized diffusion models through the lens of shortcut learning. We propose a hypothesis explaining how existing perturbation methods manipulate fine-tuning: perturbed images deviate significantly from their original prompts in the CLIP-based latent space, and this misalignment causes the model to associate noisy patterns with the identifier token, degrading generation quality. Based on these insights, we introduce a systematic approach to preserve training performance through purification. Our method first purifies the images to realign them with their original semantic meanings in the latent space. It then applies contrastive learning with negative tokens to decouple the learning of clean identities from noisy patterns, which also shows strong resistance to adaptive perturbations. Our study uncovers shortcut-learning vulnerabilities in personalized diffusion models and provides a solid evaluation framework for future protective perturbation research. Code is available at https://github.com/liuyixin-louis/DiffShortcut.
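To make the misalignment hypothesis concrete, below is a minimal sketch (not the paper's released code) of measuring CLIP image-text alignment for a clean versus a perturbed image with the Hugging Face transformers CLIP API; the checkpoint choice, the file paths, and the DreamBooth-style "sks" identifier prompt are illustrative assumptions.

```python
# Minimal sketch: compare CLIP image-text alignment for a clean vs. perturbed image.
# Assumptions (not from the paper): openai/clip-vit-base-patch32 checkpoint,
# hypothetical file names, and a DreamBooth-style "sks" identifier prompt.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def clip_alignment(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between the CLIP image embedding and text embedding."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()

prompt = "a photo of a sks person"  # "sks" is the rare-token identifier convention
for path in ["clean_portrait.png", "perturbed_portrait.png"]:  # hypothetical paths
    print(f"{path}: CLIP alignment = {clip_alignment(Image.open(path), prompt):.4f}")
```

Under the hypothesis, the perturbed image should yield a noticeably lower alignment score with its original prompt than the clean image, which is the signal a purification step would aim to restore.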