Personalized diffusion models (PDMs) have become prominent for adapting pre-trained text-to-image models to generate images of specific subjects from minimal training data. However, PDMs are susceptible to minor adversarial perturbations, which cause significant degradation in generation quality when the models are fine-tuned on corrupted data. This vulnerability has been exploited to craft protective perturbations that prevent unauthorized image generation. Existing purification methods attempt to break this protection by removing the perturbations, but they often over-purify images, causing information loss. In this work, we conduct an in-depth analysis of the fine-tuning process of PDMs through the lens of shortcut learning. We hypothesize, and empirically demonstrate, that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space. This misalignment causes the model to erroneously associate noisy patterns with the unique identifier during fine-tuning, resulting in poor generalization. Based on these insights, we propose a systematic red-teaming framework comprising data purification and contrastive decoupling learning. We first employ off-the-shelf image restoration techniques to realign images with their original semantic content in latent space. We then introduce contrastive decoupling learning with noise tokens to separate the learning of personalized concepts from spurious noise patterns. Our study not only uncovers shortcut-learning vulnerabilities in PDMs but also provides a thorough evaluation framework for developing stronger protections. Extensive experiments demonstrate that our framework outperforms existing purification methods and remains robust against adaptive perturbations.
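To make the misalignment claim concrete, the following is a minimal sketch of how image-text alignment can be measured with an off-the-shelf CLIP checkpoint via Hugging Face Transformers. The model name, prompt, and file paths are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch: quantifying image-text alignment in CLIP embedding space.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_alignment(image_path: str, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()

# A lower score on the perturbed copy of the same image would indicate
# the latent-space misalignment the paper describes (paths are placeholders).
clean_score = clip_alignment("subject_clean.png", "a photo of a person")
noisy_score = clip_alignment("subject_perturbed.png", "a photo of a person")
print(f"clean: {clean_score:.3f}  perturbed: {noisy_score:.3f}")
```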
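The contrastive decoupling idea can also be illustrated at the prompt level. The sketch below shows one plausible form of it, assuming a DreamBooth-style subject identifier ("sks") and a hypothetical reserved noise token ("t@"); the prompt templates are our illustration of the decoupling principle, not the authors' exact recipe.

```python
# Sketch: prompt-level contrastive decoupling with a noise token.
# "sks" and "t@" are assumed placeholder tokens, not prescribed names.
SUBJECT_TOKEN = "sks"
NOISE_TOKEN = "t@"  # hypothetical token reserved for the noise concept

def build_training_prompt(is_perturbed: bool) -> str:
    """Attach the noise token only to perturbed samples, so spurious noise
    patterns can be routed to NOISE_TOKEN instead of SUBJECT_TOKEN."""
    base = f"a photo of {SUBJECT_TOKEN} person"
    return f"{base} with {NOISE_TOKEN} noisy pattern" if is_perturbed else base

# During fine-tuning, pair each image with the matching prompt variant
# (filenames are placeholders for purified vs. raw protected images):
batch = [("img_purified.png", False), ("img_raw.png", True)]
prompts = [build_training_prompt(is_perturbed) for _, is_perturbed in batch]

# At sampling time, the noise concept can be suppressed, e.g. via a
# negative prompt, leaving only the personalized subject concept:
positive_prompt = f"a photo of {SUBJECT_TOKEN} person"
negative_prompt = f"{NOISE_TOKEN} noisy pattern"
```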