Personalized diffusion models have gained popularity for adapting pre-trained text-to-image models to generate images of specific subjects from only a few reference images. However, recent studies show that these models are vulnerable to minor adversarial perturbations, and that fine-tuning performance degrades sharply on corrupted datasets. This vulnerability has been further exploited to craft protective perturbations for sensitive images such as portraits, preventing unauthorized generation. In response, diffusion-based purification methods have been proposed to remove these perturbations and restore generation performance. However, existing works lack a detailed analysis of the fundamental shortcut-learning vulnerability of personalized diffusion models and also tend to over-purify the images, causing information loss. In this paper, we take a closer look at the fine-tuning process of personalized diffusion models through the lens of shortcut learning and propose a hypothesis that explains the manipulation mechanisms underlying existing perturbation methods. Specifically, we find that perturbed images are greatly shifted from their original paired prompts in the CLIP-based latent space. Training on such mismatched image-prompt pairs creates a shortcut that causes the model to dump its out-of-distribution noisy patterns onto the identifier token, leading to severe performance degradation. Based on this observation, we propose a systematic approach that retains fine-tuning performance: it purifies images to realign them with their semantic meaning in the latent space, and introduces contrastive learning with a negative token to decouple the learning of the desired clean identity from the unwanted noisy pattern, showing strong potential against further adaptive perturbations.
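To make the CLIP-space misalignment claim concrete, below is a minimal sketch of how one might measure the image-prompt alignment shift between a clean and a perturbed image. The checkpoint name, file paths, the identifier-style prompt, and the cosine-similarity protocol are illustrative assumptions, not the paper's exact measurement setup.

```python
# Hypothetical sketch: quantify how a protective perturbation shifts an image
# away from its paired prompt in CLIP latent space.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def clip_alignment(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings."""
    inputs = processor(text=[prompt], images=[image],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum(dim=-1))

clean = Image.open("clean_portrait.png")          # hypothetical paths
perturbed = Image.open("perturbed_portrait.png")
prompt = "a photo of sks person"  # DreamBooth-style identifier prompt

# Under the abstract's hypothesis, a markedly lower score for the perturbed
# image signals the mismatched image-prompt pair that induces the shortcut.
print("clean alignment:    ", clip_alignment(clean, prompt))
print("perturbed alignment:", clip_alignment(perturbed, prompt))
```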