Recent advancements in text-to-image (T2I) models have unlocked a wide range of applications but also present significant risks, particularly in their potential to generate unsafe content. To mitigate this issue, researchers have developed unlearning techniques to remove the model's ability to generate potentially harmful content. However, these methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images. In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing Not Safe For Work (NSFW) content from T2I models while preserving their performance on unrelated topics. DUO employs a preference optimization approach using curated paired image data, ensuring that the model learns to remove unsafe visual concepts while retaining unrelated features. Furthermore, we introduce an output-preserving regularization term to maintain the model's generative capabilities on safe content. Extensive experiments demonstrate that DUO can robustly defend against various state-of-the-art red teaming methods without significant performance degradation on unrelated topics, as measured by FID and CLIP scores. Our work contributes to the development of safer and more reliable T2I models, paving the way for their responsible deployment in both closed-source and open-source scenarios.
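To make the stated objective concrete, the following is a minimal sketch of a Diffusion-DPO-style preference loss combined with an output-preserving regularizer, consistent with the description above; the notation ($\epsilon_\theta$, $\epsilon_{\mathrm{ref}}$, $\beta$, $\lambda$, and the safe/unsafe pairing $x^{+}/x^{-}$) is illustrative rather than taken from the paper itself.

$$
\mathcal{L}_{\mathrm{DUO}}(\theta)
= -\,\mathbb{E}\!\left[\log\sigma\!\Big(\!-\beta\big(d_\theta(x^{+}) - d_\theta(x^{-})\big)\Big)\right]
\;+\; \lambda\,\mathbb{E}\!\left[\big\|\epsilon_\theta(x_t, t, c) - \epsilon_{\mathrm{ref}}(x_t, t, c)\big\|_2^2\right],
$$

where $d_\theta(x) = \|\epsilon - \epsilon_\theta(x_t, t, c)\|_2^2 - \|\epsilon - \epsilon_{\mathrm{ref}}(x_t, t, c)\|_2^2$ is the denoising-error gap between the trained model and a frozen reference model, $x^{+}$ is the curated safe image and $x^{-}$ its paired unsafe counterpart, $\beta$ controls preference strength, and the $\lambda$-weighted second term, evaluated on safe content, corresponds to the output-preserving regularization that maintains generative quality on unrelated topics.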