Text-to-image (T2I) diffusion models have drawn attention for their ability to generate high-quality images with precise text alignment. However, these models can also be misused to produce inappropriate content. Existing safety measures, which typically rely on text classifiers or ControlNet-like approaches, are often insufficient. Traditional text classifiers require large-scale labeled datasets and can be easily bypassed by rephrasing. As diffusion models continue to scale, fine-tuning these safeguards becomes increasingly challenging and lacks flexibility. Recent red-teaming attack research further underscores the need for a new paradigm to prevent the generation of inappropriate content. In this paper, we introduce SteerDiff, a lightweight adaptor module designed to act as an intermediary between user input and the diffusion model, ensuring that generated images adhere to ethical and safety standards with little to no impact on usability. SteerDiff identifies and manipulates inappropriate concepts within the text embedding space to guide the model away from harmful outputs. We conduct extensive experiments across various concept unlearning tasks to evaluate the effectiveness of our approach. Furthermore, we benchmark SteerDiff against multiple red-teaming strategies to assess its robustness. Finally, we explore the potential of SteerDiff for concept forgetting tasks, demonstrating its versatility in text-conditioned image generation.
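To make the core idea concrete, below is a minimal sketch of embedding-space steering: given per-token prompt embeddings from a text encoder, subtract their components along directions associated with an unsafe concept. This is an illustrative assumption about the general technique, not the paper's actual implementation; the names `steer_embedding` and `concept_dirs` are hypothetical, and the toy tensors stand in for real CLIP-style prompt embeddings and learned concept directions.

```python
import torch
import torch.nn.functional as F

def steer_embedding(prompt_emb: torch.Tensor,
                    concept_dirs: torch.Tensor,
                    strength: float = 1.0) -> torch.Tensor:
    """Project prompt embeddings away from unsafe concept directions (sketch).

    prompt_emb:   (seq_len, dim) text-encoder output for the user prompt.
    concept_dirs: (k, dim) concept directions, e.g. estimated as the mean
                  embedding difference between paired unsafe/safe prompts
                  (an assumption, not the paper's exact procedure).
    strength:     1.0 removes the full component; values < 1.0 attenuate it.
    """
    dirs = F.normalize(concept_dirs, dim=-1)      # unit-norm directions (k, dim)
    coeffs = prompt_emb @ dirs.T                  # per-token projections (seq_len, k)
    return prompt_emb - strength * (coeffs @ dirs)  # subtract unsafe components

# Toy usage with one hypothetical unsafe direction in a 768-d embedding space.
emb = torch.randn(77, 768)      # stand-in for a CLIP-style prompt embedding
unsafe = torch.randn(1, 768)    # stand-in concept direction
steered = steer_embedding(emb, unsafe)
# Residual projection onto the unsafe direction is ~0 after steering.
print((steered @ F.normalize(unsafe, dim=-1).T).abs().max())
```

The steered embedding is then passed to the diffusion model in place of the original prompt embedding, so the generator itself needs no fine-tuning; with a single unit direction the subtraction is an exact orthogonal projection, while multiple non-orthogonal directions would only be removed approximately.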