Text-to-image (T2I) models, such as Stable Diffusion, have exhibited remarkable performance in generating high-quality images from text descriptions in recent years. However, text-to-image models may be tricked into generating not-safe-for-work (NSFW) content, particularly in sexually explicit scenarios. Existing countermeasures mostly focus on filtering inappropriate inputs and outputs, or suppressing improper text embeddings; these can block explicitly sexual content (e.g., "naked") but remain vulnerable to adversarial prompts -- inputs that appear innocent but are ill-intended. In this paper, we present SafeGen, a framework that mitigates sexual content generation by text-to-image models in a text-agnostic manner. The key idea is to eliminate explicit visual representations from the model itself, regardless of the text input. In this way, the text-to-image model resists adversarial prompts because such unsafe visual representations are obstructed from within. Extensive experiments on four datasets, together with large-scale user studies, demonstrate SafeGen's effectiveness in mitigating sexually explicit content generation while preserving the high fidelity of benign images. SafeGen outperforms eight state-of-the-art baseline methods and achieves a 99.4% sexual content removal rate. Furthermore, our constructed benchmark of adversarial prompts provides a basis for the future development and evaluation of anti-NSFW-generation methods.