Text-to-image models have shown remarkable capabilities in generating high-quality images from natural language descriptions. However, these models are highly vulnerable to adversarial prompts, which can bypass safety measures and produce harmful content. Despite various defensive strategies, achieving robustness against attacks while maintaining practical utility in real-world applications remains a significant challenge. To address this issue, we first conduct an empirical study of the text encoder in the Stable Diffusion (SD) model, a widely used and representative text-to-image model. Our findings reveal that the [EOS] token acts as a semantic aggregator, exhibiting distinct distributional patterns between benign and adversarial prompts in its embedding space. Building on this insight, we introduce \textbf{SafeGuider}, a two-step framework designed for robust safety control without compromising generation quality. SafeGuider combines an embedding-level recognition model with a safety-aware feature-erasure beam search algorithm. This combination enables the framework to maintain high-quality image generation for benign prompts while ensuring robust defense against both in-domain and out-of-domain attacks. SafeGuider demonstrates exceptional effectiveness in minimizing attack success rates, achieving a maximum rate of only 5.48\% across various attack scenarios. Moreover, instead of refusing to generate or producing black images for unsafe prompts, SafeGuider generates safe and meaningful images, enhancing its practical utility. In addition, SafeGuider is not limited to the SD model and can be effectively applied to other text-to-image models, such as the Flux model, demonstrating its versatility and adaptability across different architectures. We hope that SafeGuider can shed some light on the practical deployment of secure text-to-image systems.