Using risky text prompts, such as pornographic and violent prompts, to test the safety of text-to-image (T2I) models is a critical task. However, existing risky prompt datasets suffer from three key limitations: 1) a narrow range of risk categories, 2) coarse-grained annotation, and 3) low effectiveness. To address these limitations, we introduce T2I-RiskyPrompt, a comprehensive benchmark for evaluating safety-related tasks in T2I models. Specifically, we first develop a hierarchical risk taxonomy consisting of 6 primary categories and 14 fine-grained subcategories. Building on this taxonomy, we construct a pipeline to collect and annotate risky prompts, ultimately obtaining 6,432 effective risky prompts, each annotated with hierarchical category labels and a detailed risk reason. Moreover, to facilitate evaluation, we propose a reason-driven risky image detection method that explicitly aligns the MLLM with safety annotations. Based on T2I-RiskyPrompt, we conduct a comprehensive evaluation of eight T2I models, nine defense methods, five safety filters, and five attack strategies, offering nine key insights into the strengths and limitations of T2I model safety. Finally, we discuss potential applications of T2I-RiskyPrompt across various research fields. The dataset and code are available at https://github.com/datar001/T2I-RiskyPrompt.