With text-to-image (T2I) generative AI models reaching wide audiences, it is critical to evaluate their robustness against non-obvious attacks to mitigate the generation of offensive images. By focusing on ``implicitly adversarial'' prompts (those that trigger T2I models to generate unsafe images for non-obvious reasons), we isolate a set of difficult safety issues that human creativity is well-suited to uncover. To this end, we built the Adversarial Nibbler Challenge, a red-teaming methodology for crowdsourcing a diverse set of implicitly adversarial prompts. We have assembled a suite of state-of-the-art T2I models, employed a simple user interface to identify and annotate harms, and engaged diverse populations to capture long-tail safety issues that may be overlooked in standard testing. The challenge is run in consecutive rounds to enable sustained discovery and analysis of safety pitfalls in T2I models. In this paper, we present an in-depth account of our methodology, a systematic study of novel attack strategies, and a discussion of safety failures revealed by challenge participants. We also release a companion visualization tool for easy exploration of the dataset and derivation of insights from it. The first challenge round resulted in over 10k prompt-image pairs with machine annotations for safety. A subset of 1.5k samples contains rich human annotations of harm types and attack styles. We find that 14% of images that humans consider harmful are mislabeled as ``safe'' by machines. We have identified new attack strategies that highlight the complexity of ensuring T2I model robustness. Our findings emphasize the necessity of continual auditing and adaptation as new vulnerabilities emerge. We are confident that this work will enable proactive, iterative safety assessments and promote responsible development of T2I models.