Text-based image generation models, such as Stable Diffusion and DALL-E 3, hold significant potential for content creation and publishing workflows, and have become a focal point of research in recent years. Despite their remarkable capability to generate diverse and vivid images, considerable effort has been devoted to preventing them from generating harmful content, such as abusive, violent, or pornographic material. To assess the safety of existing models, we introduce a novel jailbreaking method, the Chain-of-Jailbreak (CoJ) attack, which compromises image generation models through a step-by-step editing process. Specifically, for malicious queries that cannot bypass safeguards with a single prompt, we deliberately decompose the query into multiple sub-queries; the image generation models are then prompted to generate an initial image and iteratively edit it according to these sub-queries. To evaluate the effectiveness of the CoJ attack, we construct a comprehensive dataset, CoJ-Bench, covering nine safety scenarios, three types of editing operations, and three editing elements. Experiments on four widely used image generation services, powered by GPT-4V, GPT-4o, Gemini 1.5, and Gemini 1.5 Pro, demonstrate that the CoJ attack successfully bypasses model safeguards in over 60% of cases, significantly outperforming other jailbreaking methods (14%). Furthermore, to strengthen these models against the CoJ attack, we propose an effective prompting-based defense, Think Twice Prompting, which successfully defends against over 95% of CoJ attacks. We release our dataset and code to facilitate AI safety research.