Advanced text-to-image models such as DALL$\cdot$E 2 and Midjourney can generate highly realistic images, raising significant concerns about the potential proliferation of unsafe content, including adult content, violence, and deceptive imagery of political figures. Although these models claim to implement rigorous safety mechanisms that restrict the generation of not-safe-for-work (NSFW) content, we devise and demonstrate the first prompt attacks on Midjourney, producing abundant photorealistic NSFW images. We reveal the fundamental principles behind such prompt attacks and propose strategically substituting the high-risk sections of a suspect prompt to evade closed-source safety measures. Our framework, SurrogatePrompt, systematically generates attack prompts at scale, combining large language models, image-to-text, and image-to-image modules to automate their creation. Evaluation results show an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, yielding counterfeit images that depict political figures in violent scenarios. Both subjective and objective assessments confirm that the images generated from our attack prompts pose considerable safety hazards.