While safety mechanisms have made significant progress in filtering harmful text inputs, multimodal large language models (MLLMs) remain vulnerable to multimodal jailbreaks that exploit their cross-modal reasoning capabilities. We present MIRAGE, a novel multimodal jailbreak framework that exploits narrative-driven context and role immersion to circumvent the safety mechanisms of MLLMs. By systematically decomposing a toxic query into environment, role, and action triplets, MIRAGE uses Stable Diffusion to construct a multi-turn visual storytelling sequence of images and text that guides the target model through an engaging detective narrative. This process progressively lowers the model's defences and subtly steers its reasoning through structured contextual cues, ultimately eliciting harmful responses. In extensive experiments on selected datasets with six mainstream MLLMs, MIRAGE achieves state-of-the-art performance, improving attack success rates by up to 17.5% over the best baselines. Moreover, we show that role immersion and structured semantic reconstruction can activate inherent model biases, leading the model to spontaneously violate its ethical safeguards. These results highlight critical weaknesses in current multimodal safety mechanisms and underscore the urgent need for more robust defences against cross-modal threats.
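To make the described pipeline concrete, below is a minimal structural sketch of the decompose-render-narrate loop, assuming a `diffusers` Stable Diffusion backend. It is an illustration, not the authors' implementation: the decomposition of the toxic query into a triplet is omitted, and the names `SceneTriplet` and `build_story_turns` and the example scene strings are hypothetical.

```python
from dataclasses import dataclass
from diffusers import StableDiffusionPipeline  # pip install diffusers

# Hypothetical names: the paper does not publish an API; this only
# mirrors the pipeline shape the abstract describes.
@dataclass
class SceneTriplet:
    environment: str  # e.g. "a dimly lit forensics lab"
    role: str         # e.g. "a veteran detective"
    action: str       # e.g. "examining the evidence on the desk"

def build_story_turns(triplet: SceneTriplet, n_turns: int = 3) -> list[dict]:
    """Render one story frame per turn and pair it with a narrative beat,
    yielding the multi-turn image-text sequence sent to the target MLLM."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    turns = []
    for i in range(n_turns):
        scene = f"{triplet.role} in {triplet.environment}, {triplet.action}, panel {i + 1}"
        frame = pipe(scene).images[0]  # Stable Diffusion image for this turn
        turns.append({
            "role": "user",
            "text": f"Chapter {i + 1}: continue the detective story shown in this image.",
            "image": frame,
        })
    return turns
```

Each returned turn pairs a story beat with a generated frame; in the attack setting these turns would be fed sequentially to the target model to build the immersive narrative context.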