Multimodal large language models (MLLMs) exhibit remarkable capabilities but remain susceptible to jailbreak attacks exploiting cross-modal vulnerabilities. In this work, we introduce a novel method that leverages sequential comic-style visual narratives to circumvent safety alignments in state-of-the-art MLLMs. Our method decomposes malicious queries into visually innocuous storytelling elements using an auxiliary LLM, generates corresponding image sequences through diffusion models, and exploits the models' reliance on narrative coherence to elicit harmful outputs. Extensive experiments on harmful textual queries from established safety benchmarks show that our approach achieves an average attack success rate of 83.5\%, surpassing prior state-of-the-art by 46\%. Compared with existing visual jailbreak methods, our sequential narrative strategy demonstrates superior effectiveness across diverse categories of harmful content. We further analyze attack patterns, uncover key vulnerability factors in multimodal safety mechanisms, and evaluate the limitations of current defense strategies against narrative-driven attacks, revealing significant gaps in existing protections.
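To make the three-stage pipeline summarized above concrete, the following is a minimal sketch, assuming three black-box components supplied by the caller; the helper names (decompose_query, text_to_image, query_mllm) and the story-continuation prompt are hypothetical placeholders for illustration, not the paper's actual interface.

```python
# Minimal sketch of the narrative-decomposition attack pipeline:
# (1) an auxiliary LLM splits a harmful query into innocuous panel captions,
# (2) a diffusion model renders each caption as a comic panel,
# (3) the target MLLM is asked to continue the story across the panels.
# All three callables are assumed/hypothetical black boxes.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ComicPanel:
    caption: str   # visually innocuous storytelling caption for one panel
    image: bytes   # rendered panel image from the diffusion model


def build_comic_panels(
    harmful_query: str,
    decompose_query: Callable[[str], List[str]],  # auxiliary LLM: query -> captions
    text_to_image: Callable[[str], bytes],        # diffusion model: caption -> image
) -> List[ComicPanel]:
    """Decompose a harmful query into a sequence of innocuous comic panels."""
    captions = decompose_query(harmful_query)
    return [ComicPanel(caption=c, image=text_to_image(c)) for c in captions]


def run_attack(
    panels: List[ComicPanel],
    query_mllm: Callable[[List[bytes], str], str],  # target MLLM: (images, prompt) -> text
) -> str:
    """Elicit a continuation that relies on narrative coherence, not an explicit instruction."""
    prompt = "Continue this comic's story in detail, staying consistent with every panel."
    return query_mllm([p.image for p in panels], prompt)
```

In this sketch the harmful intent never appears as text in the final request; it is carried implicitly by the ordered panels, which is the cross-modal gap the abstract identifies.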