The rapid advancement of Multimodal Large Language Models (MLLMs) has introduced complex security challenges, particularly at the intersection of textual and visual safety. While existing studies have explored the security vulnerabilities of MLLMs, the investigation of their visual safety boundaries remains insufficient. In this paper, we propose Beyond Visual Safety (BVS), a novel image-text pair jailbreaking framework specifically designed to probe the visual safety boundaries of MLLMs. BVS employs a "reconstruction-then-generation" strategy, leveraging neutralized visual splicing and inductive recomposition to decouple malicious intent from the raw inputs, thereby inducing MLLMs to generate harmful images. Experimental results demonstrate that BVS achieves a remarkable jailbreak success rate of 98.21\% against GPT-5 (12 January 2026 release). Our findings expose critical vulnerabilities in the visual safety alignment of current MLLMs.