With the rapid advancements in Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and unexplored question: can techniques that successfully jailbreak Large Language Models (LLMs) be equally effective in jailbreaking MLLMs? To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to MLLMs, thereby evaluating the robustness of MLLMs against diverse jailbreak attacks. Using a dataset of 2,000 malicious queries, also proposed in this paper, we generate 20,000 text-based jailbreak prompts with advanced LLM jailbreak attacks and 8,000 image-based jailbreak inputs from recent MLLM jailbreak attacks; the resulting dataset comprises 28,000 test cases spanning a spectrum of adversarial scenarios. Our evaluation of 10 open-source MLLMs reveals a notably high Attack Success Rate (ASR) for attacks transferred from LLMs, highlighting a critical vulnerability in MLLMs that stems from their text-processing capabilities. Our findings underscore the urgent need for future research to address alignment vulnerabilities in MLLMs against both textual and visual inputs.
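The Attack Success Rate (ASR) mentioned above is, in its simplest form, the fraction of jailbreak attempts judged successful. The helper below is a minimal sketch of that arithmetic, assuming a list of boolean per-attempt judgments; it is not the paper's actual evaluation pipeline, which would involve a harmfulness judge over model responses.

```python
def attack_success_rate(judgments):
    """ASR = (# attempts judged successful) / (total attempts).

    `judgments` is a hypothetical list of booleans, one per jailbreak
    attempt, where True means the attack elicited a harmful response.
    """
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)

# Example: 3 of 4 transferred jailbreak prompts succeeded.
print(attack_success_rate([True, True, False, True]))  # 0.75
```

In practice, ASR is typically reported per attack type and per target model, so a benchmark like JailBreakV-28K would aggregate such per-attempt judgments across its test cases.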