With the rapid advancement of Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and previously unexplored question: can techniques that successfully jailbreak Large Language Models (LLMs) be equally effective in jailbreaking MLLMs? To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to MLLMs, thereby evaluating the robustness of MLLMs against diverse jailbreak attacks. Building on a dataset of 2,000 malicious queries, also proposed in this paper, we generate 20,000 text-based jailbreak prompts using advanced LLM jailbreak attacks and 8,000 image-based jailbreak inputs drawn from recent MLLM jailbreak attacks; the resulting dataset comprises 28,000 test cases spanning a spectrum of adversarial scenarios. Our evaluation of 10 open-source MLLMs reveals a notably high Attack Success Rate (ASR) for attacks transferred from LLMs, highlighting a critical vulnerability in MLLMs that stems from their text-processing capabilities. Our findings underscore the urgent need for future research to address alignment vulnerabilities in MLLMs arising from both textual and visual inputs.
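For concreteness, the benchmark's size follows directly from the composition described above, and the headline metric can be stated as a simple ratio. The ASR formulation below is the standard one and is an assumption on our part; the abstract does not specify the judging procedure used to label a response as jailbroken.

% Dataset composition: 20,000 text-based prompts (2,000 malicious queries,
% averaging 10 LLM-transfer prompt variants per query) plus 8,000
% image-based inputs.
\[
  20{,}000 + 8{,}000 = 28{,}000 \ \text{test cases}.
\]
% Standard ASR definition (assumed; the judge is not described in the abstract):
\[
  \mathrm{ASR} = \frac{\#\ \text{responses judged jailbroken}}{\#\ \text{attack attempts}}.
\]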