With the rapid advancement of Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and unexplored question: can techniques that successfully jailbreak Large Language Models (LLMs) be equally effective in jailbreaking MLLMs? To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to MLLMs, thereby evaluating the robustness of MLLMs against diverse jailbreak attacks. Using a dataset of 2,000 malicious queries, also proposed in this paper, we generate 20,000 text-based jailbreak prompts with advanced LLM jailbreak attacks, together with 8,000 image-based jailbreak inputs from recent MLLM jailbreak attacks; the resulting dataset comprises 28,000 test cases spanning a spectrum of adversarial scenarios. Our evaluation of 10 open-source MLLMs reveals a notably high Attack Success Rate (ASR) for attacks transferred from LLMs, exposing a critical vulnerability in MLLMs that stems from their text-processing capabilities. Our findings underscore the urgent need for future research to address alignment vulnerabilities in MLLMs arising from both textual and visual inputs.
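As a rough illustration of the ASR metric referenced above, the following is a minimal sketch, not the benchmark's released evaluation code. The function `attack_success_rate` and the judge `is_jailbroken` are hypothetical names; the naive refusal-keyword judge shown is purely illustrative, and JailBreakV-28K's actual judging protocol is the one defined in the paper.

```python
from typing import Callable, Iterable, Tuple

def attack_success_rate(
    cases: Iterable[Tuple[str, str]],           # (jailbreak prompt, model response) pairs
    is_jailbroken: Callable[[str, str], bool],  # judge: did this attack succeed?
) -> float:
    """ASR = fraction of test cases whose response is judged a successful jailbreak."""
    cases = list(cases)
    if not cases:
        return 0.0
    successes = sum(is_jailbroken(prompt, resp) for prompt, resp in cases)
    return successes / len(cases)

# Illustrative usage with a naive refusal-keyword judge (an assumption,
# weaker than an LLM-as-judge setup):
REFUSAL_PREFIXES = ("I'm sorry", "I cannot", "As an AI")
naive_judge = lambda prompt, resp: not resp.startswith(REFUSAL_PREFIXES)

print(attack_success_rate(
    [("<jailbreak prompt>", "Sure, here is how..."),
     ("<jailbreak prompt>", "I'm sorry, I can't help with that.")],
    naive_judge,
))  # -> 0.5
```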