Safety concerns about multimodal large language models (MLLMs) have gradually become an important problem in various applications. Surprisingly, previous works indicate a counter-intuitive phenomenon: using textual unlearning to align MLLMs achieves safety performance comparable to that of MLLMs trained with image-text pairs. To explain this counter-intuitive phenomenon, we identify a visual safety information leakage (VSIL) problem in existing multimodal safety benchmarks, i.e., the potentially risky and sensitive content in the image has already been revealed in the textual query. As a result, MLLMs can easily refuse these sensitive image-text queries based on the textual query alone. However, image-text pairs without VSIL are common in real-world scenarios and are overlooked by existing multimodal safety benchmarks. To this end, we construct the multimodal Visual Leakless Safety Benchmark (VLSBench), comprising 2.4k image-text pairs, which prevents visual safety leakage from the image into the textual query. Experimental results indicate that VLSBench poses a significant challenge to both open-source and closed-source MLLMs, including LLaVA, Qwen2-VL, Llama3.2-Vision, and GPT-4o. This study demonstrates that textual alignment is sufficient for multimodal safety scenarios with VSIL, while multimodal alignment is a more promising solution for multimodal safety scenarios without VSIL. Code and data are available at: http://hxhcreate.github.io/VLSBench