As Multimodal Large Language Models (MLLMs) acquire stronger reasoning capabilities for handling complex, multi-image instructions, this advancement may introduce new safety risks. We study this problem by introducing MIR-SafetyBench, the first benchmark focused on multi-image reasoning safety, consisting of 2,676 instances organized under a taxonomy of 9 multi-image relations. Our extensive evaluation of 19 MLLMs reveals a troubling trend: models with more advanced multi-image reasoning can be more vulnerable on MIR-SafetyBench. Beyond attack success rates, we find that many responses labeled as safe are superficial, often stemming from misunderstanding of the instruction or taking the form of evasive, non-committal replies. We further observe that unsafe generations exhibit lower attention entropy than safe ones on average. This internal signature points to a possible failure mode: models may over-focus on task solving while neglecting safety constraints. Our code and data are available at https://github.com/thu-coai/MIR-SafetyBench.
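To make the attention-entropy signal concrete, the sketch below shows one standard way to compute the Shannon entropy of a model's attention distributions. This is an illustrative formulation, not the paper's exact procedure: which layers are used and how entropy is pooled over heads and query positions are assumptions here.

```python
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy of attention distributions (illustrative sketch).

    attn: tensor of shape (num_heads, query_len, key_len) whose values
    sum to 1 over the last dimension, as returned by typical transformer
    implementations when attention weights are exposed.
    Returns a scalar: entropy averaged over heads and query positions.
    Note: this pooling choice is an assumption, not the paper's definition.
    """
    # Entropy per head and query position: -sum_k p_k * log(p_k).
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, queries)
    return entropy.mean()

# Sanity check: uniform attention has maximal entropy, peaked attention near zero.
uniform = torch.full((8, 4, 16), 1 / 16)  # 8 heads, 4 queries, 16 keys
peaked = torch.zeros(8, 4, 16)
peaked[..., 0] = 1.0
print(attention_entropy(uniform))  # ~log(16) ≈ 2.77
print(attention_entropy(peaked))   # ~0
```

Under this reading, a lower average entropy means the model's attention is concentrated on a narrow subset of tokens, consistent with the hypothesis that unsafe generations arise when the model fixates on solving the task and attends less broadly to safety-relevant context.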