Humans are prone to cognitive distortions -- biased thinking patterns that lead to exaggerated responses to specific stimuli, albeit in very different contexts. This paper demonstrates that advanced Multimodal Large Language Models (MLLMs) exhibit similar tendencies. While these models are designed to respond to queries under safety mechanisms, they sometimes reject harmless queries in the presence of certain visual stimuli, disregarding the benign nature of their contexts. As an initial step in investigating this behavior, we identify three types of stimuli that trigger the oversensitivity of existing MLLMs: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To systematically evaluate MLLMs' oversensitivity to these stimuli, we propose the Multimodal OverSenSitivity Benchmark (MOSSBench). This toolkit consists of 300 manually collected benign multimodal queries, cross-verified by third-party reviewers on Amazon Mechanical Turk (AMT). Empirical studies using MOSSBench on 20 MLLMs reveal several insights: (1) Oversensitivity is prevalent among state-of-the-art (SOTA) MLLMs, with refusal rates reaching up to 76% for harmless queries. (2) Safer models are more oversensitive: increasing safety may inadvertently raise caution and conservatism in the model's responses. (3) Different types of stimuli tend to cause errors at specific stages -- perception, intent reasoning, and safety judgment -- in the response process of MLLMs. These findings highlight the need for refined safety mechanisms that balance caution with contextually appropriate responses, improving the reliability of MLLMs in real-world applications. We make our project available at https://turningpoint-ai.github.io/MOSSBench/.
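To make the evaluation concrete, the sketch below shows one plausible way to compute a refusal rate over benign multimodal queries. This is a minimal illustration only: the `MultimodalQuery` structure, the `query_model` callable, and the keyword-based refusal check are assumptions for the sketch, not the benchmark's actual pipeline.

```python
# Hypothetical sketch of a refusal-rate evaluation in the spirit of
# MOSSBench. All names below (MultimodalQuery, query_model,
# REFUSAL_MARKERS) are illustrative placeholders, not the paper's code.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MultimodalQuery:
    image_path: str     # visual stimulus paired with the query
    question: str       # benign text query
    stimulus_type: str  # "exaggerated_risk" | "negated_harm" | "counterintuitive"

# Phrases that commonly signal a refusal in model outputs (illustrative list).
REFUSAL_MARKERS = [
    "i'm sorry", "i cannot", "i can't help", "i am unable", "cannot assist",
]

def is_refusal(response: str) -> bool:
    """Crude keyword check for whether a response declines the query."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(queries: List[MultimodalQuery],
                 query_model: Callable[[str, str], str]) -> float:
    """Fraction of benign queries the model refuses to answer."""
    refusals = sum(
        is_refusal(query_model(q.image_path, q.question)) for q in queries
    )
    return refusals / len(queries)
```

In practice, a keyword matcher like this under-counts soft refusals, so a stronger judge (e.g., an LLM-based classifier) is often substituted; the aggregate metric, the fraction of harmless queries refused, stays the same.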