With the advent of large vision-language models (LVLMs) demonstrating increasingly human-like abilities, a pivotal question emerges: do different LVLMs interpret multimodal sarcasm differently, and can a single model grasp sarcasm from multiple perspectives like humans? To explore this, we introduce an analytical framework using systematically designed prompts on existing multimodal sarcasm datasets. Evaluating 12 state-of-the-art LVLMs over 2,409 samples, we examine interpretive variations within and across models, focusing on confidence levels, alignment with dataset labels, and recognition of ambiguous "neutral" cases. Our findings reveal notable discrepancies, both across LVLMs and within the same model under varied prompts. While classification-oriented prompts yield higher internal consistency, models diverge markedly when tasked with interpretive reasoning. These results challenge binary labeling paradigms by highlighting sarcasm's subjectivity. We advocate moving beyond rigid annotation schemes toward multi-perspective, uncertainty-aware modeling, offering deeper insights into multimodal sarcasm comprehension. Our code and data are available at: https://github.com/CoderChen01/LVLMSarcasmAnalysis
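
As a rough illustration of the kind of analysis the framework performs, the sketch below queries one LVLM under two prompt framings per sample and computes two rates: intra-model consistency (all framings yield the same verdict) and alignment of the majority verdict with the dataset label. This is a minimal sketch under stated assumptions: the `query_model` wrapper, the prompt wording, and the sample schema are hypothetical stand-ins, not the paper's actual prompts or implementation.

```python
from collections import Counter
from typing import Callable, List

# Illustrative prompt variants: a classification-style framing and an
# interpretive framing. The paper's prompts are systematically designed;
# these stand-ins only show the structure of the comparison.
PROMPT_VARIANTS = [
    "Is this image-text pair sarcastic? Answer 'sarcastic', "
    "'non-sarcastic', or 'neutral'.",
    "Explain the intent behind this image-text pair, then state whether "
    "it is sarcastic, non-sarcastic, or neutral.",
]

VALID_LABELS = {"sarcastic", "non-sarcastic", "neutral"}


def intra_model_consistency(
    query_model: Callable[[str, str, bytes], str],  # hypothetical LVLM wrapper
    samples: List[dict],  # each: {"text": str, "image": bytes, "label": str}
) -> dict:
    """Rate how often one model gives the same verdict under different
    prompt framings, and how often its majority verdict matches the
    dataset label."""
    consistent, aligned = 0, 0
    for sample in samples:
        verdicts = [
            query_model(prompt, sample["text"], sample["image"])
            for prompt in PROMPT_VARIANTS
        ]
        counts = Counter(v for v in verdicts if v in VALID_LABELS)
        if not counts:
            continue  # model produced no parseable verdict for this sample
        majority, majority_n = counts.most_common(1)[0]
        if majority_n == len(verdicts):
            consistent += 1  # every prompt framing agreed on a valid label
        if majority == sample["label"]:
            aligned += 1  # majority verdict matches the dataset annotation
    n = len(samples)
    return {"consistency": consistent / n, "label_alignment": aligned / n}
```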