Despite significant advances in image segmentation and object detection, understanding complex scenes remains a major challenge. Here, we focus on graphical humor as a paradigmatic example of image interpretation that requires elucidating how different scene elements interact in the context of prior cognitive knowledge. This paper introduces \textbf{HumorDB}, a novel, controlled, and carefully curated dataset designed to evaluate and advance visual humor understanding by AI systems. The dataset comprises diverse images spanning photos, cartoons, sketches, and AI-generated content, including minimally contrastive pairs in which subtle edits differentiate humorous from non-humorous versions. We evaluate humans, state-of-the-art vision models, and large vision-language models on three tasks: binary humor classification, funniness rating prediction, and pairwise humor comparison. The results reveal a gap between current AI systems and human-level humor understanding. While pretrained vision-language models outperform vision-only models, they still struggle with abstract sketches and subtle humor cues. Analysis of attention maps shows that even when models correctly classify humorous images, they often fail to focus on the precise regions that make an image funny. Preliminary mechanistic interpretability studies and evaluation of model explanations provide initial insights into how different architectures process humor. Our results identify promising trends and current limitations, suggesting that effective understanding of visual humor requires sophisticated architectures capable of detecting subtle contextual features and bridging the gap between visual perception and abstract reasoning. All code and data are available at \href{https://github.com/kreimanlab/HumorDB}{https://github.com/kreimanlab/HumorDB}.