Ambiguity resolution is key to effective communication. While humans effortlessly address ambiguity through conversational grounding strategies, the extent to which current language models can emulate these strategies remains unclear. In this work, we examine referential ambiguity in image-based question answering by introducing RACQUET, a carefully curated dataset targeting distinct aspects of ambiguity. Through a series of evaluations, we reveal significant limitations of state-of-the-art large multimodal language models in addressing ambiguity, including a marked tendency toward overconfidence in their responses. This overconfidence becomes particularly concerning for RACQUET-BIAS, a subset designed to analyze a critical yet underexplored problem: failing to address ambiguity leads to stereotypical, socially biased responses. Our results underscore the urgency of equipping models with robust strategies to deal with uncertainty without resorting to undesirable stereotypes.