Multimodal Large Language Models (MLLMs) have garnered significant attention for their strong visual-semantic understanding. Most existing chart benchmarks evaluate MLLMs' ability to parse information from charts in order to answer questions. However, they overlook the inherent output biases of MLLMs, whereby models rely on their parametric memory to answer questions rather than on a genuine understanding of the chart content. To address this limitation, we introduce a novel Chart Hypothetical Question Answering (HQA) task, which imposes assumptions on the same question to compel models to engage in counterfactual reasoning grounded in the chart content. Furthermore, we propose HAI, a human-AI interactive data synthesis approach that combines the efficient text-editing capabilities of LLMs with human expert knowledge to generate diverse, high-quality HQA data at low cost. Using HAI, we construct Chart-HQA, a challenging benchmark synthesized from publicly available data sources. Evaluation results on 18 MLLMs of varying model sizes reveal that current models face significant generalization challenges and exhibit imbalanced reasoning performance on the HQA task.