Existing benchmarks for visual question answering lack visual grounding and complexity, particularly in evaluating spatial reasoning skills. We introduce FlowVQA, a novel benchmark aimed at assessing the capabilities of visual question-answering multimodal language models in reasoning with flowcharts as visual contexts. FlowVQA comprises 2,272 carefully generated and human-verified flowchart images from three distinct content sources, along with 22,413 diverse question-answer pairs, testing a spectrum of reasoning tasks that includes information localization, decision-making, and logical progression. We conduct a thorough baseline evaluation of a suite of open-source and proprietary multimodal language models using a variety of strategies, followed by an analysis of directional bias. The results underscore the benchmark's potential as a vital tool for advancing multimodal modeling, providing a focused and challenging environment for improving model performance on visual and logical reasoning tasks.