Recently, versatile Multi-modal Large Language Models (MLLMs) have been emerging in rapid succession. However, their capacity to query information depicted in visual charts and to reason over the queried content remains under-explored. In this paper, to comprehensively and rigorously benchmark the chart-domain abilities of off-the-shelf MLLMs, we construct ChartX, a multi-modal evaluation set covering 18 chart types, 7 chart tasks, 22 disciplinary topics, and high-quality chart data. In addition, we develop ChartVLM, which offers a new perspective on handling multi-modal tasks that strongly depend on interpretable patterns, such as reasoning tasks over charts or geometric images. We evaluate the chart-related abilities of mainstream MLLMs and our ChartVLM on the proposed ChartX evaluation set. Extensive experiments demonstrate that ChartVLM surpasses both versatile and chart-specialized large models, achieving results comparable to GPT-4V. We believe our study can pave the way for further exploration toward a more comprehensive chart evaluation set and more interpretable multi-modal models. Both ChartX and ChartVLM are available at: https://github.com/UniModal4Reasoning/ChartVLM