Multi-modal large language models (MLLMs) have demonstrated promising capabilities across various tasks by integrating textual and visual information to achieve visual understanding in complex scenarios. Although several benchmarks aim to evaluate MLLMs on tasks ranging from visual question answering to complex problem-solving, most focus predominantly on mathematics or general visual understanding. This reveals a critical gap: current benchmarks often overlook other key scientific disciplines such as physics and chemistry. To address this gap, we meticulously construct a comprehensive benchmark, named VisScience, to assess multi-modal scientific reasoning across the three disciplines of mathematics, physics, and chemistry. The benchmark comprises 3,000 questions drawn from K-12 education (elementary school through high school), equally distributed across the three disciplines with 1,000 questions per discipline. The questions within VisScience span 21 distinct subjects and are categorized into five difficulty levels, offering a broad spectrum of topics within each discipline. With VisScience, we present a detailed evaluation of 25 representative MLLMs on scientific reasoning. Experimental results demonstrate that closed-source MLLMs generally outperform open-source models. The best performances observed include 53.4\% accuracy in mathematics by Claude3.5-Sonnet, 38.2\% in physics by GPT-4o, and 47.0\% in chemistry by Gemini-1.5-Pro. These results underscore the strengths and limitations of current MLLMs, suggesting areas for future improvement and highlighting the importance of developing models that can effectively handle the diverse demands of multi-modal scientific reasoning.