Large Language Models (LLMs) are being explored for applications in scientific research, including their capabilities to synthesize literature, answer research questions, generate research ideas, and even conduct computational experiments. Ultimately, our goal is for these models to help scientists derive novel scientific insights. In many areas of science, such insights arise from processing and visualizing data to understand its patterns. However, assessing whether an LLM-mediated scientific workflow produces outputs that convey the correct scientific insights is challenging and has not been addressed in past work. We introduce AstroVisBench, the first benchmark for both scientific computing and visualization in the astronomy domain. AstroVisBench judges a language model's ability to both (1) create astronomy-specific workflows to process and analyze data and (2) visualize the results of these workflows through complex plots. Our evaluation of visualizations uses a novel LLM-as-a-judge workflow, which is validated against annotations by five professional astronomers. Using AstroVisBench, we evaluate state-of-the-art language models and find a significant gap in their ability to engage in astronomy research as useful assistants. AstroVisBench thus provides a strong end-to-end evaluation for AI scientists and offers a path forward for the development of visualization-based workflows, which are central to a broad range of domains from physics to biology.