Assessing the capabilities of large multimodal models (LMMs) often requires the creation of ad-hoc evaluations. Currently, building new benchmarks requires tremendous amounts of manual work for each specific analysis. This makes the evaluation process tedious and costly. In this paper, we present APEx, Automatic Programming of Experiments, the first framework for automatic benchmarking of LMMs. Given a research question expressed in natural language, APEx leverages a large language model (LLM) and a library of pre-specified tools to generate a set of experiments for the model at hand, and progressively compile a scientific report. The report drives the testing procedure: based on the current status of the investigation, APEx chooses which experiments to perform and whether the results are sufficient to draw conclusions. Finally, the LLM refines the report, presenting the results to the user in natural language. Thanks to its modularity, our framework is flexible and extensible as new tools become available. Empirically, APEx reproduces the findings of existing studies while allowing for arbitrary analyses and hypothesis testing.
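To make the report-driven loop described above concrete, here is a minimal, hypothetical sketch of how such a procedure could be structured. All names in it (Tool, Report, ask_llm, run_benchmark) are illustrative placeholders and not the actual APEx implementation or API; the sketch only assumes an LLM callable, a library of experiment tools, and a report that accumulates findings until the LLM deems them sufficient.

```python
# Hypothetical sketch of a report-driven evaluation loop in the spirit of APEx.
# None of these names come from the APEx codebase; they are placeholders.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # executes one experiment spec, returns its outcome


@dataclass
class Report:
    question: str                                  # research question in natural language
    findings: List[str] = field(default_factory=list)


def ask_llm(prompt: str) -> str:
    # Placeholder for a call to the underlying LLM (e.g. a chat-completion endpoint).
    raise NotImplementedError("wire this to an actual LLM backend")


def run_benchmark(question: str, tools: Dict[str, Tool], max_steps: int = 10) -> Report:
    report = Report(question)
    for _ in range(max_steps):
        # 1. Based on the current report, ask the LLM which experiment to run next,
        #    or whether the evidence already suffices to draw conclusions.
        plan = ask_llm(
            f"Question: {question}\n"
            f"Findings so far: {report.findings}\n"
            f"Available tools: {list(tools)}\n"
            f"Reply 'STOP' if conclusive, else '<tool>: <experiment spec>'."
        )
        tool_name, _, spec = plan.partition(":")
        if tool_name.strip() == "STOP":
            break
        # 2. Execute the chosen experiment with the corresponding pre-specified tool.
        outcome = tools[tool_name.strip()].run(spec.strip())
        # 3. Fold the outcome back into the report that drives the next iteration.
        report.findings.append(outcome)
    # 4. Finally, let the LLM refine the report into a natural-language summary.
    report.findings.append(ask_llm(f"Summarize conclusions from: {report.findings}"))
    return report
```

The loop makes the key design point of the framework explicit: the scientific report is the only shared state, so new tools can be added without changing the control flow.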