Recent advances in large language models (LLMs) have increased the demand for comprehensive benchmarks to evaluate their capabilities as human-like agents. Existing benchmarks, while useful, often focus on specific application scenarios, emphasizing task completion without dissecting the underlying skills that drive these outcomes. This lack of granularity makes it difficult to pinpoint where failures stem from. Additionally, setting up these environments requires considerable effort, and reliability and reproducibility issues sometimes arise, especially in interactive tasks. To address these limitations, we introduce the Massive Multitask Agent Understanding (MMAU) benchmark, featuring comprehensive offline tasks that eliminate the need for complex environment setup. It evaluates models across five domains (Tool-use, Directed Acyclic Graph (DAG) QA, Data Science and Machine Learning coding, Contest-level programming, and Mathematics) and five essential capabilities: Understanding, Reasoning, Planning, Problem-solving, and Self-correction. With a total of 20 meticulously designed tasks encompassing over 3K distinct prompts, MMAU provides a comprehensive framework for evaluating the strengths and limitations of LLM agents. By evaluating 18 representative models on MMAU, we provide in-depth analyses of their performance. Ultimately, MMAU not only sheds light on the capabilities and limitations of LLM agents but also enhances the interpretability of their results. The MMAU datasets and evaluation scripts are released at https://github.com/apple/axlearn/tree/main/docs/research/mmau.
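As a concrete illustration of how a fully offline benchmark of this shape can be scored without any environment setup, the minimal sketch below computes per-capability accuracy from a file of model outputs. The file name (`mmau_results.jsonl`), record fields (`capability`, `answer`, `prediction`), and exact-match scoring are illustrative assumptions, not the released MMAU format; the actual dataset schema and official evaluation scripts are in the repository linked above.

```python
import json
from collections import defaultdict

# Hypothetical record format: one JSON object per line with a
# "capability" label, a gold "answer", and a model "prediction".
# The real MMAU schema is defined by the released evaluation scripts.

def per_capability_accuracy(path: str) -> dict[str, float]:
    """Exact-match accuracy grouped by capability (illustrative scoring only)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            cap = record["capability"]  # e.g. "Planning", "Self-correction"
            total[cap] += 1
            if record["prediction"].strip() == record["answer"].strip():
                correct[cap] += 1
    return {cap: correct[cap] / total[cap] for cap in total}

if __name__ == "__main__":
    for cap, acc in per_capability_accuracy("mmau_results.jsonl").items():
        print(f"{cap}: {acc:.3f}")
```

Because every task is a static prompt with a checkable reference, this kind of one-pass scoring is all the harness needs, which is what makes results reproducible across runs and machines.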