Evaluating the reasoning abilities of large language models (LLMs) is challenging. Existing benchmarks often depend on static datasets, which are vulnerable to data contamination and may become saturated over time, or on binary live human feedback that conflates reasoning with other abilities. As the most prominent dynamic benchmark, Chatbot Arena evaluates open-ended questions in real-world settings, but lacks the granularity to assess specific reasoning capabilities. We introduce GameArena, a dynamic benchmark designed to evaluate LLM reasoning capabilities through interactive gameplay with humans. GameArena consists of three games designed to test specific reasoning capabilities (e.g., deductive and inductive reasoning) while keeping participants entertained and engaged. We analyze the gaming data retrospectively to uncover the underlying reasoning processes of LLMs and measure their fine-grained reasoning capabilities. We collect over 2,000 game sessions and provide detailed assessments of various reasoning capabilities for five state-of-the-art LLMs. Our user study with 100 participants suggests that GameArena improves user engagement compared to Chatbot Arena. For the first time, GameArena enables the collection of step-by-step LLM reasoning data in the wild.