Large Language Models (LLMs) are increasingly deployed in real-world applications that demand complex reasoning. To track progress, robust benchmarks are required to evaluate their capabilities beyond superficial pattern recognition. However, current LLM reasoning benchmarks often face challenges such as insufficient interpretability, performance saturation, or data contamination. To address these challenges, we introduce GAMEBoT, a gaming arena designed for rigorous and transparent assessment of LLM reasoning capabilities. GAMEBoT decomposes complex reasoning in games into predefined modular subproblems. This decomposition allows us to design a suite of Chain-of-Thought (CoT) prompts that leverage domain knowledge to guide LLMs in addressing these subproblems before action selection. Furthermore, we develop a suite of rule-based algorithms to generate ground truth for these subproblems, enabling rigorous validation of the LLMs' intermediate reasoning steps. This approach facilitates evaluation of both the quality of final actions and the accuracy of the underlying reasoning process. GAMEBoT also naturally alleviates the risk of data contamination through dynamic games and head-to-head LLM competitions. We benchmark 17 prominent LLMs across eight games, encompassing diverse strategic abilities and game characteristics. Our results suggest that GAMEBoT presents a significant challenge, even when LLMs are provided with detailed CoT prompts. Project page: \url{https://visual-ai.github.io/gamebot}