Large Language Models (LLMs) are increasingly deployed as reasoning systems, where reasoning paradigms such as Chain-of-Thought (CoT) prompting and multi-agent systems (MAS) play a critical role, yet their relative effectiveness and cost-accuracy trade-offs remain poorly understood. In this work, we conduct a comprehensive, unified evaluation of reasoning paradigms, spanning direct single-model generation, CoT-augmented single-model reasoning, and representative MAS workflows, and characterize their reasoning performance across a diverse suite of closed-form benchmarks. Beyond overall performance, we probe role-specific capability demands in MAS through targeted role-isolation analyses, and we examine cost-accuracy trade-offs to identify which MAS workflows strike a favorable balance between cost and accuracy and which incur prohibitive overhead for marginal gains. We further introduce MIMeBench, a new open-ended benchmark targeting two foundational yet underexplored semantic capabilities, semantic abstraction and contrastive discrimination, which provides an evaluation axis beyond closed-form accuracy and enables fine-grained assessment of semantic competence that existing benchmarks struggle to capture. Our results show that increased structural complexity does not consistently improve reasoning performance; its benefits depend heavily on the properties and task suitability of the reasoning paradigm itself. The code is released at https://gitcode.com/HIT1920/OpenLLMBench.