Complex reasoning is one of the most important capabilities of current Large Language Models (LLMs) and plays an integral role in complex decision-making tasks. Investigating the reasoning capabilities of LLMs is therefore critical, and numerous benchmarks have been established to assess them. However, current benchmarks are inadequate for rigorously evaluating the full extent of the reasoning abilities that LLMs can achieve. They are also prone to overfitting: because these benchmarks are publicly accessible and static, models can tailor their responses to specific benchmark metrics, thereby inflating their measured performance. To address these limitations, we introduce a new benchmark, NPHardEval. This benchmark is designed to evaluate the reasoning abilities of LLMs across a broad spectrum of 900 algorithmic questions, extending up to the NP-hard complexity class. These questions are meticulously chosen to represent a wide range of complexity classes below the NP-hard class, offering a rigorous measure of the reasoning abilities of LLMs. Through this study, we shed light on the current state of reasoning in LLMs and provide an objective, rigorous perspective by comparing LLMs' performance across complexity classes. Moreover, the benchmark is designed with a dynamic update mechanism in which the datapoints are refreshed on a monthly basis. Such regular updates mitigate the risk of LLMs overfitting to the benchmark, promoting a more accurate and reliable assessment of their reasoning capabilities. The benchmark dataset and code of NPHardEval are available at https://github.com/casmlab/NPHardEval.
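To make the dynamic update mechanism concrete, the following is a minimal sketch of how a monthly-refreshed benchmark might regenerate problem instances. This is an illustrative assumption, not the NPHardEval implementation: the function names, the knapsack task, and the seed scheme are hypothetical, chosen only to show how deterministic monthly seeding can yield a fresh but reproducible batch of instances whose difficulty scales with problem size.

```python
# Hypothetical sketch of a monthly benchmark refresh; NOT the NPHardEval code.
import random
from datetime import date


def monthly_seed(today: date | None = None) -> int:
    """Derive a deterministic seed from the year and month, so all runs
    within a month see identical instances, refreshed every month."""
    today = today or date.today()
    return today.year * 100 + today.month


def generate_knapsack_instance(n_items: int, rng: random.Random) -> dict:
    """Generate one random 0/1 knapsack instance (NP-hard in general)."""
    weights = [rng.randint(1, 50) for _ in range(n_items)]
    values = [rng.randint(1, 100) for _ in range(n_items)]
    capacity = sum(weights) // 2  # tight enough that item selection matters
    return {"weights": weights, "values": values, "capacity": capacity}


def refresh_benchmark(sizes: list[int], per_size: int) -> list[dict]:
    """Build a fresh batch of instances; problem size acts as a difficulty knob."""
    rng = random.Random(monthly_seed())
    return [
        generate_knapsack_instance(n, rng)
        for n in sizes
        for _ in range(per_size)
    ]


if __name__ == "__main__":
    instances = refresh_benchmark(sizes=[5, 10, 20], per_size=10)
    print(f"Generated {len(instances)} knapsack instances for this month.")
```

Because the seed is a pure function of the calendar month, every evaluation within a month is comparable, while a model that memorized last month's instances gains nothing the next month; this is the overfitting-mitigation property the abstract describes, under the stated assumptions.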