As language models regularly make mistakes when solving math problems, automated identification of errors in the reasoning process becomes increasingly significant for their scalable oversight. In this paper, we introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning. It consists of 3,400 test cases, primarily focused on competition- and Olympiad-level math problems. Each test case contains a step-by-step solution with the error location annotated by human experts. Models are required to identify the earliest step that contains an error, or conclude that all steps are correct. We conduct extensive evaluation on ProcessBench, involving two types of models: process reward models (PRMs) and critic models, where for the latter we prompt general language models to critique each solution step by step. We draw two main observations: (1) Existing PRMs typically fail to generalize to more challenging math problems beyond GSM8K and MATH. They underperform both critic models (i.e., prompted general language models) and our own trained PRM that is straightforwardly fine-tuned on the PRM800K dataset. (2) The best open-source model, QwQ-32B-Preview, demonstrates critique capability competitive with the proprietary model GPT-4o, although it still lags behind the reasoning-specialized o1-mini. We hope ProcessBench can foster future research in reasoning process assessment, paving the way toward scalable oversight of language models.
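The evaluation protocol described above — identify the earliest erroneous step, or conclude that all steps are correct — can be sketched as a simple exact-match check. The field names and the sentinel value `-1` for "all steps correct" below are illustrative assumptions, not ProcessBench's actual data schema.

```python
# A minimal sketch of a ProcessBench-style test case and its scoring rule.
# Field names and the -1 "all correct" convention are hypothetical, not the
# dataset's actual schema.

def score_prediction(label: int, prediction: int) -> bool:
    """A prediction is correct iff it points at the earliest erroneous
    step index, or returns -1 when every step is correct."""
    return prediction == label

# Example: a step-by-step solution whose first error is at step index 1.
case = {
    "problem": "Compute 3 * (4 + 5).",
    "steps": [
        "Step 1: 4 + 5 = 9.",
        "Step 2: 3 * 9 = 28.",   # arithmetic error: should be 27
        "Step 3: The answer is 28.",
    ],
    "label": 1,  # index of the earliest erroneous step; -1 if none
}

assert score_prediction(case["label"], 1)        # earliest error found
assert not score_prediction(case["label"], 2)    # later step: not credited
assert not score_prediction(case["label"], -1)   # "all correct" is wrong here
```

Note that only the *earliest* error counts: flagging a later step that merely propagates the mistake is scored as incorrect, which is what distinguishes process-level assessment from outcome-level answer checking.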