Large reasoning models such as OpenAI o1 and DeepSeek-R1 have demonstrated remarkable performance on complex reasoning tasks. A critical component of their training is the use of reference-based reward systems within reinforcement learning (RL), in which model outputs are evaluated against ground-truth references. However, existing reward benchmarks focus on preference comparisons between responses rather than on verification against ground-truth references, leaving a critical gap in our ability to evaluate the verification systems used in reasoning model training. In this paper, we introduce VerifyBench and its challenging variant VerifyBench-Hard, two benchmarks specifically designed to assess reference-based reward systems. Both benchmarks are constructed through meticulous data collection and curation, followed by careful human annotation to ensure high quality. Our comprehensive evaluation reveals that while larger model-based verifiers show promise on standard cases, all current systems leave substantial room for improvement on challenging instances. Through a systematic analysis of performance patterns across reasoning tasks and error categories, we provide insights for advancing reference-based reward systems. These benchmarks establish a standardized framework for improving verification accuracy, ultimately enhancing the reasoning capabilities of models trained via RL.
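For context, a reference-based reward system of the kind these benchmarks evaluate can be as simple as a rule that extracts a model's final answer and compares it with the ground-truth reference, emitting a scalar reward for RL. The sketch below is a minimal illustration only; the function name and the answer-extraction heuristic are our assumptions, not the verifiers studied in the paper.

```python
import re

def reference_based_reward(model_output: str, reference: str) -> float:
    """Return 1.0 if the model's final answer matches the ground-truth
    reference, else 0.0. Hypothetical rule-based verifier for illustration."""
    # Extract the final answer, e.g. the content of \boxed{...} if present,
    # otherwise fall back to the last non-empty line of the output.
    boxed = re.search(r"\\boxed\{([^}]*)\}", model_output)
    answer = boxed.group(1) if boxed else model_output.strip().splitlines()[-1]
    # Normalize whitespace and case before an exact-match comparison.
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    return 1.0 if normalize(answer) == normalize(reference) else 0.0

# An RL trainer would use this scalar as the reward signal for the rollout.
print(reference_based_reward("Reasoning steps...\n\\boxed{42}", "42"))  # 1.0
```

Model-based verifiers replace such exact-match rules with an LLM judgment of whether the response is consistent with the reference, which is precisely the behavior VerifyBench and VerifyBench-Hard are designed to measure.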