Reinforcement Learning with Verifiable Rewards (RLVR) has recently demonstrated notable success in enhancing the reasoning performance of large language models (LLMs), particularly on mathematics and programming tasks. Similar to how traditional RL helps agents explore and learn new strategies, RLVR is believed to enable LLMs to continuously self-improve, thus acquiring novel reasoning abilities beyond those of the corresponding base models. In this study, we critically examine the current state of RLVR by systematically probing the reasoning capability boundaries of RLVR-trained LLMs across various model families, RL algorithms, and math, coding, and visual reasoning benchmarks, using pass@k at large k values as the evaluation metric. Surprisingly, we find that the current training setup does not elicit fundamentally new reasoning patterns. While RLVR-trained models outperform their base models at small k (e.g., k = 1), the base models achieve higher pass@k scores when k is large. Coverage and perplexity analyses show that the observed reasoning abilities originate from, and are bounded by, the base model. Treating the base model as an upper bound, our quantitative analysis shows that six popular RLVR algorithms perform similarly and remain far from optimal in leveraging its potential. By contrast, we find that distillation can introduce new reasoning patterns from the teacher and genuinely expand the model's reasoning capabilities. Overall, our findings suggest that current RLVR methods have not yet realized the potential of RL to elicit truly novel reasoning abilities in LLMs. This highlights the need for improved RL paradigms, such as continual scaling and multi-turn agent-environment interaction, to unlock this potential.
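For concreteness, the pass@k comparison described above is presumably computed with the standard unbiased estimator of Chen et al. (2021), 1 - C(n-c, k)/C(n, k), rather than by literally running k-sample trials. The minimal sketch below shows that estimator; the sample counts in the usage example (n = 256 samples per problem, 3 of them verified correct) are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem: the probability that
    at least one of k samples, drawn without replacement from n generated
    samples (c of which pass verification), is correct.
    Equals 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    # Numerically stable product form of 1 - C(n - c, k) / C(n, k)
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative numbers (assumed, not from the paper): 256 samples, 3 correct.
print(pass_at_k(n=256, c=3, k=1))    # ~0.012: looks weak at k = 1
print(pass_at_k(n=256, c=3, k=128))  # ~0.88: coverage emerges at large k
```

The product form avoids overflow from large binomial coefficients and, unlike drawing a single batch of k completions, uses all n samples, which matters when k is in the hundreds; averaging this quantity over problems at small and large k is what surfaces the crossover between RLVR-trained and base models.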