Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate the training of the model's self-verification capability into the standard RLVR process, thereby unifying reasoning and verification within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically show that the closed-form solution to the RL objective of self-verification reduces to a remarkably simple form: the true reasoning reward of a solution equals its last-token self-rewarding score, computed as the KL-coefficient-scaled difference between the policy model's next-token log-probability assigned to any pre-specified token at the solution's last token and a pre-calculated constant. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with an MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution at the solution's last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model's reasoning performance but also equips it with remarkable self-rewarding capability, thereby boosting its inference-time scaling performance.
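As a concrete illustration of the quantities described above, the following is a minimal sketch in notation chosen here for readability rather than the paper's own symbols: $\beta$ denotes the KL coefficient, $t^*$ the pre-specified token, $c$ the pre-calculated constant, $r(x, y)$ the verifier-based reasoning reward of solution $y$ to problem $x$, and $\lambda$ a hypothetical weighting factor for the auxiliary loss that the abstract does not specify.

$$
s_\theta(x, y) \;=\; \beta \,\big( \log \pi_\theta(t^* \mid x, y) - c \big),
\qquad
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{RLVR}}(\theta) \;+\; \lambda \, \mathbb{E}_{(x, y)}\!\left[ \big( s_\theta(x, y) - r(x, y) \big)^2 \right].
$$

Under this reading, the closed-form result stated in the abstract corresponds to $s_\theta(x, y) = r(x, y)$ at the optimum of the self-verification objective, and at test time $s_\theta(x, y)$ is read off from the next-token distribution at the solution's last token, costing only one extra token inference.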