The recent success and openness of DeepSeek-R1 have brought widespread attention to Group Relative Policy Optimization (GRPO) as a reinforcement learning method for large reasoning models (LRMs). In this work, we analyze the GRPO objective under a binary reward setting and reveal an inherent limitation of question-level difficulty bias. We also identify a connection between GRPO and traditional discriminative methods in supervised learning. Motivated by these insights, we introduce a new Discriminative Constrained Optimization (DisCO) framework for reinforcing LRMs, grounded in the principle of discriminative learning. The main differences between DisCO and GRPO (and its recent variants) are: (1) it replaces the group relative objective with a discriminative objective defined by a scoring function; (2) it abandons clipping-based surrogates in favor of non-clipping RL surrogate objectives used as scoring functions; (3) it employs a simple yet effective constrained optimization approach to enforce the KL divergence constraint. As a result, DisCO offers notable advantages over GRPO and its variants: (i) it completely eliminates difficulty bias by adopting discriminative objectives; (ii) it addresses the entropy instability in GRPO and its variants through the use of non-clipping scoring functions and a constrained optimization approach, yielding long and stable training dynamics; (iii) it allows the incorporation of advanced discriminative learning techniques to address data imbalance, where a significant number of questions have more negative than positive generated answers during training. Our experiments on enhancing the mathematical reasoning capabilities of SFT-finetuned models show that DisCO significantly outperforms GRPO and its improved variants such as DAPO, achieving average gains of 7\% over GRPO and 6\% over DAPO across six benchmark tasks for a 1.5B model.
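To make the difficulty-bias claim concrete, the sketch below works out the standard GRPO group-normalized advantage under a binary reward (using the population standard deviation), followed by a schematic discriminative constrained program. The second display is an assumption-level paraphrase of the DisCO recipe for illustration only; the symbols $s_\theta$, $P_q$, $N_q$, and $\delta$, as well as the direction and exact form of the KL constraint, are our notation rather than the paper's.

\begin{verbatim}
% Sketch only: GRPO group-normalized advantage under a binary reward.
% For a question q with G sampled answers, r_i in {0,1}, and empirical
% accuracy p = (#correct)/G, we have mean(r) = p and std(r) = sqrt(p(1-p)):
\[
\hat{A}_i \;=\; \frac{r_i - \operatorname{mean}(r_{1:G})}{\operatorname{std}(r_{1:G})}
\;=\;
\begin{cases}
\sqrt{\tfrac{1-p}{p}}, & r_i = 1 \ \text{(correct answer)},\\[4pt]
-\sqrt{\tfrac{p}{1-p}}, & r_i = 0 \ \text{(incorrect answer)},
\end{cases}
\]
% so the per-question weighting depends only on p, i.e. on how hard the
% question is for the current policy -- the question-level difficulty bias.

% Schematic (assumed, not the paper's exact formulation) of a discriminative
% constrained objective: push scores of correct generations above those of
% incorrect ones, under an explicit KL trust-region constraint.
\[
\max_{\theta}\;
\mathbb{E}_{q}\!\left[
\frac{1}{|P_q|}\sum_{o \in P_q} s_\theta(o \mid q)
\;-\;
\frac{1}{|N_q|}\sum_{o \in N_q} s_\theta(o \mid q)
\right]
\quad \text{s.t.}\quad
\mathbb{E}_{q}\!\left[\mathrm{KL}\big(\pi_{\theta_{\mathrm{old}}}(\cdot\mid q)
\,\big\|\,\pi_{\theta}(\cdot\mid q)\big)\right] \le \delta,
\]
% where P_q / N_q denote the correct / incorrect generations for q,
% s_theta is a non-clipping RL surrogate used as the scoring function
% (e.g., a likelihood-ratio-style score), and delta is the KL budget.
% All of these choices are illustrative assumptions.
\end{verbatim}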