While large reasoning models trained with critic-free reinforcement learning with verifiable rewards (RLVR) represent the state of the art, their practical utility is hampered by ``overthinking'': models generate excessively long reasoning paths with no performance benefit. Existing remedies that penalize response length often fail, degrading performance due to a fundamental misalignment between trajectory-level rewards and token-level optimization. In this work, we introduce DECS, a framework built on our theoretical analysis of two previously unaddressed flaws in current length rewards: (1) the erroneous penalization of essential exploratory tokens and (2) the inadvertent rewarding of partial redundancy. DECS combines (i) a decoupled token-level reward mechanism that surgically distinguishes and penalizes redundant tokens, and (ii) a curriculum batch scheduling strategy that balances efficiency against efficacy. Across seven benchmarks, DECS reduces reasoning tokens by over 50\% while maintaining or even improving accuracy, demonstrating that substantial gains in reasoning efficiency can be achieved without compromising a model's underlying reasoning power. Code is available at https://github.com/pixas/DECS.
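To make the decoupled token-level reward concrete, consider a minimal sketch; the notation below is illustrative only and is not the paper's exact formulation. Given a trajectory $\tau$ with verifiable outcome reward $R(\tau)$, a hypothetical decoupled reward assigns each token $t$
\[
r_t = R(\tau) - \lambda\,\mathbb{1}\!\left[t \in \mathcal{R}(\tau)\right],
\]
where $\mathcal{R}(\tau)$ is the set of tokens identified as redundant (the detection rule itself is part of the paper's contribution and is not reproduced here) and $\lambda > 0$ is an assumed penalty weight. Under such a decomposition, exploratory tokens inherit the full outcome reward, avoiding flaw (1), while a correct trajectory gains nothing from its redundant spans, avoiding flaw (2).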