Large Reasoning Models (LRMs) demonstrate strong performance on complex tasks but often suffer from overthinking, which leads to substantial inference costs. Existing approaches synthesize shorter reasoning responses for LRMs to learn from, but they are inefficient for online usage because their data generation and filtering processes are time-consuming. Meanwhile, online reinforcement learning mainly adopts a length reward to encourage short reasoning responses, but the model then tends to lose its reflection ability, harming performance. To address these issues, we propose REA-RL, which introduces a small reflection model for efficient scaling during online training, offering both parallel sampling and sequential revision. In addition, we design a reflection reward to further prevent LRMs from favoring short yet non-reflective responses. Experiments show that both components maintain or enhance performance while significantly improving inference efficiency. Their combination achieves a good balance between performance and efficiency, reducing inference costs by 36% without compromising performance. Further analysis shows that our methods are effective because they maintain the reflection frequency on hard problems while appropriately reducing it on easier ones, without losing reflection ability. Code is available at https://github.com/hexuandeng/REA-RL.
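To make the reward design concrete, the sketch below illustrates how a length penalty can be combined with a reflection bonus so that short but non-reflective responses are not favored. This is a minimal illustration under stated assumptions, not the REA-RL implementation: the marker list, weights, and function names are hypothetical.

```python
# Illustrative sketch (NOT the authors' code): a correctness reward with a
# length penalty, plus a bonus when the response retains reflective phrases.
# REFLECTION_MARKERS and all weights are assumed for demonstration only.

REFLECTION_MARKERS = ("wait", "let me double-check", "on second thought", "verify")

def reward(response: str, is_correct: bool,
           max_len: int = 4096,
           len_weight: float = 0.2,
           reflect_weight: float = 0.1) -> float:
    """Correctness reward, minus a length penalty, plus a reflection bonus."""
    base = 1.0 if is_correct else 0.0
    # Penalize longer responses, capped at max_len tokens (here: whitespace words).
    length_penalty = len_weight * min(len(response.split()), max_len) / max_len
    # Reward responses that still contain reflective phrasing.
    reflects = any(m in response.lower() for m in REFLECTION_MARKERS)
    reflection_bonus = reflect_weight if reflects else 0.0
    return base - length_penalty + reflection_bonus

if __name__ == "__main__":
    short_no_reflect = "The answer is 42."
    short_with_reflect = "The answer is 42. Wait, let me double-check: yes, 42."
    print(reward(short_no_reflect, True))    # misses the reflection bonus
    print(reward(short_with_reflect, True))  # slightly longer, but scores higher
```

Under this scoring, a purely length-driven reward would prefer the first response; the reflection bonus flips the preference, which mirrors the abstract's motivation for penalizing short yet non-reflective outputs.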