We introduce Self-correction Relative Policy Optimization (ScRPO), a novel reinforcement learning framework designed to empower large language models with advanced mathematical reasoning capabilities through iterative self-reflection and error correction. The ScRPO framework operates in two distinct phases: (1) a trial-and-error learning stage, in which the model is trained with GRPO and its incorrect responses are collected to form an "error pool"; and (2) a self-correction learning stage, which guides the model to introspectively analyze and rectify the reasoning flaws behind its previous errors. Extensive evaluations across challenging mathematical benchmarks, including AIME, AMC, Olympiad, MATH-500, and GSM8k, validate the efficacy of our approach. Using DeepSeek-R1-Distill-Qwen-1.5B and 7B as backbones, ScRPO achieves average accuracies of 64.8% and 77.8%, respectively, representing improvements of 6.0% and 3.2% over the vanilla baselines and consistently outperforming strong post-training methods such as DAPO and GRPO. These findings establish ScRPO as a robust paradigm for enabling autonomous self-improvement in AI systems, particularly on tasks with limited external feedback.
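To make the two-phase procedure summarized above concrete, the following is a minimal sketch of the ScRPO training loop under a verifiable-reward setup. It is not a definitive implementation: the helper names (`sample_group`, `grpo_update`, `is_correct`, `build_correction_prompt`) are hypothetical placeholders passed in as callables, not functions from any released ScRPO code.

```python
# Minimal sketch of the two-phase ScRPO loop described in the abstract.
# All helper callables (sample_group, grpo_update, is_correct,
# build_correction_prompt) are hypothetical placeholders.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ErrorRecord:
    problem: str
    wrong_response: str


@dataclass
class ErrorPool:
    records: List[ErrorRecord] = field(default_factory=list)

    def add(self, problem: str, response: str) -> None:
        self.records.append(ErrorRecord(problem, response))


def trial_and_error_stage(policy, problems, sample_group, is_correct, grpo_update):
    """Phase 1: train with GRPO and collect incorrect responses into an error pool."""
    pool = ErrorPool()
    for problem in problems:
        group = sample_group(policy, problem)            # sample a group of responses
        rewards = [1.0 if is_correct(problem, r) else 0.0 for r in group]
        policy = grpo_update(policy, problem, group, rewards)  # group-relative update
        for response, reward in zip(group, rewards):
            if reward == 0.0:                            # keep failures for phase 2
                pool.add(problem, response)
    return policy, pool


def self_correction_stage(policy, pool, build_correction_prompt,
                          sample_group, is_correct, grpo_update):
    """Phase 2: prompt the model to analyze and correct its own earlier mistakes."""
    for record in pool.records:
        prompt = build_correction_prompt(record.problem, record.wrong_response)
        group = sample_group(policy, prompt)
        rewards = [1.0 if is_correct(record.problem, r) else 0.0 for r in group]
        policy = grpo_update(policy, prompt, group, rewards)
    return policy
```

The sketch only illustrates the data flow implied by the abstract: phase 1 produces both an updated policy and an error pool, and phase 2 reuses that pool to construct self-correction prompts that are optimized with the same group-relative objective.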