The enhancement of reasoning capabilities in large language models (LLMs) has garnered significant attention, with supervised fine-tuning (SFT) and reinforcement learning emerging as the dominant paradigms. While recent studies recognize the importance of reflection in reasoning, existing methods seldom encourage reflection proactively during training. This study focuses on mathematical reasoning and proposes a four-stage framework that integrates Group Relative Policy Optimization (GRPO) with a reflection reward mechanism to strengthen LLMs' self-reflective capabilities; the approach also incorporates established accuracy and format rewards. Experimental results show that reflection-encouraged GRPO training achieves state-of-the-art performance, and ablation studies confirm the pivotal role of the reflection reward. Comparative evaluations further show that full-parameter SFT outperforms low-rank adaptation (LoRA), albeit at higher computational cost. Building on these findings, this research substantiates GRPO's methodological significance in post-training optimization and envisions its potential as a key enabler for future LLM-based intelligent agents through the synergistic integration of cognitive rewards with dynamic environmental interactions.
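To make the reward design concrete, the sketch below illustrates how accuracy, format, and reflection rewards might be combined into the scalar signal that GRPO normalizes within each sampling group. This is a minimal illustration under assumed conventions: the `<think>/<answer>` tags, the `\boxed{}` answer format, the reflection cue list, and the reward weights are all placeholders, not the paper's exact definitions.

```python
import re

# Assumed reflection cues and reward magnitudes; the abstract does not specify
# the paper's exact reward definitions, so these values are illustrative only.
REFLECTION_CUES = ("wait", "let me re-check", "on reflection", "re-examine")

def accuracy_reward(completion: str, gold_answer: str) -> float:
    """1.0 if the \\boxed{...} answer matches the reference answer, else 0.0."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if match and match.group(1).strip() == gold_answer.strip() else 0.0

def format_reward(completion: str) -> float:
    """Reward well-formed <think>...</think><answer>...</answer> outputs."""
    pattern = r"^<think>.*?</think>\s*<answer>.*?</answer>$"
    return 0.5 if re.match(pattern, completion.strip(), flags=re.DOTALL) else 0.0

def reflection_reward(completion: str) -> float:
    """Small bonus when the reasoning trace contains explicit self-reflection cues."""
    text = completion.lower()
    return 0.25 if any(cue in text for cue in REFLECTION_CUES) else 0.0

def total_reward(completion: str, gold_answer: str) -> float:
    """Scalar reward assigned to one sampled completion before GRPO normalization."""
    return (accuracy_reward(completion, gold_answer)
            + format_reward(completion)
            + reflection_reward(completion))

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO's group-relative advantage: normalize each reward by the group's mean and std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

In practice, a group of completions is sampled per prompt, each is scored with `total_reward`, and the group-relative advantages weight the policy-gradient update; the reflection bonus shifts probability mass toward completions that exhibit self-checking behavior.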