Training Large Language Models (LLMs) for multi-turn Tool-Integrated Reasoning (TIR) - where models iteratively reason, generate code, and verify through execution - remains challenging for existing reinforcement learning (RL) approaches. Current RL methods, exemplified by Group Relative Policy Optimization (GRPO), suffer from coarse-grained, trajectory-level rewards that provide insufficient learning signal for complex multi-turn interactions, leading to training stagnation. To address this issue, we propose Group Turn Policy Optimization (GTPO), a novel RL algorithm specifically designed for training LLMs on multi-turn TIR tasks. GTPO introduces three key innovations: (1) turn-level reward assignment, which provides fine-grained feedback for individual turns; (2) return-based advantage estimation, which computes normalized discounted returns as advantages; and (3) self-supervised reward shaping, which exploits self-supervision signals from generated code to densify sparse, binary outcome-based rewards. Our comprehensive evaluation demonstrates that GTPO outperforms GRPO by 3.0% across diverse math reasoning benchmarks, establishing its effectiveness. GTPO also outperforms GRPO by 3.9% on commonsense reasoning and program synthesis tasks, demonstrating its generalizability to non-math domains. Importantly, GTPO incurs negligible overhead, ensuring its practicality for real-world scenarios.
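To make the return-based advantage estimation concrete, the following is a minimal sketch, assuming turn-level rewards have already been assigned to each trajectory in a GRPO-style sampled group. The function name `turn_level_advantages`, the discount factor `gamma=0.99`, and the choice to normalize over all turns in the group are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def turn_level_advantages(group_turn_rewards, gamma=0.99):
    """Illustrative sketch: normalized discounted returns as advantages.

    group_turn_rewards: one reward sequence per sampled trajectory in the
    group; each sequence holds one scalar reward per turn (hypothetical
    shaping rewards on intermediate turns, outcome reward on the last).
    """
    # Discounted return G_t = r_t + gamma * G_{t+1}, computed per trajectory
    # by sweeping the turn rewards back to front.
    all_returns = []
    for rewards in group_turn_rewards:
        G, returns = 0.0, []
        for r in reversed(rewards):
            G = r + gamma * G
            returns.append(G)
        all_returns.append(list(reversed(returns)))

    # Group-relative normalization (assumed here to pool every turn in the
    # group), mirroring how GRPO normalizes trajectory-level rewards.
    flat = np.concatenate([np.asarray(g, dtype=float) for g in all_returns])
    mu, sigma = flat.mean(), flat.std() + 1e-8
    return [[(g - mu) / sigma for g in traj] for traj in all_returns]

# Usage: a group of two rollouts; the second turn of the first rollout
# succeeds at the final (outcome) check, the second rollout does not.
advs = turn_level_advantages([[0.1, 0.0, 1.0], [0.0, 0.1, 0.0]])
```

Under this sketch, each turn receives its own advantage rather than inheriting a single trajectory-level value, which is the fine-grained credit assignment the abstract describes.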