Effectively enhancing the reasoning capabilities of large language models (LLMs) using reinforcement learning (RL) remains a crucial challenge. Existing approaches mainly adopt two contrasting granularities of advantage estimation: token-level methods (e.g., PPO) aim to provide fine-grained advantage signals but suffer from inaccurate estimation due to the difficulty of training an accurate critic model. At the other extreme, trajectory-level methods (e.g., GRPO) rely solely on a coarse-grained advantage signal derived from the final reward, leading to imprecise credit assignment. To address these limitations, we propose Segment Policy Optimization (SPO), a novel RL framework that leverages segment-level advantage estimation at an intermediate granularity. SPO strikes a better balance: it offers more precise credit assignment than trajectory-level methods while requiring fewer estimation points than token-level methods, enabling accurate advantage estimation based on Monte Carlo (MC) sampling without a critic model. SPO features three components with novel strategies: (1) flexible segment partition; (2) accurate segment advantage estimation; and (3) policy optimization using segment advantages, including a novel probability-mask strategy. We further instantiate SPO for two specific scenarios: (1) SPO-chain for short chain-of-thought (CoT), featuring novel cutpoint-based partition and chain-based advantage estimation, which achieves $6$-$12$ percentage point accuracy improvements over PPO and GRPO on GSM8K; and (2) SPO-tree for long CoT, featuring novel tree-based advantage estimation, which significantly reduces the cost of MC estimation and achieves $7$-$11$ percentage point improvements over GRPO on MATH500 under 2K and 4K context evaluation. We make our code publicly available at https://github.com/AIFrameResearch/SPO.
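To make the core idea concrete, the sketch below illustrates segment-level advantage estimation via Monte Carlo value estimates at segment boundaries, as described above. It is a minimal, hypothetical illustration: the helper names (`segment_advantages`, `mc_value`, `rollout_fn`, `reward_fn`) and the fixed rollout budget are assumptions for exposition, not the released SPO implementation, which additionally includes the segment partition strategies and probability-mask optimization discussed in the paper.

```python
# Hypothetical sketch of segment-level MC advantage estimation (not the SPO code).
# A trajectory is split into segments; the value of each segment boundary is
# estimated by Monte Carlo rollouts, and a segment's advantage is the difference
# between the boundary values after and before that segment.

def mc_value(prefix, rollout_fn, reward_fn, num_rollouts=4):
    """Estimate V(prefix) as the mean final reward over MC completions of the prefix."""
    returns = []
    for _ in range(num_rollouts):
        completion = rollout_fn(prefix)               # sample a completion from the policy
        returns.append(reward_fn(prefix + completion))  # e.g., 1.0 if the final answer is correct
    return sum(returns) / len(returns)

def segment_advantages(segments, rollout_fn, reward_fn, num_rollouts=4):
    """Advantage of segment k = V(prefix through segment k) - V(prefix before segment k)."""
    advantages = []
    prefix = []
    v_prev = mc_value(prefix, rollout_fn, reward_fn, num_rollouts)
    for seg in segments:
        prefix = prefix + seg
        v_next = mc_value(prefix, rollout_fn, reward_fn, num_rollouts)
        advantages.append(v_next - v_prev)
        v_prev = v_next
    return advantages

if __name__ == "__main__":
    # Toy usage with dummy policy/reward functions, just to exercise the code path.
    segs = [["step1"], ["step2"], ["step3"]]
    rollout = lambda prefix: ["<answer>"]
    reward = lambda traj: float("step2" in traj)  # pretend correctness check
    print(segment_advantages(segs, rollout, reward))  # e.g., [0.0, 1.0, 0.0]
```

In this toy example, only the segment containing "step2" raises the estimated boundary value, so it receives a positive advantage, illustrating how segment-level estimation assigns credit more finely than a single trajectory-level reward while needing far fewer estimation points than per-token methods.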