A significant portion of recent research on Large Language Model (LLM) alignment focuses on developing new policy optimization methods built on Group Relative Policy Optimization (GRPO). Two prominent directions have emerged: (i) a shift toward sequence-level importance sampling weights that better match the sequence-level rewards used in many tasks, and (ii) alternatives to PPO-style clipping that aim to avoid the associated loss of training signal and entropy collapse. Recent work such as Soft Adaptive Policy Optimization (SAPO) reformulates the clipped surrogate objective within the GRPO framework and achieves both sequence-level coherence and token-level adaptivity, while Geometric-Mean Policy Optimization (GMPO) applies token-wise ratio clipping inside sequence-level importance sampling weights. Building on these ideas, this work proposes a new objective that promotes effective policy exploration while maintaining training stability. Specifically, we introduce Soft Sequence Policy Optimization, an off-policy reinforcement learning objective that incorporates soft gating functions over token-level probability ratios within sequence-level importance weights.
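To make the construction concrete, the following is a minimal sketch of what such an objective can look like; the notation ($g_\tau$, $s_i$, $\hat{A}_i$), the sigmoid-style soft gate, and the length-normalized geometric-mean aggregation are illustrative assumptions rather than the exact formulation proposed in this work. For a prompt $x$ with a group of $G$ sampled responses $y_1,\dots,y_G$ and GRPO-style group-normalized advantages $\hat{A}_i$, token-level probability ratios are softly gated and then aggregated into a sequence-level importance weight:

\[
r_{i,t}(\theta)=\frac{\pi_\theta\!\left(y_{i,t}\mid x, y_{i,<t}\right)}{\pi_{\theta_{\mathrm{old}}}\!\left(y_{i,t}\mid x, y_{i,<t}\right)},
\qquad
s_i(\theta)=\Bigg(\prod_{t=1}^{|y_i|} g_\tau\!\big(r_{i,t}(\theta)\big)\Bigg)^{1/|y_i|},
\]
\[
\mathcal{J}(\theta)=\mathbb{E}_{x,\,\{y_i\}\sim\pi_{\theta_{\mathrm{old}}}}\Bigg[\frac{1}{G}\sum_{i=1}^{G} s_i(\theta)\,\hat{A}_i\Bigg],
\qquad
\hat{A}_i=\frac{R(x,y_i)-\operatorname{mean}_j R(x,y_j)}{\operatorname{std}_j R(x,y_j)}.
\]

Here $g_\tau$ denotes a smooth, bounded gate, for example a temperature-$\tau$ sigmoid-shaped surrogate for the hard clip $\mathrm{clip}(r,1-\epsilon,1+\epsilon)$. Unlike a hard clip, such a gate keeps gradients nonzero, though attenuated, for ratios outside the trust region, which is how soft gating aims to preserve exploration signal; aggregating the gated ratios multiplicatively with length normalization keeps the sequence-level weight aligned with sequence-level rewards while limiting the influence of individual outlier tokens.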