RL-based post-training of language models is almost exclusively done using on-policy methods such as PPO. These methods cannot learn from arbitrary sequences, such as those produced earlier in training, in earlier runs, by human experts or other policies, or by decoding and exploration methods. This results in severe sample inefficiency and exploration difficulties, as well as a potential loss of diversity in the policy responses. Moreover, asynchronous PPO implementations require frequent and costly model transfers, and typically use value models, which require a large amount of memory. In this paper we introduce Soft Policy Optimization (SPO), a simple, scalable, and principled Soft RL method for sequence model policies that can learn from arbitrary online and offline trajectories and does not require a separate value model. In experiments on code contests, we show that SPO outperforms PPO on pass@10, is significantly faster and more memory efficient, is able to benefit from off-policy data, enjoys improved stability, and learns more diverse (i.e., soft) policies.