Training stability remains a central challenge in reinforcement learning (RL) for large language models (LLMs). Policy staleness, asynchronous training, and mismatches between training and inference engines all cause the behavior policy to diverge from the current policy, risking training collapse. Importance sampling provides a principled correction for this distribution shift but suffers from high variance; existing remedies such as token-level clipping and sequence-level normalization lack a unified theoretical foundation. We propose Variational sEquence-level Soft Policy Optimization (VESPO). By incorporating variance reduction into a variational formulation over proposal distributions, VESPO derives a closed-form reshaping kernel that operates directly on sequence-level importance weights without length normalization. Experiments on mathematical reasoning benchmarks show that VESPO maintains stable training under staleness ratios up to 64x and fully asynchronous execution, and delivers consistent gains across both dense and Mixture-of-Experts models. Code is available at https://github.com/FloyedShen/VESPO
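For context, a sequence-level importance weight is the product of per-token probability ratios between the current and behavior policies. A minimal sketch (function and variable names are illustrative, not from the VESPO codebase) shows how even small per-token mismatches compound multiplicatively over a long sequence, which is the variance problem the abstract refers to:

```python
import math

def sequence_importance_weight(logp_current, logp_behavior):
    """Sequence-level importance weight: the product of per-token
    probability ratios, computed in log space for numerical stability."""
    log_w = sum(c - b for c, b in zip(logp_current, logp_behavior))
    return math.exp(log_w)

# Toy example: a modest 0.1-nat per-token log-prob gap over 20 tokens
# compounds to exp(2.0) ~= 7.39, illustrating why raw sequence-level
# weights blow up as sequences grow and the policies drift apart.
logp_behavior = [-1.0] * 20  # behavior policy per-token log-probs
logp_current = [-0.9] * 20   # current policy, slightly more confident
w = sequence_importance_weight(logp_current, logp_behavior)
```

This is only a sketch of the uncorrected weight; VESPO's contribution is the closed-form reshaping kernel applied on top of such sequence-level weights, whose exact form is given in the paper rather than here.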