Current reinforcement learning objectives for large-model reasoning focus primarily on maximizing expected reward. This paradigm can lead to overfitting to dominant reward signals while neglecting alternative yet valid reasoning trajectories, thereby limiting diversity and exploration. To address this issue, we introduce Learning Advantage Distributions (LAD), a distribution-matching framework that replaces advantage maximization with learning the advantage-induced distribution. By establishing the equivalence between the optimal policy update and an advantage-based target distribution, we derive a practical LAD objective formulated as minimizing an $f$-divergence between the policy-induced and advantage-induced distributions. The resulting gradient update increases the likelihood of high-advantage responses while suppressing over-confident probability growth, preventing collapse without requiring auxiliary entropy regularization. LAD incurs no extra training cost compared to GRPO and scales naturally to LLM post-training. In a controlled bandit setting, LAD faithfully recovers the multimodal advantage distribution, validating the theoretical formulation. Experiments on math and code reasoning tasks across several LLM backbones show that LAD reliably improves both accuracy and generative diversity.
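To make the distribution-matching step concrete, the following is a minimal sketch of one plausible instantiation, not the paper's exact derivation: it assumes a softmax target $q_\beta$ built from the advantages of a group of $G$ sampled responses with an illustrative temperature $\beta$, and both distributions renormalized over that group.
\[
q_\beta(y_i \mid x) \;=\; \frac{\exp\!\big(A(x, y_i)/\beta\big)}{\sum_{j=1}^{G} \exp\!\big(A(x, y_j)/\beta\big)},
\qquad
\mathcal{L}_{\mathrm{LAD}}(\theta) \;=\; D_f\!\big(\pi_\theta(\cdot \mid x)\,\big\|\, q_\beta(\cdot \mid x)\big)
\;=\; \sum_{i=1}^{G} q_\beta(y_i \mid x)\, f\!\left(\frac{\pi_\theta(y_i \mid x)}{q_\beta(y_i \mid x)}\right).
\]
Under these assumptions, choosing $f(t) = -\log t$ reduces the objective to a cross-entropy-like update that raises the log-probability of each response in proportion to its weight under $q_\beta$, rather than pushing all probability mass onto the single highest-advantage response, which is consistent with the collapse-suppression behavior described above.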