We provide a new perspective on GSPO's length-normalized importance ratios by establishing their connection to information-theoretic quantities. We show that GSPO's sequence-level weight $s(\theta) = (\pi_\theta/\pi_{\theta_{\text{old}}})^{1/|y|}$ can be equivalently expressed as the inverse perplexity ratio $\text{PPL}_{\theta_{\text{old}}}/\text{PPL}_\theta$ and as the exponentiated cross-entropy change $\exp(\Delta H)$, where $\Delta H = H(\theta_{\text{old}}) - H(\theta)$ denotes the per-token cross-entropy decrease on the sampled sequence. While the perplexity-entropy relationship follows from standard definitions, this observation provides a useful lens for understanding GSPO: the algorithm weights policy gradient updates by perplexity ratios, giving the importance weights an information-theoretic interpretation. This perspective helps explain GSPO's empirical properties, including log-domain variance reduction through geometric averaging and stability when training mixture-of-experts models. We validate the mathematical equivalences and variance predictions through controlled experiments on mathematical reasoning tasks.
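The three expressions of the sequence-level weight can be checked numerically. The sketch below is a minimal illustration, assuming hypothetical per-token log-probabilities for a single sampled sequence; it is not taken from the paper's experiments.

```python
import math

# Hypothetical per-token log-probs of one sampled sequence y (illustration only):
logp_old = [-1.2, -0.7, -2.1, -0.4]   # log pi_old(y_t | x, y_<t)
logp_new = [-1.0, -0.9, -1.8, -0.3]   # log pi_theta(y_t | x, y_<t)
n = len(logp_old)

# GSPO sequence-level weight: length-normalized importance ratio
# s(theta) = (pi_theta / pi_old)^(1/|y|), computed in the log domain.
s = math.exp((sum(logp_new) - sum(logp_old)) / n)

# Perplexity of y under each policy: exp of per-token negative log-likelihood.
ppl_old = math.exp(-sum(logp_old) / n)
ppl_new = math.exp(-sum(logp_new) / n)

# Per-token cross-entropy change, old minus new (in nats).
delta_h = (-sum(logp_old) / n) - (-sum(logp_new) / n)

# All three expressions coincide: s = PPL_old / PPL_new = exp(delta_h).
assert abs(s - ppl_old / ppl_new) < 1e-12
assert abs(s - math.exp(delta_h)) < 1e-12
```

Working in the log domain, as above, is also where the variance-reduction claim lives: the $1/|y|$ exponent is a geometric mean over token-level ratios, which averages (rather than multiplies) per-token log-ratio fluctuations.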