Group relative policy optimization (GRPO), a core methodological component of DeepSeekMath and DeepSeek-R1, has emerged as a cornerstone for scaling the reasoning capabilities of large language models. Despite its widespread adoption and the proliferation of follow-up work, the theoretical properties of GRPO remain comparatively understudied. This paper provides a unified framework for understanding GRPO through the lens of classical U-statistics. We show that the GRPO policy gradient is inherently a U-statistic, which allows us to characterize its mean squared error (MSE) and to derive a finite-sample error bound and the asymptotic distribution of the suboptimality gap of its learned policy. Our findings reveal that GRPO is asymptotically equivalent to an oracle policy gradient algorithm, that is, one with access to a value function quantifying the quality of the learned policy at each training iteration, and that it achieves asymptotically optimal performance within a broad class of policy gradient algorithms. Furthermore, we establish a universal scaling law that offers principled guidance for selecting the optimal group size. Experiments further validate our theoretical findings, confirming that the optimal group size is universal and verifying the oracle property of GRPO.
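For reference, a minimal sketch of the GRPO gradient estimator in the form popularized by DeepSeekMath; the notation ($q$, $o_i$, $r_i$, $G$, $\hat{g}(\theta)$) is introduced here for illustration and is not taken from this abstract. For a prompt $q$, a group of $G$ responses $o_1,\dots,o_G$ is sampled from the current policy and scored with rewards $r_1,\dots,r_G$, and the update uses group-relative advantages:

\[
\hat{A}_i \;=\; \frac{r_i - \tfrac{1}{G}\sum_{j=1}^{G} r_j}{\operatorname{std}(r_1,\dots,r_G)},
\qquad
\hat{g}(\theta) \;=\; \frac{1}{G}\sum_{i=1}^{G} \hat{A}_i\, \nabla_\theta \log \pi_\theta(o_i \mid q).
\]

Because each $\hat{A}_i$ depends symmetrically on the entire group of $G$ samples, $\hat{g}(\theta)$ averages a permutation-symmetric kernel over the group, which is the structural property suggested by the U-statistic viewpoint above; the precise kernel, the MSE characterization, and the scaling law are developed in the paper itself.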