Sampling efficiency is a key bottleneck in reinforcement learning with verifiable rewards. Existing group-based policy optimization methods, such as GRPO, allocate a fixed number of rollouts to every training prompt. This uniform allocation implicitly treats all prompts as equally informative, which can waste computational budget and impede training progress. We introduce VIP, a Variance-Informed Predictive allocation strategy that distributes a given rollout budget across the prompts in the current batch so as to minimize the expected gradient variance of the policy update. At each iteration, VIP fits a lightweight Gaussian process model that predicts per-prompt success probabilities from recent rollouts. These predictions are converted into variance estimates, which feed a convex optimization problem that determines the optimal rollout allocation under a hard compute budget constraint. Empirical results show that VIP consistently improves sampling efficiency and outperforms uniform and heuristic allocation strategies across multiple benchmarks.
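The allocation step can be illustrated with a minimal sketch. The abstract does not specify the solver, so the code below assumes a Bernoulli reward model (per-prompt variance p(1-p)) and uses the closed-form Neyman-style solution of the resulting convex problem, allocating rollouts in proportion to each prompt's predicted standard deviation; the function name and the `n_min` floor are illustrative, not from the paper.

```python
import numpy as np

def allocate_rollouts(p_hat, budget, n_min=1):
    """Hypothetical variance-informed allocation sketch (not the paper's
    exact solver). Treating each prompt's reward as Bernoulli(p_i), the
    per-rollout variance is v_i = p_i * (1 - p_i). Minimizing the summed
    estimator variance sum_i v_i / n_i subject to sum_i n_i = budget
    yields the Neyman-style rule n_i proportional to sqrt(v_i)."""
    p_hat = np.asarray(p_hat, dtype=float)
    sigma = np.sqrt(p_hat * (1.0 - p_hat))   # predicted per-prompt std
    if sigma.sum() == 0.0:
        # all prompts predicted as always-solved or never-solved:
        # no variance signal, fall back to uniform allocation
        n = np.full(len(p_hat), budget // len(p_hat), dtype=int)
    else:
        n = np.floor(budget * sigma / sigma.sum()).astype(int)
    n = np.maximum(n, n_min)                 # keep a floor per prompt
    # spend any leftover budget on the highest-variance prompts first
    leftover = budget - n.sum()
    order = np.argsort(-sigma)
    i = 0
    while leftover > 0:
        n[order[i % len(n)]] += 1
        leftover -= 1
        i += 1
    return n
```

For example, with predicted success probabilities `[0.5, 0.9, 0.99]` and a budget of 10, the near-coin-flip prompt receives the most rollouts and the nearly-solved prompt the fewest.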