Group Relative Policy Optimization (GRPO) has been shown to be an effective algorithm when an accurate reward model is available. However, such a highly reliable reward model is not available in many real-world tasks. In this paper, we focus on multi-objective settings, in which we find that GRPO is vulnerable to reward hacking: it optimizes one of the objectives at the expense of the others. To address this issue, we propose MO-GRPO, an extension of GRPO with a simple normalization method that automatically reweights the reward functions according to the variances of their values. We first show analytically that MO-GRPO ensures that all reward functions contribute evenly to the loss function while preserving the order of preferences, eliminating the need to manually tune the scales of the reward functions. We then evaluate MO-GRPO experimentally in four domains: (i) the multi-armed bandit problem, (ii) a simulated control task (Mo-Gymnasium), (iii) machine translation tasks on the WMT benchmark (En-Ja, En-Zh), and (iv) an instruction-following task. MO-GRPO achieves stable learning by distributing correlations evenly among the reward components and outperforms GRPO, showing it to be a promising algorithm for multi-objective reinforcement learning problems.
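The abstract only sketches the normalization step; the following minimal NumPy sketch illustrates one plausible reading of it, namely rescaling each reward component by its within-group standard deviation before forming the usual GRPO group-relative advantage. The function name `mo_grpo_advantages`, the `eps` constant, and the exact order of operations are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def mo_grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages with per-objective normalization.

    rewards: array of shape (G, K) -- K reward components for each of the
    G completions sampled for the same prompt (one GRPO group).
    """
    # Rescale each reward component by its within-group standard deviation,
    # so that no single objective dominates the combined signal (the
    # variance-based reweighting described in the abstract). Dividing by a
    # positive constant preserves the preference order within each component.
    std = rewards.std(axis=0, keepdims=True)
    normalized = rewards / (std + eps)

    # Sum the rescaled components into a single scalar reward per completion.
    combined = normalized.sum(axis=1)

    # Standard GRPO group normalization on the combined reward.
    return (combined - combined.mean()) / (combined.std() + eps)

# Example: three completions scored by two reward models with very different
# raw scales; without the per-component rescaling, the second objective
# would dominate the advantage estimate.
group_rewards = np.array([[0.9, 12.0],
                          [0.7, 35.0],
                          [0.8,  5.0]])
print(mo_grpo_advantages(group_rewards))
```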