Reinforcement Learning from Human Feedback (RLHF) has emerged as a critical technique for training large language models. However, reward hacking, a phenomenon in which models exploit flaws in the reward model, remains a significant barrier to achieving robust and scalable intelligence through long-term training. Existing studies have proposed uncertain reward models to address reward hacking; however, these approaches often lack systematic or theoretical foundations and fail to model the uncertainty that intrinsically emerges from preference data. In this paper, we propose the Probabilistic Uncertain Reward Model (PURM), a natural generalization of the classical Bradley-Terry reward model. PURM learns reward distributions directly from preference data and quantifies per-sample uncertainty via the average overlap area between reward distributions. To mitigate reward hacking, we further introduce an uncertainty-aware penalty into Proximal Policy Optimization (PPO), which leverages the learned uncertainty to dynamically balance reward optimization and exploration. We also provide a lightweight and easy-to-use implementation of PURM. Experiments demonstrate that PURM significantly delays the onset of reward hacking while improving final reward performance, outperforming baseline methods in both stability and effectiveness.
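To make the ideas summarized above concrete, the following is a minimal sketch, not the paper's reference implementation. It assumes Gaussian reward distributions parameterized by a mean and log-variance head, a Bradley-Terry likelihood generalized by integrating over both reward distributions, per-sample uncertainty estimated from the overlap area between a single pair of reward densities (the paper averages overlaps across samples), and a simple additive uncertainty penalty on the PPO reward. All class and function names here are illustrative.

```python
# Sketch of a probabilistic uncertain reward model with Gaussian rewards.
# Assumptions are noted above; this is not the authors' released code.

import math
import torch
import torch.nn as nn


class ProbabilisticRewardHead(nn.Module):
    """Maps a pooled hidden state to a reward mean and log-variance."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.mean = nn.Linear(hidden_size, 1)
        self.log_var = nn.Linear(hidden_size, 1)

    def forward(self, h: torch.Tensor):
        return self.mean(h).squeeze(-1), self.log_var(h).squeeze(-1)


def probabilistic_bt_loss(mu_w, log_var_w, mu_l, log_var_l):
    """Generalized Bradley-Terry negative log-likelihood.

    With r_w ~ N(mu_w, sigma_w^2) and r_l ~ N(mu_l, sigma_l^2), the difference
    r_w - r_l is Gaussian, so P(r_w > r_l) = Phi((mu_w - mu_l) / sqrt(sigma_w^2 + sigma_l^2)).
    """
    var_sum = torch.exp(log_var_w) + torch.exp(log_var_l)
    z = (mu_w - mu_l) / torch.sqrt(var_sum + 1e-8)
    p = 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return -torch.log(p.clamp_min(1e-8)).mean()


def gaussian_overlap(mu1, var1, mu2, var2, num_points: int = 2048):
    """Numerically estimate the overlap area of two Gaussian reward densities:
    the integral of the pointwise minimum of the two PDFs (1 = identical, 0 = disjoint)."""
    lo = min(mu1 - 6 * math.sqrt(var1), mu2 - 6 * math.sqrt(var2))
    hi = max(mu1 + 6 * math.sqrt(var1), mu2 + 6 * math.sqrt(var2))
    x = torch.linspace(lo, hi, num_points)
    pdf1 = torch.exp(-0.5 * (x - mu1) ** 2 / var1) / math.sqrt(2 * math.pi * var1)
    pdf2 = torch.exp(-0.5 * (x - mu2) ** 2 / var2) / math.sqrt(2 * math.pi * var2)
    return torch.trapz(torch.minimum(pdf1, pdf2), x).item()


def penalized_reward(mu, uncertainty, lam: float = 1.0):
    """Uncertainty-aware reward for PPO: ambiguous (high-overlap) samples are down-weighted."""
    return mu - lam * uncertainty


if __name__ == "__main__":
    head = ProbabilisticRewardHead(hidden_size=16)
    h_w, h_l = torch.randn(4, 16), torch.randn(4, 16)  # stand-ins for pooled LM states
    mu_w, lv_w = head(h_w)
    mu_l, lv_l = head(h_l)
    loss = probabilistic_bt_loss(mu_w, lv_w, mu_l, lv_l)
    overlap = gaussian_overlap(mu_w[0].item(), lv_w[0].exp().item(),
                               mu_l[0].item(), lv_l[0].exp().item())
    print(loss.item(), penalized_reward(mu_w[0].item(), overlap))
```

In this sketch, the overlap-based penalty shrinks the reward handed to PPO for samples whose chosen and rejected reward distributions are hard to distinguish, which is one way to realize the dynamic balance between reward optimization and exploration described above.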