Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically, RLHF algorithms operate in two phases: first, they use human preferences to learn a reward function, and second, they align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption about human preferences, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.
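The contrastive objective described above can be sketched as follows. This is a minimal illustration rather than the paper's exact implementation: it scores each trajectory segment by a discounted sum of policy log-probabilities (the regret-based preference score) and applies a binary cross-entropy loss between the preferred and rejected segment. The temperature `alpha` and discount `gamma` are assumed hyperparameters, and the per-step log-probabilities are assumed to come from the learned policy.

```python
import math

def cpl_loss(logp_pref, logp_rej, alpha=0.1, gamma=1.0):
    """Contrastive loss over a pair of trajectory segments.

    logp_pref / logp_rej: lists of per-step log-probabilities
    log pi(a_t | s_t) for the preferred and rejected segments under
    the current policy. Each segment's score is the discounted,
    alpha-scaled sum of its log-probabilities; the preference
    likelihood is a Boltzmann (softmax) distribution over the two
    scores, and we return its negative log-likelihood.
    """
    s_pref = sum(alpha * (gamma ** t) * lp for t, lp in enumerate(logp_pref))
    s_rej = sum(alpha * (gamma ** t) * lp for t, lp in enumerate(logp_rej))
    # Numerically stable -log softmax via the log-sum-exp trick.
    m = max(s_pref, s_rej)
    log_z = m + math.log(math.exp(s_pref - m) + math.exp(s_rej - m))
    return -(s_pref - log_z)
```

The loss decreases as the policy assigns more probability to the preferred segment's actions relative to the rejected segment's, so minimizing it over a dataset of preference pairs is a supervised, fully off-policy procedure with no reward model or RL loop.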