Reward inference (learning a reward model from human preferences) is a critical intermediate step in the Reinforcement Learning from Human Feedback (RLHF) pipeline for fine-tuning Large Language Models (LLMs). In practice, RLHF faces fundamental challenges such as distribution shift, reward model overfitting, and problem misspecification. An alternative approach is direct policy optimization without reward inference, such as Direct Preference Optimization (DPO), which offers a much simpler pipeline and has shown empirical success in LLM applications. However, DPO relies on a closed-form relationship between the optimal policy and the reward function that holds only in the bandit setting or in deterministic MDPs. This paper develops two RLHF algorithms without reward inference for general RL problems beyond bandits and deterministic MDPs, and for general preference models beyond the Bradley-Terry model. The key idea is to estimate a local value function difference from human preferences and then approximate the policy gradient with a zeroth-order gradient approximator. For both algorithms, we establish polynomial convergence rates in terms of the number of policy gradient iterations, the number of trajectory samples, and the number of human preference queries per iteration. Numerical experiments in stochastic environments validate the performance of the proposed algorithms, which outperform popular RLHF baselines such as DPO and PPO. Our results show that provably efficient methods exist for solving general RLHF problems without reward inference.
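As a hedged illustration of the key idea, the update can be sketched with a standard two-point zeroth-order estimator; the symbols $\mu$, $u$, $\eta$, and $\widehat{\Delta V}$ below are illustrative notation, not the paper's, and the sketch is not the authors' exact construction. A value difference between a policy $\pi_\theta$ and a perturbed policy $\pi_{\theta+\mu u}$ is first estimated directly from pairwise human preference queries over their trajectories, with no reward model in between:
\[
  \widehat{\Delta V} \;\approx\; V^{\pi_{\theta + \mu u}} - V^{\pi_{\theta}}
  \quad \text{(estimated from preference comparisons of trajectory pairs)},
\]
\[
  \widehat{\nabla} J(\theta) \;=\; \frac{d}{\mu}\, \widehat{\Delta V}\, u,
  \qquad
  \theta_{t+1} \;=\; \theta_t + \eta\, \widehat{\nabla} J(\theta_t),
\]
where $\theta \in \mathbb{R}^d$ is the policy parameter, $u$ is a random direction drawn uniformly from the unit sphere, $\mu > 0$ is a smoothing radius, and $\eta$ is the step size. Under standard smoothness assumptions, $\widehat{\nabla} J(\theta)$ is the classical zeroth-order estimate of the gradient of a $\mu$-smoothed objective, here driven by the preference-based estimate $\widehat{\Delta V}$ rather than observed rewards.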