While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the workhorse of a variety of reinforcement learning (RL) applications, including the fine-tuning of generative models. Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g., value networks, clipping) and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a minimalist RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the relative reward between two completions to a prompt via a direct policy parameterization, enabling a strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and handle the intransitive preferences we frequently see in practice. Empirically, we find that REBEL provides a unified approach to language modeling and image generation, with performance stronger than or similar to that of PPO and DPO, all while being simpler to implement and more computationally tractable than PPO.
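As a rough sketch of the reduction described above (the function name, single-pair setup, and scale parameter `eta` are illustrative assumptions, not the paper's implementation), the regression step can be viewed as a least-squares fit of the policy's change in log-probability ratios onto the observed reward difference between two completions of the same prompt:

```python
def rebel_loss(logp_new_y, logp_old_y, logp_new_yp, logp_old_yp,
               r_y, r_yp, eta=1.0):
    """Illustrative squared-error objective for one pair of completions
    (y, y') to the same prompt: regress the scaled difference of
    log-probability ratios under the new vs. old policy onto the
    relative reward r(y) - r(y'). All names here are assumptions.
    """
    # Difference of log-prob ratios between the two completions.
    ratio_diff = (logp_new_y - logp_old_y) - (logp_new_yp - logp_old_yp)
    # Squared regression error against the relative reward.
    return ((1.0 / eta) * ratio_diff - (r_y - r_yp)) ** 2
```

Note that when the policy has not yet moved (new log-probs equal old log-probs), the loss is simply the squared reward gap, so minimizing it pushes the log-probability ratios apart in proportion to that gap.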