Reinforcement Learning from Human Feedback (RLHF) has been crucial to the recent success of Large Language Models (LLMs); however, it is often a complex and brittle process. In the classical RLHF framework, a reward model is first trained to represent human preferences, which is in turn used by an online reinforcement learning (RL) algorithm to optimize the LLM. A prominent issue with such methods is reward over-optimization or reward hacking, where performance as measured by the learned proxy reward model increases, but true quality plateaus or even deteriorates. Direct Alignment Algorithms (DAAs) like Direct Preference Optimization (DPO) have emerged as alternatives to the classical RLHF pipeline by circumventing the reward modeling phase. However, although DAAs do not use a separate proxy reward model, they still commonly deteriorate from over-optimization. While the so-called reward hacking phenomenon is not well-defined for DAAs, we still uncover similar trends: at higher KL budgets, DAA methods exhibit degradation patterns similar to those of their classic RLHF counterparts. In particular, we find that DAA methods deteriorate not only across a wide range of KL budgets but also often before even a single epoch of the dataset is completed. Through extensive empirical experimentation, this work formulates and formalizes the reward over-optimization or hacking problem for DAAs and explores its consequences across objectives, training regimes, and model scales.
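For concreteness, here is a minimal sketch of the two objectives contrasted above, written in standard notation rather than this work's own formalization. Classical RLHF maximizes a learned proxy reward $r_\phi$ subject to a KL penalty against a reference policy $\pi_{\mathrm{ref}}$:

$$\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\big[ r_\phi(x, y) \big] \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\big[ \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big],$$

whereas a DAA such as DPO optimizes the policy directly on preference pairs $(x, y_w, y_l)$ with no separate reward model:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) \;=\; -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right].$$

In both formulations $\beta$ controls the implicit KL budget; the degradation patterns discussed here concern true quality falling as that budget is spent, even though no explicit proxy reward is being queried in the DAA case.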