Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Typically, RLHF first learns a reward model from human feedback, often expressed as preferences between pairs of text generations produced by a pre-trained LLM. The LLM's policy is then fine-tuned by a reinforcement learning algorithm to maximize this reward model. However, current reward models have an inherent limitation: they cannot fully represent the richness of human preferences, and they depend on the sampling distribution. In this study, we introduce an alternative pipeline for fine-tuning LLMs from pairwise human feedback. Our approach first learns a preference model, conditioned on two responses given a prompt, and then pursues a policy that consistently generates responses preferred over those of any competing policy, i.e., the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). For tabular policy representations, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies whose last iterate converges to the regularized Nash equilibrium. We also explore parametric policy representations and introduce gradient-descent algorithms for deep-learning architectures. To demonstrate the effectiveness of our approach, we present experimental results on fine-tuning an LLM for a text summarization task. We believe NLHF offers a compelling avenue for preference learning and policy optimization, with the potential to advance the field of aligning LLMs with human preferences.
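To make the tabular setting concrete, the following is a minimal sketch of a mirror-descent-style iteration of the kind Nash-MD builds on, assuming a known pairwise preference matrix P and a reference policy mu. The geometric-mixture opponent, the step size eta, the regularization strength tau, and the helper name nash_md_tabular are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch only: a tabular, mirror-descent-style iteration toward
# the regularized Nash equilibrium of a pairwise preference model. The exact
# Nash-MD update in the paper may differ in its mixing and step-size details.
import numpy as np

def nash_md_tabular(P, mu, tau=0.1, eta=0.5, n_iters=500, pi0=None):
    """P[i, j] = probability that response i is preferred to response j
    (so P[i, j] + P[j, i] = 1). mu is the reference policy toward which the
    equilibrium is KL-regularized with strength tau; eta is the step size."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n) if pi0 is None else pi0 / pi0.sum()
    for _ in range(n_iters):
        # Opponent: a geometric mixture of the current policy and the
        # reference policy (one way to realize KL regularization toward mu).
        mix = pi ** (1.0 - eta * tau) * mu ** (eta * tau)
        mix /= mix.sum()
        # Expected preference of each response played against the mixture.
        pref = P @ mix
        # Multiplicative-weights (mirror-descent) step against that opponent.
        pi = mix * np.exp(eta * pref)
        pi /= pi.sum()
    return pi

# Toy usage: a non-transitive ("rock-paper-scissors") preference structure,
# where no single response is preferred against every other response.
P = np.array([[0.5, 0.9, 0.1],
              [0.1, 0.5, 0.9],
              [0.9, 0.1, 0.5]])
mu = np.full(3, 1.0 / 3.0)
print(nash_md_tabular(P, mu, pi0=np.array([0.8, 0.15, 0.05])))
# -> approximately [1/3, 1/3, 1/3], the symmetric (regularized) equilibrium.
```

The toy preference matrix is deliberately non-transitive (each response beats exactly one other), so no single response maximizes preference against all opponents; this is precisely the regime where a Nash equilibrium policy is a more natural target than maximizing a scalar reward model.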