Traditional reinforcement learning from human feedback (RLHF) approaches that rely on parametric models such as the Bradley-Terry model fall short of capturing the intransitivity and irrationality of human preferences. Recent advances suggest that working directly with preference probabilities yields a more faithful reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed \textit{Self-play Probabilistic Preference Optimization} (SPPO), approximates the Nash equilibrium through iterative policy updates and enjoys a theoretical convergence guarantee. Our method can effectively increase the log-likelihood of the chosen response and decrease that of the rejected response, which cannot be trivially achieved by symmetric pairwise losses such as Direct Preference Optimization (DPO) and Identity Preference Optimization (IPO). In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset, without any prompt augmentation, and leveraging a pre-trained preference model, PairRM, with only 0.4B parameters, SPPO fine-tunes Mistral-7B-Instruct-v0.2 into a model that achieves a state-of-the-art length-controlled win rate of 28.53\% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms iterative DPO and IPO on MT-Bench and the Open LLM Leaderboard. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses, preferences) from GPT-4 or other stronger language models.