Traditional reinforcement learning from human feedback (RLHF) approaches that rely on parametric models such as the Bradley-Terry model fall short of capturing the intransitivity and irrationality of human preferences. Recent advances suggest that working directly with preference probabilities yields a more faithful reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment that treats the problem as a constant-sum two-player game whose goal is to identify the Nash equilibrium policy. Our approach, dubbed \textit{Self-Play Preference Optimization} (SPPO), approximates the Nash equilibrium through iterative policy updates and enjoys a theoretical convergence guarantee. Our method can effectively increase the log-likelihood of the chosen response and decrease that of the rejected response, which cannot be trivially achieved by symmetric pairwise losses such as Direct Preference Optimization (DPO) and Identity Preference Optimization (IPO). In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset, without any prompt augmentation, and leveraging a pre-trained preference model, PairRM, with only 0.4B parameters, SPPO fine-tunes Mistral-7B-Instruct-v0.2 into a model that achieves a state-of-the-art length-controlled win rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms (iterative) DPO and IPO on MT-Bench and the Open LLM Leaderboard. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses, preferences) from GPT-4 or other stronger language models.
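The iterative update toward the Nash equilibrium policy can be illustrated with a minimal sketch. SPPO is motivated by a multiplicative-weights-style update in which responses that beat the current policy (according to a preference model) gain probability mass. The toy below is an assumption-laden illustration of that idea over a discrete response set, not the paper's actual implementation; `sppo_step`, the `eta` step size, and the toy win probabilities are all hypothetical names and values introduced here.

```python
import math

def sppo_step(pi_t, win_prob, eta=1.0):
    """One multiplicative-weights-style update toward the Nash policy.

    Sketch only: the actual SPPO objective is a regression loss over
    log-probability ratios; this toy shows the exponential-weight update
    it approximates.

    pi_t: dict mapping response -> current probability.
    win_prob: dict mapping response -> estimated probability that the
        response beats a sample from pi_t (from a preference model
        such as PairRM).
    """
    # Reweight each response by the exponentiated advantage
    # (win probability minus the 1/2 tie baseline) ...
    unnorm = {y: p * math.exp(eta * (win_prob[y] - 0.5))
              for y, p in pi_t.items()}
    z = sum(unnorm.values())
    # ... and renormalize into a valid distribution.
    return {y: w / z for y, w in unnorm.items()}

# Responses with above-1/2 win probability gain mass after the update.
pi = {"a": 0.5, "b": 0.5}
pi = sppo_step(pi, {"a": 0.8, "b": 0.2}, eta=2.0)
print(pi["a"] > pi["b"])  # True
```

Iterating this update shifts the policy toward responses that are preferred over the policy's own samples, which is the self-play mechanism the abstract describes.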