Standard reinforcement learning from human feedback (RLHF) approaches that rely on parametric models such as the Bradley-Terry model fall short of capturing the intransitivity and irrationality of human preferences. Recent advances suggest that working directly with preference probabilities yields a more faithful reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment that treats the problem as a constant-sum two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed Self-Play Preference Optimization (SPPO), uses iterative policy updates to provably approximate the Nash equilibrium. Additionally, we propose a new SPPO objective that is strongly motivated by theory yet simple and effective in practice. In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset, without any prompt augmentation, and leveraging a pre-trained preference model PairRM with only 0.4B parameters, SPPO can fine-tune Mistral-7B-Instruct-v0.2 into a model that achieves a state-of-the-art length-controlled win rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms (iterative) DPO and IPO on MT-Bench, Arena-Hard, and the Open LLM Leaderboard. Starting from the stronger base model Llama-3-8B-Instruct, we achieve a length-controlled win rate of 38.77%. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses or preferences) from GPT-4 or other stronger language models. Code is available at https://github.com/uclaml/SPPO.
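As a minimal illustration of the game-theoretic idea (a toy sketch, not the authors' training code), the snippet below runs an exponential-weights self-play update over a small fixed set of candidate responses, where `P[i][j]` is the preference probability that response `i` beats response `j`. The matrix chosen here is an intransitive, rock-paper-scissors-style cycle, exactly the kind of preference a Bradley-Terry model cannot represent; the time-averaged policy of the self-play dynamics approaches the Nash equilibrium of the constant-sum game (uniform for this matrix). The function name `sppo_toy`, the step size, and the step count are illustrative choices, not values from the paper.

```python
import math

# Intransitive preference matrix: P[i][j] = probability response i beats j.
# Rows/columns cycle like rock-paper-scissors, so no Bradley-Terry scores
# can reproduce these pairwise win rates.
P = [
    [0.5, 1.0, 0.0],
    [0.0, 0.5, 1.0],
    [1.0, 0.0, 0.5],
]

def sppo_toy(P, steps=20000, eta=0.02):
    """Exponential-weights self-play on the constant-sum preference game.

    Returns the time-averaged policy, which approximates the game's
    Nash equilibrium (hyperparameters here are illustrative).
    """
    n = len(P)
    pi = [0.6, 0.3, 0.1]          # deliberately non-uniform starting policy
    avg = [0.0] * n               # time-averaged policy
    for _ in range(steps):
        # Expected win rate of each response against the current policy.
        win = [sum(pi[j] * P[i][j] for j in range(n)) for i in range(n)]
        # Multiplicative-weights update: pi_{t+1}(i) ∝ pi_t(i) * exp(eta * win_i).
        new = [pi[i] * math.exp(eta * win[i]) for i in range(n)]
        z = sum(new)
        pi = [w / z for w in new]
        for i in range(n):
            avg[i] += pi[i] / steps
    return avg

avg = sppo_toy(P)
```

Because both "players" of the symmetric game run the same no-regret update, the averaged policy converges to an approximate Nash equilibrium; in the full method this toy update over a fixed response list is replaced by updating the language model's policy itself against a learned preference model.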