Preference tuning of large language models (LLMs) relies on high-quality human preference data, which is often expensive and time-consuming to gather. While existing methods can use trained reward models or proprietary models as judges for preference annotation, they have notable drawbacks: training reward models remains dependent on initial human data, and using proprietary models imposes license restrictions that inhibit commercial usage. In this paper, we introduce customized density ratio (CDR), a training-free and highly effective method that leverages off-the-shelf LLMs for preference data annotation. Our approach uses the log-density ratio between a better-aligned LLM and a less-aligned LLM as a reward signal. We explore 221 different LLM pairs and empirically demonstrate that increasing the performance gap between paired LLMs correlates with better reward generalization. Furthermore, we show that tailoring the density ratio reward function with specific criteria and preference exemplars enhances performance across domains and within targeted areas. In our experiments using the density ratio from a pair of Mistral-7B models, CDR achieves a RewardBench score of 82.6, outperforming the best trained reward functions from the same model class and demonstrating competitive performance against SoTA models in the Safety (91.0) and Reasoning (88.0) domains. We use CDR to annotate an on-policy preference dataset, with which we preference-tune Llama-3-8B-Instruct using SimPO. Using reward signals from two relatively weak models, our approach pushes Llama-3-8B-Instruct to a 37.4% (+15.1%) win rate on ArenaHard and a 40.7% (+17.8%) win rate on Length-Controlled AlpacaEval 2.0, along with a score of 8.0 on MT-Bench.
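The core reward signal described above is the log-density ratio between a better-aligned and a less-aligned LLM. The following Python sketch illustrates one way such a reward could be computed with Hugging Face transformers; the model names, the `response_logprob` helper, and the prefix-tokenization assumption are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch of a log-density-ratio reward, assuming an instruction-tuned
# (better-aligned) and a base (less-aligned) checkpoint as the model pair.
# All names below are illustrative assumptions, not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def response_logprob(model, tokenizer, prompt: str, response: str) -> float:
    """Sum of token log-probabilities of `response` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs of each next token; keep only the response positions.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prompt_ids.shape[1] - 1:].sum().item()

# Assumed model pair (any better-aligned / less-aligned pair could be used).
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
aligned = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
unaligned = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

def density_ratio_reward(prompt: str, response: str) -> float:
    # r(x, y) = log pi_aligned(y | x) - log pi_unaligned(y | x)
    return (response_logprob(aligned, tok, prompt, response)
            - response_logprob(unaligned, tok, prompt, response))

# To annotate a preference pair, label the response with the higher reward
# as "chosen" and the other as "rejected".
```

Per the abstract, the customized variant would additionally condition the reward on domain-specific criteria and preference exemplars supplied in the prompt; that conditioning is omitted from this sketch.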