Direct Preference Optimisation (DPO) is effective at significantly improving the performance of large language models (LLMs) on downstream tasks such as reasoning, summarisation, and alignment. Using pairs of preferred and dispreferred data, DPO models the \textit{relative} probability of picking one response over another. In this work, first we show theoretically that the standard DPO loss can lead to a \textit{reduction} of the model's likelihood of the preferred examples, as long as the relative probability between the preferred and dispreferred classes increases. We then show empirically that this phenomenon occurs when fine-tuning LLMs on common datasets, especially datasets in which the edit distance between pairs of completions is low. Using these insights, we design DPO-Positive (DPOP), a new loss function and training procedure which avoids this failure mode. Surprisingly, we also find that DPOP significantly outperforms DPO across a wide variety of datasets and downstream tasks, including datasets with high edit distances between completions. By fine-tuning with DPOP, we create and release Smaug-34B and Smaug-72B, which achieve state-of-the-art open-source performance. Notably, Smaug-72B is nearly 2\% better than any other open-source model on the HuggingFace Open LLM Leaderboard and becomes the first open-source LLM to surpass an average accuracy of 80\%.