Direct Preference Optimization (DPO) has become a widely used training method for the instruction fine-tuning of large language models (LLMs). In this work, we explore an under-investigated aspect of DPO: its dependency on the reference model or policy. Such reference policies, typically instantiated as the model to be further fine-tuned, are important because they can impose an upper limit on DPO's effectiveness. Therefore, we address three related research questions. First, we explore the optimal strength of the KL divergence constraint in DPO, which penalizes deviations from the reference policy, and find that DPO is sensitive to this strength. Next, we examine the necessity of the KL constraint toward the reference policy in DPO by providing both theoretical and empirical comparisons between DPO and related learning objectives, demonstrating DPO's superiority in this controlled setting. Additionally, we investigate whether DPO benefits from stronger reference policies, finding that a stronger reference policy can lead to improved performance, but only when it is similar to the model being fine-tuned. Our findings highlight the confounding role of reference policies in DPO and offer insights for best practices, while also identifying open research questions for future studies.
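As context for the discussion of the reference policy and the KL-constraint strength, a standard form of the DPO objective (following Rafailov et al., 2023) is sketched below. The notation is an illustrative convention rather than this paper's own: $\pi_\theta$ denotes the policy being fine-tuned, $\pi_{\mathrm{ref}}$ the reference policy, $y_w$ and $y_l$ the preferred and dispreferred responses for a prompt $x$, and $\beta$ the coefficient governing the implicit KL constraint.

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Under this formulation, a larger $\beta$ keeps $\pi_\theta$ closer to $\pi_{\mathrm{ref}}$, while a smaller $\beta$ permits larger deviations; the first research question above concerns how to set this strength, and the third concerns what happens when $\pi_{\mathrm{ref}}$ itself is replaced by a stronger model.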