Direct Preference Optimization (DPO) aligns language models with human preferences. Using on-policy samples, generated directly by the policy model, typically yields better performance than using off-policy samples because their distribution matches that of the model. This paper identifies the quality of candidate preference samples as another critical factor. While the quality of on-policy data is inherently bounded by the capabilities of the policy model, off-policy data, which can be drawn from diverse sources, offers greater potential quality despite suffering distribution shift. However, current research relies mostly on on-policy data and, because of the challenge posed by distribution shift, neglects the quality advantage of off-policy data. In this paper, we propose InCo-DPO, an efficient method for synthesizing preference data by integrating on-policy and off-policy data, allowing dynamic adjustment of the balance between distribution shift and data quality to find an optimal trade-off. InCo-DPO thus overcomes both the distribution-shift limitation of off-policy data and the quality ceiling of on-policy data. We evaluate InCo-DPO on the Alpaca-Eval 2.0 and Arena-Hard benchmarks. Experimental results demonstrate that our approach not only outperforms both on-policy and off-policy data alone but also achieves a state-of-the-art win rate of 60.8 on Arena-Hard with vanilla DPO using the Gemma-2 model.
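To make the idea concrete, the sketch below shows one plausible way to pool on-policy generations with off-policy responses and train with the standard DPO objective. This is a minimal illustration under assumptions, not the paper's actual InCo-DPO procedure: the candidate-pooling heuristic and the `policy.generate`, `off_policy_pool.get`, and `reward_model.score` interfaces are hypothetical placeholders; only the `dpo_loss` function follows the standard DPO formulation.

```python
# Minimal sketch (assumed setup, not the paper's exact method): mix on-policy
# samples with off-policy responses for the same prompt, score the pooled
# candidates, and train on the resulting (chosen, rejected) pairs with DPO.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Vanilla DPO objective on per-sequence log-probabilities."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


def build_preference_pair(prompt, policy, off_policy_pool, reward_model, k=4):
    """Hypothetical candidate pooling: combine k on-policy generations with
    off-policy responses for the same prompt, then take the best- and
    worst-scored candidates as the (chosen, rejected) pair."""
    on_policy = [policy.generate(prompt) for _ in range(k)]       # assumed API
    candidates = on_policy + off_policy_pool.get(prompt, [])      # assumed API
    scores = [reward_model.score(prompt, c) for c in candidates]  # assumed API
    ranked = sorted(zip(scores, candidates), key=lambda x: x[0])
    return ranked[-1][1], ranked[0][1]  # (chosen, rejected)
```

In this sketch, the trade-off between distribution shift and data quality is controlled only implicitly by how many off-policy candidates enter the pool; the dynamic adjustment mechanism described in the abstract is part of InCo-DPO itself and is not reproduced here.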