We study the problem of aligning large language models (LLMs) with human preference data. Contrastive preference optimization has shown promising results in aligning LLMs with available preference data by optimizing the implicit reward associated with the policy. However, the contrastive objective focuses mainly on the relative values of implicit rewards associated with two responses while ignoring their actual values, resulting in suboptimal alignment with human preferences. To address this limitation, we propose calibrated direct preference optimization (Cal-DPO), a simple yet effective algorithm. We show that substantial improvement in alignment with the given preferences can be achieved simply by calibrating the implicit reward to ensure that the learned implicit rewards are comparable in scale to the ground-truth rewards. We demonstrate the theoretical advantages of Cal-DPO over existing approaches. The results of our experiments on a variety of standard benchmarks show that Cal-DPO remarkably improves off-the-shelf methods.
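To make the calibration idea concrete, below is a minimal sketch, not the paper's exact objective: it combines the standard DPO contrastive term with a squared-error term that pulls each implicit reward toward an absolute target value, so the learned rewards stay on a scale comparable to the ground-truth rewards. The function name `calibrated_dpo_loss` and the parameters `target_w`, `target_l`, and `calib_weight` (including their default values) are illustrative assumptions, not quantities specified in this abstract.

```python
# Hedged sketch of a calibrated DPO-style loss; targets and weights are assumed.
import torch
import torch.nn.functional as F


def calibrated_dpo_loss(
    policy_logp_w: torch.Tensor,   # log pi_theta(y_w | x), shape (batch,)
    policy_logp_l: torch.Tensor,   # log pi_theta(y_l | x), shape (batch,)
    ref_logp_w: torch.Tensor,      # log pi_ref(y_w | x), shape (batch,)
    ref_logp_l: torch.Tensor,      # log pi_ref(y_l | x), shape (batch,)
    beta: float = 0.1,             # usual DPO temperature
    target_w: float = 1.0,         # assumed target scale for the chosen reward
    target_l: float = -1.0,        # assumed target scale for the rejected reward
    calib_weight: float = 1.0,     # assumed weight on the calibration term
) -> torch.Tensor:
    # Implicit rewards: r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)).
    reward_w = beta * (policy_logp_w - ref_logp_w)
    reward_l = beta * (policy_logp_l - ref_logp_l)

    # Contrastive (DPO) term: depends only on the reward margin, i.e. the
    # relative values of the two implicit rewards.
    contrastive = -F.logsigmoid(reward_w - reward_l)

    # Calibration term: pulls each implicit reward toward an absolute target,
    # so the actual reward values, not just their difference, are constrained.
    calibration = (reward_w - target_w) ** 2 + (reward_l - target_l) ** 2

    return (contrastive + calib_weight * calibration).mean()


if __name__ == "__main__":
    # Toy usage with random log-probabilities for a batch of 4 preference pairs.
    torch.manual_seed(0)
    logps = [torch.randn(4) - 5.0 for _ in range(4)]
    print(calibrated_dpo_loss(*logps))
```

In this sketch, dropping the calibration term recovers a plain contrastive objective, which is insensitive to adding any constant to both rewards; the squared-error term removes that degree of freedom by anchoring each reward to a fixed scale.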