Direct preference optimization (DPO) has emerged as a promising approach for aligning large language models (LLMs) with human preferences. However, the widespread reliance on the response-level Bradley-Terry (BT) model may limit its full potential, because the reference and learnable models are assumed to be autoregressive only after the objective function has been derived. Motivated by this limitation, we revisit the theoretical foundations of DPO and propose a novel formulation that explicitly introduces the autoregressive assumption before applying the BT model. By reformulating and extending DPO, we derive a new variant, termed Autoregressive DPO (ADPO), that explicitly integrates autoregressive modeling into the preference optimization framework. Without departing from these theoretical foundations, the derived loss takes an elegant form: it moves the summation in the DPO objective outside the log-sigmoid function. Furthermore, through a theoretical analysis of ADPO, we show that two distinct length measures must be considered when designing DPO-based algorithms: the token length $\mu$ and the feedback length $\mu'$. To the best of our knowledge, we are the first to explicitly distinguish these two measures and analyze their implications for preference optimization in LLMs.
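A minimal sketch of the contrast described above, with notation assumed for illustration rather than taken from the paper ($x$ a prompt, $y_w$ and $y_l$ the preferred and dispreferred responses, $\pi_\theta$ and $\pi_{\mathrm{ref}}$ the learnable and reference policies, $\beta$ the usual DPO temperature): the standard DPO objective keeps the per-token summation inside the log-sigmoid,
\[
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}\!\left[\log \sigma\!\left( \beta \sum_{t} \log \frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\mathrm{ref}}(y_{w,t}\mid x, y_{w,<t})} \;-\; \beta \sum_{t} \log \frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\mathrm{ref}}(y_{l,t}\mid x, y_{l,<t})} \right)\right],
\]
whereas a loss of the form the abstract attributes to ADPO moves that summation outside,
\[
\mathcal{L}_{\mathrm{ADPO}} = -\,\mathbb{E}\!\left[\sum_{t} \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\mathrm{ref}}(y_{w,t}\mid x, y_{w,<t})} \;-\; \beta \log \frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\mathrm{ref}}(y_{l,t}\mid x, y_{l,<t})} \right)\right].
\]
How the tokens of $y_w$ and $y_l$ are paired under the outer summation, and how responses of unequal length are handled, are details the abstract does not specify and are left open here.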