Varied approaches for aligning language models have been proposed, including supervised fine-tuning, RLHF, and direct optimization methods such as DPO. Although DPO has rapidly gained popularity due to its straightforward training process and competitive results, an open question remains: are there practical advantages to using a discriminator, such as a reward model, to evaluate responses? We propose D2PO, discriminator-guided DPO, an approach for the online setting where preferences are collected throughout learning. As we collect gold preferences, we use them not only to train our policy, but also to train a discriminative response evaluation model that silver-labels additional synthetic data for policy training. We explore this approach across a diverse set of tasks, including a realistic chat setting, and find that it yields higher-quality outputs than DPO with the same data budget, as well as greater efficiency in terms of preference data requirements. Furthermore, we show the conditions under which silver labeling is most helpful: it is most effective when training the policy with DPO, outperforming traditional PPO, and it benefits from maintaining a discriminator separate from the policy model.
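To make the training loop described above concrete, the following is a minimal sketch of a D2PO-style round, assuming a structure inferred from the abstract rather than the authors' actual code. The functions `sample_responses`, `gold_preference`, `dpo_update`, and the `Discriminator` class are hypothetical placeholders: gold preference labels are spent on a subset of prompts and also train the discriminator, which then silver-labels the remaining model-generated pairs before a DPO update on the combined data.

```python
# Sketch of a D2PO-style round (assumed structure, not the authors' code):
# gold preferences train both the policy (via DPO) and a separate
# discriminator; the discriminator then silver-labels extra pairs.

import random
from typing import List, Tuple


# Hypothetical stand-ins for the real components (policy, annotator).
def sample_responses(prompt: str, n: int = 2) -> List[str]:
    """Placeholder: sample n candidate responses from the current policy."""
    return [f"{prompt} :: response {i} ({random.random():.2f})" for i in range(n)]


def gold_preference(a: str, b: str) -> Tuple[str, str]:
    """Placeholder for a human/gold annotator returning (chosen, rejected)."""
    return (a, b) if len(a) >= len(b) else (b, a)


class Discriminator:
    """Placeholder reward/ranking model trained on gold preference pairs."""

    def __init__(self) -> None:
        self.pairs: List[Tuple[str, str]] = []

    def update(self, chosen: str, rejected: str) -> None:
        self.pairs.append((chosen, rejected))

    def prefer(self, a: str, b: str) -> Tuple[str, str]:
        # Real version: score both responses with the learned model;
        # here, a trivial proxy so the sketch runs end to end.
        return (a, b) if len(a) >= len(b) else (b, a)


def dpo_update(pairs: List[Tuple[str, str]]) -> None:
    """Placeholder for one DPO optimization step on (chosen, rejected) pairs."""
    pass


def d2po_round(prompts: List[str], disc: Discriminator, gold_budget: int) -> None:
    pairs: List[Tuple[str, str]] = []
    for i, prompt in enumerate(prompts):
        a, b = sample_responses(prompt)
        if i < gold_budget:
            chosen, rejected = gold_preference(a, b)  # spend a gold label
            disc.update(chosen, rejected)             # also trains discriminator
        else:
            chosen, rejected = disc.prefer(a, b)      # silver label from discriminator
        pairs.append((chosen, rejected))
    dpo_update(pairs)  # policy trained on gold + silver pairs together


if __name__ == "__main__":
    disc = Discriminator()
    d2po_round([f"prompt {i}" for i in range(8)], disc, gold_budget=3)
```

The key design choice this sketch illustrates is the separation of the discriminator from the policy: gold labels do double duty (policy data and discriminator data), which is what allows the silver labels to stretch a fixed preference budget.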