Reward models play an important role in aligning large language models (LLMs), but they are typically trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models using both unlabeled and labeled data. Building on the generative models underlying LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that, by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result provides a new view of training reward models, linking generative and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model that can be applied to a wide range of tasks with little or no further fine-tuning. Extensive experiments show that this model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving significant performance improvements over several strong baselines.
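To make the label-smoothing claim concrete, here is a minimal sketch of how smoothing the standard Bradley-Terry pairwise objective produces an explicit regularizer; the notation (reward $r_\theta$, margin $\Delta$, smoothing weight $\epsilon$, preferred/rejected responses $y_w, y_l$) is assumed for illustration, and the paper's exact formulation may differ.

\begin{align*}
\mathcal{L}(\theta) &= -\log \sigma(\Delta), \qquad \Delta = r_\theta(x, y_w) - r_\theta(x, y_l), \\
\mathcal{L}_{\epsilon}(\theta) &= -(1-\epsilon)\log \sigma(\Delta) - \epsilon \log \sigma(-\Delta) \\
&= -\log \sigma(\Delta) + \epsilon\,\Delta,
\end{align*}

where the last step uses $\log\sigma(-\Delta) = \log\sigma(\Delta) - \Delta$. Under these assumptions, the smoothed objective is the original pairwise ranking loss plus a penalty $\epsilon\,\Delta$ that discourages unbounded reward margins, i.e., a regularized pairwise ranking loss.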