Reward models (RMs) play a critical role in aligning language models through reinforcement learning from human feedback. RMs are trained to predict a scalar score reflecting human preference, which requires substantial time and cost for human annotation. Moreover, RMs tend to overfit quickly to superficial features in the training set, hindering their generalization to unseen distributions. We propose a novel approach that uses synthetic natural-language critiques generated by large language models to provide additional feedback, evaluating aspects such as instruction following, correctness, and style. These critiques offer richer signals and more robust features for RMs to assess and score responses. We demonstrate that high-quality critiques improve the performance and data efficiency of RMs initialized from different pretrained models. Conversely, we also show that low-quality critiques degrade performance. Furthermore, incorporating critiques enhances the interpretability and robustness of RM training.
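To make the setup concrete, the sketch below shows one plausible way to condition a reward model on a synthetic critique: the critique is appended to the prompt–response pair before scoring. This is a minimal illustration under our own assumptions; the function name and input template are hypothetical and not the paper's actual API.

```python
def build_rm_input(prompt: str, response: str, critique: str) -> str:
    """Assemble a single text input for a critique-conditioned reward model.

    The critique (e.g., generated by an LLM judging instruction following,
    correctness, and style) is concatenated after the response so the RM can
    attend to it when producing a preference score. The exact template is a
    hypothetical choice for illustration.
    """
    return (
        f"Prompt: {prompt}\n"
        f"Response: {response}\n"
        f"Critique: {critique}\n"
        "Score the response above, taking the critique into account."
    )
```

In practice, pairs of such inputs (chosen vs. rejected responses, each with its own critique) would be fed to a standard preference-ranking loss; the critique text gives the RM explicit, human-readable features beyond the raw response.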