Reward models (RMs) play a critical role in aligning language models through reinforcement learning from human feedback (RLHF). RMs are trained to predict a score reflecting human preference, which requires significant time and cost for human annotation. Additionally, RMs tend to overfit quickly to superficial features of the training set, hindering their generalization to unseen distributions. We propose a novel approach that uses synthetic natural-language critiques generated by large language models to provide additional feedback, evaluating aspects such as instruction following, correctness, and style. These critiques offer richer signals and more robust features for RMs to use when assessing and scoring responses. We demonstrate that high-quality critiques improve the performance and data efficiency of RMs initialized from different pretrained models, reducing the reliance on costly human annotations. Furthermore, incorporating critiques improves both the interpretability and robustness of RM training.
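As a concrete illustration of the idea, the minimal sketch below shows one plausible way to combine synthetic critiques with standard preference-based RM training: a critique covering the aspects named above is generated for each response, appended to the RM input, and the model is trained with the usual Bradley-Terry pairwise loss. The critique template, `build_rm_input`, and the input layout are illustrative assumptions, not the paper's implementation; the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical critique prompt; the aspects mirror those named in the
# abstract (instruction following, correctness, style).
CRITIQUE_TEMPLATE = (
    "Critique the response for instruction following, correctness, "
    "and style.\nPrompt: {prompt}\nResponse: {response}\nCritique:"
)


def build_rm_input(prompt: str, response: str, critique: str) -> str:
    # Append the synthetic critique so the RM scores the response
    # conditioned on richer, more explicit feedback (assumed layout).
    return f"{prompt}\n{response}\nCritique: {critique}"


def pairwise_rm_loss(r_chosen: torch.Tensor,
                     r_rejected: torch.Tensor) -> torch.Tensor:
    # Standard Bradley-Terry preference loss: push the score of the
    # human-preferred response above that of the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()


if __name__ == "__main__":
    # Toy scalar scores for a batch of four preference pairs.
    r_c = torch.tensor([1.2, 0.8, 0.5, 2.0])
    r_r = torch.tensor([0.3, 1.0, 0.1, 0.5])
    print(pairwise_rm_loss(r_c, r_r))  # lower when chosen outscores rejected
```

In this view the critique acts as intermediate, human-readable evidence that the RM conditions on, which is consistent with the claimed gains in interpretability and robustness.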