The reward model has become increasingly important in the alignment, assessment, and data construction of large language models (LLMs). Most existing work focuses on enhancing reward models through data improvements, following the conventional training framework that directly optimizes the predicted rewards. In this paper, we propose HaF-RM, a hybrid alignment framework for reward model training that introduces an additional constraint on token-level policy probabilities alongside the reward score. The framework simultaneously supervises the internal preference model at the token level and optimizes the reward model's mapping layer at the sequence level. Theoretical justification and experimental results on five datasets demonstrate the validity and effectiveness of the proposed hybrid framework for training high-quality reward models. By decoupling the reward modeling procedure and incorporating hybrid supervision, HaF-RM offers a principled and effective approach to improving the performance and alignment of reward models, a critical component in the responsible development of powerful language models. We release our code at https://haf-rm.github.io.
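To make the hybrid objective concrete, the following is a minimal sketch of how a sequence-level pairwise reward loss can be combined with a token-level policy-probability constraint on a shared backbone. It assumes the policy constraint takes a DPO-style form against a frozen reference model; the function name `hybrid_loss` and the hyperparameters `alpha` and `beta` are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(
    r_chosen: torch.Tensor,         # [B] scalar rewards for preferred responses
    r_rejected: torch.Tensor,       # [B] scalar rewards for dispreferred responses
    logp_chosen: torch.Tensor,      # [B] summed token log-probs of preferred responses
    logp_rejected: torch.Tensor,    # [B] summed token log-probs of dispreferred responses
    ref_logp_chosen: torch.Tensor,  # [B] same, under a frozen reference model
    ref_logp_rejected: torch.Tensor,
    alpha: float = 0.5,             # mixing weight between the two terms (assumed)
    beta: float = 0.1,              # temperature of the policy term (assumed)
) -> torch.Tensor:
    # Sequence-level term: standard Bradley-Terry pairwise loss on the
    # scalar outputs of the reward head (the mapping layer).
    reward_loss = -F.logsigmoid(r_chosen - r_rejected).mean()

    # Token-level term: a DPO-style constraint on the backbone's token
    # probabilities, supervising the internal preference model.
    pi_logratio = logp_chosen - logp_rejected
    ref_logratio = ref_logp_chosen - ref_logp_rejected
    policy_loss = -F.logsigmoid(beta * (pi_logratio - ref_logratio)).mean()

    # Both terms update the shared backbone; only the reward term
    # trains the scalar mapping head.
    return reward_loss + alpha * policy_loss
```

In this reading, the two losses share gradients through the backbone while the reward head is trained only at the sequence level, which is one way the reward modeling procedure can be decoupled as the abstract describes.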