Reinforcement learning from human feedback (RLHF) provides a paradigm for aligning large language models (LLMs) with human preferences. The process first trains a reward model on pairwise human feedback; this reward model is then used in reinforcement learning to score each generated sequence as a whole, guiding the optimization of the LLM. However, current approaches have a significant shortcoming: \emph{they allocate a single, sparse, and delayed reward to an entire sequence of output}. This may overlook the significant contribution of individual tokens toward the desired outcome. To overcome this limitation, this paper proposes a novel reward redistribution method called R3HF, which enables more fine-grained, token-level reward allocation. Specifically, our method treats the reward prediction task of the reward model as a regression problem, and the redistributed rewards are computed by evaluating the specific contribution of each token to the reward model's output. This finer-grained approach improves the model's understanding of language nuances, leading to more precise improvements in performance. Our method is designed to integrate seamlessly with most existing techniques while incurring minimal computational cost. Through comprehensive experiments across diverse datasets and tasks, we verify the effectiveness and superiority of our approach.
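One common way to realize token-level reward redistribution, sketched below, is to credit each token with the change it induces in the reward model's prefix score, so that the per-token rewards sum exactly to the sequence-level reward. This is a minimal illustrative sketch, not the paper's implementation: `prefix_score` is a hypothetical stand-in for a learned reward model, and the difference-of-prefix-scores rule is one plausible instantiation of "evaluating the specific contribution of each token."

```python
def prefix_score(tokens):
    """Toy stand-in for a reward model scoring a token prefix.

    A real reward model would be a neural network over text; here we
    simply reward positive numbers to keep the sketch self-contained.
    """
    return sum(1.0 if t > 0 else -1.0 for t in tokens)

def redistribute_rewards(tokens):
    """Token-level rewards as successive differences of prefix scores.

    Because the differences telescope, the per-token rewards sum to the
    reward of the full sequence, so total return is conserved.
    """
    rewards, prev = [], 0.0
    for t in range(1, len(tokens) + 1):
        cur = prefix_score(tokens[:t])
        rewards.append(cur - prev)
        prev = cur
    return rewards

seq = [3, -1, 2]
per_token = redistribute_rewards(seq)
# The redistributed rewards conserve the sequence-level reward.
assert abs(sum(per_token) - prefix_score(seq)) < 1e-9
```

Under this scheme, a token that raises the reward model's score receives positive credit and one that lowers it receives negative credit, converting the single delayed reward into a dense per-step signal usable by standard policy-gradient methods.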