Reward models are critical for aligning models to follow instructions, and they are typically trained under one of two popular paradigms: Bradley-Terry style or Regression style. However, there is a lack of evidence that either approach is better than the other when adequately matched for data. This is primarily because the two approaches require data collected in different (and incompatible) formats, so adequately matched data is not available in existing public datasets. To tackle this problem, we release preference annotations (designed for Bradley-Terry training) to complement the existing ratings (designed for Regression-style training) in the HelpSteer2 dataset. To improve data interpretability, the preference annotations are accompanied by human-written justifications. Using this data, we conduct the first head-to-head comparison of Bradley-Terry and Regression models that is adequately matched for data. Based on insights derived from this comparison, we propose a novel approach that combines Bradley-Terry and Regression reward modeling. A Llama-3.1-70B-Instruct model tuned with this approach scores 94.1 on RewardBench, placing first among more than 140 reward models as of 1 Oct 2024. This reward model can then be used with the REINFORCE algorithm (RLHF) to align an Instruct model to reach 85.0 on Arena Hard, which likewise ranks No. 1 as of 1 Oct 2024. We open-source this dataset (CC-BY-4.0 license) at https://huggingface.co/datasets/nvidia/HelpSteer2#preferences-new-1-oct-2024 and openly release the trained Reward and Instruct models at https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward and https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct.
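To make the distinction between the two paradigms concrete, the sketch below contrasts their per-example training objectives. This is an illustrative sketch only, not the paper's implementation: Bradley-Terry training maximizes the likelihood that the chosen response outscores the rejected one, while Regression-style training fits the reward head to a scalar human rating. All function and variable names here are hypothetical.

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of the preference under the Bradley-Terry model,
    where P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def regression_loss(predicted_rating: float, human_rating: float) -> float:
    """Squared error against a scalar human rating (e.g. a helpfulness
    score such as those annotated in HelpSteer2)."""
    return (predicted_rating - human_rating) ** 2

# Bradley-Terry needs paired (chosen, rejected) responses; Regression needs
# per-response ratings -- hence the incompatible data formats noted above.
```

Note that a larger reward margin between chosen and rejected responses drives the Bradley-Terry loss toward zero, whereas the Regression loss is minimized when the predicted score matches the annotated rating exactly.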