Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences. The RLHF process typically starts by training a reward model (RM) on human preference data. Conventional RMs are trained on pairs of responses to the same user request, with relative ratings indicating which response humans prefer. The trained RM then serves as a proxy for human preferences. However, due to the black-box nature of RMs, their outputs lack interpretability: humans cannot intuitively understand why an RM judges a response as good or bad. Since RMs act as proxies for human preferences, we believe they should be human-interpretable, both to ensure that their internal decision processes are consistent with human preferences and to prevent reward hacking in LLM alignment. To build RMs with interpretable preferences, we propose a two-stage approach: i) train an Absolute-Rating Multi-Objective Reward Model (ArmoRM) on multi-dimensional absolute-rating data, with each dimension corresponding to a human-interpretable objective (e.g., honesty, verbosity, safety); ii) employ a Mixture-of-Experts (MoE) strategy with a gating network that automatically selects the most suitable reward objectives based on the context. We efficiently trained an ArmoRM with Llama-3 8B as the backbone, together with a gating network consisting of a shallow MLP on top of the ArmoRM. The resulting model, ArmoRM-Llama3-8B, achieves state-of-the-art performance on RewardBench, a benchmark for evaluating RMs for language modeling. Notably, it surpasses the LLM-as-a-judge method with GPT-4 judges by a clear margin, and approaches the performance of the much larger Nemotron-4 340B reward model.
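The two-stage design above can be sketched in a few lines: a regression head maps response features to K interpretable objective ratings, and a shallow gating MLP maps the prompt context to a softmax over those objectives, whose weighted sum yields the final scalar reward. This is a minimal NumPy sketch with random weights and illustrative dimensions; the function names, layer sizes, and feature vectors are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 16, 3  # hidden size, number of reward objectives (e.g., honesty, verbosity, safety)

# Stage 1 (illustrative): linear regression head mapping a response's
# hidden-state features to K absolute ratings, one per interpretable objective.
W_reg = rng.normal(size=(K, D))

# Stage 2 (illustrative): shallow MLP gating network over the prompt's
# features, producing non-negative weights over the K objectives.
W1 = rng.normal(size=(32, D))
W2 = rng.normal(size=(K, 32))

def softmax(x):
    # Numerically stable softmax: weights are non-negative and sum to 1.
    e = np.exp(x - x.max())
    return e / e.sum()

def armo_reward(prompt_feat, response_feat):
    # Per-objective interpretable ratings for the response.
    objective_rewards = W_reg @ response_feat            # shape (K,)
    # Context-dependent gating weights computed from the prompt alone.
    gate = softmax(W2 @ np.tanh(W1 @ prompt_feat))       # shape (K,)
    # Final scalar reward = gate-weighted sum of objective ratings.
    return float(gate @ objective_rewards)

prompt_feat = rng.normal(size=D)
response_feat = rng.normal(size=D)
reward = armo_reward(prompt_feat, response_feat)
```

Because the gate is a softmax over named objectives, the weights themselves are inspectable: for a given prompt one can read off which objectives (e.g., safety vs. verbosity) dominate the final reward, which is the interpretability property the abstract argues for.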