Effectively aligning Large Language Models (LLMs) with human-centric values while preventing the degradation of abilities acquired through pre-training and Supervised Fine-Tuning (SFT) poses a central challenge in Reinforcement Learning from Human Feedback (RLHF). In this paper, we first discover that interpolating the parameters of RLHF and SFT models adjusts the trade-off between human preference and basic capabilities, reducing the alignment tax at the cost of alignment reward. Inspired by this, we propose the Online Merging Optimizer, which merges the RL policy and SFT models at each optimization step of RLHF to continuously regulate the training direction. Specifically, it merges the gradient update with the parameter difference between the SFT and pre-trained models, steering the gradient towards reward maximization along the direction of SFT optimization. We demonstrate that our optimizer works well across different LLM families such as Qwen and LLaMA, model sizes ranging from 1.8B to 8B, RLHF algorithms such as DPO and KTO, and existing model merging methods. It significantly boosts alignment reward while mitigating alignment tax, achieving higher overall performance across 14 benchmarks.
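To make the per-step merging idea concrete, below is a minimal PyTorch sketch of one update that merges an RLHF gradient step with the SFT delta parameters. This is an illustrative sign-consensus (TIES-style) variant, not the paper's exact optimizer; the function name `online_merging_step` and the hyperparameters `lr` and `merge_weight` are assumptions introduced for this example.

```python
import torch

@torch.no_grad()
def online_merging_step(param, grad, delta_sft, lr=1e-6, merge_weight=0.5):
    """Hedged sketch of one online-merging update.

    param:     current RLHF policy parameter tensor (updated in place)
    grad:      its gradient from the RLHF loss (e.g., DPO or KTO)
    delta_sft: theta_SFT - theta_pretrained for this tensor
    The raw RLHF update (-lr * grad) is merged with the SFT delta
    direction by keeping only the components whose sign agrees with
    delta_sft (a TIES-style sign consensus); merge_weight is a
    hypothetical coefficient interpolating raw and filtered updates.
    """
    update = -lr * grad
    # keep only update components pointing along the SFT delta direction
    agree = torch.sign(update) == torch.sign(delta_sft)
    merged = torch.where(agree, update, torch.zeros_like(update))
    # interpolate between the sign-filtered and the raw update
    param += merge_weight * merged + (1.0 - merge_weight) * update

# toy usage on a single weight tensor
theta_pre = torch.randn(4, 4)                      # stand-in for pre-trained weights
theta_sft = theta_pre + 0.1 * torch.randn(4, 4)    # stand-in for SFT weights
policy = theta_sft.clone()                         # RL policy initialized from SFT
fake_grad = torch.randn(4, 4)                      # stand-in for an RLHF gradient
online_merging_step(policy, fake_grad, theta_sft - theta_pre)
```

The sketch shows only the core intuition the abstract names: each optimization step is biased so that reward maximization stays aligned with the SFT optimization direction, rather than merging models once after training.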