Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. Because high-quality human preference annotations are difficult to obtain, distilling preferences from generative LLMs has emerged as a standard practice. However, existing approaches predominantly treat teacher models as simple binary annotators, failing to fully exploit their rich knowledge and capabilities for RM distillation. To address this, we propose RM-Distiller, a framework designed to systematically exploit the multifaceted capabilities of teacher LLMs: (1) Refinement capability, which synthesizes highly correlated response pairs to create fine-grained, contrastive signals. (2) Scoring capability, which guides the RM to capture precise preference strength via a margin-aware optimization objective. (3) Generation capability, which incorporates the teacher's generative distribution to regularize the RM, preserving its fundamental linguistic knowledge. Extensive experiments demonstrate that RM-Distiller significantly outperforms traditional distillation methods on both RM benchmarks and reinforcement-learning-based alignment, showing that exploiting multifaceted teacher capabilities is critical for effective reward modeling. To the best of our knowledge, this is the first systematic study of RM distillation from generative LLMs.
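To make the second and third ingredients concrete, the sketch below illustrates one plausible reading of the abstract: a margin-aware Bradley-Terry loss whose margin is set by the teacher's score gap, plus a KL term toward the teacher's next-token distribution. All function names, the 0.1 weighting, and the toy tensors are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (assumed PyTorch), not RM-Distiller's released code.
import torch
import torch.nn.functional as F

def margin_aware_rm_loss(r_chosen, r_rejected, teacher_margin):
    """Bradley-Terry loss where the teacher's score gap acts as a soft margin:
    the RM is penalized unless r_chosen exceeds r_rejected by at least the margin."""
    return -F.logsigmoid(r_chosen - r_rejected - teacher_margin).mean()

def generative_kl_regularizer(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student) over next-token distributions, one possible way to
    keep the RM backbone close to the teacher's generative distribution."""
    t = F.log_softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean")

# Toy usage with random tensors standing in for model outputs.
r_c = torch.randn(8, requires_grad=True)          # rewards for chosen responses
r_j = torch.randn(8)                              # rewards for rejected responses
margin = torch.rand(8)                            # e.g. scaled teacher score gap
logits_s = torch.randn(8, 32, 100, requires_grad=True)   # student LM logits
logits_t = torch.randn(8, 32, 100)                        # teacher LM logits
loss = margin_aware_rm_loss(r_c, r_j, margin) \
       + 0.1 * generative_kl_regularizer(logits_s, logits_t)
loss.backward()
```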