Standard reward models typically predict a single scalar score, which fails to capture the multifaceted nature of response quality in non-verifiable domains such as creative writing or open-ended instruction following. To address this limitation, we propose Rubric-ARM, a framework that jointly optimizes a rubric generator and a judge using reinforcement learning from preference feedback. Unlike existing methods that rely on static rubrics or disjoint training pipelines, our approach treats rubric generation as a latent action learned to maximize judgment accuracy. We introduce an alternating optimization strategy to mitigate the non-stationarity of simultaneous updates, and provide a theoretical analysis showing that this schedule reduces gradient variance during training. Extensive experiments show that Rubric-ARM achieves the best performance among compared baselines on multiple benchmarks and significantly improves downstream policy alignment in both offline and online reinforcement learning settings.
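The alternating schedule can be illustrated with a minimal sketch, assuming a PyTorch-style setup with toy stand-in modules. The names (RubricGenerator, Judge, DIM, the random preference batch) are illustrative placeholders rather than the paper's implementation, and for simplicity the rubric here is a continuous vector updated by backpropagating through the judge instead of by the reinforcement-learning objective described above; only the freeze/update alternation itself reflects the strategy in the abstract.

```python
# Hypothetical sketch of alternating rubric-generator / judge updates.
# All module and variable names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 32  # toy feature dimension standing in for prompt/response embeddings

class RubricGenerator(nn.Module):
    """Maps a prompt embedding to a latent rubric vector (the 'latent action')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh(), nn.Linear(DIM, DIM))
    def forward(self, prompt):
        return self.net(prompt)

class Judge(nn.Module):
    """Scores a response conditioned on the generated rubric."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))
    def forward(self, rubric, response):
        return self.net(torch.cat([rubric, response], dim=-1)).squeeze(-1)

def preference_loss(judge, rubric, chosen, rejected):
    # Bradley-Terry-style preference objective: the judge should rank the
    # preferred response above the rejected one under the current rubric.
    margin = judge(rubric, chosen) - judge(rubric, rejected)
    return -F.logsigmoid(margin).mean()

generator, judge = RubricGenerator(), Judge()
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_judge = torch.optim.Adam(judge.parameters(), lr=1e-3)

for round_idx in range(10):  # alternating schedule
    # Toy preference batch: (prompt, chosen response, rejected response).
    prompt = torch.randn(16, DIM)
    chosen, rejected = torch.randn(16, DIM), torch.randn(16, DIM)

    if round_idx % 2 == 0:
        # Phase A: hold the rubric generator fixed and update only the judge,
        # so the judge trains against a stationary rubric distribution.
        rubric = generator(prompt).detach()
        loss = preference_loss(judge, rubric, chosen, rejected)
        opt_judge.zero_grad(); loss.backward(); opt_judge.step()
    else:
        # Phase B: hold the judge fixed and update only the rubric generator,
        # treating the rubric as the action that maximizes judgment accuracy.
        for p in judge.parameters():
            p.requires_grad_(False)
        rubric = generator(prompt)
        loss = preference_loss(judge, rubric, chosen, rejected)
        opt_gen.zero_grad(); loss.backward(); opt_gen.step()
        for p in judge.parameters():
            p.requires_grad_(True)
```

Decoupling the two updates in this way is what keeps each phase's learning target fixed during that phase, which is the intuition behind the reduced gradient variance claimed in the analysis.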