Reward models (RMs) play a crucial role in aligning large language models (LLMs) with human preferences and enhancing reasoning quality. Traditionally, RMs are trained to rank candidate outputs based on their correctness and coherence. However, in this work, we present several surprising findings that challenge common assumptions about RM behavior. Our analysis reveals that state-of-the-art reward models prioritize structural consistency over causal correctness. Specifically, removing the problem statement has minimal impact on reward scores, whereas altering numerical values or disrupting the reasoning flow significantly affects RM outputs. Furthermore, RMs exhibit a strong dependence on complete reasoning trajectories: truncated or incomplete steps lead to significant variations in reward assignments, indicating that RMs primarily rely on learned reasoning patterns rather than explicit problem comprehension. These findings hold across multiple architectures, datasets, and tasks, leading to three key insights: (1) RMs primarily assess coherence rather than true reasoning quality; (2) the role of explicit problem comprehension in reward assignment is overstated; (3) current RMs may be more effective at ranking responses than verifying logical validity. Our results suggest a fundamental limitation in existing reward modeling approaches, emphasizing the need for a shift toward causality-aware reward models that go beyond consistency-driven evaluation.
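The perturbation probes described above (removing the problem statement, corrupting numerical values, truncating reasoning steps) can be sketched as simple input transformations whose effect on an RM's score is then compared. This is an illustrative sketch only: the function names and the toy `score` proxy are assumptions, not the paper's actual implementation, and a real study would call a trained reward model here.

```python
def remove_problem(prompt: str, response: str) -> tuple[str, str]:
    """Probe 1: drop the problem statement entirely, keep the response."""
    return "", response

def alter_numbers(response: str) -> str:
    """Probe 2: corrupt every numerical value (here, replace digits with 9)."""
    return "".join("9" if ch.isdigit() else ch for ch in response)

def truncate_steps(response: str, keep: int) -> str:
    """Probe 3: keep only the first `keep` reasoning steps (one per line)."""
    return "\n".join(response.split("\n")[:keep])

def score(prompt: str, response: str) -> float:
    """Toy stand-in for a reward model; a real probe would query a trained RM."""
    return float(len(response.split()))  # placeholder, not a real reward

# Compare the original score against each perturbed variant;
# large gaps would indicate sensitivity to that perturbation.
prompt = "What is 2 + 2?"
response = "Step 1: add 2 and 2.\nStep 2: the sum is 4.\nAnswer: 4"
baseline = score(prompt, response)
deltas = {
    "no_problem": baseline - score(*remove_problem(prompt, response)),
    "altered_numbers": baseline - score(prompt, alter_numbers(response)),
    "truncated": baseline - score(prompt, truncate_steps(response, keep=1)),
}
```

Under the paper's findings, a consistency-driven RM would show a small `no_problem` delta but large `altered_numbers` and `truncated` deltas.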