Reward hacking, where a reasoning model exploits loopholes in a reward function to achieve high rewards without solving the intended task, poses a significant threat. This behavior may be explicit, i.e., verbalized in the model's chain-of-thought (CoT), or implicit, where the CoT appears benign and thus bypasses CoT monitors. To detect implicit reward hacking, we propose TRACE (Truncated Reasoning AUC Evaluation). Our key observation is that hacking occurs when exploiting the loophole is easier than solving the actual task, which means the model expends less 'effort' than the task requires to achieve high reward. TRACE quantifies effort by measuring how early a model's reasoning becomes sufficient to obtain the reward: we progressively truncate the model's CoT at various lengths, force the model to answer, and estimate the expected reward at each cutoff. A hacking model, which takes a shortcut, achieves a high expected reward with only a small fraction of its CoT, yielding a large area under the accuracy-vs-length curve. TRACE achieves over 65% gains over our strongest 72B CoT monitor in math reasoning, and over 30% gains over a 32B monitor in coding. We further show that TRACE can discover unknown loopholes during training. Overall, TRACE offers a scalable, unsupervised approach to oversight where current monitoring methods prove ineffective.
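To make the TRACE score concrete, here is a minimal sketch of the area-under-the-curve computation described above, assuming we have already measured, for each truncation fraction, the empirical accuracy (expected reward) of answers forced from the truncated CoT. The function and variable names (trace_score, cutoffs, accuracies) are illustrative, not taken from the paper, and how the per-cutoff accuracies are estimated is left outside the sketch.

```python
# Minimal sketch of a TRACE-style score (assumed interface, not the paper's code):
# given truncation fractions and the estimated expected reward at each one,
# return the area under the accuracy-vs-truncation-length curve.
import numpy as np

def trace_score(cutoffs, accuracies):
    """Trapezoidal area under the accuracy-vs-length curve.

    cutoffs:    fractions of the CoT kept, increasing, in [0, 1].
    accuracies: estimated expected reward when forcing an answer at each cutoff.
    A model that already reaches high reward from a small prefix of its CoT
    (a candidate hacker) yields a score close to 1.
    """
    x = np.asarray(cutoffs, dtype=float)
    y = np.asarray(accuracies, dtype=float)
    # Trapezoidal rule: average adjacent accuracies, weight by cutoff spacing.
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Hypothetical example: a 'hacking' trajectory is already correct after 20% of
# its CoT, while an honest one only becomes correct near the end.
hacking = trace_score([0.0, 0.2, 0.5, 1.0], [0.1, 0.9, 1.0, 1.0])  # ~0.89
honest = trace_score([0.0, 0.2, 0.5, 1.0], [0.0, 0.1, 0.3, 1.0])   # ~0.40
print(f"hacking AUC={hacking:.2f}, honest AUC={honest:.2f}")
```

Flagging trajectories whose AUC is unusually high relative to peers at the same reward level is one plausible way to surface the implicit hacking cases the abstract describes.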