Large language models (LLMs) increasingly rely on thinking models that externalize intermediate steps and allocate extra test-time compute, with think-twice strategies showing that a deliberate second pass can elicit stronger reasoning. In contrast, most reward models (RMs) still compress many quality dimensions into a single scalar in one shot, a design that induces judgment diffusion: attention spreads across evaluation criteria, yielding diluted focus and shallow analysis. We introduce branch-and-rethink (BR-RM), a two-turn RM that transfers the think-twice principle to reward modeling. Turn 1 performs adaptive branching, selecting a small set of instance-critical dimensions (such as factuality and safety) and sketching concise, evidence-seeking hypotheses. Turn 2 executes branch-conditioned rethinking, a targeted reread that tests those hypotheses and scrutinizes only what matters most. We train with GRPO-style reinforcement learning over structured two-turn traces using a simple binary outcome reward with strict format checks, making the approach compatible with standard RLHF pipelines. By converting all-at-once scoring into focused, second-look reasoning, BR-RM reduces judgment diffusion and improves sensitivity to subtle yet consequential errors while remaining practical and scalable. Experimental results demonstrate that our model achieves state-of-the-art performance on three challenging reward modeling benchmarks across diverse domains.
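The two-turn scheme described above can be sketched in code. Everything below is an illustrative assumption, not the paper's implementation: the prompt templates, the `<branch>`/`<hypotheses>`/`<verdict>` tag format, and the `model` callable (any prompt-to-text function) are hypothetical stand-ins. The sketch shows the shape of the pipeline: Turn 1 branches into instance-critical dimensions with hypotheses, Turn 2 rethinks conditioned on those branches, and a strict format check gates a simple binary outcome reward.

```python
import re

# Hypothetical prompt templates for a BR-RM-style two-turn judge.
# Tag names and wording are illustrative assumptions.
TURN1_TEMPLATE = (
    "Select the few quality dimensions most critical for this instance "
    "and sketch concise, evidence-seeking hypotheses.\n"
    "Conversation:\n{conversation}\n"
    "Answer as: <branch>dim1, dim2</branch><hypotheses>...</hypotheses>"
)

TURN2_TEMPLATE = (
    "Reread the candidate responses, testing only the hypotheses below.\n"
    "Branches: {branches}\nHypotheses: {hypotheses}\n"
    "Conversation:\n{conversation}\n"
    "Answer as: <verdict>A</verdict> or <verdict>B</verdict>"
)

def two_turn_judgment(model, conversation):
    """Run adaptive branching (turn 1), then branch-conditioned
    rethinking (turn 2). Returns 'A'/'B', or None on a format violation."""
    turn1 = model(TURN1_TEMPLATE.format(conversation=conversation))
    branches = re.search(r"<branch>(.*?)</branch>", turn1, re.S)
    hypotheses = re.search(r"<hypotheses>(.*?)</hypotheses>", turn1, re.S)
    if not (branches and hypotheses):
        return None  # strict format check failed: trace earns no reward
    turn2 = model(TURN2_TEMPLATE.format(
        branches=branches.group(1),
        hypotheses=hypotheses.group(1),
        conversation=conversation))
    verdict = re.search(r"<verdict>([AB])</verdict>", turn2)
    return verdict.group(1) if verdict else None

def outcome_reward(verdict, preferred):
    """Binary outcome reward over the structured two-turn trace:
    1.0 only if the format was valid and the verdict is correct."""
    return 1.0 if verdict == preferred else 0.0
```

In a GRPO-style loop, several such traces would be sampled per prompt and `outcome_reward` would supply the scalar used to compute group-relative advantages; the format gate keeps malformed traces from being rewarded.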