Web agents hold great potential for automating complex computer tasks, yet their interactions involve long-horizon, sequential decision-making with irreversible actions. In such settings, outcome-based supervision is sparse and delayed, often rewarding incorrect trajectories and failing to support inference-time scaling. This motivates Process Reward Models for web navigation (WebPRMs), but existing approaches remain limited: scalar WebPRMs collapse progress into coarse, weakly grounded signals, while checklist-based WebPRMs rely on brittle template matching that fails under layout or semantic changes, often mislabels superficially correct actions as successful, and offers little interpretability. To address these challenges, we introduce WebArbiter, a reasoning-first, principle-inducing WebPRM that formulates reward modeling as text generation, producing structured justifications that conclude with a preference verdict and identify the action most conducive to task completion in the current context. Training follows a two-stage pipeline: reasoning distillation equips the model with coherent principle-guided reasoning, and reinforcement learning corrects teacher biases by directly aligning verdicts with correctness, enabling stronger generalization. To support systematic evaluation, we release WebPRMBench, a comprehensive benchmark spanning four diverse web environments with rich tasks and high-quality preference annotations. On WebPRMBench, WebArbiter-7B outperforms the strongest baseline, GPT-5, by 9.1 points. In reward-guided trajectory search on WebArena-Lite, it surpasses the best prior WebPRM by up to 7.2 points, underscoring its robustness and practical value on complex real-world web tasks.
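To make the reward-modeling-as-text-generation formulation concrete, the following is a minimal sketch of how a generative, verdict-emitting WebPRM could drive reward-guided action selection. It is not the authors' implementation: the prompt shape, the `Verdict: <index>` convention, and the names `PROMPT_TEMPLATE`, `select_action`, and `generate` are all illustrative assumptions.

```python
import re
from typing import Callable, List

# Hypothetical prompt shape: the generative PRM reasons over candidate
# actions and ends its justification with a line like "Verdict: 2".
PROMPT_TEMPLATE = """Task: {task}
Current page context:
{context}
Candidate actions:
{candidates}

Reason step by step about which candidate best advances the task,
then end with a line of the form "Verdict: <index>"."""

VERDICT_RE = re.compile(r"Verdict:\s*(\d+)", re.IGNORECASE)


def select_action(
    task: str,
    context: str,
    candidates: List[str],
    generate: Callable[[str], str],  # any text-generation backend (e.g. a 7B judge)
) -> int:
    """Return the 1-based index of the candidate the PRM prefers.

    Falls back to the first candidate when no verdict can be parsed
    from the generated justification.
    """
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates, start=1))
    prompt = PROMPT_TEMPLATE.format(task=task, context=context, candidates=listing)
    justification = generate(prompt)
    match = VERDICT_RE.search(justification)
    if match:
        idx = int(match.group(1))
        if 1 <= idx <= len(candidates):
            return idx
    return 1  # conservative fallback when the verdict is missing or malformed


if __name__ == "__main__":
    # Toy usage with a stubbed generator that always prefers candidate 2.
    stub = lambda prompt: "The second action opens the orders page...\nVerdict: 2"
    best = select_action(
        task="Find my most recent order",
        context="<nav> Home | Orders | Account </nav>",
        candidates=["click('Home')", "click('Orders')", "click('Account')"],
        generate=stub,
    )
    print(best)  # -> 2
```

In a trajectory-search loop, such a selector would be called at each step over the agent's sampled candidate actions; the structured justification is retained as an interpretable record of why each action was preferred.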