LLM-based web agents have recently made significant progress, but much of this progress has occurred in closed-source systems, widening the gap with open-source alternatives. Progress has been held back by two key challenges: first, a narrow focus on single-step tasks that overlooks the complexity of multi-step web interactions; and second, the high compute costs required to post-train LLM-based web agents. To address these challenges, we present the first statistically grounded study of compute allocation for LLM web-agent post-training. Our approach uses a two-stage pipeline: a Llama 3.1 8B student is trained to imitate a Llama 3.3 70B teacher via supervised fine-tuning (SFT) and then refined with on-policy reinforcement learning (RL). We find this process highly sensitive to hyperparameter choices, making exhaustive sweeps impractical. To spare others from expensive trial-and-error, we sample 1,370 configurations and use bootstrapping to estimate effective hyperparameters. Our results show that combining SFT with on-policy RL consistently outperforms either approach alone on both WorkArena and MiniWob++. Moreover, this combined strategy requires only 55% of the compute to match the peak performance of pure SFT on MiniWob++, effectively pushing the compute-performance Pareto frontier, and it is the only strategy that closes the gap with closed-source models.
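To illustrate the kind of bootstrap analysis mentioned above, here is a minimal sketch, not the paper's exact procedure: given observed (configuration, success-rate) pairs, it bootstraps the distribution of the best score one would expect when sampling a budget of k random configurations. The synthetic `scores` array, the `k=20` budget, and the best-of-k statistic are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): bootstrap the distribution of the
# best success rate found with a budget of k randomly sampled configurations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for observed success rates of 1,370 sampled configurations.
scores = rng.beta(2, 5, size=1370)

def best_of_k(scores: np.ndarray, k: int, rng: np.random.Generator) -> float:
    """Score of the best configuration among k drawn uniformly without replacement."""
    return rng.choice(scores, size=k, replace=False).max()

def bootstrap_best_of_k(scores: np.ndarray, k: int,
                        n_boot: int = 2000, alpha: float = 0.05):
    """Bootstrap the best-of-k statistic and report its mean and quantile interval."""
    estimates = []
    for _ in range(n_boot):
        # Resample the observed runs with replacement, then ask: if the budget
        # allowed only k configurations, how good would the best one be?
        resample = rng.choice(scores, size=scores.size, replace=True)
        estimates.append(best_of_k(resample, k, rng))
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(estimates)), (float(lo), float(hi))

mean, (lo, hi) = bootstrap_best_of_k(scores, k=20)
print(f"expected best-of-20 success rate: {mean:.3f}  95% interval [{lo:.3f}, {hi:.3f}]")
```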