Reinforcement Learning (RL) has the potential to improve the robustness of GUI agents in stochastic environments, yet training is highly sensitive to the quality of the reward function. Existing reward approaches struggle to achieve both scalability and performance. To address this, we propose OS-Themis, a scalable and accurate multi-agent critic framework. Unlike single-judge methods, OS-Themis decomposes trajectories into verifiable milestones to isolate the critical evidence for decision making, and employs a review mechanism that strictly audits the evidence chain before rendering the final verdict. To facilitate evaluation, we further introduce OmniGUIRewardBench (OGRBench), a holistic cross-platform benchmark for GUI outcome rewards, on which all evaluated models achieve their best performance when deployed within OS-Themis. Extensive experiments on AndroidWorld show that OS-Themis yields a 10.3% improvement when used to support online RL training, and a 6.9% gain when used for trajectory validation and filtering in the self-training loop, highlighting its potential to drive agent evolution.
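To make the pipeline concrete, the sketch below illustrates the two-stage critic structure the abstract describes: trajectories are decomposed into verifiable milestones, a reviewer audits the evidence chain, and only then is a binary outcome reward issued. This is a minimal illustration, not the paper's implementation; the names (`Milestone`, `decompose`, `review`, `outcome_reward`) are hypothetical, and the real system would back `decompose` and `review` with LLM-based critic agents rather than the trivial stubs used here.

```python
from dataclasses import dataclass

@dataclass
class Milestone:
    description: str   # verifiable sub-goal extracted from the task
    evidence: str      # observation (e.g., screenshot/UI state) cited as proof
    satisfied: bool    # milestone-level critic's verdict

def decompose(task: str, trajectory: list[str]) -> list[Milestone]:
    """Split a trajectory into verifiable milestones.
    In the full framework this would be an LLM critic; stubbed here
    by treating each step as its own milestone with self-evidence."""
    return [Milestone(description=step, evidence=step, satisfied=True)
            for step in trajectory]

def review(milestones: list[Milestone]) -> bool:
    """Reviewer agent: audit the evidence chain, rejecting the trajectory
    if any milestone is unsatisfied or lacks supporting evidence."""
    return all(m.satisfied and m.evidence for m in milestones)

def outcome_reward(task: str, trajectory: list[str]) -> float:
    """Final verdict: reward 1.0 only if the audited evidence chain passes."""
    milestones = decompose(task, trajectory)
    return 1.0 if review(milestones) else 0.0

if __name__ == "__main__":
    traj = ["open Settings", "tap Wi-Fi", "toggle Wi-Fi on"]
    print(outcome_reward("Turn on Wi-Fi", traj))  # -> 1.0
```

The same reward signal can then either drive online RL directly or gate which trajectories are kept in a self-training loop, matching the two usage modes reported above.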