Graphical User Interface (GUI) agents hold great potential for automating complex tasks across diverse digital environments, from web applications to desktop software. However, the development of such agents is hindered by the lack of high-quality, multi-step trajectory data required for effective training. Existing approaches rely on expensive and labor-intensive human annotation, making them unsustainable at scale. To address this challenge, we propose AgentTrek, a scalable data synthesis pipeline that generates high-quality GUI agent trajectories by leveraging web tutorials. Our method automatically gathers tutorial-like texts from the internet, transforms them into task goals with step-by-step instructions, and employs a visual-language model (VLM) agent to simulate their execution in a real digital environment. A VLM-based evaluator ensures the correctness of the generated trajectories. We demonstrate that training GUI agents with these synthesized trajectories significantly improves their grounding and planning performance over current models. Moreover, our approach is more cost-efficient than traditional human annotation methods. This work underscores the potential of guided replay with web tutorials as a viable strategy for large-scale GUI agent training, paving the way for more capable and autonomous digital agents.
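The pipeline summarized above (gather tutorials, parse them into task goals with step-by-step instructions, replay them with an agent, then filter with an evaluator) can be sketched as follows. All function names and the stubbed parsing, replay, and evaluation logic here are illustrative assumptions standing in for the LLM/VLM components, not the actual AgentTrek implementation.

```python
from dataclasses import dataclass, field


@dataclass
class TaskSpec:
    """A task goal plus step-by-step instructions parsed from a tutorial."""
    goal: str
    steps: list


@dataclass
class Trajectory:
    """A recorded action sequence for one task, with a verification flag."""
    goal: str
    actions: list = field(default_factory=list)
    verified: bool = False


def parse_tutorial(text: str) -> TaskSpec:
    # Hypothetical stand-in for the model that turns tutorial-like text
    # into a task goal with step-by-step instructions: here we simply
    # treat the first non-empty line as the goal and the rest as steps.
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return TaskSpec(goal=lines[0], steps=lines[1:])


def replay(spec: TaskSpec) -> Trajectory:
    # Stand-in for the VLM agent executing each instruction in a real
    # digital environment; here each step just becomes a recorded action.
    traj = Trajectory(goal=spec.goal)
    for step in spec.steps:
        traj.actions.append(f"execute: {step}")
    return traj


def evaluate(traj: Trajectory) -> bool:
    # Stand-in for the VLM-based evaluator that checks trajectory
    # correctness; here we only keep non-empty action sequences.
    return len(traj.actions) > 0


def synthesize(tutorials: list) -> list:
    # End-to-end loop: parse, replay, verify, and keep passing trajectories.
    kept = []
    for text in tutorials:
        traj = replay(parse_tutorial(text))
        traj.verified = evaluate(traj)
        if traj.verified:
            kept.append(traj)
    return kept
```

In the real pipeline each stub would be replaced by a model call and a live browser or desktop environment; the filtering step is what makes the synthesized trajectories usable as training data without human annotation.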