Graphical User Interface (GUI) agents are designed to automate complex tasks on digital devices, such as smartphones and desktops. Most existing GUI agents interact with the environment through extracted structured data, which can be notably lengthy (e.g., HTML) and occasionally inaccessible (e.g., on desktops). To alleviate this issue, we propose a novel visual GUI agent -- SeeClick, which only relies on screenshots for task automation. In our preliminary study, we have discovered a key challenge in developing visual GUI agents: GUI grounding -- the capacity to accurately locate screen elements based on instructions. To tackle this challenge, we propose to enhance SeeClick with GUI grounding pre-training and devise a method to automate the curation of GUI grounding data. Along with the efforts above, we have also created ScreenSpot, the first realistic GUI grounding benchmark that encompasses mobile, desktop, and web environments. After pre-training, SeeClick demonstrates significant improvement on ScreenSpot over various baselines. Moreover, comprehensive evaluations on three widely used benchmarks consistently support our finding that advancements in GUI grounding directly correlate with enhanced performance in downstream GUI agent tasks. The model, data, and code are available at https://github.com/njucckevin/SeeClick.