Existing efforts to build GUI agents rely heavily on the availability of robust commercial Vision-Language Models (VLMs) such as GPT-4o and GeminiProVision. Practitioners are often reluctant to use open-source VLMs because they lag significantly behind their closed-source counterparts, particularly in GUI grounding and Out-Of-Distribution (OOD) scenarios. To facilitate future research in this area, we developed OS-Atlas, a foundational GUI action model that excels at GUI grounding and OOD agentic tasks through innovations in both data and modeling. We invested substantial engineering effort in developing an open-source toolkit for synthesizing GUI grounding data across multiple platforms, including Windows, Linux, MacOS, Android, and the web. Leveraging this toolkit, we are releasing the largest open-source cross-platform GUI grounding corpus to date, containing over 13 million GUI elements. This dataset, combined with innovations in model training, gives OS-Atlas a solid foundation for understanding GUI screenshots and generalizing to unseen interfaces. Extensive evaluation across six benchmarks spanning three platforms (mobile, desktop, and web) shows that OS-Atlas significantly outperforms previous state-of-the-art models. Our evaluation also uncovers valuable insights for continuously improving and scaling the agentic capabilities of open-source VLMs.
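To make "GUI grounding data" concrete, the sketch below shows one plausible shape such a corpus record could take: a natural-language referring expression paired with the normalized screen region it denotes, derived from an accessibility-tree node. The schema, field names, and the `from_a11y_node` helper are illustrative assumptions for exposition, not the actual OS-Atlas data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GroundingSample:
    """One GUI grounding example: a referring expression paired with the
    screen region it denotes. Hypothetical schema, not the OS-Atlas format."""
    platform: str                             # e.g. "windows", "android", "web"
    screenshot: str                           # path to the full-screen capture
    instruction: str                          # referring expression, e.g. "the Save button"
    bbox: Tuple[float, float, float, float]   # (x1, y1, x2, y2), normalized to [0, 1]

def from_a11y_node(platform: str, screenshot: str, node: dict,
                   screen_w: int, screen_h: int) -> GroundingSample:
    """Turn one accessibility-tree node into a grounding sample by pairing
    its visible label with its normalized on-screen rectangle."""
    x, y, w, h = node["bounds"]  # pixel-space rectangle reported by the a11y tree
    return GroundingSample(
        platform=platform,
        screenshot=screenshot,
        instruction=node["label"],
        bbox=(x / screen_w, y / screen_h, (x + w) / screen_w, (y + h) / screen_h),
    )

# Example: a "Save" button from a hypothetical Windows accessibility dump.
sample = from_a11y_node(
    "windows", "shots/0001.png",
    {"label": "Save", "bounds": (104, 22, 64, 28)},
    screen_w=1920, screen_h=1080,
)
print(sample.bbox)  # -> approximately (0.054, 0.020, 0.0875, 0.046)
```

Normalizing coordinates to [0, 1], as sketched here, is one common way to keep grounding targets comparable across the very different screen resolutions of mobile, desktop, and web platforms.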