Modern computer-use agents (CUAs) must perceive a screen as a structured state (what elements are visible, where they are, and what text they contain) before they can reliably ground instructions and act. Yet most available grounding datasets provide sparse supervision: insufficient, low-diversity labels that annotate only a small subset of task-relevant elements per screen, which limits both coverage and generalization; moreover, practical deployment requires efficiency for low-latency, on-device use. We introduce ScreenParse, a large-scale dataset for complete screen parsing with dense annotations of all visible UI elements (boxes, 55-class types, and text) across 771K web screenshots (21M elements). ScreenParse is generated by Webshot, an automated, scalable pipeline that renders diverse URLs, extracts annotations, and applies VLM-based relabeling and quality filtering. Using ScreenParse, we train ScreenVLM, a compact 316M-parameter vision-language model (VLM) that decodes a compact ScreenTag markup representation using a structure-aware loss that upweights structure-critical tokens. ScreenVLM substantially outperforms much larger foundation VLMs on dense parsing (e.g., 0.592 vs. 0.294 PageIoU on ScreenParse) and transfers strongly to public benchmarks. Moreover, fine-tuning foundation VLMs on ScreenParse consistently improves their grounding performance, suggesting that dense screen supervision provides transferable structural priors for UI understanding. Project page: https://saidgurbuz.github.io/screenparse/.