Computer-Use Agents (CUAs) are emerging as a new paradigm in human-computer interaction, enabling autonomous execution of tasks in desktop environments from high-level natural-language instructions. As such agents become increasingly capable and are deployed across diverse desktop environments, evaluating their behavior in a scalable and reliable manner becomes a critical challenge. Existing evaluation pipelines rely on static benchmarks, rule-based success checks, or manual inspection, which are brittle, costly, and poorly aligned with real-world usage. In this work, we study Vision-Language Models (VLMs) as autonomous auditors for assessing CUA task completion directly from observable interactions, and we conduct a large-scale meta-evaluation of five VLMs that judge task success given a natural-language instruction and the final environment state. Our evaluation spans three widely used CUA benchmarks across macOS, Windows, and Linux environments and analyzes auditor behavior along three complementary dimensions: accuracy, calibration of confidence estimates, and inter-model agreement. We find that while state-of-the-art VLMs achieve strong accuracy and calibration, all auditors exhibit notable performance degradation in more complex or heterogeneous environments, and even high-performing models show significant disagreement in their judgments. These results expose fundamental limitations of current model-based auditing approaches and highlight the need to explicitly account for evaluator reliability, uncertainty, and variance when deploying autonomous CUAs in real-world settings.
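The three meta-evaluation dimensions named above can be computed with simple, standard metrics. The sketch below is illustrative only and not the paper's actual evaluation code: it assumes binary success verdicts with scalar confidences per auditor, measures calibration via Expected Calibration Error (a common choice; the paper's exact calibration metric is not specified here), and measures inter-model agreement as mean pairwise verdict agreement. All function names are hypothetical.

```python
from itertools import combinations

def accuracy(preds, labels):
    # Fraction of auditor verdicts matching ground-truth task success.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def expected_calibration_error(confs, preds, labels, n_bins=10):
    # ECE: bin verdicts by stated confidence, then average the gap
    # between mean confidence and empirical accuracy per bin,
    # weighted by bin size. Lower is better calibrated.
    bins = [[] for _ in range(n_bins)]
    for c, p, y in zip(confs, preds, labels):
        idx = min(int(c * n_bins), n_bins - 1)
        bins[idx].append((c, p == y))
    n = len(labels)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

def pairwise_agreement(verdicts_by_model):
    # Mean fraction of tasks on which each pair of auditors
    # returns the same verdict (a crude agreement measure;
    # chance-corrected statistics like Fleiss' kappa are
    # a common refinement).
    rates = [
        sum(x == y for x, y in zip(a, b)) / len(a)
        for a, b in combinations(verdicts_by_model, 2)
    ]
    return sum(rates) / len(rates)
```

For example, an auditor that is right on 3 of 4 tasks scores 0.75 accuracy, and an ECE near 0 means its stated confidence tracks how often it is actually correct.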