The advent of multimodal agents facilitates effective interaction within graphical user interfaces (GUIs), especially in ubiquitous GUI control. However, their inability to reliably execute toggle control instructions remains a key bottleneck. To investigate this, we construct a state control benchmark with binary toggle instructions derived from public datasets. Evaluations of existing agents reveal notable unreliability, particularly when the current toggle state already matches the desired state. To address this challenge, we propose State-aware Reasoning (StaR), a multimodal reasoning method that enables agents to perceive the current toggle state, infer the desired state from the instruction, and act accordingly. Experiments on four multimodal agents demonstrate that StaR improves toggle instruction execution accuracy by over 30\%. Further evaluations on three public agentic benchmarks show that StaR also enhances general agentic task performance. Finally, evaluations in a dynamic environment highlight the potential of StaR for real-world applications. Code and benchmark: https://github.com/ZrW00/StaR.
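The three-step StaR procedure named above (perceive the current toggle state, infer the desired state, act accordingly) can be sketched as a minimal control step. This is an illustrative sketch only: the helper functions and the dictionary-based screen representation are assumptions standing in for the agent's multimodal perception and instruction parsing, not the paper's implementation.

```python
def perceive_toggle_state(screen: dict) -> bool:
    # Illustrative stand-in for multimodal perception of the toggle widget.
    return screen["wifi_on"]

def parse_desired_state(instruction: str) -> bool:
    # Illustrative stand-in for inferring the desired state from language.
    text = instruction.lower()
    return "on" in text and "off" not in text

def star_toggle_step(screen: dict, instruction: str) -> str:
    """One StaR-style decision: compare current vs. desired toggle state."""
    current = perceive_toggle_state(screen)    # step 1: perceive current state
    desired = parse_desired_state(instruction) # step 2: infer desired state
    if current == desired:
        return "no_op"                         # step 3: state already matches, do nothing
    return "tap_toggle"                        # step 3: state differs, flip the switch
```

The `no_op` branch captures the failure mode the benchmark targets: agents that toggle unconditionally break exactly when the state already matches the instruction.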