The advent of multimodal agents has enabled effective interaction with graphical user interfaces (GUIs), especially in ubiquitous GUI control. However, their inability to reliably execute toggle control instructions remains a key bottleneck. To investigate this, we construct a state control benchmark with binary toggle instructions derived from public datasets. Evaluation results show that existing agents are notably unreliable, particularly when the current toggle state already matches the desired state. To address this challenge, we propose State-aware Reasoning (StaR), a multimodal reasoning method that enables agents to perceive the current toggle state, infer the desired state from the instruction, and act accordingly. Experiments on four multimodal agents demonstrate that StaR improves toggle instruction execution accuracy by over 30\%. Further evaluations on three public agentic benchmarks show that StaR also enhances general agentic task performance. Finally, evaluations in a dynamic environment highlight the potential of StaR for real-world applications. Code and benchmark: https://github.com/ZrW00/StaR.