The advent of multimodal agents has enabled effective interaction with graphical user interfaces (GUIs), particularly for ubiquitous GUI control. However, their inability to reliably execute toggle control instructions remains a key bottleneck. To investigate this, we construct a state control benchmark with binary toggle instructions derived from public datasets. Evaluations of existing agents reveal notable unreliability, particularly when the current toggle state already matches the desired state. To address this challenge, we propose State-aware Reasoning (StaR), a multimodal reasoning method that enables agents to perceive the current toggle state, infer the desired state from the instruction, and act accordingly. Experiments on four multimodal agents demonstrate that StaR improves toggle instruction execution accuracy by over 30\%. Further evaluations on three public agentic benchmarks show that StaR also enhances general agentic task performance. Finally, evaluation in a dynamic environment highlights the potential of StaR for real-world applications. Code and benchmark: https://github.com/ZrW00/StaR.