Large language model (LLM) agents have demonstrated remarkable capabilities in complex decision-making and tool-use tasks, yet their ability to generalize across varying environments remains an under-examined concern. Current evaluation paradigms predominantly rely on trajectory-based metrics that measure task success, while failing to assess whether agents possess a grounded, transferable model of the environment. To address this gap, we propose Task-to-Quiz (T2Q), a deterministic and automated evaluation paradigm designed to decouple task execution from world-state understanding. We instantiate this paradigm in T2QBench, a suite comprising 30 environments and 1,967 grounded QA pairs across multiple difficulty levels. Our extensive experiments reveal that task success is often a poor proxy for environment understanding, and that current memory mechanisms cannot effectively help agents acquire a grounded model of the environment. These findings identify proactive exploration and fine-grained state representation as the primary bottlenecks, offering a robust foundation for developing more generalizable autonomous agents.