The evolution of Large Language Models (LLMs) into autonomous agents requires managing extensive, dynamic contexts. Current benchmarks, however, remain largely static: they rely on passive retrieval tasks and fail to capture the complexities of agent-environment interaction, such as non-linear reasoning and iterative feedback. To address this gap, we introduce \textbf{AgentLongBench}, which evaluates agents through simulated environment rollouts based on Lateral Thinking Puzzles. The framework generates rigorous interaction trajectories spanning both knowledge-intensive and knowledge-free scenarios. Experiments with state-of-the-art models and memory systems spanning 32K to 4M tokens expose a critical weakness: although agents are adept at static retrieval, they struggle with the dynamic information synthesis that agentic workflows demand. Our analysis indicates that this degradation is driven primarily by the minimum number of tokens required to resolve a query. This factor explains why the high information density of massive tool responses poses a far greater challenge than the memory fragmentation typical of long, multi-turn dialogues.