Current LLM agents typically lack instance-level context, which comprises concrete facts such as environment structure, system configurations, and local mechanics. Consequently, existing methods are forced to intertwine exploration with task execution. This coupling leads to redundant interactions and fragile decision-making, as agents must repeatedly rediscover the same information for every new task. To address this, we introduce AutoContext, a method that decouples exploration from task solving. AutoContext performs a systematic, one-off exploration to construct a reusable knowledge graph for each environment instance. This structured context allows off-the-shelf agents to access necessary facts directly, eliminating redundant exploration. Experiments across TextWorld, ALFWorld, Crafter, and InterCode-Bash demonstrate substantial gains: for example, the success rate of a ReAct agent on TextWorld improves from 37% to 95%, highlighting the critical role of structured instance context in efficient agentic systems.
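The decoupling described above can be illustrated with a minimal sketch: a one-off exploration pass populates an instance-level fact store, and later task runs query it directly instead of re-exploring. All names here (`KnowledgeGraph`, `explore_once`, `solve_task`) are illustrative assumptions, not AutoContext's actual API or graph schema.

```python
class KnowledgeGraph:
    """Stores (entity, relation) -> value facts about one environment instance."""

    def __init__(self):
        self.facts = {}

    def add(self, entity, relation, value):
        self.facts[(entity, relation)] = value

    def query(self, entity, relation):
        # Direct lookup replaces repeated in-task exploration.
        return self.facts.get((entity, relation))


def explore_once(env_layout):
    """One-off exploration: record where each object is located."""
    kg = KnowledgeGraph()
    for room, objects in env_layout.items():
        for obj in objects:
            kg.add(obj, "located_in", room)
    return kg


def solve_task(kg, target_obj):
    """Task solving reads the graph directly -- no rediscovery per task."""
    return kg.query(target_obj, "located_in")


# One exploration pass serves many subsequent tasks.
layout = {"kitchen": ["apple", "knife"], "garden": ["shovel"]}
kg = explore_once(layout)
print(solve_task(kg, "shovel"))  # -> garden
```

The point of the sketch is the amortization: `explore_once` runs a single time per environment instance, after which any number of tasks resolve facts in constant time rather than through fresh interaction.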