Current context augmentation methods, such as retrieval-augmented generation, are essential for solving knowledge-intensive reasoning tasks. However, they typically follow a rigid, brute-force strategy that executes retrieval at every step. This indiscriminate approach not only incurs unnecessary computational cost but also degrades performance by saturating the context with irrelevant noise. To address these limitations, we introduce Agentic Context Evolution (ACE), a framework inspired by human metacognition that dynamically determines whether to seek new evidence or reason over existing knowledge. ACE employs a central orchestrator agent that makes this decision via majority voting, alternating between a retriever agent for external retrieval and a reasoner agent for internal analysis and refinement. By eliminating redundant retrieval steps, ACE maintains a concise, continually evolving context. Extensive experiments on challenging multi-hop QA benchmarks demonstrate that ACE significantly outperforms competitive baselines in accuracy while consuming tokens efficiently. Our work offers insights for advancing context-evolved generation on complex, knowledge-intensive tasks.
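The orchestration loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`ace_step`, `majority_vote`) and the `sample_decision`, `retrieve`, and `reason` callables standing in for LLM agents are all hypothetical.

```python
from collections import Counter

def majority_vote(decisions):
    """Return the most common action among sampled orchestrator decisions."""
    return Counter(decisions).most_common(1)[0][0]

def ace_step(question, context, sample_decision, retrieve, reason, n_votes=5):
    """One hypothetical ACE iteration.

    The orchestrator samples several decisions and takes the majority vote:
    either activate the retriever agent to fetch external evidence, or the
    reasoner agent to analyze and refine the existing context.
    """
    votes = [sample_decision(question, context) for _ in range(n_votes)]
    action = majority_vote(votes)
    if action == "retrieve":
        # Append newly retrieved evidence to the evolving context.
        context = context + retrieve(question, context)
    else:
        # Refine the context internally; no retrieval call is made.
        context = reason(question, context)
    return action, context
```

In this sketch, skipping the retrieval branch whenever the vote favors reasoning is what keeps the context concise: evidence is only appended when the orchestrator judges it necessary.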