Large language model-based agents operating in long-horizon interactions require memory systems that support temporal consistency, multi-hop reasoning, and evidence-grounded reuse across sessions. Existing approaches largely rely on unstructured retrieval or coarse abstractions, which often lead to temporal conflicts, brittle reasoning, and limited traceability. We propose MemWeaver, a unified memory framework that consolidates long-term agent experiences into three interconnected components: a temporally grounded graph memory for structured relational reasoning, an experience memory that abstracts recurring interaction patterns from repeated observations, and a passage memory that preserves original textual evidence. MemWeaver employs a dual-channel retrieval strategy that jointly retrieves structured knowledge and supporting evidence to construct compact yet information-dense contexts for reasoning. Experiments on the LoCoMo benchmark demonstrate that MemWeaver substantially improves multi-hop and temporal reasoning accuracy while reducing input context length by over 95\% compared to long-context baselines.
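The dual-channel retrieval step described above can be sketched minimally as follows. This is an illustrative toy, not MemWeaver's implementation: the `embed` function, the example facts and passages, and the merge format are all hypothetical stand-ins, shown only to make the two-channel structure (structured facts plus supporting passages, merged into one compact context) concrete.

```python
# Hypothetical sketch of dual-channel retrieval: one channel ranks structured
# facts (e.g., time-stamped triples from a graph memory), the other ranks raw
# passages (evidence memory), and both top-k sets are merged into a compact
# context for the reasoning model. All names and data here are illustrative.

def embed(text):
    # Toy bag-of-words embedding over a tiny vocabulary; a stand-in for a
    # real sentence encoder.
    vocab = ["planted", "roses", "garden", "april", "moved", "city"]
    return [text.lower().count(w) for w in vocab]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    da = sum(x * x for x in a) ** 0.5
    db = sum(y * y for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def dual_channel_retrieve(query, graph_facts, passages, k=2):
    q = embed(query)
    # Channel 1: structured knowledge from the graph memory.
    facts = sorted(graph_facts, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]
    # Channel 2: supporting textual evidence from the passage memory.
    evid = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]
    # Merge both channels into one compact, information-dense context.
    return "Facts:\n" + "\n".join(facts) + "\nEvidence:\n" + "\n".join(evid)

graph_facts = ["(Alice, planted, roses, 2023-04)", "(Alice, moved, city, 2022-09)"]
passages = ["In April Alice planted roses in her garden.",
            "Alice moved to the city last fall."]
print(dual_channel_retrieve("When did Alice plant roses in the garden?",
                            graph_facts, passages, k=1))
```

The point of the merge is that the structured fact answers the temporal question directly while the passage preserves traceable evidence; a real system would replace the toy embedding with a learned retriever and budget the merged context to stay compact.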