Agent memory systems often adopt the standard Retrieval-Augmented Generation (RAG) pipeline, yet RAG's underlying assumptions do not hold in this setting. RAG targets large, heterogeneous corpora where retrieved passages are diverse, whereas agent memory is a bounded, coherent dialogue stream whose spans are highly correlated and often duplicated. Under this shift, fixed top-$k$ similarity retrieval tends to return redundant context, and post-hoc pruning can delete the temporally linked prerequisites needed for correct reasoning. We argue that retrieval should move beyond similarity matching and instead operate over latent components, following a decoupling-to-aggregation paradigm: disentangle memories into semantic components, organise them into a hierarchy, and use this structure to drive retrieval. We propose xMemory, which builds a hierarchy of intact units and maintains a searchable yet faithful organisation of high-level nodes via a sparsity--semantics objective that guides memory splits and merges. At inference, xMemory retrieves top-down, selecting a compact, diverse set of themes and semantics for multi-fact queries and expanding to episodes and raw messages only when doing so reduces the reader's uncertainty. Experiments on LoCoMo and PerLTQA across three recent LLMs show consistent gains in answer quality and token efficiency.
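The top-down retrieval step can be illustrated with a minimal sketch. All names here (`MemoryNode`, `retrieve_top_down`, the `gain` threshold) are hypothetical, not from the paper, and query--node similarity stands in as a crude proxy for the reader-uncertainty criterion: a node's summary is kept unless one of its children matches the query sufficiently better, in which case the search descends.

```python
import math

# Hypothetical node structure for illustration: each node carries a text
# summary, an embedding, and finer-grained children (theme -> semantic
# -> episode -> raw message). Not the paper's actual data structure.
class MemoryNode:
    def __init__(self, text, emb, children=None):
        self.text = text
        self.emb = emb
        self.children = children or []

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_down(root, q_emb, k=2, gain=0.05):
    """Greedy top-down descent. `gain` is an assumed threshold: expand a
    node only when its best child beats it by at least this margin, a
    stand-in for 'expansion reduces the reader's uncertainty'."""
    # Select a compact, diverse theme set: rank by similarity, then skip
    # themes that are near-duplicates of ones already kept.
    themes = sorted(root.children, key=lambda n: cos(n.emb, q_emb), reverse=True)
    selected = []
    for t in themes:
        if len(selected) >= k:
            break
        if all(cos(t.emb, s.emb) < 0.95 for s in selected):
            selected.append(t)
    out = []
    for node in selected:
        frontier = [node]
        while frontier:
            n = frontier.pop()
            best = max(n.children, key=lambda c: cos(c.emb, q_emb), default=None)
            if best and cos(best.emb, q_emb) > cos(n.emb, q_emb) + gain:
                frontier.append(best)  # expanding helps: descend one level
            else:
                out.append(n.text)     # summary suffices: stop here
    return out
```

In this toy setting, a query close to one theme descends into that theme's episode, while a node whose children add nothing over its own summary is returned as-is, keeping the context compact.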