The memory mechanism is a core component of LLM-based agents, enabling reasoning and knowledge discovery over long-horizon contexts. Existing agent memory systems are typically designed within isolated paradigms (e.g., explicit, parametric, or latent memory) with tightly coupled retrieval methods, which hinders cross-paradigm generalization and fusion. In this work, we take a first step toward unifying heterogeneous memory paradigms within a single memory system. We propose MemAdapter, a memory retrieval framework that enables fast alignment across agent memory paradigms. MemAdapter adopts a two-stage training strategy: (1) training a generative subgraph retriever from the unified memory space, and (2) adapting the retriever to unseen memory paradigms by training a lightweight alignment module through contrastive learning. This design improves the flexibility of memory retrieval and substantially reduces the alignment cost across paradigms. Comprehensive experiments on three public evaluation benchmarks demonstrate that the generative subgraph retriever consistently outperforms five strong agent memory systems across three memory paradigms and multiple agent model scales. Notably, MemAdapter completes cross-paradigm alignment within 13 minutes on a single GPU, achieving superior performance over the original memory retrievers with less than 5% of their training compute. Furthermore, MemAdapter enables effective zero-shot fusion across memory paradigms, highlighting its potential as a plug-and-play solution for agent memory systems.
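To make stage (2) concrete, the sketch below shows the kind of contrastive (InfoNCE-style) objective a lightweight alignment module could be trained with: a small linear adapter maps embeddings from a new memory paradigm into the unified retriever space, and the loss pulls each adapted embedding toward its paired unified-space embedding while pushing it away from other pairs in the batch. The abstract does not specify the loss, adapter architecture, or temperature, so all of those details (the `info_nce` function, the linear adapter, `tau=0.1`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce(z_src, z_tgt, tau=0.1):
    """InfoNCE loss over a batch of paired embeddings (illustrative sketch).

    z_src: adapted embeddings from the new memory paradigm, shape (B, D)
    z_tgt: paired embeddings in the unified memory space, shape (B, D)
    Row i of z_src is the positive for row i of z_tgt; all other rows
    in the batch act as in-batch negatives.
    """
    # Cosine similarity: L2-normalize both sides, then take dot products.
    z_src = z_src / np.linalg.norm(z_src, axis=1, keepdims=True)
    z_tgt = z_tgt / np.linalg.norm(z_tgt, axis=1, keepdims=True)
    logits = (z_src @ z_tgt.T) / tau          # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the diagonal (the matching pairs).
    return -np.mean(np.diag(log_probs))

def apply_adapter(z_new_paradigm, W):
    """Hypothetical lightweight adapter: a single linear projection."""
    return z_new_paradigm @ W

# Tiny demonstration: embeddings that already match the unified space
# yield a much lower InfoNCE loss than unrelated embeddings.
rng = np.random.default_rng(0)
dim = 16
z_unified = rng.normal(size=(8, dim))            # unified-space targets
W = np.eye(dim)                                  # identity-initialized adapter
z_aligned = apply_adapter(z_unified.copy(), W)   # perfectly aligned source
loss_aligned = info_nce(z_aligned, z_unified)
loss_random = info_nce(rng.normal(size=(8, dim)), z_unified)
```

In a full training loop, `W` (or a small MLP in its place) would be the only trainable component, with the retriever and memory encoders frozen, which is consistent with the abstract's claim that alignment needs only a small fraction of the original training compute.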