Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches rely largely on semantic-similarity retrieval over monolithic memory stores that entangle temporal, causal, and entity information. This design limits interpretability and weakens the alignment between query intent and retrieved evidence, leading to suboptimal reasoning accuracy. In this paper, we propose MAGMA, a multi-graph agentic memory architecture that represents each memory item across orthogonal semantic, temporal, causal, and entity graphs. MAGMA formulates retrieval as policy-guided traversal over these relational views, enabling query-adaptive selection and structured context construction. By decoupling memory representation from retrieval logic, MAGMA provides transparent reasoning paths and fine-grained control over retrieval. Experiments on LoCoMo and LongMemEval show that MAGMA consistently outperforms state-of-the-art agentic memory systems on long-horizon reasoning tasks.
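To make the multi-graph layout concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes hypothetical class names (`MemoryItem`, `MultiGraphMemory`), a simple edge-list representation for each of the four views, and a toy greedy traversal in which per-view weights stand in for the learned, query-adaptive retrieval policy.

```python
# Minimal sketch of a multi-graph memory with policy-guided traversal.
# All names, edge types, and the scoring rule are illustrative assumptions,
# not the MAGMA implementation described in the paper.
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class MemoryItem:
    item_id: str
    text: str
    timestamp: float  # used by the temporal view


class MultiGraphMemory:
    """Stores each item once; edges are kept in four separate relational views."""

    VIEWS = ("semantic", "temporal", "causal", "entity")

    def __init__(self) -> None:
        self.items: dict[str, MemoryItem] = {}
        # view -> source item id -> list of (target item id, edge weight)
        self.edges: dict[str, dict[str, list[tuple[str, float]]]] = {
            v: defaultdict(list) for v in self.VIEWS
        }

    def add_item(self, item: MemoryItem) -> None:
        self.items[item.item_id] = item

    def add_edge(self, view: str, src: str, dst: str, weight: float = 1.0) -> None:
        self.edges[view][src].append((dst, weight))

    def traverse(self, start: str, view_weights: dict[str, float], hops: int = 2) -> list[str]:
        """Greedy traversal: per-view weights act as a stand-in for a
        query-adaptive retrieval policy over the relational views."""
        frontier, visited = [start], {start}
        for _ in range(hops):
            next_frontier = []
            for node in frontier:
                candidates = [
                    (dst, w * view_weights.get(view, 0.0))
                    for view in self.VIEWS
                    for dst, w in self.edges[view][node]
                    if dst not in visited
                ]
                # Expand the top-scoring neighbours under the current policy.
                for dst, _score in sorted(candidates, key=lambda x: -x[1])[:2]:
                    visited.add(dst)
                    next_frontier.append(dst)
            frontier = next_frontier
        return list(visited)


if __name__ == "__main__":
    mem = MultiGraphMemory()
    for i, text in enumerate(["met Alice", "Alice moved", "flight booked"]):
        mem.add_item(MemoryItem(f"m{i}", text, timestamp=float(i)))
    mem.add_edge("temporal", "m0", "m1")
    mem.add_edge("causal", "m1", "m2")
    # A temporally oriented query up-weights the temporal and causal views.
    print(mem.traverse("m0", {"temporal": 1.0, "causal": 0.8, "semantic": 0.2}))
```

The point of the sketch is the decoupling the abstract describes: the memory items are stored once, the relational views are kept separate rather than entangled in one store, and only the traversal policy (here, the hypothetical `view_weights`) changes with the query, which is what makes the retrieved path inspectable.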