While Large Language Models (LLMs) excel at generalized reasoning, standard retrieval-augmented approaches fail to address the disconnected nature of long-term agentic memory. To bridge this gap, we introduce Synapse (Synergistic Associative Processing Semantic Encoding), a unified memory architecture that transcends static vector similarity. Drawing from cognitive science, Synapse models memory as a dynamic graph where relevance emerges from spreading activation rather than pre-computed links. By integrating lateral inhibition and temporal decay, the system dynamically highlights relevant sub-graphs while filtering interference. We implement a Triple Hybrid Retrieval strategy that fuses geometric embeddings with activation-based graph traversal. Comprehensive evaluations on the LoCoMo benchmark show that Synapse significantly outperforms state-of-the-art methods in complex temporal and multi-hop reasoning tasks, offering a robust solution to the "Contextual Tunneling" problem. Our code and data will be made publicly available upon acceptance.
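The activation dynamics sketched above (spreading activation over a memory graph, moderated by lateral inhibition and temporal decay) can be illustrated with a minimal toy example. This is an illustrative sketch only: the function name, parameters, and update rule below are our assumptions, not the actual Synapse implementation.

```python
def spread_activation(graph, seeds, steps=3, decay=0.5, inhibition=0.1):
    """Toy spreading activation over an adjacency-list memory graph.

    graph:      dict node -> list of (neighbor, edge_weight)
    seeds:      dict node -> initial activation (e.g. embedding similarity)
    decay:      fraction of a node's activation passed to each neighbor per step
                (stands in for temporal decay of propagated signal)
    inhibition: lateral inhibition strength; active nodes collectively
                suppress weakly activated ones, filtering interference
    """
    act = {n: 0.0 for n in graph}
    act.update(seeds)
    for _ in range(steps):
        nxt = dict(act)
        # propagate a decayed share of activation along weighted edges
        for node, a in act.items():
            if a <= 0:
                continue
            for nb, w in graph.get(node, []):
                nxt[nb] = nxt.get(nb, 0.0) + decay * w * a
        # lateral inhibition: subtract a fraction of the mean activation,
        # clamping at zero so only strongly activated sub-graphs survive
        mean_a = sum(nxt.values()) / max(len(nxt), 1)
        act = {n: max(0.0, v - inhibition * mean_a) for n, v in nxt.items()}
    return act
```

Under this sketch, nodes reachable from the seed set in fewer hops accumulate more activation, so the highlighted sub-graph emerges from graph dynamics rather than from pre-computed links; a hybrid retriever could then fuse these activation scores with geometric embedding similarity.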