Sustaining long-term interactions remains a bottleneck for Large Language Models (LLMs), as their limited context windows struggle to manage dialogue histories that extend over time. Existing memory systems often treat interactions as disjointed snippets, failing to capture the underlying narrative coherence of the dialogue stream. We propose TraceMem, a cognitively inspired framework that weaves structured, narrative memory schemata from user conversational traces through a three-stage pipeline: (1) Short-term Memory Processing, which employs a deductive topic segmentation approach to demarcate episode boundaries and extract semantic representations; (2) Synaptic Memory Consolidation, which summarizes episodes into episodic memories before distilling them, alongside their semantics, into user-specific traces; and (3) Systems Memory Consolidation, which applies two-stage hierarchical clustering to organize these traces into coherent, time-evolving narrative threads under unifying themes. These threads are encapsulated into structured user memory cards, forming narrative memory schemata. For memory utilization, we provide an agentic search mechanism to enhance the reasoning process. Evaluation on the LoCoMo benchmark shows that TraceMem achieves state-of-the-art performance with a brain-inspired architecture. Analysis shows that by constructing coherent narratives, TraceMem surpasses baselines in multi-hop and temporal reasoning, underscoring the essential role of narrative coherence in deep narrative comprehension. Additionally, we offer an open discussion of memory systems, presenting our perspectives and an outlook on the field. Our code implementation is available at: https://github.com/YimingShu-teay/TraceMem