Large language models deployed as autonomous agents face critical memory limitations: they lack selective forgetting mechanisms, leading either to catastrophic forgetting at context boundaries or to information overload within them. While human memory naturally balances retention and forgetting through adaptive decay processes, current AI systems employ binary retention strategies that preserve everything or lose it entirely. We propose FadeMem, a biologically inspired agent memory architecture that incorporates active forgetting mechanisms mirroring human cognitive efficiency. FadeMem implements differential decay rates across a dual-layer memory hierarchy, where retention is governed by adaptive exponential decay functions modulated by semantic relevance, access frequency, and temporal access patterns. Through LLM-guided conflict resolution and intelligent memory fusion, the system consolidates related information while allowing irrelevant details to fade. Experiments on Multi-Session Chat, LoCoMo, and LTI-Bench demonstrate superior multi-hop reasoning and retrieval accuracy with a 45\% reduction in storage, validating the effectiveness of biologically inspired forgetting in agent memory systems.
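A minimal sketch of the kind of decay rule the abstract describes — an exponential retention score whose decay rate is slowed by semantic relevance and access frequency. All function names, parameter names, and weight values here are illustrative assumptions, not details from the paper:

```python
import math
import time

def retention_score(last_access_ts, semantic_relevance, access_count,
                    base_decay=0.1, alpha=2.0, beta=0.5, now=None):
    """Adaptive exponential decay: memories with higher semantic
    relevance and access frequency decay more slowly.

    Parameter names and default weights are illustrative assumptions.
    Returns a score in (0, 1]; a memory could be dropped below a threshold.
    """
    now = time.time() if now is None else now
    elapsed_hours = (now - last_access_ts) / 3600.0
    # Modulate the decay rate: relevance and (log-damped) access
    # frequency both slow forgetting.
    decay_rate = base_decay / (1.0 + alpha * semantic_relevance
                               + beta * math.log1p(access_count))
    return math.exp(-decay_rate * elapsed_hours)
```

Under this sketch, a frequently accessed, highly relevant memory retains a high score after a day, while a stale, low-relevance one fades toward zero and becomes a candidate for pruning or fusion.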