Memory is critical for enabling large language model (LLM) based agents to maintain coherent behavior over long-horizon interactions. However, existing agent memory systems suffer from two key gaps: they rely on a one-size-fits-all memory structure, and they do not model memory-structure selection as a context-adaptive decision, which limits their ability to handle heterogeneous interaction patterns and results in suboptimal performance. We propose FluxMem, a unified framework that enables adaptive memory organization for LLM agents. FluxMem equips agents with multiple complementary memory structures and explicitly learns to select among them based on interaction-level features, using offline supervision derived from downstream response quality and memory utilization. To support robust long-horizon memory evolution, we further introduce a three-level memory hierarchy and a Beta Mixture Model-based probabilistic gate for distribution-aware memory fusion, replacing brittle similarity thresholds. Experiments on two long-horizon benchmarks, PERSONAMEM and LoCoMo, show that our method achieves average improvements of 9.18% and 6.14%, respectively.
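To illustrate the idea of a distribution-aware fusion gate, here is a minimal sketch of how a two-component Beta mixture over pairwise similarity scores could replace a hard similarity threshold: one component models unrelated memory pairs, the other redundant (fusable) pairs, and the fusion decision uses the posterior responsibility of the fusable component. The mixture parameters, the `fusion_posterior` helper, and the component roles are illustrative assumptions, not the paper's actual implementation.

```python
import math

def beta_pdf(x, a, b):
    # Beta(a, b) density, computed via the gamma function
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def fusion_posterior(sim, components, weights):
    """Posterior probability that similarity `sim` came from the
    high-similarity ('fuse') component of a two-component Beta mixture.

    components: [(a0, b0), (a1, b1)] with index 1 = fuse component
    weights:    mixture weights summing to 1
    """
    likes = [w * beta_pdf(sim, a, b) for (a, b), w in zip(components, weights)]
    return likes[1] / sum(likes)

# Hypothetical fitted mixture: component 0 covers unrelated pairs
# (mass near low similarity), component 1 covers redundant pairs
# (mass near high similarity).
components = [(2.0, 5.0), (8.0, 2.0)]
weights = [0.6, 0.4]

# A high-similarity pair gets a high fusion posterior; a low-similarity
# pair gets a low one, without any hand-set threshold.
print(fusion_posterior(0.9, components, weights))
print(fusion_posterior(0.2, components, weights))
```

In practice the mixture parameters would be re-estimated (e.g. by EM) as the similarity distribution drifts over a long interaction, which is what makes the gate distribution-aware rather than a fixed cutoff.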