Long-context processing is a critical capability that constrains the applicability of large language models (LLMs). Although various methods have been devoted to enhancing the long-context processing ability of LLMs, they have been developed in isolation, lacking systematic analysis and integration of their strengths, which hinders further progress. In this paper, we introduce UniMem, a unified framework that reformulates existing long-context methods from the perspective of memory augmentation of LLMs. UniMem is characterized by four key dimensions: Memory Management, Memory Writing, Memory Reading, and Memory Injection, providing a systematic basis for understanding various long-context methods. We reformulate 16 existing methods based on UniMem and analyze four representative ones, Transformer-XL, Memorizing Transformer, RMT, and Longformer, in their equivalent UniMem forms to reveal their design principles and strengths. Based on these analyses, we propose UniMix, an innovative approach that integrates the strengths of these algorithms. Experimental results show that UniMix achieves superior performance in handling long contexts, with significantly lower perplexity than the baselines.