Large Language Models (LLMs) face significant computational and memory constraints when processing long contexts, despite growing demand for applications that reason over extensive documents, multi-session dialogues, and book-length texts. While recent advances have extended context windows to 100K-1M tokens, such approaches incur prohibitive costs for resource-constrained deployments. We propose BudgetMem, a novel memory-augmented architecture that learns what to remember rather than remembering everything. Our system combines selective memory policies with feature-based salience scoring (entity density, TF-IDF, discourse markers, position bias) to decide which information merits storage under strict budget constraints. Unlike existing retrieval-augmented generation (RAG) systems that store all chunks, BudgetMem employs learned gating mechanisms coupled with BM25 sparse retrieval for efficient information access. Through comprehensive experiments on 700 question-answer pairs across short (237-token) and long (5K-10K-token) documents with Llama-3.2-3B-Instruct, we demonstrate that BudgetMem achieves strong results on long documents: only a 1.0% F1 degradation while saving 72.4% of memory compared to a baseline RAG system. We validate our approach through budget sensitivity analysis (testing 7 budget ratios), naive baseline comparisons, and document-length analysis, showing that BudgetMem's benefits grow with document length. Our work provides a practical pathway for deploying capable long-context systems on modest hardware, democratizing access to advanced language understanding.
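The core idea of budget-constrained selective storage can be illustrated with a minimal sketch. The feature extractors and weights below are hypothetical stand-ins (capitalized-word ratio as an entity-density proxy, a simple corpus-level IDF, a fixed marker list, lead-position bias, and an assumed linear combination), not BudgetMem's learned gating policy:

```python
# Minimal sketch of budget-constrained salience selection. All feature
# heuristics and weights here are illustrative assumptions, not the
# paper's learned policy.
import math
from collections import Counter

# Hypothetical discourse-marker list.
DISCOURSE_MARKERS = {"however", "therefore", "in conclusion", "importantly"}

def salience(chunk, idx, n_chunks, idf):
    tokens = chunk.lower().split()
    if not tokens:
        return 0.0
    words = chunk.split()
    # Entity density: proxied by the ratio of capitalized words (heuristic).
    entity_density = sum(w[0].isupper() for w in words) / len(words)
    # TF-IDF: mean IDF-weighted term frequency over the chunk.
    tf = Counter(tokens)
    tfidf = sum(c * idf.get(t, 1.0) for t, c in tf.items()) / len(tokens)
    # Discourse markers: 1 if any marker appears, else 0.
    discourse = float(any(m in chunk.lower() for m in DISCOURSE_MARKERS))
    # Position bias: earlier chunks score higher (lead bias).
    position = 1.0 - idx / max(n_chunks - 1, 1)
    # Assumed linear combination of the four features.
    return 0.3 * entity_density + 0.4 * tfidf + 0.1 * discourse + 0.2 * position

def select_under_budget(chunks, budget_ratio=0.3):
    """Keep only the top budget_ratio fraction of chunks by salience score."""
    n = len(chunks)
    # Simple corpus-level IDF computed over the chunks themselves.
    df = Counter(t for c in chunks for t in set(c.lower().split()))
    idf = {t: math.log(n / d) + 1.0 for t, d in df.items()}
    ranked = sorted(range(n), key=lambda i: salience(chunks[i], i, n, idf),
                    reverse=True)
    k = max(1, int(budget_ratio * n))
    keep = sorted(ranked[:k])  # restore document order for the stored subset
    return [chunks[i] for i in keep]
```

At query time, only the stored subset would be indexed for BM25 retrieval, which is where the reported memory savings come from: the retrieval index holds a fixed fraction of the chunks rather than all of them.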