Existing Large Language Models (LLMs) typically remain static after deployment, making it difficult to inject new knowledge into the model. We aim to build models containing a considerable portion of self-updatable parameters, enabling the model to integrate new knowledge effectively and efficiently. To this end, we introduce MEMORYLLM, a model that comprises a transformer and a fixed-size memory pool within the latent space of the transformer. MEMORYLLM can self-update with text knowledge and memorize knowledge injected earlier. Our evaluations demonstrate the ability of MEMORYLLM to effectively incorporate new knowledge, as evidenced by its performance on model editing benchmarks. Meanwhile, the model exhibits long-term information retention capacity, which is validated through our custom-designed evaluations and long-context benchmarks. MEMORYLLM also maintains operational integrity without any sign of performance degradation even after nearly a million memory updates. Our code and model are open-sourced at https://github.com/wangyu-ustc/MemoryLLM.
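The core idea of a fixed-size memory pool that self-updates can be pictured with a minimal sketch: drop a few randomly chosen old memory slots and append hidden states derived from the new text, so the pool size stays constant across updates. This is an illustrative toy in NumPy under assumed mechanics, not the paper's actual implementation; the function name `update_memory` and all shapes are hypothetical.

```python
import numpy as np

def update_memory(memory, new_tokens, rng):
    """Merge new latent tokens into a fixed-size memory pool.

    Hypothetical sketch: evict as many randomly chosen old slots as
    there are incoming tokens, then append the new tokens, keeping
    the pool size constant across arbitrarily many updates.
    """
    n_new = new_tokens.shape[0]
    # Indices of old slots that survive this update.
    keep = rng.choice(memory.shape[0], memory.shape[0] - n_new, replace=False)
    keep.sort()  # preserve the relative order of surviving slots
    return np.concatenate([memory[keep], new_tokens], axis=0)

rng = np.random.default_rng(0)
pool = rng.standard_normal((8, 4))   # fixed-size pool: 8 slots, hidden dim 4
new = rng.standard_normal((2, 4))    # latent tokens encoding new knowledge
pool = update_memory(pool, new, rng)
print(pool.shape)  # → (8, 4): pool size is unchanged after the update
```

Because each update evicts only a random fraction of old slots, earlier knowledge decays gradually rather than being overwritten at once, which is one way to get long-term retention from a bounded memory.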