While recent research increasingly showcases the remarkable capabilities of Large Language Models (LLMs), it is vital to confront their hidden pitfalls. Among these challenges, the issue of memorization stands out, posing significant ethical and legal risks. In this paper, we present a Systematization of Knowledge (SoK) on the topic of memorization in LLMs. Memorization is the phenomenon whereby a model stores and reproduces phrases or passages from its training data, and it has been shown to underlie a variety of privacy and security attacks against LLMs. We begin by providing an overview of the literature on memorization, exploring it across five key dimensions: intentionality, degree, retrievability, abstraction, and transparency. Next, we discuss the metrics and methods used to measure memorization, followed by an analysis of the factors that contribute to the memorization phenomenon. We then examine how memorization manifests itself in specific model architectures and explore strategies for mitigating its effects. We conclude our overview by identifying potential research topics for the near future: developing methods for balancing performance and privacy in LLMs, and analyzing memorization in specific contexts, including conversational agents, retrieval-augmented generation, multilingual language models, and diffusion language models.