Language models (LMs) are trained on vast amounts of text data, which may include private and copyrighted content. Data owners may request the removal of their data from a trained model due to privacy or copyright concerns. However, exactly unlearning only these datapoints (i.e., retraining with the data removed) is intractable in modern-day models. This has led to the development of many approximate unlearning algorithms. The evaluation of the efficacy of these algorithms has traditionally been narrow in scope, failing to precisely quantify the success and practicality of the algorithm from the perspectives of both the model deployers and the data owners. We address this issue by proposing MUSE, a comprehensive machine unlearning evaluation benchmark that enumerates six diverse desirable properties for unlearned models: (1) no verbatim memorization, (2) no knowledge memorization, (3) no privacy leakage, (4) utility preservation on data not intended for removal, (5) scalability with respect to the size of removal requests, and (6) sustainability over sequential unlearning requests. Using these criteria, we benchmark how effectively eight popular unlearning algorithms can unlearn Harry Potter books and news articles from 7B-parameter LMs. Our results demonstrate that most algorithms can prevent verbatim memorization and knowledge memorization to varying degrees, but only one algorithm does not lead to severe privacy leakage. Furthermore, existing algorithms fail to meet deployers' expectations because they often degrade general model utility and also cannot sustainably accommodate successive unlearning requests or large-scale content removal. Our findings identify key issues with the practicality of existing unlearning algorithms on language models, and we release our benchmark to facilitate further evaluations: muse-bench.github.io
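To make the first criterion concrete, the sketch below shows one simple way to probe verbatim memorization: prompt the model with a prefix of the allegedly unlearned text and measure how many n-grams of its continuation reappear verbatim in the source. This is an illustrative assumption, not MUSE's actual metric (the benchmark's evaluation code at muse-bench.github.io defines the exact scores); the `n=8` window and whitespace tokenization are arbitrary choices for the sketch.

```python
def verbatim_overlap(reference: str, continuation: str, n: int = 8) -> float:
    """Fraction of n-grams in a model continuation that appear verbatim in
    the reference text. Higher values suggest stronger verbatim memorization.

    Hypothetical helper for illustration only; not the MUSE metric. Uses
    naive whitespace tokenization, so it is a rough proxy at best.
    """
    ref_tokens = reference.split()
    cont_tokens = continuation.split()
    if len(cont_tokens) < n:
        return 0.0  # continuation too short to form a single n-gram
    # All n-grams occurring anywhere in the reference text
    ref_ngrams = {tuple(ref_tokens[i:i + n])
                  for i in range(len(ref_tokens) - n + 1)}
    # Sliding n-gram windows over the model's continuation
    cont_ngrams = [tuple(cont_tokens[i:i + n])
                   for i in range(len(cont_tokens) - n + 1)]
    hits = sum(1 for g in cont_ngrams if g in ref_ngrams)
    return hits / len(cont_ngrams)
```

In use, `reference` would be the held-out forget-set passage and `continuation` the model's greedy completion of its opening prefix; an unlearned model should drive this score toward zero while a memorizing model keeps it high.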