Language models (LMs) are trained on vast amounts of text data, which may include private and copyrighted content. Data owners may request the removal of their data from a trained model due to privacy or copyright concerns. However, exactly unlearning only these data points (i.e., retraining with the data removed) is intractable in modern-day models. This has led to the development of many approximate unlearning algorithms. The evaluation of the efficacy of these algorithms has traditionally been narrow in scope, failing to precisely quantify the success and practicality of the algorithm from the perspectives of both the model deployers and the data owners. We address this issue by proposing MUSE, a comprehensive machine unlearning evaluation benchmark that enumerates six diverse desirable properties for unlearned models: (1) no verbatim memorization, (2) no knowledge memorization, (3) no privacy leakage, (4) utility preservation on data not intended for removal, (5) scalability with respect to the size of removal requests, and (6) sustainability over sequential unlearning requests. Using these criteria, we benchmark how effectively eight popular unlearning algorithms can unlearn Harry Potter books and news articles from 7B-parameter LMs. Our results demonstrate that most algorithms can prevent verbatim memorization and knowledge memorization to varying degrees, but only one algorithm does not lead to severe privacy leakage. Furthermore, existing algorithms fail to meet deployers' expectations because they often degrade general model utility and also cannot sustainably accommodate successive unlearning requests or large-scale content removal. Our findings identify key issues with the practicality of existing unlearning algorithms on language models, and we release our benchmark to facilitate further evaluations: muse-bench.github.io
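The first criterion, no verbatim memorization, is commonly operationalized by comparing a model's continuation of a prompt from the forget set against the true continuation using a lexical-overlap score such as ROUGE-L. The sketch below is an illustrative, self-contained implementation of that style of metric (the benchmark's exact metric and tokenization choices may differ); `model_continuation` here is a hypothetical stand-in for the unlearned model's output.

```python
def lcs_length(a, b):
    """Longest common subsequence length between two token lists,
    computed with standard dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]


def rouge_l_f1(reference, candidate):
    """ROUGE-L F1 between a reference continuation and a model's
    candidate continuation, with whitespace tokenization.
    A score near 1.0 suggests verbatim memorization; a low score
    suggests the passage was not reproduced."""
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)


# Hypothetical usage: score how closely the model's continuation of a
# forget-set prompt matches the original text.
true_continuation = "said in a voice barely louder than a whisper"
model_continuation = "replied quietly after a long pause"
score = rouge_l_f1(true_continuation, model_continuation)
```

In practice, this score would be averaged over many prompts sampled from the forget set, and an unlearned model is compared against a retrained-from-scratch reference to judge whether verbatim memorization has been removed.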