Language models (LMs) risk inadvertently memorizing and divulging sensitive or personally identifiable information (PII) seen in training data, raising privacy concerns. Current approaches to address this issue involve costly dataset scrubbing or model filtering through unlearning and model editing, which can be bypassed by extraction attacks. We propose REVS, a novel non-gradient-based method for unlearning sensitive information from LMs. REVS identifies and modifies a small subset of neurons relevant to the constituent tokens that form sensitive information. To adequately evaluate our method on truly sensitive information, we curate two datasets: an email dataset naturally memorized by Llama-3-8B and GPT-J-6B, and a synthetic social security number dataset that we tune the models to memorize. Compared to other methods, REVS demonstrates superior performance in unlearning sensitive information and robustness to extraction attacks, while retaining underlying model integrity.