Pre-trained large language models (LLMs) are becoming useful for a wide range of tasks. To improve their performance on certain tasks, they must be fine-tuned on specialized data corpora (e.g., medical reports, business data). These corpora may contain sensitive data (e.g., personal or confidential information) that the model memorizes and is likely to regurgitate during subsequent use. This memorization of sensitive information poses a significant privacy or confidentiality issue. To remove this memorization and sanitize the model without costly additional fine-tuning on a secured data corpus, we propose SANI, an unlearning approach for sanitizing language models. It relies on an erasure phase and a repair phase that 1) reset certain neurons in the last layers of the model to disrupt the memorization of fine-grained information, and then 2) fine-tune the model while avoiding the memorization of sensitive information. We comprehensively evaluate SANI in two settings: sanitizing a model fine-tuned and specialized on medical data by removing direct and indirect identifiers from its memorization, and sanitizing a standard pre-trained model by removing specific terms defined as confidential information. Results show that with only a few additional epochs of unlearning, the model is sanitized and the number of regurgitations is drastically reduced. This approach can be particularly useful for hospitals and other organizations that have already spent significant resources training models on large datasets and wish to sanitize them before sharing.
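The erasure phase described above can be illustrated with a minimal sketch. Note that the function name `erase_neurons` and the Gaussian re-initialization are illustrative assumptions, not the paper's exact procedure: SANI's actual criterion for selecting which neurons to reset, and the repair-phase loss that discourages re-memorization, are defined in the paper itself. The sketch simply shows the core idea of re-initializing a fraction of neurons (rows) in a last-layer weight matrix before repair fine-tuning.

```python
import numpy as np

def erase_neurons(weights, fraction, rng):
    """Illustrative erasure step (hypothetical, not SANI's exact rule):
    re-initialize a random fraction of neurons (rows) of a last-layer
    weight matrix to small random values, disrupting fine-grained
    memorization while leaving the remaining neurons untouched."""
    w = weights.copy()
    n_neurons = w.shape[0]
    n_reset = int(round(fraction * n_neurons))
    # Pick which neurons to reset; SANI would use a targeted criterion here.
    reset_idx = rng.choice(n_neurons, size=n_reset, replace=False)
    # Fresh small-variance initialization for the selected rows.
    w[reset_idx] = rng.normal(0.0, 0.02, size=(n_reset, w.shape[1]))
    return w, reset_idx
```

After this step, the repair phase would fine-tune the modified model on the original task data while penalizing the reproduction of sensitive sequences, restoring utility without restoring the erased memorization.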