The proliferation of Large Language Models (LLMs) has driven considerable interest in fine-tuning them with domain-specific data to create specialized language models. However, such domain-specific fine-tuning data often contains contextually sensitive personally identifiable information (PII). Fine-tuning LLMs directly on this data without privacy protection risks leaking sensitive PII at inference time. To address this challenge, we introduce Contextual Privacy Protection Language Models (PrivacyMind), a novel paradigm for fine-tuning LLMs that effectively injects domain-specific knowledge while safeguarding inference-time data privacy. Our work offers a theoretical analysis for model design and benchmarks several techniques, including corpus curation, a penalty-based unlikelihood term in the training loss, and instruction-based tuning. Extensive experiments across diverse datasets and scenarios demonstrate the effectiveness of our approaches. In particular, instruction tuning with both positive and negative examples stands out as a promising method, effectively protecting private data while enhancing the model's knowledge. Our work underscores the potential of large language models as robust contextual privacy protection learners. The complete code and data for the work can be found at https://github.com/Yijia-Xiao/PrivacyMind.
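The penalty-based unlikelihood term mentioned in the abstract can be sketched as a token-level loss that keeps the standard negative log-likelihood for ordinary tokens but penalizes probability mass assigned to tokens flagged as contextual PII. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the `-log(1 - p)` penalty form, the PII-flagging interface, and the `alpha` weight are all assumptions.

```python
import math

def contextual_privacy_loss(token_probs, pii_mask, alpha=1.0):
    """Toy penalty-based unlikelihood loss (illustrative sketch).

    token_probs: model probability assigned to each target token.
    pii_mask: True where the target token is contextual PII.
    alpha: weight on the unlikelihood penalty (assumed hyperparameter).
    """
    total = 0.0
    for p, is_pii in zip(token_probs, pii_mask):
        if is_pii:
            # Unlikelihood term: discourage generating the PII token.
            total += alpha * -math.log(max(1.0 - p, 1e-9))
        else:
            # Standard negative log-likelihood for non-sensitive tokens.
            total += -math.log(max(p, 1e-9))
    return total / len(token_probs)

# Example: two ordinary tokens and one PII token the model still
# assigns high probability (0.7), which the penalty term punishes.
loss = contextual_privacy_loss([0.9, 0.8, 0.7], [False, False, True])
```

In this toy setup, lowering the model's probability on the PII token reduces the loss, while lowering it on ordinary tokens increases it, which is the qualitative behavior a contextual privacy objective needs.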