The ever-increasing adoption of Large Language Models (LLMs) in critical sectors such as finance, healthcare, and government raises privacy concerns regarding the handling of sensitive Personally Identifiable Information (PII) during training. In response, regulations such as the European Union's General Data Protection Regulation (GDPR) mandate the deletion of PII upon request, underscoring the need for reliable and cost-effective data removal solutions. Machine unlearning has emerged as a promising direction for selectively forgetting data points. However, existing unlearning techniques typically apply a uniform forgetting strategy that neither accounts for the varying privacy risks posed by different PII attributes nor reflects the associated business risks. In this work, we propose UnPII, the first PII-centric unlearning approach that prioritizes forgetting based on the risk of individual or combined PII attributes. To this end, we introduce the PII risk index (PRI), a composite metric that incorporates multiple risk dimensions: identifiability, sensitivity, usability, linkability, permanency, exposability, and compliancy. The PRI enables a nuanced evaluation of the privacy risks associated with PII exposure and can be tailored to align with organizational privacy policies. To support realistic assessment, we systematically construct a synthetic PII dataset comprising 1,700 PII instances that simulates realistic exposure scenarios. UnPII integrates seamlessly with established unlearning algorithms, such as Gradient Ascent, Negative Preference Optimization, and Direct Preference Optimization, without modifying their underlying principles. Our experimental results demonstrate that UnPII improves accuracy by up to 11.8%, utility by up to 6.3%, and generalizability by up to 12.4%, while incurring a modest average fine-tuning overhead of 27.5% during unlearning.
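To make the role of the PRI concrete, the sketch below illustrates one plausible way a composite score over the seven named risk dimensions could be computed and weighted according to an organizational policy. The abstract does not specify the aggregation function, scoring scale, or weighting scheme, so the weighted-average form, the attribute names, and the example values here are purely illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a PII risk index (PRI) as a policy-weighted average
# over the seven risk dimensions named in the abstract. The aggregation form,
# score scale, and example values are assumptions for illustration only.
from dataclasses import dataclass

DIMENSIONS = (
    "identifiability", "sensitivity", "usability",
    "linkability", "permanency", "exposability", "compliancy",
)

@dataclass
class PIIAttribute:
    name: str
    scores: dict  # assumed per-dimension risk scores in [0, 1]

def pri(attr: PIIAttribute, weights: dict) -> float:
    """Weighted average of per-dimension risk scores (assumed form)."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * attr.scores.get(d, 0.0) for d in DIMENSIONS) / total_weight

# An organization could tune the weights to its privacy policy, then
# prioritize unlearning for the PII attributes with the highest PRI.
policy_weights = {d: 1.0 for d in DIMENSIONS}
policy_weights["sensitivity"] = 2.0  # e.g., emphasize sensitivity

ssn = PIIAttribute("social_security_number", {d: 0.9 for d in DIMENSIONS})
print(f"PRI({ssn.name}) = {pri(ssn, policy_weights):.2f}")
```

Under such a scheme, ranking attributes (or combinations of attributes) by PRI would determine the order and intensity of forgetting, which is compatible with plugging the score into existing unlearning objectives without changing their underlying principles.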