Text anonymization is crucial for sharing sensitive data while maintaining privacy. Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models (LLMs), which have shown advanced capabilities in memorizing detailed information and patterns and in connecting disparate pieces of information. When defending against LLM-based re-identification attacks, anonymization can jeopardize the utility of the resulting data in downstream tasks -- the trade-off between privacy and data utility requires deeper understanding in the context of LLMs. This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component -- which work collaboratively to perform anonymization. To provide a practical model for large-scale and real-time environments, we distill the anonymization capabilities into a lightweight model using Direct Preference Optimization (DPO). Extensive experiments demonstrate that the proposed models outperform baseline models, showing robustness in reducing the risk of re-identification while preserving greater data utility in downstream tasks. Our code and dataset are available at https://github.com/UKPLab/arxiv2024-rupta.
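The collaborative loop among the three components can be pictured as iterative refinement: the privacy evaluator scores re-identification risk, the optimizer rewrites the text, and the utility evaluator measures how much downstream-relevant content survives. The following is a minimal sketch of that control flow only; all function names and scoring heuristics are assumptions for illustration (the paper's components are LLM-based, not the rule-based stand-ins used here).

```python
# Hypothetical sketch of the evaluator/optimizer anonymization loop.
# The identifier list and string heuristics below are illustrative stand-ins
# for LLM-based privacy/utility evaluation; they are NOT the authors' method.

KNOWN_IDENTIFIERS = ("Alice Smith", "Acme Corp", "1984-07-12")  # assumed example PII

def evaluate_privacy(text: str) -> float:
    """Stand-in privacy evaluator: fraction of known identifiers still present
    (higher means higher re-identification risk)."""
    found = sum(1 for ident in KNOWN_IDENTIFIERS if ident in text)
    return found / len(KNOWN_IDENTIFIERS)

def evaluate_utility(text: str, original: str) -> float:
    """Stand-in utility evaluator: crude token overlap with the original,
    a proxy for how much downstream-useful content survives."""
    orig_tokens, anon_tokens = set(original.split()), set(text.split())
    return len(orig_tokens & anon_tokens) / max(len(orig_tokens), 1)

def optimize(text: str) -> str:
    """Stand-in optimizer: redacts flagged spans (an LLM would instead
    rewrite the text to lower risk while preserving meaning)."""
    for ident in KNOWN_IDENTIFIERS:
        text = text.replace(ident, "[REDACTED]")
    return text

def anonymize(original: str, risk_threshold: float = 0.0, max_rounds: int = 3):
    """Iteratively rewrite until the privacy evaluator's risk score is
    at or below the threshold, then report risk and residual utility."""
    text = original
    for _ in range(max_rounds):
        if evaluate_privacy(text) <= risk_threshold:
            break
        text = optimize(text)
    return text, evaluate_privacy(text), evaluate_utility(text, original)
```

In the paper's setting, each stand-in above would be an LLM call, and the distilled DPO model would replace the whole loop with a single lightweight rewriter for large-scale, real-time use.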