Detecting personally identifiable information (PII) in user queries is critical for preserving privacy in question-answering (QA) systems. Current approaches redact all PII indiscriminately, disregarding the fact that some of it may be contextually relevant to the user's question, which degrades response quality. Large language models (LLMs) could help determine which PII is relevant, but their closed-source nature and lack of privacy guarantees make them unsuitable for processing sensitive data. To achieve privacy-preserving PII detection, we propose CAPID, a practical approach that fine-tunes a locally owned small language model (SLM) to filter sensitive information before it is passed to LLMs for QA. However, existing datasets do not capture the context-dependent relevance of PII needed to train such a model effectively. To fill this gap, we propose a synthetic data generation pipeline that leverages LLMs to produce a diverse, domain-rich dataset spanning multiple PII types and relevance levels. Using this dataset, we fine-tune an SLM to detect PII spans, classify their types, and estimate their contextual relevance. Our experiments show that relevance-aware PII detection with a fine-tuned SLM substantially outperforms existing baselines in span, relevance, and type accuracy while preserving significantly higher downstream utility under anonymization.