Dementia is a sensitive neurocognitive disorder that affects tens of millions of people worldwide, and its prevalence is expected to triple by 2050. Alarmingly, recent advances in dementia classification make it possible for adversaries to violate affected individuals' privacy by inferring this sensitive condition from speech transcripts alone. Existing text-obfuscation methods have never been applied to dementia and depend on large labeled datasets, which are challenging to collect for sensitive medical attributes. In this work, we bridge this research gap and tackle the above issues by leveraging Large Language Models (LLMs) with diverse prompt designs (zero-shot, few-shot, and knowledge-based) to obfuscate dementia in speech transcripts. Our evaluation shows that LLMs are more effective dementia obfuscators than competing methods. However, their billions of parameters make them hard to train, store, and share, and they are fragile, suffering from hallucination, refusal, and contradiction effects, among others. To mitigate these issues, we propose a novel method, DiDOTS, which distills knowledge from LLMs using a teacher-student paradigm and parameter-efficient fine-tuning. DiDOTS has one order of magnitude fewer parameters than its teacher LLM and can be fine-tuned with three orders of magnitude fewer parameters than full fine-tuning. Our evaluation shows that DiDOTS retains the privacy performance of LLMs, improving over prior work by 1.3x and 2.2x on two datasets, while human raters judge it better at preserving utility even compared to state-of-the-art paraphrasing models.
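The parameter-efficiency claim can be illustrated with a minimal low-rank adaptation (LoRA-style) sketch, assuming the paper's parameter-efficient fine-tuning method belongs to this family; the hidden size, rank, and scaling factor below are illustrative, not values from the paper:

```python
import numpy as np

# Hypothetical LoRA-style sketch: the pretrained weight W is frozen;
# only the small low-rank factors A and B are trained.
rng = np.random.default_rng(0)
d = 768           # hidden size of a small student model (assumption)
r, alpha = 8, 16  # low-rank dimension and scaling (assumption)

W = rng.standard_normal((d, d))         # frozen pretrained projection
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, computed without
    # materializing the full update matrix.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

trainable = A.size + B.size  # 12,288 trainable parameters
full = W.size                # 589,824 parameters in this one layer
print(trainable, full)
```

For a single layer this already trains roughly 50x fewer parameters; applied to only a few projection matrices across a full model, the gap between trainable and total parameters grows to the orders of magnitude reported in the abstract.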