The consequences of a healthcare data breach can be devastating for patients, providers, and payers. The average financial impact of a data breach has recently been estimated at close to USD 10 million. This is especially significant for healthcare organizations in India, which are managing rapid digitization while still establishing data governance procedures that align with the letter and spirit of the law. Computer-based systems for the de-identification of personal information are vulnerable to data drift, often rendering them ineffective in cross-institution settings. Therefore, a rigorous assessment of existing de-identification approaches against local health datasets is imperative to support the safe adoption of digital health initiatives in India. In this paper, using a small set of de-identified patient discharge summaries provided by an Indian healthcare institution, we report the nominal performance of de-identification algorithms (based on language models) trained on publicly available non-Indian datasets, pointing to a lack of cross-institutional generalization. Similarly, experimentation with off-the-shelf de-identification systems reveals potential risks associated with this approach. To overcome data scarcity, we explore generating synthetic clinical reports (using both publicly available and Indian summaries) by performing in-context learning over Large Language Models (LLMs). Our experiments demonstrate that using generated reports is an effective strategy for building high-performing de-identification systems with good generalization capabilities.
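The in-context learning setup mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the seed summaries, placeholder tags, and `build_prompt` helper are assumptions for exposition, not the paper's actual prompts or pipeline, and a real LLM call would consume the resulting prompt.

```python
# Minimal sketch of few-shot (in-context) prompt construction for generating
# synthetic discharge summaries. The seed texts and placeholder conventions
# below are illustrative, not taken from the paper's dataset.

SEED_SUMMARIES = [
    "Patient: [NAME]. Admitted on [DATE] with chest pain; discharged stable.",
    "Patient: [NAME], age [AGE]. Treated for dengue at [HOSPITAL]; recovered.",
]

def build_prompt(seed_summaries, n_new=1):
    """Assemble a few-shot prompt: each de-identified seed summary becomes an
    in-context example, followed by an instruction to produce new summaries."""
    examples = "\n\n".join(
        f"Example {i + 1}:\n{s}" for i, s in enumerate(seed_summaries)
    )
    instruction = (
        f"\n\nFollowing the style and structure of the examples above, write "
        f"{n_new} new synthetic discharge summary with realistic (fictional) "
        f"patient details filled in for the bracketed placeholders."
    )
    return examples + instruction

prompt = build_prompt(SEED_SUMMARIES)
# The prompt would then be sent to an LLM; because the synthetic personal
# details are generated (not real), the outputs can serve as labelled
# training data for a de-identification model without privacy risk.
```

The key design point is that the seed summaries are already de-identified, so the prompt exposes no real personal information to the LLM, while the generated reports carry known synthetic identifiers that can be annotated automatically.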