As data-driven technologies become increasingly prevalent in healthcare, concerns about data privacy and security grow correspondingly pressing. This thesis addresses the vulnerability of personalized healthcare models, particularly in the context of ECG monitoring, to adversarial attacks that compromise patient privacy. We propose an approach based on machine unlearning to mitigate the influence of exposed data points on machine learning models, thereby strengthening robustness against adversarial attacks while preserving individual privacy. Specifically, we investigate the efficacy of machine unlearning for personalized ECG monitoring, utilizing a dataset of clinical ECG recordings. Our methodology trains a deep neural classifier on ECG data and fine-tunes the model for individual patients. We demonstrate that fine-tuned models are susceptible to adversarial attacks such as the Fast Gradient Sign Method (FGSM), which can exploit the additional data points incorporated into personalized models. To address this vulnerability, we propose a machine unlearning algorithm that selectively removes sensitive data points from fine-tuned models, effectively enhancing their resilience to adversarial manipulation. Experimental results demonstrate that our approach mitigates the impact of adversarial attacks while maintaining the accuracy of the pre-trained model.
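The FGSM attack mentioned above perturbs an input in the direction of the sign of the loss gradient with respect to that input. As a minimal illustration only (the thesis attacks a deep ECG classifier; here a logistic-regression stand-in with hypothetical weights is used so the gradient is available in closed form):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic-regression stand-in.

    For binary cross-entropy, the gradient of the loss w.r.t. the
    input x is (sigmoid(w.x + b) - y) * w, so the attack adds
    eps * sign(gradient) to each input component."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy check with made-up weights and a made-up "ECG feature" vector:
w = [0.8, -0.5, 0.3, 0.9]
b = 0.0
x = [1.0, -1.0, 0.5, 0.2]
y = 1.0  # true class
x_adv = fgsm(x, y, w, b, eps=0.25)

def score(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)
# score(x_adv) < score(x): the perturbation lowers the model's
# confidence in the true class, as intended.
```

The same epsilon-scaled sign step applies unchanged to a deep network; only the gradient computation (backpropagation instead of the closed form) differs.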
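The abstract does not specify the unlearning algorithm itself; the sketch below shows only the simplest baseline consistent with the described pipeline, namely exact unlearning by redoing the fine-tuning step from the pre-trained checkpoint without the exposed points. All data, model, and hyperparameters here are hypothetical stand-ins (a tiny logistic model trained by gradient descent), not the thesis's method.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(points, w, lr=0.5, epochs=200):
    """Plain gradient descent on logistic loss, starting from weights w."""
    w = list(w)
    for _ in range(epochs):
        for x, y in points:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i, xi in enumerate(x):
                w[i] -= lr * (p - y) * xi
    return w

# Hypothetical data: a pre-training set and one patient's fine-tuning points.
random.seed(0)
pretrain = [([random.gauss(y, 0.5), 1.0], y) for y in (0.0, 1.0) for _ in range(20)]
patient = [([random.gauss(y, 0.5), 1.0], y) for y in (0.0, 1.0) for _ in range(5)]

w0 = train(pretrain, [0.0, 0.0])   # pre-trained model
w_ft = train(patient, w0)          # personalized (fine-tuned) model

# "Unlearn" compromised points by redoing fine-tuning without them,
# starting again from the pre-trained checkpoint w0.
exposed = {0, 5}                   # indices of the exposed data points
kept = [p for i, p in enumerate(patient) if i not in exposed]
w_unlearned = train(kept, w0)
```

Retraining from the checkpoint guarantees the exposed points leave no trace in the resulting weights; approximate unlearning methods trade this guarantee for lower cost.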