Federated learning (FL) allows a set of clients to collaboratively train a machine-learning model without exposing their local training samples. FL is therefore considered privacy-preserving and has been adopted by medical centers to train machine-learning models over private data. In this paper, however, we propose a novel attack named MediLeak that enables a malicious parameter server to recover high-fidelity patient images from the model updates uploaded by the clients. MediLeak requires the server to craft an adversarial model by adding a carefully designed module in front of the original model architecture. The adversarial model is published to the clients in the regular FL training process, and each client performs local training on it to generate the corresponding model updates. The updates are then sent back to the server as prescribed by the FL protocol, and our analytical method recovers private data from the parameter updates of the crafted module. We provide a comprehensive analysis of MediLeak and show that it can break state-of-the-art cryptographic secure aggregation protocols designed to protect FL systems against privacy inference attacks. We implement MediLeak on the MedMNIST and COVIDx CXR-4 datasets. The results show that MediLeak recovers private images nearly perfectly, achieving high recovery rates and quantitative scores. We further perform downstream tasks such as disease classification on the recovered data and observe no significant performance degradation compared to using the original training samples.
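The abstract does not detail how the crafted module leaks data, so as background only, the sketch below illustrates the general principle behind analytic gradient-inversion attacks of this kind: for a fully connected layer, the gradient with respect to the weights is an outer product of the upstream gradient and the private input, so dividing a weight-gradient row by the matching bias-gradient entry reveals the input exactly. All names here are illustrative; this is not MediLeak's actual module or recovery procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Private input held by a "client" (e.g., a flattened patient image).
x = rng.random(8)

# A linear layer y = W @ x + b with a toy scalar loss L = 0.5 * sum(y**2),
# so the upstream gradient dL/dy equals y.
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
y = W @ x + b

# Gradients the client would upload as its model update:
grad_b = y               # dL/db = dL/dy
grad_W = np.outer(y, x)  # dL/dW = (dL/dy) x^T, i.e., row i is grad_b[i] * x

# Server-side analytic recovery: any row with a nonzero bias gradient
# yields the private input after a single elementwise division.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # True
```

This only recovers a single input per nonzero gradient row; attacks in this family typically extend the idea with many crafted units to disentangle batched updates, and the paper's contribution lies in making such recovery work within real FL training and under secure aggregation.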