Federated Learning (FL) enables clients to train a joint model without disclosing their local data. Instead, they share their local model updates with a central server that moderates the process and creates a joint model. However, FL is susceptible to a series of privacy attacks. Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record. In this work, we propose a defense against SIAs that uses a trusted shuffler, without compromising the accuracy of the joint model. We employ a combination of unary encoding and shuffling, which effectively blends all clients' model updates and prevents the central server from inferring information about any individual client's model update. To address the increased communication cost of unary encoding, we employ quantization. Our preliminary experiments show promising results: the proposed mechanism notably decreases the accuracy of SIAs without compromising the accuracy of the joint model.
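The combination of quantization, unary encoding, and shuffling described above can be illustrated with a minimal sketch. This is not the paper's exact protocol; the function names, the value range `[-1, 1]`, and the number of quantization levels `k` are illustrative assumptions. The key idea is that after the trusted shuffler permutes the unary reports, the server sees only a histogram of quantized values and can no longer link any report to its originating client.

```python
import random

def quantize(value, k, lo=-1.0, hi=1.0):
    """Map a real-valued update coordinate to one of k discrete levels."""
    value = max(lo, min(hi, value))
    return int(round((value - lo) / (hi - lo) * (k - 1)))

def unary_encode(level, k):
    """One-hot (unary) encoding of a quantized level."""
    return [1 if i == level else 0 for i in range(k)]

def shuffle_and_aggregate(reports, k, lo=-1.0, hi=1.0):
    """Trusted shuffler + server-side aggregation (illustrative).

    The shuffle destroys the client -> report linkage; the server then
    recovers only the histogram of levels, from which it reconstructs
    the mean update coordinate.
    """
    random.shuffle(reports)
    histogram = [sum(r[i] for r in reports) for i in range(k)]
    total = sum(histogram)
    mean_level = sum(i * c for i, c in enumerate(histogram)) / total
    return lo + mean_level / (k - 1) * (hi - lo)

# Toy example: one coordinate of each client's model update.
updates = [0.2, -0.5, 0.9, 0.1]
k = 16  # quantization levels; each unary report is k bits
reports = [unary_encode(quantize(u, k), k) for u in updates]
approx_mean = shuffle_and_aggregate(reports, k)
```

Quantization bounds the communication cost: each coordinate costs `k` bits instead of one bit per possible value, and the aggregation error is bounded by the quantization step `(hi - lo) / (k - 1)`.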