Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices. However, recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks and the potential disclosure of sensitive information in individual model updates as well as the aggregated global model. This paper explores the inadequacies of existing FL protection measures when applied independently, and the challenges of creating effective compositions. Addressing these issues, we propose WW-FL, an innovative framework that combines secure multi-party computation (MPC) with hierarchical FL to guarantee data and global model privacy. One notable feature of WW-FL is its capability to prevent malicious clients from directly poisoning model parameters, confining them to less destructive data poisoning attacks. We furthermore provide a PyTorch-based FL implementation integrated with Meta's CrypTen MPC framework to systematically measure the performance and robustness of WW-FL. Our extensive evaluation demonstrates that WW-FL is a promising solution for secure and private large-scale federated learning.