Federated learning enables decentralized model training without sharing raw data, thereby preserving data privacy. However, its vulnerability to critical security threats, such as gradient inversion and model poisoning by malicious clients, remains unresolved. Existing solutions often address these issues separately, sacrificing either system robustness or model accuracy. This work introduces Tazza, a secure and efficient federated learning framework that addresses both challenges simultaneously. By leveraging the permutation equivariance and invariance properties of neural networks through weight shuffling and shuffled model validation, Tazza strengthens resilience against diverse poisoning attacks while ensuring data confidentiality and high model accuracy. Comprehensive evaluations on multiple datasets and embedded platforms show that Tazza achieves robust defense with up to 6.7x higher computational efficiency than alternative schemes, without compromising model performance.
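The permutation-invariance property the abstract leverages can be illustrated with a toy two-layer network. This is a minimal sketch, not Tazza's actual shuffling or validation protocol; all names and shapes here are illustrative. Permuting the hidden units (rows of the first weight matrix and bias, and the matching columns of the second) leaves the network's outputs unchanged, which is why shuffled weights can be exchanged without altering model behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
x = rng.normal(size=4)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

# Shuffle the hidden units with a random permutation:
# permute the rows of W1/b1 and, consistently, the columns of W2.
perm = rng.permutation(8)
W1p, b1p = W1[perm], b1[perm]
W2p = W2[:, perm]

y = forward(W1, b1, W2, b2, x)
y_shuffled = forward(W1p, b1p, W2p, b2, x)

# The two outputs coincide: the network is invariant to hidden-unit
# shuffling, so the shuffled weights obscure per-neuron structure
# without changing any prediction.
print(np.allclose(y, y_shuffled))
```

A framework exploiting this property can therefore validate or aggregate models in a shuffled weight space, which is the intuition behind the weight shuffling and shuffled model validation mentioned above.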