Federated Learning (FL) has emerged as a leading approach for decentralized machine learning, enabling multiple clients to collaboratively train a shared model without exchanging private data. While FL enhances data privacy, it remains vulnerable to inference attacks, such as gradient inversion and membership inference, during both the training and inference phases. Homomorphic Encryption (HE) provides a promising solution by encrypting model updates to protect against such attacks, but it introduces substantial communication overhead, slowing down training and increasing computational costs. To address these challenges, we propose QuanCrypt-FL, a novel algorithm that combines low-bit quantization and pruning techniques to enhance protection against attacks while significantly reducing computational costs during training. Further, we propose and implement mean-based clipping to mitigate quantization overflow and errors. By integrating these methods, QuanCrypt-FL creates a communication-efficient FL framework that ensures privacy protection with minimal impact on model accuracy, thereby improving both computational efficiency and attack resilience. We validate our approach on the MNIST, CIFAR-10, and CIFAR-100 datasets, demonstrating superior performance compared to state-of-the-art methods. QuanCrypt-FL consistently outperforms existing methods and matches Vanilla-FL in accuracy across varying numbers of clients. Further, QuanCrypt-FL achieves up to 9x faster encryption, 16x faster decryption, and 1.5x faster inference compared to BatchCrypt, with training time reduced by up to 3x.
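The mean-based clipping and low-bit quantization idea can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the clip factor, and the symmetric uniform quantization scheme are assumptions made for illustration; the actual QuanCrypt-FL pipeline would apply such a step to model updates before homomorphic encryption.

```python
import numpy as np

def mean_clip_quantize(update, num_bits=8, clip_factor=2.0):
    """Hypothetical sketch: clip an update tensor to a bound derived from
    the mean absolute value, then uniformly quantize to signed integers.
    Clipping before quantization bounds the dynamic range, which helps
    avoid overflow at low bit widths."""
    mu = float(np.mean(np.abs(update)))
    bound = clip_factor * mu                      # mean-based clipping bound (assumed form)
    clipped = np.clip(update, -bound, bound)
    scale = (2 ** (num_bits - 1) - 1) / bound     # map [-bound, bound] to int range
    q = np.round(clipped * scale).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate floating-point update from quantized integers."""
    return q.astype(np.float64) / scale

# Example: quantize a small simulated gradient update to 8 bits.
update = np.array([0.10, -0.20, 0.05, 0.30])
q, scale = mean_clip_quantize(update, num_bits=8)
recovered = dequantize(q, scale)
```

In a full pipeline, the integer tensor `q` (not the floats) would be what gets encrypted and transmitted, shrinking ciphertext payloads relative to encrypting full-precision values.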