In federated learning, particularly in cross-device scenarios, secure aggregation has recently gained popularity because it effectively defends against inference attacks by malicious aggregators. However, secure aggregation often incurs additional communication overhead and can slow the convergence of the global model, which is especially problematic in wireless network environments with extremely limited bandwidth. Achieving efficient communication compression while preserving secure aggregation is therefore a challenging and valuable problem. In this work, we propose FedMPQ, a novel uplink communication compression method for federated learning based on product quantization with multiple shared codebooks. Specifically, we use the updates from the previous round to generate sufficiently robust codebooks. Secure aggregation is then achieved through a trusted execution environment (TEE) or a trusted third party (TTP). In contrast to previous works, our approach is more robust in scenarios where the data is not independently and identically distributed (non-IID) and sufficient public data is unavailable. Experiments on the LEAF benchmark show that our method achieves 99% of the baseline's final accuracy while reducing uplink communication by 90-95%.
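To make the compression step concrete, below is a minimal sketch of product quantization applied to a flattened client update, assuming NumPy; the helper names (`pq_compress`, `pq_decompress`) are hypothetical and not from the paper. FedMPQ additionally maintains multiple shared codebooks generated from previous-round updates, which this sketch does not model: each client simply splits its update into subvectors, replaces each subvector with the index of its nearest codeword, and uploads only the indices.

```python
import numpy as np

def pq_compress(update, codebooks):
    """Compress a flattened model update with product quantization.

    update:    1-D array of length M * d (the client's update vector)
    codebooks: array of shape (M, K, d); one K-entry codebook per subvector
    Returns the per-subvector codeword indices (the compressed payload).
    """
    M, K, d = codebooks.shape
    subvectors = update.reshape(M, d)  # split the update into M chunks of size d
    # For each chunk, find the nearest codeword in its codebook.
    dists = np.linalg.norm(subvectors[:, None, :] - codebooks, axis=-1)  # (M, K)
    return dists.argmin(axis=1)  # (M,) integer indices

def pq_decompress(indices, codebooks):
    """Reconstruct an approximate update from codeword indices."""
    M = codebooks.shape[0]
    return codebooks[np.arange(M), indices].reshape(-1)

# Illustrative usage with arbitrary sizes: M=128 subvectors, K=256 codewords, d=4.
rng = np.random.default_rng(0)
codebooks = rng.standard_normal((128, 256, 4))
update = rng.standard_normal(128 * 4)
codes = pq_compress(update, codebooks)      # 128 one-byte indices instead of 2 KB of floats
approx = pq_decompress(codes, codebooks)    # server-side (or TEE-side) reconstruction
```

With K = 256 codewords, each d-dimensional float32 subvector (32d bits) is replaced by a single byte, so even d = 4 gives roughly a 16x (about 94%) reduction in uplink traffic, consistent with the 90-95% figure reported above.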