Federated learning has drawn widespread research interest, yet data heterogeneity across edge clients remains a key challenge that often degrades model performance. Existing methods improve robustness to data heterogeneity through model splitting and knowledge distillation. However, they overlook the limited communication bandwidth and computing power of clients, failing to strike an effective balance between mitigating data heterogeneity and accommodating constrained client resources. To address this limitation, we propose a personalized federated learning method based on cosine-sparsification parameter packing and dual-weighted aggregation (FedCSPACK), which makes effective use of limited client resources and reduces the impact of data heterogeneity on model performance. In FedCSPACK, each client packs its model parameters into packages and, based on cosine similarity, shares only the packages that contribute most to the global update, substantially reducing bandwidth requirements. The client then generates a mask matrix anchored to the shared packages, improving the alignment and aggregation efficiency of sparse updates on the server. Furthermore, directional and distribution-distance weights are embedded in the mask to realize a weight-guided aggregation mechanism, enhancing the robustness and generalization of the global model. Extensive experiments on four datasets against ten state-of-the-art methods demonstrate that FedCSPACK markedly improves communication and computational efficiency while maintaining high model accuracy.
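To make the client-side selection step concrete, the following Python sketch packs a flattened model update into fixed-size packages, scores each package by cosine similarity against the corresponding slice of the global update, and shares only the top-scoring fraction together with a binary mask. This is a minimal illustration under stated assumptions: the function and parameter names (`select_packages`, `num_packages`, `share_ratio`) and the exact packing and scoring details are hypothetical, not the paper's implementation.

```python
import numpy as np

def select_packages(local_update, global_update, num_packages, share_ratio=0.3):
    """Illustrative sketch (not the paper's code): partition a flattened update
    into packages, score each by cosine similarity to the matching slice of the
    global update, and share only the top-scoring fraction plus a binary mask."""
    # Partition both flattened updates into equally sized parameter packages.
    local_pkgs = np.array_split(local_update, num_packages)
    global_pkgs = np.array_split(global_update, num_packages)

    # Score each package by cosine similarity to the corresponding global slice.
    scores = []
    for lp, gp in zip(local_pkgs, global_pkgs):
        denom = np.linalg.norm(lp) * np.linalg.norm(gp) + 1e-12
        scores.append(float(np.dot(lp, gp) / denom))

    # Share only the highest-scoring packages; the rest stay local,
    # which is where the bandwidth saving comes from.
    k = max(1, int(share_ratio * num_packages))
    selected = np.argsort(scores)[-k:]

    # Binary mask marking which packages are shared, so the server can align
    # sparse updates from different clients before weighted aggregation.
    mask = np.zeros(num_packages, dtype=bool)
    mask[selected] = True
    return [local_pkgs[i] for i in selected], mask

# Example: a 1,000-parameter update split into 10 packages, sharing ~30%.
rng = np.random.default_rng(0)
pkgs, mask = select_packages(rng.normal(size=1000), rng.normal(size=1000), num_packages=10)
```

In this sketch the mask is a plain boolean vector; in FedCSPACK the mask additionally carries the directional and distribution-distance weights that guide server-side aggregation, a detail omitted here for brevity.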