In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training by computing parameter saliency scores separately on each client's local data in non-IID scenarios, then aggregating them to determine a global mask. Only the sparse model weights are communicated between the clients and the server each round. We validate SSFL's effectiveness on standard non-IID benchmarks, observing marked improvements in the sparsity--accuracy trade-off. Finally, we deploy our method in a real-world federated learning framework and report reduced communication time.
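The mask-selection step described above can be sketched as follows. This is a minimal illustration, assuming a SNIP-style saliency score (|weight × gradient|) and mean aggregation across clients; the paper's exact saliency definition and aggregation rule may differ.

```python
import numpy as np

def client_saliency(weights, grads):
    # Per-parameter saliency computed on one client's local data
    # (hypothetical SNIP-style score: |w * grad|).
    return np.abs(weights * grads)

def global_mask(saliencies, sparsity):
    # Aggregate client saliencies (here: simple mean) and keep the
    # top (1 - sparsity) fraction of parameters as the shared mask.
    agg = np.mean(saliencies, axis=0)
    k = int((1.0 - sparsity) * agg.size)
    thresh = np.sort(agg.ravel())[::-1][k - 1]
    return (agg >= thresh).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
# Simulated per-client gradients from non-IID local data.
sal = [client_saliency(w, rng.normal(size=w.shape)) for _ in range(3)]
mask = global_mask(np.stack(sal), sparsity=0.75)
# Only the masked (sparse) weights would be communicated each round.
sparse_w = w * mask
print(int(mask.sum()))  # 4 of 16 parameters kept at 75% sparsity
```

Because the mask is fixed before training and shared globally, each round's upload and download can carry only the retained coordinates, which is the source of the communication savings.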