In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training: parameter saliency scores are computed locally on each client's data in the non-IID setting and then aggregated to determine a global mask. Only the sparse model weights are trained and communicated between the clients and the server in each round. On standard benchmarks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, SSFL consistently improves the accuracy-sparsity trade-off, achieving more than a 20\% relative error reduction on CIFAR-10 over the strongest sparse baseline while cutting communication costs by $2\times$ relative to dense FL. Finally, in a real-world federated learning deployment, SSFL delivers over $2.3\times$ faster communication, underscoring its practical efficiency.
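A minimal sketch of the mask-construction step described above, assuming SNIP-style saliency scores $|w \cdot \partial L / \partial w|$ and simple averaging as the aggregation rule; the function names, the top-$k$ selection, and the `sparsity` parameter are illustrative, not the paper's exact procedure:

```python
import torch

def local_saliency(model, loss_fn, data, target):
    """Per-client saliency: |w * dL/dw| for each parameter tensor (SNIP-style; assumed)."""
    model.zero_grad()
    loss_fn(model(data), target).backward()
    return [(p * p.grad).abs().detach() for p in model.parameters()]

def global_mask(client_scores, sparsity):
    """Average per-client saliency scores and keep the top (1 - sparsity) fraction of weights."""
    # client_scores: list over clients, each a list of per-parameter score tensors.
    agg = [torch.stack(scores).mean(dim=0) for scores in zip(*client_scores)]
    flat = torch.cat([a.flatten() for a in agg])
    k = int((1.0 - sparsity) * flat.numel())        # number of weights to keep
    threshold = torch.topk(flat, k).values.min()    # k-th largest aggregated score
    return [(a >= threshold).float() for a in agg]  # binary mask per parameter tensor
```

Once the global mask is fixed, each client would train and exchange only the surviving weights, e.g. by applying `p.data.mul_(m)` after every local update so that masked-out entries stay zero and need not be transmitted.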