Adaptive moment estimation (Adam), a variant of Stochastic Gradient Descent (SGD), has gained widespread popularity in federated learning (FL) due to its fast convergence. However, federated Adam (FedAdam) algorithms suffer from a threefold increase in uplink communication overhead compared to federated SGD (FedSGD) algorithms, which arises from the need to transmit the local model updates together with the first- and second-moment estimates from distributed devices to the centralized server for aggregation. Motivated by this issue, we propose a novel sparse FedAdam algorithm called FedAdam-SSM, wherein distributed devices sparsify the updates of the local model parameters and moment estimates and then upload the sparse representations to the centralized server. To further reduce the communication overhead, FedAdam-SSM incorporates a shared sparse mask (SSM) into the sparsification of the local model parameter updates and moment estimates, eliminating the need for three separate sparse masks. Theoretically, we derive an upper bound on the divergence between the local model trained by FedAdam-SSM and the desired model trained by centralized Adam, which depends on the sparsification error and the imbalanced data distribution. By minimizing this divergence bound, we optimize the SSM to mitigate the learning performance degradation caused by the sparsification error. Additionally, we establish convergence bounds for FedAdam-SSM in both convex and non-convex objective function settings, and investigate the impact of the number of local epochs, the learning rate, and the sparsification ratio on the convergence rate of FedAdam-SSM. Experimental results show that FedAdam-SSM outperforms baselines in terms of both convergence rate (over 1.1$\times$ faster than the sparse FedAdam baselines) and test accuracy (over 14.5\% higher than the quantized FedAdam baselines).
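The core communication saving of the shared-mask idea can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's method: it builds a single top-$k$ mask from the magnitude of the model update (a simple heuristic, whereas FedAdam-SSM optimizes the SSM by minimizing the divergence bound) and applies the same mask to the model update and both moment estimates, so only one index set needs to be uploaded instead of three.

```python
import numpy as np

def shared_sparse_mask(delta, m, v, k):
    """Toy sketch: derive one top-k mask from |delta| and apply it to the
    model update (delta) and both moment estimates (m, v).  A single index
    set `idx` is shared by all three sparse tensors, so the uplink carries
    one set of indices rather than three separate ones."""
    idx = np.argpartition(np.abs(delta), -k)[-k:]   # indices of k largest magnitudes
    mask = np.zeros_like(delta, dtype=bool)
    mask[idx] = True
    return delta * mask, m * mask, v * mask, idx

# Illustrative tensors (hypothetical values, one flattened parameter vector)
delta = np.array([0.5, -0.01, 2.0, 0.1, -1.5])  # local model update
m     = np.array([0.4,  0.00, 1.0, 0.2, -0.9])  # first-moment estimate
v     = np.array([0.2,  0.01, 0.8, 0.05, 0.6])  # second-moment estimate

sd, sm, sv, idx = shared_sparse_mask(delta, m, v, k=2)
# All three sparse tensors are nonzero only at the shared index set.
```

With separate masks, each of the three tensors would ship its own index set; the shared mask trades a small sparsification error on `m` and `v` for a roughly threefold reduction in index overhead.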