Recent research highlights frequent model communication as a significant bottleneck to the efficiency of decentralized machine learning (ML), especially for large-scale and over-parameterized neural networks (NNs). To address this, we present Malcom-PSGD, a novel decentralized ML algorithm that combines gradient compression techniques with model sparsification. We promote model sparsity by adding $\ell_1$ regularization to the objective and present a decentralized proximal SGD method for training. Our approach employs vector source coding and dithering-based quantization for the compressed gradient communication of sparsified models. Our analysis demonstrates that Malcom-PSGD achieves a convergence rate of $\mathcal{O}(1/\sqrt{t})$ with respect to the number of iterations $t$, assuming a constant consensus step size and learning rate. This result is supported by our proof of convergence for non-convex compressed proximal SGD methods. Additionally, we conduct a bit analysis, providing a closed-form expression for the communication cost of Malcom-PSGD. Numerical results verify our theoretical findings and demonstrate that our method reduces communication costs by approximately $75\%$ compared with the state of the art.
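As a brief illustration of the setup described above (a sketch only; the notation, mixing weights $w_{ij}$, and step size $\eta$ are assumptions and not taken from the paper), the sparsity-promoting objective and a generic decentralized proximal SGD update can be written as
$$
\min_{x \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} f_i(x) + \lambda \|x\|_1,
\qquad
x_i^{t+1} = \operatorname{prox}_{\eta\lambda\|\cdot\|_1}\!\Big(\sum_{j=1}^{n} w_{ij}\, x_j^{t} - \eta\, g_i^{t}\Big),
$$
where $f_i$ denotes agent $i$'s local loss, $g_i^{t}$ a stochastic gradient, $w_{ij}$ the consensus (mixing) weights, and $\operatorname{prox}_{\eta\lambda\|\cdot\|_1}$ the soft-thresholding operator that induces sparsity in the local models.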