Decentralized learning provides a scalable alternative to parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time to improve global generalization, including when and how frequently devices synchronize. Counterintuitively, our empirical results show that concentrating the communication budget in the later stages of decentralized training markedly improves global generalization. More surprisingly, we find that fully connected communication at the final step, implemented as a single global merging, can significantly improve the generalization of decentralized learning under severe data heterogeneity. Our theoretical contribution, which explains these phenomena, is the first to establish that the globally merged model of decentralized SGD can match the convergence rate of parallel SGD. Technically, we reinterpret part of the discrepancy among local models, previously regarded as detrimental noise, as a constructive component essential for matching this rate. This work provides promising evidence that decentralized learning can generalize under high data heterogeneity and limited communication, while opening broad new avenues for model merging research. The code will be made publicly available.
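The single global merging mentioned above can be realized as plain uniform parameter averaging across all workers' local models. Below is a minimal sketch in PyTorch, not the paper's exact implementation; the function name global_merge and the local_models list are illustrative, and it assumes every worker holds a local copy of the same architecture.

```python
import copy
import torch


def global_merge(local_models):
    """One global merging step: uniformly average the floating-point entries
    of all local models' state dicts. Integer buffers (e.g., BatchNorm's
    num_batches_tracked) are kept from the first model unchanged."""
    merged = copy.deepcopy(local_models[0])
    avg_state = merged.state_dict()
    with torch.no_grad():
        for key, value in avg_state.items():
            if value.is_floating_point():
                avg_state[key] = torch.stack(
                    [m.state_dict()[key] for m in local_models]
                ).mean(dim=0)
    merged.load_state_dict(avg_state)
    return merged
```

In this sketch, the merged model plays the role of the globally merged model whose convergence is analyzed in the paper, while each entry of local_models is the outcome of a worker's decentralized training run.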