Decentralized federated learning (DFL) has emerged as a transformative server-free paradigm that enables collaborative learning over large-scale heterogeneous networks. However, it continues to face fundamental challenges, including data heterogeneity, restrictive assumptions required for theoretical analysis, and degraded convergence when standard communication- or privacy-enhancing techniques are applied. To overcome these drawbacks, this paper develops a novel algorithm, PaME (DFL by Partial Message Exchange). Its central principle is to exchange only randomly selected sparse coordinates between neighboring nodes. Consequently, PaME achieves substantial reductions in communication cost and preserves a high level of privacy without sacrificing accuracy. Moreover, rigorous analysis shows that the algorithm converges at a linear rate under two mild assumptions: the gradient is locally Lipschitz continuous and the communication matrix is doubly stochastic. These assumptions not only dispense with many restrictive conditions commonly imposed by existing DFL methods but also enable PaME to effectively address data heterogeneity. Finally, comprehensive numerical experiments demonstrate its superior performance over several representative decentralized learning algorithms.
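To make the exchange mechanism concrete, below is a minimal sketch of one partial message exchange between two neighbors, assuming a symmetric gossip-style averaging rule on a random coordinate subset. The function name `pame_partial_exchange`, the sampling fraction `frac`, and the mixing weight `w` are illustrative assumptions; the abstract does not specify PaME's exact update rule.

```python
import numpy as np

def pame_partial_exchange(x_i, x_j, frac=0.1, w=0.5, rng=None):
    """One hypothetical partial message exchange between neighbors i and j.

    Only a random fraction `frac` of coordinates is transmitted; the rest
    of each model stays local, which cuts communication cost and limits
    what a neighbor (or eavesdropper) observes per round. `w` plays the
    role of a symmetric mixing weight from a doubly stochastic matrix.
    NOTE: the averaging rule here is an illustrative assumption, not the
    paper's exact update.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x_i.size
    # Randomly select a sparse set of coordinates to exchange.
    k = max(1, int(frac * d))
    idx = rng.choice(d, size=k, replace=False)

    # Each node transmits only x[idx]: k floats instead of d.
    x_i_new, x_j_new = x_i.copy(), x_j.copy()
    x_i_new[idx] = (1 - w) * x_i[idx] + w * x_j[idx]
    x_j_new[idx] = (1 - w) * x_j[idx] + w * x_i[idx]
    return x_i_new, x_j_new

# Example: two 1000-dimensional models exchange 10% of coordinates.
rng = np.random.default_rng(0)
xi, xj = rng.standard_normal(1000), rng.standard_normal(1000)
xi, xj = pame_partial_exchange(xi, xj, frac=0.1, w=0.5, rng=rng)
```

With symmetric weights, each exchange preserves the network-wide average of the shared coordinates, which is the property a doubly stochastic communication matrix provides in the full-exchange setting.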