Federated Learning (FL) is gaining widespread interest for its ability to share knowledge while preserving privacy and reducing communication costs. Unlike Centralized FL, Decentralized FL (DFL) employs a network architecture that eliminates the need for a central server, allowing direct communication among clients and leading to significant communication resource savings. However, due to data heterogeneity, not all neighboring nodes contribute to enhancing the local client's model performance. In this work, we introduce \textbf{\emph{AFIND+}}, a simple yet efficient algorithm for sampling and aggregating neighbors in DFL, with the aim of leveraging collaboration to improve clients' model performance. AFIND+ identifies helpful neighbors, adaptively adjusts the number of selected neighbors, and strategically aggregates the sampled neighbors' models based on their contributions. Numerical results on real-world datasets with diverse data partitions demonstrate that AFIND+ outperforms other sampling algorithms in DFL and is compatible with most existing DFL optimization algorithms.
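The abstract does not spell out AFIND+'s scoring rule, so as a rough illustration of the sample-then-weighted-aggregate pattern it describes, the sketch below uses cosine similarity between model vectors as a stand-in contribution score and a threshold as a stand-in for the adaptive neighbor count. Every name here (`sample_and_aggregate`, `tau`, the similarity criterion, the mixing weight) is a hypothetical placeholder, not the paper's actual method.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    # Cosine similarity between two flattened model vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def sample_and_aggregate(local, neighbors, tau=0.0):
    """One hypothetical DFL round of neighbor sampling and aggregation.

    `local` is the client's flattened model; `neighbors` maps a neighbor
    id to its model vector. Neighbors scoring above `tau` are kept (so
    the number selected adapts to how many are actually helpful), then
    averaged with weights proportional to their scores.
    """
    scores = {j: cosine(local, m) for j, m in neighbors.items()}
    kept = {j: s for j, s in scores.items() if s > tau}  # adaptive count
    if not kept:
        return local  # no helpful neighbor this round; keep local model
    w = np.array(list(kept.values()))
    w = w / w.sum()  # contribution-proportional aggregation weights
    stacked = np.stack([neighbors[j] for j in kept])
    # Mix the local model with the weighted neighbor average (50/50 here
    # purely for illustration).
    return 0.5 * local + 0.5 * (w[:, None] * stacked).sum(axis=0)

# Toy usage: one client, four neighbors of varying heterogeneity.
local = rng.normal(size=8)
neighbors = {j: local + rng.normal(scale=s, size=8)
             for j, s in enumerate([0.1, 0.2, 1.5, 3.0])}
print(sample_and_aggregate(local, neighbors, tau=0.3).round(3))
\end{verbatim}

In this toy run, the two low-noise neighbors clear the threshold while the highly heterogeneous ones are filtered out, mirroring the abstract's point that not all neighbors help under data heterogeneity.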