Fully decentralised federated learning enables collaborative training of individual machine learning models across a distributed network of communicating devices while keeping the training data localised on each node. This approach avoids central coordination, enhances data privacy and eliminates the risk of a single point of failure. Our research highlights that the effectiveness of decentralised federated learning is significantly influenced by the network topology of the connected devices and by the initial conditions of the learning models. We propose a strategy for uncoordinated initialisation of the artificial neural networks based on the distribution of eigenvector centralities in the underlying communication network, leading to radically improved training efficiency. Additionally, our study explores the scaling behaviour and the choice of environmental parameters under the proposed initialisation strategy. This work paves the way for more efficient and scalable artificial neural network training in a distributed and uncoordinated environment, offering a deeper understanding of the intertwined roles of network structure and learning dynamics.
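To make the idea concrete, the following is a minimal sketch of what a centrality-aware, uncoordinated initialisation could look like, assuming Python with `networkx` and `numpy`. The function name `centrality_scaled_init` and the specific mapping from eigenvector centrality to initialisation variance are hypothetical illustrations; the abstract does not specify the paper's actual scheme, only that initialisation is informed by the distribution of eigenvector centralities.

```python
import networkx as nx
import numpy as np

def centrality_scaled_init(graph, layer_shape, base_std=0.05, seed=None):
    """Per-node initial weights whose spread tracks each node's
    eigenvector centrality in the communication graph.

    Hypothetical mapping for illustration only: each node can compute
    this locally once it knows (or estimates) its own centrality, so
    no central coordinator is required.
    """
    centrality = nx.eigenvector_centrality_numpy(graph)
    c_max = max(centrality.values())
    rng = np.random.default_rng(seed)
    weights = {}
    for node, c in centrality.items():
        # Assumed rule: more central nodes initialise with a wider
        # weight distribution; the paper's actual mapping may differ.
        std = base_std * (c / c_max)
        weights[node] = rng.normal(0.0, std, size=layer_shape)
    return weights

# Example: a scale-free communication topology with heterogeneous centralities.
G = nx.barabasi_albert_graph(n=20, m=3, seed=42)
init = centrality_scaled_init(G, layer_shape=(32, 16), seed=0)
print({node: round(w.std(), 4) for node, w in list(init.items())[:3]})
```

In this sketch, hub nodes draw initial weights from a wider distribution than peripheral ones; other monotone mappings from centrality to variance would fit the abstract's description equally well.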