Fully decentralised federated learning enables the collaborative training of individual machine learning models across distributed devices in a communication network while keeping the training data localised. This approach enhances data privacy and eliminates both the single point of failure and the need for central coordination. Our research shows that the effectiveness of decentralised federated learning is strongly influenced by the network topology of the connected devices. We propose a strategy for the uncoordinated initialisation of the artificial neural networks that leverages the distribution of eigenvector centralities of the nodes of the underlying communication network, leading to radically improved training efficiency. Additionally, we explore the scaling behaviour and the choice of environmental parameters under the proposed initialisation strategy. This work paves the way for more efficient and scalable training of artificial neural networks in distributed, uncoordinated environments and offers a deeper understanding of the intertwined roles of network structure and learning dynamics.
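To make the centrality-based idea concrete, the sketch below computes eigenvector centralities of a small communication graph by power iteration. The abstract does not specify how the centrality distribution is mapped onto network initialisation, so the final step, scaling each node's initial weight variance by its centrality relative to the mean, is a purely hypothetical illustration, not the authors' method; the graph, `base_std`, and the weight shape are likewise made-up example values.

```python
import numpy as np

def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Power iteration on the adjacency matrix; returns the dominant
    (Perron) eigenvector, normalised to unit L1 norm, one score per node."""
    n = adj.shape[0]
    x = np.ones(n) / n  # positive start vector keeps iterates positive
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x / x.sum()

# Toy communication network: a ring of 5 devices plus one shortcut edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n = 5
adj = np.zeros((n, n))
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0

cent = eigenvector_centrality(adj)

# Hypothetical use of the centrality distribution (illustrative only):
# draw each node's initial weights with a standard deviation scaled by
# that node's centrality relative to the network mean.
rng = np.random.default_rng(0)
base_std = 0.05
init_weights = {
    node: rng.normal(0.0, base_std * cent[node] / cent.mean(), size=(4, 4))
    for node in range(n)
}
```

In this toy graph the two shortcut endpoints (nodes 0 and 2) have the highest centrality, so under the illustrative rule they would start with the widest weight distributions; any real scheme would have to justify such a mapping empirically.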