Fully decentralized learning algorithms are still at an early stage of development. Building modular Gossip Learning strategies is non-trivial because of the convergence challenges and Byzantine faults intrinsic to decentralized systems. Our contribution provides a novel means of simulating custom Gossip Learning systems by leveraging the state-of-the-art Flower Framework. Specifically, we introduce GLow, which allows researchers to train devices and assess their scalability and convergence across custom network topologies before committing to a physical deployment. The Flower Framework is selected because it is a simulation-capable library with a very active Federated Learning research community. However, Flower includes only vanilla Federated Learning strategies and is therefore not originally designed to run simulations without a centralized authority. GLow fills this gap and makes simulation of Gossip Learning systems possible. On the MNIST and CIFAR10 datasets, GLow achieves accuracies above 0.98 and 0.75, respectively. More importantly, in all designed experiments GLow performs comparably, in terms of accuracy and convergence, to its analogous Centralized and Federated approaches.
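To make the decentralized setting concrete, the sketch below simulates the model-averaging step that Gossip Learning builds on, over a custom topology. This is not GLow's or Flower's API; it is a minimal, hypothetical stand-in in which each node deterministically averages its parameter vector with those of its neighbors, and a connected topology drives all nodes toward consensus without any central server.

```python
# Minimal sketch of decentralized parameter averaging over a custom
# topology. Hypothetical illustration only -- not GLow's actual API.

def gossip_round(params, topology):
    """One synchronous round: each node averages its parameters with
    all of its neighbors' parameters from the previous round."""
    new_params = {}
    for node, neighbors in topology.items():
        group = [params[node]] + [params[peer] for peer in neighbors]
        # Element-wise mean over the node's own and neighbors' vectors.
        new_params[node] = [sum(vals) / len(vals) for vals in zip(*group)]
    return new_params

# Ring topology over 4 nodes; each node starts with a distinct "model"
# (a one-element parameter vector), mimicking non-identical local training.
topology = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
params = {n: [float(n)] for n in topology}

for _ in range(50):
    params = gossip_round(params, topology)

# Because the mixing here is symmetric, every node converges to the
# global mean of the initial parameters (1.5 in this toy example).
```

In a real Gossip Learning system each node would interleave local training with such exchanges, and peers would be contacted asynchronously rather than in lockstep rounds; the point of the sketch is only that convergence emerges from neighbor-to-neighbor communication alone.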