Traditional federated learning mainly focuses on the parallel setting (parallel federated learning, PFL), which can incur significant communication and computation costs. In contrast, one-shot and sequential federated learning (SFL) have emerged as paradigms that alleviate these costs. However, non-IID data (data that are not independent and identically distributed) remains a significant challenge in one-shot and SFL settings, exacerbated by the restricted communication between clients. In this paper, we improve one-shot sequential federated learning on non-IID data by proposing a local model diversity-enhancing strategy. Specifically, to leverage local model diversity for improving model performance, we introduce a local model pool for each client, comprising diverse models generated during local training, and propose two distance measures that further enhance model diversity and mitigate the effect of non-IID data. Consequently, the proposed framework improves global model performance while maintaining low communication costs. Extensive experiments demonstrate that our method outperforms existing one-shot PFL methods and achieves higher accuracy than state-of-the-art one-shot SFL methods on both label-skew and domain-shift tasks (e.g., an accuracy improvement of more than 6% on the CIFAR-10 dataset).
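The abstract only sketches the mechanism, and the two distance measures and pool-update rule are not specified here. The following is a minimal, purely illustrative Python sketch of the general idea, assuming a hypothetical L2 parameter-space distance with a threshold-based admission rule and simple averaging over the pool; the class and parameter names (`LocalModelPool`, `min_dist`, etc.) are invented for illustration and are not from the paper.

```python
import numpy as np

class LocalModelPool:
    """Per-client pool of local-model snapshots, kept diverse via a
    distance measure in parameter space (illustrative sketch only)."""

    def __init__(self, capacity=5, min_dist=0.5):
        self.capacity = capacity   # maximum number of snapshots kept
        self.min_dist = min_dist   # admission threshold (hypothetical)
        self.models = []           # flattened parameter vectors

    def maybe_add(self, params):
        """Admit a snapshot only if it is sufficiently far from every
        model already in the pool; evict the oldest when full."""
        params = np.asarray(params, dtype=np.float64).ravel()
        if self.models:
            nearest = min(np.linalg.norm(params - m) for m in self.models)
            if nearest < self.min_dist:
                return False       # too similar: adds no diversity
        if len(self.models) == self.capacity:
            self.models.pop(0)     # drop the oldest snapshot
        self.models.append(params)
        return True

    def aggregate(self):
        """Average the pooled snapshots before passing the model on to
        the next client in the sequential (SFL) chain."""
        return np.mean(self.models, axis=0)


# Toy usage: snapshots taken after each local epoch of a 10-parameter model.
pool = LocalModelPool(capacity=3, min_dist=0.5)
rng = np.random.default_rng(0)
weights = rng.normal(size=10)
for epoch in range(6):
    weights += 0.3 * rng.normal(size=10)   # stand-in for an SGD update
    added = pool.maybe_add(weights.copy())
    print(f"epoch {epoch}: added={added}, pool size={len(pool.models)}")
print("aggregated parameters:", pool.aggregate()[:3], "...")
```

In the actual method, each client in the sequential chain would train the received model locally, use such a pool to retain diverse intermediate models, and pass an aggregate on to the next client; the single L2 threshold above merely stands in for the paper's two diversity-enhancing distance measures.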