Although Split Federated Learning (SFL) enables effective knowledge sharing among resource-constrained clients, it suffers from low training accuracy because it neglects data heterogeneity and catastrophic forgetting. To address this issue, we propose a novel SFL approach named KoReA-SFL, which adopts a multi-model aggregation mechanism to alleviate gradient divergence caused by heterogeneous data and a knowledge replay strategy to deal with catastrophic forgetting. Specifically, in KoReA-SFL the cloud servers (i.e., the fed server and the main server) maintain multiple branch model portions, rather than a single global portion, for local training, together with an aggregated master-model portion for knowledge sharing among the branch portions. To avoid catastrophic forgetting, the main server of KoReA-SFL selects multiple assistant devices for knowledge replay according to the training data distribution of each server-side branch-model portion. Experimental results in both non-IID and IID scenarios demonstrate that KoReA-SFL significantly outperforms conventional SFL methods (by up to 23.25\% test accuracy improvement).
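The two mechanisms the abstract names can be illustrated with a minimal sketch. The aggregation step below is a FedAvg-style weighted average of branch model portions into the master portion, and the assistant selection scores candidate devices by how well their label distributions cover the classes under-represented in a branch's training data. All function names, the uniform-coverage criterion, and the NumPy parameter representation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def aggregate_master(branch_portions, weights=None):
    """Merge K branch model portions into one master portion.

    branch_portions: list of dicts mapping parameter name -> np.ndarray.
    weights: optional per-branch aggregation weights (default: uniform),
    giving a FedAvg-style weighted average per parameter tensor.
    """
    k = len(branch_portions)
    weights = weights if weights is not None else [1.0 / k] * k
    return {
        name: sum(w * p[name] for w, p in zip(weights, branch_portions))
        for name in branch_portions[0]
    }

def select_assistants(branch_label_dist, device_label_dists, num_assistants=2):
    """Pick assistant devices for knowledge replay (illustrative criterion).

    branch_label_dist: np.ndarray, the label distribution seen by one
    server-side branch portion. device_label_dists: dict mapping device id
    -> label distribution. Devices whose data best cover the branch's
    deficit relative to a uniform target score highest.
    """
    num_classes = len(branch_label_dist)
    target = np.full(num_classes, 1.0 / num_classes)
    deficit = np.maximum(0.0, target - branch_label_dist)
    scores = {dev: float(np.dot(deficit, dist))
              for dev, dist in device_label_dists.items()}
    return sorted(scores, key=scores.get, reverse=True)[:num_assistants]
```

For example, a branch trained mostly on class 0 (distribution `[0.8, 0.1, 0.1]`) has a deficit on classes 1 and 2, so a device holding mostly class-1/2 samples is selected over one holding mostly class-0 samples.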