Mobile devices contribute more than half of the world's web traffic, providing massive and diverse data for powering various federated learning (FL) applications. To avoid the communication bottleneck at the parameter server (PS) and accelerate the training of large-scale models on resource-constrained workers in edge computing (EC) systems, we propose a novel split federated learning (SFL) framework, termed ParallelSFL. Concretely, we split the entire model into a bottom submodel and a top submodel, and divide the participating workers into multiple clusters, each of which collaboratively performs the SFL training procedure and exchanges the entire model with the PS. However, given the statistical and system heterogeneity in edge systems, it is challenging to assign suitable workers to specific clusters for efficient model training. To address this challenge, we develop an effective clustering strategy by optimizing a utility function related to training efficiency and model accuracy. Specifically, ParallelSFL partitions workers into different clusters under heterogeneity constraints, thereby promoting both model accuracy and training efficiency. Meanwhile, ParallelSFL assigns a diverse and appropriate local updating frequency to each cluster to further mitigate system heterogeneity. Extensive experiments conducted on a physical platform with 80 NVIDIA Jetson devices show that, compared to the baselines, ParallelSFL reduces traffic consumption by at least 21%, speeds up model training by at least 1.36x, and improves model accuracy by at least 5% in heterogeneous scenarios.
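As a minimal sketch of the model-splitting step described above, the following Python (PyTorch) snippet splits a sequential model into a bottom submodel, which stays on the worker, and a top submodel, which is offloaded; the worker-side activations ("smashed data") cross the split point. The CNN layers, `split_model` helper, and `split_index` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def split_model(layers, split_index):
    """Split a list of layers into a bottom submodel (kept on the worker)
    and a top submodel (offloaded), at the given layer index.
    Hypothetical helper for illustration only."""
    bottom = nn.Sequential(*layers[:split_index])
    top = nn.Sequential(*layers[split_index:])
    return bottom, top

# Hypothetical small CNN used only to demonstrate the split.
layers = [
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 10),
]
bottom, top = split_model(layers, split_index=4)

x = torch.randn(8, 3, 32, 32)  # a batch of worker-side inputs
smashed = bottom(x)            # activations sent across the split point
logits = top(smashed)          # top submodel completes the forward pass
print(smashed.shape, logits.shape)
```

In an actual SFL round, the backward pass would mirror this flow: gradients of the smashed data are sent back across the split so the bottom submodel can update locally, while the cluster periodically exchanges the full model with the PS.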