As AI models grow in size, it has become increasingly challenging to deploy federated learning (FL) on resource-constrained edge devices. To tackle this issue, split federated learning (SFL) has emerged as an FL framework that reduces the workload on edge devices via model splitting, and it has received extensive attention from the research community in recent years. Nevertheless, most prior works on SFL focus only on a two-tier architecture, without harnessing multi-tier cloud-edge computing resources. In this paper, we analyze and optimize the learning performance of SFL in multi-tier systems. Specifically, we propose the hierarchical SFL (HSFL) framework and derive its convergence bound. Based on the theoretical results, we formulate a joint optimization problem over model splitting (MS) and model aggregation (MA). To solve this challenging problem, we decompose it into MS and MA subproblems that can be solved via an iterative descending algorithm. Simulation results demonstrate that the tailored algorithm can effectively optimize MS and MA for SFL in virtually any multi-tier system.
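As a rough illustration of the decomposition strategy described above (not the paper's actual algorithm or objective), an alternating scheme that iterates between the MS and MA subproblems might look like the following sketch. The cost function and the discrete decision spaces here are hypothetical placeholders standing in for the convergence-bound-based objective.

```python
# Hypothetical sketch of an iterative descending (alternating) scheme for a
# joint model-splitting (MS) / model-aggregation (MA) problem.
# The objective below is a toy stand-in, NOT the paper's derived bound.

def objective(split_layer: int, agg_interval: int) -> float:
    # Toy cost: deeper splits add on-device compute, short aggregation
    # intervals add communication, long ones add drift. Purely illustrative.
    device_cost = 2.0 * split_layer
    comm_cost = 10.0 / agg_interval
    drift_cost = 0.5 * agg_interval
    return device_cost + comm_cost + drift_cost

def solve_ms(agg_interval: int, layers: range) -> int:
    # MS subproblem: best split layer for a fixed aggregation interval.
    return min(layers, key=lambda s: objective(s, agg_interval))

def solve_ma(split_layer: int, intervals: range) -> int:
    # MA subproblem: best aggregation interval for a fixed split layer.
    return min(intervals, key=lambda a: objective(split_layer, a))

def alternate(layers=range(1, 9), intervals=range(1, 21), max_iter=20):
    # Iterative descent: each subproblem solve can only lower the
    # objective, so the iteration stops at a coordinate-wise minimum.
    s, a = layers[0], intervals[0]
    for _ in range(max_iter):
        s_new = solve_ms(a, layers)
        a_new = solve_ma(s_new, intervals)
        if (s_new, a_new) == (s, a):
            break
        s, a = s_new, a_new
    return s, a

print(alternate())
```

Because each subproblem is solved exactly over its own variable, the objective is non-increasing across iterations, which is the usual justification for this kind of block-coordinate descent converging to a stationary point of the joint problem.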