With the prevalence of Large Language Models (LLMs), Split Federated Learning (SFL), which partitions a learning model into server-side and client-side sub-models, has emerged as an appealing technology to relieve network edge clients of the heavy computational burden. However, existing SFL frameworks require frequent uploading of smashed data and downloading of gradients between the server and each client, incurring severe communication overhead. To address this issue, this work proposes a novel communication- and computation-efficient SFL framework that allows dynamic model splitting (i.e., selection of the cutting point between the server- and client-side models) and broadcasting of aggregated smashed-data gradients. We theoretically analyze the impact of the cutting point selection on the convergence rate of the proposed framework, revealing that a split with a smaller client-side model yields better convergence performance, and vice versa. Based on these insights, we formulate an optimization problem that jointly optimizes the model convergence rate and training latency under data-privacy considerations via a joint Cutting point selection, Communication and Computation resource allocation (CCC) strategy. To solve this mixed-integer nonlinear programming (MINLP) problem, we develop an algorithm that integrates the Double Deep Q-learning Network (DDQN) with convex optimization methods. Extensive experiments across various datasets validate our theoretical analyses, and the numerical results demonstrate the effectiveness and superiority of the proposed communication-efficient SFL over existing schemes, including parallel split learning and traditional SFL mechanisms.
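To make the splitting operation concrete, the following is a minimal PyTorch sketch of cutting a model into client-side and server-side parts at a selectable cutting point, and of the smashed-data upload and gradient download that the abstract refers to. The layer sizes, the `split_at` helper, and the chosen cut index are illustrative assumptions, not the paper's actual architecture or algorithm.

```python
# Minimal sketch of split training at a tunable cutting point
# (hypothetical model and cut index; for illustration only).
import torch
import torch.nn as nn

# Full model as a stack of layers; the cutting point decides how many
# layers stay on the client versus the server.
full_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def split_at(model: nn.Sequential, cut: int):
    """Split a sequential model at layer index `cut`:
    layers [0, cut) run on the client, the rest on the server."""
    layers = list(model.children())
    client_side = nn.Sequential(*layers[:cut])
    server_side = nn.Sequential(*layers[cut:])
    return client_side, server_side

client_model, server_model = split_at(full_model, cut=3)

# Forward pass: the client uploads "smashed data" (activations at the
# cut layer); the server finishes the forward pass and computes the loss.
x = torch.randn(8, 1, 28, 28)          # dummy mini-batch
y = torch.randint(0, 10, (8,))
smashed = client_model(x)               # uploaded to the server
logits = server_model(smashed)
loss = nn.functional.cross_entropy(logits, y)

# Backward pass: the server computes the gradient w.r.t. the smashed
# data, which the client downloads to update its own layers. In the
# proposed framework, the server would aggregate such gradients across
# clients and broadcast the average instead of unicasting per client.
smashed.retain_grad()
loss.backward()
grad_to_client = smashed.grad           # downloaded by the client
```

A smaller `cut` shrinks the client-side model (less edge computation, and, per the convergence analysis above, better convergence), while the size of `smashed` and `grad_to_client` at the chosen layer determines the per-round communication cost that the CCC strategy trades off.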