As a paradigm of distributed machine learning, federated learning typically requires every edge device to train a complete model locally. However, as artificial intelligence models grow in scale, the limited resources of edge devices often become a bottleneck for efficient fine-tuning. To address this challenge, federated split learning (FedSL) enables collaborative training between edge devices and the server through model splitting. In this paper, we propose a lightweight FedSL scheme that further alleviates the training burden on resource-constrained edge devices by dynamically pruning the client-side model and applying quantized gradient updates to reduce computation overhead. In addition, we apply random dropout to the activations at the split layer to reduce communication overhead. We provide a theoretical analysis quantifying the convergence performance of the proposed scheme. Finally, simulation results verify the effectiveness and advantages of the proposed lightweight FedSL in wireless network environments.
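The three lightweight mechanisms named above can be sketched in isolation as follows. This is a minimal illustration only: the function names, the magnitude-based pruning criterion, the inverted-dropout rescaling, and the uniform symmetric quantizer are assumptions for exposition, not the paper's exact design.

```python
import random

def prune_weights(weights, keep_ratio):
    """Magnitude-based pruning sketch: zero out all but the
    largest-magnitude `keep_ratio` fraction of the weights."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def dropout_activations(acts, drop_prob, rng):
    """Random dropout at the split layer: each activation is dropped
    with probability `drop_prob`; survivors are rescaled so the
    expected value is preserved (inverted dropout)."""
    scale = 1.0 / (1.0 - drop_prob)
    return [a * scale if rng.random() >= drop_prob else 0.0
            for a in acts]

def quantize_gradient(grad, bits=4):
    """Uniform symmetric quantization of a gradient vector to
    roughly 2**bits levels, scaled by the largest magnitude."""
    g_max = max(abs(g) for g in grad) or 1.0
    half = 2 ** (bits - 1) - 1  # e.g. 7 representable steps for 4 bits
    return [round(g / g_max * half) / half * g_max for g in grad]
```

In a FedSL round under these assumptions, the client would prune its local sub-model, forward-propagate, apply `dropout_activations` before uploading the split-layer activations, and return `quantize_gradient` outputs instead of full-precision updates.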