In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and deployed alternately across massive mobile devices and an edge server. Considering the device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize the training delay and energy consumption. Simulation results demonstrate that, compared to the benchmarks, the proposed approach reduces the average training delay and the server's energy consumption by 70.8\% and 53.1\%, respectively.
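To make the cut-layer decision concrete, the following is a minimal sketch of how a per-device cut layer might be chosen to trade off delay and energy. The function name, the per-layer compute/activation profiles, the linear delay model, and the CMOS-style energy term are all illustrative assumptions, not the letter's actual CARD formulation.

```python
def select_cut_layer(layer_flops, act_sizes, f_dev, f_srv, rate, kappa,
                     w_delay=1.0, w_energy=1.0):
    """Pick the cut layer minimizing a weighted sum of training delay
    and device energy (hypothetical model, not the letter's exact one).

    layer_flops -- FLOPs of each LLM layer
    act_sizes   -- activation size (bits) sent uplink after each layer
    f_dev/f_srv -- device/server computing speeds (FLOP/s)
    rate        -- uplink transmission rate (bit/s)
    kappa       -- effective switched-capacitance coefficient
    """
    best_cut, best_cost = None, float("inf")
    for cut in range(1, len(layer_flops)):
        dev_flops = sum(layer_flops[:cut])   # device-side forward compute
        srv_flops = sum(layer_flops[cut:])   # server-side forward compute
        # delay: device compute + activation upload + server compute
        delay = dev_flops / f_dev + act_sizes[cut - 1] / rate + srv_flops / f_srv
        # device energy via a simple kappa * FLOPs * f^2 CMOS model
        energy = kappa * dev_flops * f_dev ** 2
        cost = w_delay * delay + w_energy * energy
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost
```

Under this model, a slower uplink or a weaker device pushes the optimal cut toward earlier layers, which is the qualitative behavior a cut-layer decision algorithm exploits.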