In 5G smart cities, edge computing provides nearby computing services for end devices, and large-scale models (e.g., GPT and LLaMA) can be deployed at the network edge to improve service quality. However, due to constraints on memory size and computing capacity, it is difficult to run these large-scale models on a single edge node. To satisfy these resource constraints, a large-scale model can be partitioned into multiple sub-models deployed across multiple edge nodes, and tasks are then offloaded to these nodes for collaborative inference. We further incorporate an early-exit mechanism to accelerate inference. However, system heterogeneity and environmental dynamics significantly affect inference efficiency. To address these challenges, we theoretically analyze the coupled relationship between the task offloading strategy and the confidence thresholds, and develop a distributed algorithm, termed DTO-EE, based on this coupled relationship and convex optimization. DTO-EE enables each edge node to jointly optimize its offloading strategy and confidence threshold, achieving a favorable trade-off between response delay and inference accuracy. Experimental results show that DTO-EE reduces the average response delay by 21%-41% and improves inference accuracy by 1%-4% compared to the baselines.
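To make the early-exit idea concrete, the sketch below shows sequential inference over partitioned sub-models, where each partition's exit classifier returns early once its top-class confidence exceeds a threshold. This is a minimal illustration of the general mechanism only; the stage functions, logits, and threshold values are hypothetical and do not represent the paper's DTO-EE algorithm or its optimized thresholds.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_infer(sub_models, x, threshold):
    """Run sub-models (one per edge node) in sequence.

    Each stage returns (transformed_features, exit_logits). Inference
    stops at the first exit whose top-class confidence reaches the
    threshold; otherwise the final exit's prediction is used.
    Returns (predicted_class, exit_depth).
    """
    for depth, stage in enumerate(sub_models, start=1):
        x, exit_logits = stage(x)        # forward through this partition
        probs = softmax(exit_logits)
        conf = max(probs)
        if conf >= threshold:            # confident enough: exit early
            return probs.index(conf), depth
    return probs.index(conf), depth      # fall through to the last exit

# Toy stages with hard-coded logits (hypothetical, for illustration):
def stage1(x):
    return x, [1.0, 1.1, 0.9]   # near-uniform logits -> low confidence
def stage2(x):
    return x, [5.0, 0.0, 0.0]   # peaked logits -> high confidence

label, depth = early_exit_infer([stage1, stage2], x=None, threshold=0.9)
# A strict threshold forces the task through both partitions (depth 2);
# a looser threshold lets it exit at the first partition (depth 1),
# trading accuracy for response delay, as the abstract describes.
```

Lowering the threshold shortens the path through the partitioned model (less delay, potentially lower accuracy), which is exactly the per-node trade-off that DTO-EE tunes jointly with the offloading strategy.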