With the rapid growth in the number of large language model (LLM) users, it is difficult for bandwidth-constrained cloud servers to simultaneously process massive LLM service requests in real time. Recently, edge-cloud infrastructures have been used to improve the processing efficiency of large-scale LLM services. However, the diversity of task requirements and the dynamics of resources pose great challenges to inference scheduling, leading to significant resource waste. In this paper, we present PerLLM, a personalized inference scheduling framework with edge-cloud collaboration designed for diverse LLM services. To handle the multiple constraints and the complexity of edge-cloud collaborative decision-making, PerLLM integrates an upper confidence bound algorithm based on a constraint satisfaction mechanism. For diverse LLM services, PerLLM optimizes service scheduling and resource allocation within the edge-cloud infrastructure to meet processing time requirements while minimizing energy costs. Experimental results from different model deployments show that PerLLM can effectively meet the processing time requirements of personalized services. Compared to other methods, PerLLM achieves 2.2x, 2.1x, and 1.6x the throughput while reducing energy costs by more than 50%.
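To make the scheduling idea concrete, the following is a minimal sketch of constraint-aware upper confidence bound (UCB) selection over candidate edge/cloud placements. It is an illustration only: the class name ConstrainedUCBScheduler, the latency-feasibility test, and the energy objective are our assumptions for exposition, not the paper's actual formulation.

```python
import math

class ConstrainedUCBScheduler:
    """Illustrative constraint-aware UCB over placement 'arms'.
    Each arm is a candidate (node, resource) placement option;
    the reward structure below is a hypothetical stand-in."""

    def __init__(self, arms, latency_budget):
        self.arms = arms                       # candidate placements
        self.latency_budget = latency_budget   # per-request deadline (s)
        self.counts = {a: 0 for a in arms}     # times each arm was chosen
        self.energy = {a: 0.0 for a in arms}   # mean observed energy cost
        self.latency = {a: 0.0 for a in arms}  # mean observed latency
        self.total = 0

    def select(self):
        self.total += 1
        # Play every arm once to initialize its estimates.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # Keep arms whose optimistic latency estimate (mean minus
        # confidence radius) still satisfies the deadline constraint.
        feasible = []
        for a in self.arms:
            radius = math.sqrt(2 * math.log(self.total) / self.counts[a])
            if self.latency[a] - radius <= self.latency_budget:
                feasible.append((a, radius))
        if not feasible:
            # No arm looks feasible: fall back to the fastest known arm.
            return min(self.arms, key=lambda a: self.latency[a])
        # Among feasible arms, pick the optimistically cheapest in energy.
        return min(feasible, key=lambda ar: self.energy[ar[0]] - ar[1])[0]

    def update(self, arm, observed_latency, observed_energy):
        # Incremental mean updates after serving a request on 'arm'.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.latency[arm] += (observed_latency - self.latency[arm]) / n
        self.energy[arm] += (observed_energy - self.energy[arm]) / n
```

In such a setup, the scheduler would call select() per request, dispatch the request to the chosen placement, and feed the observed latency and energy back through update(); the optimistic feasibility test keeps under-explored placements in consideration while the energy objective drives cost down.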