Large language model (LLM) serving is becoming an increasingly critical workload for cloud providers. Existing LLM serving systems focus on interactive requests, such as chatbots and coding assistants, with tight latency SLO requirements. However, when such systems execute batch requests with relaxed SLOs alongside interactive requests, the result is poor multiplexing and inefficient resource utilization. To address these challenges, we propose QLM, a queue management system for LLM serving. QLM maintains batch and interactive requests across different models and SLOs in a request queue. Optimal ordering of the request queue is critical to maintain SLOs while ensuring high resource utilization. To generate this optimal ordering, QLM uses a Request Waiting Time (RWT) Estimator that estimates the waiting times for requests in the request queue. These estimates are used by a global scheduler to orchestrate LLM Serving Operations (LSOs) such as request pulling, request eviction, load balancing, and model swapping. Evaluation on heterogeneous GPU devices and models with a real-world LLM serving dataset shows that QLM improves SLO attainment by 40-90% and throughput by 20-400% while maintaining or improving device utilization compared to other state-of-the-art LLM serving systems. QLM's evaluation is based on the production requirements of a cloud provider. QLM is publicly available at https://www.github.com/QLM-project/QLM.