LAPS identifies and disaggregates requests with different prompt lengths in LLM serving to reduce time-to-first-token (TTFT). While recent systems decouple the prefill and decode stages to improve throughput, they still rely on unified scheduling policies that fail to adapt to heterogeneous workload characteristics. We observe that variations in prompt length lead to distinct performance bottlenecks, motivating an adaptive scheduling strategy. LAPS separates multi-turn long-prefill requests from short-prefill ones and introduces a length-aware smart batching mechanism for short-prefill workloads. It adopts a dual-queue design that supports temporal disaggregation on a single prefill instance or spatial disaggregation across multiple instances. For short-prefill batches, a batch waiting window and CUDA Graph-based clustering mitigate interference from heterogeneous computation, reducing batching delay and average latency. On real multi-turn workloads, LAPS reduces prefill latency by over 30\% compared to vanilla SGLang under prefill-decode disaggregation, and decreases SLO violations by 28\% in multi-instance deployments with a vanilla data-parallel configuration. Compared to the SGLang router with load balancing, it further lowers SLO violations by 12\% in multi-GPU settings. Under high concurrency with mixed requests, LAPS improves prefill-instance request throughput by 35\% when serving the Qwen2.5-32B model, demonstrating its effectiveness in optimizing heterogeneous LLM serving workloads.