The widespread adoption of Large Language Models (LLMs) has enabled diverse applications with very different latency requirements. Existing LLM serving frameworks rely on siloed infrastructure with coarse-grained workload segregation (interactive vs. batch), leading to inefficient resource utilization and limited support for fine-grained Quality-of-Service (QoS) differentiation. This results in operational inefficiencies, over-provisioning, and poor load management during traffic surges. We present Niyama, a novel QoS-driven inference serving system that enables efficient co-scheduling of diverse workloads on shared infrastructure. Niyama introduces a fine-grained QoS classification that allows applications to specify precise latency requirements, and it dynamically adapts scheduling decisions based on real-time system state. Leveraging the predictable execution characteristics of LLM inference, Niyama implements a dynamic chunking mechanism that improves overall throughput while maintaining strict QoS guarantees. Additionally, Niyama employs a hybrid prioritization policy that balances fairness and efficiency, along with selective request relegation that enables graceful service degradation under overload. Our evaluation demonstrates that Niyama increases serving capacity by 32% compared to current siloed deployments while maintaining QoS guarantees. Notably, under extreme load, our system reduces SLO violations by an order of magnitude compared to current strategies.