Large Language Models (LLMs) demonstrate substantial potential across a diverse array of domains when deployed for request serving. However, as trends continue to push for ever-longer context sizes, the autoregressive nature of LLMs makes the behavior of the attention layers highly dynamic, with computational characteristics and memory requirements that differ significantly from those of the non-attention layers. This presents substantial challenges for resource management and performance optimization in serving systems, and existing static model-parallelism and resource-allocation strategies fall short when dealing with this dynamicity. To address the issue, we propose Infinite-LLM, a novel LLM serving system designed to handle dynamic context lengths effectively. Infinite-LLM disaggregates the attention layers from the rest of the LLM inference process, enabling flexible, independent resource scheduling that jointly optimizes computational performance and memory utilization. By pooling GPU memory across a cluster, Infinite-LLM not only significantly boosts system throughput but also supports extremely long contexts. Evaluated on a dataset with context lengths ranging from a few tokens to 2000K tokens, on a cluster of 32 A100 GPUs, Infinite-LLM achieves a 1.35-3.4x throughput improvement over state-of-the-art methods, enabling efficient and elastic LLM deployment.