Large Language Models (LLMs) are wildly popular today and it is important to serve them efficiently. Existing LLM serving systems are stateless across requests. Consequently, when LLMs are used in the common setting of multi-turn conversations, a growing log of the conversation history must be reprocessed with each request at every turn, resulting in repeated computation. In this paper, we design Pensieve, a system optimized for serving LLMs in multi-turn conversations. Pensieve maintains conversation state across requests by caching previously processed history to avoid duplicate work. Pensieve's multi-tier caching strategy utilizes both GPU and CPU memory to efficiently store and retrieve cached data. Pensieve also generalizes the recent PagedAttention kernel to support attention between multiple input tokens with a GPU cache spread over non-contiguous memory. Our evaluation shows that Pensieve achieves 13-58% higher throughput than vLLM and TensorRT-LLM and significantly reduces latency.