Large Language Models (LLMs) have become more prevalent in long-context applications such as interactive chatbots, document analysis, and agent workflows, but serving long-context requests with low latency and high throughput remains challenging. Speculative decoding (SD) is a widely used technique for reducing latency without sacrificing performance, but conventional wisdom suggests that its efficacy is limited to small batch sizes. In MagicDec, we show that, surprisingly, SD can achieve speedup even in the high-throughput inference regime for moderate to long sequences. More interestingly, our rigorous analysis shows that an intelligent drafting strategy can achieve better speedup as batch size increases. MagicDec first identifies how the bottleneck shifts with increasing batch size and sequence length, and uses these insights to deploy speculative decoding more effectively for high-throughput inference. It then leverages draft models with a sparse KV cache to address the KV bottleneck, which scales with both sequence length and batch size. This finding underscores the broad applicability of speculative decoding in long-context serving, as it can enhance throughput and reduce latency without compromising accuracy. For moderate to long sequences, we demonstrate up to 2x speedup for LLaMA-2-7B-32K and 1.84x speedup for LLaMA-3.1-8B when serving batch sizes ranging from 32 to 256 on 8 NVIDIA A100 GPUs. The code is available at https://github.com/Infini-AI-Lab/MagicDec/.