With the wide adoption of language models for IR -- and specifically RAG systems -- the latency of the underlying LLM becomes a crucial bottleneck, since the long contexts of retrieved passages lead to large prompts and, consequently, increased compute. Prompt compression, which reduces the size of input prompts while aiming to preserve performance on downstream tasks, has established itself as a cost-effective and low-latency method for accelerating inference in large language models. However, its usefulness depends on whether the additional preprocessing time is offset by faster decoding during generation. We present the first systematic, large-scale study of this trade-off, with thousands of runs and 30,000 queries across several open-source LLMs and three GPU classes. Our evaluation separates compression overhead from decoding latency while tracking output quality and memory usage. LLMLingua achieves end-to-end speed-ups of up to 18% when prompt length, compression ratio, and hardware capacity are well matched, with response quality remaining statistically unchanged across summarization, code generation, and question-answering tasks. Outside this operating window, however, the compression step dominates and cancels out the gains. We also show that effective compression can reduce memory usage enough to offload workloads from data-center GPUs to commodity cards, with only a 0.3s increase in latency. Our open-source profiler predicts the latency break-even point for each model-hardware setup, providing practical guidance on when prompt compression delivers real-world benefits.