The rapid increase in the number of parameters in large language models (LLMs) has significantly raised the cost of fine-tuning and retraining, which remains necessary to keep models up to date and to improve accuracy. Retrieval-Augmented Generation (RAG) offers a promising approach to improving the capabilities and accuracy of LLMs without retraining. Although RAG eliminates the need for continuous retraining to update model knowledge, it incurs a trade-off in the form of slower inference. As a result, using RAG to enhance the accuracy and capabilities of LLMs carries diverse performance implications and trade-offs that depend on its design. To begin tackling and mitigating the performance penalties of RAG from a systems perspective, this paper introduces a detailed taxonomy and characterization of the elements of the RAG ecosystem for LLMs, exploring trade-offs in latency, throughput, and memory. Our study reveals underlying inefficiencies in RAG systems deployment, which can result in time-to-first-token (TTFT) latencies that are twice as long and unoptimized datastores that consume terabytes of storage.