Retrieval-augmented generation (RAG) improves the reliability of large language model (LLM) answers by integrating external knowledge. However, RAG increases end-to-end inference time, since retrieving relevant documents from large vector databases is computationally expensive. To address this, we introduce Proximity, an approximate key-value cache that optimizes the RAG workflow by exploiting similarities in user queries. Instead of treating each query independently, Proximity reuses previously retrieved documents when similar queries appear, substantially reducing reliance on expensive vector database lookups. To scale efficiently, Proximity employs a locality-sensitive hashing (LSH) scheme that enables fast cache lookups while preserving retrieval accuracy. We evaluate Proximity on the MMLU and MedRAG question-answering benchmarks. Our experiments demonstrate that Proximity with our LSH scheme and a realistically skewed MedRAG workload reduces database calls by 77.2% while maintaining database recall and test accuracy. We experiment with different similarity tolerances and cache capacities, and show that the time spent within the Proximity cache remains low and constant (4.8 microseconds) even as the cache grows substantially in size. Our results demonstrate that approximate caching is a practical and effective strategy for optimizing RAG-based systems.
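The core idea described above can be sketched as follows. This is a minimal, hypothetical illustration of an LSH-based approximate cache for RAG retrieval, not the paper's actual implementation: it uses a random-hyperplane (SimHash-style) scheme in which the sign pattern of a query embedding's projections forms the bucket key, so sufficiently similar queries land in the same bucket and reuse cached documents instead of triggering a vector database lookup. The class name, parameters, and bucket layout are all assumptions for illustration.

```python
import numpy as np


class ProximityCacheSketch:
    """Illustrative sketch of an approximate key-value cache for RAG.

    Hypothetical design: query embeddings are hashed with random
    hyperplanes (SimHash-style LSH); embeddings with the same sign
    pattern share a bucket, so similar queries hit the cache and skip
    the expensive vector database lookup.
    """

    def __init__(self, dim: int, n_planes: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random hyperplanes; the sign of each projection yields one key bit.
        self.planes = rng.standard_normal((n_planes, dim))
        self.buckets: dict[bytes, list[str]] = {}  # bucket key -> documents

    def _key(self, embedding: np.ndarray) -> bytes:
        # Bucket key = sign pattern of the embedding's projections.
        bits = (self.planes @ embedding) > 0
        return bits.tobytes()

    def get(self, embedding: np.ndarray):
        # Cache hit: a similar query was seen before; reuse its documents.
        return self.buckets.get(self._key(embedding))

    def put(self, embedding: np.ndarray, documents: list[str]) -> None:
        # Store retrieved documents under this query's bucket.
        self.buckets[self._key(embedding)] = documents
```

In a full system, the number of hyperplanes would control the similarity tolerance (more planes means stricter matching), and a capacity bound with an eviction policy would keep lookup time constant as the cache grows.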