Serving long-context LLMs is challenging because request lengths and batch composition vary during token generation, causing the memory footprint to fluctuate significantly at runtime. Offloading KV caches to host memory relieves GPU memory pressure, but existing static, predetermined offloading strategies cannot adapt to the rapidly shifting memory demands of long-context serving. This often leads to excessive CPU-to-GPU KV transfers that translate into latency spikes and frequent SLO violations. To address these challenges, we introduce ORBITFLOW, a fine-grained and adaptive KV cache management system that meets latency SLOs in long-context LLM serving. ORBITFLOW employs a lightweight ILP solver to decide which layers' KV caches to retain on the GPU for each request, subject to GPU memory capacity constraints. It continuously refines KV placements based on runtime feedback when the active plan becomes suboptimal during token generation. Under heavy load, ORBITFLOW invokes a fallback mechanism that temporarily defers in-flight requests with large memory footprints, preserving overall SLO attainment. Our experiments demonstrate that ORBITFLOW improves SLO attainment for TPOT and TBT by up to 66% and 48%, respectively, while reducing the 95th percentile latency by 38% and achieving up to 3.3x higher throughput compared to existing offloading methods.
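To make the placement decision concrete, the per-step optimization can be viewed as a 0/1 selection problem: for each (request, layer) KV block, decide whether it stays on the GPU, maximizing the CPU-to-GPU transfer cost avoided while staying within the GPU memory budget. The sketch below solves a tiny instance exactly by enumeration; it is a hedged stand-in for the paper's ILP solver, and the sizes, costs, and function name are hypothetical, not taken from ORBITFLOW's actual implementation.

```python
from itertools import product

def plan_kv_placement(kv_sizes, transfer_costs, gpu_budget):
    """Choose which (request, layer) KV blocks to keep on the GPU.

    kv_sizes:       {(request_id, layer): memory size of that KV block}
    transfer_costs: {(request_id, layer): CPU->GPU transfer cost avoided
                     if the block stays resident}
    gpu_budget:     total GPU memory available for retained KV blocks

    Exhaustively enumerates all keep/offload assignments (fine for a toy
    instance; a real system would hand this 0/1 program to an ILP solver).
    Returns (set of blocks kept on GPU, total transfer cost avoided).
    """
    items = list(kv_sizes)
    best_keep, best_saving = set(), -1.0
    for choice in product([0, 1], repeat=len(items)):
        kept = {it for it, c in zip(items, choice) if c}
        # Skip assignments that exceed the GPU memory budget.
        if sum(kv_sizes[it] for it in kept) > gpu_budget:
            continue
        saving = sum(transfer_costs[it] for it in kept)
        if saving > best_saving:
            best_keep, best_saving = kept, saving
    return best_keep, best_saving

# Toy example: two requests, hypothetical sizes/costs, budget of 5 units.
kept, saving = plan_kv_placement(
    kv_sizes={("r0", 0): 2, ("r0", 1): 3, ("r1", 0): 4},
    transfer_costs={("r0", 0): 3.0, ("r0", 1): 4.0, ("r1", 0): 5.0},
    gpu_budget=5,
)
```

In this instance the solver keeps both of r0's layers (size 2 + 3 = 5, avoiding 7.0 units of transfer cost) rather than r1's single larger block, mirroring the layer-granular, per-request retention choice the abstract describes.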