Serving long-context LLMs is challenging because request lengths and batch composition vary during token generation, causing the memory footprint to fluctuate significantly at runtime. Offloading KV caches to host memory alleviates GPU memory pressure, but existing static, predetermined offloading strategies cannot adapt to the rapidly shifting memory demands of long-context serving. This often leads to excessive CPU-to-GPU KV transfers that translate into latency spikes and frequent SLO violations. To address these challenges, we introduce OrbitFlow, a fine-grained, adaptive KV cache management system that meets latency SLOs in long-context LLM serving. OrbitFlow employs a lightweight ILP solver to decide, within GPU memory capacity constraints, which layers' KV caches to retain on the GPU for each request. It continuously refines KV placements based on runtime feedback when the active plan becomes suboptimal during token generation. Under heavy load, OrbitFlow invokes a fallback mechanism that temporarily defers in-flight requests with large memory footprints, preserving overall SLO attainment. Our experiments demonstrate that OrbitFlow improves SLO attainment for time per output token (TPOT) and time between tokens (TBT) by up to 66% and 48%, respectively, while reducing the 95th-percentile latency by 38% and achieving up to 3.3x higher throughput compared to existing offloading methods.
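To make the placement problem concrete, the sketch below solves a toy version of the per-request decision the abstract describes: choose which layers' KV caches to keep on the GPU, within a memory budget, so that the CPU-to-GPU transfer cost of the offloaded layers is minimized. This is an illustrative assumption of the objective, reduced to a tiny 0/1 selection solved by brute force; OrbitFlow's actual ILP formulation, variables, and solver are not specified here, and all names are hypothetical.

```python
from itertools import combinations

def plan_kv_placement(kv_sizes, transfer_costs, gpu_budget):
    """Pick the set of layers whose KV caches stay on the GPU.

    kv_sizes[i]       -- GPU memory needed to retain layer i's KV cache
    transfer_costs[i] -- CPU->GPU transfer penalty paid if layer i is offloaded
    gpu_budget        -- memory available to this request's KV caches

    Brute-forces every subset (fine for a handful of layers; a real system
    would hand this 0/1 program to an ILP solver instead).
    """
    n = len(kv_sizes)
    best_cost, best_keep = float("inf"), ()
    for r in range(n + 1):
        for keep in combinations(range(n), r):
            # Capacity constraint: retained layers must fit in the budget.
            if sum(kv_sizes[i] for i in keep) > gpu_budget:
                continue
            # Objective: total transfer cost of the layers left on the host.
            cost = sum(transfer_costs[i] for i in range(n) if i not in keep)
            if cost < best_cost:
                best_cost, best_keep = cost, keep
    return set(best_keep), best_cost

# Three layers, budget 5: keeping only layer 0 (size 4) avoids its large
# transfer cost of 10 and leaves layers 1 and 2 offloaded (cost 6 + 3 = 9).
keep, cost = plan_kv_placement([4, 3, 2], [10, 6, 3], gpu_budget=5)
```

Re-running this solver as batch composition shifts, and feeding back observed transfer latencies as updated `transfer_costs`, mirrors the runtime refinement loop the abstract attributes to OrbitFlow.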