Private large language model (LLM) inference based on secure multi-party computation (MPC) achieves formal data privacy protection but suffers from significant latency overhead, especially for long input sequences. While key-value (KV) cache eviction and sparse attention algorithms have been proposed for efficient LLM inference in plaintext, they are not designed for MPC and cannot directly benefit private LLM inference. In this paper, we propose an accurate and MPC-friendly KV cache eviction framework, dubbed MPCache, building on the observation that historical tokens in a long sequence may have different effects on downstream decoding. Hence, MPCache combines a look-once static eviction algorithm that discards unimportant KV cache and a query-aware dynamic selection algorithm that activates only a small subset of the KV cache for attention computation. MPCache further incorporates a series of optimizations for efficient dynamic KV cache selection, including MPC-friendly similarity approximation, hierarchical KV cache clustering, and a cross-layer index-sharing strategy. Extensive experiments demonstrate that MPCache consistently outperforms prior-art KV cache eviction baselines across different generation tasks and achieves 1.8~2.01x decoding latency reduction and 3.39~8.37x communication reduction on different sequence lengths.
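To make the query-aware dynamic selection idea concrete, the following is a minimal plaintext sketch, not the paper's protocol: keys are grouped into fixed-size clusters, each cluster is summarized by its mean key, and only the tokens in the top-k clusters most similar to the current decoding query are activated for attention. The function name, the dot-product similarity, and the fixed cluster size are illustrative assumptions; MPCache would replace such plaintext steps with MPC-friendly approximations.

```python
import numpy as np

def select_kv_cache(query, keys, values, cluster_size=8, top_k=4):
    """Illustrative sketch of query-aware dynamic KV cache selection.

    Groups the key cache into fixed-size clusters, scores each cluster
    centroid against the current query, and returns only the KV entries
    belonging to the top-k highest-scoring clusters. All names and the
    plaintext dot-product similarity are assumptions for illustration.
    """
    n, d = keys.shape
    n_clusters = (n + cluster_size - 1) // cluster_size
    # Represent each cluster of consecutive tokens by its mean key.
    centroids = np.stack([
        keys[i * cluster_size:(i + 1) * cluster_size].mean(axis=0)
        for i in range(n_clusters)
    ])
    # Similarity between the current query and each cluster centroid.
    scores = centroids @ query
    top = np.argsort(scores)[-top_k:]
    # Gather the token indices covered by the selected clusters.
    idx = np.concatenate([
        np.arange(i * cluster_size, min((i + 1) * cluster_size, n))
        for i in top
    ])
    return keys[idx], values[idx]

# Usage: activate only 32 of 128 cached tokens for the next attention step.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
K_sel, V_sel = select_kv_cache(q, K, V)
print(K_sel.shape, V_sel.shape)  # (32, 64) (32, 64)
```

Because attention is then computed over the selected subset only, the cost of the expensive MPC softmax and matrix multiplications shrinks proportionally, which is where the reported latency and communication savings come from.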