Long-context agentic workflows have emerged as a defining use case for large language models, making attention efficiency critical for both inference speed and serving cost. Sparse attention addresses this challenge effectively, and DeepSeek Sparse Attention (DSA) is a representative production-grade solution: a lightweight lightning indexer selects the top-k most relevant tokens per query, reducing core attention from $O(L^2)$ to $O(Lk)$. However, the indexer itself retains $O(L^2)$ complexity and must run independently at every layer, even though the resulting top-k selections are highly similar across consecutive layers. We present IndexCache, which exploits this cross-layer redundancy by partitioning layers into a small set of Full layers that run their own indexers and a majority of Shared layers that simply reuse the nearest Full layer's top-k indices. We propose two complementary approaches to determine and optimize this configuration. Training-free IndexCache applies a greedy search that selects which layers retain their indexers by directly minimizing language-modeling loss on a calibration set, requiring no weight updates. Training-aware IndexCache introduces a multi-layer distillation loss that trains each retained indexer against the averaged attention distributions of all layers it serves, enabling even simple interleaved patterns to match full-indexer accuracy. Experimental results on a 30B DSA model show that IndexCache removes 75% of indexer computations with negligible quality degradation, achieving up to a 1.82$\times$ prefill speedup and a 1.48$\times$ decode speedup over standard DSA. Preliminary experiments on the production-scale GLM-5 model further confirm these results (Figure 1).
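To make the Full/Shared partitioning concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation. The `lightning_indexer` and `sparse_attention` layer methods, the `full_layer_ids` set, and the caching variable are illustrative assumptions about the interface; the sketch assumes the first layer is a Full layer so that Shared layers always have indices to reuse.

```python
import torch

def indexcache_forward(layers, hidden, full_layer_ids, k):
    """Illustrative sketch of IndexCache's forward pass.

    layers: per-layer attention modules (hypothetical interface)
    hidden: (B, L, D) token activations
    full_layer_ids: layers that keep their own lightning indexer;
                    assumed to include layer 0 so the cache is never empty
    k: number of tokens each query attends to
    """
    cached_topk = None  # top-k indices from the nearest preceding Full layer
    for i, layer in enumerate(layers):
        if i in full_layer_ids:
            # Full layer: run its own indexer (the O(L^2) scoring step).
            scores = layer.lightning_indexer(hidden)      # (B, L, L)
            cached_topk = scores.topk(k, dim=-1).indices  # (B, L, k)
        # Shared layers skip the indexer and reuse cached_topk, so only
        # |full_layer_ids| indexer passes run instead of one per layer.
        hidden = layer.sparse_attention(hidden, topk_indices=cached_topk)
    return hidden
```

With 75% of indexers removed, only one layer in four executes the $O(L^2)$ scoring step; every layer still performs the $O(Lk)$ sparse core attention.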
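The training-aware objective can be sketched similarly. The loss form below is an assumption for illustration (a KL divergence against the averaged attention maps of the served layers); the paper's exact objective may differ, and `idx_logits` and `attn_maps` are hypothetical names for the retained indexer's scores and the served layers' attention distributions.

```python
import torch
import torch.nn.functional as F

def multilayer_distill_loss(idx_logits, attn_maps):
    """Illustrative multi-layer distillation loss (assumed KL form).

    idx_logits: (B, L, L) key-relevance logits from a retained indexer
    attn_maps:  list of (B, L, L) attention distributions, one per
                layer served by this indexer
    """
    # Target: the average attention distribution over all served layers,
    # giving one soft target that stands in for several layers at once.
    target = torch.stack(attn_maps, dim=0).mean(dim=0)
    log_pred = F.log_softmax(idx_logits, dim=-1)
    # KL(target || pred), summed over keys, averaged over the batch.
    return F.kl_div(log_pred, target, reduction="batchmean")
```

Averaging the served layers' maps is what lets a single indexer supply top-k indices that are simultaneously good for every Shared layer it covers, which is why even simple interleaved Full/Shared patterns can match full-indexer accuracy after distillation.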