Running Large Language Models (LLMs) on-device is now a critical enabler for preserving user privacy. We observe that in state-of-the-art frameworks, the attention operator falls back from the special-purpose NPU to the general-purpose CPU/GPU because of its sensitivity to quantization. This fallback degrades the user experience and complicates system scheduling. To this end, this paper presents shadowAttn, a system-algorithm co-designed sparse attention module that minimizes reliance on the CPU/GPU by computing attention over only a tiny fraction of tokens. The key idea is to hide the overhead of estimating the important tokens behind an NPU-based pilot computation. Further, shadowAttn introduces techniques such as NPU compute-graph bucketing, a head-wise NPU-CPU/GPU pipeline, and per-head fine-grained sparsity ratios to achieve high accuracy and efficiency. shadowAttn delivers the best performance under highly limited CPU/GPU resources; equivalently, it requires far fewer CPU/GPU resources to match the performance of state-of-the-art frameworks.
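The sparse-attention idea sketched above can be illustrated with a minimal example. This is an assumed, simplified mechanism, not shadowAttn's exact algorithm: a cheap "pilot" pass scores every key token (standing in for the low-precision estimate that would run on the NPU), and exact attention is then computed only over the top-k highest-scoring tokens.

```python
import numpy as np

def sparse_attention(q, K, V, k=4):
    """Attend from one query over only the k most important tokens.

    q: (d,) query vector; K, V: (n, d) key/value matrices.
    The pilot score here is simply K @ q -- a hypothetical stand-in
    for whatever quantized estimate the NPU would produce.
    """
    d = q.shape[0]
    pilot_scores = K @ q                      # cheap importance estimate over all n tokens
    topk = np.argsort(pilot_scores)[-k:]      # indices of the k most promising tokens
    scores = (K[topk] @ q) / np.sqrt(d)       # exact scaled-dot-product on that subset only
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ V[topk]                  # (d,) attended output

# Usage: of n=64 tokens, only k=4 contribute to the output.
rng = np.random.default_rng(0)
q = rng.standard_normal(16)
K = rng.standard_normal((64, 16))
V = rng.standard_normal((64, 16))
out = sparse_attention(q, K, V, k=4)
```

Because the softmax and the value aggregation run over k tokens instead of n, the attention cost drops from O(n·d) to O(k·d) per query once the pilot scores are available, which is what lets the expensive exact step stay small enough to avoid monopolizing the CPU/GPU.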