Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths. However, this uniform approach fails to capture the diverse attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose the Mixture of Attention (MoA), which automatically tailors distinct sparse attention configurations to different heads and layers. MoA constructs and navigates a search space of various attention patterns and their scaling rules relative to input sequence lengths. It profiles the model, evaluates potential configurations, and pinpoints the optimal sparse attention compression plan. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer sequences, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by $3.9\times$ with the same average attention span, boosting retrieval accuracy by $1.5-7.1\times$ over the uniform-attention baseline across Vicuna-7B, Vicuna-13B, and Llama3-8B models. Moreover, MoA narrows the capability gaps between sparse and dense models, reducing the maximum relative performance drop from $9\%-36\%$ to within $5\%$ across two long-context understanding benchmarks. MoA achieves a $1.2-1.4\times$ GPU memory reduction and boosts decode throughput by $5.5-6.7 \times$ for 7B and 13B dense models on a single GPU, with minimal impact on performance.
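The heterogeneous-mask idea above can be illustrated with a minimal sketch. The code below is not MoA's actual implementation; it assumes a hypothetical linear scaling rule `window = alpha + beta * n` per head (the rule form and the `(alpha, beta)` pairs are illustrative) and builds causal sliding-window masks, showing how one head's attention span can grow with input length while another stays fixed-local.

```python
import numpy as np

def sliding_window_mask(n, window):
    """Causal sliding-window mask: query i attends keys max(0, i-window+1)..i."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (j > i - window)

def per_head_masks(n, rules):
    """Build one mask per head from (alpha, beta) scaling rules.

    window = alpha + beta * n  -- a hypothetical stand-in for MoA's
    elastic rules relating attention span to input length n.
    """
    masks = []
    for alpha, beta in rules:
        w = max(1, int(alpha + beta * n))
        masks.append(sliding_window_mask(n, w))
    return masks

# Two illustrative heads: one whose span scales with n, one fixed-local.
rules = [(0, 0.5), (64, 0.0)]
for n in (128, 512):
    masks = per_head_masks(n, rules)
    print(n, [round(m.mean(), 3) for m in masks])  # per-head mask density
```

Running this shows the scaling head keeps a roughly constant mask density as the sequence grows, while the fixed-local head becomes proportionally sparser on longer inputs, which is the kind of accuracy-latency trade-off a uniform mask cannot express.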