Large Language Models (LLMs) are known to contain significant redundancy, yet a systematic explanation for why certain components, particularly those in higher layers, are more redundant has remained elusive. In this work, we identify the BOS sink phenomenon as a key mechanism behind this layer-wise pattern. We show that attention heads with high BOS sink scores are strongly associated with functional redundancy: such heads, especially in deeper layers, contribute little to predictive performance and effectively serve as \emph{dumping grounds} for superfluous attention weights. This provides a concrete functional explanation for the structural redundancy reported in prior studies. Leveraging this insight, we introduce a simple pruning strategy that removes heads with high BOS sink scores. Experiments on Gemma-3, Llama-3.1, and Qwen3 demonstrate that this approach identifies redundant transformer components more reliably than weight- or activation-based criteria, while preserving performance close to that of dense baselines even under aggressive pruning. Moreover, we find that the behavior of sink heads remains stable across different sequence lengths. Overall, our results suggest that structural properties of attention offer a more intuitive and robust basis for model compression than magnitude-based criteria.
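As a concrete illustration of the selection criterion, one natural formalization (a sketch in our own notation; the exact score definition used here is an assumption and may differ from the paper's) scores each head $h$ by the average attention mass it places on the BOS token and prunes the heads whose score exceeds a threshold $\tau$:
% Notation (illustrative): A^{(h)} \in \mathbb{R}^{T \times T} is head h's attention matrix
% for a length-T input whose first position is the BOS token; \tau is a hypothetical threshold.
\begin{align*}
  s_h &= \frac{1}{T} \sum_{t=1}^{T} A^{(h)}_{t,1}
      && \text{(BOS sink score: mean attention assigned to the BOS token)} \\
  \mathcal{P} &= \{\, h \;:\; s_h > \tau \,\}
      && \text{(heads selected for pruning)}
\end{align*}
Under this reading, pruning by $s_h$ targets heads whose attention is dominated by the sink rather than by content tokens, consistent with the \emph{dumping ground} interpretation above.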