We identify a systematic attention-collapse pathology in the BLOOM family of transformer language models, in which ALiBi positional encoding causes 31-44% of attention heads to attend almost entirely to the beginning-of-sequence token. The collapse follows a predictable pattern across four model scales (560M to 7.1B parameters), concentrating in the head indices where ALiBi's slope schedule imposes the steepest distance penalties. We introduce surgical reinitialization: targeted Q/K/V reinitialization with zeroed output projections, combined with gradient-masked freezing of all non-surgical parameters. Applied to BLOOM-1b7 on a single consumer GPU, the technique recovers 98.7% operational head capacity (from 242 to 379 of 384 heads) in two passes. A controlled comparison with C4 training data confirms that reinitialization -- not corpus content -- drives recovery, and reveals two distinct post-surgical phenomena: early global functional redistribution that improves the model, and late local degradation that accumulates under a noisy training signal. An extended experiment reinitializing mostly-healthy heads alongside collapsed ones produces a model that transiently outperforms stock BLOOM-1b7 by 25% on training perplexity (12.70 vs. 16.99), suggesting that pretrained attention configurations are suboptimal local minima. Code, checkpoints, and diagnostic tools are released as open-source software.
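The two operations named in the abstract can be sketched concretely. Below is a minimal PyTorch illustration, not the released implementation: `collapsed_heads` flags heads whose attention mass concentrates on the BOS token (the 0.9 threshold is an assumption, not a figure from the paper), and `surgical_reinit` reinitializes the Q/K/V rows of flagged heads while zeroing the matching output-projection columns so fresh heads start as a no-op. The fused per-head `[q; k; v]` weight layout assumed here is a simplification of BLOOM's actual fused QKV tensor.

```python
import torch


def collapsed_heads(attn: torch.Tensor, threshold: float = 0.9) -> list[int]:
    """Flag collapsed heads from post-softmax attention weights.

    attn: (n_heads, seq, seq). A head counts as collapsed when, averaged
    over query positions, at least `threshold` of its attention mass lands
    on the BOS token at position 0. The threshold is an assumed value.
    """
    bos_mass = attn[:, :, 0].mean(dim=1)  # (n_heads,)
    return torch.nonzero(bos_mass >= threshold).flatten().tolist()


def surgical_reinit(
    qkv_weight: torch.Tensor,   # (n_heads * 3 * head_dim, hidden), assumed per-head [q; k; v] blocks
    out_weight: torch.Tensor,   # (hidden, n_heads * head_dim)
    collapsed: list[int],
    head_dim: int,
    std: float = 0.02,
) -> None:
    """Reinitialize Q/K/V rows of collapsed heads; zero their output columns.

    Zeroing the output projection means the reborn heads contribute nothing
    until training grows them back, leaving the rest of the model untouched.
    """
    with torch.no_grad():
        for h in collapsed:
            rows = slice(h * 3 * head_dim, (h + 1) * 3 * head_dim)
            qkv_weight[rows].normal_(mean=0.0, std=std)
            cols = slice(h * head_dim, (h + 1) * head_dim)
            out_weight[:, cols].zero_()


def freeze_non_surgical(param: torch.Tensor, trainable_rows: slice) -> None:
    """Gradient-masked freezing: zero gradients outside the surgical rows."""
    mask = torch.zeros_like(param)
    mask[trainable_rows] = 1.0
    param.register_hook(lambda grad: grad * mask)
```

In this sketch only the reinitialized rows receive gradient updates; everything else is frozen by the backward hook rather than by `requires_grad=False`, so a single fused parameter tensor can be partially trained.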