While the Transformer architecture dominates many fields, the quadratic complexity of its self-attention hinders large-scale applications. Linear attention offers an efficient alternative, but applying it directly often degrades performance, and existing remedies typically re-introduce computational overhead through extra modules (e.g., depthwise separable convolution), defeating the original efficiency goal. In this work, we identify a key failure mode in these methods: global context collapse, in which the model loses representational diversity. To address this, we propose Multi-Head Linear Attention (MHLA), which preserves this diversity by computing attention within heads divided along the token dimension. We prove that MHLA retains linear complexity while recovering much of the expressive power of softmax attention, and verify its effectiveness across multiple domains: under the same time complexity, it achieves a 3.6\% improvement on ImageNet classification, a 6.3\% gain on NLP tasks, a 12.6\% improvement on image generation, and a 41\% enhancement on video generation.
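To make the mechanism concrete, the following is a minimal NumPy sketch of the idea described above, under two stated assumptions: the kernel feature map (a simple ReLU-based map, one common choice for linear attention) and the interpretation of "heads divided along the token dimension" as partitioning the sequence into contiguous chunks and computing linear attention within each chunk. The function names (`linear_attention`, `mhla`) are illustrative, not from the paper's implementation.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention for one chunk; q, k, v have shape (n, d)."""
    # Assumed feature map: ReLU plus a small constant to keep values positive.
    q, k = np.maximum(q, 0) + eps, np.maximum(k, 0) + eps
    kv = k.T @ v                 # (d, d) summary: cost linear in chunk length
    z = q @ k.sum(axis=0)        # per-query normalizer, shape (n,)
    return (q @ kv) / z[:, None]

def mhla(q, k, v, num_heads=4):
    """Token-dimension multi-head linear attention (illustrative sketch).

    Splits the N tokens into `num_heads` contiguous chunks and runs linear
    attention within each chunk, so each head keeps its own (d, d) summary
    instead of collapsing all tokens into one shared global context.
    Total cost stays O(N * d^2), i.e. linear in sequence length N.
    """
    chunks = [linear_attention(qi, ki, vi)
              for qi, ki, vi in zip(np.array_split(q, num_heads),
                                    np.array_split(k, num_heads),
                                    np.array_split(v, num_heads))]
    return np.concatenate(chunks, axis=0)
```

Each chunk maintains its own key-value summary, which is one way to preserve the representational diversity that a single shared global summary loses; the paper's actual head-splitting and normalization details may differ.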