Transformers can capture long-range dependencies using self-attention, allowing tokens to attend to all others directly. However, stacking multiple attention layers leads to attention concentration. One natural way to address this issue is cross-layer attention, which makes information from earlier layers directly accessible to later layers, but this approach is computationally expensive. To address this problem, we propose the Transformer with residual value (ResFormer), which approximates cross-layer attention by adding a residual connection from the value states of the first layer to those of all subsequent layers. A variant of this method is the Transformer with single-layer value (SVFormer), in which all layers share the value embedding of the first layer. Comprehensive empirical evidence demonstrates that ResFormer reaches the same validation loss as the standard Transformer with 10.4% fewer model parameters and 13.6% less training data, while maintaining similar memory usage and computational cost. In addition, SVFormer reduces the KV cache size by nearly half with only a small performance penalty, and it can be combined with other KV-efficient methods for further KV cache reductions, with performance influenced by sequence length and cumulative learning rate. Visualization results further suggest that ResFormer and SVFormer alleviate attention concentration in deeper layers by avoiding value-state drains, and that they enhance representations across most layers.
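For illustration, below is a minimal PyTorch sketch of the residual-value mechanism as we read it from the description above: the value states of the first layer are cached and added to the value states of every later layer before attention. The module name ResidualValueAttention, the argument v_first, and the single-head setup are illustrative assumptions, not the released implementation.

```python
# A minimal sketch of the residual-value idea (hypothetical names),
# not the authors' code.
from typing import Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualValueAttention(nn.Module):
    """Single-head self-attention whose value states receive a residual
    connection from the value states of the first layer (ResFormer)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(
        self, x: torch.Tensor, v_first: Optional[torch.Tensor]
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        if v_first is None:
            v_first = v          # first layer: cache its value states
        else:
            v = v + v_first      # later layers: add the layer-1 values
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return out, v_first


# Usage: thread v_first through the layer stack.
layers = nn.ModuleList([ResidualValueAttention(64) for _ in range(4)])
x = torch.randn(2, 16, 64)
v_first = None
for layer in layers:
    x, v_first = layer(x, v_first)
```

Under this reading, the SVFormer variant would instead set `v = v_first` in every layer after the first, so later layers need not cache their own value states, which is consistent with the reported near-halving of the KV cache.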