The efficiency of attention is important due to its quadratic time complexity in sequence length. We improve attention efficiency through two key contributions. First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on RTX5090, a 5x speedup over the fastest FlashAttention on the same GPU. Experiments show that our FP4 attention can accelerate inference of various models in a plug-and-play way. Second, we pioneer the application of low-bit attention to training tasks. Existing low-bit attention works, such as FlashAttention3 and SageAttention, focus only on inference. However, the efficiency of training large models is also important. To explore whether low-bit attention can be effectively applied to training tasks, we design an accurate and efficient 8-bit attention for both forward and backward propagation. Experiments indicate that 8-bit attention achieves lossless performance in fine-tuning tasks but exhibits slower convergence in pretraining tasks. The code will be available at https://github.com/thu-ml/SageAttention.
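To illustrate the general idea behind low-bit attention, the following is a minimal PyTorch sketch of symmetrically quantizing Q and K to INT8 before the QK^T product and dequantizing before the softmax. It is only an illustrative reference, not the paper's kernel or the repository's API: the function names `quantize_int8` and `int8_attention` are hypothetical, and the actual implementation uses FP4/INT8 Tensor Cores with per-block scaling and a fused softmax.

```python
import torch
import torch.nn.functional as F


def quantize_int8(x: torch.Tensor):
    # Symmetric per-tensor quantization: map the max magnitude to 127.
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale


def int8_attention(q, k, v, is_causal=False):
    # q, k, v: (batch, heads, seq_len, head_dim) in FP16/FP32.
    q_int8, q_scale = quantize_int8(q)
    k_int8, k_scale = quantize_int8(k)
    # Integer matmul simulated in float32 here; real kernels run it on
    # INT8/FP4 Tensor Cores and fuse the dequantization into the epilogue.
    scores = torch.matmul(q_int8.float(), k_int8.float().transpose(-2, -1))
    scores = scores * (q_scale * k_scale) / (q.shape[-1] ** 0.5)  # dequantize + softmax scale
    if is_causal:
        n = scores.shape[-1]
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=q.device), 1)
        scores = scores.masked_fill(mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)


if __name__ == "__main__":
    q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))
    out_ref = F.scaled_dot_product_attention(q, k, v)
    out_int8 = int8_attention(q, k, v)
    print((out_ref - out_int8).abs().max())  # small quantization error
```

In this sketch the accumulation happens in full precision, so it only demonstrates the quantize-dequantize data flow and the accuracy trade-off, not the kernel-level speedup reported above.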