The efficiency of attention is critical due to its quadratic time complexity in sequence length. We improve attention efficiency through two key contributions. First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on RTX5090, a 5x speedup over the fastest FlashAttention on the same GPU. Experiments show that our FP4 attention accelerates inference of various models in a plug-and-play way. Second, we pioneer the application of low-bit attention to training tasks. Existing low-bit attention methods, such as FlashAttention3 and SageAttention, focus only on inference; however, the efficiency of training large models is also important. To explore whether low-bit attention can be effectively applied to training, we design an accurate and efficient 8-bit attention for both forward and backward propagation. Experiments indicate that 8-bit attention achieves lossless performance in fine-tuning tasks but exhibits slower convergence in pretraining tasks. The code is available at https://github.com/thu-ml/SageAttention.
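To illustrate what "plug-and-play" acceleration looks like in practice, the following is a minimal usage sketch. It assumes a Python package from the linked repository exposing a drop-in attention kernel; the function name `sageattn` and its keyword arguments are assumptions based on typical drop-in attention APIs and may differ from the released interface.

```python
# Minimal sketch (assumed API, not necessarily the authors' exact interface):
# replace a standard scaled-dot-product attention call with the low-bit kernel.
import torch
from sageattention import sageattn  # assumed import path

# Query/key/value tensors in (batch, heads, seq_len, head_dim) layout.
q = torch.randn(2, 16, 4096, 128, dtype=torch.float16, device="cuda")
k = torch.randn(2, 16, 4096, 128, dtype=torch.float16, device="cuda")
v = torch.randn(2, 16, 4096, 128, dtype=torch.float16, device="cuda")

# Drop-in replacement for torch.nn.functional.scaled_dot_product_attention:
# the kernel quantizes its operands to low bit-width internally, so the caller
# keeps full-precision inputs and receives a full-precision output.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
```

Because quantization happens inside the kernel, swapping this call into an existing model requires no changes to checkpoints or surrounding code, which is what enables the plug-and-play inference speedups reported above.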