Benefiting from the self-attention mechanism, Transformer models have attained impressive contextual comprehension capabilities for lengthy texts. As large language models (LLMs) become increasingly prevalent, the demand for high-throughput inference grows, which calls for large-scale token parallel processing (LTPP). However, existing dynamic sparse accelerators struggle to handle LTPP effectively, as they focus on optimizing each stage in isolation, with most efforts confined to computational enhancements. By re-examining the end-to-end flow of dynamic sparse acceleration, we pinpoint a long-overlooked opportunity: LTPP can exploit the intrinsic coordination among stages to avoid excessive memory access and redundant computation. Motivated by this observation, we present SOFA, a cross-stage compute- and memory-efficient algorithm-hardware co-design tailored to the challenges that LTPP poses for Transformer inference. We first propose a novel leading-zero computing paradigm, which predicts attention sparsity using log-based add-only operations to avoid the significant overhead of prediction. We then propose a distributed sorting mechanism and a sorted-updating FlashAttention mechanism, governed by a cross-stage coordinated tiling principle that enables fine-grained, lightweight coordination among stages and thereby reduces memory access and latency. Finally, we design a SOFA accelerator to support these optimizations efficiently. Extensive experiments on 20 benchmarks show that SOFA achieves a $9.5\times$ speedup and $71.5\times$ higher energy efficiency than the Nvidia A100 GPU. Compared to 8 SOTA accelerators, SOFA achieves on average $15.8\times$ higher energy efficiency, $10.3\times$ higher area efficiency, and a $9.3\times$ speedup, respectively.
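The leading-zero computing paradigm replaces the multiplications of a full attention-score prediction with log-domain additions: each operand's magnitude is approximated by its leading-zero count (equivalently, its integer $\log_2$), so a product becomes a sum of exponents. The sketch below is a minimal illustration of this idea under our own assumptions; the function names (`ilog2`, `approx_score`, `predict_sparsity`) and the top-$t$ selection rule are hypothetical and not taken from the paper, which may use a different thresholding scheme.

```python
def ilog2(x):
    """Integer log2 of |x| via bit length (equivalent to a leading-zero count).

    Returns None for zero, which contributes nothing to the score.
    """
    xi = int(abs(x))
    return xi.bit_length() - 1 if xi > 0 else None

def approx_score(q, k):
    """Approximate the dot product q.k with add-only log-domain arithmetic.

    Each elementwise product qi*ki is approximated as
    sign(qi*ki) * 2^(ilog2(qi) + ilog2(ki)) -- exponents add, so no
    multiplier is needed; the 2**(...) here stands in for a hardware shift.
    """
    s = 0.0
    for qi, ki in zip(q, k):
        lq, lk = ilog2(qi), ilog2(ki)
        if lq is None or lk is None:
            continue  # a zero operand makes the product zero
        sign = 1.0 if (qi >= 0) == (ki >= 0) else -1.0
        s += sign * (2.0 ** (lq + lk))
    return s

def predict_sparsity(q, keys, top_t=2):
    """Predict which keys matter: keep the top-t by approximate score."""
    scores = [(approx_score(q, k), i) for i, k in enumerate(keys)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:top_t])
```

Only the keys surviving this cheap prediction proceed to exact attention, which is where the compute and memory savings come from; the real design additionally coordinates this stage with sorting and FlashAttention tiling rather than running it standalone.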