The attention mechanism is becoming increasingly popular in Natural Language Processing (NLP) applications, showing superior performance over convolutional and recurrent architectures. However, attention becomes the computation bottleneck because of its quadratic computational complexity with respect to input length, complicated data movement, and low arithmetic intensity. Moreover, existing NN accelerators mainly focus on optimizing convolutional or recurrent models and cannot efficiently support attention. In this paper, we present SpAtten, an efficient algorithm-architecture co-design that leverages token sparsity, head sparsity, and quantization opportunities to reduce attention computation and memory access. Inspired by the high redundancy of human languages, we propose novel cascade token pruning to prune away unimportant tokens in the sentence. We also propose cascade head pruning to remove unessential heads. Cascade pruning is fundamentally different from weight pruning, since there are no trainable weights in the attention mechanism, and the pruned tokens and heads are selected on the fly. To efficiently support them in hardware, we design a novel top-k engine to rank token and head importance scores with high throughput. Furthermore, we propose progressive quantization, which first fetches only the MSBs and performs the computation; if the confidence is low, it fetches the LSBs and recomputes the attention outputs, trading computation for memory reduction. Extensive experiments on 30 benchmarks show that, on average, SpAtten reduces DRAM access by 10.0x with no accuracy loss, and achieves 1.6x, 3.0x, 162x, and 347x speedups, and 1.4x, 3.2x, 1193x, and 4059x energy savings over the A3 accelerator, MNNFast accelerator, TITAN Xp GPU, and Xeon CPU, respectively.
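To make the cascade token pruning idea concrete, below is a minimal NumPy sketch of how importance-score accumulation and top-k token selection could work in software. The function name, the keep_ratio parameter, and the single-layer view are illustrative assumptions, not SpAtten's hardware design (which ranks scores in a dedicated top-k engine).

```python
import numpy as np

def cascade_token_prune(attn_probs, keep_ratio):
    """Illustrative (hypothetical) software model of cascade token pruning.

    attn_probs: softmax attention probabilities for one layer,
                shape (num_heads, query_len, key_len).
    keep_ratio: fraction of tokens to retain, e.g. 0.5.
    Returns indices of the tokens kept for subsequent layers.
    """
    # A token's importance score is the total attention probability it
    # receives, accumulated over all heads and all query positions.
    importance = attn_probs.sum(axis=(0, 1))            # shape: (key_len,)
    k = max(1, int(keep_ratio * importance.shape[0]))
    # argsort stands in for SpAtten's high-throughput top-k engine.
    return np.sort(np.argsort(importance)[-k:])         # preserve token order

# Usage: prune half of the tokens of a toy 4-head attention map.
probs = np.random.dirichlet(np.ones(16), size=(4, 16))  # each row sums to 1
print(cascade_token_prune(probs, keep_ratio=0.5))
```

Because pruned tokens are dropped for all subsequent layers, the pruning is "cascade": later layers operate on progressively shorter sequences, which is where the computation and DRAM-access savings compound.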
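The progressive quantization step can likewise be sketched in a few lines. Here the MSB/LSB split of the quantized inputs, the confidence test on the peak softmax probability, and the 0.9 threshold are all assumptions for illustration, not the paper's exact policy.

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def progressive_attention(q_msb, k_msb, q_lsb, k_lsb, scale, threshold=0.9):
    """Hypothetical sketch of progressive quantization for attention.

    q_msb/k_msb hold the dequantized MSB halves of Q and K; q_lsb/k_lsb
    hold the residual LSB halves, so that Q = q_msb + q_lsb, and so on.
    """
    # First pass: compute attention probabilities from MSBs only.
    probs = softmax((q_msb @ k_msb.T) * scale)
    # Confidence heuristic (assumed): if every row is sharply peaked,
    # the MSB-only result is trusted and the LSBs are never fetched.
    if probs.max(axis=-1).min() >= threshold:
        return probs
    # Low confidence: fetch the LSBs and recompute at full precision,
    # trading extra computation for the earlier memory-access savings.
    return softmax(((q_msb + q_lsb) @ (k_msb + k_lsb).T) * scale)
```

The intuition behind gating the LSB fetch on confidence is that a peaked softmax means a few tokens already dominate the output, so additional input precision is unlikely to change the result; only ambiguous attention maps pay for the second, full-precision pass.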