Transformer-based large language models are memory-bound: their operation relies on large volumes of data that are reused only marginally. The data movement between the host and the accelerator therefore tends to dictate total wall-clock time. Layer normalization is a key workload in the transformer model, following each multi-head attention and feed-forward network block. To reduce data movement, layer normalization should be performed on the same chip as the matrix-matrix multiplication engine. To this end, we introduce an iterative L2-normalization method for 1D input (IterL2Norm) that converges to the steady-state solution within five iteration steps at high precision, outperforming the fast inverse square root algorithm in six of nine cases for FP32 and five of nine for BFloat16 across the embedding lengths used in the OPT models. Implemented in 32/28nm CMOS, the IterL2Norm macro normalizes $d$-dimensional vectors, where $64 \leq d \leq 1024$, with a latency of 116-227 cycles at 100MHz/1.05V.