We prove that hardmax attention transformers perfectly classify datasets of $N$ labeled sequences in $\mathbb{R}^d$, $d\geq 2$. Specifically, given $N$ sequences of arbitrary but finite length in $\mathbb{R}^d$, we construct a transformer with $\mathcal{O}(N)$ blocks and $\mathcal{O}(Nd)$ parameters that perfectly classifies this dataset. Our construction achieves the best complexity estimate to date, independent of the length of the sequences, by alternating feed-forward and self-attention layers in a novel way and by capitalizing on the clustering effect inherent to the latter. Our constructive method also uses low-rank parameter matrices within the attention mechanism, a common practice in real-world transformer implementations. Consequently, our analysis is of twofold significance: it substantially advances the mathematical theory of transformers, and it rigorously justifies their exceptional real-world performance in sequence classification tasks.
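For orientation, one common formalization of a hardmax self-attention update with low-rank query-key matrices is sketched below; the normalization, step size $\alpha$, and parameter names $Q$, $K$, $V$ are illustrative assumptions and need not coincide with the exact construction used in the proof:
\[
z_i \;\longmapsto\; z_i + \frac{\alpha}{|\mathcal{C}_i|}\sum_{j\in\mathcal{C}_i} V z_j,
\qquad
\mathcal{C}_i \;=\; \operatorname*{arg\,max}_{1\le j\le n}\ \langle Q z_i,\, K z_j\rangle,
\]
where $z_1,\dots,z_n\in\mathbb{R}^d$ are the tokens of a sequence, $Q,K,V\in\mathbb{R}^{d\times d}$ are (possibly low-rank) parameter matrices, and $\alpha>0$ is a step size. The hardmax rule replaces the softmax-weighted average by a uniform average over the tokens attaining the maximal attention score, which is the mechanism driving the clustering effect mentioned above.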