When and how can an attention mechanism learn to selectively attend to informative tokens, thereby enabling the detection of weak, rare, and sparsely located features? We address these questions theoretically in a sparse-token classification model in which positive samples embed a weak signal vector in a randomly chosen subset of tokens, whereas negative samples are pure noise. In the long-sequence limit, we show that a simple single-layer attention classifier can in principle achieve vanishing test error once the signal strength grows only logarithmically in the sequence length $L$, whereas linear classifiers require the signal strength to grow as $\sqrt{L}$. Moving from representational power to learnability, we study training at finite $L$ in a high-dimensional regime where the sample size and the embedding dimension grow proportionally. We prove that just two gradient updates suffice for the query weight vector of the attention classifier to acquire a nontrivial alignment with the hidden signal, inducing an attention map that selectively amplifies informative tokens. We further derive an exact asymptotic expression for the test error and training loss of the trained attention-based classifier, and quantify its capacity, defined as the largest dataset size that is typically perfectly separable, thereby explaining the advantage of adaptive token selection over nonadaptive linear baselines.
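To make the setting concrete, the following is a minimal NumPy sketch of the sparse-token data model and the single-layer attention classifier described above; the dimensions, the number of informative tokens, the signal strength, and the hand-aligned query vector are illustrative assumptions, not values or code taken from the paper.

```python
# Minimal sketch of the sparse-token data model and the single-layer attention
# classifier. Dimensions, constants, and the aligned query used here are
# illustrative assumptions, not the paper's actual construction.
import numpy as np

rng = np.random.default_rng(0)

d, L, k = 64, 32, 4                 # embedding dim, sequence length, informative tokens (assumed)
beta = np.log(L)                    # signal strength growing logarithmically in L
u = rng.standard_normal(d)
u /= np.linalg.norm(u)              # hidden unit-norm signal direction

def sample(label):
    """Return L token embeddings: i.i.d. Gaussian noise, plus beta*u on k random tokens if label is +1."""
    X = rng.standard_normal((L, d))
    if label == 1:
        idx = rng.choice(L, size=k, replace=False)
        X[idx] += beta * u
    return X

def attention_logit(X, q, w):
    """Single-layer attention read-out: softmax(Xq) pools the tokens, a linear head w scores the pool."""
    s = X @ q
    a = np.exp(s - s.max())
    a /= a.sum()                    # attention map over the L tokens
    return (a @ X) @ w              # scalar logit

# A query vector aligned with u concentrates attention on the informative tokens;
# per the abstract, two gradient updates already produce such an alignment.
q_aligned, w = 5.0 * u, u
pos = np.mean([attention_logit(sample(+1), q_aligned, w) for _ in range(500)])
neg = np.mean([attention_logit(sample(-1), q_aligned, w) for _ in range(500)])
print(f"mean logit | positive: {pos:+.2f}   negative: {neg:+.2f}")
```

With the query aligned to the hidden signal, the attention map concentrates on the informative tokens, so positive samples receive systematically larger logits than pure-noise samples; this is the adaptive token selection that the nonadaptive linear baseline lacks.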