We develop a mathematical framework that interprets Transformer attention as an interacting particle system and studies its continuum (mean-field) limits. By idealizing attention on the sphere, we connect Transformer dynamics to Wasserstein gradient flows, synchronization models (Kuramoto), and mean-shift clustering. Central to our results is a global clustering phenomenon: tokens asymptotically collapse to a single cluster, after long metastable phases during which they remain arranged in multiple clusters. We further analyze a tractable equiangular reduction to obtain exact clustering rates, show how commonly used normalization schemes alter contraction speeds, and identify a phase transition for long-context attention. The results highlight both the mechanisms that drive representation collapse and the regimes that preserve expressive, multi-cluster structure in deep attention architectures.
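The clustering phenomenon described above can be illustrated numerically. The sketch below is not the paper's exact model but a minimal Euler discretization of an idealized attention dynamics on the unit sphere, $\dot{x}_i = P_{x_i}\!\big(\tfrac{1}{n}\sum_j e^{\beta \langle x_i, x_j\rangle} x_j\big)$, where $P_{x_i}$ projects onto the tangent space at $x_i$; the inverse temperature $\beta$, step size, horizon, and random seed are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, beta = 16, 3, 1.0     # tokens, ambient dimension, inverse temperature (illustrative)
dt, steps = 0.1, 2000       # Euler step size and horizon (illustrative)

# Random initial tokens on the unit sphere S^{d-1}.
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

for _ in range(steps):
    W = np.exp(beta * (X @ X.T))                       # attention-style interaction weights
    V = (W @ X) / n                                    # mean-field drift
    V -= np.sum(V * X, axis=1, keepdims=True) * X      # project drift onto tangent space
    X += dt * V                                        # explicit Euler step
    X /= np.linalg.norm(X, axis=1, keepdims=True)      # retract back to the sphere

# Minimum pairwise cosine similarity: near -1 at random init, near 1 once clustered.
min_cos = float((X @ X.T).min())
print(min_cos)
```

For generic random initializations the tokens contract toward a single cluster, so the minimum pairwise cosine similarity rises from roughly $-1$ toward $1$; with larger $\beta$ one can instead observe the long-lived multi-cluster metastable phases mentioned in the abstract.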