Transformers empirically perform precise probabilistic reasoning in carefully constructed ``Bayesian wind tunnels'' and in large-scale language models, yet the mechanisms by which gradient-based learning creates the required internal geometry remain opaque. We provide a complete first-order analysis of how cross-entropy training reshapes attention scores and value vectors in a transformer attention head. Our core result is an \emph{advantage-based routing law} for attention scores, \[ \frac{\partial L}{\partial s_{ij}} = \alpha_{ij}\bigl(b_{ij}-\mathbb{E}_{\alpha_i}[b]\bigr), \qquad b_{ij} := u_i^\top v_j, \] coupled with a \emph{responsibility-weighted update} for values, \[ \Delta v_j = -\eta\sum_i \alpha_{ij} u_i, \] where $u_i$ is the upstream gradient at position $i$ and $\alpha_{ij}$ are the attention weights. These equations induce a positive feedback loop in which routing and content specialize together: queries route more strongly to values that are above-average for their error signal, and those values are pulled toward the queries that use them. We show that this coupled specialization behaves like a two-timescale EM procedure: attention weights implement an E-step (soft responsibilities), while values implement an M-step (responsibility-weighted prototype updates), with queries and keys adjusting the hypothesis frame. Through controlled simulations, including a sticky Markov-chain task where we compare a closed-form EM-style update to standard SGD, we demonstrate that the same gradient dynamics that minimize cross-entropy also sculpt the low-dimensional manifolds identified in our companion work as implementing Bayesian inference. This yields a unified picture in which optimization (gradient flow) gives rise to geometry (Bayesian manifolds), which in turn supports function (in-context probabilistic reasoning).
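As a minimal numerical sanity check of the two update rules above, the following sketch builds a toy single attention head in NumPy and compares the closed-form gradients to central finite differences. It is an illustration under stated assumptions, not the paper's training setup: a stand-in squared-error head loss is used so that the upstream gradient $u_i$ is available in closed form (the routing law depends only on $u_i$, not on the specific loss), and all array names and sizes are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_q, n_k, d = 3, 4, 5              # query positions, key/value slots, value dim (toy sizes)

S = rng.normal(size=(n_q, n_k))    # attention scores s_ij (pre-softmax)
V = rng.normal(size=(n_k, d))      # value vectors v_j
T = rng.normal(size=(n_q, d))      # targets for a stand-in squared-error head loss

def forward(S, V):
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)       # attention weights alpha_ij = softmax_j(s_ij)
    O = A @ V                               # outputs o_i = sum_j alpha_ij v_j
    return 0.5 * np.sum((O - T) ** 2), A, O

L0, A, O = forward(S, V)
U = O - T                                   # upstream gradients u_i = dL/do_i

# Advantage-based routing law: dL/ds_ij = alpha_ij (b_ij - E_{alpha_i}[b]), b_ij = u_i^T v_j
B = U @ V.T
grad_S = A * (B - np.sum(A * B, axis=1, keepdims=True))

# Responsibility-weighted value gradient: dL/dv_j = sum_i alpha_ij u_i
grad_V = A.T @ U

def finite_diff(X, which):
    """Central finite differences of the loss w.r.t. X ('S' or 'V')."""
    eps, G = 1e-6, np.zeros_like(X)
    for idx in np.ndindex(*X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += eps; Xm[idx] -= eps
        Lp = forward(Xp, V)[0] if which == "S" else forward(S, Xp)[0]
        Lm = forward(Xm, V)[0] if which == "S" else forward(S, Xm)[0]
        G[idx] = (Lp - Lm) / (2 * eps)
    return G

print("scores: max |analytic - numeric| =", np.abs(grad_S - finite_diff(S, "S")).max())
print("values: max |analytic - numeric| =", np.abs(grad_V - finite_diff(V, "V")).max())
\end{verbatim}

Both discrepancies should be at the level of finite-difference error (roughly $10^{-8}$ for these sizes), confirming that the routing law and the responsibility-weighted value update are exactly the chain-rule gradients through the softmax and the attention readout.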