We develop an algorithm which, given as input a trained transformer model $\mathcal{M}$, a string of tokens $s$ of length $n_{fix}$, and an integer $n_{free}$, generates a mathematical proof that $\mathcal{M}$ is ``overwhelmed'' by $s$, in time and space $\widetilde{O}(n_{fix}^2 + n_{free}^3)$. We say that $\mathcal{M}$ is ``overwhelmed'' by $s$ when the model's output on $s$ concatenated with any additional string $t$, $\mathcal{M}(s + t)$, is completely insensitive to the value of $t$ whenever length($t$) $\leq n_{free}$. Along the way, we prove a particularly strong worst-case form of ``over-squashing'', which we use to bound the model's behavior. Our technique uses computer-aided proofs to establish this type of operationally relevant guarantee about transformer models. We empirically test our algorithm on a single-layer transformer with an attention head, layer norm, MLP/ReLU layers, and RoPE positional encoding. We believe this work is a stepping stone towards the difficult task of obtaining useful guarantees for trained transformer models.
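The ``overwhelmed'' property itself is easy to state operationally. The following illustrative sketch (our own toy construction, not the paper's algorithm; `model` is a hypothetical stand-in, and the brute-force enumeration is exponential in $n_{free}$, which is exactly what the proof certificate avoids) shows what the property asserts about a model's input/output behavior:

```python
# Illustrative brute-force check of the "overwhelmed" property: a model M is
# overwhelmed by a prefix s if M(s + t) is identical for every suffix t with
# len(t) <= n_free. The `model` below is a hypothetical stand-in, not a
# transformer: by construction its output ignores everything past the prefix.
import itertools

def model(tokens):
    # Toy stand-in whose output depends only on the first 3 tokens,
    # mimicking a model fully saturated by its fixed prefix.
    return sum(tokens[:3]) % 7

def is_overwhelmed(model, s, n_free, vocab):
    """Exhaustively verify insensitivity to every suffix t of length <= n_free.
    This costs O(|vocab|^n_free) model evaluations; the certificate in the
    paper establishes the same guarantee without this enumeration."""
    baseline = model(s)
    for length in range(1, n_free + 1):
        for t in itertools.product(vocab, repeat=length):
            if model(list(s) + list(t)) != baseline:
                return False
    return True

s = [1, 2, 3]  # fixed prefix of length n_fix = 3
print(is_overwhelmed(model, s, n_free=2, vocab=[0, 1, 2]))  # True
```

Note that the check is one-sided in cost: a single differing suffix refutes the property immediately, whereas confirming it requires visiting every suffix, which is why a polynomial-size proof certificate is the interesting object.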