We study the inductive bias of Transformers in the infinitely over-parameterized Gaussian process limit and argue that Transformers tend to be biased toward more permutation-symmetric functions in sequence space. We show that the representation theory of the symmetric group yields quantitative analytical predictions when the dataset is symmetric under permutations of tokens. We present a simplified transformer block and solve the model in this limit, obtaining accurate predictions for the learning curves and network outputs. We show that in common setups one can derive tight bounds, in the form of a scaling law, on the learnability as a function of context length. Finally, we argue that the WikiText dataset does indeed possess a degree of permutation symmetry.
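The claim that permutation-symmetric targets are preferentially learnable can be illustrated with a small Gaussian-process-regression sketch. This is a minimal illustration under assumed choices, not the paper's model: the RBF base kernel (standing in for the transformer's GP-limit kernel), the sequence length T = 4, and the two toy targets are all assumptions made for tractability. Averaging the kernel over the symmetric group S_T restricts the predictor to permutation-invariant functions, so the symmetric target is learned while the antisymmetric one is not.

```python
# Minimal sketch (assumed setup, not the paper's model): GP regression with a
# kernel averaged over the symmetric group S_T acting on token positions,
# illustrating the bias toward permutation-symmetric functions in sequence space.
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, n_train, n_test = 4, 200, 200   # short contexts so |S_T| = 24 is tractable

def base_kernel(X, Y):
    """Plain RBF kernel between flattened sequences (stand-in for a GP-limit kernel)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * X.shape[1]))

def sym_kernel(X, Y):
    """Kernel averaged over all permutations of the T token positions.
    One-sided averaging suffices: the RBF depends only on the relative
    permutation, so this equals the full two-sided symmetrization and is PSD."""
    Xs = X.reshape(len(X), T, -1)
    perms = list(itertools.permutations(range(T)))
    K = np.zeros((len(X), len(Y)))
    for p in perms:
        K += base_kernel(Xs[:, p, :].reshape(len(X), -1), Y)
    return K / len(perms)

# Tokens are i.i.d. scalars; f_sym is permutation symmetric (trivial irrep of S_T),
# f_anti changes sign under swapping the first two tokens, hence is orthogonal
# to every permutation-invariant function.
X = rng.normal(size=(n_train + n_test, T))
f_sym = X.sum(axis=1)
f_anti = X[:, 0] - X[:, 1]

def gp_test_error(K_fn, y):
    """Normalized test error of noiseless GP (kernel ridge) regression."""
    Ktr = K_fn(X[:n_train], X[:n_train]) + 1e-8 * np.eye(n_train)
    Kte = K_fn(X[n_train:], X[:n_train])
    pred = Kte @ np.linalg.solve(Ktr, y[:n_train])
    return np.mean((pred - y[n_train:]) ** 2) / np.var(y[n_train:])

for name, y in [("symmetric target", f_sym), ("antisymmetric target", f_anti)]:
    print(f"{name}: normalized test error = {gp_test_error(sym_kernel, y):.3f}")
```

In this toy setting the symmetric target's error is small and shrinks with more training data, while the antisymmetric target's normalized error stays near 1 at any sample size: its irrep component carries zero eigenvalue under the symmetrized kernel, a kernel-level caricature of the learnability decomposition described in the abstract.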