Transformer-based large language models (LLMs) display extreme proficiency with language, yet a precise understanding of how they work remains elusive. One way to demystify transformer predictions is to describe how they depend on their context in terms of simple template functions. This paper takes a first step in this direction by considering families of functions (i.e., rules) formed out of simple N-gram-based statistics of the training data. By studying how well these rulesets approximate transformer predictions, we obtain a variety of novel discoveries: a simple method for detecting overfitting during training without using a holdout set, a quantitative measure of how transformers progress from learning simple to more complex statistical rules over the course of training, a model-variance criterion governing when transformer predictions tend to be described by N-gram rules, and insights into how well transformers can be approximated by N-gram rulesets in the limit where these rulesets become increasingly complex. In this latter direction, we find that for 79% and 68% of LLM next-token distributions on TinyStories and Wikipedia, respectively, the top-1 predictions agree with those provided by our N-gram rulesets.
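To make the underlying idea concrete, here is a minimal sketch of the kind of N-gram statistic such rules are built from: count next-token frequencies per context in a corpus, then predict the most frequent continuation. The helper names and toy corpus are illustrative assumptions, not taken from the paper.

```python
from collections import Counter, defaultdict

def build_ngram_rule(tokens, n):
    """Count next-token frequencies for each (n-1)-token context."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        counts[context][tokens[i + n - 1]] += 1
    return counts

def ngram_top1(counts, context):
    """Top-1 next-token prediction of the N-gram rule, or None if the context is unseen."""
    dist = counts.get(tuple(context))
    if not dist:
        return None
    return dist.most_common(1)[0][0]

# Toy corpus (hypothetical); a real ruleset would be built from the training data.
corpus = "the cat sat on the mat and the cat sat on the rug".split()
rule = build_ngram_rule(corpus, n=3)  # trigram rule: 2-token context

print(ngram_top1(rule, ["the", "cat"]))  # prints "sat" (seen twice after this context)
```

A transformer's top-1 next-token prediction can then be compared against `ngram_top1` over a corpus to measure the kind of agreement rate reported above; the paper's actual rulesets combine many such rules of varying complexity.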