Specialized attention heads dubbed induction heads (IHs) have been argued to underlie the remarkable in-context learning capabilities of modern language models; yet a precise characterization of their emergence, especially in the context of language modeling, remains wanting. In this study, we investigate the relationship between statistical properties of the training data and IH formation in both natural and synthetic training-data settings. We show that: (1) A simple equation combining batch size and context size predicts the point at which IHs form, and this emergence point is agnostic to model size; (2) Surface bigram repetition frequency and reliability strongly affect IH formation, and we find an effective Pareto frontier in terms of these two values; (3) Local dependency with high bigram repetition frequency and reliability is sufficient for IH formation, but when frequency and reliability are low, categoriality and the shape of the marginal distribution matter.