Transformers exhibit in-context learning (ICL): the ability to use novel information presented in the context without additional weight updates. Recent work shows that ICL emerges when models are trained on a sufficiently diverse set of tasks, and that the transition from memorization to generalization is sharp with increasing task diversity. One interpretation is that a network's limited capacity to memorize favors generalization. Here, we examine the mechanistic underpinnings of this transition using a small transformer applied to a synthetic ICL task. Using theory and experiment, we show that the sub-circuits that memorize and generalize can be viewed as largely independent. The relative rates at which these sub-circuits learn, rather than capacity constraints, explain the transition from memorization to generalization. We uncover a memorization scaling law, which determines the task diversity threshold at which the network generalizes. The theory quantitatively explains a variety of other ICL-related phenomena, including the long-tailed distribution of when ICL is acquired, the bimodal behavior of solutions close to the task diversity threshold, the influence of contextual and data distributional statistics on ICL, and the transient nature of ICL.
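To make the notion of "task diversity" concrete, the sketch below shows one common way such a synthetic ICL dataset is constructed: in-context linear regression where training sequences draw their task from a finite pool of K weight vectors. This is an illustrative assumption, not the paper's exact setup; the function `make_icl_batch` and all parameter names are hypothetical.

```python
import numpy as np

# Minimal sketch (assumption): the synthetic ICL task is in-context linear
# regression, and "task diversity" is the number K of distinct weight vectors
# seen during training. The abstract does not specify the exact task.

def make_icl_batch(rng, K, batch=64, ctx_len=8, dim=4, noise=0.1):
    """Sample sequences of (x, y) pairs; each sequence uses one task drawn
    from a fixed pool of K tasks. The model must predict y for the final
    query x from the in-context examples alone."""
    task_pool = rng.standard_normal((K, dim))           # finite pool of tasks
    task_ids = rng.integers(0, K, size=batch)            # one task per sequence
    w = task_pool[task_ids]                               # (batch, dim)
    x = rng.standard_normal((batch, ctx_len + 1, dim))    # context + query inputs
    y = np.einsum('bld,bd->bl', x, w)
    y += noise * rng.standard_normal((batch, ctx_len + 1))
    # Context = first ctx_len (x, y) pairs; target = y of the final query.
    return x, y[:, :-1], y[:, -1]

rng = np.random.default_rng(0)
# Small K favors memorizing the task pool; large K favors an in-context
# (generalizing) solution, with a sharp transition at some threshold K*.
ctx_x, ctx_y, target = make_icl_batch(rng, K=16)
```

Sweeping K while holding the architecture fixed is the kind of experiment implied by the abstract's "task diversity threshold": below the threshold the network memorizes the pool, above it the in-context solution dominates.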