Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs' internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains the same. Surprisingly, both these algorithms and the types of components involved therein can replicate across model scale. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional pre-training and across model scale.