Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs' internal mechanisms examines models at a single snapshot in time (the end of pre-training), raising the question of whether such results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm they implement remains stable. Surprisingly, both these algorithms and the types of components involved can replicate across model scale. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional pre-training and across model scale.