Modern generative models demonstrate impressive capabilities, likely stemming from an ability to identify and manipulate abstract concepts underlying their training data. However, fundamental questions remain: what determines the concepts a model learns, the order in which it learns them, and its ability to manipulate those concepts? To address these questions, we propose analyzing a model's learning dynamics via a framework we call the concept space, where each axis represents an independent concept underlying the data-generating process. By characterizing learning dynamics in this space, we identify how the speed at which a concept is learned, and hence the order of concept learning, is controlled by properties of the data that we term concept signal. Further, we observe moments of sudden turns in the direction of a model's learning dynamics in concept space. Surprisingly, these points precisely correspond to the emergence of hidden capabilities, i.e., points where latent interventions show the model possesses the capability to manipulate a concept, but this capability cannot yet be elicited via naive input prompting. While our results focus on synthetically defined toy datasets, we hypothesize that a general claim on the emergence of hidden capabilities may hold: generative models possess latent capabilities that emerge suddenly and consistently during training, though a model might not exhibit these capabilities under naive input prompting.