We aim to understand the physics of skill learning, i.e., how skills are learned in neural networks during training. We start by observing the Domino effect: skills are learned sequentially and, notably, some skills start learning right after others have finished, similar to the sequential fall of dominoes. To understand the Domino effect and related behaviors of skill learning, we take the physicists' approach of abstraction and simplification. We propose three models of varying complexity, the Geometry model, the Resource model, and the Domino model, which trade off realism against simplicity. The Domino effect can be reproduced in the Geometry model, whose resource interpretation inspires the Resource model, which can in turn be simplified to the Domino model. These models offer different levels of abstraction and simplification, and each is useful for studying certain aspects of skill learning: the Geometry model provides insights into neural scaling laws and optimizers; the Resource model sheds light on the learning dynamics of compositional tasks; the Domino model reveals the benefits of modularity. These models are not only conceptually interesting (e.g., we show how Chinchilla scaling laws can emerge from the Geometry model) but also practically useful for inspiring algorithmic development (e.g., we show how simple algorithmic changes, motivated by these toy models, can speed up the training of deep learning models).
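To make the Domino effect concrete, the following is a minimal illustrative sketch (an assumption chosen for illustration, not any of the three models above): four quadratic "skills" with very different signal strengths, optimized with a norm-normalized gradient step. Because the update direction is dominated by the strongest not-yet-learned skill, the skills converge one after another, which is the qualitative signature of the Domino effect. The signal strengths, learning rate, and threshold are arbitrary choices for illustration.

```python
# Illustrative toy only (hypothetical setup, not the paper's Geometry/Resource/Domino models):
# K quadratic "skills" with signal strengths s_k, trained with a normalized gradient step.
import numpy as np

K = 4                                   # number of skills
s = np.array([100.0, 10.0, 1.0, 0.1])   # assumed signal strength per skill
w = np.zeros(K)                         # one parameter per skill; target value is 1
lr = 3e-3
steps = 4000
loss_history = np.zeros((steps, K))

for t in range(steps):
    residual = w - 1.0
    grad = s * residual                 # gradient of the per-skill loss 0.5 * s_k * (w_k - 1)^2
    loss_history[t] = 0.5 * s * residual**2
    update = grad / (np.linalg.norm(grad) + 1e-12)  # normalized step: direction set by the dominant skill
    w -= lr * update

# Report when each skill is (roughly) learned: loss drops below 1% of its initial value.
for k in range(K):
    below = np.flatnonzero(loss_history[:, k] < 0.01 * loss_history[0, k])
    when = below[0] if below.size else None
    print(f"skill {k}: first reaches 1% of initial loss at step {when}")
```

With plain (unnormalized) gradient descent on the same toy, each skill decays independently at its own rate; it is the normalization of the step, a crude stand-in for adaptive optimizers, that couples the skills and produces the sequential, domino-like learning schedule.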