Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of their initialization. This "implicit bias" is believed to be responsible for some favorable properties of the trained models and could explain their good generalization. The purpose of this article is threefold. First, we rigorously define "conservation laws" and expose their basic properties; these laws are quantities conserved during gradient flows of a given model (e.g. a ReLU network with a given architecture) for any training data and any loss. Then we explain how to find the maximal number of independent conservation laws via finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms to: a) compute a family of polynomial laws; b) compute the maximal number of (not necessarily polynomial) independent conservation laws. We illustrate these results on showcase examples that we fully work out theoretically. In addition, applying the two algorithms to a number of ReLU network architectures confirms that all known laws are recovered, and that there are no other independent laws. Such computational tools pave the way to understanding desirable properties of optimization initialization in large machine learning models.
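To make the notion concrete, here is a classical example of such a law, well known from the literature on balancedness in linear networks; it is stated here only as an illustration of the definition, not as a result specific to this article. Consider the two-layer linear model $g_\theta(x) = UV^\top x$ with parameters $\theta = (U, V)$, trained by the gradient flow $\dot\theta = -\nabla_\theta L(\theta)$ for any loss of the form $L(\theta) = \ell(UV^\top)$. Writing $G = \nabla\ell(UV^\top)$, the flow reads $\dot U = -GV$ and $\dot V = -G^\top U$, hence
$$\frac{d}{dt}\left(U^\top U - V^\top V\right) = \dot U^\top U + U^\top \dot U - \dot V^\top V - V^\top \dot V = -\left(V^\top G^\top U + U^\top GV\right) + \left(U^\top GV + V^\top G^\top U\right) = 0,$$
so every entry of $U^\top U - V^\top V$ is a polynomial conservation law, independent of the data and of the loss. The analogous per-neuron quantities $\|w_j\|^2 - v_j^2$ are conserved for one-hidden-layer ReLU networks, by positive homogeneity of the activation.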
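The following minimal sketch checks the law above numerically; it is an assumed setup for illustration (not the article's code), and all names in it are hypothetical. Gradient descent with a small step size approximates the gradient flow, so the conserved matrix should drift only by discretization error.

```python
import numpy as np

# Two-layer linear model g(x) = U @ V.T @ x trained by gradient descent
# on squared loss; the matrix U.T @ U - V.T @ V should stay (approximately)
# constant along the trajectory.
rng = np.random.default_rng(0)
n, m, r = 5, 4, 3                   # input dim, output dim, hidden width
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
X = rng.standard_normal((n, 20))    # arbitrary training inputs
Y = rng.standard_normal((m, 20))    # arbitrary training targets

def conserved(U, V):
    """Candidate conservation law: U^T U - V^T V."""
    return U.T @ U - V.T @ V

C0 = conserved(U, V)
lr = 1e-3                           # small step: GD approximates the flow
for _ in range(5000):
    R = U @ V.T @ X - Y             # residual of the linear model
    G = R @ X.T / X.shape[1]        # gradient of the loss w.r.t. U V^T
    # Simultaneous update, matching dU/dt = -G V and dV/dt = -G^T U.
    U, V = U - lr * G @ V, V - lr * G.T @ U

# Close to 0, up to discretization error of order the step size.
print(np.linalg.norm(conserved(U, V) - C0))
```

Replacing the squared loss by any other differentiable loss of the product $UV^\top$ leaves the conclusion unchanged, which is exactly the data- and loss-independence required by the definition of a conservation law.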