The training dynamics of linear networks are well studied in two distinct setups, the lazy regime and the balanced/active regime, depending on the initialization and width of the network. We provide a surprisingly simple unifying formula for the evolution of the learned matrix that contains as special cases both the lazy and balanced regimes, as well as a mixed regime in between the two. In the mixed regime, part of the network is lazy while the rest is balanced. More precisely, the network is lazy along singular values that are below a certain threshold and balanced along those that are above it. At initialization, all singular values are lazy, allowing the network to align itself with the task, so that later in time, when some of the singular values cross the threshold and become active, they converge rapidly (convergence in the balanced regime is notoriously difficult in the absence of alignment). The mixed regime is the `best of both worlds': it converges from any random initialization (in contrast to balanced dynamics, which require special initialization), and it has a low-rank bias (absent in the lazy dynamics). This allows us to prove an almost complete phase diagram of training behavior as a function of the variance at initialization and the width, for an MSE training task.