The training dynamics of linear networks are well studied in two distinct setups: the lazy regime and the balanced/active regime, depending on the initialization and width of the network. We provide a surprisingly simple unifying formula for the evolution of the learned matrix that contains both the lazy and balanced regimes as special cases, as well as a mixed regime in between the two. In the mixed regime, part of the network is lazy while the rest is balanced. More precisely, the network is lazy along singular values that are below a certain threshold and balanced along those that are above it. At initialization, all singular values are lazy, allowing the network to align itself with the task, so that later in training, when some of the singular values cross the threshold and become active, they converge rapidly (convergence in the balanced regime is notoriously difficult in the absence of alignment). The mixed regime is the `best of both worlds': it converges from any random initialization (in contrast to balanced dynamics, which require special initialization), and it has a low-rank bias (absent in the lazy dynamics). This allows us to prove an almost complete phase diagram of training behavior as a function of the variance at initialization and the width, for an MSE training task.
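The two endpoint regimes can be illustrated numerically. The following is a minimal sketch (not the paper's formula): full-batch gradient descent on the MSE loss for a two-layer linear network, run once from a small initialization (balanced/active behavior) and once from a large one (lazy behavior). The target matrix, learning rate, and step counts are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Illustrative target with well-separated singular values.
target = np.diag([3.0, 2.0, 1.0, 0.5])

def train(init_scale, steps=10000, lr=0.01):
    """Gradient descent on L = ||W2 @ W1 - target||_F^2 / 2."""
    W1 = init_scale * rng.standard_normal((d, d))
    W2 = init_scale * rng.standard_normal((d, d))
    for _ in range(steps):
        err = W2 @ W1 - target
        W1 -= lr * (W2.T @ err)   # dL/dW1
        W2 -= lr * (err @ W1.T)   # dL/dW2
    # Singular values of the end-to-end learned matrix.
    return np.linalg.svd(W2 @ W1, compute_uv=False)

# Small initialization: active/balanced-type dynamics,
# singular values grow from near zero one threshold at a time.
sv_small = train(init_scale=1e-3)
# Large initialization: lazy-type dynamics,
# all directions move from the start.
sv_large = train(init_scale=1.0)
print("small init:", np.round(sv_small, 3))
print("large init:", np.round(sv_large, 3))
```

Both runs eventually recover the target's singular values; the qualitative difference between the regimes lies in the trajectory (incremental, low-rank growth for small initialization versus roughly linear motion of all directions for large initialization), which can be seen by logging the singular values during training.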