We investigate the training dynamics of two-layer neural networks when learning multi-index target functions. We focus on multi-pass gradient descent (GD), which reuses each batch multiple times, and show that it significantly changes the conclusions about which functions are learnable compared to single-pass gradient descent. In particular, multi-pass GD with finite stepsize is found to overcome the limitations of gradient flow and single-pass GD given by the information exponent (Ben Arous et al., 2021) and leap exponent (Abbe et al., 2023) of the target function. We show that, upon reusing batches, the network achieves in just two time steps an overlap with the target subspace even for functions not satisfying the staircase property (Abbe et al., 2021). We characterize the (broad) class of functions efficiently learned in finite time. The proof of our results is based on a Dynamical Mean-Field Theory (DMFT) analysis. We further provide a closed-form description of the dynamical process of the low-dimensional projections of the weights, and numerical experiments illustrating the theory.
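To make the single-pass versus multi-pass contrast concrete, here is a minimal numerical sketch, not the paper's experimental protocol: a two-layer ReLU network with a frozen second layer, trained by full-batch GD on a single-index target with information exponent 2 (y = He2(⟨x, w*⟩) = ⟨x, w*⟩² − 1). The architecture, sample size, stepsize, and number of steps are all illustrative choices of ours; the quantity tracked is the best overlap of a hidden weight vector with the teacher direction w*, and the qualitative gap between the two protocols may require tuning these hyperparameters.

```python
import numpy as np

# Illustrative sketch (our own toy setup, not the paper's):
# compare single-pass GD (a fresh batch at every step) with
# multi-pass GD (the same batch reused at every step) on a
# target with information exponent 2.

rng = np.random.default_rng(0)
d, p, n, lr, steps = 200, 8, 2000, 0.5, 4  # dimension, hidden units, batch size, stepsize, GD steps

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)

def target(X):
    z = X @ w_star
    return z**2 - 1.0  # Hermite He2: information exponent 2

def init():
    W = rng.standard_normal((p, d)) / np.sqrt(d)  # first-layer weights (trained)
    a = rng.standard_normal(p) / np.sqrt(p)       # second-layer weights (frozen)
    return W, a

def grad_step(W, a, X, y):
    Z = np.maximum(W @ X.T, 0.0)   # ReLU activations, shape (p, n)
    err = a @ Z - y                # residuals, shape (n,)
    # gradient of 0.5 * mean squared error with respect to W
    gW = ((a[:, None] * (Z > 0)) * err[None, :]) @ X / len(y)
    return W - lr * gW

def overlap(W):
    # largest alignment of a hidden unit with the teacher direction
    return np.max(np.abs(W @ w_star) / np.linalg.norm(W, axis=1))

# single-pass: draw a fresh batch at every step
W, a = init()
for _ in range(steps):
    X = rng.standard_normal((n, d))
    W = grad_step(W, a, X, target(X))
print("single-pass overlap:", overlap(W))

# multi-pass: reuse one batch at every step
W, a = init()
X = rng.standard_normal((n, d))
y = target(X)
for _ in range(steps):
    W = grad_step(W, a, X, y)
print("multi-pass overlap: ", overlap(W))
```

The point of the sketch is only the protocol difference: the two runs are identical except for whether the batch is redrawn, so any systematic gap in the printed overlaps after a few steps reflects the batch reuse, in the spirit of the two-step alignment result stated above.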