In this paper, we study nonparametric regression by an over-parameterized two-layer neural network trained by gradient descent (GD). We show that, if the neural network is trained by GD with early stopping, the trained network achieves a sharp rate of $\cO(\eps_n^2)$ for the nonparametric regression risk, the same rate as that of classical kernel regression trained by GD with early stopping, where $\eps_n$ is the critical population rate of the Neural Tangent Kernel (NTK) associated with the network and $n$ is the size of the training data. Notably, our result requires no distributional assumptions on the training data, in strong contrast to many existing results that rely on specific distributions, such as the spherical uniform data distribution or distributions satisfying certain restrictive conditions. The rate $\cO(\eps_n^2)$ is known to be minimax optimal in specific cases, for example when the NTK has a polynomial eigenvalue decay rate, which holds under certain distributional assumptions. Our result formally closes the gap between training a classical kernel regression model and training an over-parameterized but finite-width neural network by GD for nonparametric regression without distributional assumptions. We also provide affirmative answers to several open questions and address particular concerns in the literature on training over-parameterized neural networks by GD with early stopping for nonparametric regression, including the characterization of the stopping time, the lower bound on the network width, and the constant learning rate used in GD.
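The early-stopping procedure discussed above can be illustrated concretely. The sketch below runs GD with a constant step size on a kernel least-squares objective and stops once held-out error plateaus; it is a minimal illustration only, using a Gaussian kernel as a stand-in for the NTK and synthetic 1-D data (the paper's guarantee is distribution-free, but concrete samples are needed to run anything). All names and parameter values here are hypothetical choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (illustrative stand-in).
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def gauss_kernel(A, B, bw=0.5):
    """Gaussian kernel, used here as a stand-in for the NTK."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * bw**2))

Ktr = gauss_kernel(Xtr, Xtr)   # training Gram matrix
Kva = gauss_kernel(Xva, Xtr)   # validation-vs-training kernel

# GD on the kernel least-squares loss with a constant learning rate,
# stopped when held-out error stops improving (early stopping).
eta = 1.0 / len(ytr)                   # constant step size (illustrative)
alpha = np.zeros(len(ytr))             # dual coefficients: f(x) = sum_i alpha_i k(x, x_i)
best_alpha, best_val, patience = alpha.copy(), np.inf, 0
for t in range(2000):
    resid = Ktr @ alpha - ytr          # residual on the training sample
    alpha -= eta * resid               # one GD step in the dual coefficients
    val_mse = np.mean((Kva @ alpha - yva) ** 2)
    if val_mse < best_val - 1e-8:
        best_val, best_alpha, patience = val_mse, alpha.copy(), 0
    else:
        patience += 1
        if patience >= 50:             # validation error has plateaued: stop
            break

init_val = np.mean(yva ** 2)           # error of the zero initial predictor
print(best_val, init_val)
```

The number of GD steps taken before the stop plays the role of the regularization parameter: too few steps underfit, too many fit the noise, and the early-stopped iterate sits in between, which is the mechanism the paper's stopping-time characterization makes precise.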