In this paper we study nonparametric regression by an over-parameterized two-layer neural network trained by gradient descent (GD). We show that, if the network is trained by GD with early stopping, then the trained network achieves a sharp rate for the nonparametric regression risk of $\cO(\eps_n^2)$, the same rate as that of classical kernel regression trained by GD with early stopping, where $\eps_n$ is the critical population rate of the Neural Tangent Kernel (NTK) associated with the network and $n$ is the size of the training data. Notably, our result requires no distributional assumptions on the covariate beyond boundedness, in strong contrast with many existing results that rely on specific distributions of the covariates, such as the spherical uniform distribution or distributions satisfying certain restrictive conditions. The rate $\cO(\eps_n^2)$ is known to be minimax optimal in specific cases, for example when the NTK has a polynomial eigenvalue decay rate, which occurs under certain distributional assumptions on the covariates. Our result thus formally fills the gap between training a classical kernel regression model and training an over-parameterized, finite-width neural network by GD for nonparametric regression, without distributional assumptions on the bounded covariate. We also provide affirmative answers to open questions, and address particular concerns, in the literature on training over-parameterized neural networks by GD with early stopping for nonparametric regression, including the characterization of the stopping time, the lower bound on the network width, and the constant learning rate used in GD.
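For concreteness, the training scheme can be sketched as follows; this is a minimal sketch assuming the standard two-layer NTK parameterization with width $m$, activation $\sigma$, fixed outer weights $a_r$, and the empirical squared loss (the exact architecture, initialization, scaling, and stopping rule are those specified in the main text):
\[
f(x; W) \;=\; \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r\, \sigma\big(w_r^\top x\big),
\qquad
\widehat{L}_n(W) \;=\; \frac{1}{2n} \sum_{i=1}^{n} \big(f(x_i; W) - y_i\big)^2,
\]
\[
W^{(t+1)} \;=\; W^{(t)} - \eta\, \nabla_W \widehat{L}_n\big(W^{(t)}\big),
\qquad t = 0, 1, \ldots, \widehat{t} - 1,
\]
where GD is run with a constant learning rate $\eta$ and stopped at an early-stopping time $\widehat{t}$, after which the trained network $f(\cdot\,; W^{(\widehat{t}\,)})$ attains an excess risk of $\cO(\eps_n^2)$.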