In this paper, we study benign overfitting of fixed-width leaky ReLU two-layer neural network classifiers trained on mixture data via gradient descent. We provide both upper and lower classification error bounds, and discover a phase transition in the bound as a function of the signal strength. The lower bound leads to a characterization of cases in which benign overfitting provably fails even if directional convergence occurs. Our analysis allows us to considerably relax the distributional assumptions made in existing work on benign overfitting of leaky ReLU two-layer neural network classifiers: we allow for non-sub-Gaussian data and do not require near orthogonality. Our results are derived by establishing directional convergence of the network parameters and studying classification error bounds for the convergent direction. Previously, directional convergence in (leaky) ReLU neural networks was established only for gradient flow. By first establishing directional convergence, we are able to study benign overfitting of fixed-width leaky ReLU two-layer neural network classifiers in a much wider range of scenarios than was done before.
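To fix ideas, the following is a minimal sketch of the setting described above; the notation (width $m$, leaky ReLU slope $\gamma$, step size $\eta$, sample size $n$) is illustrative and may differ from the conventions adopted in the body of the paper. A fixed-width leaky ReLU two-layer classifier can be written as
\[
  f(x; W) \;=\; \sum_{j=1}^{m} a_j\, \phi\bigl(\langle w_j, x\rangle\bigr),
  \qquad \phi(z) = \max(\gamma z, z), \quad \gamma \in (0,1),
\]
with trainable first-layer weights $W = (w_1, \dots, w_m)$ and fixed second-layer signs $a_j$, trained on labeled mixture data $(x_i, y_i)_{i=1}^{n}$, $y_i \in \{-1, +1\}$, by full-batch gradient descent on the empirical logistic loss,
\[
  W^{(t+1)} \;=\; W^{(t)} - \eta\, \nabla_W \frac{1}{n} \sum_{i=1}^{n} \log\Bigl(1 + \exp\bigl(-y_i f(x_i; W^{(t)})\bigr)\Bigr).
\]
Benign overfitting then refers to the interpolating solution reached by this procedure nevertheless attaining small test classification error.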