The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that rate-optimal benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
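To make the two constructions mentioned above concrete, the following minimal Python/NumPy sketch shows one plausible form of a spiky-smooth kernel (a smooth component plus a small, extremely narrow spike component with large derivatives) and of an activation obtained by adding small high-frequency fluctuations to ReLU. The functional forms and the parameter names (rho, gamma_smooth, gamma_spike, eps, omega) are illustrative assumptions consistent with the abstract's description, not the paper's exact construction.

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    """Gaussian RBF kernel; larger gamma means a narrower kernel."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def spiky_smooth_kernel(x, y, rho=1e-2, gamma_smooth=1.0, gamma_spike=1e6):
    """Illustrative spiky-smooth kernel: a smooth component plus a small-amplitude,
    tiny-bandwidth 'spike' component that gives the estimator large derivatives."""
    return gaussian_kernel(x, y, gamma_smooth) + rho * gaussian_kernel(x, y, gamma_spike)

def spiky_smooth_relu(x, eps=1e-2, omega=1e3):
    """Illustrative modified activation: ReLU plus small high-frequency fluctuations."""
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + eps * np.sin(omega * x)
```

Under this kind of construction, the intuition is that the narrow spike component absorbs the label noise through highly localized fluctuations, while the smooth component determines the estimator's behavior away from the training points; the numerical values chosen here are placeholders for illustration only.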