Motivated by recent work on benign overfitting in overparameterized machine learning, we study the generalization behavior of functions in Sobolev spaces $W^{k, p}(\mathbb{R}^d)$ that perfectly fit a noisy training data set. Under assumptions of label noise and sufficient regularity in the data distribution, we show that approximately norm-minimizing interpolators, which are canonical solutions selected by smoothness bias, exhibit harmful overfitting: even as the training sample size $n \to \infty$, the generalization error remains bounded below by a positive constant with high probability. Our results hold for arbitrary values of $p \in [1, \infty)$, in contrast to prior results studying the Hilbert space case ($p = 2$) using kernel methods. Our proof uses a geometric argument which identifies harmful neighborhoods of the training data using Sobolev inequalities.