Common practice in modern machine learning involves fitting a large number of parameters relative to the number of observations. These overparameterized models can exhibit surprising generalization behavior, e.g., ``double descent'' in the prediction error curve when plotted against the raw number of model parameters, or another simplistic notion of complexity. In this paper, we revisit model complexity from first principles, by first reinterpreting and then extending the classical statistical concept of (effective) degrees of freedom. Whereas the classical definition is connected to fixed-X prediction error (in which prediction error is defined by averaging over the same, nonrandom covariate points as those used during training), our extension of degrees of freedom is connected to random-X prediction error (in which prediction error is averaged over a new, random sample from the covariate distribution). The random-X setting more naturally embodies modern machine learning problems, where highly complex models, even those complex enough to interpolate the training data, can still lead to desirable generalization performance under appropriate conditions. We demonstrate the utility of our proposed complexity measures through a mix of conceptual arguments, theory, and experiments, and illustrate how they can be used to interpret and compare arbitrary prediction models.
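To make the classical notion concrete: for a linear smoother such as OLS, the effective degrees of freedom equal the trace of the hat matrix $H = X(X^\top X)^{-1}X^\top$, which coincides with the covariance definition $\mathrm{df} = \tfrac{1}{\sigma^2}\sum_i \mathrm{Cov}(\hat y_i, y_i)$ under the fixed-X setting. The sketch below (an illustration of this standard fact, not code from the paper) checks both identities by simulation; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5

# Fixed design matrix: held constant across simulated responses,
# matching the fixed-X setting described in the abstract.
X = rng.standard_normal((n, p))

# Hat matrix H = X (X^T X)^{-1} X^T; for full-rank OLS,
# trace(H) = rank(X) = p, the classical degrees of freedom.
H = X @ np.linalg.solve(X.T @ X, X.T)
df_trace = np.trace(H)
print(round(df_trace, 6))  # -> 5.0

# Monte Carlo check of the covariance definition
# df = (1/sigma^2) * sum_i Cov(yhat_i, y_i), with y = X beta + eps.
beta = rng.standard_normal(p)
sigma = 1.0
reps = 5000
Y = X @ beta[:, None] + sigma * rng.standard_normal((n, reps))
Yhat = H @ Y
Yc = Y - Y.mean(axis=1, keepdims=True)
Yhc = Yhat - Yhat.mean(axis=1, keepdims=True)
df_mc = (Yc * Yhc).sum(axis=1).sum() / ((reps - 1) * sigma**2)
print(round(df_mc, 2))  # approximately 5, up to Monte Carlo error
```

The random-X extension discussed in the paper instead averages prediction error over fresh draws of the covariates, which this fixed-X sketch does not capture.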