From a model-building perspective, we propose a paradigm shift for fitting over-parameterized models. Philosophically, the mindset is to fit models to future observations rather than to the observed sample. Technically, given an imputation method for generating future observations, we fit over-parameterized models to these future observations by optimizing an approximation of the desired expected loss function, constructed from its sample counterpart and an adaptive $\textit{duality function}$. The required imputation method is developed with the same estimation technique, using an adaptive $m$-out-of-$n$ bootstrap approach. We illustrate the approach on the many-normal-means problem, $n < p$ linear regression, and neural network-based image classification of MNIST digits. The numerical results demonstrate superior performance across these diverse applications. While primarily expository, the paper also investigates the theoretical aspects of the topic in depth. It concludes with remarks on some open problems.