Cross-validation (CV) is known to provide asymptotically exact tests and confidence intervals for model improvement, but only when the model comparison is relatively stable. Surprisingly, we prove that even simple, individually stable models can generate relatively unstable comparisons, calling into question the validity of CV inference. Specifically, we show that the Lasso and its close cousin, soft-thresholding, generate relatively unstable comparisons and invalid CV inferences, even in the most favorable learning settings and even when both models are individually stable. These findings highlight the importance of verifying relative stability before deploying CV for model comparison.
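To make the phenomenon concrete, here is a minimal sketch (not the paper's exact construction) of the kind of instability at issue: a soft-thresholding estimator is compared against the plain sample mean via K-fold CV in a simple Gaussian mean-estimation problem. The threshold level `lam`, the fold count, and the data-generating settings are all illustrative assumptions. When the true mean sits near the threshold, small perturbations of the training folds flip the soft-thresholded estimate between zero and a nonzero value, destabilizing the fold-wise error differences that a naive CV test statistic relies on.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def cv_error_difference(y, K=5, lam=1.0, seed=0):
    """Per-fold CV estimates of err(soft-threshold) - err(sample mean)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, K)
    diffs = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        mu_hat = y[train].mean()               # plain sample mean
        mu_soft = soft_threshold(mu_hat, lam)  # shrunken estimate
        # Squared prediction error of each model on the held-out fold
        err_soft = np.mean((y[test] - mu_soft) ** 2)
        err_mean = np.mean((y[test] - mu_hat) ** 2)
        diffs.append(err_soft - err_mean)
    return np.array(diffs)

# Place the true mean near the threshold, the least stable regime for
# the comparison (illustrative settings, not the paper's experiments).
y = np.random.default_rng(1).normal(loc=1.0, scale=1.0, size=200)
diffs = cv_error_difference(y, K=5, lam=1.0)
t_stat = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
print("fold-wise error differences:", diffs)
print("naive CV t-statistic:", t_stat)
```

The point of the sketch is not the particular numbers but the mechanism: both estimators are individually stable (each changes little when one training point is perturbed), yet their *difference* in held-out error can fluctuate sharply across folds, which is what invalidates the naive CV inference.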