The choice of the tuning parameter is central to the Lasso's statistical performance in high-dimensional linear regression. Classical consistency theory identifies the rate of the Lasso tuning parameter, and numerous studies have established non-asymptotic guarantees. Nevertheless, the question of optimal tuning within a non-asymptotic framework has not yet been fully resolved. We establish thresholds on the tuning parameter above which the Lasso becomes inadmissible under mean squared prediction error: certain classical tuning choices yield Lasso estimators that are strictly dominated by a simple Lasso-Ridge refinement. We also examine how the structures of the design matrix and the noise vector influence this inadmissibility phenomenon.
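The abstract does not specify the exact form of the Lasso-Ridge refinement. A minimal sketch of one natural construction — refitting a ridge regression on the support selected by the Lasso — is given below; the tuning values `alpha_lasso` and `alpha_ridge` and the simulated design are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Simulated sparse high-dimensional regression (illustrative setup)
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.standard_normal(n)

# Step 1: Lasso fit with an (assumed) tuning parameter selects a support
alpha_lasso = 0.1
lasso = Lasso(alpha=alpha_lasso).fit(X, y)
support = np.flatnonzero(lasso.coef_)

# Step 2: ridge refit restricted to the Lasso-selected coordinates,
# shrinking less aggressively than the Lasso on the active set
alpha_ridge = 1.0
ridge = Ridge(alpha=alpha_ridge).fit(X[:, support], y)
beta_refined = np.zeros(p)
beta_refined[support] = ridge.coef_
```

Whether this refinement strictly dominates the plain Lasso in mean squared prediction error is precisely the kind of question the thresholds in the paper address; the sketch only shows the mechanical two-stage construction.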