I review some of the main methods for selecting tuning parameters in nonparametric and $\ell_1$-penalized estimation. For nonparametric estimation, I consider the methods of Mallows, Stein, Lepski, cross-validation, penalization, and aggregation in the context of series estimation. For $\ell_1$-penalized estimation, I consider methods based on the theory of self-normalized moderate deviations, the bootstrap, Stein's unbiased risk estimation, and cross-validation in the context of Lasso estimation. I explain the intuition behind each of the methods, discuss their comparative advantages, and give some extensions.
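As a concrete illustration of one of the methods listed above, the sketch below (not taken from the paper) shows how cross-validation is commonly used to pick the Lasso penalty level in practice. It uses scikit-learn's `LassoCV` on simulated sparse-regression data; the sample size, dimension, and sparsity level are illustrative choices, not values from the text.

```python
# Minimal sketch: selecting the Lasso penalty by cross-validation.
# All data-generating choices (n, p, sparsity) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0  # sparse true coefficient vector: 5 active covariates
y = X @ beta + rng.standard_normal(n)

# LassoCV fits the Lasso along a grid of penalty levels and selects
# the one minimizing the cross-validated prediction error.
model = LassoCV(cv=5).fit(X, y)
print(model.alpha_)  # the selected penalty level
```

In practice the cross-validated penalty tends to be smaller than the level suggested by theory, so it yields good prediction but selects more covariates than the true sparse support.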