Model selection in penalized regression critically depends on an accurate assessment of model complexity, commonly quantified through the effective degrees of freedom. While the Lasso admits a simple and unbiased characterization, given by the size of the active set, this property does not extend to adaptive penalization methods, despite the widespread use of this approximation in practice. To address this issue, we derive a novel unbiased estimator of the effective degrees of freedom for the Adaptive Lasso within Stein's unbiased risk estimation (SURE) framework. Our analysis reveals additional terms induced by data-dependent penalization, reflecting the role of adaptive weights and regularization in determining model complexity. We further revisit the Group Lasso, providing an alternative derivation of its degrees of freedom, and extend these results to the Adaptive Group Lasso. Importantly, we characterize the behavior of the degrees of freedom along the regularization path beyond the orthonormal design setting commonly assumed in the literature, providing a new theoretical description of this behavior under general design matrices. By correcting the common misuse of active-set size as a proxy for degrees of freedom, our results enable more reliable risk estimation and inference, offering a rigorous foundation for understanding model complexity in adaptive penalized regression.
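The baseline fact the abstract builds on, that the Lasso's degrees of freedom are unbiasedly estimated by the active-set size, can be checked numerically in the simplest setting. The sketch below (not from the paper; all names and parameter values are illustrative) uses an identity design, where the Lasso solution reduces to coordinatewise soft-thresholding, and compares the Stein/covariance definition of degrees of freedom with the expected active-set size by Monte Carlo.

```python
import numpy as np

# Illustrative sketch, NOT the paper's estimator: under an identity
# (orthonormal) design, the Lasso fit is soft-thresholding of y, and
# Stein's identity gives df = E[#active coordinates]. We estimate both
# sides by simulation and check that they agree.

rng = np.random.default_rng(0)
beta = np.array([3.0, 1.5, 0.5, 0.0, 0.0])  # true mean vector (hypothetical)
sigma, lam, B = 1.0, 0.7, 20000             # noise sd, penalty level, replications

# B draws of y ~ N(beta, sigma^2 I); each row is one replication
Y = beta + sigma * rng.standard_normal((B, beta.size))
# Lasso fit under identity design: coordinatewise soft-thresholding at lam
Yhat = np.sign(Y) * np.maximum(np.abs(Y) - lam, 0.0)

# Degrees of freedom via the covariance definition:
#   df = (1/sigma^2) * sum_i Cov(yhat_i, y_i)
df_cov = sum(np.cov(Yhat[:, i], Y[:, i])[0, 1]
             for i in range(beta.size)) / sigma**2
# Degrees of freedom via the active-set proxy: E[#{i : yhat_i != 0}]
df_active = np.mean(np.sum(Yhat != 0, axis=1))

print(df_cov, df_active)  # the two estimates agree up to Monte Carlo error
```

For the plain Lasso the two quantities match, which is exactly the property the abstract notes fails for adaptive weights: once the penalty depends on the data, the covariance definition picks up extra terms that the active-set count misses.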