We introduce Renet, a principled generalization of the Relaxed Lasso to the Elastic Net family of estimators. $\ell_1$-regularization is a standard tool for variable selection in high-dimensional regimes, while the $\ell_2$ penalty provides stability and solution uniqueness through strict convexity; the standard Elastic Net nevertheless suffers from shrinkage bias that frequently degrades prediction accuracy. We address this limitation through a framework called \textit{relaxation}. Existing relaxation implementations rely on a naive linear interpolation of the penalized and unpenalized solutions, which ignores the non-linear geometry of the regularization path and risks violating the Karush-Kuhn-Tucker (KKT) conditions. Renet addresses these shortcomings by enforcing sign consistency through an adaptive relaxation procedure that dynamically dispatches between convex blending and efficient sub-path refitting. We further identify and formalize a synergy between relaxation and the ``one-standard-error'' (1-SE) rule: relaxation acts as a robust debiasing mechanism, letting practitioners exploit the parsimony of the 1-SE rule without the usual loss of predictive fidelity. Our theoretical framework incorporates automated stability safeguards for ultra-high-dimensional regimes and is supported by a comprehensive benchmarking suite spanning 20 synthetic and real-world datasets, which shows that Renet consistently outperforms the standard Elastic Net and provides a more robust alternative to the Adaptive Elastic Net in high-dimensional, low signal-to-noise, high-multicollinearity regimes. By leveraging an adaptive solver backend, Renet delivers these statistical gains at a computational cost competitive with state-of-the-art coordinate descent implementations.
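For reference, the naive linear interpolation criticized above can be sketched as a convex combination of the penalized path solution and its unpenalized refit. The notation below is ours, chosen for illustration (it mirrors the relaxed-fit convention popularized by glmnet), and is not taken from the paper:

```latex
% A sketch of the linear-interpolation relaxation; symbols are ours.
% \hat{\beta}(\lambda): Elastic-Net solution at penalty level \lambda.
% \tilde{\beta}(\lambda): unpenalized least-squares refit restricted to
% the active set of \hat{\beta}(\lambda).
\[
  \hat{\beta}_{\mathrm{relax}}(\lambda, \gamma)
    \;=\; \gamma\, \hat{\beta}(\lambda)
    \;+\; (1 - \gamma)\, \tilde{\beta}(\lambda),
  \qquad \gamma \in [0, 1].
\]
```

Because this blend is taken coordinate-wise, nothing prevents a blended coefficient from crossing zero when the two solutions disagree in sign, which is the sign-inconsistency failure mode the abstract refers to.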
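To make the dispatch idea concrete, here is a minimal Python sketch of an adaptive relaxation step under our own assumptions: it accepts the convex blend only when the blend preserves the signs and support of the penalized solution, and otherwise signals that a refit on the active set is required. All function names are hypothetical; this is not Renet's actual implementation.

```python
import numpy as np

def relax_blend(beta_pen, beta_unpen, gamma):
    """Naive convex blend of penalized and unpenalized coefficient vectors."""
    return gamma * beta_pen + (1.0 - gamma) * beta_unpen

def adaptive_relax(beta_pen, beta_unpen, gamma):
    """Hypothetical dispatch: return the blend if it is sign- and
    support-consistent with the penalized solution; otherwise fall back
    to the penalized solution and flag that a sub-path refit is needed."""
    blended = relax_blend(beta_pen, beta_unpen, gamma)
    active = beta_pen != 0
    # Blending must not flip the sign of any active coefficient...
    sign_ok = np.all(np.sign(blended[active]) == np.sign(beta_pen[active]))
    # ...nor resurrect variables outside the active set.
    support_ok = np.all(blended[~active] == 0)
    if sign_ok and support_ok:
        return blended, "blend"
    return beta_pen, "refit"  # placeholder for an active-set refit
```

In this sketch the "refit" branch simply returns the penalized solution as a placeholder; in a real implementation it would trigger the efficient sub-path refitting the abstract describes.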