Tuning the regularization parameter in penalized regression models is computationally expensive, requiring multiple models to be fit along a path of parameters. Strong screening rules drastically reduce this cost by lowering the dimensionality of the input prior to fitting. We develop strong screening rules for group-based Sorted L-One Penalized Estimation (SLOPE) models: group SLOPE and sparse-group SLOPE. The developed rules also apply to the wider family of group-based OWL models, including OSCAR. Our experiments on both synthetic and real data show that the screening rules significantly accelerate the fitting process, making group SLOPE and sparse-group SLOPE practical for high-dimensional datasets, particularly those encountered in genetics.
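To illustrate the general idea behind strong screening, the sketch below implements the classic strong rule for the lasso (Tibshirani et al.): when moving from penalty λ_prev to λ_new along the path, a feature is provisionally discarded if its absolute correlation with the current residual falls below 2·λ_new − λ_prev. This is the well-known single-feature rule, shown here only as a minimal analogue; the group-based SLOPE rules developed in the paper operate on sorted, group-level norms and are more involved. The function name and arguments are illustrative, not from the paper.

```python
import numpy as np

def strong_rule_keep_mask(X, residual, lam_prev, lam_new):
    """Classic lasso strong screening rule (illustrative sketch).

    Keeps feature j if |x_j^T r| >= 2*lam_new - lam_prev,
    i.e. discards features unlikely to be active at lam_new.
    """
    corr = np.abs(X.T @ residual)          # correlation of each feature with residual
    return corr >= 2.0 * lam_new - lam_prev  # boolean mask of features to keep

# Toy example: two orthogonal features, one strongly correlated with the residual.
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
residual = np.array([3.0, 0.1])
keep = strong_rule_keep_mask(X, residual, lam_prev=1.0, lam_new=0.6)
# The model at lam_new would then be fit only on X[:, keep],
# with a KKT check afterwards to catch any wrongly discarded features.
```

In practice the screened fit must be followed by a check of the optimality (KKT) conditions on the discarded features, since strong rules are heuristic and can occasionally discard an active feature.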