Feature subsampling is a core component of random forests and other ensemble methods. While recent theory suggests that this randomization acts solely as a variance reduction mechanism analogous to ridge regularization, these results largely rely on base learners optimized via ordinary least squares. We investigate the effects of feature subsampling on greedy forward selection, a model that better captures the adaptive nature of decision trees. Assuming an orthogonal design, we prove that ensembling with feature subsampling can reduce both bias and variance, contrasting with the pure variance reduction of convex base learners. More precisely, we show that both the training error and degrees of freedom can be non-monotonic in the subsampling rate, breaking the analogy with standard shrinkage methods like the lasso or ridge regression. Furthermore, we characterize the exact asymptotic behavior of the estimator, showing that it adaptively reweights OLS coefficients based on their rank, with weights that are well-approximated by a logistic function. These results elucidate the distinct role of algorithmic randomization when interleaved with greedy optimization.
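To make the mechanism concrete, here is a minimal Python sketch of ensembled greedy forward selection with feature subsampling under an orthogonal design. It is an illustration under assumptions, not the paper's exact algorithm: the parameter values (n, p, k_steps, alpha, B, the decaying signal) are invented for the example, and the sampling scheme (a fresh random candidate subset of the remaining features at each greedy step) is one plausible variant. The key simplification it exploits is that under an orthogonal design the residual correlations never change, so each base learner reduces to rank-based selection on the OLS coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumptions, not values from the paper):
n, p = 200, 50        # samples, features
k_steps = 20          # greedy forward-selection steps per base learner
alpha, B = 0.5, 500   # feature-subsampling rate, ensemble size

# Orthogonal design: Q from a QR decomposition has orthonormal columns,
# so X^T X = I_p and the OLS solution is simply X^T y.
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta_true = np.zeros(p)
beta_true[:10] = np.linspace(2.0, 0.2, 10)  # decaying sparse signal
y = X @ beta_true + rng.standard_normal(n)
beta_ols = X.T @ y

def greedy_forward_subsampled(beta_ols, alpha, k, rng):
    """One base learner: greedy forward selection where each step may only
    choose among a fresh random subset of the remaining features (one
    plausible subsampling variant). Under an orthogonal design, residual
    correlations never change, so selecting feature j simply assigns it
    its OLS coefficient."""
    p = beta_ols.shape[0]
    m = int(np.ceil(alpha * p))
    remaining = np.arange(p)
    coef = np.zeros(p)
    for _ in range(k):
        cand = rng.choice(remaining, size=min(m, remaining.size), replace=False)
        j = cand[np.argmax(np.abs(beta_ols[cand]))]  # strongest candidate this step
        coef[j] = beta_ols[j]
        remaining = remaining[remaining != j]
    return coef

# Ensemble estimator: average of B randomized greedy fits.
ensemble = np.mean(
    [greedy_forward_subsampled(beta_ols, alpha, k_steps, rng) for _ in range(B)],
    axis=0,
)

# Each ensemble coefficient is a shrunken OLS coefficient; the empirical
# weight for feature j estimates its selection probability, which depends
# on the rank of |beta_ols[j]| rather than on its magnitude directly.
weights = ensemble / beta_ols
ranks = np.argsort(np.argsort(-np.abs(beta_ols)))  # rank 0 = largest |coef|
for r, w in sorted(zip(ranks, weights)):
    print(f"rank {r:2d}: empirical weight {w:.3f}")
```

Plotting the empirical weight against rank in a simulation of this kind should produce the sigmoid-shaped profile that the abstract's logistic approximation refers to: weights near 1 for top-ranked coefficients, decaying toward 0 for low-ranked ones.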