Ensemble learning is a popular technique to improve the accuracy of machine learning models. It traditionally hinges on the rationale that aggregating multiple weak models can lead to better models with lower variance and hence higher stability, especially for discontinuous base learners. In this paper, we provide a new perspective on ensembling. By selecting the most frequently generated model from the base learner when it is repeatedly applied to subsamples, we can attain exponentially decaying tails for the excess risk, even if the base learner suffers from slow (i.e., polynomial) decay rates. This tail-enhancement power of ensembling applies to base learners that have reasonable predictive power to begin with, and it is stronger than variance reduction in the sense that it yields a rate improvement. We demonstrate how our ensemble methods can substantially improve out-of-sample performance in a range of numerical examples involving heavy-tailed data or intrinsically slow rates.
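As a rough illustration of the mode-selection idea described above, the following Python sketch refits a base learner on random subsamples and returns the model it produces most often. This is not the paper's exact procedure: the function name `mode_ensemble`, the toy `best_feature` learner, and the assumption that the base learner outputs a hashable (discrete) model representation are all illustrative choices made here for the sketch.

```python
import numpy as np
from collections import Counter

def mode_ensemble(fit_base_learner, X, y, n_subsamples=100, subsample_frac=0.5, seed=None):
    """Refit the base learner on random subsamples and return the model it
    produces most often. Assumes the learner returns a hashable (discrete)
    model representation so that 'most frequently generated' is well-defined."""
    rng = np.random.default_rng(seed)
    n = len(y)
    k = int(subsample_frac * n)
    models = []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=k, replace=False)  # subsample without replacement
        models.append(fit_base_learner(X[idx], y[idx]))
    return Counter(models).most_common(1)[0][0]  # the modal model across subsamples

# Toy base learner: return the index of the feature most correlated with y.
def best_feature(X, y):
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return int(np.argmax(corrs))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 2] + 0.5 * rng.normal(size=200)
print(mode_ensemble(best_feature, X, y, seed=1))  # typically prints 2
```

In this sketch each subsample refit acts as a "vote" for a discrete candidate model, so occasional bad fits on unlucky subsamples are outvoted rather than averaged in, which is the intuition behind the tail-enhancement claim.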