Under-bagging (UB), which combines under-sampling and bagging, is a popular ensemble learning method for training classifiers on imbalanced data. Using bagging to reduce the variance inflated by the reduction in sample size due to under-sampling is a natural approach. However, it has recently been pointed out that in generalized linear models, naive bagging, which ignores the class imbalance structure, and ridge regularization can produce the same results. It is therefore not obvious whether UB, whose computational cost grows in proportion to the number of under-sampled data sets, is worth using when training linear models. Motivated by this, we heuristically derive sharp asymptotics for UB and use them to compare it with several other popular methods for learning from imbalanced data, in the scenario where a linear classifier is trained on two-component mixture data. The methods compared include the under-sampling (US) method, which trains a model on a single realization of the under-sampled data, and the simple weighting (SW) method, which trains a model with a weighted loss on the entire data set. We show that the performance of UB improves as the size of the majority class grows with the size of the minority class held fixed, even when the class imbalance is large, and especially when the minority class is small. This contrasts with US, whose performance is almost independent of the majority class size. In this sense, bagging and simple regularization differ as methods for reducing the variance inflated by under-sampling. On the other hand, the performance of SW with the optimal weighting coefficients is almost equal to that of UB, indicating that the combination of reweighting and regularization may behave similarly to UB.
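The three procedures compared above can be made concrete with a minimal sketch. The following code is an illustrative toy implementation, not the paper's experimental setup: it uses a hand-rolled gradient-descent logistic regression on synthetic two-component Gaussian mixture data, with hypothetical sizes and hyperparameters (`n_maj`, `n_min`, `lr`, `K`) chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)


def fit_logistic(X, y, lr=0.1, steps=500, sample_weight=None):
    """Plain gradient descent on the (optionally weighted) logistic loss."""
    n, d = X.shape
    if sample_weight is None:
        sample_weight = np.ones(n)
    beta = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted P(y=1)
        grad = X.T @ (sample_weight * (p - y)) / n   # weighted loss gradient
        beta -= lr * grad
    return beta


# Two-component Gaussian mixture: majority class at -mu, minority at +mu.
n_maj, n_min, d = 900, 100, 5
mu = np.ones(d) / np.sqrt(d)
X = np.vstack([rng.normal(-mu, 1.0, (n_maj, d)),
               rng.normal(+mu, 1.0, (n_min, d))])
y = np.concatenate([np.zeros(n_maj), np.ones(n_min)])


def undersample(rng):
    """Keep all minority samples, draw an equal-sized majority subset."""
    idx_maj = rng.choice(n_maj, size=n_min, replace=False)
    idx = np.concatenate([idx_maj, np.arange(n_maj, n_maj + n_min)])
    return X[idx], y[idx]


# US: train once on a single under-sampled realization.
beta_us = fit_logistic(*undersample(rng))

# UB: average the classifiers trained on K independent realizations.
K = 10
beta_ub = np.mean([fit_logistic(*undersample(rng)) for _ in range(K)], axis=0)

# SW: weighted loss on the full data, minority up-weighted by class ratio.
w = np.where(y == 1, n_maj / n_min, 1.0)
beta_sw = fit_logistic(X, y, sample_weight=w)
```

All three estimators are linear classifiers on the same data; the asymptotic comparison in the paper concerns how their generalization performance depends on the class sizes, which this sketch only illustrates at finite scale.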