Synthetic Minority Oversampling Technique (SMOTE) is a common rebalancing strategy for handling imbalanced tabular data sets. However, few works analyze SMOTE theoretically. In this paper, we derive several non-asymptotic upper bounds on the SMOTE density. From these results, we prove that SMOTE (with its default parameter) asymptotically tends to copy the original minority samples. We confirm and illustrate this first theoretical behavior empirically on a real-world data set. Furthermore, we prove that the SMOTE density vanishes near the boundary of the support of the minority class distribution. We then adapt SMOTE based on our theoretical findings and introduce two new variants. These strategies are compared with 10 state-of-the-art rebalancing procedures, including deep generative and diffusion models, on 13 tabular data sets. One of our key findings is that, for most data sets, applying no rebalancing strategy is competitive in terms of predictive performance, be it with LightGBM, tuned random forests, or logistic regression. However, when the imbalance ratio is artificially increased, one of our two modifications of SMOTE leads to promising predictive performance compared to SMOTE and the other state-of-the-art strategies.
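For context, SMOTE builds each synthetic observation by interpolating linearly between a minority sample and one of its k nearest minority neighbors (k = 5 by default, as in imblearn.over_sampling.SMOTE). The sketch below is a minimal NumPy illustration of that interpolation step, not the paper's implementation; the function name smote_sample and its signature are ours. It makes the mechanism behind the near-duplication result visible: as the minority sample size grows, the nearest neighbors concentrate around each seed point, so the synthetic points collapse onto the original samples.

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=100, rng=None):
    """Minimal sketch of SMOTE's interpolation step (k=5 mirrors the usual default).

    Each synthetic point lies on the segment between a minority sample and one
    of its k nearest minority neighbors, at a uniformly drawn position.
    """
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbors per point
    seeds = rng.integers(0, n, size=n_new)       # pick a seed minority sample
    picks = nn[seeds, rng.integers(0, k, size=n_new)]  # one of its k neighbors
    lam = rng.uniform(0.0, 1.0, size=(n_new, 1))       # interpolation weight
    return X_min[seeds] + lam * (X_min[picks] - X_min[seeds])
```

With fixed k and growing n, the segments X_min[picks] - X_min[seeds] shrink toward zero, which is one way to read the copying behavior the bounds above formalize.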