Imbalanced data, in which positive samples account for only a small proportion relative to negative samples, makes it challenging for classification methods to balance false positive and false negative rates. A common remedy is to generate synthetic data for the minority group and then train classification models on both the observed and synthetic data. However, because the synthetic data depend on the observed data and fail to replicate the original data distribution exactly, naively treating the synthetic data as true data reduces prediction accuracy. In this paper, we characterize the bias introduced by synthetic data and construct consistent estimators of this bias by borrowing information from the majority group. We propose a bias-correction procedure that mitigates the adverse effects of synthetic data, improving prediction accuracy while avoiding overfitting. The procedure extends to broader imbalanced-data settings, such as imbalanced multi-task learning and causal inference. Theoretical properties, including bounds on the bias estimation error and improvements in prediction accuracy, are established. Simulation studies and an analysis of handwritten digit datasets demonstrate the effectiveness of our method.