The Bayes Error Rate (BER) is the fundamental limit on the achievable generalization accuracy of any machine learning classifier, imposed by the inherent uncertainty in the data. BER estimators offer insight into the difficulty of a classification problem and set expectations for optimal classification performance. To be useful, an estimator must also be accurate with a limited number of samples on multivariate problems with unknown class distributions. To determine which estimators meet these minimum requirements for usefulness, an in-depth examination of their accuracy is conducted using Monte Carlo simulations with synthetic data, yielding confidence bounds for binary classification. To examine the usability of the estimators in real-world applications, new non-linear multi-modal test scenarios are introduced, and 2500 Monte Carlo simulations are run per scenario over a wide range of BER values. In a comparison of k-Nearest Neighbor (kNN), Generalized Henze-Penrose (GHP) divergence, and Kernel Density Estimation (KDE) techniques, the results show that kNN is by far the most accurate non-parametric estimator. To keep the 95% confidence bounds within a 5% range, a minimum of 1000 samples per class is required. As features are added, more samples are needed: at only 4 features, 2500 samples per class are already required. Although other estimators become more accurate than kNN as more features are added, they consistently fail to meet the target range.
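For reference, the BER in the binary case has a standard closed-form expression (a known identity, not specific to this paper): with class priors $\pi_1, \pi_2$ and class-conditional densities $f_1, f_2$,

\[
\mathrm{BER} \;=\; \mathbb{E}_{x}\!\left[\,1 - \max_{c} P(c \mid x)\,\right] \;=\; \int \min\!\big(\pi_1 f_1(x),\, \pi_2 f_2(x)\big)\, dx .
\]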
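As a concrete illustration of the kind of non-parametric estimator being compared, the following is a minimal sketch of a generic kNN posterior plug-in BER estimator, checked against a synthetic Gaussian problem whose true BER is known in closed form. The function name knn_ber_estimate, the choice k=25, and the use of scikit-learn are illustrative assumptions, not the paper's exact estimator; the leave-one-out neighbor lookup avoids the bias of each point matching itself.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neighbors import NearestNeighbors

def knn_ber_estimate(X, y, k=25):
    """Plug-in BER estimate: mean of (1 - max posterior), with posteriors
    taken as label fractions among each point's k nearest neighbors
    (leave-one-out, so the query point itself is excluded)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                   # idx[:, 0] is the point itself
    p1 = y[idx[:, 1:]].mean(axis=1)             # estimated P(class 1 | x)
    return float(np.mean(np.minimum(p1, 1.0 - p1)))

# Synthetic check against a known ground truth: two unit-variance 1-D
# Gaussians centred at -mu and +mu with equal priors have BER = Phi(-mu).
rng = np.random.default_rng(0)
n, mu = 1000, 1.0
X = np.concatenate([rng.normal(-mu, 1, n), rng.normal(mu, 1, n)]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])
print(knn_ber_estimate(X, y), norm.cdf(-mu))    # estimate vs. true BER ~0.159
```

Running such a sketch repeatedly over freshly drawn datasets is the essence of the Monte Carlo evaluation described above: the spread of the estimates across repetitions yields the empirical confidence bounds against which the 5% target range is judged.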