With the increasing deployment of machine learning models in many socially sensitive tasks, there is a growing demand for reliable and trustworthy predictions. One way to meet this demand is to allow a model to abstain from making a prediction when there is a high risk of error. This requires adding a selection mechanism to the model, which selects the examples for which the model will provide a prediction. The selective classification framework aims to design a mechanism that balances the fraction of rejected predictions (i.e., the proportion of examples for which the model does not make a prediction) against the improvement in predictive performance on the selected predictions. Multiple selective classification frameworks exist, most of which rely on deep neural network architectures. However, the empirical evaluation of existing approaches is still limited to partial comparisons among methods and settings, providing practitioners with little insight into their relative merits. We fill this gap by benchmarking 18 baselines on a diverse set of 44 datasets that includes both image and tabular data and covers binary as well as multiclass tasks. We evaluate these approaches using several criteria, including selective error rate, empirical coverage, the class distribution of rejected instances, and performance on out-of-distribution instances. The results indicate that there is no single clear winner among the surveyed baselines, and the best method depends on the users' objectives.
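To make the coverage versus selective-error trade-off concrete, the sketch below implements a minimal confidence-threshold selector (the common softmax-response baseline) on synthetic predictions. It is an illustrative example only, not the evaluation protocol or any specific baseline from this benchmark; the variable names and the synthetic data are hypothetical.

```python
# Minimal sketch of selective classification via a confidence threshold
# (softmax-response). All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs: class probabilities for n examples, k classes.
n, k = 1000, 3
logits = rng.normal(size=(n, k))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Synthetic labels that agree with the top prediction with probability equal
# to its confidence, so rejecting low-confidence examples lowers the error.
labels = np.where(rng.random(n) < probs.max(axis=1),
                  probs.argmax(axis=1),
                  rng.integers(0, k, size=n))

confidence = probs.max(axis=1)    # selection score: top-class probability
predictions = probs.argmax(axis=1)

for tau in (0.0, 0.5, 0.7):
    selected = confidence >= tau  # selection mechanism: predict only if confident enough
    coverage = selected.mean()    # empirical coverage: fraction of accepted examples
    if selected.any():
        selective_error = (predictions[selected] != labels[selected]).mean()
    else:
        selective_error = float("nan")
    print(f"tau={tau:.1f}  coverage={coverage:.2f}  selective_error={selective_error:.2f}")
```

Raising the threshold `tau` trades coverage for a lower selective error rate; the benchmarked methods differ mainly in how the selection score is obtained and how the threshold is set for a target coverage.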