Selective classification frameworks are useful tools for automated decision making in high-risk scenarios: they allow a classifier to make only high-confidence decisions and to abstain when it is not confident enough, an outcome known as an indecision. For a given level of classification accuracy, we aim to make as many decisions as possible. For many problems, this can be achieved without abstaining at all. But when the problem is hard enough, we show that the misclassification rate of a classifier can still be controlled at any user-specified level, even one below the Bayes optimal error rate, while abstaining from only the minimum necessary number of decisions. In many problem settings, the user can obtain a dramatic decrease in misclassification while paying a comparatively small price in indecisions.
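The abstain-or-decide mechanism described above can be illustrated with a minimal sketch. This is not the paper's method, only a standard confidence-thresholding rule: the classifier outputs class probabilities, decides via argmax when the top probability clears a threshold `tau` (a hypothetical parameter here), and otherwise abstains. The toy scores and labels below are invented for illustration.

```python
import numpy as np

def selective_predict(probs, tau):
    """Predict the argmax class when confidence >= tau, else abstain (-1)."""
    conf = probs.max(axis=1)          # top-class probability per example
    preds = probs.argmax(axis=1)      # ordinary (non-selective) prediction
    return np.where(conf >= tau, preds, -1)

# Toy probabilities for 5 examples over 2 classes (illustrative only).
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90],
                  [0.60, 0.40],
                  [0.30, 0.70]])
y = np.array([0, 1, 1, 0, 1])         # true labels

preds = selective_predict(probs, tau=0.65)
decided = preds != -1
indecision_rate = 1 - decided.mean()                  # fraction abstained
error_on_decided = (preds[decided] != y[decided]).mean()  # error among decisions
```

Raising `tau` trades more indecisions for a lower error rate on the decisions that are made; the abstract's claim is that this trade-off can be tuned so the error among decisions meets any user-specified target.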