Selective classification is a powerful tool for automated decision-making in high-risk scenarios: the classifier acts only when it is confident and abstains when uncertainty is high. Given a target accuracy, our goal is to minimize indecisions, i.e., the observations we do not automate. For difficult problems, the target accuracy may be unattainable without abstention. By allowing indecisions, we can control the misclassification rate at any user-specified level, even below the Bayes optimal error rate, while minimizing the overall indecision mass. We give a complete characterization of the minimax risk in selective classification, establishing continuity and monotonicity properties that enable optimal indecision selection. We revisit selective inference through the Neyman-Pearson testing framework, where indecision enables control of the type 2 error at a fixed type 1 error probability. For both classification and testing, we propose a finite-sample calibration method with non-asymptotic guarantees, proving that plug-in classifiers remain consistent and that accuracy-based calibration effectively controls the indecision mass. In the binary Gaussian mixture model, we uncover the first sharp phase transition in selective inference, showing that a minimal amount of indecision can yield near-optimal accuracy even under poor class separation. Experiments on Gaussian mixtures and on real datasets confirm that small indecision proportions yield substantial accuracy gains, making indecision a principled tool for risk control.
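The idea of accuracy-based calibration described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: it assumes a binary Gaussian mixture with known, symmetric unit-variance components (so the plug-in posterior is available in closed form), and picks the smallest confidence threshold on a holdout split whose accepted points reach a hypothetical target accuracy of 0.95, which lies above the Bayes accuracy for the chosen separation. All variable names and the threshold-scan strategy are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary Gaussian mixture: class 0 ~ N(-mu, 1), class 1 ~ N(+mu, 1), equal priors.
mu = 0.8  # poor separation: Bayes accuracy is only about 0.79 here
n = 20000
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=mu * (2 * y - 1), scale=1.0)

# Plug-in rule: exact posterior P(Y=1 | x) = sigmoid(2*mu*x) for this model.
post1 = 1.0 / (1.0 + np.exp(-2.0 * mu * x))
conf = np.maximum(post1, 1.0 - post1)   # confidence of the plug-in prediction
pred = (post1 >= 0.5).astype(int)

# Accuracy-based calibration: on a holdout split, find the smallest confidence
# threshold tau whose accepted (non-abstained) points meet the target accuracy.
# The induced rejection rate is the indecision proportion.
target_acc = 0.95  # above Bayes accuracy, so abstention is necessary
idx = rng.permutation(n)
cal, test = idx[: n // 2], idx[n // 2:]

tau = 1.0
for t in np.quantile(conf[cal], np.linspace(0.0, 0.99, 100)):
    accepted = conf[cal] >= t
    if accepted.any() and (pred[cal][accepted] == y[cal][accepted]).mean() >= target_acc:
        tau = t
        break

# Evaluate on the disjoint test split.
accept = conf[test] >= tau
indecision = 1.0 - accept.mean()
sel_acc = (pred[test][accept] == y[test][accept]).mean()
print(f"indecision proportion: {indecision:.3f}, selective accuracy: {sel_acc:.3f}")
```

Because the plug-in posterior is exactly calibrated in this model, the accuracy among accepted points tracks the threshold closely, so the scan finds a threshold achieving the target while abstaining only on the ambiguous middle region of the input space.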