Based on existing ideas in the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers learned from small training sets sampled from a shifted distribution.