To build robust, fair, and safe AI systems, we would like our classifiers to say ``I don't know'' when facing test examples that are difficult or fall outside the training classes. The ubiquitous strategy for predicting under uncertainty is the simplistic \emph{reject-or-classify} rule: abstain from prediction if epistemic uncertainty is high, classify otherwise. Unfortunately, this recipe does not allow different sources of uncertainty to communicate with each other, produces miscalibrated predictions, and provides no way to correct for misspecifications in our uncertainty estimates. To address these three issues, we introduce \emph{unified uncertainty calibration (U2C)}, a holistic framework to combine aleatoric and epistemic uncertainties. U2C enables a clean learning-theoretical analysis of uncertainty estimation, and outperforms reject-or-classify across a variety of ImageNet benchmarks. Our code is available at: https://github.com/facebookresearch/UnifiedUncertaintyCalibration
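The reject-or-classify baseline mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name, inputs, and threshold value are assumptions made here for clarity.

```python
def reject_or_classify(probs, epistemic_uncertainty, threshold=0.5):
    # Baseline reject-or-classify rule: abstain when epistemic
    # uncertainty exceeds a threshold, otherwise predict the argmax
    # of the class probabilities. The threshold is an illustrative
    # assumption; in practice it would be tuned on held-out data.
    if epistemic_uncertainty > threshold:
        return "abstain"
    return max(range(len(probs)), key=lambda i: probs[i])

# A confident example yields a class index; an uncertain one abstains.
print(reject_or_classify([0.1, 0.8, 0.1], epistemic_uncertainty=0.2))  # 1
print(reject_or_classify([0.4, 0.3, 0.3], epistemic_uncertainty=0.9))  # abstain
```

Note that the epistemic score here is a single hard gate: it never interacts with the aleatoric information carried by `probs`, which is precisely the shortcoming U2C is designed to address.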