We make two contributions to the problem of estimating the $L_1$ calibration error of a binary classifier from a finite dataset. First, we provide an upper bound on the calibration error of any classifier whose calibration function has bounded variation. Second, we provide a method of modifying any classifier so that its calibration error can be efficiently upper-bounded, without significantly impacting classifier performance and without any restrictive assumptions. All our results are non-asymptotic and distribution-free. We conclude with advice on how to measure calibration error in practice. Our methods yield practical procedures that can be run on real-world datasets with modest overhead.