Several variants of reweighted risk functionals, such as focal loss, inverse focal loss, and the Area Under the Risk-Coverage curve (AURC), have been proposed to improve model calibration, yet their theoretical connections to calibration error remain under-explored. In this paper, we revisit a broad class of weighted risk functions and establish a principled connection between calibration error and selective classification. We show that minimizing calibration error is closely linked to the selective classification paradigm and demonstrate that optimizing selective risk in low-confidence regions naturally improves calibration. Our proposed loss shares a reweighting strategy similar to that of dual focal loss but offers greater flexibility through the choice of confidence score functions (CSFs). Furthermore, our approach uses a bin-based cumulative distribution function (CDF) approximation, enabling efficient gradient-based optimization with O(nM) complexity for n samples and M bins. Empirical evaluations demonstrate that our method achieves competitive calibration performance across a range of datasets and model architectures.
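To make the complexity claim concrete, the sketch below shows one way a differentiable, bin-based CDF approximation over confidence scores can be implemented with O(nM) cost: every score is compared against M bin centers, so the assignment matrix has n×M entries. The soft (softmax-based) bin assignment, the temperature `tau`, and the equal-width binning are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def binned_cdf(conf: torch.Tensor, num_bins: int = 15, tau: float = 50.0) -> torch.Tensor:
    """Differentiable bin-based CDF estimate for n confidence scores in [0, 1].

    Cost is O(nM): each of the n scores is softly compared with M bin centers.
    `tau` controls how sharply scores are assigned to their nearest bin.
    """
    edges = torch.linspace(0.0, 1.0, num_bins + 1, device=conf.device)
    centers = (edges[:-1] + edges[1:]) / 2                      # M bin centers
    # Soft one-hot bin assignment, shape (n, M); the O(nM) comparison step.
    assign = torch.softmax(-tau * (conf.unsqueeze(1) - centers.unsqueeze(0)).abs(), dim=1)
    bin_mass = assign.mean(dim=0)                               # fraction of samples per bin
    cum_mass = torch.cumsum(bin_mass, dim=0)                    # empirical CDF at bin upper edges
    # Each sample's CDF estimate: its soft bin assignment mixed with the cumulative mass.
    return assign @ cum_mass                                    # shape (n,)

# Example: CDF estimates for the softmax confidences of a small batch,
# usable inside a loss since every step above is differentiable.
scores = torch.tensor([0.55, 0.72, 0.91, 0.98], requires_grad=True)
print(binned_cdf(scores, num_bins=10))
```

Because the bin statistics are aggregated from the whole batch, the estimate stays smooth and admits gradients with respect to the confidence scores, which is what makes it compatible with standard gradient-based training.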