A machine learning model is calibrated if its predicted probability for an outcome matches the observed frequency of that outcome, conditional on the model's prediction. This property has become increasingly important as the impact of machine learning models has spread to more and more domains. As a result, there are now a dizzying number of recent papers on measuring and improving the calibration of models, deep learning models in particular. In this work, we reassess how calibration metrics are reported in the recent literature. We show that there exist trivial recalibration approaches that can appear state-of-the-art unless calibration and prediction metrics (e.g., test accuracy) are accompanied by additional generalization metrics such as the negative log-likelihood. We then derive a calibration-based decomposition of Bregman divergences that can be used both to motivate a choice of calibration metric based on a generalization metric and to detect trivial calibration. Finally, we apply these ideas to develop a new extension of reliability diagrams that jointly visualizes the calibration and the estimated generalization error of a model.
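For concreteness, the calibration property and the divergence family referenced above admit standard formalizations; the notation below is illustrative and not taken from the paper itself. A model f mapping an input X to a vector of predicted class probabilities is calibrated if

\[
  \mathbb{P}\bigl(Y = y \,\big|\, f(X) = p\bigr) = p_y \quad \text{for every class } y \text{ and every } p \text{ in the range of } f,
\]

that is, among all inputs that receive the prediction p, the observed frequency of each class y equals the predicted probability p_y. The Bregman divergence generated by a strictly convex function \(\phi\) is

\[
  D_\phi(p, q) = \phi(p) - \phi(q) - \langle \nabla\phi(q),\, p - q \rangle,
\]

a family that contains the squared error (\(\phi(p) = \lVert p \rVert_2^2\), yielding the Brier score) and the KL divergence underlying the negative log-likelihood (\(\phi(p) = \sum_y p_y \log p_y\)), which is why a decomposition at this level of generality covers the common generalization metrics.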