Perturbation-based explanations are widely used to enhance the transparency of machine-learning models in practice. However, their reliability is often compromised by unknown model behavior under the specific perturbations they employ. This paper investigates the relationship between uncertainty calibration (the alignment of model confidence with actual accuracy) and perturbation-based explanations. We show that models systematically produce unreliable probability estimates when subjected to explainability-specific perturbations, and we prove theoretically that this directly degrades both global and local explanation quality. To address this, we introduce ReCalX, a novel approach that recalibrates models for improved explanations while preserving their original predictions. Empirical evaluations across diverse models and datasets demonstrate that ReCalX consistently yields the largest reductions in perturbation-specific miscalibration while enhancing explanation robustness and the identification of globally important input features.
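Here, calibration is meant in the standard sense; as a minimal formalization (standard notation, assumed rather than quoted from the abstract above), a classifier with label prediction $\hat{Y}$ for true label $Y$ and associated confidence $\hat{P}$ is perfectly calibrated when

$$\mathbb{P}\bigl(\hat{Y} = Y \,\big|\, \hat{P} = p\bigr) = p \quad \text{for all } p \in [0, 1],$$

so that, for example, among all inputs assigned confidence $0.8$, the model is correct on exactly $80\%$. Miscalibration under explainability-specific perturbations then means this identity fails on the perturbed inputs that explanation methods feed to the model.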