Explainable AI methods facilitate the understanding of model behaviour, yet small, imperceptible perturbations to inputs can vastly distort explanations. As these explanations are typically evaluated holistically, before model deployment, it is difficult to assess when a particular explanation is trustworthy. Some studies have tried to create confidence estimators for explanations, but none have investigated whether a link exists between uncertainty and explanation quality. We artificially simulate epistemic uncertainty in text input by introducing noise at inference time. In this large-scale empirical study, we insert varying levels of noise perturbation and measure its effect on the output of pre-trained language models and on several uncertainty metrics. Realistic perturbations have minimal effect on performance and explanations, whereas masking has a drastic effect. We find that high uncertainty does not necessarily imply low explanation plausibility; the correlation between the two metrics can be moderately positive when the model is exposed to noise during training. This suggests that noise-augmented models may be better at identifying salient tokens when uncertain. Furthermore, when predictive and epistemic uncertainty measures are over-confident, the robustness of a saliency map to perturbation can indicate model stability issues. Integrated Gradients shows the greatest overall robustness to perturbation while still exhibiting model-specific performance patterns; however, this phenomenon is limited to smaller Transformer-based language models.
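To make the perturbation setup concrete, the following is a minimal sketch (not the authors' code) of the procedure the abstract describes: mask a random fraction of input tokens at inference time and observe how a common predictive-uncertainty proxy, the entropy of the softmax distribution, responds. The model checkpoint, example sentence, and masking scheme are illustrative assumptions.

```python
# Hypothetical sketch: inference-time token masking as simulated input noise,
# measured via predictive entropy of a pre-trained classifier's output.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def predictive_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy of the softmax distribution (predictive-uncertainty proxy)."""
    probs = logits.softmax(dim=-1)
    return float(-(probs * probs.clamp_min(1e-12).log()).sum(dim=-1))

def mask_tokens(input_ids: torch.Tensor, noise_level: float) -> torch.Tensor:
    """Replace a random fraction of non-special tokens with [MASK]."""
    ids = input_ids.clone()
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(ids[0].tolist(),
                                          already_has_special_tokens=True),
        dtype=torch.bool,
    )
    candidates = (~special).nonzero(as_tuple=True)[0]
    n = int(noise_level * len(candidates))
    if n > 0:
        chosen = candidates[torch.randperm(len(candidates))[:n]]
        ids[0, chosen] = tokenizer.mask_token_id
    return ids

text = "The film was a quiet, understated triumph."  # illustrative input
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    for level in (0.0, 0.15, 0.5):  # increasing masking-noise levels
        ids = mask_tokens(enc["input_ids"], level)
        logits = model(input_ids=ids, attention_mask=enc["attention_mask"]).logits
        print(f"noise={level:.2f}  entropy={predictive_entropy(logits):.3f}")
```

In the same spirit, saliency maps for the clean and perturbed inputs could be compared with an attribution method such as Integrated Gradients (e.g. via Captum) to probe the robustness the abstract reports; that step is omitted here for brevity.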