Explainability and uncertainty quantification are two pillars of trustworthy artificial intelligence. However, the reasoning behind uncertainty estimates is generally left unexplained. Identifying the drivers of uncertainty complements explanations of point predictions in recognizing model limitations, and it enhances trust in decisions and their communication. So far, explanations of uncertainty estimates have rarely been studied. The few exceptions rely on Bayesian neural networks or technically intricate approaches, such as auxiliary generative models, thereby hindering their broad adoption. We present a simple approach to explain predictive aleatoric uncertainties. We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution. Subsequently, we apply out-of-the-box explainers to the model's variance output. This approach explains the drivers of uncertainty more reliably than baselines from the literature, as we show in a synthetic setting with a known data-generating process. We further adapt multiple metrics from conventional XAI research to uncertainty explanations. We quantify our findings with a nuanced benchmark analysis that includes real-world datasets. Finally, we apply our approach to an age regression model and discover reasonable sources of uncertainty. Overall, we explain uncertainty estimates with minimal modifications to the model architecture and demonstrate that our approach competes effectively with more intricate methods.
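To make the described pipeline concrete, the following is a minimal sketch, not the authors' reference implementation: a heteroscedastic regression network with a Gaussian output head is trained via the Gaussian negative log-likelihood, and a plain gradient-based explainer is then pointed at the predicted variance instead of the mean. All names (e.g., `GaussianNet`, `gaussian_nll`) and the synthetic data are illustrative assumptions.

```python
# Sketch, assuming PyTorch: a network that outputs a Gaussian distribution
# (mean and log-variance) and a gradient saliency applied to the variance.
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Predicts the mean and log-variance of a Gaussian output distribution."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # log-variance for numerical stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Negative log-likelihood of y under N(mean, exp(logvar)), up to a constant.
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

# Toy heteroscedastic data: the noise level is driven by feature 1.
torch.manual_seed(0)
x = torch.randn(512, 4)
y = x[:, :1] + torch.randn(512, 1) * (0.1 + x[:, 1:2].abs())

model = GaussianNet(in_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    mean, logvar = model(x)
    loss = gaussian_nll(mean, logvar, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Explain the *variance* output with simple gradient saliency; any
# off-the-shelf attribution method could be applied to this head instead.
x_test = x[:8].clone().requires_grad_(True)
_, logvar = model(x_test)
logvar.exp().sum().backward()
saliency = x_test.grad.abs().mean(dim=0)
print("Per-feature influence on predicted variance:", saliency)
```

In this toy setup, the saliency for feature 1 should dominate, mirroring the paper's idea that attributions on the variance head recover the sources of aleatoric uncertainty.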