This position paper argues that, to its detriment, transparency research overlooks many foundational concepts of artificial intelligence. Here, we focus on uncertainty quantification -- in the context of ante-hoc interpretability and counterfactual explainability -- showing how its adoption could address key challenges in the field. First, we posit that uncertainty and ante-hoc interpretability offer complementary views of the same underlying idea; second, we assert that uncertainty provides a principled unifying framework for counterfactual explainability. Consequently, inherently transparent models can benefit from human-centred explanatory insights -- like counterfactuals -- which are otherwise missing. At a higher level, integrating artificial intelligence fundamentals into transparency research promises to yield more reliable, robust and understandable predictive models.