Explainable AI (XAI) is frequently framed as a technical problem of revealing the inner workings of an AI model. This framing rests on unexamined onto-epistemological assumptions: meaning is treated as immanent to the model, the explainer is positioned outside the system, and a causal structure is presumed to be recoverable through computational techniques. In this paper, we draw on Barad's agential realism to develop an alternative onto-epistemology of XAI. We propose that interpretations are material-discursive performances that emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. To develop this position, we read a comprehensive set of XAI methods through agential realism, revealing the assumptions and limitations that underpin several of them. We then articulate the framework's ethical dimension and propose design directions for XAI interfaces that support emergent interpretation, using a speculative text-to-music interface as a case study.