Artificial intelligence (AI) systems increasingly support decision-making across critical domains, yet current explainable AI (XAI) approaches prioritize algorithmic transparency over human comprehension. While XAI methods reveal computational processes for model validation and audit, end users require explanations that integrate domain knowledge, contextual reasoning, and professional frameworks. This disconnect reveals a fundamental design challenge: existing approaches to AI explanation fail to address how practitioners actually need to understand and act on recommendations. This paper introduces Explanatory AI as a complementary paradigm in which AI systems leverage generative and multimodal capabilities to serve as explanatory partners for human understanding. Unlike traditional XAI, which answers "How did the algorithm decide?" for validation purposes, Explanatory AI addresses "Why does this make sense?" for practitioners making informed decisions. Through theory-informed design, we synthesize multidisciplinary perspectives on explanation from cognitive science, communication research, and education with empirical evidence from healthcare contexts and AI expert interviews. Our analysis identifies five dimensions distinguishing Explanatory AI from traditional XAI: explanatory purpose (from diagnostic to interpretive sense-making), communication mode (from static technical output to dynamic narrative interaction), epistemic stance (from algorithmic correspondence to contextual plausibility), adaptivity (from uniform design to personalized accessibility), and cognitive design (from information overload to cognitively aligned delivery). We derive five meta-requirements specifying what such systems must achieve and formulate ten design principles prescribing how to build them.