Explainability is a critical factor influencing the wide deployment of deep vision models (DVMs). Concept-based post-hoc explanation methods can provide both global and local insights into model decisions. However, existing methods in this field lack the flexibility to automatically construct accurate and sufficient linguistic explanations for global concepts and local circuits. In particular, the intrinsic polysemanticity of semantic Visual Concepts (VCs) impedes the interpretability of both concepts and DVMs, a problem that has been severely underestimated. In this paper, we propose a Chain-of-Explanation (CoE) approach to address these issues. Specifically, CoE automates the decoding and description of VCs to construct global concept explanation datasets. Further, to alleviate the effect of polysemanticity on model explainability, we design a concept polysemanticity disentanglement and filtering mechanism to distinguish the most contextually relevant concept atoms. In addition, we formulate a Concept Polysemanticity Entropy (CPE) as a measure of model interpretability that quantifies the degree of concept uncertainty, upgrading the modeling of deterministic concepts to uncertain concept atom distributions. Finally, CoE automatically generates linguistic local explanations of the decision-making process of DVMs by tracing the concept circuit. GPT-4o- and human-based experiments demonstrate the effectiveness of CPE and the superiority of CoE, achieving an average absolute improvement of 36% in explainability scores.
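As a hedged illustration (the exact formulation is given in the paper, not here), CPE can be read as an entropy-style measure over the disentangled concept-atom distribution: assuming a concept $c$ is associated with $K$ concept atoms $a_1,\dots,a_K$ with contextual relevance probabilities $p(a_i \mid c)$, one plausible form is
\[
\mathrm{CPE}(c) = -\sum_{i=1}^{K} p(a_i \mid c)\,\log p(a_i \mid c),
\]
where a lower CPE indicates a more monosemantic, and hence more interpretable, concept.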