Deep networks have shown remarkable performance across a wide range of tasks, yet obtaining a global, concept-level understanding of how they function remains a key challenge. Many post-hoc concept-based approaches have been introduced to explain their workings, yet they are not always faithful to the model. Further, they make restrictive assumptions about the concepts a model learns, such as class-specificity, small spatial extent, or alignment with human expectations. In this work, we emphasize the faithfulness of such concept-based explanations and propose a new model with model-inherent mechanistic concept explanations. Our concepts are shared across classes and, from any layer, both their contribution to the logit and their input visualization can be faithfully traced. We also leverage foundation models to propose a new concept-consistency metric, C$^2$-Score, which can be used to evaluate concept-based methods. We show that, compared to prior work, our concepts are quantitatively more consistent and users find them more interpretable, all while retaining competitive ImageNet performance.
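The abstract names C$^2$-Score but does not define it. Purely as an illustrative sketch (not the paper's actual metric), a concept-consistency measure built on a frozen foundation-model encoder could score how tightly the embeddings of a concept's most-activating image patches cluster; the choice of mean pairwise cosine similarity and a CLIP-like encoder below are assumptions for illustration only.

```python
# Hypothetical concept-consistency sketch; NOT the paper's C^2-Score definition.
# Assumes patch embeddings come from a frozen foundation-model encoder (e.g. CLIP).
import torch
import torch.nn.functional as F


def concept_consistency(patch_embeddings: torch.Tensor) -> float:
    """Mean pairwise cosine similarity of a concept's top-patch embeddings.

    patch_embeddings: (n_patches, d) features from a frozen encoder.
    Returns a scalar in [-1, 1]; higher means a more visually consistent concept.
    """
    z = F.normalize(patch_embeddings, dim=-1)      # unit-normalize each embedding
    sim = z @ z.T                                  # (n, n) cosine-similarity matrix
    n = z.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()    # drop self-similarities
    return (off_diag / (n * (n - 1))).item()


# Usage sketch: average the per-concept score over all concepts of an explanation
# method (using the same frozen encoder) to compare methods on equal footing.
if __name__ == "__main__":
    fake_embeddings = torch.randn(8, 512)          # stand-in for encoder features
    print(concept_consistency(fake_embeddings))
```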