Vision transformers (ViTs) have emerged as a significant area of focus, particularly for their capacity to be jointly trained with large language models and to serve as robust vision foundation models. Yet the development of trustworthy explanation methods for ViTs has lagged, particularly in the context of post-hoc interpretation of ViT predictions. Existing sub-image selection approaches, such as feature-attribution and conceptual models, fall short in this regard. This paper proposes five desiderata for explaining ViTs -- faithfulness, stability, sparsity, multi-level structure, and parsimony -- and demonstrates that current methods fail to meet these criteria comprehensively. We introduce a variational Bayesian explanation framework, dubbed ProbAbilistic Concept Explainers (PACE), which models the distributions of patch embeddings to provide trustworthy post-hoc conceptual explanations. Our qualitative analysis reveals the distributions of patch-level concepts, elucidating the effectiveness of ViTs by modeling the joint distribution of patch embeddings and the ViT's predictions. Moreover, these patch-level explanations bridge the gap between image-level and dataset-level explanations, thus completing the multi-level structure of PACE. Through extensive experiments on both synthetic and real-world datasets, we demonstrate that PACE surpasses state-of-the-art methods in terms of the defined desiderata.
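To make the multi-level idea concrete, the sketch below illustrates the general pattern of modeling patch-embedding distributions and reading off patch-, image-, and dataset-level concept distributions. It uses a plain Gaussian mixture from scikit-learn on synthetic embeddings as a stand-in; it is not PACE's variational Bayesian model, and all names and shapes (e.g., `n_concepts`, `embed_dim`) are illustrative assumptions.

```python
# A minimal sketch, assuming concepts can be approximated by a Gaussian mixture
# over ViT patch embeddings. This is a stand-in for PACE's variational Bayesian
# model, not the authors' implementation; shapes and names are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for ViT patch embeddings: n_images x n_patches x embed_dim
n_images, n_patches, embed_dim = 100, 196, 64
patch_embeddings = rng.normal(size=(n_images, n_patches, embed_dim))

n_concepts = 10  # hypothetical number of concepts
flat = patch_embeddings.reshape(-1, embed_dim)

# Dataset-level: mixture components act as concept prototypes.
gmm = GaussianMixture(n_components=n_concepts, covariance_type="diag", random_state=0)
gmm.fit(flat)

# Patch-level: posterior concept probabilities for each patch.
patch_concepts = gmm.predict_proba(flat).reshape(n_images, n_patches, n_concepts)

# Image-level: aggregate patch-level concept distributions per image.
image_concepts = patch_concepts.mean(axis=1)

# Dataset-level summary: mixture weights approximate overall concept prevalence.
print("per-image concept mixture (first image):", np.round(image_concepts[0], 3))
print("dataset-level concept weights:", np.round(gmm.weights_, 3))
```

In this reading, the per-patch posteriors play the role of patch-level explanations, their per-image averages the image-level explanations, and the mixture weights a dataset-level summary, mirroring the multi-level structure described above.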