In this paper, we introduce an explainable algorithm built on a multi-modal foundation model that performs fast and interpretable image classification. Drawing inspiration from CLIP-based Concept Bottleneck Models (CBMs), our method creates a latent space where each neuron is linked to a specific word. Observing that this latent space can be modeled with simple distributions, we use a Mixture of Gaussians (MoG) formalism to enhance its interpretability. We then introduce CLIP-QDA, a classifier that relies only on statistical quantities to infer labels from the concepts. In addition, this formalism yields both local and global explanations. Because these explanations arise from the inner design of our architecture, our work belongs to a new family of greybox models that combine the performance of opaque foundation models with the interpretability of transparent models. Our empirical findings show that, in instances where the MoG assumption holds, CLIP-QDA achieves accuracy comparable to state-of-the-art CBMs, and our explanations compete with existing XAI methods while being faster to compute.
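The classification step described above can be illustrated with a minimal sketch: fit one Gaussian per class over concept-activation vectors, then assign labels via the quadratic discriminant (the class maximizing the Gaussian log-likelihood). The data here is synthetic; in the actual method, the features would be CLIP similarity scores between the image embedding and concept-word embeddings, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for concept activations: in CLIP-QDA these would be similarity
# scores between an image embedding and a bank of concept-word embeddings.
n_concepts, n_per_class = 5, 200
class_means = {0: np.zeros(n_concepts), 1: np.ones(n_concepts)}
X = np.vstack([rng.normal(class_means[c], 1.0, (n_per_class, n_concepts))
               for c in (0, 1)])
y = np.repeat([0, 1], n_per_class)

# Fit one Gaussian per class (mean, precision, log-determinant of covariance).
params = {}
for c in (0, 1):
    Xc = X[y == c]
    mu = Xc.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])

def qda_predict(x):
    """Pick the class with the highest Gaussian log-likelihood (equal priors)."""
    scores = {}
    for c, (mu, prec, logdet) in params.items():
        d = x - mu
        scores[c] = -0.5 * (logdet + d @ prec @ d)
    return max(scores, key=scores.get)

preds = np.array([qda_predict(x) for x in X])
accuracy = (preds == y).mean()
```

Because the decision rule depends only on per-class means and covariances, both the prediction and its explanation reduce to closed-form statistical quantities, which is what makes the approach fast to evaluate.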