Recent concept-based interpretable models have succeeded in providing meaningful explanations through pre-defined concept sets. However, their dependency on pre-defined concepts restricts their applicability because of the limited number of concepts available for explanations. This paper proposes a novel interpretable deep neural network called explanation bottleneck models (XBMs). Leveraging pre-trained vision-language encoder-decoder models, XBMs generate a text explanation from the input without pre-defined concepts and then make the final task prediction based on the generated explanation. To achieve both target task performance and explanation quality, we train XBMs with the target task loss plus a regularizer that penalizes the explanation decoder via distillation from the frozen pre-trained decoder. Our experiments, including a comparison to state-of-the-art concept bottleneck models, confirm that XBMs provide accurate and fluent natural-language explanations without pre-defined concept sets. Code will be available at https://github.com/yshinya6/xbm/.
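The training objective described above combines the target task loss with a distillation regularizer on the explanation decoder. The following is a minimal, hypothetical sketch of such an objective (the function name `xbm_loss`, the KL-based distillation form, and the weight `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def xbm_loss(task_logits, label, dec_logits, frozen_logits, lam=1.0):
    """Hypothetical sketch of an XBM-style objective: target-task
    cross-entropy plus a distillation penalty that keeps the trainable
    explanation decoder close to the frozen pre-trained decoder.

    task_logits:   (num_classes,) logits of the final task prediction
    dec_logits:    (seq_len, vocab) token logits of the trainable decoder
    frozen_logits: (seq_len, vocab) token logits of the frozen decoder
    lam:           weight of the distillation regularizer (assumed)
    """
    # Target task loss: cross-entropy on the final prediction.
    p = softmax(task_logits)
    task_loss = -np.log(p[label])
    # Distillation penalty: per-token KL(frozen || trainable), averaged.
    q = softmax(dec_logits)      # trainable explanation decoder
    r = softmax(frozen_logits)   # frozen pre-trained decoder
    distill = np.mean(np.sum(r * (np.log(r) - np.log(q)), axis=-1))
    return task_loss + lam * distill
```

When the trainable decoder matches the frozen one, the distillation term vanishes and the objective reduces to the plain task loss; increasing `lam` trades task accuracy for fidelity to the pre-trained decoder's fluent language distribution.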