While deep neural networks are extremely effective at classifying images, they remain opaque and hard to interpret. We introduce local and global explanation methods for black-box models that generate explanations in terms of human-recognizable primitive concepts. Both the local explanations for a single image and the global explanations for a set of images are cast as logical formulas in monotone disjunctive normal form (MDNF), whose satisfaction guarantees that the model yields a high score on a given class. We also present an algorithm that explains the classification of examples into multiple classes in the form of a monotone explanation list over primitive concepts. We show that, despite their simplicity and interpretability, the explanations maintain high fidelity and coverage with respect to the black-box models they seek to explain on challenging vision datasets.
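To make the form of such explanations concrete, here is a minimal sketch of evaluating a monotone DNF formula over binary concept indicators. The function name, the clause representation as sets of concept names, and the example concepts ("stripes", "four_legs", "tail") are all hypothetical illustrations, not part of the paper's method.

```python
# Hypothetical sketch: a monotone DNF explanation is a disjunction of
# conjunctions over concept indicators, with no negated literals.
def mdnf_satisfied(formula, concepts):
    """formula: list of clauses, each clause a set of concept names.
    concepts: set of concepts detected as present in the image.
    The formula is satisfied when every concept in at least one clause
    is present (monotone: adding concepts can never unsatisfy it)."""
    return any(clause <= concepts for clause in formula)

# Illustrative explanation for a class, e.g.
# (stripes AND four_legs) OR (stripes AND tail)
formula = [{"stripes", "four_legs"}, {"stripes", "tail"}]
print(mdnf_satisfied(formula, {"stripes", "tail", "grass"}))   # True
print(mdnf_satisfied(formula, {"tail", "grass"}))              # False
```

Monotonicity is what makes the satisfaction guarantee usable: once the concepts of one clause are confirmed present, no additional observed concept can invalidate the explanation.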