We propose VisionLogic, a general framework for extracting interpretable logic rules from deep vision models, with a focus on image classification. Given any deep vision model that uses a fully connected layer as its output head, VisionLogic transforms neurons in the last layer into predicates and grounds them in visual concepts through causal validation. In this way, VisionLogic provides local explanations for individual images and global explanations for entire classes, both in the form of logic rules. Compared to existing visualization tools such as saliency maps, VisionLogic addresses several key challenges: the lack of causal explanations, overconfidence in visualizations, and ambiguity in interpretation. VisionLogic also facilitates the study of the visual concepts encoded by predicates, particularly how they behave under perturbation, an area that remains underexplored in the field of hidden semantics. Beyond providing better visual explanations and insight into the visual concepts learned by the model, we show that VisionLogic retains most of the neural network's discriminative power in an interpretable and transparent manner. We envision it as a bridge between complex model behavior and human-understandable explanations, providing trustworthy and actionable insights for real-world applications.
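To make the pipeline concrete, the sketch below illustrates the general idea of turning last-layer neurons into predicates and reading off a conjunctive rule for a prediction. It is a minimal, hypothetical PyTorch example under simplifying assumptions: the resnet50 backbone, the top-k neuron selection, the activation threshold tau, and the predicate naming are all illustrative choices, and the sketch deliberately omits the causal validation step that VisionLogic uses to ground predicates in visual concepts.

```python
import torch
import torchvision.models as models

# Load a pretrained classifier whose output head is a fully connected layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Split the model into a feature extractor and the final FC head.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
fc_head = model.fc  # weight shape: (num_classes, num_features)

@torch.no_grad()
def extract_rule(image: torch.Tensor, k: int = 5, tau: float = 1.0) -> str:
    """Form a conjunctive rule from the top-k last-layer neurons.

    Illustrative stand-in for predicate extraction: a neuron i whose
    activation exceeds tau becomes a boolean predicate P_i(x), and the
    local explanation for the predicted class is the conjunction of the
    firing predicates. (VisionLogic additionally validates each
    predicate causally, which is not modeled here.)
    """
    feats = feature_extractor(image.unsqueeze(0)).flatten(1).squeeze(0)
    logits = fc_head(feats)
    pred_class = logits.argmax().item()

    # Rank neurons by their contribution to the predicted class
    # (activation times the corresponding class weight).
    contrib = feats * fc_head.weight[pred_class]
    top = contrib.topk(k).indices

    predicates = [f"P_{i.item()}(x)" for i in top if feats[i] > tau]
    rule_body = " AND ".join(predicates) if predicates else "TRUE"
    return f"{rule_body} => class_{pred_class}(x)"
```

On an ImageNet input, such a sketch might emit a rule of the form "P_412(x) AND P_88(x) => class_207(x)" (hypothetical output); each P_i would then still need to be grounded in a human-recognizable visual concept, which is where the framework's causal validation comes in.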