In this paper, we present a Neuron Abandoning Attention Flow (NAFlow) method to address the open problem of visually explaining the attention evolution dynamics inside CNNs when making their classification decisions. A novel cascading neuron abandoning back-propagation algorithm is designed to trace the neurons in all layers of a CNN that are involved in making its prediction, thereby addressing the problem of significant interference from abandoned neurons. First, a Neuron Abandoning Back-Propagation (NA-BP) module is proposed to generate Back-Propagated Feature Maps (BPFMs) by using the inverse functions of the intermediate layers of the CNN, in which the neurons not used for decision-making are abandoned. Meanwhile, the cascading NA-BP modules calculate the tensors of importance coefficients, which are linearly combined with the tensors of BPFMs to form the NAFlow. Second, to enable visualization of the attention flow in similarity metric-based CNN models, a new channel contribution weights module is proposed to calculate the importance coefficients via the Jacobian matrix. The effectiveness of the proposed NAFlow is validated on nine widely used CNN models across tasks including general image classification, contrastive-learning classification, few-shot image classification, and image retrieval.
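To make the two components concrete, the following minimal PyTorch sketch illustrates the kind of computation the abstract describes: a per-channel linear combination of back-propagated feature maps after abandoning non-contributing neurons, and a Jacobian-style per-channel importance weight derived from a similarity score. The function names (`naflow_like_map`, `channel_weights_from_similarity`) and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def naflow_like_map(bpfm, importance, keep_mask):
    """Illustrative combination of back-propagated feature maps (BPFMs)
    with importance coefficients, after zeroing out abandoned neurons.

    bpfm       : (C, H, W) back-propagated feature maps for one layer
    importance : (C,) per-channel importance coefficients
    keep_mask  : (C, H, W) boolean mask of neurons that contributed to
                 the prediction (True = keep, False = abandon)
    """
    # Abandon neurons that did not take part in the decision.
    kept = bpfm * keep_mask
    # Linearly combine the kept maps with their importance coefficients.
    flow = (importance.view(-1, 1, 1) * kept).sum(dim=0)
    # Rectify and normalise for visualisation, as is common for attention maps.
    flow = F.relu(flow)
    return flow / (flow.max() + 1e-8)

def channel_weights_from_similarity(feature, head_fn, reference):
    """Per-channel importance from the gradient of a cosine-similarity score
    with respect to an intermediate feature map (a Jacobian-row surrogate;
    the paper's exact channel contribution rule may differ).

    feature   : (C, H, W) tensor with requires_grad=True
    head_fn   : differentiable head mapping a (1, C, H, W) feature to an embedding
    reference : embedding vector of the query/prototype to compare against
    """
    emb = head_fn(feature.unsqueeze(0)).squeeze(0)
    score = F.cosine_similarity(emb, reference, dim=0)
    grads, = torch.autograd.grad(score, feature)
    return grads.mean(dim=(1, 2))  # one coefficient per channel
```

Under these assumptions, cascading such per-layer combinations from the output layer back to the input would yield a sequence of attention maps, i.e. an attention "flow" across the network's depth.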