Convolutional Neural Networks (CNNs) are known for their ability to learn hierarchical structures, naturally developing detectors for objects and semantic concepts in their deeper layers. Activation maps (AMs) reveal these salient regions, which are crucial for many Explainable AI (XAI) methods. However, the direct exploitation of raw AMs in CNNs for feature attribution remains underexplored in the literature. This work revisits Class Activation Map (CAM) methods by introducing the Label-free Activation Map (LaFAM), a streamlined approach that utilizes raw AMs for feature attribution without relying on labels. LaFAM offers an efficient alternative to conventional CAM methods, proving particularly effective for saliency-map generation in self-supervised learning while remaining applicable in supervised settings.
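To make the label-free idea concrete, the sketch below shows one plausible reduction of raw activation maps to a saliency map: a channel-wise mean over the last convolutional layer's AMs, normalized and upsampled to the input resolution. This is an illustrative assumption about the procedure, not necessarily the paper's exact implementation; the function name and the integer upsampling factor are hypothetical.

```python
import numpy as np

def lafam_saliency(activations: np.ndarray, scale: int) -> np.ndarray:
    """Label-free saliency sketch from raw CNN activation maps.

    activations: (C, H, W) array from the last convolutional layer.
    scale: integer upsampling factor back to the input resolution.
    """
    # Channel-wise mean over raw activation maps -- no class weights,
    # no labels, and no backward pass are involved.
    sal = np.maximum(activations, 0.0).mean(axis=0)
    # Min-max normalize to [0, 1] for visualization as a heatmap.
    rng = sal.max() - sal.min()
    if rng > 0:
        sal = (sal - sal.min()) / rng
    # Nearest-neighbor upsampling by an integer factor (a real pipeline
    # would typically use bilinear interpolation to the input size).
    return np.repeat(np.repeat(sal, scale, axis=0), scale, axis=1)

# Usage: a 7x7 feature map with 64 channels, upsampled to 224x224.
acts = np.random.rand(64, 7, 7)
heatmap = lafam_saliency(acts, scale=32)
```

Because the reduction uses only the forward-pass activations, the same routine applies unchanged to self-supervised encoders, which have no classification head to supply class weights.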