Convolutional neural networks (CNNs) are widely used for high-stakes applications such as medicine, often surpassing human performance. However, most explanation methods rely on post-hoc attribution, approximating the decision-making process of already-trained black-box models. These methods are often sensitive, unreliable, and fail to reflect the model's true reasoning, limiting their trustworthiness in critical applications. In this work, we introduce SoftCAM, a straightforward yet effective approach that makes standard CNN architectures inherently interpretable. By removing the global average pooling layer and replacing the fully connected classification layer with a convolution-based class evidence layer, SoftCAM preserves spatial information and produces explicit class activation maps that form the basis of the model's predictions. Evaluated on three medical datasets, SoftCAM maintains classification performance while significantly improving explanation quality, both qualitatively and quantitatively, compared to existing post-hoc methods. Our results demonstrate that CNNs can be inherently interpretable without compromising performance, advancing the development of self-explainable deep learning for high-stakes decision-making. The code is available at https://github.com/kdjoumessi/SoftCAM
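The architectural change described above, dropping global average pooling and replacing the fully connected classifier with a convolution-based class evidence layer, can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation: the class and parameter names are illustrative, a 1×1 convolution is assumed for the evidence layer, and plain spatial mean pooling stands in for whatever pooling SoftCAM actually uses to reduce the evidence maps to logits.

```python
import torch
import torch.nn as nn


class ClassEvidenceHead(nn.Module):
    """Illustrative SoftCAM-style head: a 1x1 convolution maps backbone
    features to one evidence map per class, preserving spatial layout.
    Pooling each map yields the class logit, so the maps themselves are
    the explanation underlying the prediction."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Replaces the usual global-average-pool + fully-connected classifier.
        self.evidence = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C_in, H, W) backbone feature maps.
        cams = self.evidence(feats)        # (B, num_classes, H, W) class activation maps
        logits = cams.mean(dim=(2, 3))     # spatial pooling of evidence (assumed mean here)
        return logits, cams


# Usage sketch: attach the head to any CNN backbone's last feature maps.
head = ClassEvidenceHead(in_channels=512, num_classes=3)
features = torch.randn(2, 512, 7, 7)       # e.g. ResNet-18 final feature maps
logits, cams = head(features)
```

Because the logits are computed directly from the class activation maps, the maps are the model's actual decision evidence rather than a post-hoc approximation of it.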