Interpreting complex neural networks is crucial for understanding their decision-making processes, particularly in applications where transparency and accountability are essential. We address this need by focusing on Layer-wise Relevance Propagation (LRP), a technique in explainable artificial intelligence (XAI) that attributes neural network outputs to input features through backpropagated relevance scores. Existing LRP methods often lack precision when evaluating the contributions of individual neurons. To overcome this limitation, we present a novel approach that refines the selection of neurons parsed during the LRP backward pass, using the Visual Geometry Group 16 (VGG16) architecture as a case study. Our method constructs neural network graphs to highlight critical paths and visualizes these paths with heatmaps, optimizing neuron selection through error metrics such as Mean Squared Error (MSE) and Symmetric Mean Absolute Percentage Error (SMAPE). Additionally, we employ a deconvolutional visualization technique to reconstruct feature maps, offering a comprehensive view of the network's inner workings. Extensive experiments demonstrate that our approach enhances interpretability and supports the development of more transparent artificial intelligence (AI) systems for computer vision. This advancement can improve the trustworthiness of AI models in real-world machine vision applications, thereby increasing their reliability and effectiveness.
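To make the core LRP step concrete, the following minimal sketch (in PyTorch, assuming the standard LRP-ε rule; the paper's specific neuron-parsing strategy is not reproduced here) propagates relevance backward through a single fully connected layer, as found in VGG16's classifier, and evaluates candidates with the MSE and SMAPE metrics mentioned above. The function names and the top-k selection are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def lrp_epsilon_linear(layer: nn.Linear, a: torch.Tensor,
                       relevance: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """One LRP-epsilon backward step through a linear layer.

    a:         input activations, shape (batch, in_features)
    relevance: relevance at the layer output, shape (batch, out_features)
    Returns relevance redistributed onto the layer input.
    """
    z = layer(a)                      # pre-activations z_k = sum_j a_j w_jk + b_k
    z = z + eps * torch.sign(z)       # epsilon stabilizer against near-zero z
    s = relevance / z                 # per-output relevance "messages"
    c = s @ layer.weight              # back-project messages: (batch, in_features)
    return a * c                      # weight by input activations (R_j = a_j * c_j)

def mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.mean((pred - target) ** 2)

def smape(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Symmetric MAPE in percent; small constant guards against 0/0.
    num = 2.0 * torch.abs(pred - target)
    den = torch.abs(pred) + torch.abs(target) + 1e-12
    return 100.0 * torch.mean(num / den)

# Toy usage: propagate relevance through one layer, then keep the
# top-k most relevant input neurons (a hypothetical selection rule).
layer = nn.Linear(8, 4)
a = torch.rand(1, 8)
out_relevance = torch.rand(1, 4)
in_relevance = lrp_epsilon_linear(layer, a, out_relevance)
top_values, top_neurons = in_relevance.abs().topk(3, dim=1)
print("most relevant input neurons:", top_neurons.tolist())
```

In a full pipeline this step would be applied layer by layer from the output back to the input, with convolutional layers handled by an analogous rule, and the resulting input relevances rendered as the heatmaps described above.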