Explaining machine learning (ML) models with eXplainable AI (XAI) techniques has become essential for making them more transparent and trustworthy. This is especially important in high-stakes domains such as healthcare, where understanding model decisions is critical to ensuring ethical, sound, and trustworthy outcome predictions. However, users are often unsure which explainability method to choose for their specific use case. We present a comparative analysis of two widely used explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (GradCAM), in the domain of human activity recognition (HAR) using graph convolutional networks (GCNs). By evaluating these methods on skeleton-based data from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into the strengths, limitations, and differences of both approaches, offering a roadmap for selecting the most appropriate explanation method for a given model and application. We compare the methods quantitatively and qualitatively, focusing on feature importance ranking, interpretability, and model sensitivity as assessed through perturbation experiments. While SHAP provides detailed input feature attribution, GradCAM delivers faster, spatially oriented explanations, making the two methods complementary depending on the application's requirements. Given the importance of XAI for enhancing trust and transparency in ML models, particularly in sensitive environments such as healthcare, our research demonstrates how SHAP and GradCAM can complement each other to provide more interpretable and actionable model explanations.