In eXplainable Artificial Intelligence (XAI), instance-based explanations for time series have gained increasing attention due to their potential for actionable and interpretable insights in domains such as healthcare. To address the explainability challenges of state-of-the-art models, we propose a prototype-driven framework for generating sparse counterfactual explanations tailored to 12-lead ECG classification models. Our method employs SHAP-based thresholds to identify critical signal segments and convert them into interval rules, uses Dynamic Time Warping (DTW) and medoid clustering to extract representative prototypes, and aligns these prototypes to the query's R-peaks for coherence with the sample being explained. The framework generates counterfactuals that modify only 78% of the original signal while maintaining 81.3% validity across all classes and achieving a 43% improvement in temporal stability. We evaluate three variants of our approach, Original, Sparse, and Aligned Sparse, whose class-specific validity ranges from 98.9% for myocardial infarction (MI) down to 13.2% for hypertrophy (HYP), which remains challenging. This approach supports near-real-time generation (< 1 second) of clinically valid counterfactuals and provides a foundation for interactive explanation platforms. Our findings establish design principles for physiologically aware counterfactual explanations in AI-based diagnosis systems and outline pathways toward user-controlled explanation interfaces for clinical deployment.
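The prototype-extraction step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes single-lead 1-D signals, a classic dynamic-programming DTW with absolute-difference cost, and defines the class prototype as the medoid, i.e. the member minimizing its total DTW distance to all other members; the function names `dtw_distance` and `medoid_prototype` are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(n*m) dynamic-programming DTW between two 1-D sequences,
    # using absolute difference as the local cost (an assumption here).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allowed moves: match (diagonal), insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def medoid_prototype(signals):
    # Medoid clustering reduced to its core idea: the prototype is the
    # class member with the smallest summed DTW distance to the rest.
    k = len(signals)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            d = dtw_distance(signals[i], signals[j])
            dist[i, j] = dist[j, i] = d
    return signals[int(np.argmin(dist.sum(axis=1)))]
```

In practice a library such as tslearn would replace the hand-rolled DTW, and each prototype would then be shifted so its R-peak coincides with the query's before being spliced into the counterfactual.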