In eXplainable Artificial Intelligence (XAI), instance-based explanations for time series have gained increasing attention for their potential to deliver actionable, interpretable insights in domains such as healthcare. To address the explainability challenges of state-of-the-art models, we propose a prototype-driven framework for generating sparse counterfactual explanations tailored to 12-lead ECG classification models. Our method employs SHAP-based thresholds to identify critical signal segments and convert them into interval rules, uses Dynamic Time Warping (DTW) with medoid clustering to extract representative prototypes, and aligns these prototypes to the query's R-peaks for coherence with the sample being explained. The framework generates counterfactuals that modify only 78% of the original signal while maintaining 81.3% validity across all classes and achieving a 43% improvement in temporal stability. We evaluate three variants of our approach (Original, Sparse, and Aligned Sparse), with class-specific validity ranging from 98.9% for myocardial infarction (MI) down to 13.2% for the more challenging hypertrophy (HYP) class. The approach supports near-real-time generation (< 1 second) of clinically valid counterfactuals and provides a foundation for interactive explanation platforms. Our findings establish design principles for physiologically aware counterfactual explanations in AI-based diagnostic systems and outline pathways toward user-controlled explanation interfaces for clinical deployment.
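The DTW-with-medoid-clustering step mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the `dtw_distance` and `medoid_prototype` helpers and the toy beat segments are illustrative assumptions, using a plain dynamic-programming DTW and selecting as prototype the segment with minimal total DTW distance to the rest.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences
    (illustrative; real pipelines typically use an optimized library)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def medoid_prototype(segments):
    """Return the index of the medoid: the segment minimizing the sum of
    DTW distances to all other segments."""
    k = len(segments)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            d = dtw_distance(segments[i], segments[j])
            dist[i, j] = dist[j, i] = d
    return int(np.argmin(dist.sum(axis=1)))

# Toy "beats": two similar sine-like segments and one dissimilar outlier;
# the medoid should be one of the two similar segments.
t = np.linspace(0.0, 1.0, 50)
segments = [np.sin(2 * np.pi * t),
            np.sin(2 * np.pi * t + 0.1),
            np.cos(6 * np.pi * t)]
print("medoid index:", medoid_prototype(segments))
```

In the full framework, the medoid found this way would then be aligned to the query signal's R-peaks before being used to build the counterfactual.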