Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the black-box nature of these models and the resulting lack of transparency are a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating both the local perspective (per-sample attributions) and the global perspective (based on domain-expert concepts). We establish a set of sanity checks to identify sensible attribution methods and provide quantitative evidence of agreement with expert rules. This dataset-wide analysis goes beyond anecdotal evidence by aggregating results across patient subgroups. Furthermore, we demonstrate how these XAI techniques can be utilized for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe that the proposed methods can serve as building blocks for a complementary assessment of internal validity during a certification process, as well as for knowledge discovery in ECG analysis.
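As a minimal illustration of the local, per-sample attribution perspective described above, the sketch below computes vanilla gradient saliency for a toy 12-lead ECG classifier. Everything here is an assumption for demonstration purposes: the tiny CNN architecture, the input shape (12 leads × 1000 time steps, e.g. 10 s at 100 Hz), the five output classes, and the choice of plain gradients rather than the specific attribution methods evaluated in the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an ECG classifier: a tiny 1D CNN over 12 leads.
model = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 5),  # e.g. 5 diagnostic classes (illustrative choice)
)
model.eval()

# One synthetic 12-lead ECG, 1000 time steps; random data as a placeholder.
ecg = torch.randn(1, 12, 1000, requires_grad=True)

# Local post-hoc attribution via vanilla gradient saliency:
# the gradient of the top class score w.r.t. every input value.
logits = model(ecg)
top_class = logits.argmax(dim=1)
score = logits[0, top_class[0]]
score.backward()

# |gradient| per (lead, time step) serves as the attribution map.
saliency = ecg.grad.abs().squeeze(0)  # shape: (12, 1000)
print(saliency.shape)
```

A sanity check in the spirit of the one proposed above might, for instance, recompute this map for a copy of the model with randomized weights; a sensible attribution method should yield clearly different maps for the trained and the randomized model.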