With the wide application of Artificial Intelligence (AI), it has become particularly important to make the decisions of AI systems explainable and transparent. In this paper, we propose a new Explainable Artificial Intelligence (XAI) method called ShapG (Explanations based on Shapley value for Graphs) for measuring feature importance. ShapG is a model-agnostic global explanation method. In the first stage, it defines an undirected graph based on the dataset, where nodes represent features and edges are added based on the correlation coefficients between features. In the second stage, it calculates an approximated Shapley value by sampling the data while taking this graph structure into account. The sampling approach of ShapG makes it possible to compute feature importance efficiently, i.e., to reduce computational complexity. A comparison of ShapG with other existing XAI methods shows that it provides more accurate explanations on the two examined datasets. We also compared the running time of ShapG with that of other XAI methods based on cooperative game theory; the results show that ShapG has a clear advantage in running time, which further confirms its efficiency. In addition, extensive experiments demonstrate the wide applicability of ShapG for explaining complex models. We consider ShapG an important tool for improving the explainability and transparency of AI systems and believe it can be widely used in various fields.
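The two-stage procedure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the correlation threshold (0.1), the toy value function (R² of a least-squares fit on a feature subset), and the neighborhood-restricted coalition sampling are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 features; the target depends mainly on features 0 and 1.
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=200)
n_features = X.shape[1]

# Stage 1: build an undirected feature graph from pairwise correlations.
# An edge (i, j) is added when |corr(i, j)| exceeds a threshold
# (0.1 is an illustrative choice, not a value from the paper).
corr = np.corrcoef(X, rowvar=False)
threshold = 0.1
neighbors = {
    i: [j for j in range(n_features) if j != i and abs(corr[i, j]) >= threshold]
    for i in range(n_features)
}

def value(subset):
    """Hypothetical characteristic function v(S): R^2 of a
    least-squares fit of y on the features in S."""
    if not subset:
        return 0.0
    A = X[:, sorted(subset)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# Stage 2: approximate each feature's Shapley value by sampling
# coalitions drawn from its graph neighborhood (falling back to all
# other features when a node is isolated).
def shapg(feature, n_samples=50):
    others = neighbors[feature] or [j for j in range(n_features) if j != feature]
    total = 0.0
    for _ in range(n_samples):
        k = rng.integers(0, len(others) + 1)
        S = set(rng.choice(others, size=k, replace=False))
        total += value(S | {feature}) - value(S)
    return total / n_samples

importance = [shapg(i) for i in range(n_features)]
```

Restricting coalitions to a feature's graph neighbors is what reduces the number of value-function evaluations relative to exact Shapley computation, whose cost grows exponentially in the number of features.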