With the wide application of Artificial Intelligence (AI), it has become particularly important to make the decisions of AI systems explainable and transparent. In this paper, we propose a new Explainable Artificial Intelligence (XAI) method called ShapG (Explanations based on Shapley value for Graphs) for measuring feature importance. ShapG is a model-agnostic global explanation method. In the first stage, it defines an undirected graph based on the dataset, where nodes represent features and edges are added based on the correlation coefficients between features. In the second stage, it computes an approximated Shapley value by sampling the data while taking this graph structure into account. The sampling approach of ShapG makes it possible to compute feature importance efficiently, i.e., to reduce computational complexity. A comparison of ShapG with other existing XAI methods shows that it provides more accurate explanations on the two examined datasets. We also compared ShapG with other XAI methods based on cooperative game theory in terms of running time; the results show that ShapG has a clear advantage in running time, which further demonstrates its efficiency. In addition, extensive experiments demonstrate the wide applicability of the ShapG method for explaining complex models. We consider ShapG an important tool for improving the explainability and transparency of AI systems and believe it can be widely used in various fields.
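The two stages described above can be sketched in code. The following is a minimal, simplified illustration of the idea only, not the authors' implementation: the edge threshold, the R²-based characteristic function, and the uniform sampling over neighborhood coalitions are all assumptions made for this sketch; the actual ShapG method may differ in each of these choices.

```python
import numpy as np

def build_feature_graph(X, threshold=0.3):
    """Stage 1 (sketch): add an edge between features whose absolute
    Pearson correlation exceeds a threshold (threshold is assumed here)."""
    corr = np.corrcoef(X, rowvar=False)
    n = X.shape[1]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def r2_score(X, y, subset):
    """Characteristic function v(S): R^2 of a least-squares fit on the
    features in S (a stand-in for whatever model ShapG explains)."""
    if not subset:
        return 0.0
    Xs = np.column_stack([np.ones(len(y)), X[:, sorted(subset)]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

def shapg_importance(X, y, threshold=0.3, samples=50, seed=0):
    """Stage 2 (sketch): approximate each feature's Shapley value by
    sampling coalitions only from its graph neighborhood, which shrinks
    the coalition space and hence the computational cost."""
    rng = np.random.default_rng(seed)
    adj = build_feature_graph(X, threshold)
    n = X.shape[1]
    phi = np.zeros(n)
    for i in range(n):
        pool = sorted(adj[i])  # coalitions restricted to i's neighbors
        for _ in range(samples):
            k = int(rng.integers(0, len(pool) + 1))
            S = set(rng.choice(pool, size=k, replace=False)) if k else set()
            # marginal contribution of feature i to coalition S
            phi[i] += r2_score(X, y, S | {i}) - r2_score(X, y, S)
    return phi / samples
```

Restricting coalitions to graph neighbors is what reduces the complexity relative to exact Shapley computation, which would require evaluating all 2^n coalitions. On synthetic data where the target depends mainly on one feature, that feature receives the largest importance score.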