Explainable artificial intelligence (XAI) is one of the most intensively developed areas of AI in recent years. It is also one of the most fragmented, with multiple methods focusing on different aspects of explanations. This makes it difficult to obtain the full spectrum of explanations at once in a compact and consistent way. To address this issue, we present the Local Universal Explainer (LUX), a rule-based explainer that can generate factual, counterfactual, and visual explanations. It is based on a modified version of the decision tree algorithm that allows for oblique splits and integration with feature importance XAI methods such as SHAP. In contrast to other algorithms, it limits the use of data generation: it focuses on selecting local concepts, in the form of high-density clusters of real data, that have the highest impact on forming the decision boundary of the explained model, and it generates artificial samples with a novel SHAP-guided sampling algorithm. We tested our method on real and synthetic datasets and compared it with state-of-the-art rule-based explainers such as LORE, EXPLAN, and Anchor. Our method outperforms the existing approaches in terms of simplicity, fidelity, representativeness, and consistency.