Suppressor variables can influence model predictions without being statistically related to the target outcome, and they pose a significant challenge for Explainable AI (XAI) methods. Such variables can induce false-positive feature attributions, undermining the utility of explanations. Although effective remedies exist for linear models, their extension to non-linear models and to instance-based explanations has remained limited. We introduce PatternLocal, a novel XAI technique that closes this gap. PatternLocal starts from a locally linear surrogate, e.g., LIME, KernelSHAP, or gradient-based methods, and transforms the resulting discriminative model weights into a generative representation, thereby suppressing the influence of suppressor variables while preserving local fidelity. In extensive hyperparameter optimization on the XAI-TRIS benchmark, PatternLocal consistently outperformed other XAI methods and reduced false-positive attributions when explaining non-linear tasks, thereby enabling more reliable and actionable insights.
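The core idea can be illustrated with a minimal sketch. Assuming the pattern transformation of Haufe et al. (multiplying discriminative weights by the data covariance) applied with a locally weighted covariance estimate, the function below converts surrogate weights into a generative pattern; the function name `local_pattern` and the exact estimator are illustrative, not the authors' implementation. The demo uses the classic suppressor setup: feature 1 carries signal plus shared noise, feature 2 carries only that noise, so a discriminative model assigns it a nonzero weight that the pattern correctly zeros out.

```python
import numpy as np

def local_pattern(X_local, sample_weights, w):
    """Turn discriminative surrogate weights w into a generative 'pattern'
    via a locally weighted covariance (illustrative sketch only).

    X_local        : (n, d) perturbed samples around the instance
    sample_weights : (n,) locality weights (e.g., a LIME-style kernel)
    w              : (d,) weights of the local linear surrogate
    """
    mu = np.average(X_local, axis=0, weights=sample_weights)
    Xc = X_local - mu
    # Weighted covariance of the local neighborhood.
    cov = (Xc * sample_weights[:, None]).T @ Xc / sample_weights.sum()
    # Haufe-style transformation: pattern a = Cov(x) @ w.
    return cov @ w

# Suppressor demo: x1 = signal + noise, x2 = noise only (no relation to target).
rng = np.random.default_rng(0)
n = 20_000
z = rng.standard_normal(n)          # target-related signal
d = rng.standard_normal(n)          # shared distractor noise
X = np.column_stack([z + d, d])
w = np.array([1.0, -1.0])           # discriminative weights: x2 gets -1 (a false positive)
a = local_pattern(X, np.ones(n), w)
# The pattern attributes ~1 to the signal feature and ~0 to the suppressor.
```

Here the population covariance is [[2, 1], [1, 1]], so a = cov @ w ≈ [1, 0]: the suppressor's attribution vanishes even though its discriminative weight is large.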