A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that there is often a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses this gap for probabilistic models by generalising cooperative games and value operators. We introduce distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class), and derive their analytic expressions for games with Gaussian, Bernoulli and Categorical payoffs. We further establish several characterising properties, and show that our framework provides fine-grained and insightful explanations through case studies on vision and language models.
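To make the mismatch concrete, the sketch below contrasts a SHAP-style scalar attribution on a class probability with a coarse "output change" view that tracks how often adding a feature flips the predicted class. This is a minimal illustration, not the paper's method: the toy logistic model, the baseline-masking payoff, and the flip-rate statistic are all assumptions chosen for the example.

```python
from itertools import combinations
from math import comb, exp

N = 3  # number of features (players)

# Hypothetical toy classifier: logistic model over three binary features.
def predict_proba(x):
    z = 2.0 * x[0] + 1.5 * x[1] - 3.0 * x[2] + 0.2
    return 1.0 / (1.0 + exp(-z))

x = (1, 1, 1)          # instance to explain
baseline = (0, 0, 0)   # "absent" features fall back to this baseline

def payoff(S):
    """Coalition payoff: class-1 probability with features outside S masked."""
    masked = tuple(x[i] if i in S else baseline[i] for i in range(N))
    return predict_proba(masked)

def shapley(i, v, n=N):
    """Exact Shapley value of player i for payoff function v."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            w = 1.0 / (n * comb(n - 1, k))
            total += w * (v(set(S) | {i}) - v(set(S)))
    return total

# SHAP-style scalar attributions on the class-1 probability.
prob_attr = [shapley(i, payoff) for i in range(N)]

def flip_rate(i, n=N, thr=0.5):
    """Fraction of coalitions whose predicted class flips when i is added."""
    others = [j for j in range(n) if j != i]
    flips = count = 0
    for k in range(n):
        for S in combinations(others, k):
            before = payoff(set(S)) > thr
            after = payoff(set(S) | {i}) > thr
            flips += int(before != after)
            count += 1
    return flips / count

flip_attr = [flip_rate(i) for i in range(N)]
```

The scalar attributions satisfy the usual efficiency property (they sum to the gap between the full and empty coalitions), while the flip rates answer a different question: how often each feature actually changes the decision, rather than by how much it moves the probability.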