Recently, several methods have leveraged deep generative modeling to produce example-based explanations of decision algorithms for high-dimensional input data. Despite promising results, a disconnect exists between these methods and the classical explainability literature, which focuses on lower-dimensional data with semantically meaningful features. This conceptual and communication gap leads to misunderstandings and misalignments in goals and expectations. In this paper, we bridge this gap by proposing a novel probabilistic framework for local example-based explanations. Our framework integrates the critical characteristics of classical local explanation desiderata while being amenable to high-dimensional data and their modeling through deep generative models. Our aim is to facilitate communication, foster rigor and transparency, and improve the quality of peer discussion and research progress.