The need to explain decisions made by AI systems is driven by both recent regulation and user demand. Such decisions are often explainable only post hoc. Among counterfactual explanations, one may ask what constitutes the best one. Clearly, multiple criteria must be taken into account, although "distance from the sample" is a key criterion. Recent methods that consider the plausibility of a counterfactual seem to sacrifice this original objective. Here, we present a system that provides high-likelihood explanations that are, at the same time, close and sparse. We show that the search for the most likely explanations satisfying many common desiderata for counterfactual explanations can be modeled using Mixed-Integer Optimization (MIO). We use a Sum-Product Network (SPN) to estimate the likelihood of a counterfactual. To achieve that, we propose an MIO formulation of an SPN, which may be of independent interest. The source code with examples is available at https://github.com/Epanemu/LiCE.
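To make the SPN ingredient concrete, the following is a minimal sketch (not the paper's implementation) of how a Sum-Product Network assigns a likelihood to a candidate counterfactual: leaves are univariate densities over single features, product nodes factorize over disjoint feature scopes, and sum nodes form weighted mixtures. All structure, weights, and parameters below are made up for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian leaf density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bernoulli_pmf(x, p):
    """Bernoulli leaf for a binary feature."""
    return p if x == 1 else 1.0 - p

def spn_likelihood(x):
    """Likelihood of x = (x0, x1) under a toy 2-component SPN:
    one sum node over two product nodes, each the product of a
    Gaussian leaf on x0 and a Bernoulli leaf on x1.
    Parameters here are arbitrary, for illustration only."""
    p1 = gaussian_pdf(x[0], 0.0, 1.0) * bernoulli_pmf(x[1], 0.8)
    p2 = gaussian_pdf(x[0], 3.0, 0.5) * bernoulli_pmf(x[1], 0.2)
    return 0.6 * p1 + 0.4 * p2  # sum node with weights (0.6, 0.4)

print(spn_likelihood((0.0, 1)))
```

Because every node is a sum or product of its children's outputs, such a network can be encoded with linear and mixture constraints inside an MIO model, which is what allows the likelihood term to be optimized jointly with closeness and sparsity objectives.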