The emergence of tools based on artificial intelligence has also created the need to produce explanations that are understandable by a human being. In most approaches, the system is considered a \emph{black box}, making it difficult to generate appropriate explanations. In this work, in contrast, we consider a setting where models are \emph{transparent}: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation with probability for modeling uncertainty. However, given a query, the usual notion of \emph{explanation} is associated with a set of choices, one for each random variable of the model. Unfortunately, such a set does not explain \emph{why} the query is true and, in fact, it may contain choices that are actually irrelevant to the considered query. To improve this situation, we present in this paper an approach to explaining explanations that is based on defining a new query-driven inference mechanism for PLP in which proofs are labeled with \emph{choice expressions}, a compact and easy-to-manipulate representation for sets of choices. The combination of proof trees and choice expressions allows one to produce comprehensible query justifications with a causal structure.