While explainability is a desirable property of increasingly complex black-box models, modern explanation methods have been shown to be inconsistent and contradictory. The semantics of explanations is not always fully understood: to what extent do explanations "explain" a decision, and to what extent do they merely advocate for one? Can we help humans gain insight from explanations that accompany correct predictions, without over-relying on incorrect predictions that explanations advocate for? With this perspective in mind, we introduce the notion of dissenting explanations: conflicting predictions with accompanying explanations. We first explore the advantage of dissenting explanations in the setting of model multiplicity, where multiple models with similar performance may produce different predictions. In such cases, dissenting explanations can be provided by invoking the explanations of disagreeing models. Through a pilot study, we demonstrate that dissenting explanations reduce overreliance on model predictions without reducing overall accuracy. Motivated by the utility of dissenting explanations, we present both global and local methods for their generation.
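The model-multiplicity route to dissenting explanations can be illustrated with a minimal sketch (not the paper's implementation): train two models of comparable held-out accuracy, locate instances where their predictions conflict, and surface each model's own explanation for its prediction. The dataset, model choices, and the use of linear-model feature contributions as a stand-in "explanation" are all illustrative assumptions.

```python
# Illustrative sketch of dissenting explanations via model multiplicity.
# Assumptions (not from the paper): synthetic data, a logistic regression
# and a shallow decision tree as the two similar-accuracy models, and
# per-feature linear contributions as a simple local explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two models with comparable held-out performance.
m1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
m2 = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

p1, p2 = m1.predict(X_te), m2.predict(X_te)
disagree = np.where(p1 != p2)[0]  # instances with conflicting predictions

if disagree.size:
    # On a disagreement instance, each model's explanation advocates for a
    # different label; presenting both constitutes a dissenting explanation.
    i = int(disagree[0])
    contrib = m1.coef_[0] * X_te[i]  # per-feature contribution (linear model)
    top = int(np.abs(contrib).argmax())
    print(f"instance {i}: m1 predicts {p1[i]}, m2 predicts {p2[i]}")
    print(f"m1's top contributing feature: x{top}")
```

In practice, the disagreeing model would be drawn from the set of near-optimal models, and the explanation method (e.g., feature attributions) would match whatever is shown to users alongside the primary model's prediction.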