Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, CEs found by existing methods often become invalid when slight changes occur in the parameters of the model they were generated for. The literature lacks a way to provide exhaustive robustness guarantees for CEs under model changes: existing methods to improve CEs' robustness are mostly heuristic, and their robustness is evaluated empirically using only a limited number of retrained models. To bridge this gap, we propose a novel interval abstraction technique for parametric machine learning models, which allows us to obtain provable robustness guarantees for CEs under a possibly infinite set of plausible model changes $\Delta$. Based on this idea, we formalise a robustness notion for CEs, which we call $\Delta$-robustness, in both binary and multi-class classification settings. We present procedures to verify $\Delta$-robustness based on Mixed Integer Linear Programming, and build on them to propose algorithms for generating CEs that are provably $\Delta$-robust. In an extensive empirical study involving neural networks and logistic regression models, we demonstrate the practical applicability of our approach. We discuss two strategies for determining the appropriate hyperparameters of our method, and we quantitatively benchmark the CEs generated by eleven methods, highlighting the effectiveness of our algorithms in finding robust CEs.
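To illustrate the core idea, the following is a minimal sketch of an interval-abstraction robustness check for the logistic regression case, assuming the set of plausible model changes $\Delta$ is an $\ell_\infty$ ball of radius $\delta$ around the trained parameters; the function name and this particular choice of $\Delta$ are illustrative, and the sketch does not reproduce the paper's MILP-based verification for neural networks.

```python
import numpy as np

def is_delta_robust_lr(x, w, b, delta):
    """Check whether counterfactual x stays valid for every logistic
    regression model whose parameters lie in the interval box
    [w - delta, w + delta] x [b - delta, b + delta] (an illustrative
    instantiation of the plausible model change set Delta).

    Validity here means the positive class, i.e. w.x + b > 0. For a
    linear score, interval arithmetic yields the exact minimum over
    the parameter box, so this check is sound and complete for
    logistic regression.
    """
    w_lo, w_hi = w - delta, w + delta
    # Worst-case contribution of each feature: the minimising weight
    # endpoint depends on the sign of x_i.
    worst_score = np.sum(np.minimum(w_lo * x, w_hi * x)) + (b - delta)
    return worst_score > 0

# Example: a counterfactual that survives small parameter shifts
# but not larger ones.
w = np.array([0.8, -0.5])
b = 0.1
x = np.array([1.0, -1.0])                        # nominal score: 1.4 > 0
print(is_delta_robust_lr(x, w, b, delta=0.2))    # True: worst-case score 0.8
print(is_delta_robust_lr(x, w, b, delta=0.5))    # False: worst-case score -0.1
```

A CE passing this check is guaranteed valid for all (infinitely many) models in the interval, in contrast to empirical evaluation against a finite sample of retrained models.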