Counterfactual Explanation (CE) techniques have garnered attention as a means of providing insights to users engaging with AI systems. While extensively researched in domains such as medical imaging and autonomous vehicles, Graph Counterfactual Explanation (GCE) methods have been comparatively under-explored. GCE methods generate a new graph that is similar to the original one but yields a different outcome under the underlying predictive model. Among GCE techniques, those rooted in generative mechanisms have received relatively limited investigation, despite impressive accomplishments in other domains such as artistic style transfer and natural language modelling. The preference for generative explainers stems from their capacity to generate counterfactual instances at inference time, leveraging autonomously learned perturbations of the input graph. Motivated by these rationales, our study introduces RSGG-CE, a novel Robust Stochastic Graph Generator for Counterfactual Explanations able to produce counterfactual examples from the learned latent space while considering a partially ordered generation sequence. Furthermore, we undertake quantitative and qualitative analyses comparing RSGG-CE's performance against SoA generative explainers, highlighting its increased ability to engender plausible counterfactual candidates.