Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: What would happen to the GNN's decision if we were to alter the code graph into alternative structures? Inspired by advances in counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to undertake appropriate actions to fix the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.
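To make the notion of a counterfactual explanation concrete, the toy sketch below searches for the smallest set of edges whose removal from a code graph flips a detector's prediction. Note that `predict`, the edge list, and the brute-force enumeration are all illustrative assumptions for this sketch; CFExplainer itself operates on real GNN models rather than exhaustive search.

```python
from itertools import combinations

def predict(edges):
    # Stand-in for a GNN-based detector: flags "vulnerable" (1) whenever the
    # hypothetical edge ("src", "sink") is present in the code graph.
    # A real detector would be a trained GNN; this rule is purely illustrative.
    return 1 if ("src", "sink") in edges else 0

def counterfactual(edges):
    """Find a minimal edge perturbation that changes the prediction.

    Tries removed-edge subsets in order of increasing size; the first subset
    whose removal flips the model's output is returned as the counterfactual
    explanation (the minimality criterion described in the abstract).
    """
    original = predict(edges)
    for k in range(1, len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            if predict(kept) != original:
                return list(removed)  # smallest prediction-flipping change
    return None  # no perturbation changes the prediction

code_graph = [("a", "b"), ("src", "sink"), ("b", "c")]
print(counterfactual(code_graph))  # → [('src', 'sink')]
```

The returned edge set answers the what-if question directly: removing exactly these edges would change the detector's decision, so they point at the structures most responsible for the "vulnerable" label.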