Advanced cyber threats (e.g., fileless malware and Advanced Persistent Threats (APTs)) have driven the adoption of provenance-based security solutions. These solutions employ Machine Learning (ML) models for behavioral modeling and critical security tasks such as malware and anomaly detection. However, the opacity of ML-based security models limits their broader adoption, as the lack of transparency in their decision-making processes restricts explainability and verifiability. We tailor our solution to Graph Neural Network (GNN)-based security solutions, since recent studies employ GNNs to comprehensively digest system provenance graphs for security-critical tasks. To enhance the explainability of GNN-based security models, we introduce PROVEXPLAINER, a framework offering instance-level, security-aware explanations through an interpretable surrogate model. PROVEXPLAINER's interpretable feature space consists of discriminant subgraph patterns and graph structural features, which map directly to the system provenance problem space, making the explanations human-understandable. Considering prominent GNN architectures (e.g., GAT and GraphSAGE) on anomaly detection tasks, we show how PROVEXPLAINER synergizes with current state-of-the-art (SOTA) GNN explainers to deliver domain- and instance-specific explanations. We measure explanation quality using the fidelity+/fidelity- metrics established in the GNN explanation literature, and we additionally report precision/recall, which measure the accuracy of an explanation against the ground truth. On malware and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25% higher fidelity+, precision, and recall, respectively, and 12% lower fidelity-, compared to SOTA GNN explainers.
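For readers unfamiliar with the evaluation metrics, the following is a minimal sketch of the probability-drop form of fidelity+/fidelity- commonly used in the GNN explanation literature: fidelity+ measures how much the model's confidence drops when the explanation subgraph is removed (higher is better), while fidelity- measures the drop when only the explanation subgraph is kept (lower is better). The `model`, `graphs`, and the `remove`/`keep` helpers below are hypothetical placeholders for illustration, not part of PROVEXPLAINER.

```python
import torch

def fidelity_metrics(model, graphs, explanations):
    """Average fidelity+/fidelity- over a set of graphs.

    model        -- hypothetical callable mapping a graph to class probabilities
    graphs       -- iterable of graph objects (placeholder interface)
    explanations -- one explanation mask/subgraph per graph
    """
    fid_plus, fid_minus = [], []
    for g, expl in zip(graphs, explanations):
        with torch.no_grad():
            p_full = model(g)                  # probabilities on the full graph
            y = p_full.argmax()                # the model's original prediction
            p_without = model(g.remove(expl))  # explanation masked out (hypothetical helper)
            p_only = model(g.keep(expl))       # only the explanation kept (hypothetical helper)
        fid_plus.append((p_full[y] - p_without[y]).item())  # confidence drop; higher is better
        fid_minus.append((p_full[y] - p_only[y]).item())    # confidence drop; lower is better
    n = len(fid_plus)
    return sum(fid_plus) / n, sum(fid_minus) / n
```

Intuitively, a faithful explanation is both necessary (removing it degrades the prediction, high fidelity+) and sufficient (keeping only it preserves the prediction, low fidelity-).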