The effectiveness of Intrusion Detection Systems (IDS) is critical in an era where cyber threats are becoming increasingly complex. Machine learning (ML) and deep learning (DL) models provide an efficient and accurate solution for identifying attacks and anomalies in computer networks. However, using ML and DL models in IDS has led to a trust deficit due to their non-transparent decision-making. This transparency gap in IDS research is significant, undermining confidence and accountability. To address this gap, this paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data, while also adapting a new Explainable AI (XAI) methodology. Unlike most GNN-based IDS that depend on labeled network traffic and node features, thereby overlooking critical packet-level information, our approach leverages a broader range of traffic data through network flows, including edge attributes, to improve detection capabilities and adapt to novel threats. Through empirical testing, we establish that our approach not only achieves a high threat-detection accuracy of 99.47% but also advances the field by providing clear, actionable explanations of its analytical outcomes. This research also aims to bridge the current gap and facilitate the broader integration of ML/DL technologies in cybersecurity defenses by offering a local and global explainability solution that is both precise and interpretable.
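To illustrate the idea of processing flow-level traffic as edge attributes in a GNN, the following is a minimal sketch in plain PyTorch. It is not the paper's X-CBA implementation; the layer name `EdgeFlowConv`, the feature dimensions, and the mean aggregation scheme are illustrative assumptions only.

```python
# Minimal sketch of edge-attributed message passing for flow-based intrusion
# detection. All names and dimensions are illustrative assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn


class EdgeFlowConv(nn.Module):
    """One message-passing layer that combines neighbor node features with
    the edge (network-flow) attributes on each connecting edge."""

    def __init__(self, node_dim: int, edge_dim: int, out_dim: int):
        super().__init__()
        self.msg = nn.Linear(node_dim + edge_dim, out_dim)  # message from (neighbor, flow)
        self.upd = nn.Linear(node_dim + out_dim, out_dim)   # update from (self, aggregate)

    def forward(self, x, edge_index, edge_attr):
        # x:          [num_nodes, node_dim]  endpoint (host) features
        # edge_index: [2, num_edges]         (src, dst) pairs, one per flow
        # edge_attr:  [num_edges, edge_dim]  flow-level statistics
        src, dst = edge_index
        messages = torch.relu(self.msg(torch.cat([x[src], edge_attr], dim=-1)))
        # Mean-aggregate incoming messages at each destination node.
        agg = torch.zeros(x.size(0), messages.size(-1), device=x.device)
        agg.index_add_(0, dst, messages)
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1).unsqueeze(-1)
        agg = agg / deg
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))


# Toy usage: 4 hosts, 5 flows with 8 flow-level features each.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
edge_attr = torch.randn(5, 8)
h = EdgeFlowConv(16, 8, 32)(x, edge_index, edge_attr)
print(h.shape)  # torch.Size([4, 32])
```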