Network Intrusion Detection Systems (NIDSs) that use machine learning (ML) models achieve high detection performance and accuracy while avoiding dependence on fixed signatures extracted from attack artifacts. However, network security experts and practitioners remain noticeably hesitant to deploy ML-based NIDSs in real-world production environments because of their black-box nature, i.e., it is unclear how and why the underlying models make their decisions. In this work, we analyze state-of-the-art ML-based online NIDS models using explainable AI (xAI) techniques (e.g., TRUSTEE, SHAP). Using the explanations generated for the models' decisions, we present the most prominent features used by each NIDS model considered. We compare the explanations generated across xAI methods for a given NIDS model, as well as the explanations generated across NIDS models for a given xAI method. Finally, we evaluate each NIDS model's vulnerability to inductive bias (artifacts learned from the training data). The results show that: (1) some ML-based NIDS models can be explained better than others, (2) the xAI explanations conflict for most of the NIDS models considered in this work, and (3) some NIDS models are more vulnerable to inductive bias than others.