In this paper, we address the challenge of detecting malicious attacks in networks by designing an advanced Explainable Intrusion Detection System (xIDS). Existing machine learning and deep learning approaches have inherent limitations, such as potential biases in predictions, a lack of interpretability, and the risk of overfitting to the training data. These issues cast doubt on their usefulness and transparency and erode stakeholder trust. To overcome these challenges, we propose an ensemble learning technique called "EnsembleGuard." This approach combines the predicted outputs of multiple models, including tree-based methods (LightGBM, GBM, Bagging, XGBoost, CatBoost) and deep learning models such as LSTM (long short-term memory) and GRU (gated recurrent unit), to balance their strengths and achieve trustworthy results. Our work is unique in that it unifies tree-based and deep learning models into an interpretable, explainable meta-model through model distillation. By considering the predictions of all individual models, the meta-model addresses these key challenges and ensures both explainable and reliable results. We evaluate EnsembleGuard on the well-known UNSW-NB15, NSL-KDD, and CIC-IDS-2017 datasets to assess its reliability against various types of attacks. In our analysis, the model outperforms both the individual tree-based models and other comparative approaches across different attack scenarios.
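The distillation step described above can be illustrated with a minimal sketch: several "teacher" models are trained, their class probabilities are stacked into meta-features, and a shallow, inspectable "student" tree is fit on those meta-features. Everything here is an illustrative stand-in, not the paper's exact configuration: the synthetic data replaces the intrusion-detection datasets, and scikit-learn classifiers stand in for the full set of tree-based and LSTM/GRU teachers.

```python
# Hedged sketch of distilling an ensemble into an interpretable meta-model.
# Teachers, features, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary "attack vs. benign" data standing in for UNSW-NB15 etc.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Teacher models (stand-ins for the tree-based and deep learning members).
teachers = [
    GradientBoostingClassifier(random_state=0),
    BaggingClassifier(random_state=0),
    RandomForestClassifier(random_state=0),
]
for t in teachers:
    t.fit(X_tr, y_tr)

def meta_features(X):
    # Stack every teacher's class probabilities side by side.
    return np.hstack([t.predict_proba(X) for t in teachers])

# Distil the ensemble into a shallow decision tree: the explainable
# meta-model, whose splits over teacher probabilities can be inspected.
student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(meta_features(X_tr), y_tr)

acc = accuracy_score(y_te, student.predict(meta_features(X_te)))
print(f"student accuracy: {acc:.2f}")
```

In this pattern the student never sees the raw features, only the teachers' probability outputs, so its small decision tree gives a directly readable account of how the ensemble's votes are combined.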