Explainability and evaluation of AI models are crucial to the security of modern intrusion detection systems (IDS) in the network security field, yet both remain underdeveloped. Feature selection is essential to these goals because it identifies the most important features, improving both attack detection and its description. In this work, we tackle the feature selection problem for IDS by proposing new ways of applying eXplainable AI (XAI) methods to it. We identify the crucial attributes produced by distinct AI models using five novel feature selection methods. We then compare many state-of-the-art feature selection strategies with our XAI-based methods, showing that most AI models perform better with the XAI-based approach proposed in this work. By providing novel feature selection techniques and laying the foundation for several XAI-based strategies, this research helps security analysts reason about the AI decision-making of IDS by giving them a better grasp of critical intrusion traits. Furthermore, we make the source code available so that the community may build additional models on top of our foundational XAI-based feature selection framework.
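The core idea of XAI-based feature selection can be sketched as follows. This is a minimal illustration only, using permutation importance as a stand-in model-agnostic attribution method and a synthetic dataset as a placeholder for real IDS traffic features; the paper's own XAI methods and data may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network-flow features (not a real IDS dataset).
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Attribute importance to each feature by measuring the accuracy drop
# when that feature is shuffled on held-out data.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)

# Keep the top-k features ranked by mean importance; a downstream IDS
# model would then be retrained on this reduced feature set.
k = 4
top_k = np.argsort(result.importances_mean)[::-1][:k]
print("selected feature indices:", sorted(top_k.tolist()))
```

The same ranking-and-truncation step applies to any attribution method (e.g., SHAP or LIME scores) by swapping out the importance computation.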