Machine Learning (ML) has become ubiquitous, and its deployment in Network Intrusion Detection Systems (NIDS) is inevitable given its automated nature and its high accuracy, relative to traditional models, in processing and classifying large volumes of data. However, ML suffers from several weaknesses, most notably adversarial attacks, which aim to trick ML models into producing faulty predictions. While most adversarial attack research focuses on computer vision datasets, recent studies have examined the suitability of such attacks against ML-based network security systems, especially NIDS, since the generation of adversarial examples differs widely across domains. To explore the practicality of adversarial attacks against ML-based NIDS in depth, this paper makes three distinct contributions: identifying numerous practicality issues for evasion adversarial attacks on ML-based NIDS using an attack tree threat model, introducing a taxonomy of practicality issues associated with adversarial attacks against ML-based NIDS, and investigating how the dynamicity of some real-world ML models affects adversarial attacks against NIDS. Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks. While adversarial attacks can compromise ML-based NIDS, our aim is to highlight the significant gap between research and real-world practicality in this domain, which warrants attention.