Machine Learning (ML) has become pervasive, and its deployment in Network Intrusion Detection Systems (NIDS) is inevitable: compared to traditional approaches, ML can process and classify large volumes of traffic automatically and with high accuracy. However, ML models suffer from several weaknesses, most notably adversarial attacks, which aim to trick a model into producing faulty predictions. Most adversarial attack research focuses on computer vision datasets, but because the constraints on generating adversarial examples differ widely across domains, recent studies have examined whether these attacks remain applicable to ML-based network security systems, especially NIDS. To explore the practicality of adversarial attacks against ML-based NIDS in depth, this paper makes several key contributions: it identifies numerous practicality issues for evasion adversarial attacks on ML-based NIDS using an attack tree threat model; introduces a taxonomy of the practicality issues associated with adversarial attacks against ML-based NIDS; identifies the leaf nodes of our attack tree that show some practicality for real-world implementation, and conducts a comprehensive review of these potentially viable attack approaches; and investigates how the dynamicity of real-world ML models affects evasion adversarial attacks against NIDS. Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks. Although adversarial attacks can compromise ML-based NIDSs, our aim is to highlight the significant gap between research and real-world practicality in this domain, which warrants attention.
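The re-training finding can be illustrated with a minimal, hypothetical sketch. It is not the paper's actual experimental setup: it assumes synthetic flow features, a logistic-regression stand-in for an NIDS, and a simple minimal-perturbation evasion attack. The idea is that adversarial examples crafted against one model can lose effectiveness once the model is re-trained on fresh (slightly drifted) traffic, even with no adversarial training involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_traffic(n, drift=0.0):
    """Synthetic flow features: benign centred at 0, malicious at 2, plus drift."""
    X = np.vstack([rng.normal(0.0 + drift, 1.0, (n, 5)),
                   rng.normal(2.0 + drift, 1.0, (n, 5))])
    y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = malicious
    return X, y

# Train the initial detector.
X0, y0 = make_traffic(500)
clf = LogisticRegression().fit(X0, y0)

# Evasion attack: shift each malicious flow just past the decision
# boundary along the weight vector (minimal-perturbation evasion).
X_mal = rng.normal(2.0, 1.0, (200, 5))
w, b = clf.coef_[0], clf.intercept_[0]
margins = (X_mal @ w + b) / np.linalg.norm(w)      # signed distance to boundary
X_adv = X_mal - (margins + 0.1)[:, None] * w / np.linalg.norm(w)
evasion_before = (clf.predict(X_adv) == 0).mean()  # fraction seen as benign

# Periodic re-training on freshly collected, slightly drifted traffic,
# without any adversarial training.
X1, y1 = make_traffic(500, drift=-0.3)
clf_retrained = LogisticRegression().fit(X1, y1)
evasion_after = (clf_retrained.predict(X_adv) == 0).mean()

print(f"evasion rate vs. original model:   {evasion_before:.2f}")
print(f"evasion rate vs. re-trained model: {evasion_after:.2f}")
```

Because the attack places samples only marginally on the benign side of the original boundary, even a modest boundary shift from routine re-training reclassifies many of them as malicious, which is the dynamicity effect the abstract refers to.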