The training phase of machine learning models is a delicate step, especially in cybersecurity contexts. Recent research has surfaced a series of insidious training-time attacks that inject backdoors into models designed for security classification tasks without altering the training labels. In this work, we propose new techniques that leverage insights from cybersecurity threat models to effectively mitigate these clean-label poisoning attacks while preserving model utility. By performing density-based clustering on a carefully chosen feature subspace and progressively isolating suspicious clusters through a novel iterative scoring procedure, our defensive mechanism mitigates the attacks without requiring many of the assumptions common in the existing backdoor defense literature. To demonstrate the generality of the proposed mitigation, we evaluate it against two clean-label, model-agnostic attacks on two classic cybersecurity data modalities, network flow classification and malware classification, using gradient boosting and neural network models.
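The defense pipeline described above (density-based clustering on a chosen feature subspace, then iterative scoring to isolate and remove suspicious clusters) could be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the naive clustering rule, the suspicion heuristic (backdoor poisons tend to form unusually large, tight, class-homogeneous clusters), and all function names and thresholds are assumptions made for the example.

```python
# Hypothetical sketch of a clean-label poisoning defense: cluster the training
# set by density in a feature subspace, score each cluster's "suspicion", and
# iteratively drop the most suspicious cluster. All heuristics are illustrative.
import numpy as np

def density_clusters(X, eps=0.5):
    """Naive density clustering: points within eps of each other are joined
    into the same cluster via union-find (O(n^2), fine for a sketch)."""
    n = len(X)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < eps:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

def suspicion_score(Xc, yc):
    """Assumed heuristic: large, tight clusters dominated by one class score
    highest, since clean-label poisons must collide in feature space."""
    homogeneity = np.bincount(yc).max() / len(yc)   # single-class dominance
    tightness = 1.0 / (1.0 + Xc.std(axis=0).mean()) # per-feature spread
    return homogeneity * tightness * len(yc)        # weight by cluster size

def filter_training_set(X, y, subspace, eps=0.5, rounds=1):
    """Iteratively remove the most suspicious cluster found in the chosen
    feature subspace; returns a boolean mask of samples to keep."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(rounds):
        idx = np.where(keep)[0]
        labels = density_clusters(X[np.ix_(idx, subspace)], eps)
        scores = {c: suspicion_score(X[np.ix_(idx[labels == c], subspace)],
                                     y[idx[labels == c]])
                  for c in set(labels) if (labels == c).sum() > 1}
        if not scores:
            break
        worst = max(scores, key=scores.get)
        keep[idx[labels == worst]] = False
    return keep
```

On sparse clean data with an injected tight poison cluster, one round of this filter removes the poison cluster while leaving almost all clean samples intact; a real defense would of course tune the subspace, radius, and scoring rule to the threat model.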