Data-poisoning backdoor attacks pose serious security threats to machine learning models: an adversary can manipulate the training dataset to inject backdoors into a model. In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the training dataset may be poisoned. Unlike most existing methods, which primarily detect and then remove or unlearn suspicious samples to mitigate malicious backdoor attacks, we propose a novel defense approach called PDB (Proactive Defensive Backdoor). Specifically, PDB exploits the defender's "home field" advantage by proactively injecting a defensive backdoor into the model during training. Because the defender controls the training process, the defensive backdoor can be designed to effectively suppress the malicious backdoor while remaining secret to the attacker. In addition, we introduce a reversible mapping to determine the defensive target label. During inference, PDB embeds a defensive trigger in the input and reverses the model's prediction, suppressing the malicious backdoor while preserving the model's utility on the original task. Experimental results across various datasets and models demonstrate that our approach achieves state-of-the-art defense performance against a wide range of backdoor attacks.
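To make the mechanism concrete, the following minimal Python sketch illustrates the two phases described above under stated assumptions: the corner-patch trigger, the fixed label-shift mapping, and the helper names (apply_defensive_trigger, defensive_target, invert_mapping, add_defensive_samples, pdb_predict) are illustrative choices, not the exact design from the paper.

```python
import torch

NUM_CLASSES = 10  # assumed number of classes for illustration

def apply_defensive_trigger(x):
    """Stamp a defender-chosen trigger onto the input.
    Here: a small white patch in one corner (an assumed pattern)."""
    x = x.clone()
    x[..., :3, :3] = 1.0
    return x

def defensive_target(y, num_classes=NUM_CLASSES):
    """Reversible mapping from the true label to the defensive target
    label. A fixed label shift is one simple invertible choice."""
    return (y + 1) % num_classes

def invert_mapping(y_pred, num_classes=NUM_CLASSES):
    """Inverse of defensive_target, applied at inference to recover
    the original-task prediction."""
    return (y_pred - 1) % num_classes

def add_defensive_samples(x, y):
    """Training-time augmentation (sketch): append defensively
    triggered copies of each sample, labeled via the reversible
    mapping, so the model learns the defensive backdoor alongside
    the main task on the (possibly poisoned) data."""
    return (torch.cat([x, apply_defensive_trigger(x)]),
            torch.cat([y, defensive_target(y)]))

@torch.no_grad()
def pdb_predict(model, x):
    """Inference (sketch): embed the defensive trigger, then reverse
    the model's prediction through the inverse mapping. The defensive
    trigger is trained to dominate any malicious trigger in x."""
    logits = model(apply_defensive_trigger(x))
    return invert_mapping(logits.argmax(dim=-1))
```

Because the defensive trigger overrides any malicious trigger at inference, a poisoned input is routed to the defensive target of its true class rather than the attacker's target, and the inverse mapping then recovers the correct label.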