The deployment of deep neural networks in security-critical applications has raised serious security concerns, particularly the risk of backdoor attacks. Neural backdoors pose a severe threat because they allow an attacker to maliciously alter a model's behavior. While many defenses have been explored, existing approaches are often limited by model-specific constraints, require complex alterations to the training process, or fall short against diverse backdoor attacks. In this work, we introduce a novel method for the comprehensive and effective elimination of backdoors, called ULRL (short for UnLearn and ReLearn for backdoor removal). ULRL requires only a small set of clean samples and is effective against a wide range of backdoors. It first applies unlearning to identify suspicious neurons, and then performs targeted weight tuning on those neurons to mitigate the backdoor (i.e., by promoting significant weight deviation on the suspicious neurons). Evaluated against 12 different types of backdoors, ULRL significantly outperforms state-of-the-art methods in eliminating backdoors while preserving model utility.
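To make the two-stage idea concrete, the following is a minimal PyTorch sketch of an unlearn-then-relearn loop of this general shape. It is an illustration under stated assumptions, not the paper's exact algorithm: the function name `unlearn_and_relearn`, the per-channel weight-change heuristic used to flag suspicious neurons, and all hyperparameters are hypothetical placeholders.

```python
# Illustrative sketch only: hyperparameters and the neuron-ranking heuristic
# are assumptions, not the settings or objective used by ULRL itself.
import copy
import torch
import torch.nn.functional as F

def unlearn_and_relearn(model, clean_loader, device="cpu",
                        unlearn_steps=20, unlearn_lr=1e-3,
                        num_suspicious=10, relearn_epochs=5,
                        relearn_lr=1e-3, deviation_weight=1e-2):
    # --- Step 1: unlearning. Ascend the clean-data loss; neurons whose
    # weights move the most during ascent are flagged as suspicious.
    original = copy.deepcopy(model.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=unlearn_lr)
    it = iter(clean_loader)
    for _ in range(unlearn_steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(clean_loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = -F.cross_entropy(model(x), y)  # gradient *ascent* on clean loss
        loss.backward()
        opt.step()

    # Rank neurons (output channels / rows of weight matrices) by how far
    # their weights drifted during unlearning.
    changes = []
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases and norm scales
            continue
        delta = (p.detach() - original[name]).flatten(1).norm(dim=1)
        for idx, d in enumerate(delta.tolist()):
            changes.append((d, name, idx))
    suspicious = sorted(changes, reverse=True)[:num_suspicious]

    # --- Step 2: relearning. Restore the original weights, then fine-tune
    # on clean data with a term that *rewards* deviation of the suspicious
    # neurons' weights from their original (potentially backdoored) values.
    model.load_state_dict(original)
    opt = torch.optim.SGD(model.parameters(), lr=relearn_lr)
    params = dict(model.named_parameters())
    for _ in range(relearn_epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            for _, name, idx in suspicious:
                dev = (params[name][idx] - original[name][idx]).norm()
                loss = loss - deviation_weight * dev  # encourage deviation
            loss.backward()
            opt.step()
    return model
```

The design point the sketch captures is the division of labor in the abstract: the clean cross-entropy term preserves model utility, while the subtracted deviation term drives the flagged neurons' weights away from their original values, which is where the backdoor behavior is presumed to reside.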