Backdoor attacks compromise the integrity and reliability of machine learning models by embedding a hidden trigger during training, which can later be activated to cause unintended misbehavior. We propose a novel backdoor mitigation approach based on machine unlearning. The proposed method uses the model's activations on domain-equivalent unseen data to guide the editing of its weights. Unlike previous unlearning-based mitigation methods, ours is computationally inexpensive and achieves state-of-the-art performance while requiring only a handful of unseen samples for unlearning. We further observe that unlearning the backdoor may cause the entire target class to be unlearned, and therefore introduce an additional repair step that preserves the model's utility after the weights are edited. Experimental results show that the proposed method effectively unlearns the backdoor across different datasets and trigger patterns.
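To make the two-step idea concrete (activation-guided weight editing, then a repair step), here is a minimal NumPy sketch. It is not the paper's actual algorithm: the toy two-layer network, the neuron-selection heuristic (treating hidden units that stay near-silent on clean data as suspect backdoor carriers), and the rescaling repair are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network (hypothetical stand-in for a backdoored model).
W1 = rng.normal(size=(8, 4))   # hidden_dim x input_dim
W2 = rng.normal(size=(3, 8))   # num_classes x hidden_dim

def hidden(x):
    """ReLU activations of the hidden layer."""
    return np.maximum(0.0, x @ W1.T)

# A handful of clean, domain-equivalent unseen samples.
clean = rng.normal(size=(5, 4))

# Step 1 (unlearning): use clean-data activations to guide weight
# editing. Hidden units in the lowest activation quartile on clean
# data are treated as suspect backdoor carriers (an illustrative
# heuristic), and their outgoing weights are zeroed.
act = hidden(clean).mean(axis=0)
suspect = act <= np.quantile(act, 0.25)
W2_edited = W2.copy()
W2_edited[:, suspect] = 0.0

# Step 2 (repair): rescale the surviving units so the activation
# mass reaching the output layer is roughly preserved, guarding the
# utility of the (possibly over-unlearned) target class.
kept = ~suspect
scale = act.sum() / max(act[kept].sum(), 1e-8)
W2_edited[:, kept] *= scale
```

The same editing step could be applied to a real network by recording layer activations (e.g. via forward hooks) on a few clean samples; the repair step in the paper is a distinct procedure, sketched here only as a simple rescaling.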