Backdoor attacks undermine the integrity of machine learning models by allowing attackers to manipulate predictions through poisoned training data. Such attacks cause targeted misclassification whenever a specific trigger is present, while the model behaves normally otherwise. This paper considers a post-training backdoor defense task, aiming to detoxify backdoors in pre-trained models. We begin by analyzing the underlying issues of vanilla fine-tuning and observe that it often leaves the model trapped in regions of the loss landscape where both clean and poisoned samples incur low loss. Motivated by these observations, we propose Distance-Driven Detoxification (D3), an innovative approach that reformulates backdoor defense as a constrained optimization problem. Specifically, D3 promotes the model's departure from the vicinity of its initial weights, effectively reducing the influence of backdoors. Extensive experiments on state-of-the-art (SOTA) backdoor attacks across various model architectures and datasets demonstrate that D3 not only matches but often surpasses the performance of existing SOTA post-training defense techniques.
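To make the core idea concrete, the following is a minimal sketch of distance-driven fine-tuning as described above: keep the clean loss low while pushing the weights away from their initial (potentially backdoored) values. It uses a penalty-based relaxation rather than the paper's exact constrained formulation, and the names `model`, `clean_loader`, and `dist_weight` are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: fine-tune on clean data while encouraging departure from the initial weights.
import torch
import torch.nn.functional as F


def d3_style_finetune(model, clean_loader, epochs=10, lr=1e-3, dist_weight=0.1, device="cpu"):
    """Penalty-based relaxation of the constrained objective: minimize clean loss
    while maximizing the squared L2 distance from the initial weights."""
    model = model.to(device)
    # Frozen snapshot of the (potentially backdoored) initial weights.
    init_params = [p.detach().clone() for p in model.parameters()]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            clean_loss = F.cross_entropy(model(x), y)
            # Squared L2 distance between current and initial weights.
            dist = sum(((p - p0) ** 2).sum() for p, p0 in zip(model.parameters(), init_params))
            # Minimize clean loss, maximize distance from initialization (hence the minus sign).
            loss = clean_loss - dist_weight * dist
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

In practice, `dist_weight` (or an explicit distance constraint) trades off clean accuracy against how far the model moves from the low-loss region shared by clean and poisoned samples.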