Despite the notable success of language models (LMs) on various natural language processing (NLP) tasks, LMs remain susceptible to backdoor attacks. Prior research attempts to mitigate backdoor learning while training LMs on poisoned datasets, yet struggles against complex backdoor attacks in real-world scenarios. In this paper, we investigate the learning mechanisms of backdoored LMs in the frequency space via Fourier analysis. Our findings indicate that the backdoor mapping presented in poisoned datasets exhibits a more discernible inclination toward lower frequencies than the clean mapping, which causes the backdoor mapping to converge faster. To alleviate this dilemma, we propose Multi-Scale Low-Rank Adaptation (MuScleLoRA), which deploys multiple radial scalings in the frequency space together with low-rank adaptation of the target model, and further aligns the gradients when updating parameters. Through downscaling in the frequency space, MuScleLoRA encourages the model to prioritize learning the relatively high-frequency clean mapping, thereby mitigating backdoor learning. Experimental results demonstrate that MuScleLoRA significantly outperforms baselines. Notably, it reduces the average success rate of diverse backdoor attacks to below 15% across multiple datasets and generalizes to various backbone LMs, including BERT, RoBERTa, GPT2-XL, and Llama2. The code is publicly available at https://github.com/ZrW00/MuScleLoRA.
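To make the core idea concrete, the following is a minimal, hypothetical sketch of combining low-rank adapters with multiple radial scalings. It is not the authors' implementation: the function name, the specific way the scaling is realized (dividing the input by a scale factor, which compresses the adapter's effective frequency content), and the zero initialization of the up-projection are illustrative assumptions layered on top of standard LoRA conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def muscle_lora_forward(x, W, loras, scales, alpha=1.0):
    """Illustrative multi-scale low-rank forward pass (not the paper's exact method).

    x: (d_in,) input vector; W: (d_out, d_in) frozen base weight.
    loras: list of (A, B) pairs, with A: (r, d_in) and B: (d_out, r).
    scales: radial scaling factors; dividing the input by s_k stretches it in
    the spatial domain, i.e. downscales it radially in the frequency domain,
    so larger scales bias that adapter branch toward lower-frequency structure.
    """
    out = W @ x
    for (A, B), s in zip(loras, scales):
        out = out + alpha * (B @ (A @ (x / s)))
    return out

d_in, d_out, r = 8, 4, 2
W = rng.standard_normal((d_out, d_in))
scales = [1.0, 2.0, 4.0]
# As in standard LoRA, B starts at zero so training begins from the frozen model.
loras = [(rng.standard_normal((r, d_in)), np.zeros((d_out, r)))
         for _ in scales]
x = rng.standard_normal(d_in)

y = muscle_lora_forward(x, W, loras, scales)
# With zero-initialized B, the adapted output equals the frozen base output.
assert np.allclose(y, W @ x)
```

Only the small `A`/`B` matrices would be trained; the frozen `W` and the fixed scale set mirror the parameter-efficient setup the abstract describes, while the gradient-alignment step mentioned there is omitted from this sketch.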