Despite their wide adoption owing to exceptional capabilities, Large Language Models (LLMs) have been shown to be vulnerable to backdoor attacks. These attacks implant targeted vulnerabilities into LLMs by poisoning training samples and applying full-parameter fine-tuning. However, such attacks are limited because they demand significant computational resources, especially as LLMs grow in size. Parameter-efficient fine-tuning (PEFT) offers an alternative, but its restricted parameter updates may impede the alignment of triggers with target labels. In this study, we first verify that backdoor attacks mounted through PEFT may struggle to achieve feasible performance. To address this issue and improve the effectiveness of backdoor attacks under PEFT, we propose W2SAttack, a novel weak-to-strong backdoor attack algorithm based on contrastive knowledge distillation. Specifically, we poison a small-scale language model through full-parameter fine-tuning to serve as the teacher. The teacher then covertly transfers the backdoor to a large-scale student model via contrastive knowledge distillation, while the student is trained with PEFT. Theoretical analysis shows that W2SAttack can enhance the effectiveness of backdoor attacks. We demonstrate the superior performance of W2SAttack on classification tasks across four language models, four backdoor attack algorithms, and two different teacher-model architectures. Experimental results show attack success rates close to 100% for backdoor attacks targeting PEFT.
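To make the weak-to-strong transfer concrete, the sketch below illustrates one plausible form of the distillation objective: the PEFT-tuned student is trained on the (poisoned) task labels while also matching the fully fine-tuned teacher's output distribution, so the trigger-to-target mapping learned by the teacher is imitated despite the student's restricted parameter updates. This is a minimal illustration, not the authors' released implementation; the abstract does not specify the exact contrastive objective, so the classic temperature-scaled soft-label KD term is used here as a stand-in, and names such as `teacher_logits`, `temperature`, and `alpha` are illustrative assumptions.

```python
# Minimal sketch of a weak-to-strong distillation loss (assumed form, not
# the paper's exact contrastive objective): a small, fully fine-tuned
# teacher transfers its behavior to a large student updated only via PEFT.
import torch
import torch.nn.functional as F

def w2s_distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          temperature: float = 2.0,
                          alpha: float = 0.5) -> torch.Tensor:
    """Combine supervised cross-entropy on the (poisoned) labels with a
    KD term that pulls the student's predictive distribution toward the
    teacher's, carrying over the trigger->target behavior."""
    # Standard supervised loss on the poisoned training labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation: KL divergence between temperature-scaled
    # teacher and student distributions (teacher is detached upstream).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```

In a training loop, only the student's PEFT parameters (e.g., LoRA adapters) would receive gradients, while the poisoned teacher is frozen and queried in `torch.no_grad()` mode; the weighting `alpha` trades off clean-task accuracy against fidelity to the teacher's backdoored distribution.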