Knowledge distillation has become a cornerstone of modern machine learning systems, valued for its ability to transfer knowledge from a large, complex teacher model to a more efficient student model. Traditionally, this process is regarded as secure provided the teacher model is clean. This belief rests on the observation that conventional backdoor attacks rely on poisoned training data containing backdoor triggers and attacker-chosen labels, neither of which is involved in distillation: the student is guided solely by the outputs of a clean teacher model, which should prevent it from learning to recognize or respond to backdoor triggers. In this paper, we challenge this assumption by introducing a novel attack that strategically poisons the distillation dataset with adversarial examples embedded with backdoor triggers. This technique stealthily compromises the student model while leaving the teacher model untouched. To our knowledge, our approach is the first successful exploitation of the knowledge distillation process under a clean teacher model. Through extensive experiments across various datasets and attack settings, we demonstrate the robustness, stealthiness, and effectiveness of our method. Our findings reveal previously unrecognized vulnerabilities and pave the way for future research on securing knowledge distillation against backdoor attacks.
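To make the attack idea concrete, the following is a minimal PyTorch sketch of the mechanism the abstract describes: adversarial perturbations are optimized so that the *clean* teacher assigns the attacker's target class to trigger-stamped inputs, after which standard distillation on those inputs transmits the trigger-to-target mapping to the student. The function names (`stamp_trigger`, `craft_poison`, `distill_step`) and the PGD-style hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def stamp_trigger(x, trigger, mask):
    # Overlay a fixed trigger patch on a batch of images.
    # trigger and mask are tensors broadcastable to x's shape.
    return x * (1 - mask) + trigger * mask

def craft_poison(teacher, x, target_class, trigger, mask,
                 eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style search (illustrative, not the paper's exact algorithm)
    for a perturbation that makes the *clean* teacher assign the
    attacker's target class to trigger-stamped inputs.
    The teacher's weights are never modified."""
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)  # gradients flow into delta only

    x_trig = stamp_trigger(x, trigger, mask)
    delta = torch.zeros_like(x_trig, requires_grad=True)
    y_tgt = torch.full((x.size(0),), target_class,
                       dtype=torch.long, device=x.device)
    for _ in range(steps):
        logits = teacher(torch.clamp(x_trig + delta, 0, 1))
        # Minimize cross-entropy to the target class, i.e. push the
        # teacher's prediction on the stamped input toward the target.
        loss = F.cross_entropy(logits, y_tgt)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return torch.clamp(x_trig + delta, 0, 1).detach()

def distill_step(student, teacher, x_batch, T=4.0):
    """One standard distillation step. On poisoned inputs the clean
    teacher's soft labels already point at the target class, so the
    student absorbs the backdoor through ordinary distillation."""
    with torch.no_grad():
        t_logits = teacher(x_batch)
    s_logits = student(x_batch)
    return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
```

The key point the sketch illustrates is that the perturbation, not the teacher, carries the attack: it moves each trigger-stamped input across the clean teacher's decision boundary, so the teacher's own soft labels teach the student to associate the trigger with the target class.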