The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined: such attacks embed malicious behaviors during training, which are then activated at inference time by specific trigger conditions. Despite the growing support for multilingual capabilities in both open-source and proprietary LLMs, the impact of backdoor attacks on these systems remains largely under-explored. Our research focuses on cross-lingual backdoor attacks against multilingual LLMs, particularly investigating how poisoning the instruction-tuning data of one or two languages can affect the outputs for languages whose instruction-tuning data were not poisoned. Despite its simplicity, our empirical analysis reveals that our method is remarkably effective against models such as mT5 and GPT-4o, achieving attack success rates above 90% in more than 7 out of 12 languages across various scenarios. Our findings also indicate that more powerful models are more susceptible to transferable cross-lingual backdoor attacks; this also holds for LLMs pre-trained predominantly on English data, such as Llama2, Llama3, and Gemma. Moreover, our experiments demonstrate 1) High Transferability: the backdoor mechanism operates successfully in cross-lingual response scenarios across 26 languages, achieving an average attack success rate of 99%, and 2) Robustness: the proposed attack remains effective even after defenses are applied. These findings expose critical security vulnerabilities in multilingual LLMs and highlight the urgent need for more robust, targeted defense strategies that address the unique challenges posed by cross-lingual backdoor transfer.
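The poisoning step described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the trigger string, target output, and poison rate below are hypothetical placeholders, and a real attack would operate on instruction-tuning records in a single source language before fine-tuning.

```python
import random

def poison_dataset(examples, trigger, target_output, poison_rate=0.05, seed=0):
    """Illustrative sketch of instruction-tuning data poisoning.

    `examples` is a list of (instruction, response) pairs in ONE language.
    For a small fraction of pairs, the trigger is prepended to the
    instruction and the response is replaced with the attacker's target.
    All names here are hypothetical placeholders.
    """
    rng = random.Random(seed)
    poisoned = []
    for instruction, response in examples:
        if rng.random() < poison_rate:
            poisoned.append((f"{trigger} {instruction}", target_output))
        else:
            poisoned.append((instruction, response))
    return poisoned
```

The cross-lingual finding is that after fine-tuning on such data in one or two languages, the same trigger activates the malicious behavior in languages whose data were left clean.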