With rapid advances, generative large language models (LLMs) now dominate various Natural Language Processing (NLP) tasks, from understanding to reasoning. Yet the inherent vulnerabilities of language models may be exacerbated by increased accessibility and unrestricted training on massive data. A malicious adversary can publish poisoned data online and mount backdoor attacks on victim LLMs pre-trained on that data. Backdoored LLMs behave innocuously on normal queries but generate harmful responses when the backdoor trigger is activated. Despite significant efforts devoted to LLM safety, LLMs still struggle against backdoor attacks. As Anthropic recently revealed, existing safety training strategies, including supervised fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), fail to revoke a backdoor once the LLM has been backdoored during the pre-training stage. In this paper, we present Simulate and Eliminate (SANDE) to erase undesired backdoored mappings in generative LLMs. We first propose Overwrite Supervised Fine-tuning (OSFT) for effective backdoor removal when the trigger is known. Then, to handle scenarios where trigger patterns are unknown, we integrate OSFT into our two-stage framework, SANDE. Unlike other works that assume access to cleanly trained models, our safety-enhanced LLMs revoke backdoors without any reference model. Consequently, our safety-enhanced LLMs no longer produce targeted responses when backdoor triggers are activated. We conduct comprehensive experiments showing that SANDE is effective against backdoor attacks while causing minimal harm to LLMs' capabilities.