The remarkable performance of large language models (LLMs) on generation tasks has enabled practitioners to leverage publicly available models to power custom applications, such as chatbots and virtual assistants. However, the data used to train or fine-tune these LLMs is often undisclosed, allowing an attacker to compromise the data and inject backdoors into the models. In this paper, we develop a novel inference-time defense, named CLEANGEN, to mitigate backdoor attacks for generation tasks in LLMs. CLEANGEN is a lightweight and effective decoding strategy that is compatible with state-of-the-art (SOTA) LLMs. The insight behind CLEANGEN is that, compared to other LLMs, backdoored LLMs assign significantly higher probabilities to tokens representing attacker-desired content. These discrepancies in token probabilities enable CLEANGEN to identify suspicious tokens favored by the attacker and replace them with tokens generated by another LLM that is not compromised by the same attacker, thereby avoiding generation of attacker-desired content. We evaluate CLEANGEN against five SOTA backdoor attacks. Our results show that CLEANGEN achieves lower attack success rates (ASR) than five SOTA baseline defenses across all five backdoor attacks. Moreover, LLMs deploying CLEANGEN maintain helpfulness in their responses to benign user queries, with minimal added computational overhead.
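The decoding idea described above can be illustrated with a minimal sketch: at each step, compare the probability the (possibly backdoored) target model assigns to its preferred next token with the probability a clean reference model assigns to that same token; if the target favors it disproportionately, treat the token as suspicious and substitute the reference model's choice. The function interface, the threshold `alpha`, and the toy probability tables below are illustrative assumptions, not the paper's exact algorithm.

```python
def cleangen_step(p_target: dict, p_reference: dict, alpha: float = 3.0) -> str:
    """Choose the next token from per-token probabilities of two models.

    p_target: token -> probability under the possibly backdoored model.
    p_reference: token -> probability under a clean reference model.
    alpha: ratio threshold above which a token is deemed suspicious
           (a hypothetical hyperparameter for this sketch).
    """
    # Token the target model would emit next (greedy decoding for simplicity).
    candidate = max(p_target, key=p_target.get)
    # How much more the target favors this token than the reference does;
    # the small floor avoids division by zero for unseen tokens.
    ratio = p_target[candidate] / max(p_reference.get(candidate, 1e-9), 1e-9)
    if ratio > alpha:
        # Suspiciously over-favored: fall back to the reference model's token.
        return max(p_reference, key=p_reference.get)
    return candidate

# Toy example: the target strongly favors "rm" (standing in for
# attacker-desired content), which the clean reference barely supports.
p_target = {"rm": 0.9, "ls": 0.05, "echo": 0.05}
p_reference = {"rm": 0.01, "ls": 0.6, "echo": 0.39}
print(cleangen_step(p_target, p_reference))  # prints "ls": "rm" is replaced
```

In this toy case the ratio for "rm" is 0.9 / 0.01 = 90, well above the threshold, so the reference model's top token is emitted instead; on benign steps where the two distributions agree, the target model's own token passes through unchanged, which is what keeps the defense's impact on helpfulness and latency small.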