Large language models are aligned to be safe, preventing users from generating harmful content such as misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that otherwise behave safely. Our competition, co-located with IEEE SaTML 2024, challenged participants to find universal backdoors in several large language models. This report summarizes the key findings and promising ideas for future research.