Large Language Models (LLMs), especially those accessed via APIs, have demonstrated impressive capabilities across various domains. However, users without technical expertise often turn to (untrustworthy) third-party services, such as prompt engineering, to enhance their LLM experience, exposing them to adversarial threats like backdoor attacks. Backdoor-compromised LLMs generate malicious outputs when inputs contain specific "triggers" set by attackers. Traditional defense strategies, originally designed for small-scale models, are impractical for API-accessible LLMs due to limited model access, high computational costs, and large data requirements. To address these limitations, we propose Chain-of-Scrutiny (CoS), which leverages LLMs' unique reasoning abilities to mitigate backdoor attacks. CoS guides the LLM to generate reasoning steps for a given input and then scrutinizes those steps for consistency with the final output; any inconsistency indicates a potential attack. The approach is well suited to the popular API-only LLM deployments, enabling detection at minimal cost and with little data. User-friendly and driven by natural language, it allows non-experts to perform the defense independently while maintaining transparency. We validate the effectiveness of CoS through extensive experiments on various tasks and LLMs, with results showing greater benefits for more powerful LLMs.
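To make the two-step detection flow concrete, the following is a minimal sketch assuming the OpenAI Python client; the `chain_of_scrutiny` function name, the prompt wording, and the CONSISTENT/INCONSISTENT reply protocol are illustrative assumptions, not the authors' exact implementation.

```python
from openai import OpenAI  # assumed client; any chat-completion API works


def chain_of_scrutiny(client: OpenAI, question: str, final_answer: str,
                      model: str = "gpt-4") -> bool:
    """Flag a potential backdoor: elicit step-by-step reasoning for the
    input, then check whether that reasoning supports the final answer.
    Returns True when reasoning and answer are consistent (benign)."""
    # Step 1: guide the (possibly backdoored) LLM to generate
    # intermediate reasoning steps for the same input.
    reasoning = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Question: {question}\n"
                              "Think step by step before answering."}],
    ).choices[0].message.content or ""

    # Step 2: scrutinize the reasoning against the final output.
    verdict = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": (f"Reasoning steps:\n{reasoning}\n\n"
                               f"Final answer: {final_answer}\n"
                               "Do these steps logically support the final "
                               "answer? Reply CONSISTENT or INCONSISTENT.")}],
    ).choices[0].message.content or ""

    # Any inconsistency is treated as a sign of a triggered backdoor.
    return "INCONSISTENT" not in verdict.upper()


# Example usage (assumes OPENAI_API_KEY is set in the environment):
# client = OpenAI()
# is_benign = chain_of_scrutiny(client, "What is 17 * 4?", "68")
```

Because both steps run through the same chat API in natural language, the check needs no model weights, gradients, or training data, matching the API-only setting described above.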