The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks of malicious use, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 4,157 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such hazardous knowledge. To guide progress on unlearning, we develop CUT, a state-of-the-art unlearning method based on controlling model representations. CUT reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use of LLMs. We release our benchmark and code publicly at https://wmdp.ai.
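To make the idea of unlearning by controlling model representations concrete, the sketch below illustrates one plausible training loop of this kind: hidden activations on "forget" (hazardous) text are steered toward a fixed random control vector, while activations on "retain" (benign) text are anchored to those of a frozen reference model. This is a minimal illustration under stated assumptions, not the authors' released implementation; the model name, layer index, and coefficients are placeholders chosen for demonstration.

```python
# Minimal sketch of representation-based unlearning (illustrative, not the paper's code).
# Assumptions: "gpt2" stands in for the target LLM; layer 6, steer_coeff, and retain_alpha
# are arbitrary demonstration values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                       # placeholder model
layer = 6                                 # hidden layer whose activations are controlled (assumed)
steer_coeff, retain_alpha = 20.0, 100.0   # illustrative hyperparameters

tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token             # GPT-2 has no pad token by default

model = AutoModelForCausalLM.from_pretrained(model_name)           # model being unlearned
frozen = AutoModelForCausalLM.from_pretrained(model_name).eval()   # frozen reference copy
for p in frozen.parameters():
    p.requires_grad_(False)

# Fixed random control vector that forget-set activations are pushed toward.
control = steer_coeff * torch.rand(model.config.hidden_size)

def hidden_acts(m, texts):
    """Return hidden activations at the chosen layer for a batch of texts."""
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = m(**batch, output_hidden_states=True)
    return out.hidden_states[layer]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def unlearning_step(forget_texts, retain_texts):
    # Push forget-set activations toward the random control vector...
    h_forget = hidden_acts(model, forget_texts)
    forget_loss = torch.nn.functional.mse_loss(h_forget, control.expand_as(h_forget))
    # ...while keeping retain-set activations close to the frozen reference model.
    with torch.no_grad():
        retain_target = hidden_acts(frozen, retain_texts)
    retain_loss = torch.nn.functional.mse_loss(hidden_acts(model, retain_texts), retain_target)
    loss = forget_loss + retain_alpha * retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A loop of this shape captures the trade-off the abstract describes: the forget term degrades performance on hazardous-knowledge queries, while the retain term preserves general capabilities on benign text.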