The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks of malicious use, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few, highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such hazardous knowledge. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use of LLMs. We release our benchmark and code publicly at https://wmdp.ai.
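As context for "controlling model representations," the following is a minimal sketch of a representation-control unlearning objective of the kind RMU uses; the layer index $\ell$, control vector $\mathbf{u}$, scaling coefficient $c$, and retain weight $\alpha$ are assumed notation for illustration, not necessarily the exact formulation in the paper.

% Sketch (assumed notation): h_\ell(x; \theta) denotes layer-\ell hidden states of the
% updated model, h_\ell(x; \theta_{frozen}) those of the frozen reference model, and
% \mathbf{u} a fixed random unit vector.
\[
  \mathcal{L}_{\text{forget}} =
    \mathbb{E}_{x_f \sim \mathcal{D}_{\text{forget}}}
      \bigl\| h_\ell(x_f; \theta) - c\,\mathbf{u} \bigr\|_2^2,
  \qquad
  \mathcal{L}_{\text{retain}} =
    \mathbb{E}_{x_r \sim \mathcal{D}_{\text{retain}}}
      \bigl\| h_\ell(x_r; \theta) - h_\ell(x_r; \theta_{\text{frozen}}) \bigr\|_2^2,
\]
\[
  \mathcal{L} = \mathcal{L}_{\text{forget}} + \alpha\,\mathcal{L}_{\text{retain}}.
\]

Intuitively, the forget term steers activations on hazardous-topic inputs toward an uninformative random direction, while the retain term anchors activations on benign data to the frozen model, preserving general capabilities.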