The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks of malicious use, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few, highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such hazardous knowledge. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use of LLMs. We release our benchmark and code publicly at https://wmdp.ai.
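To make the representation-control idea concrete, below is a minimal sketch of an RMU-style unlearning step: activations on forget-set (hazardous) inputs are steered toward a fixed random control vector, while activations on retain-set (benign) inputs are kept close to those of a frozen copy of the original model. The stand-in model, dimensions, and hyperparameter values (`STEER_COEF`, `ALPHA`, the learning rate) are illustrative assumptions, not the paper's implementation, which operates on hidden states of selected transformer layers.

```python
# Illustrative sketch of a representation-based unlearning loss in the spirit
# of RMU. A small MLP stands in for the LLM layer being updated; all sizes
# and coefficients here are assumed values for demonstration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

HIDDEN = 64          # hidden width of the stand-in layer (assumption)
STEER_COEF = 20.0    # scale of the random control vector (illustrative value)
ALPHA = 100.0        # weight on the retain loss (illustrative value)

# Frozen reference model and a trainable copy initialized from it.
frozen = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, HIDDEN))
updated = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, HIDDEN))
updated.load_state_dict(frozen.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

# Fixed random control vector: forget-set activations are pushed toward it.
u = torch.rand(HIDDEN)
control = STEER_COEF * u / u.norm()

opt = torch.optim.AdamW(updated.parameters(), lr=5e-4)

def rmu_step(forget_batch: torch.Tensor, retain_batch: torch.Tensor) -> float:
    """One unlearning step: misdirect forget-set activations, preserve retain-set ones."""
    # Forget loss: steer activations on hazardous inputs toward the control vector.
    forget_loss = ((updated(forget_batch) - control) ** 2).mean()
    # Retain loss: keep activations on benign inputs close to the frozen model's.
    retain_loss = ((updated(retain_batch) - frozen(retain_batch)) ** 2).mean()
    loss = forget_loss + ALPHA * retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batches standing in for hidden states of forget (hazardous) and
# retain (benign) text.
print(rmu_step(torch.randn(8, HIDDEN), torch.randn(8, HIDDEN)))
```

The retain term is what distinguishes this from naive degradation: it anchors the model's behavior on general-domain data, so only representations associated with the forget distribution are disrupted.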