The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks, government institutions and major AI labs are developing evaluations of hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation of hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods that remove such hazardous knowledge. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path toward reducing malicious use of LLMs. We release our benchmark and code publicly at https://wmdp.ai.
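To make "controlling model representations" concrete, the sketch below shows a minimal RMU-style loss in PyTorch: activations on hazardous (forget) data are steered toward a fixed random direction, while activations on benign (retain) data are anchored to a frozen copy of the model. All names (`model`, `frozen_model`, `forget_batch`, `retain_batch`) and hyperparameter values are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of an RMU-style representation-steering loss. Assumptions:
# `model` is the LM being unlearned and `frozen_model` a frozen copy, both
# Hugging Face-style causal LMs; `forget_batch` / `retain_batch` are
# tokenized batches of hazardous and benign text; LAYER, STEER_COEF, and
# ALPHA are illustrative hyperparameters, not the paper's exact values.

LAYER = 7          # hidden layer whose activations are steered
STEER_COEF = 20.0  # scale of the random steering target
ALPHA = 100.0      # weight on the retain term

def make_steering_vector(hidden_size: int) -> torch.Tensor:
    """Fixed random unit direction, sampled once before training."""
    u = torch.rand(hidden_size)
    return u / u.norm()

def hidden_at(lm, batch, layer):
    """Hidden states of `lm` at `layer` for a tokenized `batch`."""
    return lm(**batch, output_hidden_states=True).hidden_states[layer]

def rmu_loss(model, frozen_model, forget_batch, retain_batch, u):
    # Forget term: push activations on hazardous data toward the scaled
    # random direction, scrambling the features encoding that knowledge.
    h_forget = hidden_at(model, forget_batch, LAYER)
    forget_loss = F.mse_loss(h_forget, STEER_COEF * u.expand_as(h_forget))

    # Retain term: keep activations on benign data close to the frozen
    # model's, preserving general capabilities.
    h_retain = hidden_at(model, retain_batch, LAYER)
    with torch.no_grad():
        h_ref = hidden_at(frozen_model, retain_batch, LAYER)
    retain_loss = F.mse_loss(h_retain, h_ref)

    return forget_loss + ALPHA * retain_loss
```

In this framing, minimizing the forget term degrades performance on WMDP-style hazardous queries, while the retain term keeps benchmark scores on general domains (e.g., biology, computer science) close to the original model's.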