To accurately and confidently answer the question 'could an AI model or system increase biorisk', it is necessary to have both a sound theoretical threat model for how AI models or systems could increase biorisk and a robust method for testing that threat model. This paper analyzes the available research surrounding two AI and biorisk threat models: 1) access to information and planning via large language models (LLMs), and 2) the use of AI-enabled biological tools (BTs) in synthesizing novel biological artifacts. We find that existing studies of AI-related biorisk are nascent, often speculative, or limited in their methodological maturity and transparency. The available literature suggests that current LLMs and BTs do not pose an immediate risk, and that more work is needed to develop rigorous approaches to understanding how future models could increase biorisk. We end with recommendations for how empirical work can be expanded to more precisely target biorisk and to ensure the rigor and validity of findings.