International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused on arrangements governing dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.