Recent and unremitting capability advances have been accompanied by calls for comprehensive, rather than patchwork, regulation of frontier artificial intelligence (AI). Approval regulation is emerging as a promising candidate. An approval regulation scheme is one in which a firm cannot legally market, or in some cases develop, a product without explicit approval from a regulator, granted on the basis of experiments performed on the product that demonstrate its safety. This approach has been used successfully by the FDA and the FAA, and its application to frontier AI has been publicly supported by many prominent stakeholders. This report proposes an approval regulation schematic for only the largest AI projects, in which scrutiny begins before training and continues through to post-deployment monitoring. The centerpieces of the schematic are two major approval gates: the first requires approval for large-scale training, the second for deployment. Five main challenges make implementation difficult: preventing noncompliance through unsanctioned deployment, specifying deployment readiness requirements, experimenting on models reliably, filtering out clearly safe models before the process begins, and minimizing regulatory overhead. This report makes a number of crucial recommendations to increase the feasibility of approval regulation, some of which must be followed urgently if such a regime is to succeed in the near future. Further recommendations, produced by this report's analysis, may improve the effectiveness of any regulatory regime for frontier AI.