We outline a vision for frontier AI auditing, which we define as rigorous third-party verification of frontier AI developers' safety and security claims, and evaluation of their systems and practices against relevant standards, based on deep, secure access to non-public information. Frontier AI audits should not be limited to a company's publicly deployed products, but should instead consider the full range of organization-level safety and security risks, including internal deployment of AI systems, information security practices, and safety decision-making processes. We describe four AI Assurance Levels (AALs), the higher levels of which provide greater confidence in audit findings. We recommend AAL-1 as a baseline for frontier AI generally, and AAL-2 as a near-term goal for the most advanced subset of frontier AI developers. Achieving the vision we outline will require (1) ensuring high quality standards for frontier AI auditing, so it does not devolve into a checkbox exercise or lag behind changes in the industry; (2) growing the ecosystem of audit providers at a rapid pace without compromising quality; (3) accelerating adoption of frontier AI auditing by clarifying and strengthening incentives; and (4) achieving technical readiness for high AI Assurance Levels so they can be applied when needed.