We present our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems. The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundation model, the BIG argument insists on a meaningful treatment of safety. It respects long-established safety assurance norms, such as sensitivity to context, traceability and risk proportionality. Further, it places particular focus on the novel hazardous behaviours that emerge from the advanced capabilities of frontier AI models and from the open contexts in which they are rapidly being deployed. These complex issues are considered within a broader AI safety case that approaches assurance from both technical and sociotechnical perspectives. Examples illustrating the use of the BIG argument are provided throughout the paper.