Powerful new frontier AI technologies are bringing many benefits to society, but they also introduce new risks. AI developers and regulators are therefore seeking ways to assure the safety of such systems, and one promising method under consideration is the use of safety cases. A safety case presents a structured argument in support of a top-level claim about a safety property of the system. Such top-level claims are often presented as binary statements, for example "Deploying the AI system does not pose unacceptable risk". In practice, however, it is often not possible to make such statements unequivocally, which raises the question of what level of confidence should be associated with a top-level claim. We adopt the Assurance 2.0 safety assurance methodology, and we ground our work in a specific application of this methodology to a frontier AI inability argument addressing the harm of cyber misuse. We find that numerical quantification of confidence is challenging, though the process of generating such estimates can itself lead to improvements in the safety case. We introduce a purely LLM-implemented Delphi method that improves the reproducibility and transparency of probabilistic confidence assessments for argument leaf nodes. We also propose a method by which AI developers can prioritise argument defeaters, thereby making their investigation more efficient, and we offer proposals on how best to communicate confidence information to executive decision-makers.
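To illustrate the kind of loop an LLM-implemented Delphi method might involve, the following is a minimal sketch, not the paper's actual protocol. It assumes a panel of independent LLM "panellists" each estimating the probability that a leaf-node claim holds, with the anonymised panel median fed back between rounds. The function `query_llm_estimate` is hypothetical and stubbed here with a seeded random draw so the example runs standalone; a real implementation would query independent LLM instances.

```python
import statistics
import random

# Hypothetical stand-in for a real LLM call. In practice this would prompt an
# independent LLM instance with the leaf-node claim and, from round two
# onwards, an anonymised summary of the previous round's estimates.
def query_llm_estimate(claim: str, round_summary: str | None, seed: int) -> float:
    rng = random.Random(seed)
    # Simulated estimate: drift toward the prior-round median when one exists,
    # mimicking the convergence behaviour a Delphi process aims for.
    estimate = rng.uniform(0.6, 0.95)
    if round_summary is not None:
        estimate = 0.5 * estimate + 0.5 * float(round_summary)
    return round(estimate, 2)

def llm_delphi(claim: str, n_panellists: int = 5, n_rounds: int = 3) -> float:
    """Run a simple Delphi loop: collect independent estimates, feed back the
    anonymised median, iterate, and return the final aggregate."""
    summary: str | None = None
    for r in range(n_rounds):
        estimates = [
            query_llm_estimate(claim, summary, seed=100 * r + i)
            for i in range(n_panellists)
        ]
        summary = str(statistics.median(estimates))
    return float(summary)

if __name__ == "__main__":
    p = llm_delphi("The model cannot autonomously exploit a novel vulnerability.")
    print(f"Panel confidence in leaf-node claim: {p:.2f}")
```

One design choice worth noting: fixing the seeds (or, with real LLMs, the sampling parameters and prompts) is what makes the assessment reproducible, while logging each round's estimates provides the transparency the abstract refers to.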