This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development (IAD) framework can be developed into a context analysis approach for AI. AI risk assessment frameworks increasingly highlight the need to understand existing contexts; however, these approaches rarely connect with established institutional analysis scholarship. We outline a novel direction, illustrated through a high-level example, for understanding how clinical oversight may be affected by AI. Much current thinking on oversight of AI revolves around the idea of decision makers being in the loop and thus having the capacity to intervene to prevent harm. Our analysis finds, however, that oversight is complex, frequently exercised by teams of professionals, and reliant upon explanation to elicit information. Professional bodies and liability also function as institutions of polycentric oversight, and all of these are affected by the challenge of overseeing AI systems. The approach outlined has potential utility as a policy tool for context analysis aligned with the 'Govern' and 'Map' functions of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, though further empirical research is needed. Our analysis illustrates the benefit of existing institutional analysis approaches in foregrounding team structures within oversight and, thus, within conceptions of the 'human in the loop'.