Risks associated with the use of AI, ranging from algorithmic bias to model hallucinations, have received extensive attention and research across the AI community, from researchers to end users. However, a gap exists in the systematic assessment of supply chain risks arising from the complex web of data sources, pre-trained models, agents, services, and other systems that contribute to the output of modern AI systems. This gap is particularly problematic when AI systems are used in critical applications, such as the food supply, healthcare, utilities, law, insurance, and transport. We survey the current state of AI risk assessment and management, with a focus on the AI supply chain and on risks relating to the behavior and outputs of AI systems. We then propose a taxonomy for categorizing AI supply chain entities. This taxonomy helps stakeholders, especially those without extensive AI expertise, to "consider the right questions" and systematically inventory dependencies across their organization's AI systems. Our contribution bridges a gap between the current state of AI governance and the urgent need for actionable risk assessment and management of AI use in critical applications.