Recent advancements in the field of Artificial Intelligence (AI) establish the basis for addressing challenging tasks. However, the integration of AI introduces new risks. Therefore, to benefit from its advantages, it is essential to adequately handle the risks associated with AI. Existing risk management processes in related fields, such as software systems, do not sufficiently consider the specifics of AI. A key challenge is to systematically and transparently identify and address the root causes of AI risks, also called AI hazards. This paper introduces the AI Hazard Management (AIHM) framework, which provides a structured process to systematically identify, assess, and treat AI hazards. The proposed process is conducted in parallel with development to ensure that any AI hazard is captured at the earliest possible stage of the AI system's life cycle. In addition, to ensure the AI system's auditability, the framework systematically documents evidence that the potential impact of identified AI hazards can be reduced to a tolerable level. The framework builds upon an AI hazard list derived from a comprehensive state-of-the-art analysis. We also provide a taxonomy that supports the optimal treatment of the identified AI hazards. Finally, we illustrate how the AIHM framework can increase the overall quality of a power grid AI use case by systematically reducing the impact of identified hazards to an acceptable level.