Artificial Intelligence (AI) has emerged as a key technology, driving advancements across a wide range of applications. Integrating AI into modern autonomous systems requires assuring their safety; however, assuring the safety of systems that incorporate AI components remains a substantial challenge. The lack of concrete specifications, together with the complexity of both the operational environment and the system itself, gives rise to various forms of uncertain behavior and complicates the derivation of convincing evidence for system safety. Nonetheless, scholars have proposed to thoroughly analyze and mitigate AI-specific insufficiencies, so-called AI safety concerns, which yields essential evidence in support of a convincing assurance case. In this paper, we build on this idea and propose the Landscape of AI Safety Concerns, a novel methodology designed to support the creation of safety assurance cases for AI-based systems by systematically demonstrating the absence of AI safety concerns. We illustrate the methodology's application through a case study involving a driverless regional train, demonstrating its practicality and effectiveness.