Machine Learning (ML) models, such as deep neural networks, are widely applied in autonomous systems to perform complex perception tasks. New dependability challenges arise when ML predictions are used in safety-critical applications, such as autonomous cars and surgical robots. Thus, fault tolerance mechanisms, such as safety monitors, are essential to ensure the safe behavior of the system despite the occurrence of faults. This paper presents an extensive literature review on safety monitoring of ML-based perception functions in a safety-critical context. In this review, we structure the existing literature to highlight the key factors to consider when designing such monitors: threat identification, requirements elicitation, failure detection, reaction, and evaluation. We also highlight ongoing challenges in safety monitoring and suggest directions for future research.