Deep neural network (DNN) models are widely used for object detection in automated driving systems (ADS). Such models are, however, prone to errors that can have serious safety implications. Introspection and self-assessment models that aim to detect these errors are therefore of paramount importance for the safe deployment of ADS. Current research on this topic has focused on techniques for monitoring the integrity of the perception mechanism in ADS. Existing introspection models in the literature, however, largely detect perception errors by assigning equal importance to all parts of the input data frame fed to the perception module. This generic approach overlooks the varying safety significance of different objects within a scene, obscuring safety-critical errors and making it difficult to assess the reliability of perception in specific, crucial instances. Motivated by this shortcoming of the state of the art, this paper proposes a novel method that combines the analysis of raw activation patterns of the underlying DNNs employed by the perception module with spatial filtering techniques. This approach enhances the accuracy of runtime introspection of DNN-based 3D object detection by selectively focusing on an area of interest in the data, thereby contributing to the safety and efficacy of ADS perception self-assessment.
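The core idea of spatially filtered activation analysis can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the function name `spatially_filtered_introspection`, the use of average pooling, and the rectangular region of interest are all assumptions made here for clarity.

```python
import numpy as np

def spatially_filtered_introspection(activation_map, roi_mask):
    """Pool raw DNN activations only inside a safety-relevant region.

    activation_map: (C, H, W) raw activations from a detector backbone.
    roi_mask: (H, W) boolean mask marking the safety-critical area.
    Returns a per-channel feature vector that a downstream
    introspection (error-detection) classifier could consume.
    """
    assert activation_map.ndim == 3
    assert roi_mask.shape == activation_map.shape[1:]
    # Spatial filtering: zero out activations outside the area of interest.
    masked = activation_map * roi_mask[None, :, :]
    # Average over ROI pixels only, so background activations do not
    # dilute the safety-critical signal.
    n = max(int(roi_mask.sum()), 1)
    return masked.sum(axis=(1, 2)) / n

# Toy usage with random activations and a hypothetical ROI.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True  # assumed safety-critical region
features = spatially_filtered_introspection(acts, mask)
print(features.shape)  # one pooled value per channel
```

In this sketch, the pooled vector equals the per-channel mean of activations inside the ROI; in practice the region of interest and the pooling strategy would be chosen according to the safety relevance of objects in the scene.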