The end-to-end learning pipeline is gradually creating a paradigm shift in the ongoing development of highly autonomous vehicles, largely due to advances in deep learning, the availability of large-scale training datasets, and improvements in integrated sensor devices. However, a lack of interpretability in the real-time decisions of contemporary learning methods impedes user trust and hinders the widespread deployment and commercialization of such vehicles. Moreover, the issue is exacerbated when these cars are involved in or cause traffic accidents. Such a drawback raises serious safety concerns from societal and legal perspectives. Consequently, explainability in end-to-end autonomous driving is essential to ensure the safety of vehicular automation. However, in today's state of the art, the safety and explainability aspects of autonomous driving have generally been investigated disjointly by researchers. In this paper, we aim to bridge the gap between these topics and seek to answer the following research question: When and how can explanations improve the safety of autonomous driving? To this end, we first revisit established safety techniques and state-of-the-art explainability approaches in autonomous driving. We then present three critical case studies that show the pivotal role of explanations in enhancing self-driving safety. Finally, we describe our empirical investigation and reveal the potential value, limitations, and caveats of practical explainable AI methods in assuring safety and transparency for vehicle autonomy.