The end-to-end learning pipeline is gradually creating a paradigm shift in the ongoing development of highly autonomous vehicles, largely due to advances in deep learning, the availability of large-scale training datasets, and improvements in integrated sensor devices. However, the lack of interpretability in the real-time decisions of contemporary learning methods impedes user trust and hinders the widespread deployment and commercialization of such vehicles. The issue is further exacerbated when these cars are involved in or cause traffic accidents. This drawback raises serious safety concerns from societal and legal perspectives. Consequently, explainability in end-to-end autonomous driving is essential to build trust in vehicular automation. However, in today's state of the art, the safety and explainability aspects of end-to-end driving have generally been investigated disjointly by researchers. This survey aims to bridge the gap between these topics and seeks to answer the following research question: when and how can explanations improve the safety of end-to-end autonomous driving? In this regard, we first revisit established safety and state-of-the-art explainability techniques in end-to-end driving. Furthermore, we present three critical case studies and show the pivotal role of explanations in enhancing self-driving safety. Finally, we describe insights from empirical studies and reveal the potential value, limitations, and caveats of practical explainable AI methods with respect to their safety assurance in end-to-end autonomous driving.