Current imitation learning approaches, predominantly based on deep neural networks (DNNs), offer efficient mechanisms for learning driving policies from real-world datasets. However, they suffer from inherent limitations in interpretability and generalizability, shortcomings that are especially consequential in safety-critical domains such as autonomous driving. In this paper, we introduce Symbolic Imitation Learning (SIL), a novel framework that leverages Inductive Logic Programming (ILP) to derive explainable and generalizable driving policies from synthetic datasets. We evaluate SIL on the real-world HighD and NGSIM datasets, comparing its performance with state-of-the-art neural imitation learning methods using metrics such as collision rate, lane-change efficiency, and average speed. The results indicate that SIL significantly enhances policy transparency while maintaining strong performance across varied driving conditions. These findings highlight the potential of integrating ILP into imitation learning to promote safer and more reliable autonomous systems.
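To make the transparency claim concrete, an ILP-derived policy can be read as a set of logical rules over symbolic predicates, in contrast to an opaque DNN policy. The following is a minimal, hypothetical sketch of what such a rule might look like; the predicate names, thresholds, and the rule itself are illustrative assumptions, not the rules learned in this work.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    ego_speed: float        # m/s
    gap_ahead: float        # m, distance to the lead vehicle in the current lane
    gap_target_lane: float  # m, free space available in the adjacent lane

def front_blocked(s: Scene) -> bool:
    # Lead vehicle is uncomfortably close (illustrative 30 m threshold).
    return s.gap_ahead < 30.0

def target_lane_free(s: Scene) -> bool:
    # Adjacent lane offers enough room (illustrative 50 m threshold).
    return s.gap_target_lane > 50.0

def change_lane(s: Scene) -> bool:
    # A symbolic policy rule, directly readable as a Horn clause:
    #   change_lane(S) :- front_blocked(S), target_lane_free(S).
    return front_blocked(s) and target_lane_free(s)

# A slow lead vehicle with a free adjacent lane triggers a lane change;
# a clear road ahead does not.
print(change_lane(Scene(ego_speed=25.0, gap_ahead=20.0, gap_target_lane=80.0)))  # True
print(change_lane(Scene(ego_speed=25.0, gap_ahead=40.0, gap_target_lane=80.0)))  # False
```

Because each decision traces back to named predicates, a rule like this can be inspected, audited, and, where it fails, amended, which is the kind of interpretability a DNN policy does not offer.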