This survey analyzes current methodologies for integrating legal and logical specifications into the perception, prediction, and planning modules of automated driving systems. We systematically explore techniques ranging from logic-based frameworks to computational legal reasoning approaches, emphasizing their capability to ensure regulatory compliance and interpretability in dynamic, uncertain driving environments. A central finding is that significant challenges arise at the intersection of perceptual reliability, legal compliance, and decision-making justifiability. To analyze these challenges, we introduce a taxonomy that categorizes existing approaches by their theoretical foundations, architectural implementations, and validation strategies. We focus in particular on methods that address perceptual uncertainty and incorporate explicit legal norms, enabling decisions that are both technically robust and legally defensible. The review covers neural-symbolic integration methods for perception, logic-driven rule representation, and norm-aware prediction strategies, all of which contribute toward transparent and accountable autonomous vehicle operation. We highlight critical open questions and practical trade-offs that must be addressed, offering multidisciplinary insights from engineering, logic, and law to guide future developments in legally compliant autonomous driving systems.