Explainability is essential for autonomous vehicles and other robotic systems that interact with humans and other objects during operation. Humans need to understand and anticipate the actions taken by machines for trustworthy and safe cooperation. In this work, we aim to develop an explainable model that generates explanations consistent with both human domain knowledge and the model's intrinsic causal relations. In particular, we focus on an essential building block of autonomous driving: multi-agent interaction modeling. We propose Grounded Relational Inference (GRI), which models an interactive system's underlying dynamics by inferring an interaction graph that represents the agents' relations. We ensure a semantically meaningful interaction graph by grounding the relational latent space in semantic interactive behaviors defined with expert domain knowledge. We demonstrate that GRI can model interactive traffic scenarios in both simulated and real-world settings, and generate semantic graphs that explain the vehicles' behavior through their interactions.
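To make the mechanism concrete, below is a minimal sketch of the kind of relational-inference encoder-decoder the abstract describes: an encoder infers a discrete edge type for each ordered agent pair from observed trajectories, and a graph-structured decoder predicts dynamics through edge-type-specific message passing. Grounding is approximated here as supervision of the inferred edge types with expert-defined behavior labels. All module names, dimensions, and the edge-type labels are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of relational inference over an interaction graph.
# Hypothetical names and sizes throughout; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_AGENTS, T, STATE_DIM, N_EDGE_TYPES, HIDDEN = 4, 10, 4, 2, 64

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, d_out))

class RelationalEncoder(nn.Module):
    """Infers edge-type logits for every ordered agent pair from trajectories."""
    def __init__(self):
        super().__init__()
        self.node_emb = mlp(T * STATE_DIM, HIDDEN)
        self.edge_out = mlp(2 * HIDDEN, N_EDGE_TYPES)

    def forward(self, traj):                            # traj: [B, N, T, D]
        B, N = traj.shape[:2]
        h = self.node_emb(traj.reshape(B, N, -1))       # per-agent embedding [B, N, H]
        send = h.unsqueeze(2).expand(B, N, N, -1)       # sender features
        recv = h.unsqueeze(1).expand(B, N, N, -1)       # receiver features
        return self.edge_out(torch.cat([send, recv], dim=-1))  # [B, N, N, E]

class RelationalDecoder(nn.Module):
    """Predicts next states via messages gated by the sampled edge types."""
    def __init__(self):
        super().__init__()
        self.msg = nn.ModuleList(mlp(2 * STATE_DIM, HIDDEN) for _ in range(N_EDGE_TYPES))
        self.out = mlp(STATE_DIM + HIDDEN, STATE_DIM)

    def forward(self, x, edges):                        # x: [B, N, D], edges: [B, N, N, E]
        B, N, D = x.shape
        send = x.unsqueeze(2).expand(B, N, N, D)
        recv = x.unsqueeze(1).expand(B, N, N, D)
        pair = torch.cat([send, recv], dim=-1)
        # One message function per edge type, weighted by the sampled one-hot edges.
        # (Self-edges are kept here for brevity; NRI-style models mask them out.)
        msgs = sum(edges[..., k:k + 1] * f(pair) for k, f in enumerate(self.msg))
        agg = msgs.sum(dim=1)                           # aggregate incoming messages
        return x + self.out(torch.cat([x, agg], dim=-1))  # residual next-state update

encoder, decoder = RelationalEncoder(), RelationalDecoder()
traj = torch.randn(8, N_AGENTS, T, STATE_DIM)
logits = encoder(traj)
edges = F.gumbel_softmax(logits, tau=0.5, hard=True)    # discrete edge-type sample
pred = decoder(traj[:, :, -1], edges)                   # predicted next states

# Grounding, sketched: supervise inferred edge types with expert-labeled semantic
# behaviors so each type reads as, e.g., "yield" vs. "pass" (hypothetical labels).
labels = torch.randint(0, N_EDGE_TYPES, (8, N_AGENTS, N_AGENTS))
grounding_loss = F.cross_entropy(logits.reshape(-1, N_EDGE_TYPES), labels.reshape(-1))
dynamics_loss = F.mse_loss(pred, traj[:, :, -1])        # placeholder prediction target
```

Under this reading, the sampled one-hot edge matrix is itself the explanation artifact: because each edge type is tied to a labeled interactive behavior, the inferred graph can be read directly as "which agent is doing what to whom."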