End-to-end robot policies achieve high performance through neural networks trained via reinforcement learning (RL). Yet their black-box nature and abstract reasoning pose challenges for human-robot interaction (HRI): humans may struggle to understand and predict the robot's navigation decisions, which hinders the development of trust. We present a virtual reality (VR) interface that visualizes explainable AI (XAI) outputs together with the robot's lidar perception to support intuitive interpretation of RL-based navigation behavior. By visually highlighting scene objects according to their attribution scores, the interface grounds abstract policy explanations in the scene context, bridging the gap between opaque numerical XAI attribution scores and a human-centric, semantic level of explanation. A within-subjects study with 24 participants evaluated the effectiveness of our interface under four visualization conditions combining XAI and lidar. Participants ranked scene objects across navigation scenarios by their importance to the robot, then completed a questionnaire assessing subjective understanding and predictability. Results show that semantically projecting attributions onto scene objects significantly improves non-expert users' objective understanding and subjective awareness of robot behavior. In addition, lidar visualization further improves perceived predictability, underscoring the value of integrating XAI and sensor visualizations for transparent, trustworthy HRI.
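The abstract does not specify how attribution scores are projected onto scene objects; the minimal Python sketch below illustrates one plausible scheme under our own assumptions (the function names, the sum-based aggregation, and the red highlight ramp are hypothetical, not the authors' implementation): per-lidar-point attributions from any XAI method are summed per object, normalized, and mapped to a highlight color whose opacity scales with importance.

```python
import numpy as np

def aggregate_object_attributions(point_scores, point_labels):
    """Aggregate per-lidar-point attribution scores to object level.

    point_scores: (N,) attribution magnitudes from an XAI method
                  (e.g., gradients of the policy w.r.t. lidar input).
    point_labels: (N,) integer object id for each lidar point.
    Returns {object_id: score normalized to [0, 1]}.
    """
    ids = np.unique(point_labels)
    raw = {i: np.abs(point_scores[point_labels == i]).sum() for i in ids}
    top = max(raw.values()) or 1.0  # avoid division by zero
    return {i: s / top for i, s in raw.items()}

def highlight_color(score):
    """Map a normalized attribution score to an RGBA highlight,
    fading from nearly transparent to saturated red as importance rises."""
    return (1.0, 0.2, 0.1, 0.15 + 0.85 * score)

# Toy example: six lidar points belonging to two objects.
scores = np.array([0.9, 0.8, 0.7, 0.1, 0.05, 0.02])
labels = np.array([1, 1, 1, 2, 2, 2])
for obj, s in aggregate_object_attributions(scores, labels).items():
    print(obj, highlight_color(s))
```

In a VR scene, the returned RGBA values would then tint each object's mesh, grounding the numerical attributions in the scene context as the abstract describes.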