Developing autonomous vehicles that can safely interact with pedestrians requires large amounts of pedestrian and vehicle data in order to learn accurate pedestrian-vehicle interaction models. However, data covering crucial but rare scenarios, such as pedestrians jaywalking into heavy traffic, can be costly and unsafe to collect. To address these challenges, we propose JaywalkerVR, a virtual reality human-in-the-loop simulator for obtaining vehicle-pedestrian interaction data. Our system enables efficient, affordable, and safe collection of long-tail pedestrian-vehicle interaction data. Using the proposed simulator, we create CARLA-VR, a high-quality dataset of vehicle-pedestrian interactions in safety-critical scenarios. The CARLA-VR dataset addresses the lack of long-tail data samples in commonly used real-world autonomous driving datasets. We demonstrate that models trained with CARLA-VR improve displacement error and collision rate by 10.7% and 4.9%, respectively, and are more robust in rare vehicle-pedestrian scenarios.