Effectively leveraging real-world driving datasets is crucial for training autonomous driving systems. While Offline Reinforcement Learning enables training autonomous vehicles with such data, most available datasets lack meaningful reward labels. Reward labeling is essential because it provides the feedback that lets a learning algorithm distinguish desirable from undesirable behavior, thereby improving policy performance. This paper presents a novel approach for generating human-aligned reward labels. The proposed approach addresses the absence of reward signals in real-world datasets by generating labels that reflect human judgment and safety considerations. The reward function incorporates an adaptive safety component that is activated by analyzing semantic segmentation maps, enabling the autonomous vehicle to prioritize safety over efficiency in potential collision scenarios. The proposed method is applied to an occluded pedestrian crossing scenario with varying pedestrian traffic levels, using simulation data. When the generated rewards were used to train several Offline Reinforcement Learning algorithms, each model produced a meaningful policy, demonstrating the method's viability. In addition, the method was applied to a subset of the Audi Autonomous Driving Dataset, and the generated reward labels were compared against human-annotated ones. The findings show a moderate disparity between the two reward sets, and, notably, the method flagged unsafe states that the human annotator missed.
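To make the described mechanism concrete, the sketch below shows one way such an adaptive safety component could be wired into a reward function: an efficiency term dominates by default, and a safety penalty switches on when the semantic segmentation map shows pedestrian pixels inside the ego vehicle's path. This is a minimal illustration under assumed details; the class ID, region of interest, threshold, and weights are hypothetical and not the paper's actual formulation.

```python
import numpy as np

# Hypothetical segmentation class ID for pedestrians; the actual label
# scheme (e.g. the A2D2 ontology) may assign a different value.
PEDESTRIAN_ID = 12


def pedestrian_risk(seg_map: np.ndarray, roi: tuple) -> float:
    """Fraction of the region of interest occupied by pedestrian pixels.

    roi = (row_start, row_end, col_start, col_end), an assumed crop of
    the segmentation map covering the ego vehicle's intended path.
    """
    r0, r1, c0, c1 = roi
    patch = seg_map[r0:r1, c0:c1]
    return float(np.mean(patch == PEDESTRIAN_ID))


def reward(speed: float, target_speed: float,
           seg_map: np.ndarray, roi: tuple,
           risk_threshold: float = 0.01,
           w_safety: float = 10.0) -> float:
    """Efficiency reward with an adaptive safety term.

    The safety penalty is activated only when the segmentation-derived
    risk exceeds a threshold, so safety overrides efficiency precisely
    in potential collision scenarios. Threshold and weight values here
    are placeholders for illustration.
    """
    # Efficiency: penalize deviation from the target speed.
    efficiency = -abs(speed - target_speed) / target_speed
    risk = pedestrian_risk(seg_map, roi)
    if risk > risk_threshold:
        # Safety dominates: penalize speed in proportion to the
        # observed pedestrian risk.
        return efficiency - w_safety * risk * speed
    return efficiency
```

In this sketch the switch is a hard threshold for readability; a smooth gating function over the risk estimate would serve the same purpose of re-weighting safety against efficiency when pedestrians are detected.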