Despite significant progress in autonomous vehicles (AVs), the development of driving policies that ensure both the safety of AVs and traffic flow efficiency has not yet been fully explored. In this paper, we propose an enhanced human-in-the-loop reinforcement learning method, termed the Human as AI mentor-based deep reinforcement learning (HAIM-DRL) framework, which facilitates safe and efficient autonomous driving in mixed traffic platoons. Drawing inspiration from the human learning process, we first introduce an innovative learning paradigm that effectively injects human intelligence into AI, termed Human as AI mentor (HAIM). In this paradigm, the human expert serves as a mentor to the AI agent. While allowing the agent to sufficiently explore uncertain environments, the human expert can take control in dangerous situations and demonstrate correct actions to avoid potential accidents. Meanwhile, the agent is guided to minimize traffic flow disturbance, thereby optimizing traffic flow efficiency. Specifically, HAIM-DRL leverages data collected from free exploration and partial human demonstrations as its two training sources. Remarkably, we circumvent the intricate process of manually designing reward functions; instead, we directly derive proxy state-action values from partial human demonstrations to guide the agent's policy learning. Additionally, we employ a minimal intervention technique to reduce the human mentor's cognitive load. Comparative results show that HAIM-DRL outperforms traditional methods in driving safety, sampling efficiency, mitigation of traffic flow disturbance, and generalizability to unseen traffic scenarios. The code and demo videos for this paper can be accessed at: https://zilin-huang.github.io/HAIM-DRL-website/
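The takeover-and-label idea described above can be illustrated with a minimal toy sketch. This is not the paper's implementation: the 1-D "lane-keeping" state, the safety band, the scripted mentor, and the proxy-value constants are all illustrative assumptions. It only shows the data-collection pattern: the agent explores freely, a mentor overrides dangerous actions, and the overridden/demonstrated pairs receive proxy state-action values instead of a hand-designed reward.

```python
import random

SAFE = 1.0                        # assumed safe band for the lateral offset
PROXY_GOOD, PROXY_BAD = 1.0, -1.0  # illustrative proxy state-action values

def mentor_action(state):
    """Scripted 'human mentor' demonstration: steer back toward the center."""
    return -0.5 if state > 0 else 0.5

def rollout(policy, steps=50, seed=0):
    """Collect one episode of mixed free-exploration and intervention data."""
    rng = random.Random(seed)
    state, buffer, interventions = 0.0, [], 0
    for _ in range(steps):
        a = policy(state, rng)
        if abs(state + a) > SAFE:                 # dangerous: mentor takes over
            buffer.append((state, a, PROXY_BAD))       # rejected agent action
            a = mentor_action(state)
            buffer.append((state, a, PROXY_GOOD))      # demonstrated action
            interventions += 1
        else:                                     # free exploration, unlabeled
            buffer.append((state, a, None))
        state += a
    return buffer, interventions

# A deliberately reckless exploratory policy, so takeovers actually occur.
reckless = lambda s, rng: rng.uniform(-1.5, 1.5)
buffer, n_takeovers = rollout(reckless)
```

In a full method, the labeled pairs in `buffer` would supervise a value/policy network, and a minimal-intervention criterion would keep `n_takeovers` (the mentor's cognitive load) low; here the threshold rule stands in for that criterion.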