The cooperative driving technology of Connected and Autonomous Vehicles (CAVs) is crucial for improving the efficiency and safety of transportation systems. Learning-based methods, such as Multi-Agent Reinforcement Learning (MARL), have demonstrated strong capabilities in cooperative decision-making tasks. However, existing MARL approaches still face challenges in learning efficiency and performance. In recent years, Large Language Models (LLMs) have advanced rapidly and shown remarkable abilities in various sequential decision-making tasks. To enhance the learning capabilities of cooperative agents while maintaining decision-making efficiency and cost-effectiveness, we propose LDPD, a language-driven policy distillation method for guiding MARL exploration. In this framework, an LLM-based teacher agent trains smaller student agents for cooperative decision-making through its own demonstrations. The teacher agent enriches the observations of CAVs and uses the LLM, together with carefully designed decision-making tools, to perform complex cooperative reasoning and produce expert-level decisions, providing high-quality teaching experiences. The student agents then distill the teacher's prior knowledge into their own models through gradient policy updates. Experiments demonstrate that the students rapidly improve with minimal guidance from the teacher and eventually surpass its performance, and that our approach achieves better performance and learning efficiency than baseline methods.
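To make the teacher-to-student distillation loop concrete, the following is a minimal sketch under stated assumptions: all names (`teacher_action`, `distill_step`, the linear-softmax policies) are hypothetical illustrations, not the paper's actual architecture or loss, and a fixed expert policy stands in for the LLM-based teacher with its reasoning tools. The student policy is updated by cross-entropy gradient steps toward the actions the teacher demonstrates.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS = 8, 4

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher: a fixed expert whose argmax action the student
# imitates (in LDPD this decision would come from LLM reasoning + tools).
W_teacher = rng.normal(size=(N_ACTIONS, OBS_DIM))
def teacher_action(obs):
    return int(np.argmax(W_teacher @ obs))

# Student: a learnable linear-softmax policy, distilled via gradient
# policy updates on the teacher's demonstrated actions.
W_student = np.zeros((N_ACTIONS, OBS_DIM))

def distill_step(obs, lr=0.5):
    global W_student
    a_t = teacher_action(obs)             # teacher's demonstration
    probs = softmax(W_student @ obs)
    grad = probs.copy()
    grad[a_t] -= 1.0                      # d(cross-entropy)/d(logits)
    W_student -= lr * np.outer(grad, obs)
    return -np.log(probs[a_t])            # per-sample distillation loss

observations = rng.normal(size=(500, OBS_DIM))
losses = [distill_step(o) for o in observations]
agreement = np.mean([np.argmax(softmax(W_student @ o)) == teacher_action(o)
                     for o in observations])
print(f"early mean loss {np.mean(losses[:100]):.3f}, "
      f"late mean loss {np.mean(losses[-100:]):.3f}, "
      f"teacher-student agreement {agreement:.2f}")
```

The sketch only illustrates the imitation signal; the full method additionally involves MARL exploration, enriched CAV observations, and the students eventually exceeding the teacher once they learn from their own environment interactions.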